I am using the NCS2 + Raspberry Pi 3 Model B. Due to the USB 2.0 speed limit on the Pi, synchronous inference takes too long. I decided to use asynchronous mode with two parallel processes: one for loading the image onto the stick, and the other for running the inference.
I got the following error:
File "stringsource", line 2, in openvino.inference_engine.ie_api.ExecutableNetwork.__reduce_cython__ TypeError: self.impl cannot be converted to a Python object for pickling
I tried both the Python multiprocessing module and pathos.multiprocessing, and both gave me the same error.
How can I use the OpenVINO inference APIs inside Python multiprocessing?
It would be nice to see your implementation, or at least the code around the lines that cause the error. But it looks like you are trying to pass an ExecutableNetwork object between processes, which is impossible, since the Python multiprocessing module uses pickling to transfer data between processes. To be picklable an object must be serializable, but Inference Engine Python objects can't be fully serialized because under the hood they store a pointer to a real C++ object.
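You can reproduce the same failure mode without OpenVINO at all: any Python object that wraps a C-level handle, such as a threading.Lock, refuses to pickle for exactly the same reason ExecutableNetwork does. A minimal sketch (the model path below is a hypothetical stand-in, not your actual file) showing the error and the usual fix, which is to pass only picklable construction arguments to the child process and load the network there:

```python
import pickle
import threading

# A threading.Lock wraps a C-level handle; pickling it raises TypeError,
# just like openvino's ExecutableNetwork does.
try:
    pickle.dumps(threading.Lock())
    lock_picklable = True
except TypeError:
    lock_picklable = False

# Plain construction arguments, by contrast, pickle fine -- so the fix is
# to send the model *path* to the child process and call
# IECore().read_network(...) / load_network(...) inside that process.
model_xml = "model.xml"  # hypothetical path
payload = pickle.dumps(model_xml)
print(lock_picklable, pickle.loads(payload))  # → False model.xml
```

In other words: don't ship the loaded network across the process boundary; ship the recipe for building it and do the loading in the worker.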
BTW, from my point of view, multiprocessing is not really a good approach to solving your problem. It may even introduce additional overhead for transferring data between processes, and it will also require extra logic for inter-process communication and synchronization. I propose to have a look at the object_detection_demo_ssd_async sample to see how to implement an asynchronous inference scenario without multiprocessing. It implements a fairly simple scenario with 2 infer requests executed asynchronously, but the logic can be extended to an arbitrary number of requests.
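The scheduling idea behind that demo can be illustrated in pure Python. This is only a sketch of the ping-pong logic, not the Inference Engine API: `infer` here is a trivial stand-in, where the real sample would call `exec_net.start_async(...)` and `exec_net.requests[i].wait()`. The point is that while one request is being waited on, the next frame has already been submitted:

```python
from concurrent.futures import ThreadPoolExecutor

def infer(frame):
    # Stand-in for a real infer request; on the NCS2 this would be the
    # asynchronous inference of one frame.
    return frame * 2  # hypothetical "result"

def run_pipeline(frames, num_requests=2):
    """Keep num_requests inferences in flight at once, so the next frame
    is being uploaded/processed while we wait on the previous one."""
    results = []
    with ThreadPoolExecutor(max_workers=num_requests) as pool:
        in_flight = []
        for frame in frames:
            in_flight.append(pool.submit(infer, frame))
            if len(in_flight) == num_requests:
                # Collect the oldest request before issuing another one,
                # so results come back in frame order.
                results.append(in_flight.pop(0).result())
        # Drain whatever is still in flight at the end of the stream.
        for fut in in_flight:
            results.append(fut.result())
    return results

print(run_pipeline([1, 2, 3, 4, 5]))  # → [2, 4, 6, 8, 10]
```

With threads (or with the IE's own async requests) everything stays in one process, so nothing ever needs to be pickled, and raising `num_requests` lets you queue as many frames as the device can usefully overlap.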