Hi @resh, if I understood your question correctly, you are looking for a sample/demo which showcases inference of several models in one application? If that is the case, I'd recommend taking a look at the OpenVINO Open Model Zoo demo applications; some of them do exactly this. For example, the Python action_recognition_demo runs the Intel action-recognition-0001 composite model. We call a model composite when a single task, like action recognition in this case, is implemented with two CNN models, an encoder and a decoder. Another example is the MTCNN model, which implements face detection with three CNN models that have to be run in a certain sequence.
Another scenario is when an application runs inference of several models which solve different tasks, like the C++ interactive_face_detection_demo, which runs a face detection model and then runs further models that extract various attributes from the detected faces (age, gender, emotions, head pose estimation and so on).
Using the OpenVINO asynchronous inference API, it is possible to run inference of several models simultaneously, provided the platform has sufficient capabilities (enough CPU cores, or other acceleration devices, such as the VPU cores on an HDDL board).
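To sketch the idea, here is a minimal self-contained example of the pattern (it does not use OpenVINO itself, so the model functions are hypothetical stand-ins): several inferences are submitted as independent jobs and both run in flight at the same time. In a real application you would replace the stub functions with infer requests created from compiled models and use OpenVINO's asynchronous calls (start_async/wait) instead of a thread pool.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-ins for two models; with OpenVINO these would be
# infer requests created from core.compile_model(...) and driven via
# start_async()/wait().
def run_face_detection(frame):
    # placeholder result: bounding boxes of detected faces
    return {"faces": [(10, 20, 50, 60)]}

def run_age_gender(frame):
    # placeholder result from a second, independent model
    return {"age": 30, "gender": "female"}

frame = "dummy-frame"
with ThreadPoolExecutor(max_workers=2) as pool:
    # Both inferences are in flight simultaneously, mirroring what the
    # asynchronous API achieves on a device with enough parallelism.
    det_future = pool.submit(run_face_detection, frame)
    attr_future = pool.submit(run_age_gender, frame)
    detections = det_future.result()
    attributes = attr_future.result()

print(detections["faces"], attributes["age"])
```

Whether the two models actually execute in parallel still depends on the device: on CPU, the number of cores bounds the useful concurrency, while on HDDL each request can be scheduled to a separate VPU.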
Hi Reshu Singh,
This thread will no longer be monitored since we have provided suggestions.
If you need any additional information from Intel, please submit a new question.