I am using the OpenVINO Toolkit for multi-camera multi-target tracking.
The _compute_mct_distance_matrix function computes the cosine distance between every pair of tracks (across multiple cameras). That is, if there are 1000 tracks, it compares each track against the other 999 to find those with the same/similar features, and therefore the same person/track. However, there can be hundreds of thousands of tracks running through this nested for loop, which takes hours to days to check.
I tried threading, but that does not seem to be the answer: just 5 minutes of footage produces 15,000 tracks to parse, which already takes hours. For a full day of footage, I'm sure no number of threads will resolve this.
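For reference, the quadratic pairwise comparison itself can usually be vectorized instead of threaded. A minimal sketch, assuming each track is represented by a fixed-length feature embedding (the function name and shapes here are illustrative, not the demo's actual internals): the whole distance matrix becomes one matrix multiplication in NumPy rather than a nested Python loop.

```python
import numpy as np

def cosine_distance_matrix(features):
    """Pairwise cosine distances for an (n_tracks, dim) feature matrix,
    computed as a single matrix product instead of a nested Python loop."""
    f = np.asarray(features, dtype=np.float32)
    norms = np.linalg.norm(f, axis=1, keepdims=True)
    f = f / np.clip(norms, 1e-12, None)   # L2-normalise each track's feature
    return 1.0 - f @ f.T                  # cosine distance = 1 - cosine similarity

# hypothetical example: 4 tracks with 3-dimensional features
feats = np.array([[1, 0, 0],
                  [0, 1, 0],
                  [1, 0, 0],
                  [1, 1, 0]], dtype=np.float32)
dist = cosine_distance_matrix(feats)
# tracks 0 and 2 have identical features, so their distance is ~0
```

Even for ~15,000 tracks this is a 15,000 x 15,000 matrix product, which NumPy/BLAS handles in seconds rather than hours.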
First and foremost, are you actually training your model, validating it, or running inference?
If you are using PyTorch, you can evaluate model performance on unseen data (the test dataset, NOT the training dataset) inside a `with torch.no_grad():` block. This deactivates the autograd engine.
In this context, we are telling the program not to track gradients for backpropagation, since we are only evaluating the model (no need to update weights/biases).
This would reduce memory usage and speed up computation.
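A minimal sketch of the pattern above, using a small `torch.nn.Linear` layer as a hypothetical stand-in for your re-identification model:

```python
import torch

# hypothetical stand-in for the real model
model = torch.nn.Linear(256, 128)
model.eval()                      # inference mode for dropout/batch-norm layers

x = torch.randn(8, 256)           # a batch of 8 input feature vectors
with torch.no_grad():             # autograd disabled: no graph is recorded
    embeddings = model(x)

# no gradient bookkeeping was done for this output
print(embeddings.requires_grad)   # False
```

Because no computation graph is built, memory usage drops and the forward pass runs faster.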
If your model is ready and you want to improve performance on the OpenVINO side (i.e., you are doing inference), you can use the Post-Training Optimization Tool (POT). POT applies low-precision optimization (e.g., INT8 quantization), which should reduce inference time.
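As a rough illustration, POT is typically driven by a JSON configuration and run with `pot -c config.json`. The sketch below assumes the simplified engine with a folder of calibration images; the model name and paths are placeholders, so check the POT documentation for the exact fields supported by your OpenVINO version.

```json
{
  "model": {
    "model_name": "person-reidentification",
    "model": "/path/to/model.xml",
    "weights": "/path/to/model.bin"
  },
  "engine": {
    "type": "simplified",
    "data_source": "/path/to/calibration_images"
  },
  "compression": {
    "target_device": "CPU",
    "algorithms": [
      {
        "name": "DefaultQuantization",
        "params": {
          "preset": "performance",
          "stat_subset_size": 300
        }
      }
    ]
  }
}
```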
Thank you for the reply.
We have deployed the "standard" :
Intel will no longer monitor this thread since we have provided a solution. If you need any additional information from Intel, please submit a new question.