I would like to know whether there is a proper or standard method to measure and record the inference power consumption of a neural network model. Currently my team and I have an object detection model, which we run on a UPXtreme with a Vision Plus X accelerator. We simply plug a power meter between the UPXtreme power plug and the wall socket, then read the meter while running inference (object_detection_demo.py) on a single image. What we have observed is that the reading is not stable during inference and fluctuates by 1 or 2 watts over time. We understand that this method is not very rigorous, so do you have any suggestions? Is there any method we can try to get a more accurate power consumption figure for inference?
Our findings indicate that you cannot measure power consumption via OpenVINO, so you will need to rely on external equipment to perform the measurement.
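One practical refinement of the external-meter approach is to take many readings while inference runs in a loop and report the mean and standard deviation, rather than eyeballing a single fluctuating value. Below is a minimal sketch of that idea; `read_power_watts` is a hypothetical stand-in (here simulated with noise) that you would replace with your own meter's readout (e.g. a serial or USB query), and the sampling parameters are illustrative only.

```python
import random
import statistics
import time


def read_power_watts():
    """Hypothetical stand-in for querying a real power meter.

    Replace this with your meter's actual readout API. Here it
    simulates a ~14 W load with +/-1 W of fluctuation, mimicking
    the instability described in the question.
    """
    return 14.0 + random.uniform(-1.0, 1.0)


def measure_average_power(sample_fn, n_samples=200, interval_s=0.0):
    """Sample wall power repeatedly and return (mean, stdev) in watts.

    Averaging many samples taken while inference runs in a loop
    smooths out meter fluctuation; the standard deviation tells you
    how noisy the measurement still is.
    """
    samples = []
    for _ in range(n_samples):
        samples.append(sample_fn())
        if interval_s:
            time.sleep(interval_s)
    return statistics.mean(samples), statistics.stdev(samples)


mean_w, std_w = measure_average_power(read_power_watts)
print(f"mean={mean_w:.2f} W, stdev={std_w:.2f} W")
```

For a meaningful figure, also record idle (baseline) power with the same procedure and subtract it, so you report the power attributable to inference rather than the whole system.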
Attached is a paper [Pena.pdf (juxi.net)] that may be useful to you. See the Experimental Setup section for how the authors measured power consumption across different networks.
This thread will no longer be monitored since we have provided the relevant information and an article. If you need any additional information from Intel, please submit a new question.