Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

How can I use the get_metric() function to read the temperature of my NCS2?

JRt00
Beginner

I am trying to read the temperature of my compute stick using the Python API. I think I found the right function, but I am not sure how to use it. Is there any sample code in Python?

15 Replies
JAVIERJOSE_A_Intel

Hi JRt00,

 

Thank you for reaching out.

 

To read the device temperature, please take a look at the snippet of code found in the Inference Engine Query API documentation.

 

Regards,

 

Javier A.  

Intel Customer Support Technician  

A Contingent Worker at Intel

JRt00
Beginner

Thanks for your answer. The snippet is in C and I am working in Python. I can't find the equivalent code in the Python API. Any ideas?

JAVIERJOSE_A_Intel

Hi JRt00,

 

You can find the Python API to read the device temperature here.

 

Regards,

 

Javier A.  

Intel Customer Support Technician  

A Contingent Worker at Intel

 

JRt00
Beginner

Hi Javier,

 

Thanks a lot for your answer. This is the code I was initially referring to. As you can see, running this code on the MYRIAD device only outputs a list of words and nothing related to temperature.
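For instance, a call along the lines of the one below (my best guess at what the documentation example is doing) only returns the names of the available metrics, not a temperature reading:

from openvino.inference_engine import IECore

ie = IECore()
# Querying SUPPORTED_METRICS just lists the metric names the device exposes,
# e.g. 'DEVICE_THERMAL', rather than an actual reading.
print(ie.get_metric(device_name="MYRIAD", metric_name="SUPPORTED_METRICS"))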

JAVIERJOSE_A_Intel

Hi JRt00,

 

You need to specify the metric name to request as "DEVICE_THERMAL" and the name of the device as "MYRIAD" in order to get the Intel® NCS2 temperature.

 

The snippet of code looks like this:

 

ie = IECore()
ie.get_metric(metric_name="DEVICE_THERMAL", device_name="MYRIAD")

 

I hope this helps.

 

Regards,

 

Javier A.  

Intel Customer Support Technician  

A Contingent Worker at Intel

 

JRt00
Beginner

Hi Javier,

 

Unfortunately, the code does not work. Here is what I used:

 

import sys
from openvino.inference_engine import IECore
ie = IECore()
temperature = ie.get_metric(metric_name="DEVICE_THERMAL", device_name="MYRIAD")
print(temperature)

 

and here is the error I received:

Traceback (most recent call last):
  File "/mnt/SSD-120GB/python3.7/albert/interfaces/lcd/temp.py", line 4, in <module>
    temperature = ie.get_metric(metric_name="DEVICE_THERMAL", device_name="MYRIAD")
  File "ie_api.pyx", line 121, in openvino.inference_engine.ie_api.IECore.get_metric
RuntimeError: [NOT_IMPLEMENTED]

Any idea?

 

JAVIERJOSE_A_Intel

Hi JRt00,

 

Could you please tell me which OpenVINO™ toolkit version you are using?

 

Regards,

 

Javier A.  

Intel Customer Support Technician  

A Contingent Worker at Intel

 

JRt00
Beginner

I have version 2020.1.

JAVIERJOSE_A_Intel

Hi JRt00,

 

Try adding this piece of code to the demo you are using; note that you will need to provide the path to an IR model:

 

import sys

from openvino.inference_engine import IENetwork, IECore

ir = "<PATH_TO_IR>/openvino_models/ir/public/squeezenet1.1/FP16/squeezenet1.1.xml"
device = "MYRIAD"

def main():
    ie = IECore()
    net = IENetwork(model=ir, weights=ir[:-3] + 'bin')
    exec_net = ie.load_network(network=net, device_name=device)

    device_thermal = ie.get_metric(metric_name="DEVICE_THERMAL", device_name=device)
    print("Device Thermal: " + str(device_thermal))

if __name__ == "__main__":
    sys.exit(main())

 

 

Regards,

 

Javier A.  

Intel Customer Support Technician  

A Contingent Worker at Intel

 

JRt00
Beginner

Hi Javier,

 

This is really kind of you to keep helping me. I adapted your code to this:

weights = "/mnt/SSD-120GB/python3.7/.../frozen_inference_graph.bin"
model = "/mnt/SSD-120GB/python3.7/.../frozen_inference_graph.xml"
ie = IECore()
net = IENetwork(model=model, weights=weights)
exec_net = ie.load_network(network=net, device_name="MYRIAD")
device_thermal = ie.get_metric(metric_name="DEVICE_THERMAL", device_name="MYRIAD")

while True:
    print("Device Thermal: " + str(device_thermal))

and unfortunately I keep getting the same error:

 

    device_thermal = ie.get_metric(metric_name="DEVICE_THERMAL", device_name="MYRIAD")
  File "ie_api.pyx", line 121, in openvino.inference_engine.ie_api.IECore.get_metric
RuntimeError: [NOT_IMPLEMENTED]

JAVIERJOSE_A_Intel

Hi JRt00,

 

We tried the code on our end and it works successfully; the issue may be with your OpenVINO™ toolkit setup.

 

Could you please tell me which operating system you are using?

Did you follow the Get Started guide? If so, were you able to run the demo_squeezenet verification script?

 

Regards,

 

Javier A.  

Intel Customer Support Technician  

A Contingent Worker at Intel

 

JRt00
Beginner

Hi! I am on a Raspberry Pi. I installed it a month ago and can't remember what I tested.

JAVIERJOSE_A_Intel

Hi JRt00,

 

We tested the code we sent you on a Raspberry Pi 3 B+ and it works successfully. Try creating a new .py file that contains only that code, and make sure you are using the OpenVINO™ toolkit 2020.1 version. Also make sure the ir variable at the top of the snippet holds the absolute path to where you stored the .xml file.

 

Also, for Raspbian OS you will need to use an xml file from the OpenVINO™ toolkit 2019 R3.1 release; you can download the files here.

 

Regards,

 

Javier A.  

Intel Customer Support Technician  

A Contingent Worker at Intel

 

JRt00
Beginner

I appreciate the help and the time that you took, but if some of the functionalities only work with demo models, then Intel should be upfront about it. We are talking about a 70-billion-dollar company here, not a Kickstarter project.

JAVIERJOSE_A_Intel

Hi JRt00,

 

 

Thank you for your feedback regarding the documentation; we will pass it along to the development team so the documentation can be improved.

 

The code we provided can be used on its own to get the device's temperature. However, a model first needs to be loaded to the device, which puts it into inference mode; any supported model should work.
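For reference, here is a minimal sketch of that flow put together in one place; the model paths below are placeholders, and any IR model supported on MYRIAD should work in their place:

from openvino.inference_engine import IENetwork, IECore

# Placeholder paths; point these at any IR model supported on MYRIAD.
model_xml = "<PATH_TO_MODEL>.xml"
model_bin = "<PATH_TO_MODEL>.bin"

ie = IECore()
net = IENetwork(model=model_xml, weights=model_bin)
# Loading a network puts the NCS2 into inference mode;
# DEVICE_THERMAL is only implemented once a network is loaded.
exec_net = ie.load_network(network=net, device_name="MYRIAD")

# The metric can now be queried (and re-queried) for the current reading.
temperature = ie.get_metric(metric_name="DEVICE_THERMAL", device_name="MYRIAD")
print("Device Thermal: " + str(temperature))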

 

Please let us know if you were able to use the provided code successfully or have additional questions.

 

 

Regards,

 

Javier A. 
