Beginner

cl_cache model security


Hi,

We ship our products with encrypted DNN models, which are decrypted at runtime in memory only. Enabling cl_cache saves a compiled, unencrypted model to a file. This seems to undermine our efforts to protect the Intellectual Property (IP) in our DNN models. Is there a way to disable this feature?
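For context, our decryption flow looks roughly like the minimal sketch below. XOR stands in for our real cipher, and all names are illustrative; the decrypted bytes would then be handed to the Inference Engine in memory rather than ever being written to disk.

```python
# Minimal sketch of decrypt-in-memory model loading.
# XOR is a stand-in for a real cipher (e.g. AES); names are illustrative.

def decrypt(blob: bytes, key: bytes) -> bytes:
    """Decrypt an encrypted model blob in memory (XOR stand-in, symmetric)."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(blob))

key = b"secret"
plain_model = b"<net>...weights...</net>"   # pretend IR content
encrypted = decrypt(plain_model, key)       # XOR is its own inverse
model_bytes = decrypt(encrypted, key)       # decrypted copy exists in RAM only
assert model_bytes == plain_model
# model_bytes would then be passed to the engine's read-from-buffer API
# instead of being written to a file.
```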

Thanks

Alexey

 

 
 

Accepted Solutions
Moderator

Hi Alexey,


The following is feedback from the developer team.


Caching via the IE API must be enabled explicitly in the app, and one app cannot enable caching for another app. In both cases, only the compiled kernels required for model execution are saved. The graph structure (connections between layers), weight values, execution order, and so on are not cached. So the kernel cache carries the risk that someone could disassemble the kernels and learn some information about the layers in the model, but the full model structure cannot be reconstructed from the cached data.


It is not possible to run the model directly from the cache. Therefore, I would classify this risk as low.


Regards,

Rizal


8 Replies
Moderator

Hi Alexey,


This is a feature of OpenCL, which is used to implement the GPU plugin.

It is not possible to turn off this feature for other users, but we will look into this further to see if there are options available.


Regards,

Rizal


Beginner

Hi Rizal,

 

Thank you for the prompt response.

I also noticed that this sits in the OpenCL driver, which is outside the OpenVINO team but still developed by Intel.

If it is not possible to turn this feature off, then to assess the risk of our DNN models being used outside of our software, I would appreciate it if you could clarify how easy it would be to use the files from the cache directly. I.e., I guess these files are something like ".exe" files for the OpenCL device (sorry for the comparison, I don't know OpenCL), so if one can load them into the GPU and feed in data in the right format, one can use the DNN models. Am I right?

 

Thanks in advance,

Alexey

 

 

 
 
 
Moderator

Hi Alexey,


It is not trivial to simply load the cache to the GPU to run the model.

Using the Inference Engine's high-level API, it is not possible to run it without the IR model.

This is my opinion thus far.


I would need to confirm this and I will get back to you with the appropriate information.


Regards,

Rizal


Moderator

Hi Alexey,


I would like to share some information that may help with protecting encrypted models: https://docs.openvinotoolkit.org/latest/openvino_docs_IE_DG_protecting_model_guide.html

 

Also, this article provides information on managing cl_cache: https://github.com/intel/compute-runtime/blob/master/opencl/doc/FAQ.md#feature-cl_cache


Regards,

Rizal


Beginner

Hi Rizal,

We are doing exactly as suggested in your first link.

However, as explained in the second link, a user of our software could enable cl_cache quite easily by setting an environment variable or creating a cl_cache directory. This results in the OpenCL-compiled model being saved to disk in unencrypted form.

If one could use this compiled model, one could effectively bypass the encryption.
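To illustrate, here is a minimal Python sketch of the two enabling paths described in that FAQ. The directory and variable names follow the FAQ; whether the driver actually picks them up at runtime is an assumption and is not tested here.

```python
# Sketch of how an end user could enable the NEO driver's cl_cache,
# per the compute-runtime FAQ linked above. Driver behaviour itself is
# an assumption here; this script only demonstrates the two triggers.
import os
import tempfile

workdir = tempfile.mkdtemp()  # stand-in for the application's working dir

# Option 1: create a cl_cache directory next to the application binary.
os.makedirs(os.path.join(workdir, "cl_cache"), exist_ok=True)

# Option 2: point the driver at a cache location via environment variable.
os.environ["cl_cache_dir"] = os.path.join(workdir, "cl_cache")

print(sorted(os.listdir(workdir)))  # → ['cl_cache']
```

Either trigger is available to any user of the machine, which is why we cannot rely on the application never enabling it.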

Regards,
Alexey

 

 

 

 
Beginner

Hi Rizal,

 

Thanks for the detailed response.

 

Just to put some final touches on this: the cl_cache we are talking about is not enabled via the IE API; it sits at a lower level (the NEO driver). To enable it, one can just create a cl_cache directory or set an environment variable. So I would disagree with the first sentence.

I have just checked the file sizes in the cache directory, and it does look like the files are too small to hold the model weights.
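As a rough illustration of that size check, here is a sketch with invented cache files. The file names and sizes are made up; only the arithmetic comparing cached bytes against a model's weight size is real.

```python
# Illustrative version of the size check: cached kernel binaries are far
# smaller than the model weights, so the weights cannot be inside them.
# Fake files stand in for real cl_cache contents; all sizes are invented.
import os
import tempfile

cache = tempfile.mkdtemp()  # stand-in for the cl_cache directory
for i, size in enumerate([4096, 8192, 2048]):  # pretend kernel blobs
    with open(os.path.join(cache, f"kernel{i}.cl_cache"), "wb") as f:
        f.write(b"\0" * size)

cached_total = sum(
    os.path.getsize(os.path.join(cache, name)) for name in os.listdir(cache)
)
weights_size = 50 * 1024 * 1024  # e.g. a 50 MB model

print(cached_total, cached_total < weights_size)  # → 14336 True
```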

Again, thanks for the clarification, and I agree with your assessment that the risk is low.

 

Best regards,

Alexey

 
Moderator

Hi Alexey,


You're welcome and I hope this discussion has helped you.


Intel will no longer monitor this thread since this issue has been resolved. If you need any additional information from Intel, please submit a new question.


Regards,

Rizal

