
Worried about your Deep Learning model being stolen or tampered with? Worry no more...

MaryT_Intel
Employee

Summary

You have been working on developing and perfecting an efficient and accurate deep learning model, and now you want to find out which Intel hardware delivers optimal performance for it. Intel® DevCloud for the Edge gives you full access to hardware platforms hosted in a cloud environment designed specifically for deep learning. You can test your model’s performance using the Intel® Distribution of OpenVINO™ toolkit on various CPU, GPU, and VPU combinations, including the Intel® Neural Compute Stick 2 (NCS2).

But even after perfecting the model and deciding on hardware, you still need to handle security. How do you ensure inference integrity and copyright protection of your deep learning models? One possible solution is to use cryptography to protect models as they are deployed and stored on edge devices. Model encryption, decryption, and authentication are not provided by the OpenVINO™ toolkit but can be implemented with third-party libraries or modules, like Fernet* and Argon2*. In this way, the model is protected while it is in transit and at rest on the edge devices. 

Let’s see how to use the OpenVINO™ toolkit securely with protected models. After a model is optimized by the OpenVINO™ Model Optimizer, it is deployed to target devices in the Intermediate Representation (IR) format. The optimized model is stored on an edge device and executed by the OpenVINO™ toolkit Inference Engine. To protect deep learning models, you can encrypt an optimized model before deploying it to the edge device. The edge device should always keep the stored model encrypted and only decrypt it at runtime for use by the OpenVINO™ Inference Engine.

Let’s use Fernet* and Argon2* to encrypt and decrypt the IR files. The Fernet module provides symmetric, authenticated encryption, which guarantees that a message encrypted with it cannot be read or tampered with without the key. It uses URL-safe base64 encoding for keys, 128-bit AES in CBC mode with PKCS7 padding, and HMAC with SHA256 for authentication. The initialization vector (IV) is generated from os.urandom().
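For a quick feel of the Fernet API on its own, here is a minimal sketch (in our case the key will be derived from a password with Argon2 rather than generated with generate_key()):

from cryptography.fernet import Fernet

# a Fernet key is 32 random bytes, URL-safe base64-encoded
key = Fernet.generate_key()
fernet = Fernet(key)

token = fernet.encrypt(b"model bytes")  # authenticated ciphertext
plain = fernet.decrypt(token)           # raises InvalidToken if the data was tampered with
assert plain == b"model bytes"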

The Argon2 module is used as a KDF (Key Derivation Function) to generate a symmetric key for encrypting the model files and to protect the user-supplied password. This password will be the input for both encryption and decryption, as you will see in the code snippets below. According to the Argon2 IETF draft, there are two main versions of Argon2: Argon2i, which is the safest option against side-channel attacks, and Argon2d, which is the safest option against GPU cracking attacks. Argon2id works as Argon2i for the first half of the first iteration over the memory and as Argon2d for the rest, thus providing both side-channel attack protection and brute-force cost savings due to time-memory tradeoffs.
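To see why this works for key derivation, it helps to look at the encoded hash Argon2 produces: the parameters and the salt are embedded as "$"-separated fields, which is exactly what the code below relies on when it extracts the salt. A minimal sketch (the parameter values in the comment are just the library defaults, not the ones used in this demo):

import argon2

hasher = argon2.PasswordHasher()
pw_hash = hasher.hash("example-password")
# e.g. "$argon2id$v=19$m=65536,t=3,p=4$<salt-base64>$<hash-base64>"

# the second-to-last "$"-separated field is the base64-encoded salt
salt = pw_hash.split("$")[-2]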
 

Figure 1

Encrypt Model

In this section, we focus on how to encrypt the IR model files using the Argon2 and Fernet modules.

Before encrypting the model, you will have to download the raw model, create the FP16/FP32 IR files, and create the INT8 precision IR. For more details on these steps, check out the complete Accelerated Object Detection demo.
 

Figure 2

To encrypt the model files, you will need a password; we recommend a strong password that includes uppercase and lowercase letters, digits, and symbols.
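One simple way to collect such a password without echoing it to the terminal is sketched below; the read_password helper and the 12-character minimum are illustrative assumptions, not part of the demo code.

import getpass
import string

def read_password(prompt="Password: "):
    # prompt without echoing the characters to the terminal
    pw = getpass.getpass(prompt)
    # minimal strength check: length plus one character from each class
    classes = [string.ascii_lowercase, string.ascii_uppercase, string.digits, string.punctuation]
    if len(pw) < 12 or not all(any(c in cls for c in pw) for cls in classes):
        raise ValueError("use at least 12 characters mixing upper/lower case letters, digits, and symbols")
    return pw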

Note: The steps in this section should be performed outside of Intel® DevCloud for the Edge in a secure environment.

The code snippet below shows how to encrypt a file using the Argon2 and Fernet modules.
 

import base64
import getpass
from cryptography.fernet import Fernet
import argon2

def encrypt(password, argon2_parameters, in_file, out_file):
    """
    Encrypts in_file and writes the encrypted content to out_file 
    """

    # create the hasher object using the given parameters
    hasher = argon2.PasswordHasher(time_cost=argon2_parameters.time_cost, \
                               memory_cost=argon2_parameters.memory_cost, \
                               parallelism=argon2_parameters.parallelism, \
                               hash_len=argon2_parameters.hash_len, \
                               salt_len=argon2_parameters.salt_len)

    # generate the hash for the given password
    pw_hash = hasher.hash(password)

    # extract the salt generated by the hash function to be used for generating the fernet key
    salt = pw_hash.split("$")[-2]

    # generate the raw hash to be used as fernet key	
    raw_hash = argon2.low_level.hash_secret_raw(time_cost=argon2_parameters.time_cost, \
                                                memory_cost=argon2_parameters.memory_cost, \
                                                parallelism=argon2_parameters.parallelism, \
                                                hash_len=argon2_parameters.hash_len, \
                                                secret=bytes(password, "utf_16_le"), \
                                                salt=bytes(salt, "utf_16_le"), \
                                                type=argon2_parameters.type)
    key = base64.urlsafe_b64encode(raw_hash)
    fernet = Fernet(key)
    
    # save the password hash first before writing the encrypted data to the out_file
    with open(in_file, 'rb') as f_in, open(out_file, 'wb') as f_out:
        data = f_in.read()
        enc_data = fernet.encrypt(data)
        pw_hash = pw_hash + '\n'
        f_out.write(bytes(pw_hash, "utf-8"))
        f_out.write(enc_data)

if __name__ == '__main__':
    xml_file = "mobilenet-ssd.xml"
    bin_file = "mobilenet-ssd.bin"
    encrypt_xml_file = "mobilenet-ssd-encrypt.xml"
    encrypt_bin_file = "mobilenet-ssd-encrypt.bin"

    # prompt for the password used to derive the encryption key
    pw = getpass.getpass("Password: ")
    # hashing algorithm is Argon2id
    # details on choosing the parameters can be found here 
    # https://argon2-cffi.readthedocs.io/en/stable/parameters.html
    p = argon2.Parameters(type=argon2.low_level.Type.ID, \
                      version=19, \
                      salt_len=16, \
                      hash_len=32, \
                      time_cost=16, \
                      memory_cost=2**20, \
                      parallelism=8)
    # encrypt the .xml file	
    encrypt(pw, p, xml_file, encrypt_xml_file)
    # encrypt the .bin file	
    encrypt(pw, p, bin_file, encrypt_bin_file)

 

Code 1

Note: We reuse the salt generated by the hashing function because the complete hash is saved into the encrypted file, so during decryption we can extract the salt in the same way as we did above. The Argon2 spec provides more details.

Decrypt Model 

This section goes through the steps to decrypt and load the model, and then run the inference application on Intel® DevCloud for the Edge.

Note: The steps in this section should be performed on Intel® DevCloud for the Edge after uploading the encrypted model as shown above.
 

Figure 3

The OpenVINO™ toolkit Inference Engine requires model decryption before loading. Allocate a temporary memory block for model decryption and use IECore methods to load the model from the memory buffer. For more information, see the IECore Class Reference Documentation.

The code snippet below shows how to decrypt the encrypted files using the same Argon2 and Fernet modules and load them into the OpenVINO™ toolkit Inference Engine.

import base64
import logging as log
from cryptography.fernet import Fernet
import argon2

# IR files decryption function
# input: password and encrypted file path
# output: buffer containing decrypted file content
def decrypt(password, in_file):

    # Read the password hash from the encrypted file
    with open(in_file, 'r') as f:
        pw_hash = f.readline()[:-1]

    # Extract the Argon2 parameters from the hash
    p = argon2.extract_parameters(pw_hash)
    hasher = argon2.PasswordHasher(time_cost=p.time_cost, \
                               memory_cost=p.memory_cost, \
                               parallelism=p.parallelism, \
                               hash_len=p.hash_len, \
                               salt_len=p.salt_len)
    # Verify that the password used during encryption matches 
    # with the one provided during decryption, if not stop
    try:
        hasher.verify(pw_hash, password)
        log.info("Argon2 verify: true")
    except Exception:
        log.error("Argon2 verify: false, check password")
        exit()

    # Extract the salt from the hash that will be used for generating the fernet key
    salt = pw_hash.split("$")[-2]

    # Generate the raw hash to be used as fernet key as done during encryption above	
    raw_hash = argon2.low_level.hash_secret_raw(time_cost=p.time_cost, \
                                    memory_cost=p.memory_cost, \
                                    parallelism=p.parallelism, \
                                    hash_len=p.hash_len, \
                                    secret=bytes(password, "utf_16_le"), \
                                    salt=bytes(salt, "utf_16_le"), \
                                    type=p.type)

    # base64 encode the raw hash (key) 
    key = base64.urlsafe_b64encode(raw_hash)

    # Create the Fernet key for decrypting the files
    fernet = Fernet(key)
    dec_data = b''
    with open(in_file, 'rb') as f_in:
        enc_data = f_in.readlines()[1]
        try:
            dec_data = fernet.decrypt(enc_data)
        except Exception:
            log.error("decryption failed")
    return dec_data
#------------------------------------------------------------------------
...
# decrypting and loading the model IR files
dec_xml = decrypt(pw, model_xml)
dec_bin = decrypt(pw, model_bin)
net = ie.read_network(model=dec_xml, weights=dec_bin, init_from_buffer=True)
...

 

Summary of the steps performed to decrypt and load the model IR files:

  1. Open the encrypted file for reading and extract the hashing parameters using the Argon2 method extract_parameters.
  2. Create a hasher object based on the parameters read from the encrypted file.
  3. Verify that the input password matches the stored hash using the hasher's verify method. If the password doesn't match, error out.
  4. Extract the salt from the hash.
  5. Generate the Fernet key (encryption key) using the same password as the secret and the salt generated during hashing.
  6. Decrypt the file content into a buffer to be loaded into the OpenVINO™ toolkit Inference Engine network, as in the sketch below.
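To tie these steps together, here is a minimal sketch of feeding the decrypted buffers to the Inference Engine and loading the network onto a device; the device name ("CPU") and the input preprocessing are placeholders for illustration, not taken from the demo.

from openvino.inference_engine import IECore

ie = IECore()

# decrypt() returns the plaintext IR content as bytes (see the snippet above);
# nothing decrypted is ever written back to disk
dec_xml = decrypt(pw, "mobilenet-ssd-encrypt.xml")
dec_bin = decrypt(pw, "mobilenet-ssd-encrypt.bin")

# read the network directly from the in-memory buffers
net = ie.read_network(model=dec_xml, weights=dec_bin, init_from_buffer=True)

# load onto a target device ("CPU" is a placeholder) and run inference as usual
exec_net = ie.load_network(network=net, device_name="CPU")
input_blob = next(iter(net.input_info))
# results = exec_net.infer(inputs={input_blob: preprocessed_image})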

Conclusion

Encrypting your optimized model before deploying it to the edge device will protect the integrity and confidentiality of your model while in transit and at rest.

We picked Argon2 and Fernet for hashing and symmetric encryption, but you are free to pick other modules or algorithms based on your needs. Check out the working sample of Accelerated Object Detection with an encrypted model on Intel® DevCloud for the Edge. Note that this approach does not provide runtime protection of the model, as it is decrypted in memory while in use. Stay tuned for more details on runtime protection of models in my upcoming blogs.
 

 

Notices & Disclaimers

Performance varies by use, configuration and other factors. Learn more at www.Intel.com/PerformanceIndex.  

Performance results are based on testing as of dates shown in configurations and may not reflect all publicly available updates. See backup for configuration details. No product or component can be absolutely secure.
 
Your costs and results may vary. 

Intel technologies may require enabled hardware, software or service activation.

© Intel Corporation. Intel, the Intel logo, OpenVINO, and other Intel marks are trademarks of Intel Corporation or its subsidiaries. Other names and brands may be claimed as the property of others.  
 
