Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Convert BERT TF model.pb to IR

Tye__Stephen
Beginner

Can you convert a TensorFlow BERT model.pb to IR?

 

Munesh_Intel
Moderator

Hi Stephen,

Thanks for reaching out to us.

OpenVINO supports the following Bidirectional Encoder Representations from Transformers (BERT) models:

· BERT-Base, Cased
· BERT-Base, Uncased
· BERT-Base, Multilingual Cased
· BERT-Base, Multilingual Uncased
· BERT-Base, Chinese
· BERT-Large, Cased
· BERT-Large, Uncased

 

Steps for converting BERT models to IR are available here:

Convert TensorFlow BERT Model to the Intermediate Representation
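As a rough sketch, the conversion command from that guide looks something like this for a downloaded checkpoint (mo_tf.py ships with the Model Optimizer; the --output node name below is illustrative, so please take the exact arguments from the guide):

    # Run from the Model Optimizer directory of your OpenVINO install.
    # The output node name is an assumption; the guide lists the exact one.
    python3 mo_tf.py \
        --input_meta_graph bert_model.ckpt.meta \
        --output bert/pooler/dense/Tanh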

 

Regards,

Munesh


Tye__Stephen
Beginner

Thanks, I have reviewed this page, but I do not have a bert_model.ckpt.meta file; I have a bert_model.pb file.

 

Also, on a separate note, is the DistilBERT model supported?

 

Regards,

Stephen

Munesh_Intel
Moderator

Hi Stephen,

We do have several Intel pre-trained BERT models in the Open Model Zoo (for example, bert-small-uncased-whole-word-masking-squad-0001, discussed below), and also the following public pre-trained model: bert-base-ner.

IR files are provided for the Intel pre-trained models. Meanwhile, the sole public pre-trained model, bert-base-ner, is a PyTorch model, which needs to be converted to ONNX and subsequently to IR. You can use the Open Model Zoo Model Converter, which invokes the Model Optimizer, to convert bert-base-ner into IR format.
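As a sketch, assuming you run the scripts from the Open Model Zoo tools/downloader directory, downloading and converting would look something like:

    # Fetch the PyTorch weights, then export to ONNX and run Model Optimizer.
    python3 downloader.py --name bert-base-ner
    python3 converter.py --name bert-base-ner --precisions FP16

converter.py performs both the PyTorch-to-ONNX export and the Model Optimizer step for you.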

 

For your information, the Model Optimizer conversion arguments are given here:

https://github.com/openvinotoolkit/open_model_zoo/blob/master/models/public/bert-base-ner/model.yml#L86

 

I would suggest trying similar arguments to convert your .pb file to IR, as sketched below.
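Here is a minimal sketch of such a call for a frozen TensorFlow graph (the input tensor names and the sequence length of 128 are assumptions; inspect your bert_model.pb, for example with TensorFlow's summarize_graph tool, to find the real ones):

    # Input names below are assumptions; TensorFlow BERT graphs commonly use
    # input_ids, input_mask and segment_ids, but yours may differ.
    python3 mo_tf.py \
        --input_model bert_model.pb \
        --input input_ids,input_mask,segment_ids \
        --input_shape [1,128],[1,128],[1,128] \
        --output_dir ./bert_ir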

 

For your second question, DistilBERT is not yet supported, but we are working on including it in a future release.

 

On another note, the ‘small’ Intel pre-trained models are distilled versions as well. For example, ‘bert-small-uncased-whole-word-masking-squad-0001’ is a small BERT-large-like model distilled on the SQuAD v1.1 training set from the original ‘bert-large-uncased-whole-word-masking-finetuned-squad’ model provided by the Transformers library.


Perhaps you can give these pre-trained models a try to see if they meet your requirements.
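If you would like to try one, a sketch using the Open Model Zoo downloader:

    # Intel pre-trained models are published as IR files directly,
    # so no conversion step is needed after downloading.
    python3 downloader.py --name bert-small-uncased-whole-word-masking-squad-0001 --precisions FP16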



Regards,

Munesh


Munesh_Intel
Moderator

Hi Stephen,

This thread will no longer be monitored since we have provided a suggestion and explanation. If you need any additional information from Intel, please submit a new question.



Regards,

Munesh

