Intel® Distribution of OpenVINO™ Toolkit
Community assistance for the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Model optimization of Google's ALBERT (A Lite BERT)

Windt__Nicolas
Beginner

Dear community,

I want to convert Google's ALBERT (A Lite BERT) to an intermediate representation (IR).

Converting BERT works like a charm; it runs out of the box, albeit with a lot of warnings.

My hardware is:

  • Dell Precision
  • Intel® Core™ i7-6820HQ CPU @ 2.70GHz × 8
  • Intel® HD Graphics 530 (Skylake GT2)
  • 15.5 GiB memory

My OS is:

$ uname -a
Linux orquideaWindt 5.3.0-28-generic #30~18.04.1-Ubuntu SMP Fri Jan 17 06:14:09 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux

My OpenVINO Model Optimizer version is:

$ mo.py --version
Version of Model Optimizer is: 2019.3.0-408-gac8584cb7

I downloaded the TensorFlow model from Google's public drive.

In a first trial, my TensorFlow version is:

$ python3
Python 3.6.9 (default, Nov  7 2019, 10:44:02) 
[GCC 8.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorflow as tf
>>> tf.__version__
'1.14.0'

(By the way, I get a lot of ugly FutureWarnings when importing TensorFlow. For the sake of readability I do not repeat them in the description above, but you will see them in the log below.)

Now the Model Optimizer's output:

$ mo.py --framework tf --input_meta_graph albert_base_v2/model.ckpt-best.meta
Model Optimizer arguments:
Common parameters:
	- Path to the Input Model: 	None
	- Path for generated IR: 	/home/nicolas/Dokumente/deepLearning/albert/.
	- IR output name: 	model.ckpt-best
	- Log level: 	ERROR
	- Batch: 	Not specified, inherited from the model
	- Input layers: 	Not specified, inherited from the model
	- Output layers: 	Not specified, inherited from the model
	- Input shapes: 	Not specified, inherited from the model
	- Mean values: 	Not specified
	- Scale values: 	Not specified
	- Scale factor: 	Not specified
	- Precision of IR: 	FP32
	- Enable fusing: 	True
	- Enable grouped convolutions fusing: 	True
	- Move mean values to preprocess section: 	False
	- Reverse input channels: 	False
TensorFlow specific parameters:
	- Input model in text protobuf format: 	False
	- Path to model dump for TensorBoard: 	None
	- List of shared libraries with TensorFlow custom layers implementation: 	None
	- Update the configuration file with input/output node names: 	None
	- Use configuration file used to generate the model with Object Detection API: 	None
	- Operations to offload: 	None
	- Patterns to offload: 	None
	- Use the config file: 	None
Model Optimizer version: 	2019.3.0-408-gac8584cb7
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:516: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint8 = np.dtype([("qint8", np.int8, 1)])
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:517: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint8 = np.dtype([("quint8", np.uint8, 1)])
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:518: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint16 = np.dtype([("qint16", np.int16, 1)])
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:519: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint16 = np.dtype([("quint16", np.uint16, 1)])
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:520: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint32 = np.dtype([("qint32", np.int32, 1)])
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:525: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  np_resource = np.dtype([("resource", np.ubyte, 1)])
/usr/local/lib/python3.6/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:541: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint8 = np.dtype([("qint8", np.int8, 1)])
/usr/local/lib/python3.6/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:542: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint8 = np.dtype([("quint8", np.uint8, 1)])
/usr/local/lib/python3.6/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:543: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint16 = np.dtype([("qint16", np.int16, 1)])
/usr/local/lib/python3.6/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:544: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint16 = np.dtype([("quint16", np.uint16, 1)])
/usr/local/lib/python3.6/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:545: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint32 = np.dtype([("qint32", np.int32, 1)])
/usr/local/lib/python3.6/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:550: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  np_resource = np.dtype([("resource", np.ubyte, 1)])
[ FRAMEWORK ERROR ]  'Einsum'
Cannot load input model: 'Einsum'
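For reference, here is a minimal sketch (assuming TensorFlow 1.x and the checkpoint path from the command above) that lists the op types contained in the meta graph; if 'Einsum' shows up, that is the op Model Optimizer 2019.3 refuses to load:

import tensorflow as tf

graph = tf.Graph()
with graph.as_default():
    # Load only the graph definition from the checkpoint's .meta file
    tf.train.import_meta_graph("albert_base_v2/model.ckpt-best.meta")

# Print the distinct op types used in the graph
print("\n".join(sorted({op.type for op in graph.get_operations()})))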

In a second trial, I installed TensorFlow 2.0.0 in a conda environment. This is the result:

$ mo.py --framework tf --input_meta_graph albert_base_v2/model.ckpt-best.meta
Model Optimizer arguments:
Common parameters:
	- Path to the Input Model: 	None
	- Path for generated IR: 	/home/nicolas/Dokumente/deepLearning/albert/.
	- IR output name: 	model.ckpt-best
	- Log level: 	ERROR
	- Batch: 	Not specified, inherited from the model
	- Input layers: 	Not specified, inherited from the model
	- Output layers: 	Not specified, inherited from the model
	- Input shapes: 	Not specified, inherited from the model
	- Mean values: 	Not specified
	- Scale values: 	Not specified
	- Scale factor: 	Not specified
	- Precision of IR: 	FP32
	- Enable fusing: 	True
	- Enable grouped convolutions fusing: 	True
	- Move mean values to preprocess section: 	False
	- Reverse input channels: 	False
TensorFlow specific parameters:
	- Input model in text protobuf format: 	False
	- Path to model dump for TensorBoard: 	None
	- List of shared libraries with TensorFlow custom layers implementation: 	None
	- Update the configuration file with input/output node names: 	None
	- Use configuration file used to generate the model with Object Detection API: 	None
	- Operations to offload: 	None
	- Patterns to offload: 	None
	- Use the config file: 	None
Model Optimizer version: 	2019.3.0-408-gac8584cb7
[ ERROR ]  
Detected not satisfied dependencies:
	tensorflow: installed: 2.0.0, required: 2.0.0
	networkx: installed: 2.4, required: 2.4

Please install required versions of components or use install_prerequisites script
/home/nicolas/Dokumente/intel/openvino_2019.3.376/deployment_tools/model_optimizer/install_prerequisites/install_prerequisites_tf.sh
Note that install_prerequisites scripts may install additional components.
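As a sanity check, this minimal sketch (run with the same python3 interpreter that mo.py uses) prints the versions the dependency check above actually sees:

import networkx
import tensorflow as tf

# Versions visible to this interpreter; the Model Optimizer compares
# these against its requirements before converting anything
print("networkx  :", networkx.__version__)
print("tensorflow:", tf.__version__)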

So far I have not succeeded in converting ALBERT into an intermediate representation. I would very much appreciate some support.

Thanks in advance

Nicolas Windt

2 Replies
Sahira_Intel
Moderator

Hi Nicolas,

I believe this error is due to an incompatibility with NetworkX v2.4.

We have had success converting models to IR after downgrading our NetworkX version to 2.3.

Install v2.3 with this command and try again:

pip3 install networkx==2.3
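To double-check that the downgrade is picked up by the same interpreter the Model Optimizer runs under, here is a quick sketch:

import networkx

# The dependency check reportedly trips over 2.4, so 2.3 is expected here
assert networkx.__version__ == "2.3", networkx.__version__
print("networkx", networkx.__version__, "is active")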

Best Regards,

Sahira 

Windt__Nicolas
Beginner

Hello Sahira,

Thanks for the recommendation. I applied it, and the error no longer occurs.

However, I'm now facing the same issue as Andrés Ortiz concerning "Error shape/value propagation / converting pre-trained tensor flow model".

I'm waiting for your answer in that thread while trying to work out a solution myself.

Best Regards

 

Nicolas Windt
