Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

EfficientDet-D0 trained and exported in Tensorflow 2.0 Object Detection API

macimovic
Beginner
8,803 Views
System information (version)
  • OpenVINO => For conversion: OpenVINO from GitHub, commit c8af311 (2021-06-11); for inference: openvino_2021.2.185
  • Operating System / Platform => Windows 10
  • Compiler =>
  • Problem classification => Model Inference
  • Framework: TensorFlow Object Detection API 2.0
  • Model name: EfficientDet D0, input resolution 512x512, trained on custom data and exported with the TensorFlow Object Detection API 2.0
  • Model optimizer conversion file: model-optimizer\extensions\front\tf\efficient_det_support_api_v2.4.json
Detailed description

At inference, a TF2 Object Detection API EfficientDet D0 retrained on custom data returns only an array of zeros (except the first value, which is -1) instead of bounding boxes.

Steps to reproduce
  1. We trained EfficientDet D0 from the TensorFlow 2.0 Object Detection API on custom data,

  2. exported the model with the TF2 exporter to a saved_model.pb, and

  3. converted the EfficientDet D0 with model-optimizer\mo_tf.py (OpenVINO GitHub commit c8af311), using model-optimizer\extensions\front\tf\efficient_det_support_api_v2.4.json.

We used the following settings:

APIFILE=%OPENVINOINSTALLDIR%\model-optimizer\extensions\front\tf\efficient_det_support_api_v2.4.json

python %OPENVINOINSTALLDIR%\model-optimizer\mo_tf.py ^
    --saved_model_dir="exported-models\%MODELNAME%\saved_model" ^
    --tensorflow_object_detection_api_pipeline_config=exported-models\%MODELNAME%\pipeline.config ^
    --transformations_config=%APIFILE% ^
    --reverse_input_channels ^
    --data_type FP32 ^
    --output_dir=exported-models-openvino\%MODELNAME%_OV%PRECISION%

The model was successfully converted. The model optimizer log can be found here:

Apply to model tf2oda_efficientdet_512x512_pedestrian_D0_LR08 with precision FP32
#2 Model to OpenVino Intermediate Representation
#21_SoC_EML\openvino\model-optimizer\extensions\front\tf\efficient_det_support_api_v2.4.json
"Start conversion"
Model Optimizer arguments:
Common parameters:
    - Path to the Input Model: None
    - Path for generated IR: C:\Projekte\21_SoC_EML\scripts-and-guides-samples\oxford_pets_reduced_openvino\exported-models-openvino\tf2oda_efficientdet_512x512_pedestrian_D0_LR08_OVFP32
    - IR output name: saved_model
    - Log level: ERROR
    - Batch: Not specified, inherited from the model
    - Input layers: Not specified, inherited from the model
    - Output layers: Not specified, inherited from the model
    - Input shapes: Not specified, inherited from the model
    - Mean values: Not specified
    - Scale values: Not specified
    - Scale factor: Not specified
    - Precision of IR: FP32
    - Enable fusing: True
    - Enable grouped convolutions fusing: True
    - Move mean values to preprocess section: None
    - Reverse input channels: True
TensorFlow specific parameters:
    - Input model in text protobuf format: False
    - Path to model dump for TensorBoard: None
    - List of shared libraries with TensorFlow custom layers implementation: None
    - Update the configuration file with input/output node names: None
    - Use configuration file used to generate the model with Object Detection API: C:\Projekte\21_SoC_EML\scripts-and-guides-samples\oxford_pets_reduced_openvino\exported-models\tf2oda_efficientdet_512x512_pedestrian_D0_LR08\pipeline.config
    - Use the config file: None
[ WARNING ] Failed to import Inference Engine Python API in: PYTHONPATH
[ WARNING ] DLL load failed while importing ie_api: The specified module could not be found.
[ WARNING ] Could not find the Inference Engine Python API. At this moment, the Inference Engine dependency is not required, but will be required in future releases.
[ WARNING ] Consider building the Inference Engine Python API from sources or try to install OpenVINO (TM) Toolkit using "install_prerequisites.sh"
Model Optimizer version: custom_main_1d892296429f4c47c839bce4eba524edff8eb0d3
2021-06-11 13:57:50.460822: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'cudart64_110.dll'; dlerror: cudart64_110.dll not found
2021-06-11 13:57:50.478509: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
2021-06-11 13:58:23.317335: I tensorflow/compiler/jit/xla_cpu_device.cc:41] Not creating XLA devices, tf_xla_enable_xla_devices not set
2021-06-11 14:00:13.380157: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'nvcuda.dll'; dlerror: nvcuda.dll not found
2021-06-11 14:00:13.408711: W tensorflow/stream_executor/cuda/cuda_driver.cc:326] failed call to cuInit: UNKNOWN ERROR (303)
2021-06-11 14:00:13.435937: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:169] retrieving CUDA diagnostic information for host: dp3510
2021-06-11 14:00:13.461395: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:176] hostname: dp3510
2021-06-11 14:00:13.490841: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2021-06-11 14:00:13.575427: I tensorflow/compiler/jit/xla_gpu_device.cc:99] Not creating XLA devices, tf_xla_enable_xla_devices not set
2021-06-11 14:02:19.125553: I tensorflow/core/grappler/devices.cc:69] Number of eligible GPUs (core count >= 8, compute capability >= 0.0): 0
2021-06-11 14:02:19.144580: I tensorflow/core/grappler/clusters/single_machine.cc:356] Starting new session
2021-06-11 14:02:19.154771: I tensorflow/compiler/jit/xla_gpu_device.cc:99] Not creating XLA devices, tf_xla_enable_xla_devices not set
2021-06-11 14:02:21.336527: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:928] Optimization results for grappler item: graph_to_optimize
  function_optimizer: Graph size after: 5666 nodes (4965), 13173 edges (12465), time = 405.86ms.
  function_optimizer: Graph size after: 5666 nodes (0), 13173 edges (0), time = 145.277ms.
Optimization results for grappler item: __inference_map_while_Preprocessor_ResizeToRange_cond_false_12678_30252
  function_optimizer: function_optimizer did nothing. time = 0.002ms.
  function_optimizer: function_optimizer did nothing. time = 0ms.
Optimization results for grappler item: __inference_map_while_body_12631_30367
  function_optimizer: Graph size after: 117 nodes (0), 126 edges (0), time = 2.744ms.
  function_optimizer: Graph size after: 117 nodes (0), 126 edges (0), time = 2.804ms.
Optimization results for grappler item: __inference_map_while_Preprocessor_ResizeToRange_cond_true_12677_15652
  function_optimizer: function_optimizer did nothing. time = 0.001ms.
  function_optimizer: function_optimizer did nothing. time = 0.001ms.
Optimization results for grappler item: __inference_map_while_cond_12630_38606
  function_optimizer: function_optimizer did nothing. time = 0.001ms.
  function_optimizer: function_optimizer did nothing. time = 0ms.

[ WARNING ] Model Optimizer removes pre-processing block of the model which resizes image keeping aspect ratio. The Inference Engine does not support dynamic image size so the Intermediate Representation file is generated with the input image size of a fixed size. Specify the "--input_shape" command line parameter to override the default shape which is equal to (512, 512). The Preprocessor block has been removed. Only nodes performing mean value subtraction and scaling (if applicable) are kept.
[ WARNING ] Using fallback to produce IR.
[ SUCCESS ] Generated IR version 10 model.
[ SUCCESS ] XML file: C:\Projekte\21_SoC_EML\scripts-and-guides-samples\oxford_pets_reduced_openvino\exported-models-openvino\tf2oda_efficientdet_512x512_pedestrian_D0_LR08_OVFP32\saved_model.xml
[ SUCCESS ] BIN file: C:\Projekte\21_SoC_EML\scripts-and-guides-samples\oxford_pets_reduced_openvino\exported-models-openvino\tf2oda_efficientdet_512x512_pedestrian_D0_LR08_OVFP32\saved_model.bin
[ SUCCESS ] Total execution time: 510.97 seconds.
"Conversion finished"

The converted model can be found here: https://owncloud.tuwien.ac.at/index.php/s/0zmcqbY3HkhUIrL

  4. Then we ran the converted model with benchmark_app.py from OpenVINO 2021.2.185, which was successful.

  5. We ran the model with the Python script object_detection_sample_ssd.py.

  6. The result was an array of zeros for each image, except the first value.

Loading network and perform on CPU
Starting inference for picture: Abyssinian_116.jpg
{'DetectionOutput': array([[[[-1.,  0.,  0.,  0.,  0.,  0.,  0.],
                             [ 0.,  0.,  0.,  0.,  0.,  0.,  0.],
                             [ 0.,  0.,  0.,  0.,  0.,  0.,  0.],
                             ...,
                             [ 0.,  0.,  0.,  0.,  0.,  0.,  0.]]]], dtype=float32)}
(The full blob contains 100 rows of 7 values; apart from the leading -1 in the first row, every value is 0.)
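
For context, the DetectionOutput layer of an OpenVINO IR produces a [1, 1, N, 7] blob whose rows have the layout [image_id, class_id, confidence, x_min, y_min, x_max, y_max], and an image_id of -1 marks the end of the valid detections, so the dump above corresponds to zero detections. Below is a minimal parsing sketch; the blob name 'DetectionOutput' is taken from the output above, while the helper itself is only illustrative and not part of the Intel sample:

def parse_detections(result, img_w, img_h, conf_threshold=0.5):
    # result is the dict returned by exec_net.infer(); the blob has shape [1, 1, N, 7]
    boxes = []
    for image_id, class_id, conf, xmin, ymin, xmax, ymax in result['DetectionOutput'][0][0]:
        if image_id == -1:          # -1 in the first column ends the list of valid detections
            break
        if conf < conf_threshold:
            continue
        boxes.append((int(class_id), float(conf),
                      int(xmin * img_w), int(ymin * img_h),
                      int(xmax * img_w), int(ymax * img_h)))
    return boxes

# For the output shown above this returns an empty list right away: the very first
# row already carries image_id == -1, i.e. the converted model reported no detections.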

0 Kudos
1 Solution
Zulkifli_Intel
Moderator
4,573 Views

Hello Milos Acimovic,

 

Thank you for your patience. We received an update from the developers that the issue has been fixed in the upcoming OpenVINO release (OpenVINO 2022.1), which will be available soon; alternatively, you can pull the fix from the OpenVINO GitHub (master branch).

 

Here are quick guides that you can use.

1.    Once the latest OpenVINO is available, download and install it in your system.

 

2.    Convert the model to IR using this command:

mo --saved_model_dir ~/openvino_models/saved_model --transformations_config front/tf/efficient_det_support_api_v2.4.json --tensorflow_object_detection_api_pipeline_config ~/openvino_model/fruit_export/pipeline.config -o ~output_directory

 

3.    We also made some modifications to the Python script so that it is compatible with the latest OpenVINO library (see attachment); a minimal illustrative sketch follows after step 4.

Here are the changes we made: 

net = IENetwork(model=model_xml, weights=model_bin) --> net = ie.read_network(model_xml, model_bin)

net.inputs --> net.input_info

net.inputs[self.input_blob].shape --> net.input_info[self.input_blob].input_data.shape

 

4.    Run inference using the optimized model and please ensure the input image is 512x512 (since this is the input image dimension that the model was trained on).
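
For illustration, here is a minimal sketch of how the converted IR could be loaded and run with the updated Inference Engine Python API calls from step 3. The file names, test image, and device are placeholders, and this sketch is not the attached script itself:

import cv2
import numpy as np
from openvino.inference_engine import IECore

model_xml = 'saved_model.xml'   # placeholder paths to the converted IR
model_bin = 'saved_model.bin'

ie = IECore()
net = ie.read_network(model_xml, model_bin)        # replaces IENetwork(model=..., weights=...)
input_blob = next(iter(net.input_info))            # net.inputs -> net.input_info
n, c, h, w = net.input_info[input_blob].input_data.shape
exec_net = ie.load_network(network=net, device_name='CPU')

image = cv2.imread('test.jpg')                     # placeholder 512x512 test image
blob = cv2.resize(image, (w, h)).transpose((2, 0, 1))[np.newaxis, ...]  # HWC -> NCHW
result = exec_net.infer({input_blob: blob})        # result['DetectionOutput'] holds the [1, 1, N, 7] blob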

 

Hope this information helps.

 

Sincerely,

Zulkifli

 


0 Kudos
38 Replies
Vicky_Rama
Beginner
2,328 Views

Thanks Zulkifli. I will check the script, use it, and see how it deploys.

0 Kudos
Zulkifli_Intel
Moderator
2,341 Views

Hello Milos Acimovic,

 

We received a similar error when converting the model. Based on our investigation, we found that the EfficientDet-D0 from TensorFlow 2 Detection Model Zoo was trained using a different TensorFlow 2 version than the one that you used to train your model. In order to convert it to IR format, efficient_det_support_api_v2.4.json is required.

 

We compared your custom-trained model against the TensorFlow Open Model Zoo model and the Intel Open Model Zoo model, testing each with your test image; none of these models could detect the objects in the image. We will refer this issue to the engineering team. Once we receive their feedback, we'll get back to you.

 

Sincerely,

Zulkifli 


0 Kudos
boe
Novice
2,238 Views

Hi Zulkifli,

Any updates from the engineering team?

It's rather disappointing that we can't use IRs from TensorFlow EfficientDet models, since we would like to use them on edge devices for CPU efficiency.

 

Best regards

boe

0 Kudos
boe
Novice
2,316 Views

Hi, I'm also facing the same problem as macimovic. I can't get bounding boxes when running Intel's object detection script, even though the conversion of a TF Object Detection API 2 efficientdet_d0 model to IR was successful. Is the Intel team working on fixing this issue?

0 Kudos
Vicky_Rama
Beginner
2,274 Views

There are some compatibility issues between the TF2 OD API and OpenVINO. For instance, when we use OpenVINO to convert a model from the TF2 OD API Model Zoo, we just point it at the saved_model file; it converts perfectly, and the resulting IR files also work perfectly for inference. But when we use the TF2 OD API to fine-tune a model-zoo model on our custom dataset and export it to the "saved_model" format, conversion invariably ends in errors and exceptions, as the OP pointed out. This was brought to the attention of the TF2 OD API development team, and they provided a fix via recent commits to "Exporter-mainv2.py" and "libv2.py"; it seems to work perfectly with SNPE APIs. I thought the same commits would work when converting with OpenVINO, but I am grappling with the same issue. It seems to be a universal problem between frameworks like TF/PyTorch and chip/SoC/SoM makers. These are my early days, and as a core Mechanical Engineering person I am trying to unravel them. I bought a few Edge AI inference kits with Myriad for commercial deployment and am really stuck on how to solve this issue; I can't change to other makers, as these interoperability issues exist on other platforms as well. I tried converting TF models using the Keras API and another issue cropped up, so I have kept further trials in abeyance and am looking at similar posts and threads for a workaround.

0 Kudos
Zulkifli_Intel
Moderator
2,304 Views

Hello Milos Acimovic,

 

Looking at the Object Detection SSD Python* Sample, bounding boxes appear only when the confidence is greater than 0.5, as in the script below:

 

 for i, detection in enumerate(detections):
     if len(net.outputs) == 1:
         _, class_id, confidence, xmin, ymin, xmax, ymax = detection
     else:
         class_id = labels[i]
         xmin, ymin, xmax, ymax, confidence = detection

     if confidence > 0.5:
         label = int(labels[class_id]) if args.labels else int(class_id)

         xmin = int(xmin * w)
         ymin = int(ymin * h)
         xmax = int(xmax * w)
         ymax = int(ymax * h)

         log.info(f'Found: label = {label}, confidence = {confidence:.2f}, '
                  f'coords = ({xmin}, {ymin}), ({xmax}, {ymax})')

         # Draw a bounding box on a output image
         cv2.rectangle(output_image, (xmin, ymin), (xmax, ymax), (0, 255, 0), 2)

 cv2.imwrite('out.bmp', output_image)
 log.info('Image out.bmp created!')

 

We debugged both models (your custom-trained model and the Intel OMZ model). Similar to the previous result, the OMZ model produced a detection with a confidence of 0.56, so we can see a bounding box on our image when using that model.

 

However, with your model the confidence value was 0.00, which is why no bounding box was drawn: the condition in object_detection_sample_ssd.py only draws a box when confidence > 0.5.

 

These are the suggestions that you can try:

1) Re-train your model to increase the confidence values.

2) Adjust the confidence threshold in object_detection_sample_ssd.py to suit your requirement, e.g. lower it so that bounding boxes are shown even when the confidence values are low.

 

0 Kudos
macimovic
Beginner
2,298 Views

Hey Zulkifli, 

 

Appreciate your suggestion; however, that doesn't seem to be the problem, since before the OpenVINO optimization the confidence is above the required threshold.

 

Furthermore, if you add a line

for i, detection in enumerate(detections):
    print(detection)
    if len(net.outputs) == 1:
        _, class_id, confidence, xmin, ymin, xmax, ymax = detection

 

before you even start filtering by confidence values, you will see that no boxes are predicted.

 

The minimum confidence value set in the pipeline.config file is almost 0, i.e. 1e-08.

0 Kudos
Munesh_Intel
Moderator
2,177 Views

Hi Milos,

We've escalated this issue to the development team for a potential solution from a code perspective. We shall update you once we obtain their feedback.



Regards,

Munesh


0 Kudos
msolmaz
Employee
2,018 Views

Munesh,

Is there any update on a potential solution? I came across the same problem. I believe the Model Optimizer is optimizing a re-trained model differently from the original model.

 

thanks,

Mehmet

0 Kudos
Zulkifli_Intel
Moderator
2,004 Views

Hello,

 

My apologies for the delay. We realized that the issue most likely comes from efficient_det_support_api_v2.4.json; some fixes are required in the .json file for this model. This issue has been brought to the attention of our developers, and we expect the fix to be available very soon.

 

Sincerely,

Zulkifli

 

0 Kudos
macimovic
Beginner
1,675 Views

Hey Zulkifli,

Any update on when the fix will be available?

 

Thanks,

Milos

0 Kudos
msolmaz
Employee
1,995 Views

I figured out a temporary solution for this problem, with some help. Please change "background_label_id" from "0" to "999" in the converted .xml, and inference should work as expected.

 

 

msolmaz
Employee
1,939 Views

I want to give a little more detail on this problem. If "add_background_class" is set to "false" in your pipeline.config, then according to this link (https://github.com/openvinotoolkit/openvino/blob/master/docs/ops/detection/DetectionOutput_1.md), "background_label_id" should be changed from the default value of "0" to "-1" in the converted .xml.

I confirmed: -1 also works.
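
For anyone who prefers to script this workaround instead of editing the IR by hand, here is a minimal sketch that rewrites the attribute on the DetectionOutput layer using only the Python standard library. The IR path is a placeholder and the snippet is illustrative only, not an official tool:

import xml.etree.ElementTree as ET

xml_path = 'saved_model.xml'                   # placeholder path to the converted IR
tree = ET.parse(xml_path)
for layer in tree.getroot().iter('layer'):
    if layer.get('type') == 'DetectionOutput':
        data = layer.find('data')
        print('background_label_id was:', data.get('background_label_id'))
        data.set('background_label_id', '-1')  # '999' also works, as noted above
tree.write(xml_path)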

 

0 Kudos
macimovic
Beginner
1,918 Views

Hi msolmaz,

 

Thank you so much for looking into the problem we all had.

I will try out the solution and accept this if it works.

0 Kudos
macimovic
Beginner
1,916 Views

Hi msolmaz,

 

Thanks again for your suggestion. However, I still think there is a bug since whether I set 

add_background_class: true

or

add_background_class: false

 

it doesn't make a difference when running the Model Optimizer, since the generated .xml still has background_label_id="0".

 

I also confirmed that manually changing background_label_id to -1, 999, or even 1 (in my case as well as in yours) works as expected when running inference.

0 Kudos
Zulkifli_Intel
Moderator
4,574 Views

Hello Milos Acimovic,

 

Thank you for your patience. We received an update from the developers that the issue has been fixed in the upcoming OpenVINO release (OpenVINO 2022.1), which will be available soon; alternatively, you can pull the fix from the OpenVINO GitHub (master branch).

 

Here are quick guides that you can use.

1.    Once the latest OpenVINO is available, download and install it in your system.

 

2.    Convert the model to IR using this command:

mo --saved_model_dir ~/openvino_models/saved_model --transformations_config front/tf/efficient_det_support_api_v2.4.json --tensorflow_object_detection_api_pipeline_config ~/openvino_model/fruit_export/pipeline.config -o ~output_directory

 

3.    We also made some modifications to the Python script so that it is compatible with the latest OpenVINO library (see attachment).

Here are the changes we made: 

net = IENetwork(model=model_xml, weights=model_bin) --> net = ie.read_network(model_xml, model_bin)

net.inputs --> net.input_info

net.inputs[self.input_blob].shape --> net.input_info[self.input_blob].input_data.shape

 

4.    Run inference using the optimized model and please ensure the input image is 512x512 (since this is the input image dimension that the model was trained on).

 

Hope this information helps.

 

Sincerely,

Zulkifli

 

0 Kudos
Zulkifli_Intel
Moderator
1,634 Views

Hello Milos,


Did you get a chance to try our suggestion of running the model using the OpenVINO GitHub (master branch)? If you have any issues, please reply to us.


Sincerely,

Zulkifli 


0 Kudos
Zulkifli_Intel
Moderator
1,620 Views

Hello Milos,


Thank you for your question. If you need any additional information from Intel, please submit a new question as this thread is no longer being monitored.


Sincerely,

Zulkifli


0 Kudos