Post-process output with TF commands

nikos1

Valued Contributor I


09-15-2018 08:40 PM

Hello,

Is there any reference on how to post-process network output with TensorFlow commands?

To be more specific, I already have the tf-pose output 'TfPoseEstimator/Openpose/concat_stage7:0' using OpenVINO R3, and would like to add tf.image.resize, tf.nn.pool, etc. as shown below:

```python
import tensorflow as tf  # TF 1.x graph-mode API
# Smoother is the Gaussian-smoothing layer from the tf-pose-estimation project

self.tensor_image = self.graph.get_tensor_by_name('TfPoseEstimator/image:0')
self.tensor_output = self.graph.get_tensor_by_name('TfPoseEstimator/Openpose/concat_stage7:0')
# channels 0..18 are part-confidence heatmaps, 19.. are part-affinity fields
self.tensor_heatMat = self.tensor_output[:, :, :, :19]
self.tensor_pafMat = self.tensor_output[:, :, :, 19:]
self.upsample_size = tf.placeholder(dtype=tf.int32, shape=(2,), name='upsample_size')
self.tensor_heatMat_up = tf.image.resize_area(self.tensor_output[:, :, :, :19], self.upsample_size,
                                              align_corners=False, name='upsample_heatmat')
self.tensor_pafMat_up = tf.image.resize_area(self.tensor_output[:, :, :, 19:], self.upsample_size,
                                             align_corners=False, name='upsample_pafmat')
smoother = Smoother({'data': self.tensor_heatMat_up}, 25, 3.0)
gaussian_heatMat = smoother.get_output()
# keep only pixels that equal their local 3x3 maximum (peak detection)
max_pooled_in_tensor = tf.nn.pool(gaussian_heatMat, window_shape=(3, 3),
                                  pooling_type='MAX', padding='SAME')
self.tensor_peaks = tf.where(tf.equal(gaussian_heatMat, max_pooled_in_tensor),
                             gaussian_heatMat, tf.zeros_like(gaussian_heatMat))
self.heatMat = self.pafMat = None
```
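For quick experimentation, the channel split and upsampling can also be sketched outside the graph in plain NumPy. The shapes below (a 46×46 feature map with 19 heatmap + 19 PAF channels) are illustrative assumptions, and the nearest-neighbour upsampling is a crude stand-in for tf.image.resize_area, which averages over areas rather than repeating pixels:

```python
import numpy as np

# Stand-in for a real inference result shaped like
# 'TfPoseEstimator/Openpose/concat_stage7:0': [N, H, W, 38]
output = np.random.rand(1, 46, 46, 38).astype(np.float32)

heat_mat = output[:, :, :, :19]   # part-confidence heatmaps
paf_mat = output[:, :, :, 19:]    # part-affinity fields

def upsample(x, factor):
    """Nearest-neighbour upsampling along H and W; repeats each pixel
    factor times per axis instead of area-averaging."""
    return x.repeat(factor, axis=1).repeat(factor, axis=2)

heat_up = upsample(heat_mat, 8)
paf_up = upsample(paf_mat, 8)
print(heat_up.shape, paf_up.shape)  # (1, 368, 368, 19) (1, 368, 368, 19)
```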


Accepted Solutions

Monique_J_Intel

Employee


09-18-2018 10:02 AM

Hi Nikos,

Yes. One way to do it is to treat the post-processing section as a custom layer in your graph that you offload to native TensorFlow. Note that once you do this, the graph can only run on the CPU, so hopefully that is the target you'd like to deploy your application on. The instructions reside in the in-package documentation at /opt/intel/computer_vision_sdk/deployment_tools/documentation/docs/CustomLayersOffloadSubgraph.html. Let me know if you run into any issues following the process, or if you have any questions or concerns.

Kind Regards,

Monique Jones
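Since the offloaded subgraph runs on the CPU anyway, another option is to skip the TensorFlow offload for the final step and reproduce the peak extraction (the tf.nn.pool / tf.where pair from the snippet above) directly in NumPy. A minimal sketch; the window size and test heatmap here are illustrative, not from the thread:

```python
import numpy as np

def find_peaks(heatmap, window=3):
    """Keep only pixels equal to the max of their window x window
    neighbourhood, zeroing everything else: the NumPy analogue of
    tf.nn.pool(..., 'MAX', padding='SAME') followed by tf.where/tf.equal."""
    h, w = heatmap.shape
    r = window // 2
    # pad with -inf so border pixels compare only against real neighbours
    padded = np.pad(heatmap, r, mode="constant", constant_values=-np.inf)
    pooled = np.empty_like(heatmap)
    for i in range(h):
        for j in range(w):
            pooled[i, j] = padded[i:i + window, j:j + window].max()
    return np.where(heatmap == pooled, heatmap, 0.0)

# tiny test heatmap with a single peak at (2, 2)
hm = np.zeros((5, 5), dtype=np.float32)
hm[2, 2] = 1.0
peaks = find_peaks(hm)
print(np.argwhere(peaks > 0))  # [[2 2]]
```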
