Hi all,
I'm getting started with OpenVINO and inference engines, and I wanted to ask whether anyone knows how much memory the Neural Compute Sticks come with. I have one of the first generation and I'm thinking about buying the current generation. The reason I'm asking is that I work as a researcher in 3D medical image segmentation, and I would like to know whether I can run my U-Net models on these sticks. Because of the skip connections, U-Nets typically require a lot of RAM on my GPUs during inference, so I'm afraid this might be a bottleneck here.
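To give a sense of the scale involved, here is a rough back-of-the-envelope estimate of why the skip connections hurt. The patch size, channel count, and dtype below are illustrative assumptions, not figures from my actual model:

```python
# Rough activation-memory estimate for one encoder level of a 3D U-Net.
# A skip connection keeps this feature map alive until the matching
# decoder level consumes it, so it occupies memory for most of the pass.
def activation_mib(depth, height, width, channels, bytes_per_value=4):
    """Memory (MiB) needed to hold one float32 feature map."""
    return depth * height * width * channels * bytes_per_value / (1024 ** 2)

# Example: a 128^3 input patch with 32 channels at the first level.
print(f"{activation_mib(128, 128, 128, 32):.0f} MiB")  # -> 256 MiB
```

Even a single early-level feature map at these (assumed) sizes is far larger than a typical 2D one, which is why 3D U-Nets are so memory-hungry at inference time.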
I'd be very happy about any input!
Many thanks,
Thomas
Hello ThomasBudd,
Thank you for reaching out to us,
The NCS2 has about 500 MB of internal memory, of which roughly 100 MB is set aside for intermediate processing, i.e. the memory allocated for processing the current layer. For example, a model that is 181 MB exceeds this intermediate-processing capacity.
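As a quick first sanity check against that ~100 MB figure, you can compare the size of your IR weights file (`.bin`) to the budget. This is only a sketch: the budget constant reflects the rough number quoted above, the file path is a placeholder, and actual per-layer memory use also depends on activation sizes, not just weights:

```python
import os

# Rough intermediate-processing budget quoted for the NCS2 (assumption,
# not an official specification).
NCS2_INTERMEDIATE_BUDGET_MB = 100

def model_size_check(bin_path, budget_mb=NCS2_INTERMEDIATE_BUDGET_MB):
    """Return (size_mb, within_budget) for an OpenVINO IR weights file."""
    size_mb = os.path.getsize(bin_path) / (1024 * 1024)
    return size_mb, size_mb <= budget_mb

# Usage (path is a placeholder for your converted model):
# size_mb, ok = model_size_check("unet3d.bin")
```

A model whose weights alone already exceed the budget is unlikely to fit; one that passes this check may still fail at runtime because of large intermediate activations.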
The Intel Open Model Zoo has several models based on the U-Net architecture. Among them, the Brain Tumor Segmentation (BraTS) model uses this architecture to segment brain tumors from raw MRI scans. The models mentioned are listed here:
In addition, we also have the Deep Learning Medical Decathlon Demos for Python, which contain 2D and 3D U-Net TensorFlow scripts for training models on the Medical Decathlon dataset. If you are looking for an implementation of U-Net in TensorFlow, the UNet-in-TensorFlow repository might interest you.
Sincerely,
Zulkifli
@ThomasBudd we also provide tables that list the supported devices for each Open Model Zoo model (based on the results of accuracy validation): this one for Intel pre-trained models and this one for public pre-trained models.
Regards,
Vladimir
Many thanks for both very insightful answers!
100 MB is indeed restrictive for 3D U-Nets.
I was wondering whether there are any benchmarks comparing the NCS2 with some Intel CPUs; do you have anything like that?
The reason I'm interested in this product is that I want to build an inference machine our radiologists can use to run my 3D segmentation model. I was thinking of something like a Raspberry Pi plus the NCS. Does Intel have any other solutions that would allow building such inference machines?
Many thanks!
Thomas
Hello ThomasBudd,
You can use the Benchmark Python Tool to estimate deep learning inference performance on supported devices such as CPU, GPU, and VPU.
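A typical invocation looks like the following. The model path is a placeholder for your converted IR file; `-d` selects the target device (`MYRIAD` is the NCS2) and `-t` sets the measurement duration in seconds:

```shell
# Benchmark the same IR model on the CPU and on the NCS2 for comparison.
benchmark_app -m model.xml -d CPU -t 30
benchmark_app -m model.xml -d MYRIAD -t 30
```

Running the same model on both devices gives you a direct throughput and latency comparison for your own network rather than relying on published benchmarks.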
Sincerely,
Zulkifli
Hello ThomasBudd,
This thread will no longer be monitored since we have provided a solution. If you need any additional information from Intel, please submit a new question.
Sincerely,
Zulkifli