Not sure this is the right group, so apologies in advance.
I've been trying to build the Intel MLPerf container from https://github.com/mlcommons/inference_results_v4.0, but am running into some problems. I can work around most of them, except for the oneDNN compile issue; I have no idea how to fix those compile errors right now.
- Intel's original conda channel has disappeared. I modified the Dockerfile to point at the replacement channel and take mkl and openmp from there:
RUN /opt/conda/bin/conda config --add channels https://software.repos.intel.com/python/conda
…
RUN /opt/conda/bin/conda install -y -c https://software.repos.intel.com/python/conda mkl==2023.1.0 \
mkl-include==2023.1.0 \
intel-openmp==2023.1.0
- The LLVM build failed. This was caused by CONDA_PREFIX not being defined prior to the LLVM build; I added ENV CONDA_PREFIX "/opt/conda" just before ARG PYTORCH_VERSION=v1.12.0.
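For reference, the CONDA_PREFIX workaround amounts to the following Dockerfile fragment (the ARG line is quoted from the post; the comment placement is my assumption about the stock Dockerfile layout):

```dockerfile
# Added line: ensure CONDA_PREFIX is defined before the LLVM build step.
ENV CONDA_PREFIX "/opt/conda"
# Pre-existing line from the stock Dockerfile (per the post):
ARG PYTORCH_VERSION=v1.12.0
```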
The final problem is:
- The oneDNN build fails. Specifically, the code under … / code/bert-99/pytorch-cpu/mlperf_plugins/csrc does not compile; it looks like the AVX-512 types are not known at compile time. See the attached file.
NOTE: I am building the container under WSL.
Tags: OneDNN Build
Hi,
Is there any issue with trying the Docker images by following the document? Feel free to send an email to the address mentioned in the doc for further support.
Hi Liam_murphy,
Which Windows version and hardware are you working on?
We haven't tried building mlperf from WSL, but if possible, could you try the mlperf Docker image directly? You can follow the guide: https://www.intel.com/content/www/us/en/developer/articles/guide/get-started-mlperf-intel-optimized-docker-images.html
The image docker pull intel/intel-optimized-pytorch:mlperf-inference-4.1-bert is ready in https://hub.docker.com/r/intel/intel-optimized-pytorch/tags
Please feel free to let us know the result.
Thanks,
Ying