Published: November 3, 2020
Key Takeaways
- Discover how you can accelerate seismic interpretations on AWS using the Intel® Distribution of OpenVINO™ toolkit.
- Get started today by using open-sourced, pre-trained models and code samples, and Jupyter notebooks.
Authors:
Flaviano Christian Reyes, Ravi Panchumarthy, Vibhu Bithar, Alexey Khorkin,
Alexey Gruzdev, Louis Desroches, Manas Pathak
Guest Author: Dhruv Vashisth (AWS)
Introduction
Convolutional Neural Networks (CNNs) offer state-of-the-art performance not only for traditional computer vision applications but also for seismic interpretation. Geoscientists can use CNNs for basin-wide, quick-look interpretation of seismic data for fault, salt, or facies identification. This saves a great deal of tedious work, thereby reducing the time to first oil. Machine learning is well suited to basin-wide quick-look interpretation and prospect generation, which cannot be done with the classic tools available: performed manually, only very few interpretations are feasible.
Recent developments in this field show that CNN models trained on synthetic seismic datasets produce acceptable accuracy in identifying faults in real datasets1,2. Such solutions accelerate oil and gas exploration, since geoscientists do not need to train models from scratch on newly acquired seismic datasets to get quick-look interpretation results, even at basin scale. The Intel® Distribution of OpenVINO™ toolkit can help accelerate the inference pipeline for seismic interpretation; technical details on training and inference can be found in a recently published work. In that work, we showed a workflow (Figure 1) that applies the OpenVINO™ toolkit to a pre-trained model to perform faster inference on 2nd Generation Intel® Xeon® Scalable processors (Cascade Lake), available in C5 instances on AWS. Another recent blog shows over 3x improvement compared to GPUs when performing lower-precision (INT8) inference on a seismic fault-detection workload by leveraging the OpenVINO™ toolkit and Intel® Deep Learning Boost (Intel® DL Boost). Intel® DL Boost includes the Vector Neural Network Instructions (VNNI), which enable INT8 deep learning inference. (Refer to the Optimization Notice for more information regarding performance and optimization choices in Intel software products.)
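To give a feel for the idea behind INT8 inference, the sketch below quantizes a float32 tensor to int8 with a symmetric per-tensor scale and dequantizes it back. This is an illustrative toy, not the OpenVINO™ toolkit's actual calibration pipeline; the function names and the example tensor are hypothetical.

```python
import numpy as np

def quantize_int8(x: np.ndarray):
    """Symmetric per-tensor INT8 quantization: x is approximated by scale * q."""
    max_abs = float(np.abs(x).max())
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Map the int8 values back to float32 for comparison."""
    return q.astype(np.float32) * scale

# A toy "weight" tensor, e.g. one filter of a fault-detection CNN.
rng = np.random.default_rng(0)
w = rng.standard_normal((3, 3)).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

# INT8 stores each value in 1 byte instead of 4, and VNNI-style kernels
# operate on these narrow integers, at the cost of a small, bounded error.
print("max abs error:", np.abs(w - w_hat).max())
```

The rounding error per value is at most half a quantization step (scale / 2), which is why a well-calibrated INT8 model loses little accuracy while gaining substantial throughput.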
Figure 1: End-to-end workflow showing deep learning performed on a seismic dataset. The trained model must be in one of the OpenVINO™ toolkit's supported frameworks and formats. The inference in this case is fault detection performed on F3 seismic data.
OpenVINO™ toolkit AMI offering in the AWS Marketplace
To facilitate accelerated seismic interpretation on AWS, Intel Energy and AWS Energy teams worked together to create OpenVINO™ toolkit AMI (Amazon Machine Image) based on Amazon Linux 2 operating system and published it in the AWS marketplace (Figure 2).
Figure 2: OpenVINO™ toolkit AMI offering in the AWS Marketplace. Go to: https://aws.amazon.com/marketplace/pp/B08LZJJZR3/
Sample Jupyter notebooks are also provided to perform inference from a pre-trained model using the OpenVINO™ toolkit. After you launch an instance from the OpenVINO™ toolkit AMI, you can connect to it and use it just like any other server. For information about launching, connecting to, and using instances, see Amazon EC2 instances. For more information on using this AMI, see the Getting Started Guide to Launch AWS EC2 instance with OpenVINO™.
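As an illustration of what such a notebook does, the sketch below loads an OpenVINO™ IR model and runs one inference request on the CPU using the 2020-era Inference Engine Python API (IECore). The model path, input shape, and helper names are hypothetical; the notebooks on the AMI are the authoritative reference.

```python
import numpy as np

def infer_on_cpu(model_xml: str, model_bin: str, batch: np.ndarray) -> np.ndarray:
    """Load an OpenVINO IR model (.xml/.bin pair) and run one inference on CPU."""
    from openvino.inference_engine import IECore  # shipped with the AMI

    ie = IECore()
    net = ie.read_network(model=model_xml, weights=model_bin)
    input_name = next(iter(net.input_info))
    exec_net = ie.load_network(network=net, device_name="CPU")
    result = exec_net.infer(inputs={input_name: batch})
    return next(iter(result.values()))

def to_nchw(seismic_slice: np.ndarray) -> np.ndarray:
    """Reshape a single-channel 2D seismic slice to the NCHW layout
    the Inference Engine expects: (batch=1, channels=1, H, W)."""
    h, w = seismic_slice.shape
    return seismic_slice.reshape(1, 1, h, w).astype(np.float32)

# The layout helper can be exercised without a model on disk:
patch = to_nchw(np.zeros((128, 128)))
print(patch.shape)  # (1, 1, 128, 128)
```

On a C5 instance, `device_name="CPU"` dispatches to kernels tuned for the underlying Intel® Xeon® Scalable processors, so no code change is needed to benefit from the hardware.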
Figure 3: Architecture diagram showing the OpenVINO™ toolkit AMI on EC2 and its samples
The following two sample Jupyter notebooks are provided as references for using the OpenVINO™ toolkit AMI on AWS.
1. InceptionV3 for general computer vision-based applications.
2. Salt identification in seismic data. This notebook uses a 3D CNN-based model3 on data from the F3 Dutch block in the North Sea to identify salt. Salt bodies are important subsurface structures with significant implications for hydrocarbon accumulation and sealing in petroleum reservoirs; if not recognized before drilling, they can lead to several complications when encountered unexpectedly while drilling the well. The ability to quickly launch OpenVINO™-enabled instances in AWS to perform automatic quick-look seismic interpretation from a pre-trained model will help geoscientists reduce time to first oil.
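The 3D CNN in the salt notebook classifies a voxel from a small cube of surrounding seismic amplitudes3. The helper below sketches that patch extraction with NumPy; the 65-voxel cube size follows the cited paper, and the function name, toy volume, and NCDHW layout are assumptions for illustration, not the notebook's actual code.

```python
import numpy as np

def extract_cube(volume: np.ndarray, center: tuple, size: int = 65) -> np.ndarray:
    """Extract a size^3 cube of amplitudes centered on a voxel, as input to a
    3D CNN that classifies the center voxel (salt / not salt)."""
    r = size // 2
    i, j, k = center
    cube = volume[i - r:i + r + 1, j - r:j + r + 1, k - r:k + r + 1]
    if cube.shape != (size, size, size):
        raise ValueError("center is too close to the volume boundary")
    # NCDHW layout: batch=1, channels=1, depth, height, width
    return cube.reshape(1, 1, size, size, size).astype(np.float32)

# Toy volume standing in for the F3 seismic amplitudes.
vol = np.random.default_rng(0).standard_normal((128, 128, 128)).astype(np.float32)
cube = extract_cube(vol, center=(64, 64, 64))
print(cube.shape)  # (1, 1, 65, 65, 65)
```

Classifying every voxel this way turns interpretation into a sliding-window inference problem, which is exactly the kind of throughput-bound workload the OpenVINO™ toolkit accelerates.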
To access other pre-optimized deep learning models, visit the OpenVINO™ toolkit's Open Model Zoo, which provides 200+ free pre-trained models to speed up development and production deployment.
Acknowledgment to AWS Energy team: Team Intel would like to acknowledge the AWS Energy team's support in making this OpenVINO™ toolkit AMI available in the AWS Marketplace.
References:
1. Xinming Wu, Luming Liang, Yunzhi Shi, and Sergey Fomel, (2019), "FaultSeg3D: Using synthetic data sets to train an end-to-end convolutional neural network for 3D seismic fault segmentation," GEOPHYSICS 84: IM35-IM45.
2. York Zheng, Qie Zhang, Anar Yusifov, and Yunzhi Shi, (2019), "Applications of supervised deep learning for seismic interpretation and inversion," The Leading Edge 38: 526–533.
3. Anders U. Waldeland, Are Charles Jensen, Leiv-J. Gelius, and Anne H. Schistad Solberg, (2018), "Convolutional neural networks for automated seismic interpretation," The Leading Edge 37: 529–53
Notices & Disclaimers
Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors.
Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. For more complete information visit www.intel.com/benchmarks.
Performance results are based on testing as of dates shown in configurations and may not reflect all publicly available updates. See backup for configuration details. No product or component can be absolutely secure.
Your costs and results may vary.
Intel technologies may require enabled hardware, software or service activation.
© Intel Corporation. Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries. Other names and brands may be claimed as the property of others.
Optimization Notice
Intel’s compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors. These optimizations include SSE2, SSE3, and SSSE3 instruction sets and other optimizations. Intel does not guarantee the availability, functionality, or effectiveness of any optimization on microprocessors not manufactured by Intel. Microprocessor-dependent optimizations in this product are intended for use with Intel microprocessors. Certain optimizations not specific to Intel microarchitecture are reserved for Intel microprocessors. Please refer to the applicable product User and Reference Guides for more information regarding the specific instruction sets covered by this notice.
Configuration
| Configuration | Config1 | Config2 |
|---|---|---|
| Test by | Intel | Intel |
| Test date | 08/06/2020 | 08/06/2020 |
| Platform | Intel(R) Xeon(R) Gold 6252N CPU @ 2.30GHz | Intel(R) Xeon(R) Gold 5220 CPU @ 2.20GHz |
| GPU | n/a | NVIDIA V100 |
| # Nodes | 1 | 1 |
| # Sockets | 2 | 2 |
| Logical CPUs | 96 | 72 |
| Cores/socket, Threads/socket | 24/48 | 18/36 |
| ucode | 0x5002f01 | 0x5002f01 |
| HT | On | On |
| Turbo | On | On |
| BIOS | 4.1.13, 0x5002f01 | 3.1, 0x5002f01 |
| System DDR Mem Config: slots / cap / run-speed | DDR4: 12 / 16GiB / 2933 MHz | DDR4: 6 / 32GiB / 2666 MHz; DDR4: 8 / 16GiB / 2666 MHz |
| System DCPMM Config: slots / cap / run-speed | n/a | n/a |
| Total Memory/Node (DDR+DCPMM) | 192 GB | 320 GB |
| Total GPU Memory | n/a | 32 GB |
| Storage - application drives | 439.56 GB | 7 TB |
| OS | Ubuntu 18.04.4 LTS | Ubuntu 16.04.6 LTS |
| Kernel | 4.15.0-108-generic | 4.15.0-106-generic |
| Mitigation variants (1, 2, 3, 3a, 4, L1TF; https://github.com/speed47/spectre-meltdown-checker) | Mitigated | Mitigated |