
How AI is Affecting Scientists and Driving New Methods of Research

Rick_Johnson, Employee

Posted on behalf of Vikram Saletore, PhD, Principal Engineer, Super Compute Group at Intel Corporation

We are witnessing a convergence of Artificial Intelligence (AI), High Performance Computing (HPC), and High-Performance Data Analytics (HPDA), driven by the availability of large amounts of data and by the rapid development of machine learning and deep learning frameworks and models that run on HPC infrastructure. This convergence has begun to reshape the landscape of scientific computing, enabling scientists to find solutions in ways that were not possible before.

Vikram Saletore is a Principal Engineer with the Intel Super Compute Group.

We at Intel believe that artificial neural networks (ANNs) can act as an accelerator across the entire modeling and simulation pipeline. My work as part of our end user enablement team is focused on connecting the many ongoing advances in HPC and AI for science with end user applications and research. Ours is a multi-pronged approach because scientists are deservedly skeptical of AI in modeling and simulation. To be useful, AI technology must deliver demonstrably correct results, preventing the introduction of non-physical artifacts into a user’s simulation, and provide benefits such as faster inference performance, portability, and ease of use.


Practical Solutions Abound

Our customers have demonstrated that practical solutions abound, constantly finding new and better uses for AI technology in HPC and industry. Any short list understates the breadth of advancements, which include solutions that reduce manufacturing defect density; assist in retail; use digital-twin design to create robust, quieter, more fuel-efficient jet engines; deploy AI on the manufacturing floor; help enable smart cities; analyze traffic flows; augment automotive crash simulations to save lives; perform high-content drug discovery; assist with aircraft maintenance and repair; and advance fields such as computational fluid dynamics and high-energy particle detector design. Numerous success stories have resulted in production AI inference models deployed in datacenters around the world to solve problems in agriculture, health, and many other areas.

In all cases, I believe there is a need for “Explainable AI” to ensure that surrogate AI models of physical and biological systems match existing simulations and ground-truth data.

Explainable AI incorporates a human expert in the loop who can interpret, explain, and verify the AI-based insights. When a CT-scan inference identifies a possible tumor, for example, explainable AI incorporates a radiologist whose expert interpretation confirms that the identified region is indeed a tumor. This simple example highlights how research efforts and the convergence of AI/HPC/HPDA are changing our lives.
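As a deliberately simplified illustration, the sketch below shows one common explainability building block: input-gradient saliency, which highlights the pixels that most influenced a classifier’s decision so that a human expert can review them. The model, tensor shapes, and data here are illustrative assumptions, not any specific radiology pipeline.

```python
# Minimal sketch: input-gradient saliency for a classifier, assuming a
# hypothetical stand-in model rather than a real tumor detector.
import torch
import torch.nn as nn

model = nn.Sequential(                      # placeholder for a trained classifier
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 2),
)
model.eval()

scan = torch.randn(1, 1, 64, 64, requires_grad=True)  # placeholder CT slice
score = model(scan)[0, 1]                   # logit for the hypothetical "tumor" class
score.backward()                            # gradient of that score w.r.t. every pixel

saliency = scan.grad.abs().squeeze()        # large values = pixels driving the decision
print(saliency.flatten().topk(5).values)    # candidate regions for expert review
```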

The Impact of AI on Scientific Research

AI technology will affect nearly every aspect of future research. Success stories and ready access to advanced technology have raised awareness among scientists of how AI, whether augmenting or replacing parts of a computation, can expand what computer models can do. This promotes new thinking and the freedom to think big as scientists pursue more accurate models and explainable AI. Scientists are realizing they can do research that was not possible before.


Data scientists are starting to incorporate physical constraints into AI models to create what are termed Physics-Informed Neural Networks (PINNs). These constraints speed the creation, training, and deployment of neural networks. Remember that during training the ANN is optimized to produce the best model for a given training set, where the “best” model delivers results that most accurately reflect the physics. Adding information about physical reality gives PINNs the ability to create models that are more accurate, predict better, and train faster (reducing time-to-solution).
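To make the idea concrete, here is a minimal PINN sketch, assuming a toy ordinary differential equation du/dx = -u with u(0) = 1 (whose exact solution is e^-x). The network is trained against the residual of the physical law rather than labeled data; the architecture and hyperparameters are illustrative choices, not a recipe from any production code.

```python
# Minimal PINN sketch: the loss penalizes the residual of the governing
# equation du/dx = -u plus the boundary condition u(0) = 1.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    x = torch.rand(64, 1, requires_grad=True)          # collocation points in [0, 1]
    u = net(x)
    du_dx = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    physics_loss = ((du_dx + u) ** 2).mean()           # residual of du/dx = -u
    boundary_loss = (net(torch.zeros(1, 1)) - 1.0).pow(2).mean()
    loss = physics_loss + boundary_loss                # no simulation data required
    opt.zero_grad()
    loss.backward()
    opt.step()

print(net(torch.tensor([[1.0]])).item())               # should approach exp(-1) ≈ 0.368
```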

PINNs reflect one of many new AI approaches being used by scientists to address real-world problems. Weather forecasting pipelines are a concrete example of how these approaches can affect every aspect of the HPC workflow, from input data preprocessing to assimilation, prediction, post-processing, and evaluation. Society benefits as these new approaches yield faster, more accurate weather forecasts. [i]

New AI approaches mean that scientists can do more with the data they have. Meanwhile, scientists around the world will soon have access to the extraordinary performance of the latest Intel AI accelerators. Access to new technology gives scientists everywhere the ability to address larger data sets and accelerate time-to-discovery.


The Key Challenge is Validation

While technology that delivers a faster time-to-solution is essential, the ultimate challenge with any data-derived model is validation. This means that any AI-based effort needs to answer the question, “How do we know the neural network is doing what we think it is doing?”


Answering this question requires deep and careful thinking about data science and the problem being addressed. For many organizations, AI technology and expertise are very new. The end user enablement team at Intel is working to help organizations understand their needs as they proceed along the technology adoption curve so they can accelerate their time to insight. This includes choosing the right technical approaches and building blocks.
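A first, minimal validation step is simply to quantify how far a surrogate strays from trusted reference runs. The sketch below does this with a relative L2 error against held-out simulation output; the file names and the 1% acceptance threshold are purely illustrative assumptions.

```python
# Minimal sketch: validate a surrogate model against held-out simulation data.
import numpy as np

def relative_l2_error(prediction: np.ndarray, truth: np.ndarray) -> float:
    """||prediction - truth|| / ||truth||, a common surrogate-quality metric."""
    return float(np.linalg.norm(prediction - truth) / np.linalg.norm(truth))

truth = np.load("heldout_simulation.npy")       # hypothetical reference run
prediction = np.load("surrogate_output.npy")    # hypothetical surrogate output

err = relative_l2_error(prediction, truth)
# The acceptance threshold is problem-specific; 1% here is only an illustration.
print(f"relative L2 error: {err:.4f}", "PASS" if err < 0.01 else "FAIL")
```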


Igniting the Agile Use of AI and HPC Resources

There is a convergence between HPC and AI workloads, motivated in large part by their large computational requirements. This explains why many AI end users choose to run on existing cloud and datacenter infrastructure designed to support HPC workloads. Both HPC and AI end users need to scale their workloads up and out.

There are differences, though. We at Intel have listened carefully to our AI customers and the HPC community. Our engineering teams are working wonders to deliver customer-requested solutions throughout the whole product stack. This can be seen in new Instruction Set Architectures (ISAs) that add specialized AI instructions, alongside devices that incorporate high bandwidth, low latency, and high-capacity memory and storage systems. Many of these advanced Intel technology devices will be in the hands of customers this year.

Our preproduction units are delivering some amazing performance. We cannot wait to get these products into the hands of customers! Take a look at Architecture Day to get a sense of why we are so excited. Examples include:

  • XPUs [ii] provide an AI-enabled general-purpose hardware platform that incorporates high bandwidth memory. The “X” in “XPU” stands for any compute architecture that best fits the need of your application.[iii]
  • The Xe-HPC accelerators (codename Ponte Vecchio) provide massive parallelism coupled with high bandwidth memory and interconnect – all of which are important to deliver fast time to solution in a distributed HPC environment.
  • General-purpose next-generation Intel® Xeon® Scalable processors (codename Sapphire Rapids), which are architected with new AI instructions, including the Advanced Matrix Extensions (AMX) and Tile Matrix Multiply (TMUL) ISA extensions; a minimal framework-level sketch of exercising these through bfloat16 follows this list. Some of these Intel processors also incorporate High Bandwidth Memory (HBM).
  • Intel® Optane™ persistent memory with Distributed Asynchronous Object Storage (DAOS) has revolutionized storage performance to address data handling issues. [iv]
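For a rough sense of how an application might tap matrix instructions like AMX from a high-level framework, the sketch below uses PyTorch’s CPU autocast to run a matrix-heavy layer in bfloat16, which the underlying oneDNN library can map to AMX tiles on processors that support them. The model and shapes are illustrative assumptions.

```python
# Minimal sketch: run eligible CPU ops in bfloat16 via autocast; on AMX-capable
# processors the backend library can dispatch these matmuls to tile instructions.
import torch

model = torch.nn.Linear(1024, 1024)
x = torch.randn(32, 1024)

with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    y = model(x)          # matrix multiply executed in bfloat16
print(y.dtype)            # torch.bfloat16
```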

A Software Strategy of Openness, Performance, and the Ability to Run Applications at Scale

Software enables portable heterogeneous computing and is the gateway to yet-to-be-envisioned hardware innovations. Using the right software gives end users freedom in the cloud and enables use of the hardware of their choice. We anticipate that the rate of change in HPC and AI technology will continue. Keeping up requires that scientists and application developers choose software that can support them as computing platforms evolve throughout the exascale supercomputing era and beyond.


The oneAPI initiative provides the software agility that scientists need to unlock these grand new vistas of HPC and AI performance. The oneAPI software ecosystem delivers both performance and portability to support the latest technology for a multitude of tasks. The Intel® oneAPI compilers and libraries free applications from hardware lock-in so scientists and organizations can quickly pivot to run on the fastest and most cost-effective hardware platforms, whether CPU-based or hardware-accelerated.
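One small, concrete taste of this at the Python level: Intel® Extension for Scikit-learn, built on the oneAPI Data Analytics Library (oneDAL), swaps accelerated kernels in behind unchanged scikit-learn code. The dataset and estimator below are illustrative choices, not a benchmark.

```python
# Minimal sketch: unchanged scikit-learn code, oneDAL-accelerated backend.
from sklearnex import patch_sklearn
patch_sklearn()                        # re-route supported estimators to oneDAL

import numpy as np
from sklearn.cluster import KMeans     # import after patching

X = np.random.rand(100_000, 16).astype(np.float32)    # synthetic data
labels = KMeans(n_clusters=8, n_init=3).fit_predict(X)
print(labels[:10])
```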

My group teaches oneAPI and how its software ecosystem addresses customer needs to avoid hardware lock-in. We are delivering on customer expectations for extraordinary out-of-the-box performance while also providing the Intel oneAPI tools and expertise to tune and increase performance over time. Extras include libraries that give scientists the ability to photorealistically visualize data from big workloads – even those running at exascale!

For More Information

See the resources below for more information on Intel’s HPC and AI technology.

 

Intel technologies may require enabled hardware, software or service activation.

© Intel Corporation.  Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries.  Other names and brands may be claimed as the property of others.   ​

 

[i] https://royalsocietypublishing.org/doi/pdf/10.1098/rsta.2020.0097

[ii] https://download.intel.com/newsroom/2021/client-computing/intel-architecture-day-2021-presentation.pdf

[iii] https://www.intel.com/content/www/us/en/architecture-and-technology/xpu.html

[iv] https://www.intel.com/content/www/us/en/high-performance-computing/daos-high-performance-storage-brief.html
