
Intel AI Optimizations Boosting Energy Conservation Efforts


Author: Vishnu Madhu, AI Software Solutions Engineer

We like to think of energy as being available on demand. Flip the switch, the lights come on. Press a few buttons, the microwave warms up your food. Adjust the thermostat, the heat or AC comes on. In the real world, however, we are increasingly finding that this is not always the case.

This all highlights the stark reality that we are living on a planet of finite resources. It takes time, money, water, food, energy, and a host of other resources simply to keep civilization going, and the demands seem to be ever-increasing. Headlines across social media show energy producers begging for decreased usage while energy consumers continually heap on demand.

This imbalance is what drove Intel to join the Green Software Foundation (GSF) and work at the forefront to help reduce energy consumption. Along with GSF’s other member companies, we are researching and implementing changes every day within the software development community — offering tools and building processes that allow companies to produce more while consuming less.

Sustainable Compute

One of the most promising initiatives that we are undertaking is around more sustainable compute. Computing power requires a lot of energy (hence the fans on your PC or laptop) and the energy requirements increase dramatically when processing large amounts of data. In modern computing, one of the larger uses of compute is Artificial Intelligence (AI) and AI’s requisite Machine Learning (ML) deployments. We anticipate that with a few incremental innovations to your ML deployment, you could see a reduction in energy usage, which typically leads to cost savings.

The Problem: Inefficient Machine Learning Deployments

Artificial intelligence is the umbrella term that we use to talk about systems that are meant to replicate human intelligence. AI is increasingly everywhere and it is in everything. Chances are if it has a battery and/or plugs into the wall, there is an option to have AI in it. You interact with AI when you use Google search, go shopping on Amazon, and stream content, which often utilizes AI upscaling. AI is in constant use in everyday devices such as our cellphones and helps regulate other objects that we encounter such as crosswalk signs and traffic lights.

To make this ubiquitous AI effective, it needs to be trained. This is where machine learning (ML) comes in: ML is one of the most successful techniques for implementing AI systems.

ML work is organized into pipelines: multi-step processes built to train and then deploy a model. Each pipeline is tuned to achieve a desired task, like enabling the AI to distinguish trees from bees. How well the system carries out that task is often measured against "Service Level Agreements" (SLAs), such as "time-to-train" for the ML pipeline, or model throughput in deployment.
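These SLAs are measurable with nothing more than a timer. A minimal sketch, with stand-in `train` and `predict` functions in place of a real pipeline (both are hypothetical, purely for illustration):

```python
import time

def train(samples):
    """Stand-in for a real training loop (hypothetical)."""
    return sum(samples)  # pretend the "model" is just a running sum

def predict(model, sample):
    """Stand-in for real inference (hypothetical)."""
    return model + sample

samples = list(range(100_000))

# SLA 1: time-to-train -- wall-clock seconds spent in the training phase.
t0 = time.perf_counter()
model = train(samples)
time_to_train = time.perf_counter() - t0

# SLA 2: throughput -- predictions per second in deployment.
t0 = time.perf_counter()
for s in samples:
    predict(model, s)
throughput = len(samples) / (time.perf_counter() - t0)

print(f"time-to-train: {time_to_train:.4f}s, throughput: {throughput:,.0f}/s")
```

The same two numbers, tracked before and after an optimization, tell you whether the change kept you inside your SLAs.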

The energy consumption used to achieve SLAs is already significant. Tuning and adding processing power adds significantly to energy consumption. As with any increased energy usage, the carbon footprint increases as well. The result is that more and more SLA-meeting AIs are creating larger carbon footprints.

A new consideration needs to be added: AI and machine learning also need to be evaluated on energy usage.

The Solution: Enhancing the ML Pipeline

Evaluating AI deployments and machine learning on overall energy usage, instead of just processing power, is a new idea. It is so new that there is currently no standard metric. Each section of the ML pipeline consumes a significant amount of energy, and each section should be evaluated and enhanced.
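In the absence of a standard metric, one simple figure you can compute today is energy per processed sample: average power draw multiplied by run time, divided by the number of samples. A sketch (the power, duration, and sample numbers below are made up for illustration; where the wattage comes from, such as RAPL counters or a wall-power meter, is up to you):

```python
def joules_per_sample(avg_power_watts, duration_s, n_samples):
    """Energy cost per processed sample, in joules.

    avg_power_watts: average power draw during the run, from whatever
    measurement source you have (hardware counters, a power meter, ...).
    """
    if n_samples <= 0:
        raise ValueError("n_samples must be positive")
    return avg_power_watts * duration_s / n_samples

# Hypothetical run: 180 W average draw, 600 s, 1.2 million inferences.
print(joules_per_sample(180.0, 600.0, 1_200_000))  # 0.09 J per inference
```

A per-sample energy number like this can be compared across pipeline stages, model choices, and hardware generations in the same way accuracy or throughput is compared today.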

Intel has proposed enhancing the ML pipeline through what we call "Energy Efficient Optimizations", or EEOs. Rather than just adding more hardware to the pipeline, EEOs include things like optimizing code to leverage the latest CPU features (e.g., newer instruction sets for vectorization), improving CPU cache utilization, or simply using more efficient models wherever possible.

For example, to identify cats versus dogs we could deploy a model with a billion parameters, or one with only a million parameters that performs the same task. Currently, EEOs are rarely considered during model selection; most ML pipelines simply deploy the latest and greatest model available.
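Making EEOs part of model selection can be as simple as filtering candidates by the accuracy SLA and then preferring the smallest model that passes, with parameter count as a rough proxy for compute and therefore energy cost. A sketch with made-up candidate models:

```python
# Hypothetical candidate models for the same task (cats vs. dogs),
# with invented accuracy and parameter counts for illustration only.
candidates = [
    {"name": "giant-net", "params": 1_000_000_000, "accuracy": 0.97},
    {"name": "mid-net",   "params": 50_000_000,    "accuracy": 0.96},
    {"name": "tiny-net",  "params": 1_000_000,     "accuracy": 0.95},
]

ACCURACY_SLA = 0.95  # the quality bar the deployment must meet

# EEO-style selection: among models that meet the SLA, prefer the
# smallest, rather than defaulting to the largest model available.
eligible = [m for m in candidates if m["accuracy"] >= ACCURACY_SLA]
choice = min(eligible, key=lambda m: m["params"])
print(choice["name"])  # tiny-net
```

Here all three models meet the SLA, so the selection keeps the millions-of-parameters model instead of the billion-parameter one, at a fraction of the compute.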

We see potential EEOs at every step of the ML pipeline.

EEOs are most often incremental innovations: small improvements that do not require a complete overhaul of your pipeline but do add up over time. Many of these incremental innovations are fairly inexpensive and/or quick to implement. It is similar to recycling your water bottle or using a reusable one: actions that are simple and quick, but add up to a significant change over time.

A simple EEO that we would like to highlight is switching your default libraries to libraries that are optimized for machine learning. The Intel oneAPI AI Analytics Toolkit provides optimized libraries for a variety of machine learning and deep learning needs. Similarly, Intel's OpenVINO toolkit helps optimize deep learning models and deploy them more efficiently on Intel hardware. Using these tools and libraries makes an application more responsive while reducing the power required to meet predefined SLAs.
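One concrete version of this library swap is the Intel Extension for Scikit-learn, which ships with the oneAPI AI Analytics Toolkit: calling `patch_sklearn()` reroutes stock scikit-learn estimators to Intel-optimized implementations with no other code changes. A minimal sketch, guarded so it also runs where the extension is not installed:

```python
import importlib.util

# Check whether the Intel Extension for Scikit-learn is available
# (package "scikit-learn-intelex", importable as "sklearnex").
if importlib.util.find_spec("sklearnex") is not None:
    from sklearnex import patch_sklearn
    patch_sklearn()      # must run BEFORE importing sklearn estimators
    accelerated = True
else:
    accelerated = False  # stock scikit-learn; the rest of the code is unchanged

print("Intel-accelerated scikit-learn:", accelerated)
```

Because the patch is applied before the estimators are imported, the rest of the training script stays exactly as written, which is what makes this EEO cheap to adopt.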

Another easy EEO involves replacing older Intel Xeon processors with newer Xeons that have features that provide AI/ML acceleration, thus improving compute efficiency.
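Whether a given Xeon actually exposes such acceleration features (for example AVX-512 VNNI, introduced with 2nd Gen Xeon Scalable, or AMX on 4th Gen Xeon Scalable) can be checked from software. A rough, Linux-only sketch reading `/proc/cpuinfo`; the `x86_cpu_flags` helper is our own convenience function, not an Intel API:

```python
def x86_cpu_flags():
    """Return the CPU feature-flag set from /proc/cpuinfo (Linux only).

    Returns an empty set on non-Linux systems or if the file is
    unreadable; treat that as "unknown", not "unsupported".
    """
    try:
        with open("/proc/cpuinfo") as f:
            for line in f:
                if line.startswith("flags"):
                    return set(line.split(":", 1)[1].split())
    except OSError:
        pass
    return set()

flags = x86_cpu_flags()
# avx512_vnni: int8 dot-product instructions; amx_tile / amx_int8:
# Advanced Matrix Extensions for matrix math.
for feature in ("avx512_vnni", "amx_tile", "amx_int8"):
    print(feature, "yes" if feature in flags else "no/unknown")
```

Knowing which features are present tells you whether a quantized (e.g., int8) model can take the accelerated path on your current hardware, or whether a processor refresh would unlock it.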

Small changes can make a big impact.

The Real-World Impact

Intel has partnered with organizations such as the Green Software Foundation to promote this new mindset of system optimization. EEOs are incremental innovations that reduce energy usage outright rather than merely neutralizing (offsetting) it.

We believe that simple, small changes like these can lead to solutions that help address the climate crisis we are facing.

Call-to-Action

Our ultimate goal in partnering with organizations such as the Green Software Foundation is to save energy rather than merely offset it. Taking these measurements and applying them at scale can give us a sense of what the potential energy savings could look like, and those savings contribute to the overall environmental impact.

You can apply Energy Efficient Optimizations to your ML pipeline today. We anticipate that even with small changes, we can begin to see meaningful reductions in energy usage, leading to cost savings. We want everyone to see that this "comprehensive sustainability" concerns everyone, not just the corporate level.

Architects, product owners, and product managers all need to be involved. Working towards sustainability can only be comprehensive if everyone in the ecosystem is aware of the environmental and energy impact. We anticipate that AI optimizations can reduce energy consumption, even on existing systems.

Come see us at Intel Innovation this week to hear more about what we are doing in AI, sustainability, and other fields!

With small but focused changes to ML deployment, we believe we can help make a difference. Environmental and energy concerns are becoming foundational crises in our time. Working towards sustainable computing can help. The world’s resources are finite, and changes are needed to make them last. Intel, with our partners, can help offer ecosystem leadership and help enable real change in this crisis.

Notices & Disclaimers:

Performance varies by use, configuration and other factors. Learn more at www.Intel.com/PerformanceIndex

Performance results are based on testing as of dates shown in configurations and may not reflect all publicly available updates. See backup for configuration details. No product or component can be absolutely secure.

Intel technologies may require enabled hardware, software or service activation.

Your costs and results may vary.

Intel does not control or audit third-party data. You should consult other sources to evaluate accuracy.

Intel is committed to the continued development of more sustainable products, processes, and supply chain as we strive to prioritize greenhouse gas reduction and improve our global environmental impact. Where applicable, environmental attributes of a product family or specific SKU will be stated with specificity. Refer to the 2022 Corporate Responsibility Report for further information.

© Intel Corporation. Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries. Other names and brands may be claimed as the property of others.

About the Author
Vishnu Madhu is an AI Software Solutions Engineer at Intel. He is an EEE graduate with more than a decade of technology experience. In recent years, his work has involved building ML systems for use cases spanning computer vision, NLP, recommender systems, and more. In his current role, he works to enable Intel's partners to efficiently utilize their Intel hardware for deploying AI/ML applications.