Key Takeaways:
- Our pathfinding work is allowing Intel to deliver better solutions, faster, for our employees and customers.
- Along with game-changing successes, I’ve also seen instances where AI innovation projects fall short. Many of these failures stem from five common pitfalls.
- For all of AI’s challenges, the potential rewards are enormous, and the risks of stagnating while the rest of the world moves forward are significant.
I often say I have the best job at Intel. I lead a group that tackles high-risk, high-reward problems with novel machine learning (ML) algorithms—often without knowing whether a solution is even possible—in order to extract tangible benefit for Intel and our customers. My group has used ML to enable insights that are helping to improve yields in Intel’s fab processes and make our business forecasts more accurate. We’ve also helped business teams decide whether to continue pursuing a strategic direction.
Our pathfinding work is allowing Intel to deliver better solutions, faster, for our employees and customers. But along with game-changing successes, I’ve also seen instances, both within Intel and externally, where AI innovation projects fall short. Many of these failures stem from what I’ve come to recognize as five common pitfalls.
1. Lack of a clear business problem
Too many AI projects start with a mindset of, “Here’s a lot of data; tell me all the important insights.” That’s like wading into quicksand and expecting to emerge clean, with treasures in hand. Instead, the discovery phase of an innovation project should define a clear business problem that the project will attempt to solve. This phase should also determine who the solution will benefit and how you’ll measure its benefits. Otherwise, how will you know you’ve solved the problem?
2. Inadequate set of stakeholders
Getting the right people to the table is crucial for any transformative initiative. For AI innovation, the most effective programs bring together key stakeholders from the discovery phase and involve them as appropriate throughout the project. They have champions at the senior level, and they work closely with executives, decision makers, users, influencers, approvers, and solution beneficiaries, as well as data providers, data interpreters, and other technical collaborators. When stakeholders help define a solution’s use, intended benefits, and approach, the solution is likely to be more relevant and robust, and stakeholders are more likely to buy into it and become active champions for it.
3. Incorrect technical assumptions
Incorrect technical assumptions can make things harder than they need to be. For example, teams may decide early on that there’s only one way to solve a problem, or that a given algorithm or technique is the perfect one. Instead of choosing the simplest method that solves the problem and fulfills stakeholder requirements such as explainability, these teams commit to complex AI methods before determining whether simpler techniques could do the job.
A related failure point is believing that every project requires perfect, complete data (or the other extreme: a crisp problem statement with no data available to develop a solution). Getting the data layer right is essential, and it includes discovering, ingesting, storing, and preparing the relevant data. But most business problems are solved every day without perfect data. Why? Because few practical situations demand a perfect solution and fail with anything less; a solution that beats the existing one clearly trumps an elusive perfect one. By matching the algorithmic approach and the level of data rigor to the business problem, AI innovators can avoid slow, costly, complex development and implementation when simpler approaches would solve the problem just as well, with far less cost in skill sets, infrastructure, and opportunity.
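The "simplest method that works" principle can be made concrete as a model-selection gate: keep the simple, explainable model unless a more complex one clearly earns its keep on held-out data. The sketch below is a hypothetical illustration, not Intel's actual workflow; the models, holdout data, and `min_gain` threshold are all illustrative assumptions.

```python
# Hypothetical sketch: prefer the simpler model unless the complex one
# beats it on held-out data by a meaningful margin.

def validation_error(predict, data):
    """Mean absolute error of predict(x) over (x, y) pairs."""
    return sum(abs(predict(x) - y) for x, y in data) / len(data)

def choose_model(simple, complex_, holdout, min_gain=0.05):
    """Keep the simple model unless the complex one improves
    held-out error by at least min_gain (relative)."""
    e_simple = validation_error(simple, holdout)
    e_complex = validation_error(complex_, holdout)
    if e_complex < e_simple * (1 - min_gain):
        return "complex", e_complex
    return "simple", e_simple

# Toy holdout set: y is roughly 2x with a little noise (illustrative).
holdout = [(1, 2.1), (2, 3.9), (3, 6.2), (4, 8.0)]
simple_model = lambda x: 2 * x             # simple, explainable rule
fancy_model = lambda x: 2 * x + 0.05       # "fancier" model, barely different

print(choose_model(simple_model, fancy_model, holdout))
```

Here the complex model offers no real improvement, so the gate keeps the simple one, along with its explainability and lower infrastructure cost.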
4. Failure to test the solution in real life and move forward
Projects often achieve their technical objectives but get stymied when they fail to convince decision makers that the solution will work and is worth testing in real life. This can be an outgrowth of pitfall #2, with key stakeholders insufficiently engaged with the project or left out of it altogether. It can result from pitfall #3, when ML/AI engineers are unable to explain “why” because of the ML technique they chose. It may also occur when an organization’s culture makes people see the project as a threat, whether from concerns about eventual job losses or a perceived reduction of an individual’s or organization’s value to the overall outcome.

These feelings are understandable. I’ve encountered them when telling a team of highly skilled professionals, many of whom have spent decades developing their expertise, that an algorithm’s recommendation is worth testing in real life. The challenge grows when the recommendation contradicts the decision maker’s intuition. While there is no silver bullet, offering to test the recommendation first at a small scale that bounds the risk of being wrong, and then to grow the implementation progressively with success milestones along the way, has better odds of acceptance. As an added bonus, decision makers often discover that they add even more value to the overall outcome, elevating themselves to higher-value tasks while delegating speed- and scale-oriented work to AI. When managed appropriately, it can be a win-win across the board.
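The "start small, grow with milestones" idea above can be sketched as a staged rollout policy: each stage delegates a larger share of decisions to the algorithm, advancing only when the stage's success milestone is met and falling back otherwise to bound the risk of being wrong. The stage fractions and milestone check below are illustrative assumptions, not a prescribed process.

```python
# Hypothetical staged-rollout sketch: expand the algorithm's scope only
# after each stage meets its success milestone; shrink it otherwise.

STAGES = [0.05, 0.20, 0.50, 1.00]  # fraction of decisions delegated to AI

def next_stage(current, meets_milestone):
    """Advance one stage on success; fall back one stage on failure."""
    i = STAGES.index(current)
    if meets_milestone:
        return STAGES[min(i + 1, len(STAGES) - 1)]
    return STAGES[max(i - 1, 0)]

print(next_stage(0.05, True))    # successful pilot: grow to the next stage
print(next_stage(0.20, False))   # missed milestone: retreat, bounding risk
```

The point is less the code than the contract it encodes: the downside at any stage is limited to that stage's scope, which makes the recommendation far easier for a skeptical decision maker to accept.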
5. Failure to build on success
Even “successful” projects will have limited payoff if the organization doesn’t keep advancing its use of AI. Initial adoption must translate into more strategic and impactful projects that solve the next level of problems. Solutions must be automated so the organization can readily apply existing compute capacity to new problems. Automation increases scale and leverage, and building on previous wins to tackle new impactful, strategic projects raises the team’s aim. This is how organizations create a virtuous cycle that delivers more value from AI over time.
The Biggest Pitfall: Failure to Start
I don’t want these pitfalls to discourage anyone from moving forward with AI innovation, so let me add what is potentially the greatest pitfall: Sitting on the sidelines and failing to engage. For all of AI’s challenges, the potential rewards are enormous, and the risks of stagnating while the rest of the world moves forward are significant. Think of your toughest enterprise challenges and your biggest questions about the future. Chances are that AI innovations will offer exciting opportunities to address them.
AI is here to stay. It’s changing our world. Intel and our partners offer a variety of resources to help you maximize value and avoid stumbling blocks. Let’s get started!
© Intel Corporation. Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries. Other names and brands may be claimed as the property of others.