Artificial Intelligence (AI)
Discuss current events in AI and technological innovations with Intel® employees

How Enterprises and Developers are Powering AI Solutions using Intel® Tiber™ Developer Cloud

Sonya_Wach
Employee

Getting access to the latest hardware and software tools for AI acceleration is paramount for enterprises, developers, and growing startups to build the best possible solutions with optimal efficiency. Intel® Tiber™ Developer Cloud makes accessing these technologies easy, empowering startups to innovate and iterate quickly, with the opportunity to scale workloads effortlessly when needed. The Intel® Liftoff for Startups program helps early-stage AI and machine learning startups by removing code barriers and unleashing performance to grow them into industry-defining AI companies. These startups get access to the resources needed to solve their technical challenges, including the Intel Tiber Developer Cloud and the suite of tools available with oneAPI and optimized AI frameworks.

Developers are making use of these AI tools and Intel’s developer cloud to streamline their development and improve model performance. Outlined below are a few startups and how they’re utilizing these tools right now:

Seekr* Builds Brand Trust with Trustworthy AI

Establishing a brand identity that is trustworthy and suitable for target audiences is often a challenge for growing companies. Seekr solves these advertising problems by building trustworthy Large Language Models (LLMs) for generating and evaluating content at scale. SeekrAlign* uses proprietary scoring and analysis software to grow marketing reach responsibly with contextual AI, while SeekrFlow* reduces cost and complexity through an end-to-end toolset to build, scale, and validate AI workflows.

Seekr built, trained, and deployed its AI solutions on 8-node Intel® Gaudi® 2 AI accelerator clusters and 4th Gen Intel® Xeon® Scalable processors in the Intel Tiber Developer Cloud, achieving significant performance gains over on-premises environments at substantially lower cost across a variety of AI workloads. In addition to hardware, Seekr also used software optimizations like the Intel® Extension for PyTorch* for transcription and audio processing on Intel GPU-powered systems. Implementing SeekrAlign in the cloud helped scale systems to keep pace with customer growth and growing LLM sizes at low cost, while significantly improving model training and inference speeds compared to on-premises deployments.

Prediction Guard* Reduces Risks in LLM GenAI Applications

The use of LLMs in business applications has grown substantially, yet barriers remain to leveraging the latest LLMs to keep up with competitors and new technologies like generative AI (GenAI) applications. Prediction Guard is helping companies unlock the potential of LLMs with scalable APIs that prevent hallucinations, institute governance, and ensure compliance while reducing risks around reliability, variability, and security.
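Prediction Guard's actual API is not detailed in this post, but the kind of check such a guard layer performs can be illustrated with a small sketch: validating an LLM's structured output against an expected schema before the response is released to the application. The function and field names below are hypothetical, using only the Python standard library.

```python
import json

def guard_output(raw_response, required_keys):
    """Hypothetical guard-layer check: reject malformed or incomplete
    LLM output before it reaches the downstream application."""
    try:
        data = json.loads(raw_response)
    except json.JSONDecodeError as exc:
        raise ValueError(f"LLM returned non-JSON output: {exc}")
    missing = set(required_keys) - data.keys()
    if missing:
        raise ValueError(f"LLM output missing required fields: {missing}")
    return data

# A well-formed response passes the check...
ok = guard_output('{"answer": "4", "source": "arithmetic"}', {"answer", "source"})

# ...while an incomplete one is rejected instead of being passed along.
try:
    guard_output('{"answer": "4"}', {"answer", "source"})
except ValueError as err:
    print("rejected:", err)
```

A production guard layer would add further checks (factual-consistency scoring, policy filters), but the pattern is the same: validate every model response against an explicit contract rather than trusting it blindly.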

Leveraging the Intel Tiber Developer Cloud on Intel Gaudi 2 accelerators, Prediction Guard has built applications that run more efficiently and effectively in a secure, private environment. Prediction Guard achieved significant cost reductions by moving workloads from GPU environments to Intel Gaudi 2 accelerators. In addition to cost savings, it achieved a 2x throughput increase1, including a performance boost on 7B-parameter models, helping customers using LLMs for GenAI applications. Prediction Guard further improved efficiency with software optimizations like the Intel Extension for PyTorch and the Intel® Extension for Transformers* with state-of-the-art (SOTA) compression techniques for LLMs running on Intel platforms.

Selecton Technologies* Launches AI Virtual Assistant

Virtual assistants can often seem robotic, struggling to understand human interactions and respond accordingly. To improve these interactions, Selecton Technologies is using AI to build virtual assistants that learn gestures and habits to communicate better with people. The company is focused on building technologies that offer an entirely new, unique, and hyper-personalized human purchasing experience. Selecton Technologies is developing SELECTA, an LLM-based AI virtual assistant that enhances the buying experience to fit any customer situation.

Using the Intel Tiber Developer Cloud, Selecton Technologies fine-tuned the Dolly 7B and OpenLLaMA 3B LLMs using the Low-Rank Adaptation (LoRA) training script on Intel® Data Center GPU Max Series and 4th Gen Intel® Xeon® Scalable processors. The team used tools such as the Intel® oneAPI Base Toolkit, with technologies like the oneAPI Deep Neural Network Library (oneDNN) and the oneAPI DPC++/C++ Compiler with the SYCL* runtime, to avoid proprietary lock-in and reuse code across CPUs and GPUs. Selecton Technologies uses the Intel Tiber Developer Cloud to build AI business applications on virtual hardware while applying these software tools and optimizations to bring its deep learning solutions to a new level of both performance and efficiency.
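The LoRA technique mentioned above keeps the pretrained weights frozen and trains only a small low-rank update, which drastically cuts the number of trainable parameters during fine-tuning. The following is a minimal numpy sketch of the idea (an illustration of the math, not Selecton's actual training script; the dimensions and rank are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

d, k, r = 1024, 1024, 8                  # layer dims and LoRA rank (r << d, k)
W = rng.standard_normal((d, k))          # frozen pretrained weight
A = rng.standard_normal((d, r)) * 0.01   # trainable low-rank factor
B = np.zeros((r, k))                     # zero-initialized so the update starts at 0

def lora_forward(x):
    # Adapted layer: frozen W plus the low-rank update A @ B
    return x @ W + x @ A @ B

# Only A and B are updated during fine-tuning:
trainable = A.size + B.size   # 16,384 parameters
frozen = W.size               # 1,048,576 parameters
print(f"trainable params: {trainable} vs frozen: {frozen}")
```

For this single layer the trainable update is about 1.6% of the frozen weight's size, which is why LoRA fine-tuning of 3B–7B models fits comfortably on a single accelerator node.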

SiteMana* Uses AI for Privacy-minded Engagement and Retargeting

Engaging with anonymous traffic while maintaining individual privacy and driving buyer conversion is often a difficult task for e-commerce businesses. SiteMana solves this problem by using AI and LLMs to automate the marketing process and provide businesses with a secure and legal way to engage with potential customers. SiteMana is developing a customizable AI platform that continuously improves customer content and copywriting based on real-time engagement from high-intent site visitors.

Through the Intel Tiber Developer Cloud, SiteMana developed an API that generates personalized content in real time using natural language processing (NLP) algorithms to meet the specific needs of customers. SiteMana ran LLM inference on a combination of 4th Gen Intel Xeon Scalable processors and Intel Data Center GPU Max Series GPUs to improve algorithm efficiency, and fine-tuned the Dolly 7B and OpenLLaMA 3B LLMs using the oneAPI Base Toolkit. SiteMana also used the Intel® Extension for Scikit-learn* to accelerate scikit-learn for machine learning across single- and multi-node configurations. With these optimizations and platforms, SiteMana successfully deployed its Mana LLM for enhanced email generation and retargeting to drive future revenue growth.

The Intel Tiber Developer Cloud offers several service levels depending on company needs. A free tier for developers covers hardware evaluation, model development, and education, and includes access to Jupyter notebooks for trying out the latest AI hardware and optimized software. Enterprise service tiers add AI compute, training, and inference workload deployments, along with greater access to AI accelerators to fit specific needs. Check out these video guides on how to get started and how to access GenAI notebooks on the Intel Tiber Developer Cloud.

Check out for yourself the tools these startups are using to build their solutions, like Intel's AI tools and framework optimizations and the unified, open, standards-based oneAPI programming model that forms the foundation of Intel's AI software portfolio. Also discover how our other collaborations with industry-leading independent software vendors (ISVs), system integrators (SIs), original equipment manufacturers (OEMs), and enterprise users accelerate AI adoption.

1. As reported by Prediction Guard as of January 31, 2024.

About the Author
AI/ML Technical Software Product Marketing Manager at Intel. MBA, Engineer, and previous two-time startup founder with a passion for all things AI and new tech.