Looking for technical content to tide you over until the next Intel Innovation developer conference? Check out Innovation Selects on the Intel Software YouTube channel for videos of select tech talks and demos from this year’s postponed live event.
New product deep dives
Get deeper insights into Intel’s recently announced products, including Intel® Gaudi® 3 AI Accelerators, Intel® Xeon® 6 processors, and Intel® Core™ Ultra processors (Series 2), with these sessions:
Intel® Gaudi® 3 AI Accelerators: Bringing Choice to Gen AI with Efficiency, Scalability, Performance
Dive deep into the Intel® Gaudi® 3 AI Accelerator, a groundbreaking technology that offers unparalleled efficiency, scalability, and performance for generative AI.
Intel® Xeon® 6 Architecture and Proven Performance RAG
This video presentation showcases the advanced capabilities of Intel® Xeon® 6 processors with E-cores and P-cores, optimized respectively for high-density, scale-out workloads and for compute-intensive AI and HPC tasks.
Intel Core Ultra 200V Series Performance Deep Dive
Explore Intel® Core™ Ultra 200V series processors, the latest advancement in high-performance client computing, including their design, energy efficiency, and performance advantages in AI, productivity, and gaming.
Get even more insights into Intel’s latest client computing platform with the sessions Optimizing Software on Intel Core Ultra 200V Series Processors Hybrid Architecture and Advanced Features of Intel Core Ultra 200V Series Processors.
Harness the power of the AI PC
If you want to dive into how Intel® Core™ Ultra 200V series processors enable the AI PC revolution, check out:
Intel and AI PC: Igniting a Software Ecosystem
Discover how Intel and software partners have collaborated to optimize AI applications on Intel® Core™ Ultra 200V processors.
Deploying AI Assistants with OpenVINO™ on the AI PC
Learn about power-efficient AI at the edge by building local assistants on Intel® Core™ Ultra 200V processors, leveraging LLMs, speech transcription, and optimization techniques for efficient deployment.
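For a feel of what a local assistant looks like in code, here is a minimal sketch using the OpenVINO GenAI LLMPipeline API. It assumes a chat model that has already been exported to OpenVINO format (for example with optimum-cli), and the model path and device choice below are placeholders; check the OpenVINO GenAI documentation for the API available in your release.

```python
# Minimal local-assistant sketch (assumes the openvino-genai package and a model
# already exported to OpenVINO IR, e.g. with `optimum-cli export openvino ...`).
import openvino_genai

MODEL_DIR = "./llama-3-8b-instruct-ov"  # hypothetical path to an exported model
DEVICE = "NPU"                          # or "GPU"/"CPU", depending on your AI PC

# LLMPipeline loads the model and tokenizer from the exported directory
# and compiles them for the selected device.
pipe = openvino_genai.LLMPipeline(MODEL_DIR, DEVICE)

prompt = "Summarize today's meeting notes in three bullet points."
print(pipe.generate(prompt, max_new_tokens=128))
```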
Open Standards for Generative AI
Intel is a leading contributor to open, standards-based LLM projects. Tap into expert insights to explore generative AI and optimize your LLM-based applications.
Open Standard, Multi-vendor AI Training and Inference with LLMs
Learn about the llm.c and llama.cpp projects and their potential for efficient training and inference on Intel and Nvidia GPUs, how their SYCL implementations work, and how you can use them to run LLM training and inference across multiple vendors’ hardware.
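As a quick illustration of the llama.cpp side, here is a sketch of running inference through its Python bindings. It assumes llama-cpp-python was built against a llama.cpp with the SYCL backend enabled (the GGML_SYCL CMake option at the time of writing) and that a GGUF model file is available; the model filename is a placeholder.

```python
# Sketch of llama.cpp inference via its Python bindings
# (assumes a SYCL-enabled build and a locally downloaded GGUF model).
from llama_cpp import Llama

llm = Llama(
    model_path="./models/llama-3-8b-instruct.Q4_K_M.gguf",  # hypothetical file
    n_gpu_layers=-1,  # offload all layers to the GPU backend when possible
    n_ctx=4096,       # context window
)

out = llm("Explain what SYCL is in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```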
Beyond LLMs: Foundation AI Models of Tomorrow
Explore Intel Labs’ groundbreaking work in AI: their research on LLMs, how they’re pushing the boundaries of the field, and the open-source tools and new open benchmarks they’re establishing for industry and academia.
A PyTorch and OPEA based AI Audio Avatar Chatbot
Discover how to build a real-world AI audio avatar chatbot with OPEA. This demo shows how an avatar application can be assembled quickly from OPEA’s end-to-end GenAI examples and microservices, and how flexibly OPEA microservices integrate with new services such as face and body animation. The resulting OPEA-based GenAI application provides an open, secure communication AI pipeline optimized for Intel® Xeon® Scalable processors and Intel® Gaudi® accelerators, and it enables efficient, scalable LLM deployment.
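To make the microservice idea concrete, here is a conceptual sketch of the avatar pipeline as a chain of independent HTTP services. The endpoints and payloads below are hypothetical placeholders, not OPEA’s actual routes; in the real demo these roles are filled by OPEA GenAIComps microservices.

```python
# Conceptual sketch: one user utterance flows through ASR -> LLM -> animation.
# All URLs and JSON fields are hypothetical stand-ins for OPEA microservices.
import requests

ASR_URL = "http://localhost:7066/transcribe"     # speech -> text (hypothetical)
LLM_URL = "http://localhost:9000/generate"       # text   -> reply (hypothetical)
ANIMATION_URL = "http://localhost:7860/animate"  # reply  -> avatar video (hypothetical)

def avatar_chat(audio_bytes: bytes) -> bytes:
    """Run one utterance through the chained microservices and return a video clip."""
    text = requests.post(ASR_URL, files={"audio": audio_bytes}).json()["text"]
    reply = requests.post(LLM_URL, json={"prompt": text}).json()["reply"]
    video = requests.post(ANIMATION_URL, json={"text": reply}).content
    return video

if __name__ == "__main__":
    with open("question.wav", "rb") as f:
        clip = avatar_chat(f.read())
    with open("answer.mp4", "wb") as f:
        f.write(clip)
```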
Lower GenAI Adoption Barriers with OPEA
Learn how to easily deploy and customize AI models for various applications with OPEA, the Open Platform for Enterprise AI. OPEA lowers adoption barriers, letting you unlock the value of your proprietary, domain-specific data with open-source models and tools, industry-strength security, and tested templates for popular use cases such as ChatQnA and CodeGen.
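Once a template like ChatQnA is deployed, using it from an application is a single HTTP call. The sketch below assumes the ChatQnA megaservice is exposed locally on port 8888 at /v1/chatqna, as in OPEA’s published example; confirm the exact endpoint and payload in the README of the template you deploy.

```python
# Querying a deployed ChatQnA template (endpoint and payload are assumptions
# based on OPEA's ChatQnA example; verify against your deployment's docs).
import requests

resp = requests.post(
    "http://localhost:8888/v1/chatqna",
    json={"messages": "What does our Q3 sales report say about EMEA?"},
    timeout=120,
)
resp.raise_for_status()
print(resp.text)  # answer grounded in your ingested domain-specific data
```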
Industry Conversation with Priyanka Sharma
Join Priyanka Sharma, Executive Director of the Cloud Native Computing Foundation, and Arun Gupta, Intel VP and GM of Open Ecosystem initiatives, for an engaging discussion of the impact developers are having on cloud-native environments, AI, and open source, and of where development is headed next.
Intel Approach to Scaling Systems and Enterprise RAG
Explore how Intel’s open systems approach empowers businesses to apply generative AI across a variety of use cases, and learn about the new Intel AI Software Catalog (launching in Q4 2024), which offers a range of AI tools and models for tasks like chat Q&A, code generation, and content summarization.
Security
Protect Security and Privacy with FIPS 140-3 Cryptography
Discover the new Intel® Cryptography Primitives Library and how to use it in your applications, including how to secure data with FIPS 140-3 compliant encryption. The FIPS 140-3 (ISO/IEC 19790) computer security standard specifies requirements for cryptography used by U.S. government entities, and financial institutions, health sciences organizations, and the broader private sector also embrace this proven set of standards.
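The talk itself covers the Intel® Cryptography Primitives Library, which is a C library. As a language-consistent stand-in, the sketch below only illustrates the kind of FIPS-approved primitive involved (AES-256-GCM authenticated encryption) using the Python cryptography package; it is not the Intel library’s API.

```python
# Illustration of a FIPS-approved primitive (AES-256-GCM) with the Python
# `cryptography` package. This is NOT the Intel Cryptography Primitives Library
# API; it only shows the type of operation the session discusses.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # in production, use a managed key
nonce = os.urandom(12)                     # 96-bit nonce, unique per message

aesgcm = AESGCM(key)
ciphertext = aesgcm.encrypt(nonce, b"patient record #1234", b"associated-data")
plaintext = aesgcm.decrypt(nonce, ciphertext, b"associated-data")
assert plaintext == b"patient record #1234"
```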
Go to Innovation Selects to experience all of these tech talks and demos, and more!