
Why Open Source AI Models Matter and Using Open LLMs

Adam_Wolf
Employee

Code Together covers the rapidly changing software landscape from the perspective of developers. Listen as we talk to developers, entrepreneurs, and technologists who are trying to create solutions that meet the needs of the modern, accelerated computing world.

In this episode of Intel’s "Code Together" podcast, Why Open Source AI Models Matter and Using Open LLMs, host Tony Mongkolsmai is joined by Eduardo Alvarez, an AI Solutions Engineer at Intel, to discuss AI topics with a focus on Large Language Models (LLMs) and their applications. Eduardo describes his work at Intel, which involves enabling developers to leverage Intel's technology on both the software and hardware side. He emphasizes the importance of understanding how AI runs on CPUs and GPUs and discusses efforts to optimize workloads on Intel's Xeon platform and Gaudi accelerators.

The conversation then delves into open-source AI, particularly generative AI, and into Hugging Face, a platform through which developers can access open models and Intel infrastructure. Eduardo highlights the significance of open models like Falcon and Llama, which are available for both commercial and research use. The discussion also touches on the challenges posed by extremely large models, which motivate techniques like quantization to make them accessible and usable for developers with limited hardware resources.
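To make that last point concrete, here is a minimal sketch of loading an openly licensed LLM from the Hugging Face Hub and applying post-training dynamic quantization with PyTorch so it runs in a smaller memory footprint on a CPU. The model ID and the dynamic-quantization approach are illustrative assumptions on my part, not techniques prescribed in the episode.

```python
# Minimal sketch (assumptions: transformers and torch installed; the checkpoint
# "tiiuae/falcon-7b-instruct" is chosen purely for illustration).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tiiuae/falcon-7b-instruct"  # any causal-LM checkpoint works; smaller ones are lighter to test

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float32)
model.eval()

# Post-training dynamic quantization: Linear-layer weights are stored as int8,
# shrinking the memory footprint at a small cost in accuracy.
quantized_model = torch.ao.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

prompt = "Why do open source AI models matter?"
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    output_ids = quantized_model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```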

Eduardo and Tony also explore the evolving landscape of AI development, including the shift from developers spending their time on neural network architectures and hyperparameters to focusing on operational AI. The conversation emphasizes the need to help developers apply AI solutions practically, using tools like the Intel® Distribution of OpenVINO™ Toolkit and other software initiatives (a brief sketch of this workflow appears below).

They then turn to Intel's machine learning certification efforts, specifically the MLOps and Performant AI Solution Design course, designed to help professionals optimize AI lifecycles and stack infrastructure for efficient model deployment. The course offers professional-level industry certification in MLOps and aims to bridge the gap between AI research and practical application. Eduardo shares his experience creating the MLOps content for the brand-new Intel Developer Certification program. The episode concludes with a discussion of his journey as a LatinX engineer in celebration of Hispanic Heritage Month.
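As promised above, here is a minimal sketch of what an OpenVINO-based inference workflow can look like, assuming the optimum-intel package is installed (for example via `pip install optimum[openvino]`); the model ID is again an illustrative choice rather than one named in the episode.

```python
# Minimal sketch (assumption: optimum-intel with the OpenVINO extra is installed).
from optimum.intel import OVModelForCausalLM
from transformers import AutoTokenizer

model_id = "tiiuae/falcon-7b-instruct"  # illustrative open checkpoint, not from the episode

tokenizer = AutoTokenizer.from_pretrained(model_id)

# export=True converts the PyTorch checkpoint to OpenVINO IR on the fly,
# so the generation below runs through the OpenVINO runtime on the CPU.
ov_model = OVModelForCausalLM.from_pretrained(model_id, export=True)

inputs = tokenizer("What does operational AI look like in practice?", return_tensors="pt")
output_ids = ov_model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```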

We encourage you to check out Intel’s other AI Tools and Framework optimizations and learn about the unified, open, standards-based oneAPI programming model that forms the foundation of Intel’s AI Software Portfolio.

The Speakers:

Eduardo Alvarez - AI Solutions Engineer at Intel

References

ChatGPT & Wolf, A. (2023, November 16). Summarize the main points from the following transcript. https://chat.openai.com

About the Author
AI Software Marketing Engineer creating insightful content about the cutting-edge AI and ML technologies and software tools coming out of Intel.