AI developers often face bottlenecks when moving from local development to production environments. With Intel® AI PC, Podman, Red Hat® OpenShift® AI, and OPEA, developers can build, test, and deploy AI models efficiently across diverse environments. At this year’s Intel® AI DevSummit, Red Hat Solutions Architect Chris Calderon shared insights on how developers can boost productivity and optimize the performance of their AI applications by adopting these tools and technologies.
The Full Circle Developer Experience using Podman—From Intel AI PC to OpenShift AI
In this session, Chris highlights Red Hat open source solutions like Red Hat Enterprise Linux AI and OpenShift AI, which support hybrid and multicloud elasticity and offer a turnkey experience for developers, facilitating seamless transitions from bare metal to cloud.
“By prioritizing automation and seamless integration,” said Chris, “we can enhance the developer experience, lower barriers to entry, and empower businesses to fully harness the potential of AI technologies.”
Chris explained the technical aspects of AI deployment, including the use of Podman for running lightweight container images on various operating systems. He also discussed how developers can prototype and test locally using tools like Podman Desktop and InstructLab, which help reduce costs and streamline the process of training and deploying AI models.
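This local-first workflow can be sketched with a minimal Containerfile for a containerized model-serving app. The base image, file names, and port below are illustrative assumptions, not details from the talk:

```dockerfile
# Containerfile — hypothetical sketch of a lightweight model-serving image
FROM registry.access.redhat.com/ubi9/python-311

# Install dependencies first so the layer is cached between code changes
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# serve.py is a placeholder for your inference server
COPY serve.py .
EXPOSE 8000
CMD ["python", "serve.py"]
```

You would build and test this locally with `podman build -t my-model .` and `podman run --rm -p 8000:8000 my-model`; because Podman produces standard OCI images, the same image can later be pushed to a registry and deployed on OpenShift without changes.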
Chris wrapped up his session by talking about OpenShift AI, which provides a comprehensive platform for creating and managing AI projects. He highlighted features such as data science pipelines, model serving, and the ability to import and manage notebook images.
If you're looking to develop an AI training job locally before migrating to the cloud, watch the full video recording here.
Build a RAG Pipeline with Red Hat OpenShift AI and OPEA Validated Platform
In his second talk, Chris walked viewers through the integration of AI and enterprise solutions, stressing the importance of leveraging retrieval-augmented generation (RAG) databases to enhance operational lifecycles and reduce the need for constant model retraining. By using a RAG interface, said Chris, developers can reference existing databases, making the process more efficient and cost-effective.
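The core RAG idea, retrieve relevant context from an existing database and prepend it to the prompt instead of retraining the model, can be shown with a toy sketch. A real pipeline would use an embedding model and a vector database; the keyword-overlap scoring and sample documents below are illustrative assumptions only:

```python
# Toy RAG retrieval sketch (illustrative; not from the talk).
# Real systems score relevance with vector embeddings, not word overlap.

def score(query: str, doc: str) -> int:
    """Count query words that also appear in the document (toy relevance)."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k documents most relevant to the query."""
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Prepend retrieved context so the model answers from the database
    rather than relying solely on its (possibly stale) training data."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}"

knowledge_base = [
    "OpenShift AI provides data science pipelines and model serving.",
    "Podman runs lightweight container images on many operating systems.",
    "RAG reduces the need for constant model retraining.",
]

print(build_prompt("Why does RAG reduce retraining?", knowledge_base))
```

Because the model's answer is grounded in whatever the database currently holds, updating the knowledge base is enough to keep responses current, which is the cost advantage over retraining that Chris describes.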
Chris took viewers through the process of using OpenShift AI to create and manage AI projects, explaining how to utilize tools like Jupyter notebooks and various runtimes to streamline workflows. Chris also covered the integration of Intel® Gaudi® AI accelerators and OpenVINO™ runtimes, which help optimize AI workloads and improve performance.
Does using a RAG database make your chatbot more effective? Chris shared a practical demo comparing two chatbots, one using a RAG database and one without. Watch the full video recording here to see why using RAG equips your chatbot to provide more accurate and context-aware responses.
Hungry for More AI Knowledge?
Tune into more AI sessions from the Intel® AI DevSummit 2025 to learn from experts, explore the latest advancements and best practices, and take your projects to the next level.
We also encourage you to check out Intel’s other AI/ML framework optimizations and tools and incorporate them into your AI workflow, and to learn about the unified, open, standards-based oneAPI programming model that forms the foundation of Intel’s AI software portfolio, helping you prepare, build, deploy, and scale your AI solutions.