Susan Kahler, AI/ML Products and Solutions Marketing Manager, Intel
This blog summarizes the two sessions utilizing Intel® Gaudi® AI deep learning processors that were delivered at the oneAPI DevSummit for AI and HPC 2023:
- Redefining Voice Signal Processing with Habana® (Intel) Gaudi®
- Intel® Gaudi® in Action: Solving Real-World Challenges with Fine-Tuned Language Models
Tech Talk: Redefining Voice Signal Processing with Habana® Gaudi®
In this tech talk, Dr. Singh discusses how specialized hardware, such as Habana® (Intel) Gaudi2®, makes it possible to properly design the next generation of voice processing systems. She provides examples of voice signals and their levels (acoustic, sub-word, syntactic, and semantic) along with a high-level transformer architecture. This transformer architecture includes self-attention, cross-attention, and feedforward multi-layer perceptron layers, which result in millions of parameters. The compute needed for these models can approach 1.6T floating-point operations per second (FLOPS). For example, 100k hours of speech with 400 tokens extracted for each second of speech amounts to a total of 144T tokens. In comparison, Llama2 uses only 3T tokens of text for training. She concludes that Gaudi2 has the parallelism and acceleration needed to train these voice processing systems efficiently.
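To make the architecture concrete, here is a minimal PyTorch sketch of a decoder-style transformer block with self-attention, cross-attention, and a feedforward MLP. The dimensions are illustrative choices rather than values from the talk; the printed count simply shows how quickly even a single block reaches millions of parameters.

```python
import torch.nn as nn

class DecoderBlock(nn.Module):
    """Illustrative transformer decoder block: self-attention, cross-attention
    over encoder states, and a feedforward multi-layer perceptron."""
    def __init__(self, d_model=1024, n_heads=16, d_ff=4096):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)
        self.norm3 = nn.LayerNorm(d_model)

    def forward(self, x, encoder_states):
        h = self.norm1(x)
        x = x + self.self_attn(h, h, h)[0]                              # attend over the decoder's own tokens
        h = self.norm2(x)
        x = x + self.cross_attn(h, encoder_states, encoder_states)[0]   # attend over encoder (e.g., acoustic) states
        return x + self.ffn(self.norm3(x))                              # position-wise feedforward MLP

block = DecoderBlock()
print(sum(p.numel() for p in block.parameters()))  # roughly 17M parameters for one block at these sizes
```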
Watch the full video recording here and download the slides to learn more about the project.
Demo: Intel® Gaudi® in Action: Solving Real-World Challenges with Fine-Tuned Language Models
In this demo, Burak Aksar discusses the need to provide revenue teams with actionable customer insights through Spiky.ai’s more than 25 models, which target vocal, language, and visual metrics. These models incorporate emotional intelligence and contextual information. Spiky created a moments dataset, adapted from existing open-source spoken dialogue datasets and paired with customer ratings of those moments. They then used first-generation Intel Gaudi (DL1) instances to fine-tune large language models (LLMs), such as Llama-7B, on datasets containing sales call transcripts and industry jargon to improve sales correspondence language.
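The demo does not show data-preparation code, but a hypothetical sketch of how call moments and their ratings might be packed into prompt/completion records for fine-tuning could look like the following. The field names, prompt template, and file layout are assumptions for illustration, not Spiky’s actual schema.

```python
import json

# Hypothetical "moments" records: a transcript snippet paired with a customer
# rating. Field names are illustrative, not Spiky's actual data format.
moments = [
    {"transcript": "With the annual plan you also get priority onboarding support.", "rating": 4},
    {"transcript": "Um, I think that feature is, uh, maybe on the roadmap?", "rating": 2},
]

def to_training_record(moment):
    """Turn one moment into a prompt/completion pair for LLM fine-tuning."""
    prompt = ("Rate this sales-call moment from 1 (poor) to 5 (excellent):\n"
              + moment["transcript"])
    return {"prompt": prompt, "completion": str(moment["rating"])}

# Write JSON Lines, a common format for fine-tuning datasets.
with open("moments_train.jsonl", "w") as f:
    for m in moments:
        f.write(json.dumps(to_training_record(m)) + "\n")
```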
Using the Intel Developer Cloud, Eduardo Alvarez shows how to make state-of-the-art (SoTA) models accessible on Intel Gaudi2 by fine-tuning Llama2-7B with Low-Rank Adaptation (LoRA) on the openassistant-guanaco dataset. LoRA freezes the base model’s weights and trains small low-rank adapter matrices, which greatly reduces the number of trainable parameters, making the model easier to train with little to no degradation in performance. Here is the workflow he walked through (a condensed code sketch follows the list):
- Started with a Llama2-7B model
- Leveraged the Open Assistant Guanaco dataset to fine-tune that model
- Used an open-source package stack of Habana DeepSpeed, PEFT from Hugging Face, and Optimum Habana
- Ran on an Intel Developer Cloud Gaudi instance
This workflow allows many fine-tuning iterations to run in a short amount of time.
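For reference, here is a condensed sketch of what such a LoRA fine-tuning run can look like with PEFT and Optimum Habana on a Gaudi instance. The hyperparameters, the Gaudi configuration repo, and the dataset handling are illustrative assumptions rather than the exact setup from the demo, and the DeepSpeed launcher used for multi-card runs is omitted for brevity.

```python
# Assumes optimum-habana, peft, datasets, and transformers are installed on a Gaudi instance.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, DataCollatorForLanguageModeling
from peft import LoraConfig, get_peft_model
from optimum.habana import GaudiConfig, GaudiTrainer, GaudiTrainingArguments

model_id = "meta-llama/Llama-2-7b-hf"          # gated model; requires Hugging Face access
tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_id)

# Attach low-rank adapters to the attention projections; only these weights train.
lora = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05,
                  target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
model = get_peft_model(model, lora)

# openassistant-guanaco exposes a single "text" column of chat transcripts.
data = load_dataset("timdettmers/openassistant-guanaco", split="train")
data = data.map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=512),
                batched=True, remove_columns=["text"])

args = GaudiTrainingArguments(
    output_dir="llama2-7b-guanaco-lora",
    per_device_train_batch_size=2,
    num_train_epochs=1,
    use_habana=True,        # run on the HPU
    use_lazy_mode=True,
)
trainer = GaudiTrainer(
    model=model,
    args=args,
    train_dataset=data,
    tokenizer=tokenizer,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    gaudi_config=GaudiConfig.from_pretrained("Habana/llama"),  # Gaudi config repo name assumed
)
trainer.train()
```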
Watch the full video recording here and download the slides to learn more about the project.
About the Speakers
Dr. Rita Singh, Associate Research Professor, Carnegie Mellon University
Dr. Rita Singh is an Associate Research Professor at CMU’s Language Technologies Institute, where she leads the Center for Voice Intelligence and Security. She also co-leads the research groups on Robust Speech Processing and Machine Learning for Signal Processing. With over 20 years in speech and audio processing, her recent focus has been on voice-based human profiling, blending AI and voice forensics. Her team has pioneered several world firsts. In 2018, they created the first live voice-profiling system, demonstrated at the World Economic Forum. In 2019, they recreated Rembrandt’s voice from his facial self-portraits. In 2020, they pioneered the technology behind voice-driven Covid detection. She is the author of the book “Profiling Humans from their Voice” and assists multiple federal and global agencies in forensic voice profiling. Her work has gained extensive global media coverage.
Eduardo Alvarez, Senior AI Solutions Engineer, Intel
Eduardo Alvarez is a Senior AI Solutions Engineer at Intel, specializing in architecting AI/ML solutions, MLOps, and deep learning. With a background in the energy tech startup space, he managed a team focused on delivering SaaS applications for subsurface data analytics in hydrocarbon and renewable energy production. Eduardo collaborates across technical teams at Intel, designing impactful solutions that highlight the Intel software and hardware stack’s influence on AI innovation. He holds a degree in Geophysics from Texas A&M University and is recognized as an accomplished technical author and community leader in AI.
Burak Aksar, Founder, Spiky.ai
Burak Aksar holds a Ph.D. in Electrical and Computer Engineering from Boston University and a B.S. in Electronics Engineering from Sabanci University. Before founding Spiky.ai, he gained valuable experience at IBM AI Research and Sandia National Labs, developing and deploying end-to-end AI solutions across diverse domains with a focus on explainability and robustness. Burak is also the author of nearly ten research papers and holds two patents.
Useful resources
- Intel AI Developer Tools and Resources
- Intel® Liftoff for Startups
- oneAPI Unified Programming Model
- Intel Gaudi AI Deep Learning Processors