If you’ve been online at all lately, you’ve almost certainly caught wind of the excitement over DeepSeek’s latest update. And for good reason. This breakthrough model framework is grabbing headlines and more importantly, it’s challenging how startups approach AI.
From security and efficiency to testing, Intel® Liftoff Startups have jumped at the chance to explore DeepSeek’s potential. Here’s a look at the buzz, the breakthroughs, and why so many teams are betting on DeepSeek.
Enkrypt AI
Sahil Agarwal from Enkrypt AI leads the charge on red teaming DeepSeek-R1. According to their findings:
“3x more biased than Claude-3-Opus, 4x more vulnerable to generating insecure code than OpenAI’s o1, 4x more toxic than GPT-4o, and 11x more likely to create harmful output than OpenAI’s o1.”
Rather than just expose these issues, Enkrypt AI followed up by open-sourcing a safer version of DeepSeek-R1-Distill-Llama-8B. Their stats show a 47% overall risk reduction, 57% drop in toxicity, and 99% cut in harmful outputs. The takeaway: Yes, we need to address potential vulnerabilities in new AI models, but with collaboration and refinement, it’s possible to push AI toward safer performance.
Read More:
Red Teaming Report (PDF)
Safer Distilled Model on Hugging Face
AgenticFlow AI
Over at AgenticFlow AI, Sean P. is using DeepSeek-R1 to transform regression testing:
“I recently discovered that DeepSeek R1 can help with regression testing. To build high-quality software, we need solid test coverage. In many teams, the QA-to-Dev ratio is around 1:4. But our dev pace (fueled by AI coding tools like Cursor) leaves QA struggling to expand automation coverage.
Enter R1, hooked up to Playwright. I’m amazed by how well it follows tasks and goals with just a simple prompt. It might finally help us match automation test coverage to our rapid dev cycles - no quality compromises.
Prompt:
“You are an automation tester. Validate the functionality of the given page. When prompted to test, go to the target site, follow instructions like a normal user, then login and clone the ‘Story through Emoji’ workflow to your workspace. Website: agenticflow.ai (you can find it via the search box).”
I’m excited to see how this evolves!.”
This approach could help teams match test coverage to rapid development cycles without compromising on quality. Sean also points out that DeepSeek-R1 is “25x cheaper than o1 with almost the same performance.” It’s a perfect example of how a powerful LLM can be harnessed for very practical dev tasks—particularly in QA and automation.
Expanso/Bacalhau
DeepSeek-R1 has taken the world by storm, but how can you use it with your own data and systems? That’s exactly what an upcoming webinar hosted by Expanso/Bacalhau will cover. Sean M. Tracey plans to show how you can securely deploy and use DeepSeek AI’s latest LLMs on any system in under five minutes, all thanks to the Bacalhau Project’s streamlined approach. If you’re eager to explore a quick, secure DeepSeek deployment, sign up and add this event to your calendar:
Crunching Your Data In-Place With Open-Source LLMs
This session offers a practical look at harnessing DeepSeek with minimal infrastructure friction - perfect for teams looking to plug high-powered LLMs into existing workflows without a hassle.
TurinTech
In an article titled Innovation Under Pressure: Why DeepSeek’s Breakthrough is Good News for UK AI Startups, TurinTech CEO Mike Basios shares his perspective:
“DeepSeek's achievement isn't just another tech headline - it's a wake-up call. It shows us that constraints aren't roadblocks; they're catalysts for creativity. Innovation isn't about unlimited resources; it's about being clever with what you have.”
For TurinTech, DeepSeek signals the kind of resourceful innovation that can help UK startups (and indeed startups everywhere) do more with fewer resources.
Unsloth AI
Daniel Han from Unsloth.AI tackled quantization to shrink the DeepSeek R1 model. He reports they’ve:
“We quantized DeepSeek R1 to 1.58bit - 131GB so 80% smaller whilst being usable via Unsloth AI dynamic quantization!
- R1 has 3 dense + 58 MoE layers. MoE is quantized to 1.5bits. Rest in 4/6bit
- We can get 140 tokens/s for throughput on 2xH100 80GB!
- Also I found some tokenizer quirks!”
TitanML
The team at TitanML offers a candid perspective on what sets DeepSeek apart. Jamie Dborin (PhD) CSO & Co-founder of TitanML, breaks down what this means for AI and the market.
They see it as more than a flash-in-the-pan hype cycle: a sign that open-source AI can compete with the biggest players, provided the community rallies around it for improvements and testing.
Prediction Guard
Concerned about privacy? Daniel Whitenack from Prediction Guard insists you can run DeepSeek R1 “without sacrificing privacy.”
“I've seen soooo much confusion around the security/ privacy concerns with this model. There are definitely real concerns, but there are few things to clear up.
To start, there are two main ways you can access DeepSeek:
The second option ensures that your sensitive data remains under your control and isn't inadvertently sent to and stored by DeepSeek (the model builder). Of course you should be thinking about this kind of sensitive data leakage with any AI product (e.g., ChatGPT or Claude), even if they aren't hosted in China.
Then, once you have dealt with the privacy issues, you can move on to think about inherent model vulnerabilities and behavior (which have nothing to do with sending data to China). DeepSeek is particularly sensitive to prompt injections, and it will have its own biases related to training and fine-tuning data.”
Read More in their blog: DeepSeek R1 Security Blog
dstack
dstack released an update allowing users to run dev environments, tasks, and services on Intel® Gaudi accelerators whether you're fine-tuning or deploying DeepSeek. No Kubernetes. No Slurm. Just fast, AI-native container orchestration.
The takeaway is that simpler, more powerful infrastructure options are cropping up for early-stage teams who want to experiment with LLMs without huge overhead.
Koyeb
For Koyeb, DeepSeek R1’s multi-lingual and agentic RAG capabilities stand out. Their latest tutorial, DeepSeek-R1’s Multi-Lingual and Agentic RAG Capabilities in Practice, dives into orchestration with Koyeb and Inngest. It demonstrates how to handle diverse languages and large context windows—features important for global startups catering to diverse markets.
DeepSeek’s announcement is far from just hype. Intel® Liftoff Startups show it’s adaptable through red teaming, model alignment, and clever deployments. Whether you’re considering security, scalability, or brand-new AI frontiers, DeepSeek is quickly emerging as a powerful ally for startups.
DeepSeek-R1 is also proving its versatility on Intel’s AI PC. Running efficiently on an Intel® Core™ Ultra 7 system with 32 GB RAM, it demonstrates that advanced reasoning isn’t limited to massive LLM deployments. This marks quite a shift where smaller, distilled models can deliver incredible performance without enterprise-scale accelerators, making AI more accessible across different compute environments.
Learn how the DeepSeek-R1 distilled reasoning model performs on Intel® hardware
Read the full article
Want to Learn More?
- Red Teaming Results (PDF)
- Innovation Under Pressure: Why DeepSeek’s Breakthrough is Good News for UK AI Startups (TurinTech)
- Run DeepSeek R1 Dynamic 1.58-bit (Unsloth)
- DeepSeek-R1’s Multi-Lingual and Agentic RAG Capabilities (Koyeb)
About Intel® Liftoff
Intel® Liftoff for startups is open to early-stage AI and machine learning startups. This free virtual program helps you innovate and scale, no matter where you are in your entrepreneurial journey.
Related resources
6th Gen Intel® Xeon® Scalable Processor - Latest generation of enterprise server processors
Intel® Tiber™ AI Cloud - Cloud platform for AI development and deployment
Intel® Gaudi® 2 AI accelerator - High-performance AI training processor designed for deep learning workloads
You must be a registered user to add a comment. If you've already registered, sign in. Otherwise, register and sign in.