Authors: Eugenie Wirz, Rahul Unnikrishnan Nair
As Shadow AI emerges across enterprises, Intel® Liftoff startups are developing robust, scalable solutions to secure LLMs, protect sensitive data, and build trustworthy, future-proof AI systems - from the edge to the cloud.
While companies are still debating how to secure their artificial intelligence systems, employees are already using generative AI tools on the side.
Welcome to Shadow AI. This phenomenon describes the unauthorized or unsupervised use of public AI models such as ChatGPT, Claude, or other large language models (LLMs) by employees trying to boost their productivity.
Good intentions or not, using Shadow AI is risky. It can expose confidential data, breach regulations, or create business liabilities, especially if the AI gets things wrong (as it often does).
The AI security market is booming, and a new wave of startups are rising to challenge. Many of the most promising among them are backed by Intel® Liftoff. They’re building solutions that safeguard data, ensure compliance, and help businesses tap into AI’s full potential without the risks. Here’s how those solutions break down across key security needs and the tech that powers them.
LLM Security and AI Governance
Large language models (LLMs) are transformative, but unpredictable. They can hallucinate, producing false or toxic responses, or be manipulated through “prompt injection” attacks in which malicious users embed hidden instructions to force unintended outputs. That means traditional compliance controls are often insufficient.
Prediction Guard, an Intel® Liftoff Catalyst Track member, has made it its mission to de-risk LLMs. Their platform filters both harmful inputs (e.g., prompt injections) and harmful outputs (e.g., hallucinations) in a secure, private environment hosted on Intel® Tiber AI Cloud and accelerated by Intel® Gaudi® 2 processors. This infrastructure gives Prediction Guard the scale and speed needed to support use cases ranging from financial reporting to healthcare document summarization, all while protecting sensitive data and improving factual accuracy.
Raidu, another Intel® Liftoff startup, goes beyond just securing LLM outputs by offering a full-stack governance platform. Raidu integrates data masking, risk management, role-based access control, and audit logs to ensure that every AI interaction - whether for hospital discharge notes or financial contracts - remains secure and compliant with regulations such as SOC 2, GDPR, and HIPAA.
Meanwhile, Co-mind targets the root of Shadow AI directly. After discovering that teams were uploading confidential documents to public LLMs, Co-mind created a secure, private AI platform. Employees can still use advanced generative AI capabilities, but inside a corporate-controlled, secure infrastructure - with options for on-premises or private cloud deployment.
Key Concepts Explained:
- Prompt Injection: a malicious input crafted to hijack or override an LLM’s original instructions.
- Hallucinations: LLM-generated text that is incorrect, misleading, or toxic.
- Data Masking: replacing personal or sensitive identifiers with fictitious data to protect privacy during AI processing.
Confidential and Encrypted Computing
As AI systems grow, they increasingly train and run on sensitive data, bringing data privacy and confidentiality into sharp focus. Confidential computing addresses this challenge by isolating data within secure enclaves, keeping it protected even from the cloud provider.
Roseman Labs, for example, leverages encrypted multi-party computation to let organizations analyze joint data without ever exposing the raw data itself. In partnership with Intel® Liftoff, Roseman Labs recently achieved a fivefold speedup in its secure “group by” operations on 6th Gen Intel® Xeon® Granite Rapids processors, proving that secure computing can scale for real business data volumes.
Tinfoil, another Liftoff participant, delivers a fully verifiable confidential cloud AI platform. Using secure enclaves on diverse hardwares including Intel CPUs, Tinfoil guarantees that data is encrypted directly to the enclave where the model runs - ensuring no one, not even Tinfoil or the cloud provider, can view user data. The solution combines remote attestation, code integrity checks, and hardware-based trust to provide end-to-end verifiable confidentiality.
Supporting these efforts, Intel® Trust Domain Extensions (Intel® TDX) introduces hardware-isolated Trust Domains (TDs), which are secure virtual machines running in a CPU Secure Arbitration Mode (SEAM). With memory encryption, secure CPU state handling, and remote attestation, Intel TDX is enabling new levels of verifiable confidential AI in the cloud.
Key Concepts Explained:
- Confidential Computing: protecting data in use by running it inside a hardware-secured enclave.
- Remote Attestation: cryptographic proof that verifies the code and hardware environment before data is processed.
- Intel® TDX: Intel’s latest confidential computing technology, providing trusted virtual machines.
Digital Identity, Biometrics, and Deepfake Defense
Digital identities are becoming more complex. This complexity means that earning user trust and keeping out malicious bots has never been more important. But traditional CAPTCHA systems often do more harm than good, frustrating real users while letting smarter bots slip through.
Erasys has responded by launching Trustmark, a biometric-driven identity verification solution that replaces conventional CAPTCHAs. Trustmark uses typing patterns, device profiles, and behavioral metrics to create a digital signature for each user, enabling a seamless user experience while preventing bot attacks.
In parallel, Neural Defend addresses the skyrocketing risk of deepfakes. Its multimodal agentic AI can identify fake audio, images, and video in real time - protecting financial institutions and governments from advanced synthetic fraud. Neural Defend recently secured pre-seed funding to accelerate this work, reflecting strong investor confidence.
Findora, a Canadian startup, takes a holistic view of trust by combining a privacy-first search engine with a built-in deepfake detection engine. Users benefit from results ranked by credibility and verified sources while having their data safeguarded under Canadian privacy regulations.
Key Concepts Explained:
- Deepfake: synthetic media that impersonates real people or events using AI.
- Biometric Security: verifying identity through biological or behavioral traits, e.g., typing style or mouse movement.
Secure Agentic and Edge AI
AI is moving to the edge, into wearables, drones, and IoT. In these environments, both low power consumption and strong security are essential.
Falcons.AI is pioneering a neural architecture inspired by the human brain, compressing image recognition models to just 4MB while achieving energy efficiencies of up to 100x versus standard transformers. These models, trained with Intel Data Center GPUs, can run for months on a single battery, enabling secure, on-device intelligence.
MindFront, through its SynthGrid platform, is enabling secure agentic AI across enterprise systems. SynthGrid integrates security, authentication, VPN management, and deep system hooks, allowing AI agents to work safely with CRMs, ERPs, and even Microsoft Graph without risky migrations or security gaps.
Key Concepts Explained:
- Agentic AI: AI agents that act semi-autonomously to fulfill tasks across systems.
- Edge AI: AI that operates directly on local devices instead of sending data to the cloud.
Final Thoughts
In the end, artificial intelligence can only transform businesses if its security foundation is strong. Intel® Liftoff startups are showing how to build trust at every layer - from controlling prompt injection to scaling encrypted computing, from biometric security to confidential enclaves.
With hardware-backed solutions like Intel® Gaudi® AI Accelerators, 6th Gen Xeon® platforms, and Intel® TDX, these innovations prove that trustworthy AI is possible, even in a world of Shadow AI practices.
Resources & Further Reading:
- Intel Liftoff Startup Program
- Intel® Gaudi® 2 AI accelerator
- Intel® Data Center GPU Max Series
- Intel® Xeon® 6 processors
- Intel® Trust Domain Extensions (TDX)
- Prediction Guard Case Study
- Breaking Free from Shadow AI — Co-mind
- Roseman Labs Benchmark
- Tinfoil Confidential AI Platform
- Erasys Trustmark
- Neural Defend Deepfake Detection
- Falcons.AI Edge Efficiency
You must be a registered user to add a comment. If you've already registered, sign in. Otherwise, register and sign in.