lunal-dev/home


Confidential AI

Confidential is the confidential computing stack. We run your AI workloads (inference, training, agents) in hardware-encrypted Trusted Execution Environments (TEEs). Your data and code stay private while being processed. Your code can't be tampered with. You can cryptographically verify both claims without trusting us.

You can run the Confidential stack on your own hardware, or host your workloads on our Confidential Cloud.
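The "cryptographically verify" claim above rests on remote attestation: the TEE hardware produces a signed report containing a measurement (hash) of the loaded code, and a client checks both the signature and the measurement before trusting the enclave. A minimal sketch of that check, using an HMAC as a simplified stand-in for the vendor's signature (real TEEs such as AMD SEV-SNP or Intel TDX sign reports with a vendor-rooted key chain; all names here are hypothetical):

```python
import hashlib
import hmac

# Hypothetical stand-in for the TEE vendor's signing key. In real
# hardware this is a private key fused into the chip, vouched for
# by the vendor's certificate chain.
VENDOR_KEY = b"vendor-root-key"

def make_report(code: bytes) -> dict:
    """Simulate the TEE producing a signed attestation report:
    a measurement (hash) of the loaded code plus a signature."""
    measurement = hashlib.sha256(code).hexdigest()
    signature = hmac.new(VENDOR_KEY, measurement.encode(),
                         hashlib.sha256).hexdigest()
    return {"measurement": measurement, "signature": signature}

def verify_report(report: dict, expected_code: bytes) -> bool:
    """Client-side check: the report is genuinely signed AND the
    measurement matches the code the client expects to be running."""
    expected_measurement = hashlib.sha256(expected_code).hexdigest()
    expected_sig = hmac.new(VENDOR_KEY, report["measurement"].encode(),
                            hashlib.sha256).hexdigest()
    return (hmac.compare_digest(report["signature"], expected_sig)
            and report["measurement"] == expected_measurement)

code = b"model-server-v1"
report = make_report(code)
assert verify_report(report, code)             # untampered code verifies
assert not verify_report(report, b"backdoor")  # different code is rejected
```

The point of the design is that trust ends at the hardware vendor's key, not at the cloud operator: a tampered workload changes the measurement, and a forged report fails the signature check.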

Use Cases

  • Private inference. Guarantee data privacy during inference. Customer prompts, responses, and model interactions are never visible to you or your infrastructure.
  • Weight protection. Protect proprietary weights from extraction during inference or fine-tuning. Weights never leave hardware-enforced secure enclaves.
  • Private training. Train on sensitive data and cryptographically prove exactly what data was used.
  • Agent security. Agents run inside TEEs with hardware-enforced credential isolation. Tokens and API keys never exist in plaintext outside a TEE.
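The agent-security bullet describes a key-broker pattern: a credential is released only to an enclave whose measurement is on an allow-list, so tokens never exist in plaintext outside a TEE. A minimal sketch under that assumption (all names hypothetical; a real broker would verify a hardware-signed attestation report, not a bare hash):

```python
import hashlib

class KeyBroker:
    """Hypothetical broker that holds secrets and releases one only
    when the requesting enclave's measurement is on the allow-list."""

    def __init__(self):
        self._secrets = {}
        self._allowed = set()

    def allow(self, measurement: str) -> None:
        self._allowed.add(measurement)

    def store(self, name: str, secret: str) -> None:
        self._secrets[name] = secret

    def release(self, name: str, measurement: str) -> str:
        # In a real deployment the measurement arrives inside a
        # hardware-signed attestation report, verified before release.
        if measurement not in self._allowed:
            raise PermissionError("attestation failed: unknown measurement")
        return self._secrets[name]

agent_code = b"agent-v1"
measurement = hashlib.sha256(agent_code).hexdigest()

broker = KeyBroker()
broker.store("API_KEY", "sk-placeholder")  # illustrative secret value
broker.allow(measurement)

assert broker.release("API_KEY", measurement) == "sk-placeholder"
```

An unapproved or modified agent hashes to a different measurement, so `release` raises instead of handing over the key.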

Get Started

To add privacy to your existing infra, see components. To run workloads on our infra, see cloud. Or contact us.

About

Confidential is software for private, secure AI workloads, agents, and inference. It lets you offer on-prem-grade security and privacy guarantees on off-prem, hosted infrastructure. It protects data and models in use with Trusted Execution Environments (TEEs), and it runs on your existing hardware.
