"Private AI" has become the most misused phrase in enterprise software this year.
When a vendor tells a COO that their AI is "private," three times out of four they mean this: your prompts travel to OpenAI, we promise not to train on them, and we sign a data processing agreement (DPA). That is not private. That is outsourcing with paperwork.
I've seen this play out repeatedly, and the pattern is consistent. A Fortune 500 industrial firm we spoke with this quarter had paid a large AI vendor $1.4M over 18 months. Their compliance team eventually asked one question: *where do the prompts physically go?* The answer was "a data center in Oregon, under our contract." The vendor's contract, not the customer's control. The program paused the following week.
The version that actually works for regulated industries looks different. Prompts never leave your VPC. The model runs on infrastructure you can point at on a map. Fallback to commercial APIs can exist, but only through a governed boundary where sensitive tokens are anonymized before they cross it, and de-anonymized when the response returns. That boundary is the product. Everything else is a feature.
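To make that boundary concrete, here's a minimal sketch of the anonymize-then-restore flow, assuming a simple placeholder scheme. The regex patterns, the placeholder format, and the `call_external_api` stub are all illustrative assumptions, not any vendor's implementation; a real deployment would use policy-driven entity detection rather than two regexes.

```python
import re
import uuid

# Illustrative patterns only; real boundaries use policy-driven detection.
SENSITIVE_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def anonymize(prompt: str) -> tuple[str, dict[str, str]]:
    """Swap sensitive spans for opaque placeholders; return the mapping."""
    mapping: dict[str, str] = {}
    for label, pattern in SENSITIVE_PATTERNS.items():
        def swap(match: re.Match) -> str:
            token = f"<{label}_{uuid.uuid4().hex[:8]}>"
            mapping[token] = match.group(0)
            return token
        prompt = pattern.sub(swap, prompt)
    return prompt, mapping

def deanonymize(response: str, mapping: dict[str, str]) -> str:
    """Restore the original values once the response is back inside the VPC."""
    for token, original in mapping.items():
        response = response.replace(token, original)
    return response

def call_external_api(prompt: str) -> str:
    # Stub standing in for the commercial API on the far side of the boundary.
    return f"Received: {prompt}"

if __name__ == "__main__":
    raw = "Draft a reply to jane.doe@acme.com about SSN 123-45-6789."
    safe, mapping = anonymize(raw)
    print(safe)  # only placeholders cross the boundary
    print(deanonymize(call_external_api(safe), mapping))  # originals restored locally
```

The design point the sketch makes: the `mapping` never crosses the boundary, so the external API only ever sees placeholders, and re-identification happens entirely on infrastructure you control.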
Here's the test I'd use if I were evaluating a vendor next week:
1. Can you name, in writing, every place a single prompt travels?
2. Does your architecture require anything beyond our VPC to run in steady state?
3. If our connectivity to your cloud drops tomorrow, what still works?
If the answer to any of those three is hedged, you don't have private AI. You have a contract about privacy.
How many of the three can your current stack answer cleanly? The AI Readiness Index (30 questions, 2 minutes) will tell you and email you the full scoring: [link].
---
Repurposing queue (auto-filled by repurposer when approved)
- [ ] Carousel 8-slide → 02-content/drafts/carousels/
- [ ] Short-form 60s → 02-content/drafts/shorts/
- [ ] Thread 5–7 posts → 02-content/drafts/threads/
- [ ] Medium expansion → 02-content/drafts/medium/