Private AI for Mechanical Contractors — 20 Questions, Answered Straight
If you're evaluating AI for a $50M–$1B mechanical, industrial, or manufacturing contracting business, you've probably been pitched twelve platforms in the last six months. Most of them promised the same three things and none of them told you what happens when your compliance officer reads the Data Processing Addendum.
This is the straight version. No vendor handwaving, no "enterprise-grade" labels, no slides. Twenty questions we hear every week from COOs, CFOs, VPs of Operations, CIOs, and the legal counsel they loop in. Answered with the detail you actually need to make a decision.
If you read one question and want the version tailored to your environment, take the AI Readiness Index. It's 30 questions. Five minutes. You'll get back a document scoped to your current systems, your regulatory posture, and the three wedge use-cases most likely to pay back inside 90 days.
---
Positioning — what this actually is
1. What does DKube actually do for a mechanical contractor?
We build private, agentic AI systems that absorb the cognitive and paperwork load your operations team carries every day — RFIs, submittals, change orders, AP reconciliation, bid-document scoping, safety incident triage — and return that reclaimed time to revenue-generating work. The platform is DKubeX 2.0. The delivery model is 12-week pilots with a fixed scope and a measured ROI outcome. No six-month requirements phase. No perpetual professional services.
2. How is this different from Microsoft Copilot, ChatGPT Enterprise, or the AI features my ERP vendor just added?
Three ways, and all three matter.
First, none of those tools are actually private. Your prompts — which include project numbers, vendor pricing, change-order language, and often personnel data — leave your VPC and traverse a third-party cloud. The DPA says it's secure. The network trace says otherwise. For contractors doing federal work, or handling prevailing-wage data, or running under a joint-venture NDA, that's a problem the legal team catches at month three.
Second, those tools are horizontal. They don't know what a submittal log looks like. They don't know that a "change order" in Viewpoint Spectrum is structured differently than the one in Sage 300 CRE. The agents we build are scoped to the specific document shapes, ERP schemas, and workflows of mechanical contracting.
Third, they're rented cognition. The moment you stop paying, the embeddings, the fine-tuning, the workflow agents — all of it is gone. DKubeX 2.0 runs on infrastructure you own or we operate for you. The agents, the vector stores, the audit trails are yours.
3. Who is this for inside my company?
The sharpest-return buyers are the people who feel the paperwork drain directly. President of Construction. VP of Operations. VP of Preconstruction and Estimating. VP of Service. Directors of Controls/BAS. The CIO or IT Director is a critical partner on deployment and security, but the pilot sponsor is almost always the operator whose team is buried.
CFOs are the ROI gatekeepers — they don't sponsor, but they can kill a deal if the numbers don't defend themselves.
---
Security, privacy, and what "private AI" actually means
4. What does "private AI" actually mean in your setup — specifically?
Three tests we'd use if we were evaluating any vendor, including ourselves.
One: does the prompt leave your VPC at any point during inference? If the answer is "it goes to our cloud but we don't retain it," that's not private. That's a promise.
Two: do the model weights stay under your control, or does the vendor retain the right to rotate, fine-tune, or deprecate them on their schedule? If the vendor can change the model you've validated, your compliance posture is theirs to invalidate.
Three: is there a cryptographically verifiable audit trail — who prompted what, which model version responded, what data it retrieved? If your auditor can't reconstruct a single answer six months later, you don't have private AI. You have a black box with a confidentiality clause.
DKubeX 2.0 passes all three. Prompts never leave your network. Model weights are frozen and versioned. Every inference is logged with a tamper-evident hash chain.
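The tamper-evident property is easier to evaluate once you see the mechanism. Here is a minimal Python sketch of a hash-chained audit log — illustrative only, with made-up field names, not DKubeX internals. Each entry's hash covers both its own record and the previous entry's hash, so editing any historical record invalidates every entry after it.

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder hash for the first link in the chain

def append_entry(chain, record):
    """Append an audit record, chaining it to the previous entry's hash."""
    prev_hash = chain[-1]["hash"] if chain else GENESIS
    payload = json.dumps(record, sort_keys=True)  # canonical serialization
    entry_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    chain.append({"record": record, "prev_hash": prev_hash, "hash": entry_hash})

def verify_chain(chain):
    """Recompute every hash; any edit to an earlier entry breaks all later ones."""
    prev_hash = GENESIS
    for entry in chain:
        payload = json.dumps(entry["record"], sort_keys=True)
        expected = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
        if entry["prev_hash"] != prev_hash or entry["hash"] != expected:
            return False
        prev_hash = entry["hash"]
    return True

chain = []
append_entry(chain, {"user": "pm-042", "model": "llm-v3.1", "prompt_id": "a1"})
append_entry(chain, {"user": "ap-007", "model": "llm-v3.1", "prompt_id": "b2"})
print(verify_chain(chain))                   # True
chain[0]["record"]["user"] = "someone-else"  # tamper with history
print(verify_chain(chain))                   # False
```

This is the test your auditor should be able to run: recompute the chain independently and get a yes/no answer without trusting the vendor.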
5. Where does the data live? Can it leave my network?
You pick the topology. On-premises on hardware you own. In your existing AWS/Azure/GCP VPC. Or in a DKube-operated sovereign cloud region with a single-tenant dedicated deployment. In all three cases, the answer to "does the data leave my network" is the same: no, unless you explicitly turn on a feature that requires it, and even then we tell you which byte is going where.
6. What about prevailing-wage, Davis-Bacon, GSA, or DoD work?
This is where commercial AI tools break. We've deployed for contractors working GSA schedules and DoD subcontracts where FAR 52.204-21 applies. DKubeX 2.0 supports FIPS 140-2 validated cryptography, runs inside GovCloud or IL4/IL5 enclaves, and ships with pre-built audit export for DCAA reviews.
7. Do you train on our data?
No. Full stop. Your prompts and documents are used for retrieval-augmented generation within your own deployment — meaning the model sees them at inference time, answers, and forgets. Nothing goes back into any model's training set. You can verify this yourself with network egress monitoring; we encourage it.
---
Integration with your existing stack
8. Does this integrate with Viewpoint Spectrum, Sage 300 CRE, Procore, JD Edwards, IFS, or Oracle Primavera?
Yes to all of them, and yes to the ones you didn't list. Our integration layer reads from the underlying database (MSSQL, Oracle, DB2, Postgres) through read-only service accounts, or via the vendor's API where one exists. We've shipped connectors for the eight ERP/project-control systems most common in mechanical contracting. If yours isn't in the list, we write the connector as part of the pilot scope; it takes 2-4 days.
9. What about Bluebeam, Procore Submittals, and the document management side?
Direct integration. Bluebeam markups are parsed. Procore submittal logs are ingested and routed. Outlook and SharePoint document stores are indexed with respect for existing access-control lists — which means the AI surfaces only documents the prompting user has permission to see.
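The ACL-respecting retrieval described above reduces to a simple post-retrieval filter. The sketch below is illustrative, not DKubeX source: document IDs, group names, and the `allowed_groups` field are assumptions, standing in for ACL metadata copied from the source system at index time.

```python
def filter_by_acl(retrieved_docs, user_groups):
    """Keep only documents the prompting user is permitted to see.

    Each doc carries an allowed-groups list copied from the source
    system's ACL (SharePoint, Procore, etc.) when it was indexed.
    """
    user_groups = set(user_groups)
    return [doc for doc in retrieved_docs
            if user_groups & set(doc["allowed_groups"])]

docs = [
    {"id": "submittal-114", "allowed_groups": ["project-eng", "pm"]},
    {"id": "payroll-q3", "allowed_groups": ["finance"]},
]
visible = filter_by_acl(docs, ["project-eng"])
print([d["id"] for d in visible])  # ['submittal-114']
```

The design point: the AI never sees a document the user couldn't open themselves, because filtering happens before the retrieved text reaches the model.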
10. We run a Medius AP automation layer. Does this replace it?
No. It augments it. Medius handles the invoice capture and three-way match. Our AP agent sits in the exception queue — the 15-30% of invoices Medius flags as needing human review — and resolves the ones that just need a packing-slip cross-reference or a field note correlation. The mechanical contractor we worked with reduced their exception review time from 7,000 hours per year to 2,100.
11. What infrastructure do we need? Do we need to buy GPUs?
Depends on your deployment topology. For in-VPC cloud deployment, no new infrastructure — DKubeX 2.0 provisions what it needs in your existing cloud account. For on-premises, we've deployed production systems on a single 8-GPU server (H100s or A100s) serving a 400-user contractor. For smaller pilots, the pilot runs on our infrastructure and migrates to yours at production cutover.
---
Pilots, pricing, and timeline
12. How long does a pilot take?
Twelve weeks. Not six months. That time is distributed as follows: two weeks of discovery and scope freeze (we call it survey-and-demo, not a requirements phase), eight weeks of build-and-iterate with weekly reviews, and two weeks of UAT and handoff. Any vendor who needs longer than this for a scoped wedge use-case is either padding the engagement or doesn't have a platform.
13. What does a pilot cost?
Our founding-pilot slots for mechanical, industrial, and manufacturing contractors are priced at $25K for scoped wedge use-cases and $75K for full-stack pilots covering three wedges simultaneously. Pilot fees apply against year-one production licensing if you choose to continue. If the ROI measurement we agree on at kickoff doesn't hit, you owe nothing beyond what you've already paid.
14. What's the smallest wedge we can start with?
The three we've seen return the fastest ROI for mechanical contractors:
One — Submittal and RFI triage. Your project engineers are spending 12-18 hours per week routing, annotating, and chasing status on submittals that have predictable answers 60% of the time. An agentic layer collapses the first-pass review to minutes and hands the edge cases to the engineer.
Two — AP exception reconciliation. Described in question 10.
Three — Bid-document scope normalization for preconstruction. Your estimators read 200-page bid documents to extract the twelve scope items that matter. An agent does the first pass in 4 minutes and the estimator spends their time on the three high-judgment items instead of all twelve.
Any of these can be scoped as a $25K pilot and measured against your existing baseline.
15. How do you prove ROI in 90 days?
We agree on the ROI metric at kickoff, before a line of code is written. For AP, it's exception-queue hours per month. For submittals, it's cycle time from receipt to resolution. For estimating, it's bid turnaround time and scope-capture accuracy versus the award documents. We measure your current baseline in week one, build against it, and remeasure in week twelve. The delta is the ROI. If it doesn't pay back within 12 months at production volume, you don't have to sign the renewal.
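The baseline-to-remeasure arithmetic is simple enough to put in front of your CFO before kickoff. This sketch uses the AP exception-queue metric with entirely hypothetical numbers (hours, loaded rate, and license cost are placeholders, not DKube pricing):

```python
def roi_delta(baseline_hours_per_month, week12_hours_per_month,
              loaded_rate_per_hour, annual_license_cost):
    """Week-one baseline vs. week-twelve remeasure, annualized payback."""
    saved_hours_per_year = (baseline_hours_per_month - week12_hours_per_month) * 12
    annual_value = saved_hours_per_year * loaded_rate_per_hour
    payback_months = annual_license_cost / (annual_value / 12)
    return saved_hours_per_year, annual_value, payback_months

# Hypothetical: 580 baseline hours/month drops to 175 at week twelve,
# at a $65/hour loaded rate, against a $120K annual license.
hours, value, payback = roi_delta(580, 175, 65, 120_000)
print(hours, value, round(payback, 1))  # 4860 315900 4.6
```

The point of agreeing on this formula at kickoff is that neither side gets to redefine the metric after the build.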
---
People and change management
16. Does this replace my estimators, PMs, or AP team?
No, and any vendor telling you otherwise is selling you a problem. The teams doing this work have the judgment you need; what they don't have is time. Our agents absorb the low-judgment, high-volume tasks so those teams can spend their hours on the things that actually win bids, protect margin, and catch errors. Every production deployment we've shipped has resulted in the same headcount doing more valuable work — not fewer people doing the same work.
17. Who on my team has to learn a new tool?
Almost no one. We build agents that live inside the tools your team already uses — Outlook, Teams, Bluebeam, your ERP, Procore. The agent shows up as a comment, a flagged exception, a drafted response, or a recommendation inside existing workflows. Training is typically a 45-minute session and a one-page reference card.
18. What happens if the AI is wrong? How do we catch errors?
Three layers. First, the agent declares its confidence on every output — high-confidence items route to auto-approve queues, medium to human review, low to an escalation stream. Second, a random sample of high-confidence outputs is surfaced weekly for audit. Third, every output is logged with the retrieval sources it used, so when something is wrong, you can trace exactly why in under 30 seconds.
The one thing we do not do is let the agent act autonomously on anything with financial, safety, or compliance consequence. Every decision with real-world impact has a human approver in the loop, by design.
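The first layer — confidence-based routing — can be sketched in a few lines. The thresholds below are illustrative placeholders, not DKubeX defaults; in practice they'd be tuned per use-case during the pilot.

```python
def route(output, high=0.90, low=0.60):
    """Route an agent output by its declared confidence score.

    Thresholds are hypothetical: >= high goes to auto-approve
    (still randomly sampled for weekly audit), >= low goes to
    human review, and everything else escalates.
    """
    score = output["confidence"]
    if score >= high:
        return "auto-approve"
    if score >= low:
        return "human-review"
    return "escalation"

print(route({"confidence": 0.95}))  # auto-approve
print(route({"confidence": 0.72}))  # human-review
print(route({"confidence": 0.40}))  # escalation
```

Note that per the human-in-the-loop rule above, anything with financial, safety, or compliance consequence would bypass the auto-approve path entirely regardless of score.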
---
Company credibility and long-term fit
19. Who builds and maintains the agents? Us or you?
Both, in a sequence. For the first 90 days, DKube builds. For months 3-6, your IT or ops team shadows and co-develops. By month 9, your team is building new agents on the platform independently, with DKube in an advisory seat. This is deliberate — we don't want to be a professional-services dependency. The founding-pilot model explicitly funds knowledge transfer, not vendor lock-in.
20. How do you sunset a model? What's the audit trail if we ever need to tear this out?
Every model version is immutable and signed. Every inference is logged with the model hash, the retrieval context, the user identity, and the output. If a model is deprecated or replaced, the historical audit trail survives the replacement. If you ever choose to terminate the engagement, you receive a full export of your agents, your vector stores, and your audit logs — in open formats, on portable media. We don't hold your intelligence hostage.
---
One more thing
The contractors we've worked with who moved fastest on this didn't start by evaluating every vendor for six months. They started by mapping the three places in their operation where people were spending time on cognitive paperwork that a well-scoped agent could absorb. Then they ran a 12-week pilot against one of the three.
If you want to see what those three places look like in your operation — specifically — the AI Readiness Index is the fastest way. Thirty questions, five minutes, a scoped report back that you can hand to your board. No sales call required to get it.
The math we've seen consistently: every hour of cognitive paperwork an agent absorbs can be redeployed into bid pursuit, service density, or preventive maintenance. At the typical mechanical contractor margin, 2,000 reclaimed hours per year compound into $400K-$800K of margin-adjusted revenue. That's the number that turns "AI project" into "P&L line item."
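That range implies roughly $200-$400 of margin-adjusted revenue per reclaimed hour — an inferred rate, not a figure quoted above. The arithmetic is a one-liner you can rerun with your own numbers:

```python
reclaimed_hours = 2_000
# $200-$400 per hour is inferred from the $400K-$800K range above,
# not a quoted rate; substitute your own margin figures.
for rate in (200, 400):
    print(f"${reclaimed_hours * rate:,}")  # $400,000 then $800,000
```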
What's on your three-list?