Are You Doing AI — or Just Using It?
- Bill Bierds
In the race to deploy AI, most companies have forgotten the one thing that determines who actually wins: control over their data.
Every company is “doing AI” now. Copilots, chatbots, summarization tools, RAG pipelines: the tooling is everywhere, and the announcements come weekly. Yet most organizations, if pressed, would admit something uncomfortable: they don’t actually control their data. It lives in silos. It’s locked inside SaaS platforms. It flows through pipelines they don’t fully own.
This is the gap no one talks about at the all-hands meeting: AI adoption is accelerating, but data sovereignty is stalling.
The wrong question is winning boardrooms
Executives are asking “which model should we use?” when the more consequential question is “who controls the data that trains and feeds it?” The model is increasingly a commodity. GPT, Claude, Gemini, Llama — they are converging. The moat isn’t the intelligence layer. It’s the data layer underneath it.
Meanwhile, the real lock-in is quietly happening at the data level. It’s not the app you’re paying for. It’s the decade of customer interactions, transaction logs, and operational records that only exist inside that app’s proprietary schema — and that you can’t easily extract, port, or recombine.
“In 2026, the question won’t be whether enterprises are using AI — it will be whether their data systems are capable of sustaining it.”
VentureBeat put it plainly at the start of this year: the companies that scale AI successfully won’t be those with the best models. They’ll be those with durable data infrastructure — the kind that doesn’t crumble when a vendor changes its pricing, rewrites its API, or goes bankrupt.
Three symptoms of a data sovereignty problem
1. Siloed data. Your most valuable signals live in disconnected systems — CRM, ERP, data warehouse, SaaS tools — that don’t speak to each other in real time.
2. Platform-throttled speed. You can’t move as fast as your AI strategy demands because your data pipeline is gated by a vendor’s API limits, schema constraints, or release cadence.
3. Invisible lock-in. Switching costs aren’t about the software license. They’re about the proprietary data formats, embeddings, and integrations that only work inside one vendor’s ecosystem.
These aren’t edge cases. According to recent industry analysis, 67% of organizations are actively working to reduce single-vendor AI dependency — and migration costs, when companies do finally exit, average $315,000 per project. The irony is that prevention is architecturally simple. Execution, however, requires a strategic shift in how companies think about their data.
- 67% of enterprises are actively reducing single-vendor AI dependency
- $315K: the average cost to migrate AI workflows after vendor failure
- 93% of enterprises now operate in multi-cloud environments
The architecture that changes the equation
The answer isn’t to avoid AI platforms — it’s to decouple your data from them. The companies getting this right share a common pattern: they’ve separated the data layer from the application and model layers, using an adapter-based architecture that allows any AI system to plug in without holding the data hostage.
Think of it as a three-layer stack:
Data layer (yours to own): Raw and processed data lives in infrastructure you control — whether on-prem, in a cloud data warehouse, or in a vector database that supports open standards. This layer belongs to you unconditionally.
Orchestration layer (yours to govern): An AI gateway or abstraction layer routes requests to whatever model or service is appropriate — and can be swapped without touching the data. This is where MCP (Model Context Protocol), now rapidly becoming an industry standard, plays a critical role: it defines how AI systems connect to data sources in a vendor-agnostic way.
Application layer (composable, not locked): The user-facing tools and workflows sit on top. When the model changes, the application doesn’t break. When a vendor changes pricing, you route around it in minutes, not months.
Doing AI vs. using AI
Using AI means plugging into someone else’s model and letting their platform define what’s possible. It’s fast to start, and it generates demos beautifully. But it compounds dependency over time. Every integration, every fine-tuned prompt, every embedding stored in a proprietary format is a brick in a wall you’re building around yourself.
Doing AI means treating data as a strategic asset — one that appreciates over time and can be moved, combined, and applied to whatever the best model is next year. It means asking hard questions before signing vendor contracts: Can we export our data in full? Can we port our embeddings? What happens to our pipelines if this company is acquired?
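The embedding-portability question has a simple practical test: can your vectors round-trip through a vendor-neutral format? The sketch below uses plain JSON with hypothetical document IDs purely as an illustration — Parquet or npy files work just as well. If this round trip is possible, the embeddings can move with you; if it isn’t, they belong to the vendor.

```python
# Portability check sketch: embeddings serialized to an open format
# (plain JSON here). All IDs and vectors are hypothetical.

import json

# Hypothetical embeddings keyed by document id.
embeddings = {
    "doc-001": [0.12, -0.53, 0.88],
    "doc-002": [0.04, 0.91, -0.27],
}

# Export: no proprietary schema, no vendor SDK needed to read it back.
exported = json.dumps({"dim": 3, "vectors": embeddings})

# Re-import anywhere -- the round trip is the portability test.
restored = json.loads(exported)
assert restored["vectors"]["doc-001"] == [0.12, -0.53, 0.88]
```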
The companies that will compound on their AI investments are the ones who answer “yes” to those questions today — not the ones racing to ship the most demos.
The strategic audit: three questions worth asking now
FOR LEADERS AND ARCHITECTS
1. Do we own our data, or does a vendor control access to it? Map every critical data source and identify which ones are locked inside a proprietary schema.
2. If our primary AI vendor disappeared tomorrow, how long would recovery take? If the answer is “months,” you have a data architecture problem, not an AI problem.
3. Is our AI stack composable? Can we swap models, providers, or tools at the orchestration layer — without touching the data layer?
The companies that will define the next five years of enterprise AI aren’t necessarily the ones with the most advanced models in production. They’re the ones that built the right foundation and kept the data for themselves.