Unwind Data
Data Strategy · 6 min read

The Intelligence Allocation Stack: Why AI Projects Fail

For every dollar companies spend on AI, they should be spending six on the data architecture underneath it. The Intelligence Allocation Stack is a four-layer framework for building data infrastructure that makes AI trustworthy, scalable, and actionable.

Wesley Nitikromo

March 24, 2026

For every dollar companies spend on AI, they should be spending six on the data architecture underneath it. Almost none of them do. That single imbalance explains why 88% of companies now use AI yet only 39% see any measurable impact on their bottom line.

This is not a technology problem. It is an allocation problem. Companies are putting intelligence in the wrong layer.

[Image: Data infrastructure and AI architecture diagram]

The Intelligence Allocation Stack

Over the past decade, I have built data infrastructure across fintech, e-commerce, sustainability, and SaaS. I co-founded DataBright, grew it from zero to acquisition, and have since taken interim data leads at companies ranging from early-stage startups to scaling platforms handling millions of transactions.

The pattern is always the same.

Every company that fails with AI makes the same architectural mistake: they start at the top of the stack and work down. Every company that succeeds builds from the bottom up.

I have formalized this into a framework I call the Intelligence Allocation Stack. It has four layers, and the order is not negotiable. You can read more about foundational data strategy principles in our data strategy overview.

Layer 1: Data Foundation

This is where data enters the organization, gets stored, and becomes governable. It includes ingestion pipelines, warehousing, data quality checks, and the basic infrastructure that ensures data is clean, consistent, and available.

Most companies think they have this. They do not. They have data scattered across SaaS tools, spreadsheets maintained by one person, and pipelines held together by tribal knowledge that lives in someone's head. When that person leaves, the foundation cracks and everything built on top of it starts producing unreliable outputs.

The test is simple: can three different people in your company run the same query and get the same answer? If not, Layer 1 is broken.
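That three-people-one-answer test can be automated as a consistency check that runs before anything downstream consumes the data. The sketch below is illustrative: the source names, fields, and tolerance are all assumptions, not a real pipeline.

```python
# A minimal Layer 1 consistency check: the same metric computed from two
# hypothetical copies of the data should agree. All names are illustrative.

def monthly_revenue(rows):
    """Sum the 'amount' field across a list of transaction dicts."""
    return round(sum(r["amount"] for r in rows), 2)

# Two copies of "the same" data, as they might live in a warehouse and in
# a SaaS export. In a broken Layer 1, these silently drift apart.
warehouse_rows = [{"amount": 120.0}, {"amount": 80.5}]
crm_export_rows = [{"amount": 120.0}, {"amount": 80.5}]

def layer1_consistency_check(a, b, tolerance=0.01):
    """Return True when both sources yield the same metric value."""
    return abs(monthly_revenue(a) - monthly_revenue(b)) <= tolerance

assert layer1_consistency_check(warehouse_rows, crm_export_rows)
```

If a check like this fails daily, no amount of Layer 4 spending will fix the answers coming out the other end.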

Layer 2: Semantic Layer

This is where business logic gets translated into something machines can understand. Revenue means one thing in finance and another in marketing. "Active customer" has a different definition in every department. The semantic layer creates a single, governed vocabulary that every downstream tool and every AI agent can rely on.

Without Layer 2, your AI does not understand your business. It understands your data. Those are not the same thing.

Google understood this when they acquired Looker for $2.6 billion. Looker's real value was never the dashboards. It was LookML, the semantic layer that gave organizations one language for their metrics. Today, tools like Omni and dbt's Semantic Layer are pushing this further, making business logic portable, testable, and AI-ready.
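To make the idea concrete, here is a toy semantic layer: one governed definition per metric, which every downstream tool resolves through instead of hard-coding its own. The metric names, SQL, and owners are illustrative; production systems express this in LookML or dbt's Semantic Layer rather than a Python dict.

```python
# A toy governed vocabulary: one definition per metric, resolved by every
# consumer. Definitions and field names here are assumptions for the sketch.

METRICS = {
    "revenue": {
        "sql": "SUM(order_total)",
        "owner": "finance",
        "description": "Gross order value before refunds.",
    },
    "active_customer": {
        "sql": "COUNT(DISTINCT customer_id)",
        "owner": "growth",
        "description": "Customers with an order in the last 90 days.",
    },
}

def resolve(metric_name):
    """Dashboards, notebooks, and AI agents all ask the semantic layer
    for the definition; an ungoverned metric is an error, not a guess."""
    if metric_name not in METRICS:
        raise KeyError(f"Ungoverned metric: {metric_name!r}")
    return METRICS[metric_name]["sql"]

revenue_sql = resolve("revenue")  # same definition for every caller
```

The point is the failure mode: an unknown metric raises an error instead of letting each tool, or each AI agent, improvise its own definition.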

Layer 3: Orchestration Layer

This is where data gets connected, transformed, and routed to where it needs to go. CRM syncs, reverse ETL, workflow automation, API integrations, real-time event processing. The orchestration layer is the nervous system of your data architecture.

Most companies overspend on Layer 1 (collecting data) and underspend on Layer 3 (making it usable). That imbalance means they have warehouses full of data that nobody can efficiently activate.
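The "nervous system" framing can be sketched as a tiny event router: one event fans out to every system that needs it. The event types, handlers, and payload shapes below are invented for illustration, not a real reverse-ETL API.

```python
# A toy orchestration layer: events are routed by type to every
# registered downstream consumer. All names here are illustrative.

ROUTES = {}

def route(event_type):
    """Register a handler function for one event type."""
    def register(fn):
        ROUTES.setdefault(event_type, []).append(fn)
        return fn
    return register

@route("order.created")
def sync_to_crm(event):
    return f"CRM updated for customer {event['customer_id']}"

@route("order.created")
def update_dashboard(event):
    return f"Dashboard incremented by {event['amount']}"

def dispatch(event):
    """Fan one event out to every consumer registered for its type."""
    return [fn(event) for fn in ROUTES.get(event["type"], [])]

results = dispatch({"type": "order.created", "customer_id": 42, "amount": 99.0})
```

Warehouses full of data that nobody activates are exactly what a layer like this is supposed to prevent: the data moves to where decisions happen, automatically.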

Layer 4: AI Layer

This is where models, agents, and conversational AI live. It is the most visible layer, the one executives get excited about, and the one vendors sell hardest. It is also the layer that is entirely dependent on the three below it.

An AI agent querying a broken pipeline will confidently return wrong answers. A conversational AI built on top of ungoverned data will hallucinate with authority. An LLM without a semantic layer will interpret "revenue" five different ways in the same report.

The AI does not hallucinate because the model is bad. It hallucinates because the data foundation, the semantic layer, or the orchestration layer failed to give it something trustworthy to work with.

[Image: Team reviewing AI data pipeline results on screens]

Why Companies Start at the Wrong Layer

The incentive structure pushes companies toward Layer 4 first. AI demos are impressive. Investors want to hear about agents. Boards ask about automation. Nobody gets promoted for fixing a data pipeline.

But the consequences of skipping layers compound fast.

In 2018, companies hired Data Scientists and gave them a laptop. The Data Scientists spent 80% of their time cleaning data because Layer 1 did not exist. The insights never materialized, and companies concluded that data science did not work. It did. The foundation was just missing.

In 2022, companies built dashboards that nobody trusted. Finance and marketing showed the CEO different revenue numbers because there was no semantic layer. The dashboards were not wrong. They were built on competing definitions of the same metric.

In 2026, companies are deploying AI agents on data infrastructure that one engineer understands. That engineer has not taken a vacation in two years. When they leave, the agents will keep running, confidently making decisions on data nobody can verify or explain.

Different era. Same architectural mistake. Different layer skipped.

The Tacit Knowledge Problem

There is a compounding risk that most AI strategies ignore entirely: tacit knowledge.

Stanford research published in March 2026 shows that AI has already cut entry-level developer hiring by 20% and call center jobs by 15%. Companies are shrinking teams in the name of AI efficiency.

But the people being cut are usually the ones who knew where the data lived. They knew which pipeline broke every Tuesday. They knew why the CRM and the revenue dashboard never matched. That knowledge never made it into documentation. It lived in their heads.

Smaller teams mean less institutional data knowledge. Less data knowledge means worse data governance. Worse data governance means AI making decisions on data nobody can verify. The trust gap compounds daily.

The companies that thrive with smaller teams will not be the ones that cut headcount first. They will be the ones that build Layers 1 through 3 so robustly that tacit knowledge becomes irrelevant.

The Vendor Dependency Trap

There is another risk embedded in how most companies approach Layer 4: single-provider dependency.

A well-designed Intelligence Allocation Stack is provider-agnostic at every layer. The data foundation should not depend on one warehouse. The semantic layer should not be locked to one BI tool. The orchestration layer should not collapse if one integration goes dark. And the AI layer should be designed so you can swap the model underneath without rebuilding the stack.

Google kept Wiz multi-cloud after a $32 billion acquisition because they understood this. The biggest players in AI are all building for provider independence. Most companies deploying AI are not.

Allocating Intelligence Correctly

The question is not "which AI model should we use?" The question is "where should intelligence live in our organization?"

Some intelligence belongs in the data layer: automated quality checks, anomaly detection, schema validation. Some belongs in the semantic layer: metric definitions, business rules, governed vocabularies. Some belongs in the orchestration layer: workflow automation, data routing, event-driven triggers. And some belongs in the AI layer: natural language interfaces, autonomous agents, predictive models.
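Two of the Layer 1 examples above, schema validation and anomaly detection, are small enough to sketch directly. The schema, field names, and threshold below are assumptions for illustration; the point is that this intelligence runs before any model sees the data.

```python
# Intelligence allocated to the data layer, sketched: a schema check and
# a crude anomaly gate that run before data reaches any AI. The expected
# schema, fields, and threshold are illustrative assumptions.

EXPECTED_SCHEMA = {"customer_id": int, "amount": float}

def validate_schema(row):
    """Reject rows with missing or mistyped fields."""
    return all(
        field in row and isinstance(row[field], expected)
        for field, expected in EXPECTED_SCHEMA.items()
    )

def is_anomalous(amount, history, threshold=3.0):
    """Flag values more than `threshold` times the historical mean."""
    mean = sum(history) / len(history)
    return amount > threshold * mean

row = {"customer_id": 7, "amount": 120.0}
assert validate_schema(row)
assert not is_anomalous(row["amount"], history=[90.0, 110.0, 100.0])
```

An agent in Layer 4 never has to reason about a mistyped row or a wildly implausible value, because those rows never reach it.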

The companies getting ROI from AI are the ones that deliberately allocated intelligence across all four layers. They did not dump everything into Layer 4 and hope the model would figure it out. For a deeper look at how this applies in practice, explore our AI readiness framework.

The Opportunity

For every dollar spent on AI tools, six will be spent on the data architecture underneath. That ratio has not hit the market yet, but it will. As the gap between AI adoption and AI results becomes impossible to ignore, the companies that provide Layers 1 through 3 will capture the largest share of enterprise AI spending.

The model providers will compete on intelligence. The real margin will be in the infrastructure that makes intelligence trustworthy.

The next wave of AI consulting will not sell models. It will not sell prompts. It will not sell agents. It will sell the foundation those agents stand on.

Systems beat individuals at scale. The right architecture beats the smartest model. And the companies that understand where to allocate intelligence will be the ones still standing when the hype cycle ends and the real work begins.

Data Architecture · Data Quality · Data Governance · AI Agents · Data Infrastructure · Semantic Layer · Data Engineering · AI Readiness · Data Strategy · Intelligence Allocation
Written by

Wesley Nitikromo

Founder of Unwind Data, an AI-native data consultancy based in Amsterdam. Previously co-founded DataBright (acquired 2023). Specializes in data infrastructure, data architecture, and helping companies allocate intelligence to the right layer of their stack.

Ready to unlock your data potential?

Let's talk about how we can transform your data into actionable insights.

Get in touch