The Emperor Has No Clothes (and AI Is Shouting It)

Remember Andersen's fairy tale "The Emperor's New Clothes"? Two weavers promise the sovereign a magnificent garment, invisible to anyone unfit for their office or hopelessly stupid. The advisors are careful not to say they see nothing. The people applaud. Until a child shouts what everyone knows: the emperor is naked.

 

AI is that child.

 

Here's another change brought by artificial intelligence, and this time it's not about automation. AI is exposing how fragile the foundations of corporate knowledge really are, not just for AI itself but for all the enterprise processes that preceded it.

 

92% of companies plan to increase AI investments over the next three years, yet only 1% of C-level executives describe their generative AI projects as mature [1]. The gap between executive enthusiasm and operational reality has never been wider, and AI is bringing it to the surface fast.

 

The data crisis was easy to ignore as long as people could compensate. Teams built decent dashboards even from dirty data. Analytics ran on incomplete datasets. The fact that data lakes were actually data swamps (poorly cataloged, inconsistent, full of duplicate records) was swept under the rug, because knowledge workers filled the gaps.

 

Everyone nodded, pretending the infrastructure was solid. Reporting a problem meant becoming responsible for solving it, and implicitly pointing a finger at colleagues.

 

What Happens When Data Actually Works

When companies put their data in order, AI transforms customer experience in ways that justify the investment. Marketing proposes contextual offers based on actual purchase history, not demographic assumptions. Support teams predict and prevent churn before the customer even thinks of leaving. Product teams identify unmet needs from usage patterns that human analysts would struggle to spot.

 

Most companies, however, fail to achieve these results. The foundations don't support the weight.


 

The Invisible Scaffolding Collapses

AI systems are sophisticated, capable, and designed to operate autonomously. But autonomy fails when the knowledge base depends on analysts, IT teams, and operators constantly compensating for poor data quality. That invisible scaffolding worked with previous technologies because people knew how to work around problems. With AI, those structural flaws become operational failures on a large scale.

 

The emperor's magnificent robes, when put to the test, were human workarounds.

 

Think about what happens when an AI system tries to process customer data that has seven different definitions across as many departments. A human analyst knows which definition to use based on context, unwritten rules, years of institutional knowledge. AI doesn't have that context. It sees seven competing truths and produces seven different answers. Where is the truth?

 

But there's an aspect worth noting: what human intelligence made a manageable limitation for previous technologies becomes a measurable problem for AI. And what can be measured can be solved.

 

The Gap No One Wants to Admit

The numbers tell a story that leadership would prefer not to hear. According to BCG, 75% of executives rank AI among the top three strategic priorities, but only 25% are deriving significant value from it [2]. A three-to-one ratio between ambition and concrete results.

 

When 75% of companies increase investments in data management because of generative AI [3], they're implicitly admitting they can't implement the technology they promised investors without first fixing the foundations. This honesty creates opportunities. Companies that treat data quality as infrastructural work, rather than innovation theater, are building something meant to last.

 

According to McKinsey, 70% of organizations achieving the best results from AI encountered significant difficulties precisely with data: from defining data governance processes, to the ability to rapidly integrate data into models, to insufficient amounts of training data [12].

 

If even those who succeed in generating value from AI have these problems, the picture for other companies is predictable. Only 26% of Chief Data Officers say they're confident their data can support new revenue streams enabled by AI [10]. When employees use AI at a personal level, they control the context, understand the limitations, and can verify outputs against their own knowledge. When they use enterprise AI built on poorly organized data, they see it produce hallucinations, contradict itself from one query to the next, and present absurdities with absolute confidence.

 

To be clear, it's not that artificial intelligence is somehow responsible for this situation. It's simply a strong wake-up call that things aren't working and require attention.

 

Centralizing Data Doesn't Make It Coherent

Organizations often react to AI failures by rushing to consolidate data, convinced that centralization will unlock value. The data refutes this belief: 82% of companies that have implemented Master Data Management programs still dedicate one or more days per week to resolving data quality problems. And 80% still have divisions operating in silos, despite centralized infrastructure [9].

 

The emperor gets a bigger wardrobe, but the clothes remain imaginary.

 

Bringing together fragmented data without discipline doesn't generate insights. It generates noise at scale. Duplicates multiply, definitions collide, and AI systems inherit confusion rather than clarity. Misaligned formats, duplicate records, and incomplete datasets distort outputs regardless of how centralized storage is.

 

What separates usable data from unusable data is structure: consistent definitions, shared standards, and continuous attention to quality. Standardized data structures, uniform naming and formatting conventions, constant processes of deduplication, cleaning, and enrichment.
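To make "consistent definitions and uniform formatting" concrete, here is a minimal Python sketch of ingestion-time normalization. The field names, aliases, and rules are invented for illustration; a real pipeline would drive them from a shared data dictionary rather than hard-code them:

```python
import re

def normalize_customer(record: dict) -> dict:
    """Apply hypothetical naming and formatting conventions to one record."""
    out = {}
    # Uniform naming: map legacy field names onto one canonical schema.
    aliases = {"cust_name": "name", "customer_name": "name",
               "e_mail": "email", "mail": "email"}
    for key, value in record.items():
        out[aliases.get(key, key)] = value
    # Uniform formatting: collapse whitespace in names, lowercase emails.
    if isinstance(out.get("name"), str):
        out["name"] = re.sub(r"\s+", " ", out["name"]).strip().title()
    if isinstance(out.get("email"), str):
        out["email"] = out["email"].strip().lower()
    return out

print(normalize_customer({"cust_name": "  ada   LOVELACE ",
                          "e_mail": "Ada@Example.COM"}))
# → {'name': 'Ada Lovelace', 'email': 'ada@example.com'}
```

Trivial as it looks, it is exactly this kind of rule, applied consistently at every entry point, that keeps "customer" from meaning seven different things downstream.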

 

Data doesn't stay reliable on its own. As soon as it circulates between teams, tools, and models, it begins to degrade. Governance slows that degradation, not as a control mechanism, but as a framework of trust. When data lineage (tracking data flow) is clear, ownership is defined, and quality is continuously monitored, AI outputs become explainable, defensible, and usable in real decisions [4].

 

The Unstructured Data Challenge

The vast majority of enterprise data is unstructured and largely unused. According to IBM, only about 1% of enterprise data is leveraged in traditional large language models, and less than 1% of unstructured data exists in a format suitable for AI processing [4]. This limited view of reality helps explain why only 16% of AI initiatives reach enterprise scale [5].

 

Meanwhile, broader access to data raises the stakes. As AI systems consume and generate information at increasing speed, risks related to security, privacy, and misuse increase. The global average cost of a data breach reached $4.88 million in 2024 [8], and new threats like data leaks and prompt injection attacks create vulnerabilities that didn't exist before.

 

The most prepared organizations treat security as a data problem, not just an infrastructure problem. This means discovery capabilities to identify and classify sensitive data, protection through encryption and access controls, and monitoring to detect anomalous behaviors early [4].
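As a toy illustration of what "discovery and classification" means in practice, here is a Python sketch that flags likely sensitive data with pattern matching. The patterns and category names are invented for this example and deliberately simplistic; real discovery tools use far richer detectors (checksums, context, ML classifiers):

```python
import re

# Illustrative detectors only; not production-grade.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "phone": re.compile(r"\b\+?\d[\d\s().-]{7,}\d\b"),
    "iban":  re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
}

def classify(text: str) -> set:
    """Return the set of sensitive-data categories found in a text field."""
    return {label for label, rx in PATTERNS.items() if rx.search(text)}

print(classify("Contact: jane.doe@example.org, +39 02 1234 5678"))
```

Running a classifier like this over every column is what lets an organization know where its sensitive data actually lives before AI systems start reading it.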

 


 

The Business Cost of Pretense

The consequences of approximate data management go well beyond failed AI projects. For 67% of executives, the data used to make decisions in their company is not completely reliable, up from 55% the previous year [11]. This translates into slowed AI adoption, with months spent fixing basic problems; distorted insights that produce inaccurate forecasts and misaligned objectives; and delayed return on investment, as teams rush to fill data gaps before seeing any results.

 

Silos create inherently inefficient structures that require additional steps for data preparation and use, increasing the cost and complexity of AI programs and slowing access to the information needed for decisions [4]. Dispersion raises concerns about compliance, access control, and trust, precisely when organizations need them most.

 

A recent S&P Global report shows that 42% of companies abandoned most of their AI initiatives in 2025, a dramatic jump from 17% the previous year [6]. These weren't model failures. They were data failures, entirely predictable ones.

 

Gartner reinforces the picture with a sharp prediction: by 2026, 60% of AI projects lacking adequate data will be abandoned [7]. Organizations investing wisely verify data readiness before deployment, set longer timelines, and treat governance as infrastructure, not as an accessory cost.

 

If AI Reveals the Problem, It's Also the Cure

So far the picture might seem discouraging. But the fairy tale has a second act that the original doesn't tell: the child who denounced the emperor's nakedness also knows how to sew, and does so at a speed no traditional tailor could match.

 

This is the news that changes the perspective. Even when data is a disaster, putting everything in order with AI is enormously simpler than it was even just two years ago. Modern AI systems don't just diagnose data quality problems: they solve them. Entity resolution algorithms identify and unify duplicate records across different databases in hours, not weeks. Natural language processing models classify and structure unorganized data that previously remained inaccessible. Automatic data profiling tools identify anomalies, inconsistencies, and gaps that would require months of manual work.
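As a naive sketch of the entity-resolution idea mentioned above (not any specific product's algorithm; the company names, similarity measure, and threshold are all illustrative), in Python:

```python
import re
from difflib import SequenceMatcher

def normalize(name: str) -> str:
    # Strip case, punctuation, and spacing before comparing.
    return re.sub(r"[^a-z0-9]", "", name.lower())

def similar(a: str, b: str, threshold: float = 0.85) -> bool:
    return SequenceMatcher(None, normalize(a), normalize(b)).ratio() >= threshold

def deduplicate(names: list) -> list:
    # Greedy single-pass clustering: compare each record against the
    # first member of each existing cluster. Real systems add blocking,
    # multiple fields, and learned matchers; this shows only the core idea.
    clusters = []
    for name in names:
        for cluster in clusters:
            if similar(name, cluster[0]):
                cluster.append(name)
                break
        else:
            clusters.append([name])
    return clusters

print(deduplicate(["ACME S.p.A.", "Acme SpA", "Globex Corp", "acme s.p.a."]))
# → [['ACME S.p.A.', 'Acme SpA', 'acme s.p.a.'], ['Globex Corp']]
```

Three spellings of the same supplier collapse into one cluster. Scaled up across millions of records, this is the kind of work that used to take weeks of manual reconciliation.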

 

This radically changes the economics of data quality. In the past, cleanup was so expensive and slow that many companies preferred to live with the disorder. Today the cost of remediation has dropped to the point where not doing it becomes the irrational choice.

 

This doesn't mean it's just a matter of pressing a button. Human oversight remains essential for defining rules, validating results, and deciding on exceptions. But the ratio of effort to results has changed significantly.

 

What Actually Works

Organizations achieving concrete results from AI aren't just busy experimenting with the most innovative technologies. They're doing quieter, harder work: strengthening data foundations, clarifying responsibilities, and building trust in the outputs AI produces.

 

AI-ready companies follow a recognizable pattern:

 

 

They verify before implementing.

They map where customer data resides, how it's defined, and where inconsistencies exist, before committing resources to new AI initiatives. They use AI itself to accelerate this diagnostic phase.

They invest in governance that lasts.

Data quality isn't a one-time job. It requires continuous management and constant monitoring.

They unify with structure.

Integration includes consistent organization, standardized definitions, and quality controls. Centralization without these elements creates different problems, not better insights.

They build incrementally.

Instead of organization-wide rollouts, they start with a high-value use case, fix the data foundations for that specific case, demonstrate value, and then expand. And they use the results of each iteration to train AI data quality tools on the specificities of their organization.

 

These companies understand that AI-ready data must be unified and made accessible to break down silos and create economies of scale. It must be governed through clear policies and standards that ensure integrity and security. It must be protected with discovery and defense mechanisms that prevent breaches and abuse. And it must be supported by teams with a deep understanding of AI concepts and their responsible use [4].

 

 

The Window Is Opening

AI doesn't transform companies. It exposes them. And what it reveals is often uncomfortable: years of deferred maintenance, competing priorities that put data quality on the back burner, and infrastructures that were never designed to support autonomous systems.

 

The child in Andersen's fairy tale had no special powers. He simply said what everyone saw but no one wanted to admit. AI is playing that role now, bringing to light data foundations that have never been solid enough to support what we're building on top of them.

 

The difference from the fairy tale is what happens next. Today's child doesn't just call out the problem: he also knows how to solve it, and does so faster than anyone thought possible. Some organizations will treat this phase as a crisis, canceling investments and waiting for "better" AI that works with messy data. Others will leverage AI itself to create order, while their competitors continue to pretend everything is fine.

 

This choice may determine the company's very future.

Translating data into genuine customer engagement requires the right strategy

Our experts are available to discuss the possibilities

Get in touch

 

Sources

[1] McKinsey & Company, The State of AI: How Organizations Are Rewiring to Capture Value

[2] Boston Consulting Group, AI Radar: From Potential to Profit — Closing the AI Impact Gap

[3] Deloitte AI Institute, The State of Generative AI in the Enterprise

[4] IBM, What Is AI-Ready Data?

[5] IBM Institute for Business Value, CEO Study 2025

[6] S&P Global / 451 Research, Voice of the Enterprise: AI & Machine Learning, Use Cases 2025

[7] Gartner, Lack of AI-Ready Data Puts AI Projects at Risk

[8] IBM, Cost of a Data Breach Report 2024

[9] McKinsey & Company, Master Data Management: The Key to Getting More from Your Data

[10] IBM Institute for Business Value / Oxford Economics, CDO Study 2025

[11] Precisely / Drexel University LeBow College of Business, 2025 Outlook: Data Integrity Trends and Insights

[12] McKinsey & Company / QuantumBlack, The State of AI in Early 2024: Gen AI Adoption Spikes and Starts to Generate Value





