Has the Rise of Generative AI Caused a Deterioration in Data Trust?
This Week: An investigation into the alarming patterns emerging across enterprise data teams

Dear Reader…
The promise of AI-driven insights is being undermined by a crisis of confidence in the very data that powers these systems. This week we reveal why data engineers are fighting an uphill battle against declining trust.
The numbers tell a stark story. In just twelve months, the percentage of organisations reporting that they do not completely trust the data underpinning their decision-making has jumped from 55% to 67%, a 12 percentage point increase that should have every data engineer questioning whether their carefully architected pipelines are actually serving the business.
This isn't merely a statistical blip. It's a fundamental shift that coincides precisely with the enterprise adoption of generative AI, raising uncomfortable questions about whether our rush to implement AI capabilities has inadvertently sabotaged the very foundation they depend upon: trusted data.
This internal perspective is mirrored at a global level, with the Edelman Trust Barometer showing a three-point decline over the previous year, an indication that larger forces are at work in relation to trust in information.

Check out the full report on the Data Innovators Exchange
The Quality Paradox
Perhaps the most damning evidence comes from data quality assessments. Despite unprecedented investment in data infrastructure, organisations rating their data quality as "average or worse" increased from 66% to 77% between 2023 and 2024. This deterioration is happening whilst data teams are under more pressure than ever to deliver AI-ready datasets.
"We're seeing a perfect storm," explains one senior data engineer at a FTSE 100 company, speaking on condition of anonymity. "Management wants AI insights yesterday, but they don't understand that generative AI amplifies every data quality issue we've been trying to fix for years."
The irony is palpable. Generative AI systems demand higher data quality standards than traditional analytics, yet their implementation often exposes the inadequacies of existing data governance frameworks. When AI systems confidently generate fabricated information—the phenomenon known as "hallucinations"—they don't just undermine trust in the AI itself, but cast doubt on the entire data ecosystem.
The Governance Crisis
Our investigation reveals that data governance has become the primary bottleneck in AI implementations. The percentage of organisations identifying governance as their top data integrity challenge has surged from 27% to 51%, an 89% relative increase (24 percentage points) that suggests traditional governance frameworks are buckling under the demands of AI workloads.
The challenge is multifaceted. AI systems require comprehensive data lineage tracking, rigorous quality controls, and compliance frameworks that many organisations simply haven't built. Meanwhile, 49% of data teams report lacking adequate tools for automating data quality processes—a critical gap when dealing with the volume and velocity requirements of AI training datasets.
"The old ways of managing data quality through manual checks and periodic audits simply don't scale to AI requirements," notes a data platform architect at a major retailer. "We're essentially trying to retrofit governance onto systems that were never designed for it."
The Skills Gap Widens
Perhaps most concerning for the profession is the widening skills gap. Sixty percent of organisations cite insufficient AI skills and training as a barrier to launching AI initiatives. This shortage creates a dangerous dynamic where data engineers are expected to support AI implementations without adequate understanding of the unique requirements these systems impose on data infrastructure.
The problem extends beyond technical skills. Data engineers increasingly find themselves caught between business stakeholders demanding rapid AI deployment and the technical reality of building trustworthy data foundations. This tension is exacerbated by the "black box" nature of many AI systems, which makes it difficult to explain data requirements to non-technical stakeholders.
Trust by Numbers
The human factor in this crisis cannot be overlooked. Research involving nearly 6,000 global knowledge workers reveals that 54% of AI users don't trust the data used to train AI systems. More tellingly, 71% state that consistently inaccurate AI outputs would permanently damage their trust in AI systems.
This creates a vicious cycle for data teams. When business users encounter conflicting reports or AI-generated insights that don't align with reality, they develop lasting scepticism that persists even when accurate data is presented. The result is that valuable insights go unutilised, and decision-makers revert to intuition-based approaches rather than data-driven strategies.
"We've spent years building sophisticated analytics capabilities, only to watch executives ignore our dashboards because they don't trust the AI-generated summaries," reports a data engineering manager at a financial services firm. "It's incredibly frustrating."
Regional Variations Tell a Story
The trust crisis isn't uniform across regions, revealing important cultural and regulatory factors. East Asian markets generally show higher tolerance for AI systems, whilst European markets—particularly France and Germany—demonstrate greater scepticism. In the United States, 35% of workers avoid generative AI entirely, suggesting significant resistance to adoption.
These regional differences highlight the importance of local context in data governance strategies. European data teams, operating under GDPR and emerging AI regulations, face additional compliance burdens that their counterparts in less regulated markets can avoid. This regulatory complexity adds another layer to the trust challenge, as teams must balance innovation with compliance requirements.
The Transparency Imperative
Despite these challenges, our investigation identifies clear patterns among organisations successfully maintaining data trust in the AI era. The most critical factor is transparency. Workers consistently emphasise the importance of understanding how AI systems use their data, with 82% citing accurate data as critical to building trust, and 78% highlighting the need for complete, holistic datasets.
Successful organisations are implementing what industry experts term "explainable AI" frameworks—systems that can articulate how decisions are made and which data sources influence outcomes. This transparency is particularly crucial for data engineers, who need to understand data lineage and quality impacts throughout the AI pipeline.
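Feature attribution is one practical building block of such frameworks. As a minimal sketch, scikit-learn's permutation importance can show which inputs a model actually leans on; the synthetic dataset below is a stand-in for production data.

```python
# Sketch: surfacing which input features drive a model's predictions,
# using scikit-learn's permutation importance. The synthetic data is
# a stand-in -- real pipelines would use production features.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: shuffle one feature at a time and measure how
# much held-out accuracy drops. A large drop means the model relies on it.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for i in result.importances_mean.argsort()[::-1]:
    print(f"feature_{i}: {result.importances_mean[i]:.3f} "
          f"+/- {result.importances_std[i]:.3f}")
```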
Privacy-Enhancing Technologies Show Promise
Forward-thinking organisations are adopting privacy-enhancing technologies such as federated learning and differential privacy to rebuild trust whilst maintaining AI capabilities. These approaches allow models to be trained without exposing sensitive data, easing both privacy and governance concerns.
"Federated learning has been a game-changer for us," explains a data architect at a healthcare technology company. "We can train models across multiple data sources without centralising sensitive patient data, which addresses both trust and compliance concerns."
The Executive Paradox
Interestingly, our investigation reveals a paradox at the executive level. Whilst overall trust in data declines, 63% of US executives using generative AI daily report growing confidence in these systems. Moreover, 38% express willingness to trust AI for business decisions—a figure that would likely alarm the data engineers supporting these systems.
This disconnect between executive enthusiasm and data team concerns suggests a communication gap that could prove costly. Executives experiencing the benefits of AI tools may not fully appreciate the data quality challenges their teams face in maintaining these systems.
Fighting Back: What Data Engineers Can Do
Despite the challenges, our investigation identifies several strategies that data engineering teams can employ to rebuild trust:
Implement comprehensive data lineage tracking to provide transparency into how data flows through AI systems (a minimal lineage-capture sketch follows this list). This visibility is crucial for both debugging issues and building stakeholder confidence.
Establish automated data quality monitoring that can operate at the scale and speed required by AI workloads. Manual quality checks simply cannot keep pace with AI data requirements.
Develop clear communication frameworks that help business stakeholders understand data limitations and AI system capabilities. This includes setting realistic expectations about what AI can and cannot deliver.
Invest in privacy-enhancing technologies that allow AI development whilst addressing legitimate privacy and security concerns.
Build governance frameworks specifically designed for AI workloads, rather than attempting to retrofit traditional governance approaches.
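As referenced in the first strategy above, here is a minimal sketch of pipeline-level lineage capture: a decorator that records which inputs produced which output, and when. The step names and in-memory log are illustrative assumptions; real systems emit this metadata to a catalogue (for example, OpenLineage-compatible tooling).

```python
# Minimal sketch of pipeline-level lineage capture. The in-memory log
# and dataset names are illustrative -- real systems persist this
# metadata to a lineage catalogue.
import functools
import time

LINEAGE_LOG: list[dict] = []

def track_lineage(output_name: str, inputs: list[str]):
    """Record that `output_name` was derived from `inputs` by this step."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            result = func(*args, **kwargs)
            LINEAGE_LOG.append({
                "step": func.__name__,
                "inputs": inputs,
                "output": output_name,
                "run_at": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
            })
            return result
        return wrapper
    return decorator

@track_lineage(output_name="features.training_set",
               inputs=["raw.orders", "raw.customers"])
def build_training_set():
    ...  # join, clean, and feature-engineer here
    return "training_set"

build_training_set()
print(LINEAGE_LOG)  # auditable trail: which sources fed the training set
```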
What patterns are you seeing in your organisation?
We'd like to hear from you about your experiences with AI implementations and data trust challenges.
The Path Forward
The evidence suggests that the relationship between generative AI and data trust is complex and evolving. Whilst AI has certainly exposed and amplified existing data quality issues, it has also created new opportunities for organisations willing to invest in proper foundations.
The data engineers who will thrive in this environment are those who recognise that trust is not a technical problem to be solved, but an ongoing relationship to be nurtured. This requires continuous investment in quality infrastructure, governance frameworks, and transparent communication about AI capabilities and limitations.
The stakes couldn't be higher. As one senior data engineer put it: "We're not just building data pipelines anymore—we're building the foundation for business trust in an AI-driven world. Get it wrong, and the consequences extend far beyond our technical systems."
The question isn't whether generative AI has caused a deterioration in data trust—the evidence clearly shows it has. The real question is whether data engineering teams can rise to meet this challenge and rebuild the trust that modern businesses desperately need.