As many know, California Governor Gavin Newsom recently signed legislation designed to enhance online safety by installing guardrails on the development of frontier AI (artificial intelligence) models. Some are praising the push for transparency and accountability, while others worry about ambiguities and the burdens placed on developers. Opinions vary, and much depends on which side of the political aisle you stand on. But wherever you stand, the bottom line is that regulation for AI is coming, and in some cases it is already here. The fact remains that we still talk about AI in much the same vein as we did about the early Internet: as the Wild West.
In fact, the European Union has long had a law protecting individuals’ personal data and privacy. The General Data Protection Regulation has been in effect since 2018 and mandates strict data security measures. Will laws like these and Governor Newsom’s continue to spread around the world? The answer is likely yes. Now it is a matter of companies being prepared.
New research from Coherent Market Insights estimates the data governance market will grow at a compound annual rate of 19.9% from 2025 through 2032, rising from $4.75 billion to $16.93 billion. As organizations pursue greater digital transformation, they are weighing regulatory compliance alongside the need to improve data quality.
AI Governance: A Double-Edged Sword
The Coherent Market Insights report points to a dichotomy that exists today. On one side of the coin, stringent privacy laws such as the GDPR (General Data Protection Regulation) have necessitated data governance practices to ensure regulatory compliance. The reality is that noncompliance with such laws can lead to significant penalties.
On the flip side of the coin, organizations are under immense pressure to leverage insights from data to gain a competitive advantage. It seems the only way to achieve this going forward will be to have strong data governance practices in place that ensure data quality and trust.
Here’s the hard reality, though. We can debate the pros and cons, and we certainly should, but companies must also move forward. An interesting new study from the EY organization finds that companies with responsible AI governance often have better business outcomes.
Your Responsible AI Journey
Let’s break down the findings from the second phase of the EY organization’s Responsible AI (RAI) Pulse survey. Nearly four in five respondents said their company has seen gains in innovation (81%) and in efficiency and productivity (79%), while about half report boosts in revenue growth (54%), cost savings (48%), and employee satisfaction (56%).
The research suggests some key steps and a comprehensive approach must be considered as you move forward on a responsible AI journey:
- Start by defining and communicating principles and then advance to implementation and governance.
- Adopt specific KPIs (key performance indicators) to measure adherence to responsible AI principles.
- Provide training and education to employees on how to use AI responsibly.
- Adopt controls and safeguards to implement responsible AI principles.
- Assign a specific budget or funding stream for responsible AI efforts.
- Establish real-time monitoring to ensure adherence to principles.
- Conduct independent internal and/or external assessment of governance and control practices.
- Fill knowledge gaps in the C-suite with targeted training.
- Get ahead of emerging agentic AI and citizen developer risks.
Responsible AI can’t be achieved through principles alone; it requires an all-of-the-above approach.
The price tag of getting this wrong is too high to ignore. Almost every company in the survey reported financial losses from AI-related risks, and 64% experienced losses exceeding $1 million. The most common risks are noncompliance with AI regulations, negative impacts on sustainability, and bias in outputs. Issues such as explainability, legal liability, and reputational damage have so far been less prominent, but their significance is expected to grow as AI is deployed at scale.
Where are you on your responsible AI journey? What step do you need to take next to move forward with intention and success?
Want to tweet about this article? Use hashtags #datagovernance #AIregulations #IoT #sustainability #AI #5G #cloud #edge #futureofwork #digitaltransformation


