
Navigating AI Governance: A Practical Guide for Enterprise Leaders and Investors
By Rohan Sharma

Rohan Sharma is an award-winning senior technology executive with extensive global experience in AI and digital transformation. He’s led initiatives for Fortune 100 companies including Apple, Disney, AT&T, Nationwide, and Honda, and serves on advisory boards for Frost & Sullivan, UCLA Anderson School’s Venture Accelerator, Harvard Business Review, and the CMO Council.
Rohan mentors at Techstars, UC San Diego, and MAccelerator, and consults for Stanford Seed. A USC-trained engineering leader, Rohan is an international speaker on leadership, presenting at TEDx and premier industry conferences. He’s authored AI & Boardroom (Springer Nature) and Minds of Machines.
In this post, Rohan provides an in-depth AI governance guide for business leaders and investors. The new breed of innovative AI-first startups presents exciting opportunities. But how should stakeholders navigate this complex landscape while ensuring AI systems are responsible and compliant? As Rohan explains, AI governance requires moving beyond traditional investment criteria to embrace a radical shift in perspective:

A new breed of AI-first startups and business models is creating unprecedented opportunities for investors but is also introducing new risks and challenges. Navigating this complex landscape requires a shift in perspective, moving beyond traditional investment criteria to embrace a more holistic approach that incorporates AI governance as a core component of due diligence.

The biggest risk in AI adoption is fear itself.

Scandals and negative headlines create a trust deficit, potentially slowing innovation and progress.

This article provides a practical, action-oriented guide for investors evaluating AI portfolio companies, with a focus on governance, so they can make informed investment decisions. One key reason AI governance is crucial for startups is that it is demanded by the large enterprises to which they sell their products and services.

Beyond the Hype: Asking the Right Questions

The allure of AI can sometimes lead to inflated expectations and overhyped claims. To avoid falling into the trap of ‘AI washing’, where AI is touted as central to a startup’s core product without substance behind the claim, investors must adopt a more discerning approach to due diligence. This starts with asking the right questions to cut through the hype and gain a deeper understanding of the company’s true AI capabilities and governance practices.

This is particularly important because, for the first time in modern history, regulation, legal frameworks, and mainstream media conversation are evolving hand in hand with innovation in AI.

Furthermore, responsible AI governance in a portfolio company and its products builds trust, accelerating decision-making and technology deployment.

Key Areas of Focus

  • Ethical Considerations: Delve into the portfolio company’s approach to addressing ethical challenges inherent in AI, particularly bias and discrimination. Responsible investing demands a proactive approach to mitigating potential harms and ensuring fairness in AI systems. This could also prevent legal challenges against the portfolio company in the future.
  • AI Governance Framework: Look for tangible evidence of the portfolio company’s commitment to responsible AI development, such as an AI Principles document or adopted ethical guidelines. This demonstrates a conscious effort to establish guardrails and ensure accountability in AI development and deployment.
  • Regulatory Compliance: Evaluate the portfolio company’s understanding of, and preparedness for, the evolving regulatory landscape. Determine the company’s risk category under frameworks like the EU AI Act and assess compliance with existing regulations relevant to its industry. Some regulations could render the business models of many portfolio companies unviable.
  • Compliance Personnel: Inquire about the portfolio company’s internal structure for AI governance. Is there a designated compliance role responsible for tracking and interpreting regulatory guidance? This signals a proactive approach to risk management and compliance.
  • Intellectual Property (IP): Assess the portfolio company’s IP strategy to protect investor interests and ensure long-term viability. Look for signs of a well-defined IP strategy and address any red flags related to potential IP self-dealing. Since many startups are built on large foundation models such as GPT and Llama, and on the cloud stacks of those same technology giants, portfolio companies should have a clear understanding of IP ownership and of the copyright challenges to which their products and services may expose them.

From Questions to Action: Deal-Gating with AI Governance

The answers to these due diligence questions provide crucial insights that inform investment decisions. By incorporating AI governance into the deal-gating process, investors can effectively filter opportunities and mitigate potential risks; a minimal screening sketch follows the flag list below.

Without effective AI governance, companies risk missteps leading to misinformation, bias, IP infringement, and data leakage. These failures can result in financial and reputational damage, including fines, lawsuits, and brand value erosion.

  • Red Flags: Certain responses should raise immediate concerns and warrant further investigation or potentially disqualify the investment altogether. These include overhyped AI claims without substantiation, a lack of a designated compliance person, and the absence of a clear IP strategy.
  • Yellow Flags: Some answers, while not necessarily deal-breakers, require more cautious evaluation. This could include involvement in high-risk AI categories under the EU AI Act or other regulations, which signals potential legal and reputational risks that need to be carefully considered.
  • Green Flags: Conversely, certain responses indicate a strong commitment to responsible AI development and governance, serving as positive indicators for investment. This includes companies that can articulate a clear understanding of AI governance principles, have implemented a robust compliance framework, and demonstrate a proactive approach to addressing ethical considerations.
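To make the gating concrete, here is a minimal, illustrative Python sketch of how an investment team might encode these red, yellow, and green signals as a first-pass screen. The field names (ai_claims_substantiated, has_compliance_owner, eu_ai_act_risk_tier, and so on) and the gating rules are assumptions for illustration, not a standard rubric; any real screen would be tailored to the fund’s own diligence checklist.

```python
from dataclasses import dataclass

# Illustrative only: the fields and gating rules below are assumptions,
# not a prescribed diligence rubric.
@dataclass
class GovernanceProfile:
    ai_claims_substantiated: bool   # can the company evidence its AI claims?
    has_compliance_owner: bool      # designated person tracking regulatory guidance
    ip_strategy_documented: bool    # clear IP ownership and licensing position
    eu_ai_act_risk_tier: str        # e.g. "minimal", "limited", "high", "prohibited"
    has_ai_principles_doc: bool     # published AI principles / ethical guidelines
    bias_mitigation_process: bool   # documented approach to bias and fairness

def deal_gate(profile: GovernanceProfile) -> str:
    """Return a coarse screening outcome: 'red', 'yellow', or 'green'."""
    # Red flags: unsubstantiated AI claims, no compliance owner, no IP strategy,
    # or a prohibited use case under the EU AI Act.
    if (not profile.ai_claims_substantiated
            or not profile.has_compliance_owner
            or not profile.ip_strategy_documented
            or profile.eu_ai_act_risk_tier == "prohibited"):
        return "red"
    # Yellow flags: a high-risk category, or governance artefacts still missing.
    if (profile.eu_ai_act_risk_tier == "high"
            or not profile.has_ai_principles_doc
            or not profile.bias_mitigation_process):
        return "yellow"
    return "green"

# Example usage with a hypothetical portfolio company
candidate = GovernanceProfile(
    ai_claims_substantiated=True,
    has_compliance_owner=True,
    ip_strategy_documented=True,
    eu_ai_act_risk_tier="limited",
    has_ai_principles_doc=True,
    bias_mitigation_process=True,
)
print(deal_gate(candidate))  # -> "green"
```

Consistent with the flags above, a yellow outcome is not a rejection: it routes the opportunity to deeper manual review rather than disqualifying it outright.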

Aligning with Global Standards: EU AI Act and NIST AI RMF

The global regulatory landscape for AI is rapidly evolving, with frameworks like the EU AI Act and the NIST AI RMF setting new standards for responsible AI development and deployment. Investors should encourage portfolio companies to align their practices with both frameworks, ensuring preparedness for international markets and mitigating future compliance risks.
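One hedged way to operationalise alignment with a framework such as the NIST AI RMF is to track evidence against its four core functions (Govern, Map, Measure, Manage). The sketch below assumes a simple evidence register; the example artefacts listed are illustrative placeholders, not items the framework prescribes in this form.

```python
# Hypothetical evidence register keyed by the NIST AI RMF core functions.
# The artefact names are illustrative assumptions, not prescribed by the RMF.
nist_rmf_evidence: dict[str, list[str]] = {
    "Govern":  ["AI principles document", "named owner for AI risk and compliance"],
    "Map":     ["inventory of AI use cases and their deployment contexts"],
    "Measure": ["bias and performance evaluation reports"],
    "Manage":  ["incident response plan for AI failures"],
}

def alignment_gaps(evidence: dict[str, list[str]]) -> list[str]:
    """Return the RMF functions for which no alignment evidence is recorded."""
    return [function for function, artefacts in evidence.items() if not artefacts]

print(alignment_gaps(nist_rmf_evidence))  # -> [] when every function has evidence
```

Even a crude register like this gives an investor a quick view of where a portfolio company’s governance evidence is thin.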

In extreme cases, governance failures may trigger regulatory backlash, potentially stifling innovation across entire sectors.

Understanding ‘Foundation Models’: Licensing, Dependencies, and Risks

‘Foundation models,’ large-scale AI models trained on massive datasets, are becoming increasingly prevalent in AI applications. While powerful, these models present unique challenges and risks that investors must understand.

  • Licensing Risks: Carefully evaluate the licensing agreements associated with foundation models, particularly those developed by third parties. Understand potential restrictions on usage, modification, and commercialisation.
  • Commercial Dependencies: Analyse the portfolio company’s reliance on foundation models and the potential impact on its business of changes in licensing terms or in the availability of these models. Over-dependence on third-party foundation models can create vulnerabilities and limit flexibility; a lightweight inventory sketch follows this list.
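As a sketch of how that analysis might be operationalised, the hypothetical inventory below records a company’s foundation-model dependencies and flags licence labels that warrant closer review. The model entries, licence labels, and the review set are illustrative assumptions, not a legal assessment; actual licence interpretation belongs with counsel.

```python
# Hypothetical dependency inventory: the entries, licence labels, and the
# review set are illustrative assumptions, not legal advice.
model_dependencies = [
    {"model": "hosted proprietary LLM (API)", "provider": "third party", "licence": "proprietary-api"},
    {"model": "open-weight LLM fine-tune", "provider": "third party", "licence": "community-licence"},
    {"model": "in-house classifier", "provider": "internal", "licence": "internal"},
]

# Licence labels that, in this sketch, trigger extra scrutiny of usage,
# modification, and commercialisation terms.
NEEDS_REVIEW = {"proprietary-api", "community-licence"}

def licence_review_queue(dependencies: list[dict]) -> list[dict]:
    """Return the dependencies whose licence terms warrant closer scrutiny."""
    return [dep for dep in dependencies if dep["licence"] in NEEDS_REVIEW]

for dep in licence_review_queue(model_dependencies):
    print(f"Review licence terms for {dep['model']} ({dep['provider']})")
```

Beyond licensing, listing dependencies in one place also makes concentration risk visible: over-reliance on a single provider’s model or cloud stack is easier to spot and discuss.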

Beyond Checklists: Vigilance, Adaptation, and the Human Element

AI governance is not a static checklist but a dynamic process that demands ongoing vigilance and adaptation. The rapid pace of innovation in AI requires investors to stay informed about emerging regulations, copyright debates, and technological advancements to effectively adjust their investment strategies and due diligence processes.

Beyond regulations and frameworks, AI governance ultimately comes down to people.

Assess the portfolio company’s culture and its commitment to ethical AI development. Seek out founders who are open to feedback, guidance, and continuous learning in the realm of AI governance. Prioritise investments in portfolio companies that thoughtfully design systems to foster effective human-AI collaboration, ensuring human oversight and intervention where appropriate.

Additionally, consider how portfolio companies are positioning their AI solutions for adoption by target enterprises. Evaluate their understanding of enterprise-specific AI governance requirements and their ability to meet the stringent standards of large corporate clients. By embracing an informed approach to AI governance, investors can position themselves to capitalise on the transformative potential of AI, while mitigating risks and contributing to a more responsible and ethical AI-powered future.
