Should Your Board Approve the AI Policy?
Ask These 6 Questions
Organizations regularly struggle with the question of which policies and decisions should be made at the board table and which should stay with management.
A good example of this debate is the AI policy.
Boards of directors are questioning their role and responsibilities in guiding and overseeing their organization’s AI journey, wondering whether that role should extend beyond general oversight to approving the AI policy.
There is plenty of expert guidance recommending that the board approve the AI policy, and plenty of boards have approved AI policies. However, not everyone is convinced it is a board-level policy; some argue it is more of an operational policy, like technology and data policies (which most boards don’t approve).
What is right for your organization?
Just as an AI policy should be tailored to the workplace context, whether it should be board-approved should be assessed based on how AI is used and how it can impact the business.
In a related post, I describe a quick 6-step decision filter to help you evaluate who should approve the AI policy in your organization – board or management.
Below I have applied that decision filter to determine whether the board of a sample organization – in its context and based on its use of AI – should approve the AI policy.
Scenario:
A mid-sized financial services organization has developed an AI policy to guide staff on the use of artificial intelligence tools. The board has been intentionally elevating digital and data-related oversight in recent months due to the organization’s innovation-focused strategic goals. The CEO is considering whether to place the AI policy on the upcoming board agenda “for discussion and approval” before management rolls it out across departments.
Before deciding if the board should approve the policy, let’s apply the Decision Filter:
1. Are There Legal Requirements or Expectations?
Do laws, regulations, contracts, or bylaws require board approval for this decision? Even if not, are there regulatory expectations, governance guidelines, or legal risks that imply the board should be involved?
The applicable laws, requirements and expectations depend on this organization’s governing arrangement, jurisdiction and activities.
⚠️ Possibly.
Let’s assume here that there are no legal requirements applicable to this business requiring the board to approve an AI policy. [Otherwise the other filters would be irrelevant 😊] However, there is generic legal guidance (articles) from law firms in the jurisdiction recommending that AI policies be approved by the board of directors.
2. What Is the Risk Level to the Organization?
Does this decision impact routine business activities, or could it materially affect the mission, performance, reputation, compliance, resiliency or culture? High-risk decisions usually belong at the board table.
✅ Moderate to High.
In addition to fairly standard internal workplace AI usage, the organization is developing new apps and tools that use AI to enhance customer experience, detect fraud, analyze customer data, provide product recommendations, etc. Governance of these analytical AI tools will add process rigour and aim to mitigate biased decision-making that could impact operational compliance and duties to customers. AI regulation is expected, and there may be moderate penalties for offences. AI is an evolving field, and its risks are therefore also evolving and uncertain.
3. What is the Long-Term Impact?
Will this decision be short-term and reversible, or long-term and precedent-setting? Will it shape material capital allocation, future innovation, organizational structure, or long-term strategies? If the decision is high impact with a long tail, the board likely needs to weigh in.
✅ Moderate.
Decisions made under the policy will influence how AI is used; some of these decisions are short-term or easily changed. However, some AI choices may shape material future innovation in data analysis and critical customer products and would be difficult to reverse. Because AI is evolving quickly, technology choices are unlikely to be locked in for the long term, as flexibility will be needed.
4. Does the Board Bring Special Value?
Does the board bring independent perspective, diverse experience, or critical oversight not otherwise present? Could it help identify blind spots, challenge assumptions, or reduce conflict of interest concerns? If the board’s input doesn’t enhance the quality of the decision, it may not need to make it.
⚠️ Possibly.
Although certain board members are knowledgeable about AI, they do not bring any special expertise that management lacks. Management does not have personal conflicts requiring independent review. Given that AI is a newer, complex area, the diverse views of a board discussion could be valuable, but a board decision does not add obvious value.
5. What’s the Organizational Context?
Size and maturity: Early-stage or smaller organizations often need more board involvement. Mature organizations may need the board to pull back to more impactful areas.
Leadership strength: Experienced and trusted management teams often earn greater delegation.
Past problems: If this area has caused issues before, tighter oversight may be needed, at least temporarily.
Other sources of accountability: Are there independent experts, auditors, regulators, or partners involved who provide sufficient oversight?
The organization is mid-size, moderately mature, with a strong executive team but limited prior experience navigating AI risks. The organization has extensive policies to mitigate risk and guide technology and data use, as well as a robust risk management framework.
👉 Organizational context suggests that the board does not need heightened involvement to counterbalance management weaknesses. However, board involvement may be expected, and diverse perspectives may be valuable.
6. What Are Stakeholder Expectations?
Would regulators, funders, shareholders or the public expect board involvement in this decision? Is there reputational risk if the board is not involved? What are peers doing? Does that set a de facto standard? How is your organization different?
The financial services industry is highly regulated, and the regulator defines its expectations with specific requirements (which, as discussed above, currently do not include board sign-off of the AI policy). Employees and clients would expect the organization to conform to typical industry practices. Management believes that employees will take the policy seriously regardless of whether it is board-approved. Peers are adopting AI policies, and many are board-approved.
👉 Peer practices may nudge the organization towards board sign-off, but otherwise stakeholder expectations regarding board approval are neutral.
Conclusion: The Board Should Approve the AI Policy
In this case, given how AI is being used in this organization, the heightened risk, and peer practices pointing towards board involvement, board approval of the AI policy sends an important governance signal.
As this filter relies on subjective interpretations of risk, impact and value, your organization may assess these factors differently.
* * *
Next Steps for Boards and CEOs
✅ Use the above example, adjusted for your organization – for an AI policy or any other key decision – to determine whether it should go to the board.
Need help building or refreshing your delegation model?
Let’s talk. Clarifying decision rights is one of the highest-leverage changes you can make to unlock a more strategic, empowered boardroom.