Mitigating the risk of shadow AI in organisations by Ben Davey MSc

In my last blog, we looked at what shadow AI is and why it threatens organisations. I argued that organisations should implement strategies to ensure responsible AI integration is secure, governed, and aligned with corporate policies while fostering innovation. These strategies should complement existing IT policies and governance frameworks.

Step by step

Taking a strategic approach is important. It will help mitigate the risks of shadow AI while ensuring that AI technologies serve as assets, driving innovation and efficiency and minimising threats.

I have found the following three-step approach to be helpful. Before you begin each step, consider whether you would benefit from engaging specialist support, such as an AI ethics or IT expert. They may be useful in identifying potential risks and developing effective strategies to mitigate them.

Step one – Implement a robust acceptable use policy (AUP)

An AUP is a document that sets out the rules employees must follow in the workplace. It explains what employees can and can’t do when using corporate computers, networks, websites or systems. An AUP can lay the foundation for a guidance framework on AI use, helping to establish clear expectations and standards for anyone using the IT infrastructure or bring-your-own-AI (BYOAI) tools.

If you’ve already got an AUP, edit it to incorporate AI. If you haven’t, create one to cover general use, and ensure that it also includes the following:

  • Defining AI – Establish clear definitions to ensure all employees understand AI and its applications.
  • Ethical considerations – Provide guidelines on transparency, accountability, and fairness to align AI use with organisational values and societal norms.
  • Prohibited uses – Specify prohibited uses of AI tools to prevent activities that could harm the organisation or violate legal standards.
  • Data management – Implement privacy and data protection policies to ensure AI tools handle sensitive information securely and comply with data protection laws.
  • Security reporting – Establish procedures for reporting security issues to enable swift responses to potential threats.
  • Legal considerations – Ensure compliance with relevant laws and regulations to avoid legal repercussions and ensure ethical AI deployment.
  • Monitoring and training – Provide ongoing education to ensure employees are aware of risks and proper practices, promoting a culture of responsible AI use.

Step two – Define a playbook

A playbook outlines specific processes, procedures, and best practices for managing AI tools within an organisation. Ensure that you include these key elements:

  • Standard operating procedures (SOPs) – Document detailed processes for AI tool approval, usage, and monitoring to ensure consistent and compliant use.
  • Cyber risk management – Outline the steps to take in a shadow AI incident, including isolating the unauthorised tool and mitigating potential damage.
  • Roles and responsibilities – Define responsibilities for AI management, including IT managers, an AI governance team, and a crisis management team.
  • Best practices – Include recommendations for safe and effective AI integration, focusing on compliance, security, and performance.
  • Training materials – Provide resources to educate employees on responsible AI usage and the importance of adhering to established policies and procedures.

Step three – Create a roadmap

Now that you have secured the foundations by establishing the rules for using AI in the workplace, it’s time to start planning. Establish a comprehensive roadmap that outlines the strategic plan and timeline for AI adoption and integration across the organisation. The roadmap should target executive leadership to define strategic goals, project managers to oversee implementation, and procurement officers to manage the acquisition of AI tools in line with the roadmap.

The roadmap should include:

  • AI landscaping – Identify and document all AI tools in use, both approved and unapproved, to understand the full scope of AI adoption within the organisation.
  • End-to-end governance – Map out governance processes from AI tool selection to deployment, ensuring all steps are transparent and well-documented.
  • Procurement processes – Define the procedures for acquiring AI tools, ensuring they meet the organisation’s security, compliance, and performance standards.
  • Milestones and timelines – Establish clear milestones and timelines for each phase of AI adoption to ensure the strategy is implemented effectively.
  • Key objectives – Outline the primary goals of AI integration, ensuring they align with organisational priorities and regulatory requirements.

If you have a question for Ben or the Triad team, please get in touch.