The Emergence of Shadow AI
13 December 2023
A year ago, OpenAI launched ChatGPT, a breakthrough generative AI chatbot. It attracted over a million users in five days and now has 100 million weekly users, making it the fastest-growing consumer app in history.
ChatGPT has also inspired a vast number of other applications that use Large Language Models (LLMs) to boost worker productivity, and the AI tools market continues to expand quickly. The website AITopTools lists more than 10,000 AI-based apps available to instantly access or download. OpenAI plans to open a store where users can share and sell custom GPT bots. Just this month, ByteDance, the maker of TikTok, launched Feishu 7, a workplace AI tool that reads calendars and summarises email.
Business risk managers and IT teams are scrambling to keep up.
Shadow AI is the term for AI solutions that are used without the official approval or oversight of a business or its IT department. Security teams may not know what apps their employees are using, which can ultimately threaten the security of their data.
Whilst some of these apps apply good governance and security measures—for example, those from the major software houses—too many are from small AI startups that lack strong security protections and offer little transparency around data storage or usage. What if an employee downloads an app that reads through their emails, calendar, voice messages and instant messages, summarising them all in a cloud application—has that employee effectively granted a company of unknown or dubious origin access to all of their personal data? Or consider an employee who uses a generative AI tool to create a report, thinking it is a transient interaction, only for the tool to store and share the input data without their knowledge.
Shadow AI poses a number of specific data security challenges, such as:
Data leakage: Sensitive and confidential data may be exposed to third-party servers or unauthorised users, violating data protection laws or regulations.
Data integrity: Inaccurate, biased, or fake outputs may be produced that can compromise the quality and reliability of the data.
Data governance: Established policies and procedures for data management may be bypassed, such as access control, storage and destruction.
How to Reduce Shadow AI Risks?
In November 2023, Australia, along with 15 other countries, pledged to make AI systems “secure by design” at the AI Safety Summit in Bletchley Park, UK. The agreement includes general recommendations regarding the monitoring of AI systems for abuse, protecting data from tampering, and vetting software suppliers. However, it does not tackle thorny questions around the appropriate uses of AI, or how the data that feeds these models is gathered. The Federal Government is instead paving the way for regulation under its new 2023-2030 Australian Cyber Security Strategy. The Cyber Security Strategy has identified risks associated with AI, and is promoting and supporting the safe and responsible use of this emerging technology (Shield 2 Initiative 10).
To manage the data security risks of Shadow AI, organisations can adopt the following practices:
Shadow AI audits: Determine which apps and Shadow AI tools your employees are using and why. This will help you to assess threats and offer safer alternatives.
Vendor assessments: Ask developers and vendors tough questions about their security practices.
Create and refine policies: Establish clear and comprehensive policies for AI usage, including which tools will be allowed; for what purposes; and using which types of data. Communicate and enforce these policies to all employees and stakeholders, providing guidance and support to achieve compliance. Update your Acceptable Use Policy on a regular basis.
Monitor for policy violations: Use systems to detect non-compliance with your organisation's Acceptable Use Policy. Many organisations have policies that prevent employees and stakeholders from sharing sensitive data, but few enforce them.
Identify and remediate: Monitor for instances of Shadow AI or suspicious AI deployments, and report and escalate any incidents or breaches.
Educate and train employees: Foster a culture of security awareness by consistently training employees on the benefits and risks of AI, as well as the responsibilities and expectations for AI usage.
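To make the monitoring practice above concrete, a first-pass control can be as simple as scanning outbound prompts for sensitive-data patterns before they reach an external AI tool. The Python sketch below is purely illustrative—the pattern names and regular expressions are invented for the example, and a production deployment would use a dedicated data loss prevention (DLP) product tuned to the organisation's own data classifications.

```python
import re

# Illustrative patterns only; real deployments would use a DLP tool
# configured for the organisation's own sensitive-data categories.
SENSITIVE_PATTERNS = {
    "email_address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key_hint": re.compile(r"(?i)\b(api[_-]?key|secret|token)\b"),
}

def scan_prompt(text: str) -> list[str]:
    """Return the names of any sensitive-data patterns found in the
    text of a prompt before it is sent to an external AI tool."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]

# Example: a prompt containing an email address and an API key hint
# would be flagged before leaving the organisation's network.
prompt = "Summarise this thread: contact jane.doe@example.com, API_KEY=abc"
violations = scan_prompt(prompt)
if violations:
    print(f"Blocked: prompt matches policy patterns {violations}")
```

A check like this would typically run at a network proxy or browser-extension layer, where it can flag or block traffic to unapproved AI services rather than relying on employees to self-police.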
AI tools have been proven to create real business efficiencies. If not managed appropriately, however, Shadow AI can pose a threat to executives, IT teams and whole-of-organisation security practices. As a priority, organisations must determine which third-party apps their teams already rely upon and carry out a robust assessment to manage the associated risks.