The Technology Readiness Council (TRC) empowers schools to thrive in a digitally connected world. Our mission is to help institutions build a strong, future-ready foundation. Right now, that foundation faces a turbulent new security challenge: Shadow AI.
Tech Directors and Data Protection Officers (DPOs) are familiar with the concept of Shadow IT: every month or so, someone turns out to be using an online service the school has never heard of, let alone approved. Today’s challenge is far more insidious. We are no longer dealing with a handful of unapproved applications; we are facing an overwhelming wave of instantly accessible AI tools.
The sheer volume of new, free AI tools, from browser-based chatbots to generative creative services, means schools can’t possibly vet or block every application students and staff are using. Every day, a new risk vector opens up beyond your tech defenses. This lack of centralized control is the fastest way to undermine a privacy program, because most of these unvetted AI services are completely opaque about their data practices.
Imagine your school’s data protection protocols are the core of your security. If a tool is officially vetted, we know exactly where the data is stored, who has access, and how it’s protected. When a student or staff member uses an unapproved Shadow AI tool, they are effectively bypassing all governance.
EU schools are obligated to protect sensitive data under the GDPR and the requirements of the EU AI Act. The Act specifically categorizes AI used in education (such as systems that evaluate learning outcomes) as "High-Risk," a classification that requires providers and deployers to ensure the system is transparent, with clear instructions on its capabilities, limitations, and data logging.
The risk from Shadow AI is that it bypasses these checks entirely. When a student uploads their essay or research notes into a public Large Language Model (LLM), the data leaves the school’s governance altogether: there is no vetted contract, no known storage location, and no control over who can access the input or whether it is retained.
The result is a total loss of accountability and traceability, leaving your institution exposed to regulatory penalties and major reputational damage. To gain control, we need to move beyond reacting to every new tool and start building a robust governance strategy.
Moving from a reactive, prohibition-only mindset to a proactive, governance-and-provisioning strategy is the only way to effectively mitigate Shadow AI.
Your time is too valuable to chase every shiny new AI gadget. You need a fast, tiered system to evaluate risk, allowing small innovations to flourish while stopping the really dangerous tools immediately.
This framework focuses your limited compliance energy on the systems that pose the greatest risk to students’ fundamental rights.
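As an illustration only, a tiered triage like this can be reduced to a short, repeatable rubric. The tier names, review questions, and routing rules in the sketch below are hypothetical assumptions for the sake of the example, not TRC policy or an official EU AI Act checklist:

```python
# Hypothetical sketch of a tiered AI-tool risk triage.
# The tier names, questions, and routing rules are illustrative
# assumptions, not an official TRC or EU AI Act rubric.

from dataclasses import dataclass

@dataclass
class AIToolReview:
    name: str
    handles_student_data: bool   # does the tool ever see personal data?
    evaluates_learning: bool     # e.g. grades or assesses students
    has_signed_dpa: bool         # data processing agreement in place?
    stores_data_in_eu: bool      # known EU/EEA data residency?

def triage(tool: AIToolReview) -> str:
    """Return a review tier: 'block', 'full-review', or 'fast-track'."""
    # Systems that evaluate learning outcomes fall under the EU AI Act's
    # "High-Risk" category, so they always get a full compliance review.
    if tool.evaluates_learning:
        return "full-review"
    # Personal data with no contract or unknown residency: stop it now.
    if tool.handles_student_data and not (tool.has_signed_dpa
                                          and tool.stores_data_in_eu):
        return "block"
    # Personal data, but contractually covered: still worth a full review.
    if tool.handles_student_data:
        return "full-review"
    # No personal data involved: let small innovations through quickly.
    return "fast-track"

print(triage(AIToolReview("QuizBotX", True, True, False, False)))
print(triage(AIToolReview("DrawPal", False, False, False, False)))
```

The point of encoding the rubric this way is speed and consistency: any member of the tech team can answer four yes/no questions and get the same routing decision, reserving the DPO’s time for the tools that land in the full-review tier.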
Trying to block every new tool is a losing battle. A better strategy is to provide an authorized, capable alternative that makes Shadow AI unnecessary; users who have a reliable, approved tool at hand are far less likely to seek out unauthorized ones.
The strongest defense isn't technology, it's an informed user base. The challenge of Shadow AI must be reframed as an opportunity to teach AI ethics and data responsibility, a vital element of the TRC’s mission.
By following this action plan, your school can stabilize its digital environment, contain the spread of unvetted AI tools, and ensure that the powerful currents of AI drive learning forward without compromising your commitment to student data protection.
Michael has been working in international schools for 30 years. Originally a Grade 4 classroom teacher, he has served as Technology Director in Dubai and Riyadh, and is currently the Director of Technology at the International School of Amsterdam (ISA). From 2018 to 2024, Michael also served as Data Protection Lead at ISA.