
Managing AI Proliferation while Mitigating Student Privacy Risks: A Tech Director’s Action Plan

Written by Michael McGlade | Jan 10, 2026 2:01:00 PM

The Technology Readiness Council (TRC) empowers schools to thrive in a digitally connected world. Our mission is to help institutions build a strong, future-ready foundation. Right now, that foundation is facing a turbulent new kind of security challenge: Shadow AI.

Tech Directors and Data Protection Officers (DPOs) are familiar with the concept of Shadow IT. Every month or so we discover someone using an online tool or service we have never heard of, let alone approved. But today's challenge is far more insidious. We're not just dealing with a handful of unapproved applications; we're facing an overwhelming wave of instantly accessible AI tools.

The sheer volume of new, free, and instantly accessible AI tools, from browser-based chatbots to creative generative services, means schools cannot possibly vet or block every application students and staff are using. Every day a new risk vector appears beyond your defenses. This lack of centralized control is the fastest way to undermine your privacy program, because most of these unvetted services are entirely non-transparent about their data practices.

The Problem: When Control is Lost

Imagine your school's data protection protocols as the core of your security. If a tool has been officially vetted, you know exactly where the data is stored, who has access, and how it is protected. When a student or staff member uses an unapproved Shadow AI tool, they bypass all of that governance.

Schools in the EU are obligated to protect sensitive data under the GDPR and the requirements of the EU AI Act. The Act, for instance, specifically categorizes AI used in education (like systems that evaluate learning outcomes) as "High-Risk." This classification demands that providers and deployers ensure the system is transparent, offering clear instructions on its capabilities, limitations, and data logging.

The risk from Shadow AI is that it completely bypasses these checks. When a student uploads their essay or research notes into a public Large Language Model (LLM):

  1. Privacy is Lost: The school loses all visibility into how that student's work, which may contain personally identifiable information (PII) or sensitive educational content, is processed.
  2. No Contract, No Control: Because the school has no contract with the AI vendor (and often no awareness that the tool is being used at all), it cannot confirm that the data isn't being used to train the public model, nor can it ensure the data stays within a compliant jurisdiction.

The result is a total loss of accountability and traceability, leaving your institution exposed to regulatory penalties and major reputational damage. To gain control, we need to move beyond reacting to every new tool and start building a robust governance strategy.

The Action Plan: Three Ways to Establish Control

Moving from a reactive, prohibition-only mindset to a proactive, governance-and-provisioning strategy is the only way to effectively mitigate Shadow AI.

1. Build a Risk-Based Framework for AI Tools

Your time is too valuable to chase every shiny new AI gadget. You need a fast, tiered system to evaluate risk, allowing small innovations to flourish while stopping the really dangerous tools immediately.

  • Implement a Tiered Vetting Framework: Categorize AI tools by the level of sensitive data they process:
    • Green Zone (Fast Lane): Tools that use no PII or are only used for internal, non-student administrative tasks (e.g., lesson plan brainstorming). These require a simple, rapid check.
    • Yellow Zone (DPIA Required): Tools that handle anonymized student content or non-critical metrics. These require a documented Data Protection Impact Assessment (DPIA) and contractual assurance that the vendor meets your data handling standards.
    • Red Zone (Hard Block): Tools that are non-transparent, use PII for commercial purposes, or are embedded in high-stakes decisions like grading. These must be strictly monitored and blocked at the network level.

This framework focuses your limited compliance energy on the systems that pose the greatest risk to students’ fundamental rights.
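
To make the framework operational rather than purely paper-based, it can be expressed as a simple tool registry that your IT team maintains. The sketch below is a minimal, hypothetical Python example; the tool names, attributes, and classification rules are placeholders to be swapped for the criteria in your own vetting framework.

```python
# Hypothetical sketch of a tiered AI-tool registry. Names, attributes, and the
# classification rules are illustrative placeholders, not an official schema.
from dataclasses import dataclass
from enum import Enum


class Zone(Enum):
    GREEN = "fast lane: no PII, rapid check only"
    YELLOW = "DPIA and contractual assurance required"
    RED = "hard block and network-level monitoring"


@dataclass
class AITool:
    name: str
    transparent_data_policy: bool   # does the vendor clearly document data handling?
    pii_used_commercially: bool     # is PII used for advertising, profiling, or resale?
    high_stakes_use: bool           # grading, admissions, or similar decisions
    handles_student_content: bool   # anonymized student work or non-critical metrics


def classify(tool: AITool) -> Zone:
    """Place a tool in a zone; the riskiest attribute wins."""
    if tool.high_stakes_use or tool.pii_used_commercially or not tool.transparent_data_policy:
        return Zone.RED
    if tool.handles_student_content:
        return Zone.YELLOW
    return Zone.GREEN


# Example entries -- replace with the tools actually turning up in your school.
registry = [
    AITool("LessonPlanBrainstormer", True, False, False, False),
    AITool("AnonymousQuizAnalytics", True, False, False, True),
    AITool("FreeEssayGraderX", False, True, True, True),
]

for tool in registry:
    zone = classify(tool)
    print(f"{tool.name}: {zone.name} ({zone.value})")
```

Even a registry this simple gives you a documented, defensible answer when a teacher asks whether a new tool can be used, and it makes the Red Zone blocklist easy to export to your firewall.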

2. Provision an Approved "Safe Space" Ecosystem

Trying to block every new tool is a losing battle. A better strategy is to give your users authorized, capable alternatives that make Shadow AI unnecessary: people who already have reliable, approved tools are far less likely to seek out unauthorized ones.

  • Provide Enterprise Access: Secure enterprise-level accounts for common GenAI platforms (like Google Gemini or Microsoft Copilot). These institutional licenses are built to protect you, typically including legal clauses that guarantee:
    • Student/staff input data is not used to train the public model.
    • Data residency complies with local regulations.
  • Enhance Visibility, Not Just Veto Power: Use next-generation firewall or Cloud Access Security Broker (CASB) solutions. These tools cannot block everything, but they can detect and monitor traffic to unapproved cloud services. If a CASB alerts you to a high volume of file uploads to a Red Zone platform, you have identified a major security risk to address immediately, shifting your strategy from blanket prohibition to targeted enforcement (see the sketch below).
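
As an illustration of what targeted enforcement can look like in practice, the hypothetical sketch below scans a CSV export of proxy or firewall logs for heavy uploads to Red Zone domains. The column names, the domain list, and the 50 MB threshold are assumptions; map them to whatever your own firewall or CASB actually exports.

```python
# Hypothetical sketch: flag heavy upload traffic to Red Zone domains in an
# exported proxy/firewall log. Column names ("user", "dest_domain", "bytes_sent"),
# the domain list, and the threshold are assumptions to adapt to your environment.
import csv
from collections import defaultdict

RED_ZONE_DOMAINS = {"example-freellm.com", "example-imagegen.ai"}  # illustrative blocklist
UPLOAD_THRESHOLD_BYTES = 50 * 1024 * 1024  # 50 MB per user per log window


def flag_red_zone_uploads(log_path: str) -> dict[str, int]:
    """Sum bytes uploaded per user to Red Zone domains and return the heavy uploaders."""
    uploads = defaultdict(int)
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            if row["dest_domain"] in RED_ZONE_DOMAINS:
                uploads[row["user"]] += int(row["bytes_sent"])
    return {user: total for user, total in uploads.items() if total > UPLOAD_THRESHOLD_BYTES}


if __name__ == "__main__":
    for user, total in flag_red_zone_uploads("proxy_export.csv").items():
        print(f"Follow up with {user}: {total / 1_048_576:.1f} MB uploaded to Red Zone services")
```

An alert from a script like this is a conversation starter, not an accusation: it tells you which users need an approved alternative or a reminder about the Responsible Use Policy.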

3. Transform Users into Ethical Crew Members

The strongest defense isn't technology; it's an informed user base. The challenge of Shadow AI must be reframed as an opportunity to teach AI ethics and data responsibility, a vital element of the TRC's mission.

  • Make AI Literacy Role-Specific: Ditch generic "don’t use bad software" emails. Training should be tailored:
    • Students need to understand the concept of "data as payment." They must know that uploading a personal assignment to a free public LLM is trading their PII for a quick answer.
    • Teachers need training on the specific data-handling policies of the Green Zone tools they are allowed to use. They must be empowered to recognize and report when a Shadow AI tool appears in their classroom.
  • Embed Accountability: Clearly integrate AI data protection concepts into your school's Responsible Use Policy (RUP). When users understand why they can’t use a certain tool—because it violates student privacy—they are more likely to comply.
    • Integrate Child Safeguarding: Explicitly include how AI tools relate to child safeguarding policies, ensuring that the use of any AI technology does not compromise the safety and well-being of students, particularly concerning data related to minors.

By following this action plan, your school can stabilize its digital environment, mitigate the proliferation risk, and ensure that the powerful currents of AI drive learning forward without compromising your commitment to student data protection.

Michael has been working in international schools for 30 years. Originally a Grade 4 classroom teacher, he has served as Technology Director in Dubai and Riyadh, and is currently the Director of Technology at the International School of Amsterdam (ISA). From 2018 to 2024 Michael also served as Data Protection Lead at ISA.