Who Owns ‘Truth’ in the Age of Educational GenAI?

As generative AI becomes more deeply embedded in digital education, it no longer simply delivers knowledge; it shapes it. What counts as truth, and whose truth is represented, becomes increasingly complex. Rather than offering fixed answers, this piece challenges educational technologists to confront the ethical tensions and contextual sensitivities that now define digital learning.

Author: Prof. John Traxler, UNESCO Chair, Commonwealth of Learning Chair and Academic Director of the Avallain Lab

St. Gallen, May 23, 2025 – Idealistically, perhaps, teaching and learning are about sharing truths: sharing facts, values, ideas and opinions. Over the past three decades, digital technology has become increasingly involved, or implicated, not only in teaching and learning but in shaping the truths, facts, values, ideas and opinions that are shared. Truth seems ever less absolute, stable and reliable, and digital technology ever less neutral and passive.

The emergence of powerful and easily available AI, both inside education and in the societies outside it, only amplifies and accelerates the instabilities and uncertainties around truth, making it far less tenable for educational digital technologists to stand aside in the hope that research, legislation or public opinion will grasp the difficulties and make the rules. This piece unpacks these sometimes controversial and uncomfortable propositions, providing no easy answers but perhaps clarifying the questions.

Truth and the Digital

Truth is always tricky, and it is getting trickier, faster and faster. We trade in truth, all of us; it is the foundation of our communities and our companies, our relationships and our transactions. It is the basis on which we teach and learn, understand and act. And we need to trust it.

The last two decades have, however, seen the phrases ‘fake news’ and ‘post-truth’ used to make assertions and counter-assertions in public spheres, physical and digital, insidiously reinforcing the notion that truth is subjective, that everyone has their own truth and that it just needs to be shouted loudest. The same two decades saw the emergence and growing visibility of communities, big and small, on social media, each able to coalesce around its own specific beliefs, its own truths, some benign, many malign, but all claimed by their adherents to be the truth.

The digital was conceived ideally as separate and neutral. It was just the plumbing, the pipes and the reservoirs that stored and transferred truths, from custodian or creator to consumers, from teacher to learner. Social media, intrusive, pervasive and universal, changed that, hosting all those different communities.

The following selection of assertions includes some that are widely accepted as true, though this always depends on the community; others that are generally recognised as false; and some, the most problematic, that generate profound disagreement and discomfort.

  • The moon is made of blue cheese
  • The Earth is flat
  • God exists
  • Smoking is harmless
  • The Holocaust never happened
  • Prostate cancer testing is unreliable
  • Gay marriage is normal 
  • Climate change isn’t real 
  • Evolution is fake
  • Santa Claus exists 
  • Assisted dying is a valid option
  • Women and men are equal
  • The sun will rise
  • Dangerous adventure sports are character-building
  • Colonialism was a force for progress

These assertions can all be found somewhere on the internet, and all form part of the data on which GenAI is trained as it harvests the world’s digital resources. Whether each is conceived as true depends on the community or culture.

Saying, ‘it all depends on what you mean by…’ ignores the fundamental issue. Yes, some of these assertions may be merely circular, while others may allow some prevarication and hair-splitting, but they all exist.

Educational GenAI

In terms of the ethics of educational AI, extreme assertions like ‘the sun will rise’ or ‘the moon is made of blue cheese’ pose no challenge. If a teacher wants to use educational GenAI tools to produce teaching materials making such assertions, the response is unequivocal: either ‘here are your teaching materials’ or ‘sorry, we can’t support you making that assertion to your pupils’.

Where educational AI needs much more development is in dealing with assertions which, for us, may describe non-controversial truths, such as ‘women and men are equal’ and ‘gay marriage is normal’, but which may be met by different cultures and communities with violently different opinions.

GenAI harvests the world’s digital resources and regurgitates them as plausible output, capturing in the process all the prejudice, bias, half-truths and fake news already out there in those resources. The role of educational GenAI tools is to mediate and moderate these resources in the interests of truth and safety, but we argue that this is not straightforward. The more we know about learners’ cultures, contexts and countries, the more likely we are to provide resources with which they are comfortable, even if we are not.
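
To make that mediation concrete, the sketch below shows the basic architecture in Python: raw model output is never released directly but always passes through a policy check that the deployer controls. Everything here is hypothetical, the generate_raw callable, the deny-list and the refusal wording; a production system would use trained moderation models and curated policies rather than a keyword list.

from dataclasses import dataclass

@dataclass
class ModerationResult:
    allowed: bool
    reason: str | None = None

# Assertions this deployer has ruled out for teaching materials (illustrative only).
DENY_LIST = [
    "the earth is flat",
    "smoking is harmless",
]

def moderate(text: str) -> ModerationResult:
    """Check generated text against the deployer's policy before release."""
    lowered = text.lower()
    for phrase in DENY_LIST:
        if phrase in lowered:
            return ModerationResult(False, f"blocked assertion: {phrase!r}")
    return ModerationResult(True)

def generate_teaching_materials(prompt: str, generate_raw) -> str:
    """Mediate raw model output: release it, or refuse with an explanation."""
    draft = generate_raw(prompt)  # raw, indiscriminate model output
    verdict = moderate(draft)
    if verdict.allowed:
        return draft  # 'here are your teaching materials'
    return f"Sorry, we can't support that assertion ({verdict.reason})."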

Who Do We Believe?

Unfortunately, some of the existing authorities that might have helped, guided and adjudicated on these questions are less useful than they once were. The speed and power of GenAI have overwhelmed and overtaken them.

Regulation and guidance have often mixed pre-existing concerns about data security with assorted general principles and haphazard examples of their application, all focused on education within the education system rather than learning outside it. The education system has, in any case, been distracted by concerns about plagiarism and has not yet addressed the long-term issue of ensuring that school-leavers and graduates flourish and prosper in societies and economies where AI is already ubiquitous, pervasive, intrusive and often unnoticed. Moreover, the treatment of minority communities or cultures within education systems may itself already be problematic.

Education systems exist within political systems. We have to acknowledge that digital technologies, including educational digital technologies, have become more overtly politicised as global digital corporations and powerful presidents have become more closely aligned.

Meanwhile, the conventional cycle of research funding, delivery, reflection and publication is sluggish compared to developments in GenAI, so opinions and anecdotes in blogs and media have instead fed the appetite for findings, evaluations, judgments and positions. Likewise, the conventional cycle of guidance, training and regulation is slow, and many of its outputs have been muddled and generalised. Abstract theoretical critiques have not always had the chance to engage with practical experience and technical developments, leading too often to evangelical enthusiasm or apocalyptic prediction.

So, educational technologists working with GenAI may have little adequate guidance or regulation for the foreseeable future.

Why Is This Important?

Educational technologists are no longer bystanders, merely supplying and servicing the pipes and reservoirs of education. They have become essential intermediaries, bridging the gap between the raw capabilities of GenAI, which are often indiscriminate, and the diverse needs, cultures and communities of learners. Ensuring learners’ safe access to truth is, however, not straightforward, since both truth and safety are relative and changeable; educational technologists must therefore strive to make the truths that reach learners progressively more context-sensitive and safe.

At the Avallain Lab, aligned with Avallain Intelligence, our broader AI strategy, we have begun a thorough and ongoing programme of building ethics controls that identify assertions almost universally agreed to be harmful and unacceptable. We aim to enhance our use of educational GenAI in Avallain systems so that it represents our core values, while recognising that although the principles of trustworthy AI may be universal, the ways they manifest vary from context to context, posing a challenge for GenAI tools. This challenge can be mitigated through human intervention, reinforcing the importance of teachers and educators, and GenAI tools must also become more responsive to local contexts, a responsibility that lies with AI system deployers and suppliers. No solution can fully resolve society’s evolving controversies, but we are committed to staying ahead in anticipating and responding to them.
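
As an illustration only, and emphatically not a description of Avallain’s actual implementation, the split between near-universal harms and context-dependent controversies might be sketched as a two-tier routing policy, with the contested middle ground deferred to a human educator rather than decided by the machine. All labels and identifiers here are hypothetical.

# Tier one: assertions treated as harmful almost everywhere (illustrative labels).
UNIVERSAL_BLOCK = {"holocaust denial", "smoking is harmless"}

# Tier two: assertions whose acceptability varies by community and culture.
CONTEXT_SENSITIVE = {"assisted dying", "gay marriage", "colonialism"}

def route(assertion_labels: set[str], locale: str) -> str:
    """Decide whether content is blocked, released or escalated to a teacher."""
    if assertion_labels & UNIVERSAL_BLOCK:
        return "block"  # near-universal harm: never released
    if assertion_labels & CONTEXT_SENSITIVE:
        return f"human-review:{locale}"  # defer to an educator who knows the context
    return "release"

# e.g. route({"assisted dying"}, "en-GB") returns "human-review:en-GB"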

UK’s Generative AI: Product Safety Expectations

The UK’s Department for Education publishes its outcomes-oriented safety recommendations for GenAI products, addressed to edtech companies, schools and colleges.

St. Gallen, February 20, 2025 – On 22 January 2025, the UK’s Department for Education (DfE) published its Generative AI: Product Safety Expectations. This is part of a broader strategy to establish the country as a global leader in AI, as outlined in the Government’s AI Opportunities Action Plan.

As a leading edtech company with over 20 years of experience, Avallain was invited to participate in consultations on the Safety Expectations. Avallain Intelligence, our broader AI strategy, emphasises clear ethical guidelines for safe AI development, demonstrated through TeacherMatic and other AI-driven solutions across our product portfolio; that emphasis positioned us well to contribute expert advice to these consultations.

Product Expectations for the EdTech Industry

The Generative AI: Product Safety Expectations define the ‘capabilities and features that GenAI products and systems should meet to be considered safe for use in educational settings.’ The guidelines, aimed primarily at edtech developers, suppliers, schools and colleges, come at a crucial time: educational institutions need clear frameworks to assess the trustworthiness of the AI tools they adopt. The independent report commissioned by Avallain, Teaching with GenAI: Insights on Productivity, Creativity, Quality and Safety, provides valuable insights to help inform these decisions and guide best practice.

Legal Alignment, Accountability and Practical Implementation

The guidelines are specifically intended for edtech companies operating in England. While not legally binding, the text links the product expectations to existing UK laws and policies, such as the UK GDPR, Online Safety Act and Keeping Children Safe in Education, among others. This alignment helps suppliers, developers and educators navigate the complex legal landscape. 

From an accountability point of view, the DfE states that ‘some expectations will need to be met further up the supply chain, but responsibility for assuring this will lie with the systems and tools working directly with schools and colleges.’ Furthermore, the guidelines emphasise that the expectations are focused on outcomes, rather than prescribing specific approaches or solutions that companies should implement.

Comparing Frameworks and an Overview of Key Categories

In line with other frameworks for safe AI, such as the EU’s Ethics Guidelines for Trustworthy AI, the Generative AI: Product Safety Expectations are designed to be applied by developers and considered by educators. However, unlike the EU’s guidelines, which are field-agnostic and principles-based, the DfE’s text is education-centred and structured around precise safety outcomes. This makes it more concrete and focused, though it is less holistic than the EU framework, leaving critical areas such as societal and environmental well-being out of its scope.

The guidance includes a comprehensive list of expectations organised under seven categories, summarised below; a short illustrative sketch of the filtering and monitoring outcomes follows the summary. The first two categories, ‘Filtering’ and ‘Monitoring and Reporting’, are specifically relevant to child-facing products and are the most distinctive parts of the document, as they address particular risk situations that are not yet widely covered.

The remaining categories, ‘Security’, ‘Privacy and Data Protection’, ‘Intellectual Property’, ‘Design and Testing’ and ‘Governance’, apply to both child- and teacher-facing products. They are equally critical, addressing these more common concerns while taking account of the specific educational context in which the products are deployed.

Collaboration and Future Implications

By setting clear safety expectations for GenAI products in educational settings, the DfE provides valuable guidance to help edtech companies and educational institutions collaborate more effectively during this period of change. As safe GenAI measures become market standards, it is important to point out that the educational community also needs frameworks that explore how this technology can foster meaningful content and practices across a diverse range of educational contexts.


Generative AI: Product Safety Expectations — Summary

  • Filtering
    1. Users are effectively and reliably prevented from generating or accessing harmful and inappropriate content.
    2. Filtering standards are maintained effectively throughout the duration of a conversation or interaction with a user.
    3. Filtering will be adjusted based on different levels of risk, age, appropriateness and the user’s needs (e.g., users with SEND).
    4. Multimodal content is effectively moderated, including detecting and filtering prohibited content across multiple languages, images, common misspellings and abbreviations.
    5. Full content moderation capabilities are maintained regardless of the device used, including BYOD and smartphones when accessing products via an educational institutional account.
    6. Content is moderated based on an appropriate contextual understanding of the conversation, ensuring that generated content is sensitive to the context.
    7. Filtering should be updated in response to new or emerging types of harmful content.
  • Monitoring and Reporting
    1. Identify and alert local supervisors to harmful or inappropriate content being searched for or accessed.
    2. Alert and signpost the user to appropriate guidance and support resources when access to prohibited content is attempted (or succeeds).
    3. Generate a real-time user notification in age-appropriate language when harmful or inappropriate content has been blocked, explaining why this has happened.
    4. Identify and alert local supervisors of potential safeguarding disclosures made by users.
    5. Generate reports and trends on access and attempted access of prohibited content, in a format that non-expert staff can understand and which does not add too much burden on local supervisors.
  • Security
    1. Offer robust protection against ‘jailbreaking’ by users trying to access prohibited material.
    2. Offer robust measures to prevent unauthorised modifications to the product that could reprogram the product’s functionalities.
    3. Allow administrators to set different permission levels for different users.
    4. Ensure regular bug fixes and updates are promptly implemented.
    5. Sufficiently test new versions or models of the product to ensure safety compliance before release.
    6. Have robust password protection or authentication methods.
    7. Be compatible with the Cyber Security Standards for Schools and Colleges.
  • Privacy and Data Protection
    1. Provide a clear and comprehensive privacy notice, presented at regular intervals in age-appropriate formats and language, with information on:
      • The type of data: why and how it is collected, processed, stored and shared by the generative AI system.
      • Where data will be processed, and whether appropriate safeguards are in place if this is outside the UK or EU.
      • The relevant legislative framework that authorises the collection and use of data.
    2. Conduct a Data Protection Impact Assessment (DPIA) during the generative AI tool’s development and throughout its life cycle.
    3. Allow all parties to fulfil their data controller and processor responsibilities proportionate to the volume, variety and usage of the data they process and without overburdening others.
    4. Comply with all relevant data protection legislation and ICO codes and standards, including the ICO’s age-appropriate design code if they process personal data.
    5. Not collect, store, share or use personal data for any commercial purposes, including further model training and fine-tuning, without confirmation of an appropriate lawful basis.
  • Intellectual Property
    1. Unless there is permission from the copyright owner, inputs and outputs should not be:
      • Collected
      • Stored
      • Shared for any commercial purposes, including (but not limited to) further model training (including fine-tuning), product improvement and product development.
    2. In the case of children under the age of 18, it is best practice to obtain permission from the parent or guardian. In the case of teachers, the copyright owner is likely to be their employer, assuming the work was created in the course of their employment.
  • Design and Testing
    1. Sufficient testing with a diverse and realistic range of potential users and use cases is completed.
    2. Sufficient testing of new versions or models of the product to ensure safety compliance before release is completed.
    3. The product should consistently perform as intended.
  • Governance
    1. A clear risk assessment will be conducted for the product to assure safety for educational use.
    2. A formal complaints mechanism will be in place, addressing how safety issues with the software can be escalated and resolved in a timely fashion.
    3. Policies and processes governing AI safety decisions are made available.
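
As an illustration of how outcome-focused these expectations are, here is a minimal sketch of the behaviour described by Filtering points 1 to 3 and Monitoring and Reporting points 1 and 3: each turn of a conversation is checked against an age-adjusted threshold, a local supervisor is alerted when content is blocked, and the user receives an age-appropriate explanation. The score_harm stub and the threshold values are purely hypothetical; the DfE prescribes outcomes, not implementations.

def score_harm(text: str) -> float:
    """Placeholder for a trained moderation classifier returning a 0-1 harm score."""
    return 0.0  # assumption: a real model, not a constant, would go here

# Illustrative thresholds: filtering tightens for younger users (Filtering 3).
AGE_THRESHOLDS = {"under_13": 0.2, "13_17": 0.4, "adult": 0.7}

def filter_turn(text: str, age_band: str, alert_supervisor) -> str:
    """Check a single turn, so standards hold across the whole conversation (Filtering 2)."""
    if score_harm(text) > AGE_THRESHOLDS[age_band]:
        alert_supervisor(age_band, text)  # Monitoring and Reporting 1
        if age_band == "under_13":  # Monitoring and Reporting 3: age-appropriate wording
            return "That was blocked because it isn't safe or kind."
        return "This content was blocked because it may be harmful or inappropriate."
    return text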

About Avallain

At Avallain, we are on a mission to reshape the future of education through technology. We create customisable digital education solutions that empower educators and engage learners around the world. With a focus on accessibility and user-centred design, powered by AI and cutting-edge technology, we strive to make education engaging, effective and inclusive.

Find out more at avallain.com

About TeacherMatic

TeacherMatic, a part of the Avallain Group since 2024, is a ready-to-go AI toolkit for teachers that saves hours of lesson preparation by using scores of AI generators to create flexible lesson plans, worksheets, quizzes and more.

Find out more at teachermatic.com

Contact:

Daniel Seuling

VP Client Relations & Marketing

dseuling@avallain.com