UK’s Generative AI: Product Safety Expectations

The UK’s Department for Education publishes its outcomes-oriented safety recommendations for GenAI products, addressed to edtech companies, schools and colleges.

St. Gallen, February 20, 2025 – On 22 January 2025, the UK’s Department for Education (DfE) published its Generative AI: Product Safety Expectations. This is part of the broader strategy to establish the country as a global leader in AI, as outlined in the Government’s AI Opportunities Action Plan.

As a leading edtech company with over 20 years of experience, Avallain was invited to participate in consultations on the Safety Expectations. Avallain Intelligence’s focus on clear ethical guidelines for safe AI development, demonstrated through TeacherMatic and other AI-driven solutions across our product portfolio, positioned us well to contribute expert advice to these consultations.

Product Expectations for the EdTech Industry

The Generative AI: Product Safety Expectations define the ‘capabilities and features that GenAI products and systems should meet to be considered safe for use in educational settings.’ The guidelines, aimed primarily at edtech developers, suppliers, schools and colleges, come at a crucial time. Educational institutions need clear frameworks to assess the trustworthiness of the AI tools they are adopting. The independent report commissioned by Avallain, Teaching with GenAI: Insights on Productivity, Creativity, Quality and Safety, provides valuable insights to help inform these decisions and guide best practices.

Legal Alignment, Accountability and Practical Implementation

The guidelines are specifically intended for edtech companies operating in England. While not legally binding, the text links the product expectations to existing UK laws and policies, such as the UK GDPR, Online Safety Act and Keeping Children Safe in Education, among others. This alignment helps suppliers, developers and educators navigate the complex legal landscape. 

From an accountability point of view, the DfE states that, ‘some expectations will need to be met further up the supply chain, but responsibility for assuring this will lie with the systems and tools working directly with schools and colleges.’ Furthermore, the guidelines emphasise that the expectations are focused on outcomes, rather than prescribing specific approaches or solutions that companies should implement.

Comparing Frameworks and an Overview of Key Categories

In line with other frameworks for safe AI, such as the EU’s Ethics Guidelines for Trustworthy AI, the Generative AI: Product Safety Expectations are designed to be applied by developers and considered by educators. However, unlike the EU’s guidelines, which are field-agnostic and principles-based, the DfE’s text is education-centred and structured around precise safety outcomes. This makes it more concrete and focused, though it is less holistic than the EU framework, leaving critical areas such as societal and environmental well-being out of its scope.

The guidance includes a comprehensive list of expectations organised under seven categories, summarised in the table below. The first two categories — Filtering, and Monitoring and Reporting — are specifically relevant to child-facing products and stand out as the most distinctive of the document, as they tackle particular risk situations that are not yet widely covered.

The remaining categories — Security; Privacy and Data Protection; Intellectual Property; Design and Testing; and Governance — apply to both child- and teacher-facing products. They are equally critical, as they address these more common concerns while considering the specific educational context in which they are implemented.

Collaboration and Future Implications

By setting clear safety expectations for GenAI products in educational settings, the DfE provides valuable guidance to help edtech companies and educational institutions collaborate more effectively during this period of change. As safe GenAI measures become market standards, it is important to point out that the educational community also needs frameworks that explore how this technology can foster meaningful content and practices across a diverse range of educational contexts.


Generative AI: Product Safety Expectations — Summary

  • Filtering
    1. Users are effectively and reliably prevented from generating or accessing harmful and inappropriate content.
    2. Filtering standards are maintained effectively throughout the duration of a conversation or interaction with a user.
    3. Filtering will be adjusted based on different levels of risk, age, appropriateness and the user’s needs (e.g., users with SEND).
    4. Multimodal content is effectively moderated, including detecting and filtering prohibited content across multiple languages, images, common misspellings and abbreviations.
    5. Full content moderation capabilities are maintained regardless of the device used, including BYOD and smartphones when accessing products via an educational institutional account.
    6. Content is moderated based on an appropriate contextual understanding of the conversation, ensuring that generated content is sensitive to the context.
    7. Filtering should be updated in response to new or emerging types of harmful content.
  • Monitoring and Reporting
    1. Identify and alert local supervisors to harmful or inappropriate content being searched for or accessed.
    2. Alert and signpost the user to appropriate guidance and support resources when access to prohibited content is attempted (or succeeds).
    3. Generate a real-time user notification in age-appropriate language when harmful or inappropriate content has been blocked, explaining why this has happened.
    4. Identify and alert local supervisors of potential safeguarding disclosures made by users.
    5. Generate reports and trends on access and attempted access of prohibited content, in a format that non-expert staff can understand and which does not add too much burden on local supervisors.
  • Security
    1. Offer robust protection against ‘jailbreaking’ by users trying to access prohibited material.
    2. Offer robust measures to prevent unauthorised modifications to the product that could reprogram the product’s functionalities.
    3. Allow administrators to set different permission levels for different users.
    4. Ensure regular bug fixes and updates are promptly implemented.
    5. Sufficiently test new versions or models of the product to ensure safety compliance before release.
    6. Have robust password protection or authentication methods.
    7. Be compatible with the Cyber Security Standards for Schools and Colleges.
  • Privacy and Data Protection
    1. Provide a clear and comprehensive privacy notice, presented at regular intervals in age-appropriate formats and language, with information on:
      • The type of data: why and how this is collected, processed, stored and shared by the generative AI system.
      • Where data will be processed, and whether there are appropriate safeguards in place if this is outside the UK or EU.
      • The relevant legislative framework that authorises the collection and use of data.
    2. Conduct a Data Protection Impact Assessment (DPIA) during the generative AI tool’s development and throughout its life cycle.
    3. Allow all parties to fulfil their data controller and processor responsibilities proportionate to the volume, variety and usage of the data they process and without overburdening others.
    4. Comply with all relevant data protection legislation and ICO codes and standards, including the ICO’s age-appropriate design code if they process personal data.
    5. Not collect, store, share, or use personal data for any commercial purposes, including further model training and fine-tuning, without confirmation of appropriate lawful basis.
  • Intellectual Property
    1. Unless there is permission from the copyright owner, inputs and outputs should not be:
      • Collected
      • Stored
      • Shared for any commercial purposes, including (but not limited to) further model training (including fine-tuning), product improvement and product development.
    2. In the case of children under the age of 18, it is best practice to obtain permission from the parent or guardian. In the case of teachers, this is likely to be their employer—assuming they created the work in the course of their employment.
  • Design and Testing
    1. Sufficient testing with a diverse and realistic range of potential users and use cases is completed.
    2. Sufficient testing of new versions or models of the product to ensure safety compliance before release is completed.
    3. The product should consistently perform as intended.
  • Governance
    1. A clear risk assessment will be conducted for the product to assure safety for educational use.
    2. A formal complaints mechanism will be in place, addressing how safety issues with the software can be escalated and resolved in a timely fashion.
    3. Policies and processes governing AI safety decisions are made available.

About Avallain

At Avallain, we are on a mission to reshape the future of education through technology. We create customisable digital education solutions that empower educators and engage learners around the world. With a focus on accessibility and user-centred design, powered by AI and cutting-edge technology, we strive to make education engaging, effective and inclusive.

Find out more at avallain.com

About TeacherMatic

TeacherMatic, a part of the Avallain Group since 2024, is a ready-to-go AI toolkit for teachers that saves hours of lesson preparation by using scores of AI generators to create flexible lesson plans, worksheets, quizzes and more.

Find out more at teachermatic.com

Contact:

Daniel Seuling

VP Client Relations & Marketing

dseuling@avallain.com

Avallain Reinforces its Commitment to Research-Driven Solutions with a Newly Commissioned GenAI Report

How is GenAI being integrated into schools to enhance teaching and learning? ‘Teaching with GenAI: Insights on Productivity, Creativity, Quality and Safety’ delves into this critical question by exploring the opportunities GenAI offers, the challenges it poses and how it’s shaping the future of education.

St. Gallen, January 30, 2025 – Education technology pioneer Avallain introduces Teaching with GenAI: Insights on Productivity, Creativity, Quality and Safety. This independent report, commissioned by the Avallain Group and produced by Oriel Square Ltd, provides valuable insights for educators and policymakers alike.

This timely and comprehensive report explores how generative AI is being integrated into schools to enhance teaching and learning outcomes and the critical opportunities and challenges it presents.

Professor Rose Luckin, of University College London and Founder of Educate Ventures Research, says the report is ‘an essential read for any education leader navigating the AI landscape.’

Navigating the Opportunities, Challenges and Risks of GenAI

The ‘Teaching with GenAI: Insights on Productivity, Creativity, Quality and Safety’ report provides detailed insights into how GenAI saves time and boosts efficiency, allowing educators to streamline workflows and dedicate more time to impactful teaching. It delves into the tools and training needed to create meaningful learning materials, providing practical advice for designing engaging and effective content. The report examines how GenAI fosters creativity and innovation in teaching practices, encouraging educators to reimagine their instructional approaches.

Beyond this, the report also stresses the importance of quality control in GenAI applications, identifying areas where oversight is essential to ensure high standards in AI-generated content. Critical advice is offered around data security and tackling inbuilt bias, helping educators and institutions confidently address these key concerns. More importantly, the report provides actionable recommendations on how schools and organisations can effectively integrate and apply GenAI to maximise its potential while ensuring ethical and responsible use.

As Professor John Traxler, Academic Director of the Avallain Lab, explains, ‘While schools and educators acknowledge the potential of GenAI tools to assist in key pedagogical tasks, they also express concerns about content accuracy, the risk of perpetuating biases and the impact of these tools on their evolving role in the classroom. This underscores the need to provide educators with GenAI solutions tailored to educational contexts and the critical analysis skills required to engage with these technologies safely and effectively.’

A Commitment to Research-Driven Solutions

The rapid rise of GenAI has introduced both unprecedented possibilities and complex challenges in the educational landscape. With a long history of developing educator-led technology, Avallain has always believed that research-driven approaches are essential to ensuring technology supports learning outcomes.

As Ignatz Heinz, President and Co-Founder of Avallain, highlights: ‘This report reflects our commitment to research-driven solutions that empower educators. By exploring the benefits, potential and challenges of GenAI through the experiences of teachers and specialists, we aim to provide valuable insights and actionable recommendations to the educational community. Together, we are navigating this transformative field to deliver technology that ethically and safely supports teachers and students.’

Over 50% of teachers in England use AI tools to reduce workload, and 40% use them to personalise learning content.

Avallain’s Approach to Ethical and Safe GenAI Integration

As GenAI enters classrooms, Avallain is doubling down on this commitment with these informative reports and its broader AI strategy, Avallain Intelligence, which aims to responsibly integrate AI across the entire edtech value chain. This initiative is built on the principle that ethical AI is essential—not only for achieving better outcomes, enhanced productivity and safe, innovative learner interactions but, more importantly, as a foundation for the reliable adoption of these tools in our societies, particularly in our educational systems.

Carles Vidal, Avallain Lab Business Director, explains further, ‘Avallain’s unwavering commitment to Ethical AI is reflected in a range of AI solutions designed in alignment with the Ethical Key Requirements, outlined in the EU’s Ethics Guidelines for Trustworthy AI. These guidelines uphold the principles of respect for human autonomy, prevention of harm, fairness and explicability.’

The newly commissioned report aligns with this AI strategy by exploring critical considerations for ethical, safe and effective implementation. It provides actionable recommendations that help schools and educators adopt these technologies confidently while ensuring responsible use.

Leveraging Insights to Drive GenAI in Education

Avallain strives to remain at the forefront of educational innovation, actively monitoring and analysing educators’ difficulties as they integrate generative AI into their teaching practices. With a particular focus on ethics and pedagogy, these insights shape the ongoing development of Avallain’s next generation of GenAI features implemented in our solutions. Explore the full report and gain a deeper understanding of how GenAI can enhance teaching and learning. 

Download your free copy here.

Register Now for Upcoming Live Report Briefings

As part of our commitment to supporting educators and institutions, look out for upcoming report briefings to explore key insights from the report, including practical and ethical steps for integrating GenAI effectively. This is an opportunity to engage in discussions about the future of AI in education.

Secure my place


EU’s Guidelines for Trustworthy AI: A Reliable Framework for Edtech Companies

This post is the first in a series highlighting the most relevant recommendations and regulations on ethics and AI systems produced by international institutions and educational agencies worldwide. Our goal is to provide updated and actionable insights to all stakeholders involved in the field, including designers, developers and users.

A look into the EU’s ethical recommendations and their possible adaptation to Gen-AI-based educational content creation services.

Author: Carles Vidal, Business Director of the Avallain Lab

Since the release of OpenAI’s ChatGPT, initially powered by GPT-3.5, in November 2022, the edtech sector has focused its efforts on delivering products and services that leverage the creative potential of large language models (LLMs) to offer personalised and localised learning content to users.

LLMs have prompted the educational content industry to reassess traditional editorial processes, and have also transformed the way in which teachers and professors plan, create and distribute classroom content across schools and universities.

The generalised uptake of Generative AI technologies (Gen-AI) in education calls for ensuring that their design, development and use are based on a thorough understanding of the ethical implications at stake, a clear risk analysis and the application of the corresponding mitigating strategies.

We start by discussing the work of the High-level Expert Group on AI (HLEG on AI), appointed by the European Commission in 2018 to support the implementation of the European strategy on AI. Its work provides policy recommendations on AI-related topics. The “Ethics Guidelines for Trustworthy AI” (2019) and its complementary “Assessment List for Trustworthy AI (ALTAI) for Self-Assessment” (2020) are two non-binding texts that can be read as one single framework.

1. Ethics Guidelines for Trustworthy AI

From an AI practitioner’s point of view, the guidelines and the assessment list for trustworthy AI are strategic tools with which companies can build their own policies to ensure the implementation of ethical AI-systems. In this sense, the work of the HLEG on AI is presented as a generalist model that can, and arguably should, be adapted to the context of each specific AI-system. Additionally, due to its holistic approach, the framework addresses not only the technological requirements of AI-systems, but also considers all actors and processes involved throughout the entire life cycle of the AI.

As the HLEG on AI states, the guidelines’ “foundational ambition” is the achievement of trustworthy AI, which requires AI-systems, actors and processes to be “lawful, ethical, and robust”. Having said this, the authors explicitly exclude legality from the scope of the document, deferring to the corresponding regulations, and focus on addressing the ethical and robust dimensions for trustworthy AI-systems.

The framework is structured around three main conceptual levels, progressing from more abstract to more concrete. At the top level, defining the foundations of trustworthy AI, four “ethical imperatives” are established, to which all AI systems, actors, and processes must adhere:

  1. Respect for Human Autonomy
  2. Prevention of Harm
  3. Fairness 
  4. Explicability

At a second level, the framework introduces a set of seven key requirements for the realisation of trustworthy AI. The list is neither exhaustive nor presented in a hierarchical order. 

  1. Human Agency and Oversight
  2. Technical Robustness and Safety
  3. Privacy and Data Governance 
  4. Transparency
  5. Diversity, Non-discrimination and Fairness
  6. Societal and Environmental Wellbeing
  7. Accountability

The relevance of these key requirements extends beyond these guidelines. They also inform recital 27 and, implicitly, article 1 of the EU AI Act, adopted in 2024.

The guidelines suggest a range of technical and non-technical methods for their implementation (e.g., architectures for trustworthy AI, codes of conduct, standardisation, diversity and inclusive design) that actors can use to enforce the mentioned requirements.

Achieving trustworthy AI is an ongoing and iterative process that requires continuous assessment and adaptation of the methods employed to implement key requirements in dynamic environments.

2. Assessment List for Trustworthy AI

The third level of the framework consists of an “Assessment List for Trustworthy AI” (ALTAI), intended to operationalise the key requirements. It is primarily addressed to developers and deployers of AI-systems that directly interact with users.

The ALTAI breaks down the key requirements into more concrete categories and provides a range of self-assessment questions for each, aiming to spark reflection on every aspect. Each individual actor is left to decide on the corresponding mitigating measures.

For example, the ethical requirement of Diversity, Non-Discrimination and Fairness is divided into three subsections:

  1. Avoidance of unfair bias
  2. Accessibility and Universal Design
  3. Stakeholder participation

In turn, for Avoidance of Unfair Bias, a series of self-assessment questions are proposed, a sample of which is listed below:

  • Did you establish a strategy or a set of procedures to avoid creating or reinforcing unfair bias in the AI system, both regarding the use of input data as well as for the algorithm design? 
  • Did you consider diversity and representativeness of end-users and/or subjects in the data? 
    • Did you test for specific target groups or problematic use cases? 
    • Did you research and use publicly available technical tools, that are state-of-the-art, to improve your understanding of the data, model and performance? 
    • Did you assess and put in place processes to test and monitor for potential biases during the entire lifecycle of the AI system (e.g. biases due to possible limitations stemming from the composition of the used data sets, such as lack of diversity or non-representativeness)?

The guidelines also suggest that companies incorporate their assessment processes into a governance mechanism, involving both top management and operations. The text even proposes a governance model, describing roles and responsibilities. 

The assessment list is not intended to be exhaustive and follows a generalist (horizontal) approach. The purpose of the HLEG on AI is to provide a set of questions that help all AI-system actors operationalise the more abstract key requirements, and to encourage them to adapt the assessment list to the specific needs of their sector and continuously update it.
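By way of illustration, a deployer adapting the assessment list to its own sector might record each self-assessment question alongside the mitigating measure eventually chosen, so that open questions are visible to the governance body. The sketch below is purely hypothetical: the `Requirement` and `AssessmentItem` structures are our own illustration of such an internal tracking artefact, not part of the ALTAI itself.

```python
from dataclasses import dataclass, field

@dataclass
class AssessmentItem:
    question: str
    mitigation: str = ""  # measure decided by the individual actor; empty = still open

@dataclass
class Requirement:
    name: str
    items: list[AssessmentItem] = field(default_factory=list)

    def open_items(self) -> list[AssessmentItem]:
        """Questions still awaiting a documented mitigating measure."""
        return [i for i in self.items if not i.mitigation]

# Sample questions taken from the guidelines discussed above.
fairness = Requirement(
    name="Diversity, Non-discrimination and Fairness",
    items=[
        AssessmentItem(
            "Did you establish a strategy or procedures to avoid creating "
            "or reinforcing unfair bias in input data and algorithm design?"
        ),
        AssessmentItem(
            "Did you consider diversity and representativeness of "
            "end-users and/or subjects in the data?"
        ),
    ],
)

# Record a mitigation for the first question; the second remains open.
fairness.items[0].mitigation = "Bias review added to the data-ingestion checklist."
print(len(fairness.open_items()))
```

A structure like this makes the iterative nature of the assessment explicit: the list is revisited, extended and re-answered as the AI-system and its context evolve.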

In accordance with this vision, and grounded in the same framework, the European Commission published in September 2022, the “Ethical Guidelines on the Use of AI and Data in Teaching and Learning for Educators”. This document is a valuable resource for teachers and educators, helping them to reflect on AI and critically assess whether the AI systems they are using comply with the Key Requirements for Trustworthy AI.

3. Adapting and Implementing the Guidelines

Having analysed the work of the HLEG on AI, we understand that it is proposed as a framework that companies like Avallain, along with other AI-system deployers, can build upon to create an adapted version that ensures the ethical design, development, and use of AI tools for the educational content creation community.

To this end, we support the framework’s recommendation of establishing a multidisciplinary body within companies to define ethical and robustness standards, identify the corresponding mitigating interventions, and ensure their implementation across all involved areas. This governing body should play a crucial role in the continuous adaptation of the company’s ethics and AI strategy to future ethical challenges.

About the Avallain Lab

We established the Avallain Lab in 2023 to be an ethically and pedagogically sound academic resource, providing support to Avallain product designers and partners, as well as the wider e-learning community.

This unit operates under the academic leadership of John Traxler and the business direction of Carles Vidal. The Avallain Lab also has the support of an advisory panel including Professor Rose Luckin. This experience and expertise allow us to deliver research-informed technology and experiences for learners and teachers, including in the field of AI.

The Avallain Lab takes a unique and innovative approach, acting as the interface between the world’s vast and rapidly evolving research outputs, activities, networks and communities and Avallain’s continued ambition to enhance both the pedagogic and technical dimensions of its products and services with relevant medium-term ideas and longer-term concepts.

The Lab supports Avallain’s trials and workshops, informs internal discussion and draws in external expertise. The Lab is building a library of research publications, contributing to blogs and research papers and presenting at conferences and webinars. Early work focussed on learning analytics and spaced learning but the current focus is artificial intelligence, specifically ethics and pedagogy and their interactions.

About Carles Vidal

Business Director of the Avallain Lab, 
MSc in Digital Education from the University of Edinburgh.

Carles Vidal is an educational technologist with more than twenty years of experience in content publishing, specialising in creating e-learning solutions that empower educators and students in K12 and other educational stages. His work has included the publishing direction of learning materials aligned with various curricula across Spain and Latin American countries.

About John Traxler

Academic Director of the Avallain Lab, 
FRSA, MBCS, AFIMA, MIET

John Traxler, FRSA, MBCS, AFIMA, MIET, is Professor of Digital Learning, UNESCO Chair in Innovative Informal Digital Learning in Disadvantaged and Development Contexts and Commonwealth of Learning Chair for innovations in higher education. His papers are cited over 11,000 times and Stanford lists him in the top 2% in his discipline. He has written over 40 papers and seven books, and has consulted for a variety of international agencies including UNESCO, ITU, ILO, USAID, DFID, EU, UNRWA, British Council and UNICEF.

About Rose Luckin

Advisory Panellist of the Avallain Lab,
Doctor of Philosophy – PhD, Cognitive Science and AI

Rosemary (Rose) Luckin is Professor of Learner Centred Design at UCL Knowledge Lab, Director of EDUCATE, and author of Machine Learning and Human Intelligence: The Future of Education for the 21st Century (2018). She has also authored and edited numerous academic papers.  

Dr Luckin’s work centres on investigating the design and evaluation of educational technology. On top of this, she is Specialist Adviser to the UK House of Commons Education Select Committee for their inquiry into the Fourth Industrial Revolution. 

Her other positions include: 

  • Co-founder of the Institute for Ethical AI in Education
  • Past President of the International Society for AI in Education
  • A member of the UK Office for Students Horizon Scanning panel
  • Adviser to the AI and Robotics panel of the Topol review into the future of the NHS workforce
  • A member of the European AI Alliance
  • Previous Holder of an International Franqui Chair at KU Leuven

Avallain increases impact with strategic investment in AI Platform, TeacherMatic

Education Technology Provider, Avallain, today reaffirmed its commitment to responsible generative AI and innovation in education with a number of key announcements.

Avallain, a twenty-year veteran of innovative and impactful edtech, has acquired TeacherMatic, one of Europe’s fastest-growing generative AI toolsets for educators. The acquisition supports Avallain’s broader AI strategy, including remediation and copyright protection, both features developed for its industry-leading content creation tool, Avallain Author.

The Avallain product suite already enables publishers to use the full breadth of the best generative AI while meeting educational, legal and commercial requirements. Over the last year, TeacherMatic has developed and organised one of the most complete AI toolsets to support educators globally, generating everything from lesson plans and flashcards to schemes of work and multiple-choice quizzes – alignable to curricula – at the click of a few buttons. This coming together of TeacherMatic and Avallain forms the basis of a strong partnership, applying leading-edge and ethical generative AI capability to education.

Ursula Suter, Co-Founder and Executive Chairwoman at Avallain, says “We see this joining of forces with TeacherMatic as a crucial step to counter the main risks from generative AI while also benefiting educators and education, in general, in a manner that will cater to high quality educational publishing and learning outcomes. For many years we have been delivering grounded and considered educational innovation. With TeacherMatic, we will continue to do that and more. Our product suite achieves both high-quality education and commercial viability with success for all parties involved.”

Peter Kilcoyne, MD at TeacherMatic, comments “TeacherMatic was formed by a group of lifelong educators with the aim of making generative AI available and accessible to all teaching staff, to help reduce workloads and improve creativity. We are delighted to have been acquired by Avallain, whose expertise and experience in both education and technology will greatly enhance our future developments, improve TeacherMatic as a platform and enable us to engage with new markets around the world. The ethical, technical and educational principles that drive both Avallain and TeacherMatic make this a partnership that will benefit both organisations, as well as our customers and all the teachers and students in organisations that we support.”

Ignatz Heinz (President & Co-Founder), Ursula Suter (Executive Chairwoman & Co-Founder), Alexis Walter (MD), Monika Morawska (COO), Rahim Hirji (Executive VP) © Mario Baronchelli

In an additional announcement, Professor Rose Luckin has been appointed to the advisory board of Avallain. Rosemary (Rose) Luckin is a Professor at University College London and Founder of Educate Ventures Research (EVR) who has spent over 30 years developing and studying AI for Education. She is renowned for her research into the design and evaluation of educational technology and AI. 

Rose comments “Avallain has, for many years, been the quality engine of education for publishers and content providers. I am delighted to support them and provide guidance and direction for Avallain’s products as we step forward into this exciting era of AI within education.”

Finally, Avallain also officially announced the Avallain Lab, with John Traxler as Academic Director. Traxler holds Chairs from the Commonwealth of Learning for innovations in higher education and from UNESCO for Innovative Informal Digital Learning in Disadvantaged and Development Contexts. The Avallain Lab was incubated in 2023 with a remit to provide research and rigour around product development for partners covering everything from learner analytics and accessibility to ethical AI applicability for learners. The Lab will support Avallain’s current partners and operate commercially in partnership with other institutions exploring innovation in educational contexts – and welcomes global collaborators.

Avallain’s announcements today build upon its established commitment to ethical generative AI, which is already available for Avallain Author, its market-leading content authoring tool and its newly launched SaaS Learning Management System (LMS), Avallain Magnet. Current clients can leverage new tools that can automate parts of the editorial workflow while leaving editors and learning designers firmly in control of the process.

About Avallain

Avallain powers some of the most renowned educational brands including Oxford University Press, Cengage National Geographic Learning, Cambridge University Press, Santillana, Klett, and Cornelsen, reaching millions of learners worldwide. Avallain most recently raised 8M Euros from Round2 Capital Partners and is advised by i5invest. Through the Avallain Foundation, the technology is also made available to partners improving access to quality education.

Find out more at avallain.com

About TeacherMatic

TeacherMatic was formed in 2022 as Innovative Learning Technologies Limited and has developed a suite of generative AI tools for educators. TeacherMatic has been adopted by FE colleges, universities and schools in the UK, and recently partnered with OpenLMS.

Free trials are available at teachermatic.com

For media comment, contact Rahim Hirji, Avallain, rhirji@avallain.com

Contact:
Daniel Seuling
VP Sales & Marketing
dseuling@avallain.com