Revisit the Language Teaching Takeoff Webinar Series: Featured Highlights and Insights

While taking a short summer break, we wanted to pause and review the best moments and most important insights from our Language Teaching Takeoff Webinar Series. If you missed an episode or want to revisit the practical tips and tools demonstrated in the TeacherMatic Language Teaching Edition, this blog highlights key takeaways and illustrates how a purpose-built AI supports language educators and enhances classroom practice.


London, August 2025 – The Language Teaching Takeoff Webinar Series offers a practical look at the TeacherMatic Language Teaching Edition, a toolkit designed specifically for language educators. It’s more than a generic AI solution: every generator is built around the realities of classroom teaching, with a focus on saving time, enhancing creativity, maintaining pedagogical standards and ensuring the ethical and safe adoption of AI in language education. 

This edition of TeacherMatic can generate comprehensive lesson plans, adapt texts and tasks, create original content and quizzes, provide personalised feedback and more, all tailored to different CEFR levels. Each 30-minute session focuses on integrating AI meaningfully and responsibly, providing ideas, activities and workflows that make a real difference to teaching and learning.

The series has attracted over 300 educators across four sessions, underscoring the strong interest in practical, teacher-focused AI solutions.

Meet the Hosts

Moderated by Giada Brisotto, Senior Marketing and Sales Operations Manager at Avallain, and led by Nik Peachey, award-winning educator, author and edtech consultant, each webinar combines deep expertise with actionable guidance. 

‘These generators aren’t just text tools. They’re designed with real classroom needs in mind. You input your goals, level and theme, and the results are ready to use or refine.’ – Nik Peachey, Director of Pedagogy, PeacheyPublications

Save Time While Planning Quality Lessons

The first webinar in the series, ‘Elevate Your Lesson Planning’, explored how purpose-built AI can transform how teachers design lessons. One of the main insights from the session was the critical balance between efficiency and academic rigour. Nik demonstrated how the Lesson Plan generator enables educators to produce fully structured, CEFR-aligned lesson plans in just a few minutes.

Key benefits highlighted in the session included:

  • CEFR-aligned outputs to ensure lessons meet recognised language standards.
  • Adaptable and editable plans that reflect the needs of individual classes.
  • Support for professional autonomy, giving teachers control instead of imposing rigid templates.
  • Support for core pedagogical models, including Communicative Language Teaching (CLT), Task-Based Learning (TBL), Presentation Practice Production (PPP), Lexical Approach and Test-Teach-Test.

The session emphasised that the real value of AI in education lies in targeted, purposeful support, rather than blanket automation. Starting with focused applications like lesson planning allows educators to make small, practical changes that can significantly impact both teaching quality and learners’ experiences.

Deliver Personalised CEFR-Aligned Feedback

The second webinar, ‘From Rubrics to Results: How to Provide Impactful Feedback’, focused on how AI can help teachers provide meaningful, personalised feedback without adding to their workload. Nik demonstrated the Feedback generator, showing how educators can instantly create feedback tailored to each student while keeping it aligned with CEFR standards and institutional rubrics.

Key benefits highlighted in the session included:

  • CEFR-aligned feedback that can be tailored to specific subscales.
  • Feedback tailored to rubrics and assessment criteria, ensuring comments reflect your teaching context.
  • Balanced, constructive comments that highlight both strengths and areas for improvement.

During the session, it was stressed that AI works best when it enhances teacher expertise rather than replacing it. By streamlining the feedback process, educators can maintain high standards of personalisation and pedagogy, even with large groups of students.

Adapt and Analyse Content Across Levels

The third webinar, ‘Adapting Content for Effective CEFR-Aligned Language Teaching’, spotlighted how AI can empower teachers to adapt existing materials to diverse learner groups and levels. Nik introduced two powerful tools specifically designed with classroom realities in mind: the Adapt your content generator and the CEFR Level Checker.

Key benefits highlighted in the session included:

  • Effortlessly adapting content from one CEFR level to another while preserving the original theme and ensuring the result is pedagogically effective.
  • Immediate, precise CEFR analysis of texts, breaking down vocabulary and grammar complexity to help verify learner-appropriate materials.
  • Supporting teacher control through editable outputs that can be fine-tuned for specific class needs.

As Nik emphasised, ‘It’s not just about saving time. It’s about creating something that actually works for your learners faster’. The session showed how these AI generators translate the complexity of CEFR adaptation into practical, editable resources, enabling teachers to respond precisely to different learner needs without compromising pedagogical integrity.

Engage Students and Assess Progress Quickly

The fourth webinar, ‘Generate, Engage and Assess: Create Custom Texts and Multiple Choice Quizzes’, demonstrated how TeacherMatic can support both content creation and assessment in language teaching. Participants saw how the Create a text and Multiple Choice Questions generators allow teachers to produce original CEFR-level texts and assess learner understanding instantly, without prompt engineering or technical complexity.

Highlights from the session included:

  • Generating original classroom-ready texts tailored by topic, CEFR level, grammar focus, text type, vocabulary and length.
  • Creating CEFR-aligned multiple-choice quizzes from any text to assess comprehension, vocabulary or grammar.
  • Adapting content across proficiency levels while preserving the theme and ensuring pedagogical usefulness.

In this session, participants learned how combining flexible content and quiz generators can streamline lesson preparation, enhance learner engagement and support accurate, timely assessment.

The Language Teaching Takeoff Webinar Series has illustrated how purpose-built AI can support language educators in practical, impactful ways. The TeacherMatic Language Teaching Edition allows teachers to leverage AI responsibly, ethically and safely, enhancing learning while maintaining pedagogical standards and putting educators in control of their classroom practice.

The series isn’t over yet.


What’s Next:

After a short summer break, the Language Teaching Takeoff Webinar Series returns. Join us for the next session:

Create Engaging Materials from YouTube Content and Build Custom Glossaries

Date: Thursday, 11th September

Time: 12:00 – 12:30 BST | 13:00 – 13:30 CEST

Secure your place

Discover how AI generators can turn YouTube videos into engaging content, and learn how to generate custom glossaries tailored to CEFR levels and your learners’ needs.


Explore the Language Teaching Edition of TeacherMatic

Whether teaching A1 learners or guiding advanced students through C1 material, the Language Teaching Edition of TeacherMatic helps you do it more efficiently, precisely and flexibly. 

Explore it here


About Avallain

At Avallain, we are on a mission to reshape the future of education through technology. We create customisable digital education solutions that empower educators and engage learners around the world. With a focus on accessibility and user-centred design, powered by AI and cutting-edge technology, we strive to make education engaging, effective and inclusive.

Find out more at avallain.com

About TeacherMatic

TeacherMatic, a part of the Avallain Group since 2024, is a ready-to-go AI toolkit for teachers that saves hours of lesson preparation by using scores of AI generators to create flexible lesson plans, worksheets, quizzes and more.

Find out more at teachermatic.com

Contact:

Daniel Seuling

VP Client Relations & Marketing

dseuling@avallain.com

Bringing Mobile Learning Back with AI, Context and Expertise

What if mobile learning had the intelligence and context it lacked 25 years ago? This piece revisits the rise and fall of early mobile learning projects and considers how the convergence of artificial intelligence, contextual mobile data and educational expertise could support more responsive and personalised learning today.


Author: Prof. John Traxler, UNESCO Chair, Commonwealth of Learning Chair and Academic Director of the Avallain Lab

St. Gallen, July 28, 2025 – Around 25 years ago, many members of the European edtech research community, myself included, were engaged in projects, pilots and prototypes exploring what was then known as ‘mobile learning’. This roughly and obviously referred to learning with mobile phones, likely 3G, nearing the dawn of the smartphone era. Learners could already access all types of learning available on networked desktops in their colleges and universities, but they were now freed from their desktops. The excitement, however, was around all the additional possibilities. 

One of these was ‘contextual learning,’ meaning learning that responded to the learner’s context. Mobile phones knew where they were, where they had been and what they had been doing1. These devices could capture images, video and sound of their context, including both the user and their surroundings. This meant they could also understand and know their user, the learner. 

So, to provide some examples:

  • Walking around art galleries like the Uffizi and heritage sites like Nottingham Castle, learners with their mobile phones could stop at a painting randomly and receive a range of background information, including audio, video and images. The longer they stayed, the more they would receive. Based on other paintings they had lingered at, they could get suggestions, explanations and perspectives on what else they might like and where else they could go.
  • Augmented reality on mobile phones meant that learners standing in Berlin using their mobile phone as a camera viewfinder could see the Brandenburg Gate, but with the now-gone Berlin Wall interposed perfectly realistically as they walked up to and around it. Similarly, they could see Rembrandt’s house in Amsterdam. Learners could also walk across the English Lake District and see bygone landforms and glaciers or engage in London murder mysteries, looking at evidence and hearing witnesses at various locations.
  • Recommender systems on mobile phones analysed learners’ behaviours, achievements and locations to suggest the learning activity that would suit them best based on their history and context. These recommendations could be linked to assignments, resources and colleagues on their university LMS, providing guidance and practical advice. A Canadian project, for example, explored specific applications in tourism.
  • Using a system like Molly Oxford on their mobile phones, learners could be guided to the nearest available loan copy of a library book they wanted. They could also be given suggestions based on public transport, wheelchair accessible footpaths and library opening hours.
  • Trainee professionals, such as physiotherapists or veterinary nurses, in various projects across Yorkshire, could be assessed while carrying out a healthcare procedure in ‘real-life’ practice. Their mobile phones would capture the necessary validation and contextual data to ensure a trustworthy process.
  • Some early experiments, with Bluetooth and other forms of NFC (near-field communication), allowed passers-by or students to pick up comments or images hanging in discrete locations, such as a subway or corridor on a university campus, serving as sign-posting or street art. 

These pilots and projects implemented situated2, authentic3 and personalised4 learning as aspects of contextual learning, and espoused5 the principles of constructivism6 and social constructivism7. This was only possible as far as the contemporary resources and technologies permitted. They did not, however, encourage or allow content to be created, commented on, or contributed to by learners, only consumed by them. Also, they usually only engaged with learners on an individual basis, not supporting interaction or communication among learners, even those learning the same thing, at the same place and at the same time.

So what went wrong? Why aren’t such systems widespread across communities, galleries, cultural spaces, universities and colleges any more? And how have things changed? Could we do better now?

The Downfall of Mobile Learning: What Went Wrong?

Mobile phone ownership was not widespread two decades ago, and popular mobile phones were not as powerful as they are today. The ‘apps economy’8 had not taken off. This meant that projects and pilots had to develop all software systems from scratch and get them to interoperate9. They also had to fund and provide the necessary mobile phones for the few learners involved10.

Once the pilot or project and its funding had finished, its ideas and implementation were not scalable or sustainable; they were unaffordable. Pilots and projects were usually conducted within formal educational institutions among their students. Also, evaluation and dissemination focused on technical feasibility, proof-of-concept and theoretical findings. They rarely addressed outcomes that would sway institutional managers and impact institutional performance metrics. As a result, these ideas remained at the optional margins of institutional activity rather than becoming the regulated business of courses, qualifications, assessments and certificates. Nor was there a business model to support long-term adoption.

In fairness, we should also factor in the political and economic climate at the end of the 2000s. The ‘subprime mortgage’ crisis11 and the ‘bonfire of the quangos’12 depleted the political goodwill and public finances for speculative development work, work that had previously and implicitly assumed the ‘diffusion of innovations’13 into mainstream provision, a ‘trickle down’ that would take these ideas from pilot project to production line.

The Shift in Mobile Learning: What Changed?

Certainly not the political or economic climate. Mobile phones, however, are now familiar, ubiquitous and powerful, and so is artificial intelligence (AI). Both of these technologies now sit outside educational institutions rather than being confined within them.

These earlier pilots and projects were basically ‘dumb’ systems, with no ‘intelligence’, drawing only on information previously loaded into their closed systems. Now we have ‘intelligence’: we have AI, and we have AI chatbots on mobile phones. Currently, however, AI lacks context and cannot know or respond to the location, history, activity or behaviour of the learner and their mobile phone. Unfortunately, many current AI applications and chatbots are also stateless, retaining no memory across interactions, which poses a further challenge to any continuity.
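Statelessness is easy to illustrate. The sketch below is a toy model, not any particular chatbot API: the ‘model’ only knows what it is sent in a single call, so continuity has to be supplied by the application, which resends the accumulated conversation history with every request. All names here are hypothetical.

```python
# Toy illustration of statelessness: the "model" sees only what it
# is sent in one call, so the application must carry the memory.

def stateless_model(messages):
    """Stand-in for a chat model: it has no memory of earlier calls."""
    # It can only 'remember' the learner's name if the name appears
    # somewhere in the messages passed to this single call.
    for m in reversed(messages):
        if m["role"] == "user" and "my name is" in m["content"].lower():
            name = m["content"].split()[-1].rstrip(".")
            return f"Hello, {name}!"
    return "Hello! I don't know your name."

class ConversationMemory:
    """The application, not the model, holds state across turns."""
    def __init__(self):
        self.history = []

    def ask(self, text):
        self.history.append({"role": "user", "content": text})
        reply = stateless_model(self.history)  # full history each call
        self.history.append({"role": "assistant", "content": reply})
        return reply

chat = ConversationMemory()
chat.ask("My name is Ada.")
print(chat.ask("Do you remember me?"))  # prints "Hello, Ada!"
```

Without the `ConversationMemory` wrapper, the second question would draw a blank; with it, the ‘memory’ layer the essay calls for is an ordinary application concern rather than a property of the model itself.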

The Possibilities of Mobile Learning: Could We Do Better Now?

Today’s network technologies can enable distributed, connected contribution and consumption, both writing and reading. These might realise more of the possibilities of constructivism and social constructivism. They could enable educational systems to learn about and respond to their individual learners and their environment, connecting groups of learners and showing them how to support each other14.

So, is there the possibility of convergence? Is it possible to combine the ‘intelligence’ of AI, the ‘memory’ of databases and the context provided by mobile phones, including both the learner and their environment? Could this be merged and mediated by educational expertise, acting as an interface between the three technologies, filtering, selecting and safeguarding?

So what might this look like? We could start by adding ‘intelligence’ and ‘memory’ to our earlier examples.

The Future of Mobile Learning: What Could it Look Like? 

In terms of formal learning, our previous examples of the Uffizi Galleries, the Lake District, the Berlin Wall and Nottingham Castle are easy to extrapolate and imagine. Subject to a mediating educational layer, learners would each be in touch with other learners, helping each other in personalised versions of the same task. They could receive background information, ideas, recommendations, feedback and suggestions, cross-referenced with deadlines, schedules and assignments from their university LMS, all based on the cumulative history of their individual and social interactions and activities. 

In terms of community learning or visitor attractions, systems could be created that encourage interactive, informal learning. For example, a living local history or 3D community poem spread around in the air, held together by links and folksonomies15, perhaps using tags to connect ideas, a living virtual world overlaying the real one. These systems could also support more prosaic purely educational applications, combining existing literary, artistic or historical sources with personal reactions or recollections.

Technically, this means accessing the mobile phone’s contextual data, sometimes supplemented by other simple mobile data communications, for context. It also requires querying a relational database16 to retrieve history and constraints, and perhaps an institutional LMS to retrieve assignments, timetables and course notes. AI can then be prompted to bring these together for some educational activity. Certainly, a proof of concept is eminently feasible. The expertise and experience of the three core disciplines are still out there and only need to be connected, tasked and funded.
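As a rough proof-of-concept sketch only (the table, columns and prompt template are hypothetical assumptions, not an existing system), the three layers can be wired together in a few lines: the phone supplies the current location, a relational query supplies the history, and the two are assembled into a prompt an AI model could act on:

```python
import sqlite3

# Hypothetical schema: where the learner has been and for how long.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE visits (learner TEXT, place TEXT, minutes INT)")
db.executemany("INSERT INTO visits VALUES (?, ?, ?)", [
    ("ada", "Brandenburg Gate", 12),
    ("ada", "Rembrandt's house", 5),
])

def build_prompt(learner, location):
    """Merge phone context (current location) with database 'memory'
    (visit history) into a single prompt for an AI model."""
    rows = db.execute(
        "SELECT place, minutes FROM visits WHERE learner = ?"
        " ORDER BY minutes DESC",
        (learner,),
    ).fetchall()
    history = ", ".join(f"{p} ({m} min)" for p, m in rows)
    return (
        f"The learner is currently at {location}. "
        f"Previously visited: {history}. "
        "Suggest one nearby site and explain its relevance."
    )

prompt = build_prompt("ada", "Nottingham Castle")
print(prompt)
```

In any real deployment the prompt would go to an AI model and the database would sit behind the institutional LMS; the point of the sketch is only that the ‘memory’ and ‘context’ layers are ordinary, well-understood technology.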

Conclusions and Concerns

This piece sketches some broad educational possibilities once we enlist AI to support various earlier kinds of contextual mobile learning. Specific implementations and developments must address considerable social, legal, ethical and regulatory concerns and requirements. The earlier generation of projects might have already worked with these, privacy and surveillance being the obvious ones. Still, AI adds an enormous extra dimension to these, and there are other concerns like digital over-saturation, especially of children and vulnerable adults.

Nonetheless, this convergence of AI, contextual mobile data and educational expertise promises a future where learning is not confined to traditional settings but is a fluid, intelligent and deeply embedded aspect of our daily lives, making education more effective, accessible and aligned with individual and societal needs.


  1. There is considerable literature, including:
    Special editions: Research in Learning Technology, Vol. 17, 2009. 
    Review articles: 
    Kukulska-Hulme, A., Sharples, M., Milrad, M., Arnedillo-Sanchez, I. & Vavoula, G. (2009). Innovation in mobile learning: A European perspective. International Journal of Mobile and Blended Learning, 1(1), 13–35.
    Aguayo, C., Cochrane, T. & Narayan, V. (2017). Key themes in mobile learning: Prospects for learner-generated learning through AR and VR. Australasian Journal of Educational Technology, 33(6).
    Edited books: Traxler, J. & Kukulska-Hulme, A. (Eds) (2015), Mobile Learning: The Next Generation, New York: Routledge. (Also available in Arabic, 2019.) 
    More philosophically, Traxler, J. (2011) Context in a Wider Context, Medienpädagogik, Zeitschrift für Theorie und Praxis der Medienbildung. The Special Issue entitled Mobile Learning in Widening Contexts: Concepts and Cases (Eds.) N. Pachler, B. Bachmair & J. Cook, Vol. 19, pp. 1-16. ↩︎
  2. Meaning, ‘real-life’ settings. ↩︎
  3. Meaning, ‘real-life’ tasks. ↩︎
  4. Meaning, learning tailored to each separate individual learner. ↩︎
  5. Educational technology researchers distinguish between what teachers say, what they ‘espouse’, and what they actually do, what they ‘enact’, usually something far more conservative or traditional.  ↩︎
  6. An educational philosophy based on learners actively building their knowledge through experiences and interactions. ↩︎
  7. A variant of constructivism that believes that learning is created through social interactions and through collaboration with others. For an excellent summary of both, see: https://www.simplypsychology.org/constructivism.html ↩︎
  8. For an explanation, see: https://smartasset.com/investing/the-economics-of-mobile-apps ↩︎
  9. A common term among computing professionals, referring to whether or not different systems, such as hardware, software, applications and peripherals will actually work together, or whether it would be more like trying to fit a UK plug into an EU socket. ↩︎
  10. A more detailed account is available at: https://medium.com/@Jisc/what-killed-the-mobile-learning-dream-8c97cf66dd3d ↩︎
  11. For an explanation, see: https://en.wikipedia.org/wiki/Subprime_mortgage_crisis ↩︎
  12. For an explanation, see: 2010 UK quango reforms – Wikipedia, which impacted Becta, the LSDA, Jisc and other edtech supporters. ↩︎
  13. For an explanation, see: https://en.wikipedia.org/wiki/Diffusion_of_innovations ↩︎
  14. The location awareness of neighbouring mobile phones could extend physical or geographical proximity to embrace social proximity, meaning learners who are socially connected, or educational proximity, meaning learners working on similar tasks. The latter idea connects to the notions of ‘scaffolding’, ‘the more knowledgeable other’ and ‘the zone of proximal development’ of the theorist Vygotsky. For more, see: https://en.wikipedia.org/wiki/Zone_of_proximal_development ↩︎
  15. Databases conventionally have a fixed structure, for example, personal details based on forename, surname, house name, street name and so on, with no choice. Folksonomies, by contrast, are defined by the user, often on the fly. For example, tagging with labels such as ‘people I like’, ‘people nearby’, ‘people with a car’. Diigo, a social bookmarking service, uses tagging to implement a folksonomy. ↩︎
  16. Relational databases, unlike ‘flat’ databases based solely on a file, capture relationships, such as a teacher working in a college or a student enrolling in a course, and include all the various individual teachers, courses, students and colleges. ↩︎

About Avallain

At Avallain, we are on a mission to reshape the future of education through technology. We create customisable digital education solutions that empower educators and engage learners around the world. With a focus on accessibility and user-centred design, powered by AI and cutting-edge technology, we strive to make education engaging, effective and inclusive.

Find out more at avallain.com

About TeacherMatic

TeacherMatic, a part of the Avallain Group since 2024, is a ready-to-go AI toolkit for teachers that saves hours of lesson preparation by using scores of AI generators to create flexible lesson plans, worksheets, quizzes and more.

Find out more at teachermatic.com


Contact:

Daniel Seuling

VP Client Relations & Marketing

dseuling@avallain.com

Who Owns ‘Truth’ in the Age of Educational GenAI?

As generative AI becomes more deeply embedded in digital education, it no longer simply delivers knowledge; it shapes it. What counts as truth, and whose truth is represented, becomes increasingly complex. Rather than offering fixed answers, this piece challenges educational technologists to confront the ethical tensions and contextual sensitivities that now define digital learning.


Author: Prof. John Traxler, UNESCO Chair, Commonwealth of Learning Chair and Academic Director of the Avallain Lab

St. Gallen, May 23, 2025 – Idealistically, perhaps, teaching and learning are about sharing truths: sharing facts, values, ideas and opinions. Over the past three decades, digital technology has been increasingly involved or implicated in teaching and learning, and increasingly involved or implicated in shaping the truths, the facts, values, ideas and opinions that are shared. Truth seems increasingly less absolute, stable and reliable, and digital technology seems increasingly less neutral and passive.

The emergence of powerful and easily available AI, both inside education and in the societies outside it, only amplifies and accelerates the instabilities and uncertainties around truth, making it far less convincing for educational digital technologists to stand aside, hoping that research or legislation or public opinion will understand the difficulties and make the rules. This piece unpacks these sometimes controversial and uncomfortable propositions, providing no easy answers but perhaps clarifying the questions.

Truth and The Digital

Truth is always tricky. It is getting trickier and trickier, and faster and faster. We trade in truth, we all trade in truth; it is the foundation of our communities and our companies, our relationships and our transactions. It is the basis on which we teach and learn, we understand and we act. And we need to trust it.

The last two decades have, however, seen the phrases ‘fake news’ and ‘post truth’ used to make assertions and counter assertions in public spheres, physical and digital, insidiously reinforcing the notion that truth is subjective, that everyone has their own truth. It just needs to be shouted loudest. These two decades also saw the emergence and visibility of communities, big and small, in social media, able to coalesce around their own specific beliefs, their own truths, some benign, many malign, but all claimed by their adherents to be truths.

The digital was conceived ideally as separate and neutral. It was just the plumbing, the pipes and the reservoirs that stored and transferred truths, from custodian or creator to consumers, from teacher to learner. Social media, intrusive, pervasive and universal, changed that, hosting all those different communities.

The following selection of assertions comprises some widely accepted truths, though this will always depend on the community; others are generally recognised as false and some, the most problematic, generate profound disagreement and discomfort.

  • The moon is blue cheese, the Earth is flat
  • God exists
  • Smoking is harmless
  • The holocaust never happened 
  • Prostate cancer testing is unreliable
  • Gay marriage is normal 
  • Climate change isn’t real 
  • Evolution is fake
  • Santa Claus exists 
  • Assisted dying is a valid option
  • Women and men are equal
  • The sun will rise
  • Dangerous adventure sports are character-building
  • Colonialism was a force for progress

These can all be found on the internet somewhere and all represent the data upon which GenAI is trained as it harvests the world’s digital resources. Whether or not each is conceived as true depends on the community or culture.

Saying, ‘It all depends on what you mean by …’ ignores the fundamental issue, and yes, some may be merely circular while others may allow some prevarication and hair-splitting, but they all exist. 

Educational GenAI

In terms of the ethics of educational AI, extreme assertions like ‘the sun will rise’ or ‘the moon is blue cheese’ are not a challenge. If a teacher wants to use educational GenAI tools to produce teaching materials that make such assertions, the response is unequivocal; it is either ‘here are your teaching materials’ or ‘sorry, we can’t support you making that assertion to your pupils’.

Where educational AI needs much more development is in dealing with assertions which, for us, may describe non-controversial truths, such as ‘women and men are equal’ and ‘gay marriage is normal’, but which may be met by different cultures and communities with violently different opinions.

GenAI harvests the world’s digital resources, regurgitating them as plausible, and in doing so, captures all the prejudice, biases, half-truths and fake news already out there in those digital resources. The role of educational GenAI tools is to mediate and moderate these resources in the interests of truth and safety, but we argue that this is not straightforward. If we know more about learners’ culture and contexts and their countries, we are more likely to provide resources with which they are comfortable, even if we are not. 

Who Do We Believe?

Unfortunately, some existing authorities that might have helped, guided and adjudicated these questions are less useful than previously. The speed and power of GenAI have overwhelmed and overtaken them. 

Regulation and guidance have often mixed pre-existing concerns about data security with assorted general principles and haphazard examples of their application, all focused on education in the education system rather than learning outside it. The education system has, in any case, been distracted by concerns about plagiarism and has not yet addressed the long-term issues of ensuring school-leavers and graduates flourish and prosper in societies and economies where AI is already ubiquitous, pervasive, intrusive and often unnoticed. In any case, the treatment of minority communities or cultures within education systems may itself already be problematic.

Education systems exist within political systems. We have to acknowledge that digital technologies, including educational digital technologies, have become more overtly politicised as global digital corporations and powerful presidents have become more closely aligned.

Meanwhile, the conventional cycle of research funding, delivery, reflection and publication is sluggish compared to developments in GenAI. Opinions and anecdotes in blogs and media have instead fed the appetite for findings, evaluations, judgments and positions. Likewise, the conventional cycle of guidance, training and regulation is slow, and many of the outputs have been muddled and generalised. Abstract theoretical critiques have not always had a chance to engage with practical experiences and technical developments, often leading to evangelical enthusiasm or apocalyptic predictions.

So, educational technologists working with GenAI may have little adequate guidance or regulation for the foreseeable future.

Why is This Important?

Educational technologists are no longer bystanders, merely supplying and servicing the pipes and reservoirs of education. Educational technologists have become essential intermediaries, bridging the gap between the raw capabilities of GenAI, which are often indiscriminate, and the diverse needs, cultures and communities of learners. Ensuring learners’ safe access to truth is, however, not straightforward since both truth and safety are relative and changeable, and so educational technologists strive to add progressively more sensitivity and safety to truths for learners. 

At the Avallain Lab, aligned with Avallain Intelligence, our broader AI strategy, we began a thorough and ongoing programme of building ethics controls that identify what are almost universally agreed to be harmful and unacceptable assertions. We aim to enhance our use of educational GenAI in Avallain systems to represent our core values, while recognising that although principles for trustworthy AI may be universal, the ways they manifest can vary from context to context, posing a challenge for GenAI tools. This issue can be mitigated through human intervention, reinforcing the importance of teachers and educators. Furthermore, GenAI tools must be more responsive to local contexts, a responsibility that lies with AI systems deployers and suppliers. While no solution can fully resolve society’s evolving controversies, we are committed to staying ahead in anticipating and responding to them.

About Avallain

At Avallain, we are on a mission to reshape the future of education through technology. We create customisable digital education solutions that empower educators and engage learners around the world. With a focus on accessibility and user-centred design, powered by AI and cutting-edge technology, we strive to make education engaging, effective and inclusive.

Find out more at avallain.com

About TeacherMatic

TeacherMatic, a part of the Avallain Group since 2024, is a ready-to-go AI toolkit for teachers that saves hours of lesson preparation by using scores of AI generators to create flexible lesson plans, worksheets, quizzes and more.

Find out more at teachermatic.com

Contact:

Daniel Seuling

VP Client Relations & Marketing

dseuling@avallain.com

From Rubrics to Results: Making Feedback More Impactful with AI in Language Teaching

Delivering impactful feedback can be one of the most time-consuming parts of language teaching. In this chapter of the Language Teaching Takeoff Webinar Series, we explored how to streamline the feedback process without compromising the quality that learners deserve.

From Rubrics to Results: Making Feedback More Impactful with AI in Language Teaching

London, May 2025 – On May 15th, the Avallain Group hosted the second session in its Language Teaching Takeoff Webinar Series, ‘From Rubrics to Results: How to Provide Impactful Feedback’. The session was moderated by Giada Brisotto, Senior Marketing and Sales Operations Manager at Avallain, and led by Nik Peachey, educator, author and edtech consultant. 

This 30-minute session focused on how the Feedback Generator in the TeacherMatic Language Teaching Edition can assist educators in providing better, faster and more personalised feedback.

The Challenge: High-Quality Feedback Takes Time

Feedback is essential for student progress, but for teachers, it often comes at the cost of time and energy. Nik opened the session by acknowledging this widespread issue and proposing a practical, AI-supported solution: the Feedback Generator.

Unlike general-purpose tools, the TeacherMatic Feedback Generator, designed specifically for language teaching, allows educators to produce constructive feedback that aligns with assignment briefs, CEFR levels and specific pedagogical approaches.

Personalised Feedback at Scale

Nik demonstrated how the Feedback Generator makes it possible to maintain personalisation, even with large groups of students. By inputting a student’s response and the original task prompt, teachers can instantly generate comments that are:

  • Aligned with CEFR levels and subscales (e.g., B1 writing > coherence and cohesion).
  • Tailored to the assessment criteria or rubric used by the teacher or institution.
  • Balanced between strengths and areas for improvement.

The result: fast, personalised and pedagogically relevant feedback.

Designed for Language Teachers, Not Just Generic Use

As it is purpose-built for language educators, the Feedback Generator supports core pedagogical models including:

  • Communicative Language Teaching (CLT)
  • Task-Based Learning (TBL)
  • Presentation Practice Production (PPP)
  • Lexical Approach
  • Test – Teach – Test

This flexibility allows teachers to generate feedback that fits their existing lesson models and institutional standards.

From Feedback to Feedforward

Nik emphasised that effective feedback not only reflects on the past but also guides learners as they progress. The Feedback Generator enables this by including next steps and actionable guidance in the comments, which can be adjusted for tone, focus and complexity.

This ‘feedforward’ approach aligns with current thinking in language assessment: feedback should help students take ownership of their progress and better understand learning objectives.

Why It Matters: Lighter Workload, Deeper Impact

The session closed with a powerful reminder: when tools are designed around the real needs of teachers, not just general AI capabilities, they can genuinely reduce pressure without lowering standards.

By using the Feedback Generator, teachers can:

  • Save time without sacrificing quality
  • Ensure consistency in grading
  • Focus more on student support and less on repetitive admin
  • Promote deeper engagement with learning goals

What’s Next in the Series?

The Language Teaching Takeoff Webinar Series continues in June with ‘Adapting Content for Effective CEFR-Aligned Language Teaching’. You can reserve your seat now. This is a free webinar, but spaces are limited.

Save the Date:

  • Thursday, 12th June
  • 12:00 – 12:30 BST | 13:00 – 13:30 CEST

Register now for the webinar


Discover the TeacherMatic Language Teaching Edition

The Language Teaching Edition of TeacherMatic has been purpose-built to elevate language teaching and learning through sector-specific features designed for real classroom needs. With CEFR-aligned AI generators and support for key pedagogical models such as CLT, Task-Based Learning, PPP and more, it empowers language educators to create high-quality, personalised content efficiently and confidently.

Visit the dedicated landing page to explore all features in depth



UK’s Generative AI: Product Safety Expectations

The UK’s Department for Education publishes its outcomes-oriented safety recommendations for GenAI products, addressed to edtech companies, schools and colleges.

UK’s Generative AI: Product Safety Expectations

St. Gallen, February 20, 2025 – On 22 January 2025, the UK’s Department for Education (DfE) published its Generative AI: Product Safety Expectations. This is part of a broader strategy to establish the country as a global leader in AI, as outlined in the Government’s AI Opportunities Action Plan.

As a leading edtech company with over 20 years of experience, Avallain was invited to participate in consultations on the Safety Expectations. Avallain Intelligence’s focus on clear ethical guidelines for safe AI development, demonstrated through TeacherMatic and other AI-driven solutions across our product portfolio, positioned us well to contribute expert advice to these consultations.

Product Expectations for the EdTech Industry

The Generative AI: Product Safety Expectations define the ‘capabilities and features that GenAI products and systems should meet to be considered safe for use in educational settings.’ The guidelines, aimed primarily at edtech developers, suppliers, schools and colleges, come at a crucial time. Educational institutions need clear frameworks to assess the trustworthiness of the AI tools they are adopting. The independent report, commissioned by Avallain, Teaching with GenAI: Insights on Productivity, Creativity, Quality and Safety, provides valuable insights to help inform these decisions and guide best practices.

Legal Alignment, Accountability and Practical Implementation

The guidelines are specifically intended for edtech companies operating in England. While not legally binding, the text links the product expectations to existing UK laws and policies, such as the UK GDPR, Online Safety Act and Keeping Children Safe in Education, among others. This alignment helps suppliers, developers and educators navigate the complex legal landscape. 

From an accountability point of view, the DfE states that, ‘some expectations will need to be met further up the supply chain, but responsibility for assuring this will lie with the systems and tools working directly with schools and colleges.’ Furthermore, the guidelines emphasise that the expectations are focused on outcomes, rather than prescribing specific approaches or solutions that companies should implement.

Comparing Frameworks and an Overview of Key Categories

In line with other frameworks for safe AI, such as the EU’s Ethics Guidelines for Trustworthy AI, the Generative AI: Product Safety Expectations are designed to be applied by developers and considered by educators. However, unlike the EU’s guidelines, which are field-agnostic and principles-based, the DfE’s text is education-centred and structured around precise safety outcomes. This makes it more concrete and focused, though it is less holistic than the EU framework, leaving critical areas such as societal and environmental well-being out of its scope.

The guidance includes a comprehensive list of expectations organised under seven categories, summarised in the table below. The first two categories, ‘Filtering’ and ‘Monitoring and Reporting’, are specifically relevant to child-facing products and stand out as the most distinctive parts of the document, as they tackle particular risk situations that are not yet widely covered.

The remaining categories, ‘Security’, ‘Privacy and Data Protection’, ‘Intellectual Property’, ‘Design and Testing’ and ‘Governance’, apply to both child- and teacher-facing products. They are equally critical, as they address these more common concerns while considering the specific educational context in which they are implemented.

Collaboration and Future Implications

By setting clear safety expectations for GenAI products in educational settings, the DfE provides valuable guidance to help edtech companies and educational institutions collaborate more effectively during this period of change. As safe GenAI measures become market standards, it is important to point out that the educational community also needs frameworks that explore how this technology can foster meaningful content and practices across a diverse range of educational contexts.


Generative AI: Product Safety Expectations — Summary

  • Filtering
    1. Users are effectively and reliably prevented from generating or accessing harmful and inappropriate content.
    2. Filtering standards are maintained effectively throughout the duration of a conversation or interaction with a user.
    3. Filtering will be adjusted based on different levels of risk, age, appropriateness and the user’s needs (e.g., users with SEND).
    4. Multimodal content is effectively moderated, including detecting and filtering prohibited content across multiple languages, images, common misspellings and abbreviations.
    5. Full content moderation capabilities are maintained regardless of the device used, including BYOD and smartphones when accessing products via an educational institutional account.
    6. Content is moderated based on an appropriate contextual understanding of the conversation, ensuring that generated content is sensitive to the context.
    7. Filtering should be updated in response to new or emerging types of harmful content.
  • Monitoring and Reporting
    1. Identify and alert local supervisors to harmful or inappropriate content being searched for or accessed.
    2. Alert and signpost the user to appropriate guidance and support resources when access to prohibited content is attempted (or succeeds).
    3. Generate a real-time user notification in age-appropriate language when harmful or inappropriate content has been blocked, explaining why this has happened.
    4. Identify and alert local supervisors of potential safeguarding disclosures made by users.
    5. Generate reports and trends on access and attempted access of prohibited content, in a format that non-expert staff can understand and that does not place an undue burden on local supervisors.
  • Security
    1. Offer robust protection against ‘jailbreaking’ by users trying to access prohibited material.
    2. Offer robust measures to prevent unauthorised modifications to the product that could reprogram the product’s functionalities.
    3. Allow administrators to set different permission levels for different users.
    4. Ensure regular bug fixes and updates are promptly implemented.
    5. Sufficiently test new versions or models of the product to ensure safety compliance before release.
    6. Have robust password protection or authentication methods.
    7. Be compatible with the Cyber Security Standards for Schools and Colleges.
  • Privacy and Data Protection
    1. Provide a clear and comprehensive privacy notice, presented at regular intervals in age-appropriate formats and language, with information on:
      • The type of data: why and how it is collected, processed, stored and shared by the generative AI system.
      • Where data will be processed, and whether appropriate safeguards are in place if this is outside the UK or EU.
      • The relevant legislative framework that authorises the collection and use of data.
    2. Conduct a Data Protection Impact Assessment (DPIA) during the generative AI tool’s development and throughout its life cycle.
    3. Allow all parties to fulfil their data controller and processor responsibilities proportionate to the volume, variety and usage of the data they process and without overburdening others.
    4. Comply with all relevant data protection legislation and ICO codes and standards, including the ICO’s age-appropriate design code if they process personal data.
    5. Not collect, store, share or use personal data for any commercial purposes, including further model training and fine-tuning, without confirmation of an appropriate lawful basis.
  • Intellectual Property
    1. Unless there is permission from the copyright owner, inputs and outputs should not be:
      • Collected
      • Stored
      • Shared for any commercial purposes, including (but not limited to) further model training (including fine-tuning), product improvement and product development.
    2. In the case of children under the age of 18, it is best practice to obtain permission from the parent or guardian. In the case of teachers, this is likely to be their employer—assuming they created the work in the course of their employment.
  • Design and Testing
    1. Sufficient testing with a diverse and realistic range of potential users and use cases is completed.
    2. Sufficient testing of new versions or models of the product to ensure safety compliance before release is completed.
    3. The product should consistently perform as intended.
  • Governance
    1. A clear risk assessment will be conducted for the product to assure safety for educational use.
    2. A formal complaints mechanism will be in place, addressing how safety issues with the software can be escalated and resolved in a timely fashion.
    3. Policies and processes governing AI safety decisions are made available.


Avallain Reinforces its Commitment to Research-Driven Solutions with a Newly Commissioned GenAI Report

How is GenAI being integrated into schools to enhance teaching and learning? ‘Teaching with GenAI: Insights on Productivity, Creativity, Quality and Safety’ delves into this critical question by exploring the opportunities GenAI offers, the challenges it poses and how it’s shaping the future of education.

Avallain Reinforces its Commitment to Research-Driven Solutions with a Newly Commissioned GenAI Report

St. Gallen, January 30, 2025 – Education technology pioneer Avallain introduces Teaching with GenAI: Insights on Productivity, Creativity, Quality and Safety. This independent report, commissioned by the Avallain Group and produced by Oriel Square Ltd, provides valuable insights for educators and policymakers alike.

This timely and comprehensive report explores how generative AI is being integrated into schools to enhance teaching and learning outcomes and the critical opportunities and challenges it presents.

Professor Rose Luckin, of University College London and Founder of Educate Ventures Research, says the report is ‘an essential read for any education leader navigating the AI landscape.’

Navigating the Opportunities, Challenges and Risks of GenAI

The ‘Teaching with GenAI: Insights on Productivity, Creativity, Quality and Safety’ report provides detailed insights into how GenAI saves time and boosts efficiency, allowing educators to streamline workflows and dedicate more time to impactful teaching. It delves into the tools and training needed to create meaningful learning materials, providing practical advice for designing engaging and effective content. The report examines how GenAI fosters creativity and innovation in teaching practices, encouraging educators to reimagine their instructional approaches.

Beyond this, the report also stresses the importance of quality control in GenAI applications, identifying areas where oversight is essential to ensure high standards in AI-generated content. Critical advice is offered around data security and tackling inbuilt bias, helping educators and institutions confidently address these key concerns. More importantly, the report provides actionable recommendations on how schools and organisations can effectively integrate and apply GenAI to maximise its potential while ensuring ethical and responsible use.

As Professor John Traxler, Academic Director of the Avallain Lab, explains, ‘While schools and educators acknowledge the potential of GenAI tools to assist in key pedagogical tasks, they also express concerns about content accuracy, the risk of perpetuating biases and the impact of these tools on their evolving role in the classroom. This underscores the need to provide educators with GenAI solutions tailored to educational contexts and the critical analysis skills required to engage with these technologies safely and effectively.’

A Commitment to Research-Driven Solutions

The rapid rise of GenAI has introduced both unprecedented possibilities and complex challenges in the educational landscape. With a long history of developing educator-led technology, Avallain has always believed that research-driven approaches are essential to ensuring technology supports learning outcomes.

‘This report reflects our commitment to research-driven solutions that empower educators. By exploring the benefits, potential and challenges of GenAI through the experiences of teachers and specialists, we aim to provide valuable insights and actionable recommendations to the educational community. Together, we are navigating this transformative field to deliver technology that ethically and safely supports teachers and students,’ highlights Ignatz Heinz, President and Co-Founder of Avallain.

Over 50% of teachers in England use AI tools to reduce workload, and 40% use them to personalise learning content.

Avallain’s Approach to Ethical and Safe GenAI Integration

As GenAI enters classrooms, Avallain is doubling down on this commitment with this report and its broader AI strategy, Avallain Intelligence, which aims to responsibly integrate AI across the entire edtech value chain. This initiative is built on the principle that ethical AI is essential, not only for achieving better outcomes, enhanced productivity and safe, innovative learner interactions but, more importantly, as a foundation for the reliable adoption of these tools in our societies, particularly in our educational systems.

Carles Vidal, Avallain Lab Business Director, explains further, ‘Avallain’s unwavering commitment to Ethical AI is reflected in a range of AI solutions designed in alignment with the Ethical Key Requirements, outlined in the EU’s Ethics Guidelines for Trustworthy AI. These guidelines uphold the principles of respect for human autonomy, prevention of harm, fairness and explicability.’

The newly commissioned report aligns with this AI strategy by exploring critical considerations for ethical, safe and effective implementation. It provides actionable recommendations for schools and educators to adopt these technologies confidently while ensuring responsible use.

Leveraging Insights to Drive GenAI in Education

Avallain strives to remain at the forefront of educational innovation, actively monitoring and analysing educators’ difficulties as they integrate generative AI into their teaching practices. With a particular focus on ethics and pedagogy, these insights shape the ongoing development of Avallain’s next generation of GenAI features implemented in our solutions. Explore the full report and gain a deeper understanding of how GenAI can enhance teaching and learning. 

Download your free copy here.

Register Now for Upcoming Live Report Briefings

As part of our commitment to supporting educators and institutions, look out for upcoming report briefings to explore key insights from the report, including practical and ethical steps for integrating GenAI effectively. This is an opportunity to engage in discussions about the future of AI in education.

Secure my place


EU’s Guidelines for Trustworthy AI: A Reliable Framework for Edtech Companies

This post is the first in a series highlighting the most relevant recommendations and regulations on ethics and AI systems produced by international institutions and educational agencies worldwide. Our goal is to provide updated and actionable insights to all stakeholders, including designers, developers and users involved in the field.

EU’s Guidelines for Trustworthy AI: A Reliable Framework for Edtech Companies

A look into the EU’s ethical recommendations and their possible adaptation to Gen-AI-based educational content creation services.

Author: Carles Vidal, Business Director of the Avallain Lab

Since the release of OpenAI’s ChatGPT in November 2022, the edtech sector has focused its efforts on delivering products and services that leverage the creative potential of large language models (LLMs) to offer personalised and localised learning content to users.

LLMs have prompted the educational content industry to reassess traditional editorial processes, and have also transformed the way in which teachers and professors plan, create and distribute classroom content across schools and universities.

The generalised uptake of Generative AI (GenAI) technologies in education calls for ensuring that their design, development and use are based on a thorough understanding of the ethical implications at stake, a clear risk analysis and the application of corresponding mitigating strategies.

We start by discussing the work of the High-Level Expert Group on AI (HLEG on AI), appointed by the European Commission in 2018 to support the implementation of the European strategy on AI by providing policy recommendations on AI-related topics. The ‘Ethics Guidelines for Trustworthy AI’ (2019) and the complementary ‘Assessment List for Trustworthy AI for Self-Assessment’ (2020) are two non-binding texts that can be read as a single framework.

1. Ethics Guidelines for Trustworthy AI

From an AI practitioner’s point of view, the guidelines and the assessment list for trustworthy AI are strategic tools with which companies can build their own policies to ensure the implementation of ethical AI systems. In this sense, the work of the HLEG on AI is presented as a generalist model that can, and should, be adapted to the context of each specific AI system. Additionally, due to its holistic approach, the framework addresses not only the technological requirements of AI systems but also considers all actors and processes involved throughout the entire life cycle of the AI.

As the HLEG on AI states, the guidelines’ “foundational ambition” is the achievement of trustworthy AI, which requires AI-systems, actors and processes to be “lawful, ethical, and robust”. Having said this, the authors explicitly exclude legality from the scope of the document, deferring to the corresponding regulations, and focus on addressing the ethical and robust dimensions for trustworthy AI-systems.

The framework is structured around three main conceptual levels, progressing from more abstract to more concrete. At the top level, defining the foundations of trustworthy AI, four “ethical imperatives” are established, to which all AI systems, actors, and processes must adhere:

  1. Respect for Human Autonomy
  2. Prevention of Harm
  3. Fairness 
  4. Explicability

At a second level, the framework introduces a set of seven key requirements for the realisation of trustworthy AI. The list is neither exhaustive nor presented in a hierarchical order. 

  1. Human Agency and Oversight
  2. Technical Robustness and Safety
  3. Privacy and Data Governance 
  4. Transparency
  5. Diversity, Non-discrimination and Fairness
  6. Societal and Environmental Wellbeing
  7. Accountability

The relevance of these key requirements extends beyond these guidelines: they also inform recital 27 and, implicitly, article 1 of the EU AI Act, published in April 2024.

The guidelines suggest a range of technical and non-technical methods for their implementation (e.g., architectures for trustworthy AI, codes of conduct, standardisation, diversity and inclusive design) that actors can use to enforce the mentioned requirements.

Achieving trustworthy AI is an ongoing and iterative process that requires continuous assessment and adaptation of the methods employed to implement key requirements in dynamic environments.

2. Assessment List for Trustworthy AI

The third level of the framework consists of an “Assessment List for Trustworthy AI” (ALTAI), intended to operationalise the key requirements. It is primarily addressed to developers and deployers of AI systems that interact directly with users.

The ALTAI list breaks down the key requirements into more concrete categories. It provides a range of self-assessment questions for each of these, aiming to spark reflection around every aspect. Each individual actor is left to decide on the corresponding mitigating measures.

For example, the key requirement of Diversity, Non-Discrimination and Fairness is divided into three subsections:

  1. Avoidance of Unfair Bias
  2. Accessibility and Universal Design
  3. Stakeholder Participation

In turn, for Avoidance of Unfair Bias, a series of self-assessment questions is proposed, a sample of which is listed below:

  • Did you establish a strategy or a set of procedures to avoid creating or reinforcing unfair bias in the AI system, both regarding the use of input data as well as for the algorithm design? 
  • Did you consider diversity and representativeness of end-users and/or subjects in the data? 
    • Did you test for specific target groups or problematic use cases? 
    • Did you research and use publicly available technical tools, that are state-of-the-art, to improve your understanding of the data, model and performance? 
    • Did you assess and put in place processes to test and monitor for potential biases during the entire lifecycle of the AI system (e.g., biases due to possible limitations stemming from the composition of the used data sets, such as a lack of diversity or non-representativeness)?
    • Where relevant, did you consider diversity and representativeness of end-users and/or subjects in the data?

The guidelines also suggest that companies incorporate their assessment processes into a governance mechanism, involving both top management and operations. The text even proposes a governance model, describing roles and responsibilities. 

The assessment list is not intended to be exhaustive and follows a generalist (horizontal) approach. The purpose of the HLEG on AI is to provide a set of questions that help all AI-system actors operationalise the more abstract key requirements, and to encourage them to adapt the assessment list to the specific needs of their sector and continuously update it.

In accordance with this vision, and grounded in the same framework, the European Commission published the ‘Ethical Guidelines on the Use of AI and Data in Teaching and Learning for Educators’ in September 2022. This document is a valuable resource for teachers and educators, helping them to reflect on AI and critically assess whether the AI systems they are using comply with the key requirements for trustworthy AI.

3. Adapting and Implementing the Guidelines

Having analysed the work of the HLEG on AI, we understand that it is proposed as a framework that companies like Avallain, along with other AI-system deployers, can build upon to create an adapted version that ensures the ethical design, development, and use of AI tools for the educational content creation community.

To this end, we support the framework’s recommendation of establishing a multidisciplinary body within companies to define ethical and robustness standards, identify the corresponding mitigating interventions, and ensure their implementation across all involved areas. This governing body should play a crucial role in the continuous adaptation of the company’s ethics and AI strategy to future ethical challenges.

About the Avallain Lab

We established the Avallain Lab in 2023 to be an ethically and pedagogically sound academic resource, providing support to Avallain product designers and partners, as well as the wider e-learning community.

This unit operates under the academic leadership of John Traxler and the business direction of Carles Vidal, with the support of an advisory panel that includes Professor Rose Luckin. This experience and expertise allow us to deliver research-informed technology and experiences for learners and teachers, including in the field of AI.

The Avallain Lab is a unique and innovative unit, acting as the interface between the world’s vast, rapidly evolving research outputs, activities, networks and communities and Avallain’s continued ambition to enhance both the pedagogic and technical dimensions of its products and services with relevant medium-term ideas and longer-term concepts.

The Lab supports Avallain’s trials and workshops, informs internal discussion and draws in external expertise. It is building a library of research publications, contributing to blogs and research papers, and presenting at conferences and webinars. Early work focussed on learning analytics and spaced learning, but the current focus is artificial intelligence, specifically ethics, pedagogy and their interactions.

About Carles Vidal

Business Director of the Avallain Lab,
MSc in Digital Education from the University of Edinburgh

Carles Vidal is an educational technologist with more than twenty years of experience in content publishing, specializing in creating e-learning solutions that empower educators and students in K12 and other educational stages. His work has included the publishing direction of learning materials aligned with various curricula across Spain and Latin American countries.

About John Traxler

Academic Director of the Avallain Lab, 
FRSA, MBCS, AFIMA, MIET

John Traxler, FRSA, MBCS, AFIMA, MIET, is Professor of Digital Learning, UNESCO Chair in Innovative Informal Digital Learning in Disadvantaged and Development Contexts and Commonwealth of Learning Chair for innovations in higher education. His papers are cited over 11,000 times and Stanford lists him in the top 2% in his discipline. He has written over 40 papers and seven books, and has consulted for a variety of international agencies including UNESCO, ITU, ILO, USAID, DFID, EU, UNRWA, British Council and UNICEF.

About Rose Luckin

Advisory Panellist of the Avallain Lab,
Doctor of Philosophy – PhD, Cognitive Science and AI

Rosemary (Rose) Luckin is Professor of Learner Centred Design at UCL Knowledge Lab, Director of EDUCATE, and author of Machine Learning and Human Intelligence: The Future of Education for the 21st Century (2018). She has also authored and edited numerous academic papers.  

Dr Luckin’s work centres on the design and evaluation of educational technology. In addition, she is Specialist Adviser to the UK House of Commons Education Select Committee for its inquiry into the Fourth Industrial Revolution.

Her other positions include: 

  • Co-founder of the Institute for Ethical AI in Education
  • Past President of the International Society for AI in Education
  • A member of the UK Office for Students Horizon Scanning panel
  • Adviser to the AI and Robotics panel of the Topol review into the future of the NHS workforce
  • A member of the European AI Alliance
  • Previous Holder of an International Franqui Chair at KU Leuven

Avallain increases impact with strategic investment in AI Platform, TeacherMatic

Education technology provider Avallain today reaffirmed its commitment to responsible generative AI and innovation in education with a number of key announcements.

Avallain, a twenty-year veteran of innovative and impactful edtech, has acquired TeacherMatic, one of Europe’s fastest-growing generative AI toolsets for educators. The acquisition supports Avallain’s broader AI strategy, including remediation and copyright protection, both features developed for its industry-leading content creation tool, Avallain Author.

The Avallain product suite already enables publishers to use the full breadth of the best generative AI while meeting educational, legal and commercial requirements. Over the last year, TeacherMatic has developed one of the most complete AI toolsets to support educators globally, generating everything from lesson plans and flashcards to schemes of work and multiple-choice quizzes – alignable to curricula – at the click of a few buttons. This coming together of TeacherMatic and Avallain forms the basis of a strong partnership of leading-edge, ethical capability applying generative AI for education.

Ursula Suter, Co-Founder and Executive Chairwoman at Avallain, says “We see this joining of forces with TeacherMatic as a crucial step to counter the main risks from generative AI while also benefiting educators and education, in general, in a manner that will cater to high quality educational publishing and learning outcomes. For many years we have been delivering grounded and considered educational innovation. With TeacherMatic, we will continue to do that and more. Our product suite achieves both high-quality education and commercial viability with success for all parties involved.”

Peter Kilcoyne, MD at TeacherMatic, comments: “TeacherMatic was formed by a group of lifelong educators with the aim of making generative AI available and accessible to all teaching staff to help reduce workloads and improve creativity. We are delighted to have been acquired by Avallain, whose expertise and experience in both education and technology will greatly enhance our future developments, improve TeacherMatic as a platform and help us engage with new markets around the world. We believe the ethical, technical and educational principles that drive both Avallain and TeacherMatic make this a partnership that will benefit both organisations, as well as our customers and all teachers and students in the organisations that we support.”

Ignatz Heinz (President & Co-Founder), Ursula Suter (Executive Chairwoman & Co-Founder), Alexis Walter (MD), Monika Morawska (COO), Rahim Hirji (Executive VP) © Mario Baronchelli

In an additional announcement, Professor Rose Luckin has been appointed to the advisory board of Avallain. Rosemary (Rose) Luckin is a Professor at University College London and Founder of Educate Ventures Research (EVR), who has spent over 30 years developing and studying AI for education. She is renowned for her research into the design and evaluation of educational technology and AI.

Rose comments: “Avallain has, for many years, been the quality engine of education for publishers and content providers. I am delighted to support them and provide guidance and direction for Avallain’s products as we step forward into this exciting era of AI within education.”

Finally, Avallain also officially announced the Avallain Lab, with John Traxler as Academic Director. Traxler holds Chairs from the Commonwealth of Learning for innovations in higher education and from UNESCO for Innovative Informal Digital Learning in Disadvantaged and Development Contexts. The Avallain Lab was incubated in 2023 with a remit to provide research and rigour around product development for partners covering everything from learner analytics and accessibility to ethical AI applicability for learners. The Lab will support Avallain’s current partners and operate commercially in partnership with other institutions exploring innovation in educational contexts – and welcomes global collaborators.

Avallain’s announcements today build upon its established commitment to ethical generative AI, which is already available for Avallain Author, its market-leading content authoring tool and its newly launched SaaS Learning Management System (LMS), Avallain Magnet. Current clients can leverage new tools that can automate parts of the editorial workflow while leaving editors and learning designers firmly in control of the process.

About Avallain

Avallain powers some of the most renowned educational brands, including Oxford University Press, Cengage National Geographic Learning, Cambridge University Press, Santillana, Klett and Cornelsen, reaching millions of learners worldwide. Avallain most recently raised €8 million from Round2 Capital Partners and is advised by i5invest. Through the Avallain Foundation, the technology is also made available to partners improving access to quality education.

Find out more at avallain.com

About TeacherMatic

TeacherMatic was formed in 2022 as Innovative Learning Technologies Limited and has developed a suite of generative AI tools for educators. TeacherMatic has been adopted by FE colleges, universities and schools in the UK, and recently partnered with OpenLMS.

Free trials are available at teachermatic.com

For media comment, contact Rahim Hirji, Avallain, rhirji@avallain.com

Contact:
Daniel Seuling
VP Sales & Marketing
dseuling@avallain.com