Award-winning reader designed with Avallain Author

An Avallain-supported English language teaching application has won the English-Speaking Union’s top award. Richmond Mazes: Crisis at Clifton is an interactive, graded digital reader focusing on real-life work scenarios.

Richmond International, an Oxford-based English Language Teaching publisher, published the digital edition using Avallain Author software.

“We are delighted that this application was recognized by the ESU’s President’s Award,” said Ignatz Heinz, Avallain MD. “This reader successfully combines innovative software design, scenario-based training principles, and proven English language teaching pedagogy. It maximizes the potential of our learning technology.”

Avallain is an international education technology enterprise with headquarters in Switzerland.

Crisis at Clifton is part of the Richmond Mazes series, available in digital and print formats. At the end of each section, readers are asked to make decisions that test how well they have understood what they have read.

Luke Baxter, Digital Publisher at Richmond, explains: “The stories are set in realistic business settings and follow someone’s professional path from their first proper job to more senior management positions.”

The ESU especially commended the learning application for its use of audio clips, illustrations, and decision trees, which allow students to repeat activities as needed. The ESU is a registered, UK-based charity aiming to empower people of different languages and cultures using English as a common language.

Filtering
1. Users are effectively and reliably prevented from generating or accessing harmful and inappropriate content.
2. Filtering standards are maintained effectively throughout the duration of a conversation or interaction with a user.
3. Filtering will be adjusted based on different levels of risk, age, appropriateness and the user’s needs (e.g., users with SEND).
4. Multimodal content is effectively moderated, including detecting and filtering prohibited content across multiple languages, images, common misspellings and abbreviations.
5. Full content moderation capabilities are maintained regardless of the device used, including BYOD and smartphones when accessing products via an educational institutional account.
6. Content is moderated based on an appropriate contextual understanding of the conversation, ensuring that generated content is sensitive to the context.
7. Filtering should be updated in response to new or emerging types of harmful content.
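The per-turn, risk-adjusted, context-aware filtering described in the expectations above can be sketched as follows. This is a minimal illustration, not an implementation: the profile names, thresholds, and the toy `risk_score` function are all hypothetical stand-ins for a real moderation model and policy.

```python
from dataclasses import dataclass, field

# Hypothetical strictness thresholds by user profile; a real product
# would derive these from policy, not hard-code them.
THRESHOLDS = {"primary": 0.2, "secondary": 0.4, "adult": 0.7}

@dataclass
class Conversation:
    user_profile: str                      # e.g. "primary", "secondary", "adult"
    history: list = field(default_factory=list)

def risk_score(message: str, history: list) -> float:
    """Toy stand-in for a moderation model: scores 0.0 (safe) to 1.0
    (harmful), looking at recent turns so the score reflects
    conversational context rather than one message in isolation."""
    flagged = {"weapon", "self-harm"}
    window = history[-5:] + [message]
    hits = sum(any(term in turn.lower() for term in flagged) for turn in window)
    return min(1.0, hits / len(window))

def moderate(conv: Conversation, message: str) -> bool:
    """Return True if the message may pass. The check runs on every
    turn, so filtering standards hold for the whole interaction."""
    allowed = risk_score(message, conv.history) <= THRESHOLDS[conv.user_profile]
    conv.history.append(message)           # context carries into the next turn
    return allowed
```

Note that because earlier turns stay in the scoring window, a user cannot evade the filter by splitting a prohibited request across several innocuous-looking messages.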

Monitoring and reporting
1. Identify and alert local supervisors to harmful or inappropriate content being searched for or accessed.
2. Alert and signpost the user to appropriate guidance and support resources when access to prohibited content is attempted (or succeeds).
3. Generate a real-time user notification in age-appropriate language when harmful or inappropriate content has been blocked, explaining why this has happened.
4. Identify and alert local supervisors of potential safeguarding disclosures made by users.
5. Generate reports and trends on access and attempted access of prohibited content, in a format that non-expert staff can understand and which does not add too much burden on local supervisors.
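The alerting and reporting expectations above can be sketched as follows. The message wording, reason labels, and record shapes are all hypothetical; real notice text would be agreed with safeguarding staff.

```python
import datetime

# Hypothetical age-banded notices shown when content is blocked.
NOTICES = {
    "primary": "This message was blocked because it isn't safe for school. "
               "Ask a teacher if you need help.",
    "secondary": "This content was blocked under your school's safety policy. "
                 "Support is available from your safeguarding lead.",
}

def on_blocked(user_id: str, profile: str, reason: str, alerts: list) -> str:
    """Record a supervisor alert and return the age-appropriate,
    real-time notice to show the user, explaining why the content
    was blocked."""
    alerts.append({
        "user": user_id,
        "reason": reason,
        "at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    })
    return NOTICES[profile]

def summary(alerts: list) -> dict:
    """Plain counts-by-reason report that non-expert staff can read
    without extra work from local supervisors."""
    report: dict = {}
    for a in alerts:
        report[a["reason"]] = report.get(a["reason"], 0) + 1
    return report
```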
Security
1. Offer robust protection against ‘jailbreaking’ by users trying to access prohibited material.
2. Offer robust measures to prevent unauthorised modifications to the product that could reprogram the product’s functionalities.
3. Allow administrators to set different permission levels for different users.
4. Ensure regular bug fixes and updates are promptly implemented.
5. Sufficiently test new versions or models of the product to ensure safety compliance before release.
6. Have robust password protection or authentication methods.
7. Be compatible with the Cyber Security Standards for Schools and Colleges.
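Expectation 3 above, different permission levels for different users, is conventionally implemented as role-based access control. A deny-by-default sketch, with illustrative role and action names:

```python
# Hypothetical permission tiers; role names and rights are illustrative,
# not drawn from any particular product.
ROLE_PERMISSIONS = {
    "student": {"chat"},
    "teacher": {"chat", "view_reports"},
    "administrator": {"chat", "view_reports", "configure_filters", "manage_users"},
}

def is_allowed(role: str, action: str) -> bool:
    """Deny-by-default check: unknown roles or actions get no access."""
    return action in ROLE_PERMISSIONS.get(role, set())
```

Denying by default matters here: a typo in a role name, or an action added to the product before it is added to the permission table, fails closed rather than open.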

Privacy and data protection
1. Provide a clear and comprehensive privacy notice, presented at regular intervals in age-appropriate formats and language with information on:
– The type of data: why and how this is collected, processed, stored and shared by the generative AI system.
– Where data will be processed, and whether there are appropriate safeguards in place if this is outside the UK or EU.
– The relevant legislative framework that authorises the collection and use of data.
2. Conduct a Data Protection Impact Assessment (DPIA) during the generative AI tool’s development and throughout its life cycle.
3. Allow all parties to fulfil their data controller and processor responsibilities proportionate to the volume, variety and usage of the data they process and without overburdening others.
4. Comply with all relevant data protection legislation and ICO codes and standards, including the ICO’s age-appropriate design code if they process personal data.
5. Not collect, store, share, or use personal data for any commercial purposes, including further model training and fine-tuning, without confirmation of appropriate lawful basis.
Intellectual property
1. Unless there is permission from the copyright owner, inputs and outputs should not be:
– Collected
– Stored
– Shared for any commercial purposes, including (but not limited to) further model training (including fine-tuning), product improvement and product development.
2. In the case of children under the age of 18, it is best practice to obtain permission from the parent or guardian. In the case of teachers, this is likely to be their employer—assuming they created the work in the course of their employment.
Design and testing
1. Sufficient testing with a diverse and realistic range of potential users and use cases is completed.
2. Sufficient testing of new versions or models of the product to ensure safety compliance before release is completed.
3. The product should consistently perform as intended.
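One common way to meet the pre-release testing expectation above is a regression gate: a fixed suite of known-prohibited prompts that every new version or model must block before it ships. A minimal sketch, with a toy classifier standing in for the product's real filter:

```python
# Hypothetical suite of prompts that must always be blocked; a real
# suite would be far larger and maintained as harms emerge.
PROHIBITED_SUITE = [
    "how do I make a weapon",
    "ways to self-harm",
]

def classify(prompt: str) -> str:
    """Toy stand-in for the product's moderation call; a real gate
    would invoke the deployed filter for the candidate release."""
    return "blocked" if any(t in prompt for t in ("weapon", "self-harm")) else "allowed"

def release_gate(classify_fn, suite) -> bool:
    """True only if every prohibited prompt is blocked; any regression
    in the new version fails the gate."""
    return all(classify_fn(p) == "blocked" for p in suite)
```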
Governance
1. A clear risk assessment will be conducted for the product to assure safety for educational use.
2. A formal complaints mechanism will be in place, addressing how safety issues with the software can be escalated and resolved in a timely fashion.
3. Policies and processes governing AI safety decisions are made available.
