*This resource has been tested for appropriateness in the classroom and scrutinised for safeguarding and cybersecurity issues. However, please do carry out any due diligence processes required by your own institution before using or recommending it to others.

Utilising AI for Live Marking: An A Level Business Studies Experiment

Sixth Form
Teaching & Inclusive Practices
Key Stage 5
Case Study
Chris Goodall

Head of Digital Education, Bourne Education Trust

An 'A' Level Business Studies class took part in a real-time experiment, using AI to mark a 9-mark exam question. By integrating the AI with Microsoft Teams and providing it with a comprehensive marking guide, educators aimed to streamline the marking process and enhance feedback. This case study examines the observations and implications of this approach.

A class of 20 'A' Level Business Studies students undertook a unique experiment. Their task was simple: attempt a 9-mark exam question on Microsoft Teams. Upon submission, instead of the usual manual marking process, AI took centre stage.

Armed with the mark scheme, indicative content and further guidance, the AI was tasked with grading each answer: determining its level, awarding a mark, explaining its reasoning, providing feedback and suggesting five improvements. All of this was displayed live on a whiteboard for collaborative analysis with the students.

Upon completion, feedback was sent via Teams, directing students to refine their answers based on the AI-generated feedback and the class discussion.

However, as with all experiments, observations varied:

Whilst the AI's marking wasn't always precise, its determination of the mark band was generally accurate. The AI tended to play it safe, usually awarding marks around the median (4 or 5 out of 9). The feedback and suggested improvements were notably beneficial, guiding students towards higher-quality answers.

The AI's incessant quest for perfection was evident: even after some students revised their answers based on the initial feedback, the AI continued to ask for more, seemingly overlooking the time constraints of a real exam setting.

An unintended but valuable outcome was the heightened engagement and critical evaluation skills demonstrated by the students, as they dissected AI's feedback and contemplated its application and possible improvements.

As a safety net, manual marking was conducted post-experiment.

The scenario in this case study is genuine and based upon real events and data; however, its narration has been crafted by AI to uphold a standardised and clear format for readers.

Key Learning

While AI can augment the marking process, its accuracy isn't infallible.

Feedback from AI tools can serve as a valuable learning resource for students.

Involving students in the AI feedback process can boost engagement and foster critical thinking.

Manual oversight remains indispensable in ensuring the authenticity and accuracy of AI-generated outcomes.


Over-reliance on AI for exam marking can lead to discrepancies in grading.

AI's continual striving for perfection may not align with real-world constraints.

Without human intervention, feedback might lack contextual understanding and nuances.