
AI in education: can it raise us up or will it divide us further? A discussion paper for the Centre for Progressive Policy March 2024

Panel Insight · Opinion Piece · Advisory
Roger Taylor, AI Ethics Adviser

This new paper by Roger Taylor, inaugural Chair of the Centre for Data Ethics and Innovation (2018-21), was commissioned by the Centre for Progressive Policy to look at the opportunities for using AI tools in schools, careers advice and lifelong learning. The paper signals that without government intervention, the positive impact that EdTech could make will be lost and the attainment gap in education will instead grow wider.

Roger's discussion paper on AI in education recommends personalised AI tutoring in schools and colleges, but only if we take steps to stop it widening gaps in educational attainment or increasing economic disparities.

Time will tell whether AI EdTech is as good as hoped. Finding the right relationship between the AI, the teacher and the learners remains challenging. But the potential is extraordinary - both good and bad - and adoption is picking up. So we should put the right foundations in place now.

Contracts for AI tutoring systems must protect privacy and ensure that systems can be scrutinised. Ideas of how to apply that scrutiny must evolve over time if, as expected, AI affects how we understand success in education as well as how we learn. But we can set down some principles.

We will need to understand the effect of using products in real-world settings, not just product test data. We will need to understand the impact on different pupils and different communities in ways that allow comparison between products, between settings and between uses. And we will need to have information that is timely and actionable for when problems arise, as they will. Nothing in the way we currently oversee education equips us for this.

Contract terms and oversight requirements should be coordinated by national governments, which will need to develop new capabilities in this area. Where does the money for that come from?

We could start by getting rid of low-value data systems. The National Reference Test costs a few million a year and provides data that lacks the precision or breadth to be of much use for the one purpose for which it was created: informing GCSE grade boundaries. It should be scrapped, and exam boards should strengthen their own mechanisms with the use of comparative judgement.

The Reception Baseline Assessment is another strong candidate for the bin. The plan is to use it for performance management. But the data - noisy at best - depends, for its accuracy, on people who will be judged by the results. This is a well-trodden path to failure and waste.

Key Learning

The adoption of AI into schools and colleges is a moment to think carefully about how we oversee education. If AI takes off as hoped, we will need to rely less on blunt indicators and spend more time understanding education with sufficient precision to manage AI-associated risks.

It will take time and money to develop the right ways of working. But fortunately, there are some things we could usefully stop doing as well as things we need to start doing.

Risks

Roger identifies several risks associated with the adoption of AI in education:

  1. Privacy concerns related to data collection and analysis by AI systems.
  2. Intrusive assessment by AI tutoring systems that may lead to unintended consequences.
  3. Potential misuse of data that could disadvantage learners and citizens.
  4. Uncertainty regarding the impact of AI on human behaviour and learning outcomes.
  5. The need for transparent and trustworthy policies to govern the use of AI in education.

These risks highlight the importance of careful consideration and oversight in integrating AI technologies into educational settings to ensure positive outcomes for all stakeholders.