DfE updates AI safety expectations for schools


Education Secretary Bridget Phillipson has announced stronger safety measures for the use of AI in education.

Announced at the UK AI for Education Summit, the updated AI safety expectations set out in more detail how AI tools used in schools must protect children's mental health and their cognitive, emotional and social development, and protect against manipulation.

Phillipson said: "High profile cases have alerted the world to the risk of a link between unregulated conversational AI and self-harm. So our standards make sure pupils are directed to human support when that's what's needed."

The standards outline the capabilities and features that generative artificial intelligence (AI) products and systems should offer to be considered safe for users in educational settings.

The standards say that AI products used in schools should detect signs of learner distress, including: negative emotional cues in language or behaviour; patterns of use that indicate crisis, such as a sudden escalation in help-seeking; and references to mental health conditions, such as depression, anxiety, psychosis, delusion, paranoia, suicide or self-harm.
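
The standards describe capabilities rather than an implementation, but as a rough illustration the detection requirement could be sketched as a routine like the one below. The cue lists, the threshold and the names (detect_distress, UsageStats) are assumptions made here for illustration only and are not part of the DfE guidance; a real product would rely on properly evaluated classifiers rather than keyword matching.

```python
from dataclasses import dataclass

# Hypothetical cue lists used purely for illustration; not from the DfE standards.
NEGATIVE_CUES = {"hopeless", "worthless", "nobody cares", "can't cope"}
CONDITION_TERMS = {"depression", "anxiety", "psychosis", "delusion",
                   "paranoia", "suicide", "self-harm"}


@dataclass
class UsageStats:
    help_requests_last_hour: int            # recent help-seeking messages
    baseline_help_requests_per_hour: float  # the pupil's typical rate


def detect_distress(message: str, stats: UsageStats) -> set[str]:
    """Return the categories of distress signal present in a message and usage pattern."""
    signals: set[str] = set()
    text = message.lower()

    # Negative emotional cues in language.
    if any(cue in text for cue in NEGATIVE_CUES):
        signals.add("negative_emotional_cues")

    # References to mental health conditions, suicide or self-harm.
    if any(term in text for term in CONDITION_TERMS):
        signals.add("mental_health_reference")

    # A pattern of use indicating crisis: a sudden escalation in help-seeking
    # relative to the pupil's own baseline (threshold chosen arbitrarily here).
    if stats.help_requests_last_hour > 3 * max(stats.baseline_help_requests_per_hour, 1):
        signals.add("crisis_usage_pattern")

    return signals
```

In practice a tool would run a check of this kind over each pupil message and over periodic usage summaries, with any detected signals feeding the response pathway described next.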

If distress is detected, the AI products should follow an appropriate pathway with tiered response actions, such as soft signposting to age-appropriate support pages and resources, and raising a safeguarding flag to the institution's safeguarding lead.
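
Again purely as an illustration, and not a description of how any particular product implements the standard, the tiered pathway can be pictured as a mapping from detected signals to actions. The signal labels, the two tiers and the escalation rule below are hypothetical choices for this sketch; which signals should raise a safeguarding flag is a policy decision for the product and the school.

```python
from enum import Enum, auto


class ResponseTier(Enum):
    SOFT_SIGNPOST = auto()      # show age-appropriate support pages and resources
    SAFEGUARDING_FLAG = auto()  # notify the institution's safeguarding lead


# Hypothetical labels matching the detection sketch above; higher-risk signals
# escalate beyond signposting.
ESCALATION_SIGNALS = {"mental_health_reference", "crisis_usage_pattern"}


def choose_response(signals: set[str]) -> list[ResponseTier]:
    """Map detected distress signals to a tiered list of response actions."""
    if not signals:
        return []

    # Any detected signal gets at least soft signposting to support resources.
    actions = [ResponseTier.SOFT_SIGNPOST]

    # Higher-risk signals additionally raise a flag to the safeguarding lead,
    # directing the pupil towards human support.
    if signals & ESCALATION_SIGNALS:
        actions.append(ResponseTier.SAFEGUARDING_FLAG)

    return actions
```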