The assessment integrity problem is no longer theoretical
If everyone knows the system is strained, why are we still pretending it isn’t?
For years, discussions about academic integrity lived at a comfortable distance from everyday teaching practice. They surfaced periodically, usually around plagiarism, contract cheating, or misconduct cases, and were handled as exceptions rather than signals. That distance no longer exists. Integrity is now a live, daily design problem. Not because students have suddenly become less ethical, but because the conditions under which learning is produced, documented, and assessed have changed faster than our systems have adapted. What was once hypothetical is now operational.
Generative AI didn’t create the integrity problem; it exposed it. Long before large language models entered the classroom, assessment systems were already under strain. We were relying heavily on written artefacts as proxies for learning, often detached from the contexts in which they were produced. Authorship was inferred rather than understood. Process was implied rather than examined. AI simply removed the last layer of plausible deniability. When students can generate fluent, structurally sound text in seconds, the limits of artefact-based assessment become impossible to ignore. Detection tools promise reassurance, but they offer only the illusion of control – brittle, adversarial, and always one step behind. The deeper issue isn’t technological. It’s epistemic. We have been mistaking outputs for understanding.
Much of the current response to integrity anxiety has focused on fortification:
tighter controls
more surveillance
more explicit prohibitions
more adversarial language
These approaches may reduce risk in narrow cases, but they do so at a cost. They position students as potential offenders rather than learners, and staff as enforcers rather than educators. They also scale poorly, require constant updating, and disproportionately affect students who are already navigating anxiety, language barriers, or unfamiliar academic cultures. Most importantly, they fail to address the core question: how do we know what a student actually understands?
If we stop treating integrity as a compliance issue and start treating it as a design problem, different options emerge.
Integrity is strongest when:
learning processes are visible
decision-making can be articulated
use of tools (including AI) is discussable rather than hidden
assessment invites explanation rather than concealment
This is where learning conversations begin to matter. A conversation makes authorship legible without requiring confession. It allows students to situate their work, explain how tools were used, and reflect on choices and limits. It shifts the emphasis from whether something was produced independently to how understanding was constructed. In other words, integrity becomes something that can be demonstrated, not merely asserted.
When assessment includes structured dialogue:
students are more willing to talk openly about their process, including AI use
misunderstandings surface quickly and productively
superficial fluency is exposed without accusation
genuine insight becomes easier to recognise
This doesn’t eliminate the need for standards or consequences. It does, however, make those standards easier to apply fairly, because judgement is grounded in interaction rather than inference.
The integrity problem isn’t going away. If anything, it will intensify as tools become more capable and more embedded in everyday practice. The choice facing higher education isn’t whether to respond, but how. We can continue to layer controls onto systems that were never designed for this moment. Or we can redesign assessment so that integrity is a natural by-product of how learning is evidenced, rather than something policed after the fact. Learning conversations are not a silver bullet. But they offer a credible, scalable way to bring integrity back into alignment with learning – without retreating into surveillance or suspicion.
In the next post, I want to focus on language. Specifically, why I’ve moved away from terms like viva and oral examination, and how renaming the assessment event reshapes power, anxiety, and expectation in ways that materially affect student performance and equity.
About the author
Don Parker is an Associate Professor in Design Thinking & Innovation at the University of Bristol and Programme Director for the MA Digital Innovation (Media, UX Design & Innovation, Digital Business Innovation). His work focuses on assessment design, dialogic learning, inclusive pedagogy, and educational practice in AI-augmented environments.
This series reflects ongoing professional practice and does not reference individual students or confidential cases.