Learning Conversations: Why assessment has to change

What am I actually trying to protect when I design assessment?

For a long time now, I’ve felt an increasing sense of unease about how assessment operates in higher education. Not because standards matter less (they matter deeply), but because the mechanisms we rely on to evidence learning often feel misaligned with how learning actually happens. Assessment has become procedural, defensive, and strangely brittle. We ask students to demonstrate originality in systems that increasingly reward compliance. We ask for authenticity while constraining the forms it can take. We talk about learning as growth, but assess it as performance. This series, Learning Conversations, is my attempt to work through that tension in public. Not as a manifesto or a policy critique, but as a record of practice: decisions made under real constraints, with real students, within real institutional systems.

I sit at an intersection that makes these tensions impossible to ignore. As an Associate Professor and Programme Director, I’m responsible not just for individual units, but for the coherence, integrity, and credibility of an entire postgraduate programme. That means assessment isn’t an abstract design problem; it’s operational, ethical, and reputational. At the same time, the conditions under which assessment now takes place have shifted dramatically. Generative AI has unsettled long-held assumptions about authorship. Student lives are more complex, more precarious, and more unevenly resourced than our assessment calendars often acknowledge. Anxiety around presentation, performance, and exposure has become a structural issue rather than an individual one. The result is a system under strain, not because people aren’t trying hard enough, but because the form no longer matches the function.

Rather than treating assessment as a thing to be secured, I’ve begun treating it as a designed encounter. A learning conversation is not an informal chat, nor a diluted viva. It is a deliberately structured, criteria-driven exchange in which students are asked to articulate, evidence, and reflect on their work in dialogue with an assessor.

The shift is subtle but significant:

  • from artefact to encounter

  • from submission to explanation

  • from performance to sense-making

  • from detection to discernment

This doesn’t remove rigour; it relocates it. The standard is no longer whether a piece of work looks right in isolation, but whether a student can account for its development, decisions, limits, and implications.

Even at this early stage, a few things are becoming clear:

  • Students often understand their work more deeply than their written submissions allow them to show.

  • Dialogue surfaces learning that static artefacts routinely obscure.

  • Anxiety decreases when expectations are explicit and the assessment event is framed as collaborative sense-making rather than adversarial judgement.

  • Academic integrity becomes easier to evaluate when students must explain how and why something came into being.

None of this is theoretical. It emerges from real assessment moments, with real stakes and real consequences.

This series is not an argument for abandoning written work, nor a claim that conversation solves everything. It is not a rejection of standards, outcomes, or policy. It is an attempt to document how assessment might evolve – carefully, responsibly, and within governance – to better reflect the realities of contemporary learning. Each post will work through a specific decision, constraint, or prototype. Some will describe what worked. Others will describe what didn’t. All of them are written as a conversation with myself: an attempt to stay honest about intent, impact, and uncertainty.

The next step is to make this operational rather than aspirational. In the following post, I’ll look more closely at the assessment integrity problem, not as a crisis narrative, but as a design challenge, and outline why surveillance and detection tools are a dead end if we care about learning rather than compliance.

About the author

Don Parker is an Associate Professor in Design Thinking & Innovation at the University of Bristol and Programme Director for the MA Digital Innovation (Media, UX Design & Innovation, Digital Business Innovation). His work focuses on assessment design, dialogic learning, inclusive pedagogy, and educational practice in AI-augmented environments.

This series reflects ongoing professional practice and does not reference individual students or confidential cases.
