From Viva to Learning Conversation
Why naming the assessment event matters
What actually happens when we name an assessment event, and who does that name serve? The word viva carries a lot of weight. For some, it signals seriousness, rigour, and academic tradition. For others, it evokes exposure, judgement, and a moment in which authority is sharply asymmetrical. Even when the assessment itself is well designed, the name alone can shape how students enter the room – tense, defensive, and already braced for failure. I’ve come to realise that this isn’t a superficial issue. Naming is part of the assessment design.
In many disciplines, oral assessment is treated as an inherited form. We replicate its structure, language, and rituals because they are familiar, not because they are optimal. Terms like viva voce or oral examination encode a particular power dynamic: the examiner as gatekeeper, the student as candidate, the interaction as test. That framing may be appropriate in some contexts, but it becomes problematic when we are assessing exploratory, practice-led, or iterative work, especially with diverse postgraduate cohorts. Language sets expectation. Expectation shapes behaviour. Behaviour affects performance. If we want students to demonstrate learning rather than survive an ordeal, the frame matters.
I’ve deliberately adopted the term learning conversation to describe a specific kind of assessment encounter. This is not a euphemism. Nor is it an attempt to soften standards. It is a precise descriptor of what the assessment is designed to do.
A learning conversation:
is structured, timed, and criteria-aligned
involves dialogue rather than interrogation
treats explanation as evidence
makes space for reflection, uncertainty, and boundary-setting
positions the assessor as a listener as well as a judge
Crucially, the term signals intent before the event begins. Students arrive expecting to talk through their work, not to defend themselves against it. In my experience, that shift alone has a noticeable effect on anxiety, clarity, and depth of response.
Assessment anxiety is not evenly distributed. Students from minoritised backgrounds, those studying in a second language, and those with previous negative assessment experiences often carry additional cognitive load into high-stakes encounters. When the assessment frame is adversarial, or even perceived as such, that load increases.

Renaming the event doesn’t remove power; assessment always involves judgement. But renaming can redistribute how that power is experienced. A learning conversation makes the rules explicit. It clarifies that the goal is understanding, not entrapment. It reduces the performative pressure to appear “correct” and increases the opportunity to demonstrate how thinking has developed. In practice, this leads to fairer outcomes, not because standards are lower, but because the assessment is better aligned with how learning is actually evidenced.
This shift isn’t only about students. When assessment is framed as a conversation, assessors listen differently. They probe reasoning rather than hunt for errors. They become attuned to process, decision-making, and reflective capacity – qualities that are often invisible in static submissions. Moderation also improves. When judgement is based on articulated understanding, differences in marker interpretation are easier to resolve because the criteria are anchored in observable dialogue rather than inferred intent. The work becomes more human and, paradoxically, more robust.
A learning conversation is not:
an unstructured chat
an informal catch-up
a substitute for clear criteria
a way to avoid difficult judgement
It requires careful design, calibration, and support. It only works when embedded within a coherent assessment framework, with explicit expectations and transparent marking. Renaming without redesign would be cosmetic. The two must move together.
In the next post, I want to turn to policy. Specifically, how university assessment regulations – often seen as restrictive – actually create unexpected room for innovation when read carefully, and how working within governance has been essential to making learning conversations legitimate rather than exceptional.
About the author
Don Parker is an Associate Professor in Design Thinking & Innovation at the University of Bristol and Programme Director for the MA Digital Innovation (Media, UX Design & Innovation, Digital Business Innovation). His work focuses on assessment design, dialogic learning, inclusive pedagogy, and educational practice in AI-augmented environments.
This series reflects ongoing professional practice and does not reference individual students or confidential cases.