Absence, self-certification, and the reality of student life
What happens to a beautifully designed assessment when real life intervenes? One of the quiet failures of assessment design is the assumption of stability. Most assessment systems are built as if students arrive fully resourced, uninterrupted, and consistently available. Deadlines are fixed. Formats are rigid. Contingencies are framed as exceptions rather than expectations. But student life, particularly at postgraduate level, is anything but stable. If Learning Conversations are to function as a serious assessment model, they have to work not only in ideal conditions, but in the messy, unpredictable reality of people’s lives.
Absence is not an edge case. It is structural. Students miss assessments for reasons that are complex, cumulative, and often difficult to categorise neatly: caring responsibilities, acute anxiety, health issues, immigration pressures, financial instability. Institutions recognise this reality, but assessment design often lags behind policy in how it responds. Self-certification and exceptional circumstances frameworks exist to manage this complexity. Yet they are frequently treated as administrative afterthoughts rather than design inputs. That gap creates stress for students and uncertainty for staff.
One of the advantages of Learning Conversations is that they force assessment designers to think temporally rather than purely textually. A conversation happens at a moment in time. That makes scheduling, availability, and contingency unavoidable considerations rather than matters deferred to professional services after the fact.
In practice, this has led me to design with a different set of assumptions:
- absence will happen
- not all absence will fit neat categories
- flexibility must be structured, not improvised
- fairness depends on consistency of process, not identical experience
Learning Conversations can accommodate this when they are embedded within clear windows, alternative slots, and transparent rules for rescheduling, all mapped carefully to institutional policy.
Poorly handled absence is an integrity issue. When students feel forced to perform while unwell, anxious, or under extreme pressure, the assessment no longer measures learning; it measures endurance. Conversely, when flexibility is applied inconsistently, trust erodes quickly. Learning Conversations make these tensions visible. They require staff to articulate, in advance, what flexibility looks like, who decides, and on what basis. That clarity protects students and assessors. It also reduces the adversarial tone that often accompanies discussions about missed assessments, because expectations are set upfront rather than negotiated under stress.
Designing with absence in mind has had several concrete effects:
- clearer communication to students about what happens if they cannot attend
- better alignment between academic judgement and administrative process
- fewer last-minute crises framed as “exceptions”
- increased confidence among staff when making decisions
Importantly, none of this requires special treatment or bespoke arrangements for individual students. It requires systems that assume variation rather than resist it.
Assessment design is not just about how learning is demonstrated; it is also about how learning is supported under real conditions. Learning Conversations work best when they are not brittle. When flexibility is designed in rather than bolted on, assessment becomes more humane without becoming arbitrary. This is not leniency. It is realism.
In the next post, I want to focus on equivalence: specifically, how different assessment formats can meet the same learning outcomes without collapsing standards, and why “or equivalent” should be treated as a design challenge rather than a loophole.
About the author
Don Parker is an Associate Professor in Design Thinking & Innovation at the University of Bristol and Programme Director for the MA Digital Innovation (Media, UX Design & Innovation, Digital Business Innovation). His work focuses on assessment design, dialogic learning, inclusive pedagogy, and educational practice in AI-augmented environments.
This series reflects ongoing professional practice and does not reference individual students or confidential cases.