What University policy requires, and what it quietly allows
Am I really constrained by policy, or by how narrowly I’ve been reading it? When assessment innovation fails, it’s often blamed on regulation. Policy becomes the convenient antagonist: immovable, conservative, allergic to experimentation. I’ve heard this narrative repeatedly, usually from people who haven’t actually read the policy in detail, or who treat governance as something to work around rather than work with. The reality is more nuanced. University assessment policy does impose constraints. But it also creates structured flexibility, if you’re willing to read it carefully and design responsibly.
As a Programme Director, I don’t have the luxury of experimentation without consequence. Any assessment model I introduce must withstand scrutiny from multiple directions: quality assurance, external examining, student appeals, and institutional reputation. That means innovation has to be legible. Not just to students, but to committees, colleagues, and future reviewers who were not present when the design decisions were made. The challenge, then, isn’t whether policy allows learning conversations in principle; it’s whether they can be articulated clearly enough to sit comfortably within existing regulatory language.
When you strip away the legal phrasing, most assessment policy is remarkably consistent in what it prioritises:
clear learning outcomes
transparent assessment criteria
fairness and consistency
opportunities for students to demonstrate achievement
defensible decision-making
Notice what’s not specified in detail. Policy rarely dictates format. It doesn’t insist that learning must be evidenced only through written artefacts, nor that assessment must be silent, solitary, or text-bound. Instead, it focuses on whether the method chosen is appropriate, equitable, and properly governed. In other words, policy cares about what is assessed and how decisions are justified, not about preserving tradition for its own sake.
The phrase that does the most quiet work in assessment regulations is also one of the most overlooked: “or equivalent”. That small allowance creates space, not for improvisation, but for equivalence by design.
If an assessment method can:
validly address the stated learning outcomes
apply criteria consistently
be moderated and reviewed
provide an auditable record of decision-making
…then it is, in policy terms, legitimate. Learning conversations meet these requirements when they are designed with intent rather than for novelty. Structure, documentation, and calibration matter more than the medium itself.
One of the most important shifts I’ve made is to treat policy as a design partner rather than an obstacle.
That means:
mapping learning conversations explicitly to outcomes and criteria
documenting how dialogue produces assessable evidence
ensuring moderation processes are built in from the start
retaining artefacts (notes, recordings where appropriate, structured summaries) that support transparency
When these elements are in place, conversations cease to look like exceptions. They become just another, perfectly ordinary, assessment format. The work moves from justification to explanation.
Working within policy has had an unexpected effect: it has slowed the work down in the right way. Instead of rushing to innovate, I’ve been forced to articulate why each design choice exists, how it operates at scale, and what risks it introduces. That rigour has made the model stronger, more transferable, and easier to defend. It has also made conversations with colleagues more productive. Rather than arguing for permission, I can point to alignment. Rather than relying on enthusiasm, I can show structure. Innovation becomes boring, and that’s a compliment.
Policy doesn’t need to be rewritten to accommodate learning conversations. It needs to be read carefully, interpreted generously, and applied thoughtfully. The constraint isn’t regulation. It’s often imagination, or the fear of stepping outside inherited forms without institutional cover. Learning conversations work not because they bypass policy, but because they take its underlying principles seriously.
In the next post, I want to turn to a less visible but equally important layer: absence, self-certification, and the realities of student life, and to ask how assessment design can accommodate unpredictability without collapsing into inconsistency.
About the author
Don Parker is an Associate Professor in Design Thinking & Innovation and Programme Director for the MA Digital Innovation (Media, UX Design & Innovation, Digital Business Innovation). His work focuses on assessment design, dialogic learning, inclusive pedagogy, and educational practice in AI-augmented environments.
This series reflects ongoing professional practice and does not reference individual students or confidential cases.