‘Or equivalent’ is not a loophole; it’s inclusion by design
There's a line that appears in almost every programme specification I've ever worked on: 'Essay, report, presentation (or equivalent).' It's usually there for safety. A hedge. Something to cover future changes. And yet, in practice, it's treated like a trapdoor everyone agrees not to step on. Why does this tiny phrase make people so nervous? I've noticed how quickly the temperature in a room changes when someone suggests actually using it.
Most assessment systems quietly assume a default format. Often an essay. Sometimes a presentation. Occasionally a report with slides attached. These formats aren't bad, but over time they stop being choices and start being treated as the learning itself. That's where things get blurry. I've sat with students whose understanding was clearly there: deep, thoughtful, hard-earned. Yet they struggled to translate it cleanly into the approved format. Not because they didn't know the material, but because the format flattened it. And I've also seen the reverse: work that looked fluent, polished, and 'correct', but where the thinking underneath was hard to locate. This isn't a student problem. It's a design problem.
I don't think 'or equivalent' was ever meant to mean 'anything goes'. If anything, it asks more of us. Equivalence isn't about offering something easier or more comfortable. It's about asking: what is the learning we're actually trying to see? And then designing multiple ways for that learning to show itself clearly. That's harder than standardisation.
It means being explicit about:
What counts as evidence
What depth looks like
What 'meeting the outcome' really involves
You can’t hide behind a format when you do this. You have to mean it.
Learning conversations didn't start as an accessibility fix. They emerged because I kept bumping into the same frustration: I could hear the learning when students spoke, but I couldn't always see it on the page. Conversation made the learning legible. Not performative. Not rehearsed. Just explained. That explanation (why something was done, how it developed, where it fell short) turned out to be the most reliable form of evidence I'd found. Which raised an awkward question: if that explanation meets the learning outcomes, why is it treated as secondary?
What I wanted to avoid was the familiar pattern where alternative formats only appear once something has gone wrong. That approach quietly marks difference as deficit. Designing equivalence upfront changes the tone entirely. It says: there is more than one legitimate way to show learning here. Not as a favour. As a design choice. That doesn't make assessment looser; it makes it more precise.
I don’t think equivalence scales automatically. It needs structure, shared criteria, and staff confidence. Without that, it can drift into inconsistency very quickly. So I’m not claiming this is solved. But I am convinced of this: treating ‘or equivalent’ as a loophole is a failure of imagination. Treating it as a responsibility forces better assessment design.
I needed to stop arguing in the abstract and actually build something that could survive delivery. The next post is about that first proper attempt, when learning conversations had to work at scale, on a timetable, with real marking pressure.
About the author
Don Parker is an Associate Professor in Design Thinking & Innovation at the University of Bristol and Programme Director for the MA Digital Innovation.
This series reflects ongoing professional practice.