What this looks like in practice

Prompt to self

If someone had to run this without me, what would they need to see?

Up to now, I’ve been writing about learning conversations at the level of design decisions: why they exist, what they respond to, how they fit within policy and programme architecture. That only gets you so far. At some point, if this is going to be taken seriously as a model rather than a set of intentions, it has to survive contact with administration, timetabling, marking spreadsheets, and guidance documents that someone else actually has to read. This is the point where things usually fall apart.

The moment the thinking stopped being enough

I noticed the shift when I started getting the same kinds of questions from colleagues:

  • How long does this actually take per student?

  • Who owns the scheduling?

  • What happens when someone doesn’t show up?

  • How do you mark it without writing a novel?

  • What does moderation look like if the evidence is conversational?

These aren’t philosophical questions. They’re operational ones, and they don’t get answered by principles.

The uncomfortable truth

Most assessment innovation fails not because the ideas are weak, but because the infrastructure never gets built.

We talk about pedagogy, but not about spreadsheets.
We talk about inclusion, but not about calendars.
We talk about integrity, but not about where decisions get recorded.

I realised fairly quickly that if Learning Conversations were going to scale, I had to treat documentation itself as part of the pedagogy. Not as bureaucracy. As design.

What I actually built

Over time, a small but critical ecosystem emerged:

  • Excel spreadsheets that controlled flow, not just data

  • scheduling tools that made workload visible (there’s a rough sketch of the idea below)

  • marking templates designed to reduce interpretation rather than invite it

  • moderation trackers that assumed scrutiny rather than feared it

  • SharePoint guidance written for people under time pressure, not ideal readers

None of this was elegant. Most of it was revised repeatedly. Some of it I initially resisted building at all. But without it, the assessment wouldn’t hold.
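
To make “workload visible” slightly concrete: underneath the scheduling tooling, the logic is little more than rolling scheduled conversation slots up into minutes per assessor per week. The sketch below is purely illustrative, written in Python rather than the Excel the real tooling lives in, and every name, field and number in it (the Slot record, the 20-minute default, the initials) is an assumption made up for the example, not the actual implementation.

    # Illustrative only: roll scheduled conversation slots up into minutes
    # per assessor per ISO week, so workload is visible at a glance.
    # All names, fields and numbers here are assumptions for the example.
    from collections import defaultdict
    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class Slot:
        student: str
        assessor: str
        day: date
        minutes: int = 20   # nominal length of one learning conversation

    def workload_by_week(slots):
        """Total scheduled minutes per (assessor, ISO week number)."""
        totals = defaultdict(int)
        for slot in slots:
            totals[(slot.assessor, slot.day.isocalendar().week)] += slot.minutes
        return dict(totals)

    schedule = [
        Slot("Student A", "DP", date(2026, 3, 2)),
        Slot("Student B", "DP", date(2026, 3, 3)),
        Slot("Student C", "JK", date(2026, 3, 3), minutes=30),
    ]
    for (assessor, week), minutes in sorted(workload_by_week(schedule).items()):
        print(f"{assessor}, week {week}: {minutes} minutes scheduled")

Even a toy version makes the point: it’s this roll-up, not the pedagogy, that answers the “how long does this actually take per student?” question.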

Why I’m showing this now (and what follows)

There’s a temptation to keep this layer hidden, to talk about outcomes without showing the machinery underneath. I don’t think that helps anyone. If Learning Conversations are going to be more than a local solution, they need to be legible to the people who inherit them: programme directors, unit leads, administrators, external examiners. So the next part of this series moves deliberately into show and tell. Not to present a finished system, but to document what actually exists, why it exists, and what I’d do differently if I were starting again.

What comes next

The posts that follow won’t read like essays. They’ll be slower. More specific. Occasionally dull in the way practical things are dull. They’ll talk about:

  • why a particular spreadsheet is structured the way it is

  • how scheduling decisions affect assessment fairness

  • where ambiguity causes real problems

  • what breaks under pressure

  • and which parts matter more than they appear to

This isn’t about codifying best practice. It’s about leaving a usable trail.


About the author
Don Parker is an Associate Professor in Design Thinking & Innovation at the University of Bristol and Programme Director for the MA Digital Innovation.
This series reflects ongoing professional practice.

Next

Programme architecture as pedagogy