The first prototype: a scalable Learning Conversation assessment

What breaks first when this stops being a nice idea?

There’s a moment when every assessment idea stops being theoretical. It’s when you open the spreadsheet. Names. Numbers. Time slots. Staff availability. Moderation windows. Suddenly, the elegance disappears, and you’re left with logistics. This was the point where Learning Conversations either became real, or quietly died.

I didn’t want a pilot that only worked because everyone was enthusiastic.

So I set a few non-negotiables:

  • it had to work for a full cohort

  • it had to sit inside existing regulations

  • it had to be markable without heroics

  • it couldn’t quietly double staff workload

If it failed on any of those, it wasn’t viable.

What the assessment actually looked like

At its simplest, the prototype was this:

A short pre-submitted artefact.
A structured conversation.
Clear criteria.
A fixed time limit.

That was it.

No theatrical questioning. No ambush. No “defend your work” energy. Just: talk me through what you did, why you did it, and what you learned. The conversation wasn’t a supplement. It was the assessment.

Marking (the part everyone worries about)

This was where I expected trouble. But something unexpected happened: marking became clearer.

When students explained their decisions out loud, it was easier to distinguish:

  • confidence from understanding

  • polish from substance

  • ambition from overreach

Assessors weren’t guessing intent from the artefact. They were listening to it being articulated. That reduced ambiguity more than any rubric tweak I’ve tried.

The messy bits

Not everything was smooth. Timekeeping mattered more than I’d anticipated. Some assessors needed support in not over-leading the conversation. Some students arrived over-prepared, treating it like a defence rather than a discussion. Those things didn’t break the model, but they reminded me this is a practice, not a plug-in.

Moderation and cover

I was very conscious that if this ever went wrong, someone else would have to explain it.

So we built in:

  • shared calibration conversations

  • simple written summaries

  • consistency checks across assessors

Nothing fancy. Just enough to make decisions legible after the event. That mattered more than I expected.

What changed for me

This was the point where I stopped thinking of learning conversations as an innovation and started thinking of them as infrastructure. If they couldn’t be repeated calmly, by people who weren’t me, they weren’t finished. They passed that test, cautiously, imperfectly, but convincingly enough to keep going.

What became clear next was that this couldn’t sit in isolation.

The assessment worked best when the programme supported it, when students had multiple chances to practise articulating their thinking, not just one. That pushed my attention upward, to programme architecture itself.

About the author
Don Parker is an Associate Professor in Design Thinking & Innovation at the University of Bristol and Programme Director for the MA Digital Innovation.
This series reflects ongoing professional practice.
