Programme architecture as pedagogy

What if assessment isn’t the end of learning, but the structure that shapes it? By the time the first learning conversation prototype was operating reliably, something else had become clear. The assessment itself was only part of the story. Learning conversations worked best not when they were treated as isolated events, but when they were embedded within a wider programme architecture intentionally designed to support them. In other words, assessment was not a terminal activity; it was a structuring force. This realisation shifted my attention from individual units to the programme as a whole.

Postgraduate programmes often present themselves as collections of discrete units. Each has its own brief, its own assessment, its own timetable. Coherence is assumed to emerge through accumulation rather than design. In practice, this leads to fragmentation. Students experience sharp transitions between units, repeated onboarding into new expectations, and assessment formats that reset every few weeks. Learning conversations, by contrast, reward continuity: familiarity with criteria, confidence in articulation, and reflective capacity built over time. For them to work properly, the programme itself had to behave like a learning system.

The MA Digital Innovation programme is built around a studio-based model, not in the traditional arts sense, but as a sequence of structured learning environments with increasing complexity and autonomy.

This architecture supports learning conversations in several ways:

  • recurring assessment logic across units

  • shared language around evidence and reflection

  • consistent expectations about dialogue and explanation

  • progressive development of confidence and fluency

Rather than encountering oral assessment as a one-off, students learn how to participate in learning conversations as a normal part of their academic practice. Assessment becomes cumulative rather than episodic.

One advantage of a programme-level view is that it reveals misalignments that are invisible at the unit level.

When learning conversations are embedded intentionally, several things begin to align:

  • Teaching emphasises process, not just output

  • Formative feedback mirrors summative dialogue

  • Professional practice sessions reinforce articulation and reflection

  • Assessment criteria recur and deepen rather than reset

This reduces cognitive overhead for students and allows learning to compound. Instead of asking, ‘What does this unit want?’, students begin asking, ‘How does this build on what I already know how to do?’

Programme architecture matters for staff as much as for students.

When assessment logic is shared:

  • Marking becomes faster and more confident

  • Calibration improves naturally over time

  • Workload is easier to predict and distribute

  • Innovation is easier to introduce without destabilising the system

Learning conversations stop being ‘that one unit’s thing’ and become part of the programme’s identity. This is critical for sustainability.

Thinking at the programme level reframes several persistent tensions in higher education:

  • Consistency vs flexibility becomes coherence vs rigidity

  • Innovation vs regulation becomes design vs improvisation

  • Student experience vs academic standards becomes alignment vs mismatch

Programme architecture does not constrain pedagogy. It enables it. Learning conversations thrive when they are not exceptional, but expected.

Assessment shapes behaviour long before marks are awarded. When programmes are designed so that explanation, reflection, and dialogue are normalised, students learn how to think aloud about their work, a skill that extends well beyond the university.

Programme architecture, in this sense, is pedagogy.

In the next post, I want to look outward: specifically, how learning conversations intersect with professional pathways, external partners, and employability, and why dialogic assessment turns out to be a powerful bridge between academic learning and professional practice.

About the author

Don Parker is an Associate Professor in Design Thinking & Innovation at the University of Bristol and Programme Director for the MA Digital Innovation (Media, UX Design & Innovation, Digital Business Innovation). His work focuses on assessment design, dialogic learning, inclusive pedagogy, and educational practice in AI-augmented environments.

This series reflects ongoing professional practice and does not reference individual students or confidential cases.
