
At Mizizi Elimu Afrika, we engage with stakeholders across the education system to generate insights that inform policy and practice. This reflection draws from the EE4A 2025 prioritization exercise and explores a critical question: when stakeholders agree, are they really saying the same thing?
At the EE4A 2025 conference, eighty-four stakeholders were asked about their top priority. Nearly half said the same thing. This finding is not without value: it signals where attention is concentrated. However, it offers far less actionable clarity than it appears to, and it reveals something important about how we ask questions.
Imagine asking 84 education experts what they believe is the single most important priority for improving foundational learning in Kenya. Forty of them give you what appears to be the same answer. That is half the room pointing in one direction. By any reading of a consultation exercise, that is a strong signal — is it not?
The EE4A 2025 stakeholder prioritization exercise (a common closing activity at conferences and workshops) produced exactly this result. Forty respondents cited teacher capacity and support as their top priority for foundational learning. In the teacher education domain, thirty-eight cited teacher professional development. Across three separate domains, the same theme rose to the top each time. This convergence may reflect not only conceptual ambiguity but also social and institutional incentives to align with dominant, low-risk narratives. Is this actionable? That is the question this blog explores.
84 stakeholders surveyed across three domains | 40 cited teacher capacity as their top foundational learning priority | 0 clearly specified interventions
THE PROBLEM:
When the same word means a hundred different things
A phrase such as “teacher professional development,” “teacher capacity and support,” or “parental engagement” is not an answer. It is a category. And categories, however widely endorsed, cannot be turned into a budget line, a programme design, or a policy instruction without additional interpretation that the data simply does not provide.
Teacher professional development functions as an umbrella term broad enough to encompass structured pre-service mentorship, subject-specific pedagogical content knowledge training, digital upskilling, county-level coordination of in-service programmes, mandate-shifting from TSC to universities, and dozens of other distinct policy actions.
This is the core of the illusion. When a respondent writes “strengthen teacher professional development,” it is not possible to determine which of these they mean or whether they have a specific intervention in mind at all. They may be reaching for a familiar, socially acceptable phrase rather than articulating a considered position. The result is a finding that appears to be a mandate but contains no instructions.
What "teacher professional development" could mean (a non-exhaustive list):

| Interpretation | What it would involve |
| --- | --- |
| Structured pre-service mentorship | Pairing student teachers with experienced mentors during practicum |
| Subject-specific PCK training | Deepening pedagogical content knowledge in literacy or numeracy |
| Digital and AI upskilling | Building teacher competency in educational technology tools |
| County-level in-service coordination | Decentralising CPD planning and delivery to county structures |
| Mandate shift to universities | Moving TPD responsibility from TSC to universities and TTCs |
| Coaching and mentorship systems | School-based peer coaching and instructional leadership support |
This ambiguity is not accidental. In stakeholder settings, respondents often default to broad, familiar categories that are widely accepted but loosely defined. This pattern is evident beyond teacher-related responses. For instance, ‘Parental and caregiver engagement’ appeared in 27 of 84 responses for foundational learning. But consider what those 27 respondents might actually have been proposing:
| Interpretation A | Structured home literacy programmes (materials and guidance for parents) |
| Interpretation B | Strengthening parent-teacher associations (school-level accountability structures) |
| Interpretation C | Community radio campaigns to promote literacy at home |
| Interpretation D | A general aspiration for parental involvement (no specific mechanism) |
This variation in interpretations does not reflect disagreement but the absence of a shared, specific definition. The data treats all four interpretations as identical signals; they are not. Each of the first three implies a fundamentally different intervention, cost structure, and implementation pathway, whereas D requires nothing, because it is not yet a proposal.
The planning gap
The difference between “participants said teacher professional development matters” and “here is what should be done about TPD, at what cost, by whom, and by when” is entirely unbridged by this data. Consensus on a category is not consensus on an intervention — and confusing the two is where policy design begins to fail.
What the finding does — and does not — tell us
It would be wrong to conclude that the finding is worthless. The prioritization data does do something: it confirms that teacher-related concerns dominate stakeholder attention across all three domains. It establishes that parental engagement is widely understood as a foundational learning concern, not merely a systemic one. It signals that technology is viewed as a policy and structural issue rather than a classroom-level priority for early learning, a potentially significant observation in itself.
These are legitimate signals. They tell programme designers and policymakers where attention is concentrated. They help identify which conversations are worth deepening. What they cannot do is substitute for those deeper conversations. A signal is not a recommendation, and a recommendation is not a plan.
The critical error, one that consultation exercises frequently make, is to treat the output of this kind of exercise as if it were the latter when it is only the former. Headline findings get lifted into strategy documents. “Teacher professional development” appears as a priority intervention in a results framework. The ambiguity is never resolved because nobody returns to the data to ask: which kind, at what level, for whom, and how? When categories are treated as decisions, policy design risks becoming misaligned with actual implementation needs.
How to ask the question better
The good news is that the illusion of consensus is a design problem, not an inherent feature of stakeholder consultation. I propose three targeted fixes that go directly to the source of the problem.
Fix 1: Ensure specificity through structured follow-up probes

If open-ended responses are retained, immediately follow each one with a conditional probe that requires the respondent to be specific. After a respondent writes “teacher professional development,” ask: “What specific aspect of TPD do you consider most urgent?” or “Describe one concrete action that should be taken in the next 12 months.”

This design change costs very little and improves data quality. Instead of 38 coded mentions of “TPD,” the dataset would contain 38 distinct, specific positions, which could be compared, clustered, and acted upon.

Example redesign: “What is your top priority for teacher education policy?” → [open field] → “In one sentence, describe a specific action that should be taken on this priority within the next year.” → [open field]
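To illustrate what the probe buys analytically, here is a minimal sketch of how specific follow-up responses could be coded into distinct positions rather than one undifferentiated "TPD" category. The responses and the keyword codebook below are entirely invented for illustration; a real exercise would use trained human coders or a more robust classification step.

```python
from collections import Counter

# Without the follow-up probe, all of these would have been coded simply as "TPD".
# Sample responses are hypothetical.
follow_up_responses = [
    "Pair every student teacher with a trained mentor during practicum",
    "Run county-level in-service workshops on early-grade numeracy",
    "Move TPD accreditation from TSC to universities",
    "Pair new teachers with school-based instructional coaches",
]

# A minimal, illustrative keyword codebook mapping codes to trigger words.
codebook = {
    "mentorship": ["mentor", "coach"],
    "in-service coordination": ["in-service", "county"],
    "mandate shift": ["accreditation", "universities"],
}

def code_response(text: str) -> str:
    """Assign the first code whose keywords appear in the response."""
    lowered = text.lower()
    for code, keywords in codebook.items():
        if any(keyword in lowered for keyword in keywords):
            return code
    return "uncoded"

# Four responses that would have been one "TPD" tally become three distinct positions.
counts = Counter(code_response(r) for r in follow_up_responses)
print(counts)
```

Even this crude coding shows the point: the probe converts a single headline number into a distribution of concrete positions that can be compared and costed.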
Fix 2: Replace categories with actionable options from the start

Rather than asking respondents to name a category, present a pre-defined list of 8–12 specific, actionable policy options per domain, drawn from existing research, policy proposals, and consultations conducted before the conference. Respondents rank their top three or allocate points across the list.

For teacher education policy, instead of an open field, offer options such as: “Extend practicum to a minimum of one full school term,” “Establish county-level TPD coordination units,” or “Mandate minimum annual in-service training days.” Now respondents are choosing between real interventions.

Note: The apparent loss of openness is smaller than it appears. Pre-defined options can still be drawn from prior stakeholder input, and an “other — please specify” option preserves the exploratory dimension.
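A top-three ranking over pre-defined options also yields data that aggregates cleanly. Below is a sketch using a simple Borda-style count (first choice earns 3 points, second 2, third 1); the ballots are invented sample data, and the option names echo the examples above.

```python
from collections import defaultdict

# The three illustrative options from the text.
practicum = "Extend practicum to a minimum of one full school term"
county = "Establish county-level TPD coordination units"
inservice = "Mandate minimum annual in-service training days"

# Each ballot is an ordered top-three list (hypothetical respondents).
ballots = [
    [county, practicum, inservice],
    [practicum, county, inservice],
    [county, inservice, practicum],
]

def borda_scores(ballots, points=(3, 2, 1)):
    """Award 3/2/1 points for first/second/third place on each ballot."""
    scores = defaultdict(int)
    for ballot in ballots:
        for rank, option in enumerate(ballot):
            scores[option] += points[rank]
    return dict(scores)

scores = borda_scores(ballots)
print(scores)
```

Unlike a tally of open-ended category mentions, these scores rank concrete interventions directly, so the winner is something a planner can actually budget for.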
Fix 3: Introduce an iterative, two-round prioritization process
A two-stage approach can improve the process. First, collect open-ended responses and synthesize them into specific, actionable options. Then, return to respondents to evaluate or rank these options. |
This preserves participation while generating more decision-relevant data. Although more resource-intensive, it shifts the exercise from symbolic consultation to a meaningful input into planning.
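In the second round, having respondents rate each synthesized option (here on a hypothetical 1–5 urgency scale) makes it possible to report not just how popular an option is but how much respondents actually agree about it. The ratings below are invented; a high mean with a low spread signals genuine consensus, while a high spread flags the hidden disagreement this blog describes.

```python
from statistics import mean, stdev

# Hypothetical round-two ratings (1 = low urgency, 5 = high urgency).
ratings = {
    "County-level TPD coordination units": [5, 4, 5, 4, 5],
    "Extend practicum to one full term": [3, 5, 2, 4, 3],
}

# Summarize each option as (mean rating, spread). A large spread means
# apparent agreement on the option masks real divergence among respondents.
summary = {
    option: (round(mean(scores), 2), round(stdev(scores), 2))
    for option, scores in ratings.items()
}
print(summary)
```

In this toy data the two options have similar support on paper, but the spread statistic separates a genuinely shared priority from one that respondents rate very differently.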
The EE4A 2025 exercise produced a finding that will likely appear in reports for some time: that stakeholders prioritise teacher professional development. That finding is not wrong. But it is incomplete in a way that matters enormously when decisions about resources and programmes follow from it.
The illusion of consensus is seductive because it feels like an agreement, and agreement feels like a mandate to act. The lesson from this exercise is that before acting on apparent consensus, it is worth asking a harder question: are these 38 people actually agreeing, or are they all reaching for the same comfortable word to describe 38 different things?
The next prioritization exercise has an opportunity to find out. It requires only the discipline to ask the follow-up question — and the instrument design to make that follow-up unavoidable.
For Mizizi, this reinforces the need to move beyond broad consensus toward actionable, evidence-driven solutions that can strengthen foundational learning systems in Kenya and beyond.


