
If you can dramatically improve educational outcomes in a school, can you do it across the district? How about across an entire country? How about across the more than 22,000 schools in Kenya that serve more than 3.3 million Grade 1 to 3 students?

In recent years some educational interventions have dramatically improved learning outcomes on a small scale, and several countries are now attempting to replicate those successes at a national level. However, history is littered with the remains of pilot programs that failed when attempts were made to implement at scale.

RTI education experts Benjamin Piper and Joseph DeStefano are co-authors of new research examining lessons learned from the USAID Tusome Early Grade Reading Activity, a national literacy program in Kenya that was successfully scaled up from its pilot predecessor. Esther Kinyanjui and Salome Ong’ele, Tusome’s former National Coordinator and Chief of Party respectively, are also co-authors of the paper. Piper and DeStefano answer questions about the key elements of Tusome’s successful scale-up and reflect on what other educational interventions might learn from Tusome’s experience.

To learn more, read the full article, “Scaling up successfully: Lessons from Kenya’s Tusome national literacy program” in the Journal of Educational Change.

What most surprised you about the success of Tusome?

We were shocked by the magnitude of the results identified in the external impact evaluation. We would have been happy with gains half as large as what we saw. Seeing the percentage of children who reach the government benchmarks double within a year is extremely encouraging, something we have never seen before and certainly have not seen achieved at scale. We were also surprised to see learning gains in Kiswahili oral reading fluency with effect sizes 2.5 times those of the pilot. It is unusual, to say the least, to have greater impact at scale than under the conditions of a pilot.

On further reflection, the indicators that we were tracking as the implementers of Tusome showed very high levels of program fidelity, which we did see as a predictor of program success. So we expected Tusome to have a positive impact on learning outcomes. What happily surprised us was the magnitude of those impacts.

What did your research indicate as key factors in achieving those outcomes at scale?

While impact evaluations determine whether a program worked, they don’t tell you why it worked. For this paper, we examined whether there is evidence of improved system capacity related to three core functions, namely the system’s ability to: set and communicate expectations; monitor progress toward meeting those expectations; and provide targeted, differentiated support. Our paper shows that you can collect evidence during the implementation of a program that will suggest whether the system is developing these core capacities, and therefore whether the program is on track to produce improved outcomes.

First, we found that the typical teacher in a rural classroom could tell you the benchmark expectations for their students’ learning outcomes. This demonstrates that those expectations had been communicated, and as teachers internalize the new expectations, they come to hold much higher expectations for what children can do.

Second, we found that the necessary inputs were reaching the schools and, importantly, that 80 percent of teachers in the program were using the Tusome materials 80 percent of the time or more. This meant that the typical teacher had bought into the Tusome intervention. That may seem a simple thing, but trust us, it really isn’t. Many programs fail not because the program wasn’t good but because the teachers never used the materials. So when we saw that Tusome materials were being used with this level of adherence, the signs were positive that Tusome could make a big difference.

Finally, we found evidence of system capacity to monitor implementation. The classroom observation and accountability system that we designed was really working. Curriculum Support Officers (CSOs) averaged 20,000 classroom observations a month for much of 2016 and 2017, which showed that the national system was supporting, reinforcing, and holding teachers accountable for meeting the expectation of using the new materials and for achieving improved outcomes. CSOs were visiting schools more frequently than had ever been seen in Kenya, and they were using the tablet-based instructional support program to give teachers regular feedback, reinforcing the improved teaching practice essential to Tusome’s success.

In what other ways do you think Tusome impacted the education system in Kenya?

Our research leads us to conclude that the changes mentioned above contributed to a shift in the institutional culture of the education system. Frequent CSO visits, regular review of data on those visits, and available data on student performance all altered the prevailing social and institutional norms that for years had allowed the education system to function in a sort of low-level equilibrium trap. The new norms, which admittedly may be fragile at this stage, did increase accountability for key actors in the system—county directors, curriculum support officers, and teachers. For example, data from the classroom visits were being used to hold the CSOs accountable. Each county shared the number of visits made by each CSO in a public forum, with county-level administrators and supervisors in attendance. This clearly signaled a new set of norms and formed the basis for greater accountability within the Kenyan system’s own mechanisms and structures, rather than through those of the project. We postulate that this worked because the project helped the Kenyan government improve management of the system’s own personnel in ways that incentivize the key behaviors needed to assure the success of a literacy improvement program.

Please tell us more about the Tusome program. What is the purpose of the program and how was it designed?

Tusome is the national literacy program of Kenya, which means it supports improved literacy for all Kenyan children in Grades 1 to 3. Tusome is funded by USAID but implemented by the Kenyan government with technical support from RTI International. Through the program, the Kenyan government redesigned the materials used in English and Kiswahili classes, provided focused professional development to all Grade 1 to 3 teachers and head teachers, and helped the system provide technical feedback to teachers through a national tablet program. The combination of materials, training, coaching, and feedback that Tusome provides was built on rigorous evidence from the Primary Math and Reading (PRIMR) initiative.

PRIMR (implemented from 2011 to 2014) was a set of randomized controlled trials designed to test approaches to improving literacy and numeracy. The program showed effects on literacy in mother tongue, English, and Kiswahili, and on numeracy. PRIMR worked through coaches, tested the cost-effectiveness of a variety of ICT approaches that could work at scale, and tested which components of the program were most cost-effective, ultimately showing that teachers’ guides were an essential part of the package and that the program could work in both public schools and low-cost private schools in nonformal urban settlements.

Based on the rigorous evidence from PRIMR, Tusome’s implementation included the key characteristics shown to be essential to improving outcomes at scale. Tusome began supporting all of Kenya’s Grade 1 pupils in 2015, expanded to Grade 2 in 2016, expanded again to Grade 3 in 2017, and will continue supporting Grades 1 to 3 through 2019. Part of why Tusome worked, we presume, is that PRIMR was organized to be successful at scale (1,384 schools), and the evaluations done on PRIMR focused on whether the government system could implement the program. Tusome simply expanded the scope, though obviously at a much larger scale.

What can education implementers learn from Tusome’s success?

It is unfortunate that we must tout the provision of foundational inputs, such as materials and basic teacher professional development, as a significant accomplishment of an education system, but that is the reality faced by teachers and students in many countries in the Global South. Too often, and too consistently, books in the hands of children and training for their teachers are not adequately assured, and teachers are infrequently provided with any meaningful or consistent instructional support. Teacher professional development—when it is delivered at all—is of low quality, too theoretical, or mistimed. Quantities of materials too often are insufficient to allow every child to hold and use their own book. Failure to provide these basic supports to schools substantially erodes the potential impact of innovations as they are taken to scale.

The external midline evaluation of Tusome showed that these basic supports were being provided to schools, teachers, and students at scale. That was one signal that the norms within the system were changing. The dramatic improvement in the frequency of CSO visits, and the creation of forums where CSOs faced public accounting of how well they were supporting teachers, was another signal of changing norms. Tracking, reporting, and discussing student outcomes, and doing so with respect to clearly understood and shared expectations for the levels of performance students should be able to reach, also signaled a change in the norms of the education system. Other programs can think about how their work can have similar impact—not just in terms of providing inputs, but in terms of shifting the institutional culture within the education system so as to raise everyone’s expectations about the provision and use of inputs and the outcomes that therefore can be achieved.

Development programs can help countries like Kenya make radical implementation decisions, invest in the resources needed to follow these decisions, and focus on robust teacher support structures that help reform the core of their instructional practice. By doing so, Tusome shows that substantial gains in learning outcomes are possible after all.

Disclaimer: This piece was written by Benjamin Piper (Senior Director, Africa Education) and Joseph DeStefano (Director, Policy, Systems & Governance) to share perspectives on a topic of interest. Opinions expressed within are those of the author or authors.