This blog was written by Alina Lipcan, Senior Education Adviser at the Education Outcomes Fund for Africa and the Middle East, and Rachel Outhred, Senior Researcher with the Young Lives programme at Oxford University and Director at Oxford MeasurEd. The blog was first published on the UKFIET website and is also published on the Oxford Policy Management and Education Outcomes Fund websites.
In these times of COVID-19, 91 percent of the world’s student population is impacted by school closures. While recognising the many challenges of the pandemic, many have highlighted the opportunities this moment in time brings for the future of education technologies (EdTech).
In our forthcoming two-part EdTech COVID-19 blog series we write about the importance of producing evidence during and after this pandemic, as well as the need for more than evidence if EdTech is to deliver on its promise. We argue that the academic field of EdTech is currently unstructured and unorganised. We also argue that we need better innovation and better decisions, built on a foundation of better theory, more and better evidence, stronger dialogue, increased capacities, and more and better investment in EdTech.
This blog takes a more practical slant and discusses the process of experimental innovation in developing appropriate EdTech to support learning. COVID-19 brings both opportunities to innovate quickly, with a vision to scale, and threats associated with scaling up untested innovations.
Experimental innovation
Experimental innovation is based on two key concepts:
- design thinking and
- evidence-based decision-making.
These two concepts do not always sit comfortably together (read The rise of public sector innovation labs: experiments in design thinking for policy). Design thinking is about doing something truly innovative, which is therefore unlikely to have rigorous evidence behind it and carries a high chance of failure. For example, attempting to deliver adaptive learning content to pre-literate children via mobile phones. Evidence-based approaches, in contrast, are about doing something firmly grounded in evidence and low risk, and so unlikely to be truly new – cash transfers come to mind as a good example.
Balancing rigour and risk is central to harnessing the potential of EdTech to deliver learning for all. Radical change will be required to deliver on this potential, yet most institutions responsible for education service delivery are risk averse and resistant to speedy systemic transformation. In Sierra Leone, the Education Innovation Challenge (EIC), launched by the Department for Science and Innovation in collaboration with the Ministry of Education, is a prime example of a visionary application of the innovation process to drive large-scale change within education systems, incorporating EdTech solutions in a few innovations.
Experimental innovation needs to begin with small-scale testing, as exemplified by the five EIC winners rolling out interventions in a small number of schools, with rigorous evaluation nested within the pilot. The innovations engage with end users to learn what is working, and what is not working in order to adapt. This test-learn-adapt process at a small scale is essential to managing risk as it allows innovators to be flexible and to work with agility to improve the design solution. It is complemented with effective monitoring and stage-gates through regular review meetings, enabling providers to fail fast and stop investing resources when certain criteria or targets are not met. The vision is to scale up successful innovations over two to three years, and then take what works to national scale.
Now, the current EIC providers and other education implementers around the world face the additional challenge of more directly incorporating distance solutions in the times of COVID-19. These cannot always follow the same systems for monitoring or managing risk, given limits on data collection. As Ministers and their partners scramble to deliver education remotely due to school closures, they are understandably looking for solutions that can be implemented immediately, and at scale. The risks associated with applying untested solutions at scale are high. However, the risks of inaction are also great, and therefore we must push on.
Practical tips for mitigating risks
We propose that governments and donors can mitigate risks, as much as possible, by:
- Considering partnering with private sector companies and investors (commercial and impact-focused) and sharing the risk of appropriate innovation;
- Adopting flexible funding models to support innovators to continue to design and adapt solutions throughout the pandemic;
- Supporting researchers to monitor and regularly feed back information on the utilisation and effectiveness of implemented solutions;
- Supporting researchers to rapidly disseminate evidence of what is working and what isn’t working during the pandemic;
- Establishing data hubs where findings about which EdTech interventions work and why can be synthesised across programmes and contexts;
- Being willing to fail fast and change approaches if the evidence finds the design solution is not working; and
- Remembering that the EdTech solutions we have enacted during the pandemic will likely need further adaptation in order to unlock the true potential of EdTech to support learning.