Young Lives has just published ten updated case studies, each of which demonstrates how the study has resulted in significant positive change in both policy and research for addressing childhood poverty. We’ve put these briefs together to better understand what worked in ensuring research use, and as part of our accountability approach for how we’ve used our resources. Off the back of this, I’d like to do two things: first, discuss some of the overarching ‘pathways to impact’ lessons; and second, consider what types of impact ought to be valued from a study like Young Lives.
Valuing the pathway not just the impact itself
We often get asked, ‘what is the impact of the research?’ That question rests on an important assumption: that research ought to lead to impact (I find that rather comforting in an era of fake news, when some in the UK at least seem to have had enough of experts…). But I think a more revealing question is ‘how did that impact come about?’ The first question is about outcomes that depend on many factors (which include luck). The second question tests (and values) the processes over which researchers have control. Michael Marmot puts it well: “Scientific findings do not fall on blank minds that get made up as a result. Science engages with busy minds that have strong views about how things are and ought to be.” The key point should therefore be how researchers get the right analysis to the right place, in the right ways, and at the right time.
So what did we learn about these pathways? Understanding, navigating and being able to respond to the particular challenges and opportunities that power and politics present has been critical for enabling positive change. Considering the Young Lives experience, I’ll draw out four common stories. There is much more in the case studies. We have also previously written some of this in a Picture of change.
First, robust and accessible research. Young Lives is a research study and the bedrock is the credibility of the underlying data and research. But the strength and quality of the research is not just its ‘technical’ virtues. It is also about how it adds value to existing knowledge and how it reaches a wider audience in a timely way. Young Lives has teams in each study country engaged with national debates and it has invested in briefs, papers and a broad range of innovative communication materials, including fact sheets, infographics, data visualisations and short films, with a strong focus on digital media.
Second, flexibility to policy demand. A number of our most successful examples of impact came when researchers used existing research within an identified policy window or adapted their research plans in light of policy demand. A key impact captured in the case study on violence affecting children came from an identified opportunity to contribute to a debate about corporal punishment in Peru. The design of the preschool work in Ethiopia was developed when it became clear what the Ministry of Education’s needs actually were. That involved building on the original longitudinal design to meet policy needs, even where that required different, tailored studies.
Third, working in partnership. The long-term nature of Young Lives has enabled strong partnerships with government and with other development collaborators. The trust those partnerships create is vital, and has often been used to create a shared sense of purpose (for example, the creation of the Child Research and Practice Forum in Ethiopia). Close working with officials or organisations helps get research closer to the place it needs to be. Working alongside other organisations, such as in the Global Coalition to End Child Poverty, creates co-produced benefits from the skills, knowledge and networks each partner brings. Long-term working has allowed an active engagement with Government research users (as is well shown by the education and school effectiveness case study), from research questions and questionnaire design through to dissemination and engagement.
Fourth, measuring impact is not straightforward. Our approach is a case study one, describing the actions and process to a particular impact, and using external sources to verify (quotes and pick up of analysis elsewhere). This approach requires a careful internal logging of meetings and mentions (otherwise much gets forgotten). In the often complex and messy world of influencing policy and programmes, ‘contribution’ is often a better framing than ‘attribution’. There are also influences we cannot capture. Those who have used our research are not always in a position to say this publicly, and research may inform policy documents without ever being formally acknowledged.
Ways of understanding the impact of longitudinal studies
I am often struck by how poorly defined impact actually is. The Young Lives Theory of Change identifies types of impact as conceptual (shifting thinking); instrumental (responding to existing concerns); and capacity (expanding the capacity to collect and use data and evidence). Our impact case studies have examples of each. Donors usually recognise the importance of each (see for example DFID’s research uptake guidance). But it is still often the instrumental that is most highly prized, as being of immediate real-world relevance. That’s not a surprise; instrumental impacts are also likely to be the most ‘provable’, and more similar to measuring the impact of programme spending on vaccinations, clean water and so on. Instrumental impacts are important, but they are not the only contribution longitudinal studies make, and I do not think they are always the most important. How to measure a change in thinking is a fuzzy sort of problem; however, those types of impact may have greater long-term significance at scale.
The benefit of longitudinal studies covering a broad range of topics is that large observational, forward-thinking exercises can reach places other approaches cannot. Such studies enable long-term, dynamic analysis and provide insights on questions that were not anticipated when the studies were designed. Such studies should not only revolve around today’s policy concerns, as by the time they come to fruition years later, things may have moved on. For the immediate policy need, other approaches are often better (e.g. policy evaluations). Longitudinal studies can do something different.
For me, one of the most exciting stories emerging from Young Lives is of post-infancy growth recovery. The research finding required a long-term longitudinal study to find those patterns, and the question was not one that was thought of when the study was initiated. This ground-breaking finding, that some children recover from early growth faltering after infancy while others falter later, has stimulated much debate.
The UK has a great history of longitudinal studies. The Economic and Social Research Council (ESRC), the UK's principal public funder of social science, has likened the British Cohort Studies to the social sciences' equivalent of the Large Hadron Collider. These longitudinal studies provide an ‘infrastructure’ for social science investigation. The ESRC has recently commissioned an independent review of its (significant) longitudinal investments in the UK. In its response, the ESRC committed to continued support for longitudinal studies and identified the need to better understand impact. The ESRC’s Rebecca Fairburn noted that, “Somehow we need to be able to capture data on the impact of relationships and conversations, on impacts of the counterfactual, and the policies avoided as a result of social science evidence.”
That message feels familiar, so with it in mind I’ll finish on five reflections on valuing and assessing the fuzzier (but important) nature of impacts:
- Keep a broad view of what impact is. Narrow instrumental impact may be easier to measure, but to conflate ‘impact’ with instrumental impact to current policy agendas ignores the potential for new ideas to emerge and challenge existing assumptions.
- Evidencing impact requires a qualitative approach. The ESRC review notes the challenge of understanding impact in the social sciences, given that influence often comes through engaging in relationships and processes. Qualitative case studies are the best way to understand this subtlety.
- Obtaining external verification of impact is not straightforward. Getting the right external quotes and mentions for verification is not always easy. Policymakers may not be able or willing to say on the record that research has shaped their thinking (particularly when researchers from one country are part of a collaboration studying problems in another). Donors judging studies by their impact need to be realistic in their desire for evidence.
- Valuing the potential future contribution of research. Given that not all impact happens in a neat and time-bound way, it is helpful to establish the future potential. Three suggestions. First, it is helpful to be very clear about what couldn’t have been known before a study happened (and why it matters). Second, it may be helpful to show how many people are affected by a particular problem (or how severely), and so how many could potentially benefit from better policy (in my example above, the number of children who might be helped if it were possible to reverse early undernutrition at older ages). Third, it may be possible to track the development of debates (for example, across social media as well as academic publishing) to show how narratives around policy debates develop and how research has contributed to this.
- Value the pathway, not just the impact. Finally, to return to the start of this blog: to judge only the outcome is often to judge things over which researchers do not have full control. Worse, the risk is to encourage over-claiming. A focus on ensuring researchers have thought about the pathway by which research gets to the right place at the right time is more constructive.
The Young Lives survey data is now in the public domain as a public good to support future research. The long-term impact should therefore be considerably greater than we are able to show in case studies that can be produced now. Experience from longitudinal studies suggests the most interesting analysis may emerge long after the study is complete. And with that in mind, some trust is needed that if the right pathway is in place, future positive impacts for policy and programming are likely.