No one wants to do research that goes unused. As researchers, we aspire to make a contribution, however small, to enriching our world with evidence and insight. But it can be hard to unpick the impacts that research has had, and even more challenging to demonstrate those impacts with robust evidence.

In this post I share my approach to understanding and assessing research impact. I developed this approach through my PhD and have since used it in seven independent impact studies, in advising several research programmes, and in supporting academics to write impact case studies.

When I set out studying for a mid-life PhD in 2008, little did I know that my topic – research impact assessment – was about to take off in the UK and around the world as interest grew in understanding the wider benefits of research to society and the economy. I was fortunate to gain insights into this by studying a successful research partnership and building a framework based on contribution analysis1.

Defining research impact

Definitions are important in a PhD, so I developed one that has been useful to many colleagues grappling with how research and knowledge exchange activities contribute to high-level outcomes like improved well-being.

My definition separates research impact into three distinct levels:

  • Research uptake: research users have engaged with research: they have read a briefing, attended a conference or seminar, been research partners, been involved in advising and shaping the research project in some way, or engaged in some other kind of activity which means they know the research exists.
  • Research use: research users act upon research: they discuss it, pass it on to others, adapt it to context, present findings, or use it to inform policy or practice developments.
  • Research impact: changes in awareness, knowledge and understanding, ideas, attitudes and perceptions, and policy and practice as a result of research.

The challenge then is to trace impact through these levels, and find the evidence to demonstrate that change has occurred and can be reasonably linked to research.

This definition is based on the conceptualization of research use as a complex process, and follows work by Weiss2 and Gabbay & le May3, who have unpicked the processes of research use in policy and practice to show that it is not a linear process. People who use research are not passive recipients of knowledge; instead, they take up and use research according to the complex needs of the situation they are in, and combine it with other forms of knowledge to support their policy or practice. This presents many challenges for those of us who want to understand impact.

How to assess research impact

I advocate setting out a theory of change to show how research has contributed to different levels of outcomes. (We refer to this as an impact or outcome map – see our post Understand the outcomes and impacts that matter for more on this).

Theories of change are dynamic – they can be used as a planning tool for knowledge exchange, as well as the framework for assessing impact. The Research Contribution Framework published in Research Evaluation shows how to use a theory of change approach to understand and demonstrate the impact of your research. This method of understanding impact aligns well with the case study approach used by many research excellence frameworks, such as the UK REF.

Having set out your theory of change, a variety of data and evidence can then be assembled against it.

At Matter of Focus, we use an outcome or impact map to help us clarify and clearly articulate the contribution story of our research. Our software OutNav makes it easy to add data as we go, and collaborate on evidencing the impacts of our research.


[Image: An overview of the Matter of Focus approach]

To learn more about the Matter of Focus approach we recommend starting with our overview post.


Working collaboratively is particularly helpful for identifying the assumptions we’ve made about how impact might occur. We advise spending time interrogating these assumptions and naming any risks that your intended impact might not occur. These discussions form the basis for assembling and analysing data. Common risks and assumptions, and their data potential, are also listed here.

Data for research impact assessment

  • Outputs/Activities: administrative data about knowledge exchange and research activities undertaken, and descriptions of outputs and how they were disseminated.
  • Engagement/Involvement: who was engaged and why, data on attendance, social media, web hits, downloads. Reflections on what went well and where there were challenges.
  • Reactions/awareness: feedback from research users and collaborators through events, social media, surveys, etc.
  • Changes in knowledge, skills, capacities: feedback from research users and collaborators. Measures of knowledge change, observations on changes.
  • Changes in behaviour, policy and practice: reported changes in policy and practice, backed up with documentary evidence; interviews with key stakeholders, and follow-up interviews with others who have been identified as using the research.
  • Final outcomes and contribution: logically extrapolated or recorded changes in outcomes.

Mapping the contribution of the Global Kids Online programme – the assessment of this can be found on the Unicef Office of Research-Innocenti website.

Top tips for assessing impact

  1. Be a detective. Assessing the impact of research can be a bit like detective work – finding clues and following them up. Sometimes one research user has a rich story to tell of how they have used research to influence policy or practice. I discuss this in another paper published here.
  2. Collect as you go. It really helps if you collect data and feedback as you go, rather than waiting until time has passed and trying to fill the gaps. Ideally, if you have set out a theory of change or impact map, you will be able to see what feedback will be needed to evidence your contribution. I would advocate using every interaction with research users as an opportunity to collect feedback – preferably formal feedback, but observations and reflections if that’s not possible.
  3. You don’t need lots of data – just the right data. You don’t need lots of impact data, just enough convincing and well-analysed data to demonstrate your contribution. Often a few key research users or collaborators can give rich insights that you can follow up and, if you are looking for policy change, can help shape any documentary analysis that is needed.
  4. It takes time for impact, and impact assessment. Impact takes time to emerge, so just stick with it, collecting data as you go. It can be useful to ask people if you can follow up with them after six months or a year to track emerging impacts.

Of course, I have taken this approach into Matter of Focus and some research teams are using our software OutNav to embed this way of working. If you want to see this in action, please contact us – or book a place at our research impact school.

How the Research Contribution Framework aligns with the Matter of Focus approach.



If you’re ready for a system to start tracking your research impact, take a look at our Research Impact School training.


Some publications using the approach

Research and knowledge exchange on resilience and intimate partner violence: ‘“Make Resilience Matter” for Children Exposed to Intimate Partner Violence Project: Mobilizing Knowledge to Action Using a Research Contributions Framework.’ [Go to report (pdf)]

Knowledge to action in healthcare: ‘Developing a framework to evaluate knowledge into action interventions.’ [Go to report]

An assessment of the Global Kids Online programme: ‘Children’s experiences online: Building global understanding and action.’ [Go to report]

References

1 Morton, S. (2015). Progressing research impact assessment: A ‘contributions’ approach. Research Evaluation, 24(4), 405–419. https://doi.org/10.1093/reseval/rvv016

2 Weiss, C. H. (1979). The many meanings of research utilisation. Public Administration Review, 39(5), 426–431.

3 Gabbay, J., & le May, A. (2004). Evidence based guidelines or collectively constructed mindlines? Ethnographic study of knowledge management in primary care. BMJ, 329, 1013. https://www.bmj.com/content/329/7473/1013

To receive a regular round-up of our insights and news please sign up to our mailing list.
