
It Doesn’t Count If You Can’t Count It: Measuring Successes in the Health Workforce

Last week, at the Second Global Forum on Human Resources for Health, I attended some provocative sessions about generating and translating evidence to inform policy. Many sessions echoed the new USAID evaluation policy released earlier this month by USAID Administrator Rajiv Shah. “By aggressively measuring and learning from our results, we will extend the impact of our ideas and of knowledge we helped generate,” Shah said, adding that USAID “will collect baseline data and employ study designs that explain what would have happened without our interventions so we can know for sure the impact of our programs.”

Generate Evidence

Getting the evidence needed to determine a program’s success or failure is the cornerstone of any health program. But generating evidence on human resources for health (HRH) programs is often trickier and less straightforward than a laboratory study. HRH interventions are complex, involve many different stakeholders, and unfold over long periods of time. Unlike laboratory research, it is much more difficult to control variables in HRH interventions, and several interventions are often under way at the same time. This does not mean that HRH studies should be any less rigorous than laboratory studies. On the contrary, evaluating HRH interventions needs to be more rigorous—and more creative—in measuring impact. We have to keep asking ourselves, “Are my data reliable?” In the lab we go to great lengths to ensure our methodology is sound, our results are well documented, and our data are assessed for quality. Why is it that when we think of rigorous research, we often think of it only happening in a lab? Why can’t an evaluation conducted outside the lab, such as an evaluation of an HRH program, be just as meticulous? In fact, it can, and it should be. As with any study, we need to start with the question “What exactly are we measuring?”

Measure ‘Smartly’

Too often in HRH evaluation, interventions are evaluated without clearly identifying what is being measured—outcomes or impact. We need to stop and ask ourselves, “Are we measuring the right things, in the right way, and with the right tools?” Laboratory scientists go to great lengths to ensure they are measuring with precision and accuracy, relying on the most cutting-edge methods and technology to maximize precision and minimize error. It should be no different for HRH evaluators! The tools we develop and use to measure the outcomes and the impact of HRH projects—two different things that require two different tools—need to be just as cutting-edge and just as precise. Evidence providers in HRH need to start thinking of the world as their ‘laboratory.’ We need to start devising better, smarter, and more precise methods and tools to measure our objectives.

Link Evidence to Informed Policy

Most of us who work as monitoring and evaluation specialists in HRH are not evaluating projects for the sake of evaluating projects. Rather, we are providing the Ministries of Health, Finance, and Labor and other bodies with the information they need to decide whether to further invest in, scale up, modify, or abandon an intervention. Hence, we need to generate, synthesize, collate, and package data in ways that make sense to policymakers. And although the traditional role of monitoring and evaluation (M&E) advisors has been to respond to demand, it is now time for M&E advisors and program implementers to proactively create demand among policymakers. In a session on translating evidence into policy, one panelist noted that in many countries there is little demand for evidence, while in others policymakers question the reliability and quality of the data. This is especially true of developing countries, which need high-quality evidence even more than developed countries do in order to direct very limited resources appropriately. Developing countries simply cannot afford to enact ineffective policy.

Support the Evidence Providers

There are many players involved in strengthening HRH systems, as evidenced by the large and diverse group in Bangkok last week. Although we might not all agree on the types of interventions or the directions of policies, one thing is clear: the overwhelming need to build capacity to generate reliable data. To do this, we need to help country staff evaluators, donors, nongovernmental organizations, and multi- and bilateral partners become top-notch evidence providers. This will require more than formal education or on-the-job training—it will require a long-term commitment to mentorship and partnership. To provide robust evidence, we need to involve the right mix of people with the right skills.

While watching the Global Health Workforce Alliance’s video, “A health worker for everyone, everywhere,” at last Wednesday’s opening plenary session, I was heartened to see what one health worker can accomplish when she is supported. It made me think that while it might take only one health worker to make a difference in a village, it takes a whole team of evaluators to tell you whether that difference is having an impact. Imagine that.