A police officer sees a drunk man searching for something under a streetlight and asks, “What have you lost?”
“My keys,” the drunk man says, and they both look under the streetlight together. After a few minutes, the officer asks if he is sure he lost the keys here, and the man replies, “No, I lost them in the dark alley across the street.” The officer asks why he is searching here, and the drunk man replies, “This is where the light is.”
This is just an old joke, of course, but maybe we are not too different from the drunk man in the way we attempt to measure the impact of our development programs.
As development implementers, we might behave just like this when we design our measurement and evaluation frameworks. We need evidence. We need to see results or impact. So we select indicators that are easiest to measure—standing out there in the light—and we shape our interventions around that framework.
How often are we as researchers tempted to track data that are easiest to capture rather than searching in the dark for what we really need to know? Under the streetlight, we examine and document easy-to-see data when the information we really need may not be anywhere nearby or accessible to us.
In reading the new Lancet Global Health Commission report, High-quality health systems in the Sustainable Development Goals era: time for a revolution, we ask ourselves how we became sidetracked into measuring coverage, utilization, and access to the detriment of quality as measured by patient experience, competent care, and health outcomes. As development practitioners and researchers whose daily work intersects measurement, we see the report as a wake-up call regarding our own observational biases.
The streetlight effect is a type of observational bias that occurs when we search for something only where it is easiest to look. Coverage, utilization, and access have been easier to quantify—and they are important health systems indicators—but they do not adequately measure quality of care and health system performance.
Indicators that more closely measure quality of care, such as competent care and user experience, are often more qualitative and require local contextualization and alignment with patient-reported outcomes and patient values, which are harder to obtain and quantify. This means we have to venture outside the light and work a little harder to find the answers.
Governments responsible for delivering high-quality public health services to their citizens and communities must understand the importance of making that effort and must establish mechanisms to advance and measure quality. That kind of true accountability is revolutionary.
Citizens too need a new approach. The community assessment of quality is important, and to measure this, citizens must understand what quality looks like. For example, a respectful, friendly health worker does not always equate with quality of care. Nor does receiving the right medications without delay. Perhaps the patient did not need medicine, or perhaps the patient was diagnosed incorrectly.
The commission calls on all of us to help local communities understand the standards of care and quality and to establish effective feedback mechanisms. Measuring quality requires a revolution in both metrics and governance: developing and stewarding quality metrics that incorporate patient experience, building community capacity to recognize and assess the quality of care received, and rethinking the research methodologies we use to monitor and evaluate patient-experience data, so that health systems become more accountable to the people they serve.
Without these efforts, we shall continue to search in vain under the streetlight for those keys lost in a dark alley.
Allison Annette Foster, Mai-Anh Hoang, and Todd Nitkin serve as the co-chairs of the CORE Monitoring & Evaluation Working Group.