Vital

News & commentary about the global health workforce
Are We at the “End of the Beginning” for Global Digital Health? And What’s Next?

A health worker uses a tablet at Hospital Puertos Barrios in Izabal, Guatemala. Photo by Anna Watts for IntraHealth International.


To impact health outcomes, we still have challenges and opportunities to tackle.


After decades of sustained focus by the digital health community, global bodies like the WHO, UNICEF, and the World Bank, and many country governments have embraced digital transformation as a key driver for achieving universal health coverage. But that may just be the end of the beginning for global digital health—to enable digital transformation that impacts health outcomes, we still have major challenges to address:

  • The proliferation of innovative, but not interoperable, digital tools has arguably left eHealth in eChaos.
     
  • The global health workforce faces a projected shortage of 10 million workers by 2030, and many workers currently in post are leaving due to systemic burnout.
     
  • The open source data systems we rely on to make data available for high-value artificial intelligence (AI) use cases suffer from under-investment and weak business models.

What opportunities exist to better leverage data and what long-term investments should be made to enable high value data use cases? Read on to learn how we’re taking on these challenges and opportunities at IntraHealth. 

Connect with us at the Global Digital Health Forum, December 4-7, to discuss how we can partner to achieve the future we want.

Answering the most impactful business questions often requires more than one database.

Let’s face it: many reporting systems in global health are built for donor reporting or national government monitoring rather than for improving local program implementation. One can argue this arose out of necessity, as the development sector grew and regulatory oversight became essential. But now we have an opportunity to shift our perspective and re-use that data to answer some of our most critical business questions.

IntraHealth implemented the USAID Regional Health Integration to Enhance Services in Eastern Uganda (RHITES-E) program from 2018 to 2023. We had a seemingly simple question: how many lay workers did we need to employ, and could we dynamically redistribute their workload to improve client retention? Our data science team had to merge four siloed databases to answer it: total staff by cadre at health facilities (PEPFAR HRH inventory), the percentage of those positions filled as a proxy for workload (iHRIS), retention data and GPS locations of facilities (DHIS2), and the size of the client cohort (DATIM High Frequency Reporting).

We determined that if each lay worker had 50 clients, they would have enough time to engage with them, helping to promote clients’ long-term retention in care. Mapping the actual ratios let us see where we could redistribute workload. And since much of the engagement happens by phone, we could redistribute client lists without physically moving any staff.
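The merge-then-compute workflow above can be sketched with pandas. The table names, columns, and numbers below are invented stand-ins for the real sources (PEPFAR HRH inventory, iHRIS, DATIM), not their actual schemas:

```python
import pandas as pd

# Hypothetical, simplified extracts from the siloed systems;
# a shared facility identifier is assumed to exist across them.
hrh = pd.DataFrame({"facility_id": ["F1", "F2", "F3"],
                    "lay_workers_total": [4, 2, 6]})          # HRH inventory
ihris = pd.DataFrame({"facility_id": ["F1", "F2", "F3"],
                      "pct_positions_filled": [0.75, 1.0, 0.5]})  # iHRIS
datim = pd.DataFrame({"facility_id": ["F1", "F2", "F3"],
                      "clients_on_treatment": [180, 140, 120]})   # DATIM

# Merge the siloed sources on the shared facility identifier
df = hrh.merge(ihris, on="facility_id").merge(datim, on="facility_id")

# Effective staffing = total positions x fill rate
df["lay_workers_filled"] = df["lay_workers_total"] * df["pct_positions_filled"]

# Clients per lay worker; the program's target was ~50 clients per worker
df["clients_per_worker"] = df["clients_on_treatment"] / df["lay_workers_filled"]
df["over_target"] = df["clients_per_worker"] > 50
```

Facilities flagged `over_target` are candidates for receiving help; since client lists can be reassigned by phone, workload can flow from those facilities to under-target ones.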

Having an internal data science team at IntraHealth has reduced the lead time from good ideas to great results. We have refined our approaches through several projects since 2019; you can read about our lessons learned from applying machine learning to messy, real-world data in LMICs here.

Generative AI can help cope with software developer and health worker shortages.

The world is not only facing a shortage of health workers. There is also a shortage of FHIR-literate software developers in global digital health. We don’t have formal estimates, but anecdotally the gap is large.

At IntraHealth we’re excited about the opportunity that large language models and generative AI tools like ChatGPT create to support and upskill health workers and the software developers who build FHIR-compliant global goods software. Data from other industries backs this up.

Early studies show that ChatGPT-like support can help college-educated professionals and developers in two ways. When college-educated professionals were given a writing task and randomly assigned to use ChatGPT or not, those who used ChatGPT improved the quality of their work and reduced the time spent on the task. Interestingly, those with lower baseline scores improved their quality the most, while more skilled professionals mainly saved time while maintaining high quality. Similarly, GitHub found that its AI-based Copilot helped developers complete a coding task 55% faster.

What could this look like in a busy, understaffed inpatient nursing ward? We know that turnover among nurses is high due to systemic burnout caused by understaffing and lack of support. What if ChatGPT-style interventions could be that mentor providing support and advice for a young nurse who finds themselves alone in the inpatient ward before they’re ready? If she feels supported, maybe she’ll stay.

Come check out our presentation at GDHF, where we’ll present the results of using ChatGPT to iterate on our iHRIS software. Monday, December 4, 1:30-2:45 pm in the Oakley Conference Room at the Bethesda North Marriott in Bethesda, MD. The full GDHF agenda is here.

The foundation of any AI use case is good data.

We’ve all heard the saying “garbage in, garbage out” to make the point that the data going into an AI model impacts the utility of the results you get out. Imagine you have a program delivering HIV care to clients in Uganda and you want to use electronic medical record (EMR) data to predict which clients are at risk for interrupting treatment.

  • How should you deal with inevitable missing data? If values are missing, it could be because the visit got cut short and the data didn’t get entered or some other reason that could be correlated to the outcome of missing an appointment. Some machine learning algorithms will automatically impute data but if the data is not missing at random, you should model the missingness and use it in the analysis (e.g., feature engineering).
     
  • How often should you update the model in production? When the results of the model can be pushed back to the EMR, this could be as easy as running the model at the beginning of each week. However, as you intervene with clients to reduce interruptions in treatment and those clients’ results go back into the model each week, the model’s accuracy can drift. Let’s say your intervention is successful and all 25-year-old males that got your intervention are less likely to interrupt. Is a 25-year-old male in your dataset now less likely to interrupt, or is the model learning from your intervention and becoming less accurate about typical 25-year-old males?
     
  • Can you make AI work in a paper-based world? EMRs may not be available during the visit for clinicians to check on the status of a client at risk for interruption in treatment and intervene. Can you create a paper-based tool that curates the model output into an assessment so that clinicians can evaluate clients sitting in front of them and provide support? Palindrome has had success doing that in South Africa.
     
  • Who ensures that the global good open source software powering EMRs and other necessary tools is maintained? The EMRs we’re talking about are often open source software maintained at a loss by the original developers, and without sufficient investment in security and maintenance, open source software can pose significant security risks. How can we ensure better coordination in the open source software space, with sustainable business models?
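For the missing-data question above, one common pattern is to model the missingness explicitly as a feature before imputing, so the model can learn from the fact that a value is absent. A minimal pandas sketch with made-up EMR values:

```python
import pandas as pd

# Illustrative EMR extract: 'days_since_last_visit' is sometimes missing,
# and the fact that it is missing may itself predict a missed appointment.
emr = pd.DataFrame({
    "client_id": [1, 2, 3, 4],
    "days_since_last_visit": [30.0, None, 90.0, None],
})

# 1) Add an indicator flag the model can learn from (feature engineering)
emr["last_visit_missing"] = emr["days_since_last_visit"].isna().astype(int)

# 2) Then impute so downstream algorithms get complete columns;
#    median imputation is just one simple choice among many
emr["days_since_last_visit"] = emr["days_since_last_visit"].fillna(
    emr["days_since_last_visit"].median()
)
```

If the data were truly missing at random, the indicator column would carry no signal and the model would simply ignore it; if not, it becomes a predictive feature in its own right.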

IntraHealth is among a small group of organizations working to develop AI models that can predict which HIV clients will interrupt treatment before they do. You can read more about these efforts here and here.

Check out IntraHealth’s presentation at the GDHF to hear how we addressed these issues in the USAID-funded HIV Care & Treatment Project in Central America. Wednesday, December 6, 1-2 pm in the Forest Glen Conference Room at the Bethesda North Marriott in Bethesda, MD. The full GDHF agenda is here.

How do we solve some of these challenges?

1. Create a data use architecture plan

Any data scientist worth their salt will tell you that data science isn’t magic, it’s driven by good questions and good data. Projects can start today by assessing their monitoring and evaluation (M&E) data used for reporting (i.e., telling stakeholders what’s been done) and figuring out how to turn it into strategic information (i.e., telling program managers what to do next). Donors can work with implementers to reduce the burden of M&E reporting.

At IntraHealth, we call our overall approach to data use READI: Rapid, Efficient, and Data-Driven Implementation. READI’s guiding principles are real-time, granular, automated data; re-use of donor reporting and secondary data for implementation where possible; and working with governments to review their data and make decisions through a Data to Action (D2A) process.

In READI, we re-use as much donor reporting M&E data as possible for implementation. When the data we need isn’t an M&E indicator, we leverage free, publicly available secondary data (e.g., hyper-local 1x1 km estimates of socioeconomic and population indicators) to help increase the efficiency of our programs. We also engage with communities of practice to re-envision M&E data to support program implementation and have developed a standard set of indicators our projects can choose from for family planning, maternal and child health + nutrition, and non-communicable disease implementation.
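As a sketch of how such gridded secondary data might be joined to program data: snap each facility’s GPS coordinates to its 1x1 km grid cell and look up that cell’s estimate. The grid resolution, coordinates, and values below are purely illustrative, not real estimates:

```python
# ~1 km in decimal degrees near the equator (illustrative approximation)
GRID_RES = 0.0083

def grid_cell(lat: float, lon: float) -> tuple[int, int]:
    """Snap a GPS point to its grid-cell index."""
    return (int(lat // GRID_RES), int(lon // GRID_RES))

# Pretend raster exported as a lookup table: cell index -> estimated
# population density (values are made up)
raster = {grid_cell(0.3476, 32.5825): 8200,
          grid_cell(1.0827, 34.1756): 950}

# Facility GPS coordinates (e.g., from DHIS2), also made up
facilities = {"Clinic A": (0.3476, 32.5825), "Clinic B": (1.0827, 34.1756)}

# Enrich each facility with the estimate from its grid cell
enriched = {name: raster.get(grid_cell(lat, lon))
            for name, (lat, lon) in facilities.items()}
```

In practice this lookup would run against a real raster file (e.g., via a GIS library) rather than a hand-built dictionary, but the join logic is the same: point coordinates in, cell-level estimate out.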

2. Invest in open source software 

You may have seen various versions of this image around the web. While many open source communities have created great enhancements to software over the years, the complexity and needs of the global goods community are often beyond what can be managed and maintained by volunteers. Better investment is needed in global goods to ensure that the software being provided is being maintained, incorporates standards, is secure, and evolves with advances in technology. This is particularly essential where reasonably priced, commercial alternatives do not exist.

3. Create markets for health information system (HIS) components

A fully functional HIS supports innovation and entrepreneurship by reducing transaction costs: innovators can build technology that leapfrogs physical infrastructure barriers (like telemedicine apps) using APIs to high-quality health worker and facility lists maintained by a single, public entity. In a perfect world, governments would already be able to implement a fully interoperable HIS using existing architecture specifications like OpenHIE, and digital health sandboxes would let entrepreneurs easily introduce new innovations into the ecosystem.

However, some markets are decades away from this model, and there are transitional approaches that might be easier to achieve in the short term. Markets could benefit today from middleware business-to-government (B2G), government-to-business (G2B), or business-to-business (B2B) models that build, maintain, and license to innovators the registries needed for telemedicine apps, rather than each innovator building and verifying their own.

What other opportunities are there for B2G, G2B, or B2B use cases that can support public health? And what non-traditional funders (e.g., health tech investors) might be interested in starting up middleware companies to create the market needed to make health tech profitable? Read more in our blog.
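To give the registry-API idea above a concrete flavor: a sketch that builds a FHIR search against a hypothetical government facility registry. The base URL is made up; `address-state` and `status` are standard FHIR search parameters for the Location resource:

```python
from urllib.parse import urlencode

# Hypothetical government-maintained facility registry exposing FHIR;
# the domain is illustrative, not a real endpoint.
BASE = "https://registry.example.gov/fhir"

def facility_search_url(district: str, count: int = 50) -> str:
    """Build a FHIR Location search for active facilities in a district."""
    params = urlencode({
        "address-state": district,  # FHIR Location search parameter
        "status": "active",         # only currently operating facilities
        "_count": count,            # page size
    })
    return f"{BASE}/Location?{params}"
```

An innovator building a telemedicine app could call such an endpoint instead of compiling and verifying its own facility list, which is exactly the transaction cost a middleware registry operator would absorb.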