Stage 3: Identify gaps and strengths

This stage in the OPQ process—determining the difference between desired and actual performance—involves three critical steps, and each step has several substeps. Statements of desired performance, actual performance, and gaps and strengths are all described according to the same specific, observable, and measurable indicators.

In this stage, the OPQ team and other stakeholders carry out the following steps:

Step 3.1: Define desired performance or expected quality standards

To gain consensus on exactly what is desired in terms of performance, stakeholders will work together to state desired performance in results-based, measurable outcomes that support organizational goals. This focus will help every other step of the OPQ process become more precise, clear, and targeted. Indeed, without a careful definition of desired performance, it will be difficult to measure progress toward desired outcomes.

This cooperative work to define desired performance is vital for building consensus among stakeholders and achieving the desired outcomes. It is also the basis of the evaluation plan that lays out what will be measured before and after the interventions to demonstrate to what extent the interventions contributed to closing of the gaps or expanding the high-performing areas.

The hierarchy of performance chart below illustrates the importance of aligning the desired performance outcomes with overall organizational goals. Ask the following questions:

Hierarchy of Performance
  • What is the organization trying to accomplish (goal)?
  • What groups (ministries, departments, district offices, facilities, private entities, non-profit organizations, implementing partners, donors, etc.) can help achieve the goal (leader/responsible)?
  • What outcomes does each group have to achieve to help the organization accomplish the goal? (desired performance and quality outcomes and standards)
  • What key work processes are necessary to result in these outcomes?
  • What activities, tasks, and steps comprise the key work processes?

See a template for the Hierarchy of Performance Chart and Examples in the Tools section.

Note that desired performance outcomes and ideal performance outcomes are not necessarily the same. Desired performance reflects the expectations of the stakeholder group and may change over time. At the beginning of what must be viewed as an ongoing process, undue emphasis on ideal performance might set seemingly unreachable standards that could discourage, rather than encourage, improvement.

Step 3.1.1: Determine priority areas for which you want to develop desired performance outcome statements. For example:

  • Where desired performance outcomes are unclear or not specified
  • Where you have received complaints
  • Where you suspect that there are gaps
  • Where you suspect there are high-performing areas that could be scaled up
  • Where you are initiating new services, processes, jobs, responsibilities, and/or departments

Step 3.1.2: Identify the available resources to help you write desired performance outcome statements.

Stakeholders usually use organizational, national, or international guidelines and standards as resources for setting the desired performance outcomes. There may also be strategic plans, job descriptions, or performance management and supportive supervision checklists that set forth expected performance or quality.

Step 3.1.3: Write desired performance outcome statements according to the criteria and performance indicators below.

Desired performance outcome statements should meet the following criteria. Statements should:

  • State the activity and accomplishments/results of the “performer” (e.g., individual, team, department, organization, health facility)
  • Be under the control or authority of the performer
  • Be SMART: specific, measurable, attainable, relevant, and time-bound
  • Be clear and unambiguous (can be agreed upon by independent observers)
  • Be measurable in terms of quality, quantity, timeliness, and/or cost

Desired performance outcome statements may contain one or more of the following measures or indicators:

  • Quality: Does the performance match the standard? Does it meet client expectations?
  • Quantity: Does the performance happen as much or as often as it should?
  • Timeliness: Does the performance happen on time or as soon as it should?
  • Cost: Does the performance maximize resource use and avoid waste?

This step will help the stakeholders decide what really matters about the performance in question: how well it is done, how often it happens, how timely it is, how much it costs, or all four.

Step 3.1.4: Set targets for each indicator.

Once stakeholders have decided on the measurable indicators for the desired performance, they should set targets for each indicator. For example, should the performance meet the standard all of the time, or is 90% of the time acceptable to start with? The targets you set now may be revised in the future.

Setting unrealistic targets can discourage performance. Revising targets is a useful technique when performance levels are initially very low and you need to set interim performance goals.
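To illustrate the idea of interim targets, the sketch below steps an indicator from a low baseline toward a final target in stages. The indicator, numbers, and function name are hypothetical, not drawn from any OPQ tool:

```python
# Hypothetical illustration: stepping an indicator from a low baseline
# toward a final target via evenly spaced interim targets.

def interim_targets(baseline, final_target, steps):
    """Return evenly spaced interim targets between baseline and final target."""
    increment = (final_target - baseline) / steps
    return [round(baseline + increment * (i + 1), 1) for i in range(steps)]

# Example: counseling coverage starts at 40%; the final target is 90%,
# approached through two interim targets before the final one.
print(interim_targets(40, 90, 3))  # [56.7, 73.3, 90.0]
```

Interim targets like these can be revised as baseline data and early results come in.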

You may begin to use the Performance Specification Form in the Tools section by filling in the first column. See also Performance Specification Form Examples.

Step 3.2: Assess actual performance

The OPQ team assesses actual performance outcomes using the same specific, observable, and measurable indicators that are used to define desired performance outcomes. Possible sources of performance data include institutional information systems and records, clinic records, and previous projects and studies completed in the same area. Always start with existing data. Many times, however, existing data on current performance outcomes will be insufficient. Only then should the team collect additional data. Data collection may be a simple exercise but can also become a complex activity. If a monitoring and evaluation expert is not already on your team, now is the time to solicit help and advice.

The data assembled for this stage will serve as the baseline for determining the effectiveness of any interventions that are implemented. After the interventions have been implemented for a designated time period, you will compare performance data to the baseline data to determine whether or how much performance has changed.

Start your monitoring and evaluation (M&E) plan (see Stage 7) using the inputs described in Step 3.1. The M&E plan lays out what will be measured before and after the interventions related to desired and actual performance, to demonstrate whether—and to what extent—those interventions contributed to the reduction or closing of gaps or expansion of strengths. The M&E plan can be further refined once the actual interventions are selected and designed.

During baseline performance data collection, the OPQ team also gathers information about the presence or absence of performance factors that will be used during root cause analysis—Stage 4. These performance factors are described in the Performance Factors section. Questions related to the performance factors can be found in Diagnosing Performance Problems: What to Look at First in the Tools section.

Step 3.2.1: Decide on data collection methods.

The team must decide which methods to use to collect current performance data linked to the desired performance outcome statements written earlier. For each statement of desired performance, determine the best method or combination of methods for gathering valid and reliable data. Select methods based on the type of performance you are measuring and the availability of existing data. For example, if you are conducting a clinical audit to monitor quality practices in a health care setting, you will want to review patient files, perhaps supplemented by self-assessments, interviews and/or questionnaires for supervisors, and patient satisfaction surveys. Typical data gathering methods include:

  • Review of service data, registers, and logs
  • Review of individual patient records
  • Facility/institutional audit
  • Review of human resources data
  • Review of strategic plans or action plans and progress on implementation
  • Process mapping (see example Cross-Functional Roles/Responsibility Matrix, Job Model Guide, and cross-functional process maps for FP Client Visit and District Recruitment in the Tools section.)
  • Interviews with health workers, supervisors, managers, directors, executives, and board members
  • Testing (written or computer-based tests, simulation with standardized patients)

  • Self-assessment with standards-based checklists
  • Observation with standards-based checklists (by expert observers, trained peers, and supervisors)
  • Patient exit interviews or mystery clients/customers
  • Patient satisfaction surveys
  • Community meetings or focus group discussions
  • Community surveys

As part of the data collection methodology, decide on the sample size and sampling strategy most adequate for your needs. Whatever the sample size, the sample should be representative: avoid selecting only the most convenient respondents, include a wide range of users/respondents, and use random selection wherever possible. A large sample is not always needed. For example, if you expect a very large change from a very low baseline, a sample as small as a few dozen individuals may suffice.
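A minimal sketch of the random-selection point above, using only the Python standard library; the sampling frame of facilities and the sample size are hypothetical:

```python
import random

# Hypothetical sampling frame: all facilities in the district.
facilities = [f"Facility {i}" for i in range(1, 41)]

# Draw a simple random sample rather than visiting only the most
# convenient sites; a fixed seed makes the draw reproducible/auditable.
random.seed(2024)
sample = random.sample(facilities, k=12)
print(sorted(sample))
```

The same approach applies to sampling patient records or respondents; the key point is that selection is random rather than convenience-based.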

If existing data are insufficient, you may need to carry out steps 3.2.2-3.2.4.

Step 3.2.2: Design data collection instruments.

With the team or a subset of the team, design data collection instruments. The forms should simplify data compilation and be easy to use. Data collection instruments often include interview guides, observation checklists, focus group discussion guides, questionnaires, and survey forms.

Step 3.2.3: Identify and equip data collectors.

Identify the data collectors and prepare them for the data collection activity. At times, the data collectors will come from your immediate team or organization. In other instances, you will need to hire data collectors. Once the team is identified, they must be equipped with everything they need to collect baseline data. Readying them may include training and arranging logistics for travel.

Step 3.2.4: Collect the data.

During the data collection phase, arrange all logistics in advance to avoid delays, including transportation and the availability of forms, batteries, and charging stations (if using electronic equipment). A supervisory team should always spot-check the work of a sample of data collectors and review data extraction and entry for consistency.

Step 3.2.5: Compile and analyze data.

Compile the data that you have assembled from various sources. For smaller projects, this may be done by hand on tables and check sheets. For larger efforts, statistical analysis software may be used. In either case, the data should be compiled in a way that communicates the current level of performance to the team and to others outside the immediate team. For example, if the desired level of performance is that 90% of health workers counsel antenatal care (ANC) clients on exclusive breastfeeding, what is the actual performance? Is it 40%? 50%? Keep the stakeholder audience in mind when choosing presentation formats.
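The tallying behind the ANC counseling example above can be sketched as follows. The patient records here are invented for illustration; real compilation would draw on the record review or survey forms described earlier:

```python
# Sketch of compiling actual performance from a record review, using the
# ANC counseling example in the text (records below are invented).

records = [
    {"client_id": 1, "counseled_on_ebf": True},
    {"client_id": 2, "counseled_on_ebf": False},
    {"client_id": 3, "counseled_on_ebf": True},
    {"client_id": 4, "counseled_on_ebf": False},
    {"client_id": 5, "counseled_on_ebf": False},
]

counseled = sum(r["counseled_on_ebf"] for r in records)
actual_pct = 100 * counseled / len(records)
print(f"Actual performance: {actual_pct:.0f}% of ANC clients counseled "
      f"on exclusive breastfeeding (desired: 90%)")
```

For larger data sets, the same computation would typically be done in statistical analysis software, but the logic is identical.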

Step 3.2.6: Make statements about actual performance.

Take each statement of desired performance and make a corresponding statement that shows the actual (current) performance. Be able to back up the statements with appropriate data presentations (charts, graphs, etc.).

You may continue to use the Performance Specification Form in the Tools section by filling in the second column. See also Performance Specification Form Examples.

Step 3.3: Describe performance gaps and strengths

Once you have defined the desired and actual levels of performance outcomes, identifying the performance gaps and high-functioning areas becomes a simple matter of comparing the two levels. The description of gaps and strengths shows, in objective terms, the difference between current performance and the performance/level of achievement that is desired. It is best to quantify the gap between desired and actual performance, because it is then easier to determine progress in closing the gap by measuring actual performance outcomes again after implementing interventions.
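Quantifying the gap this way can be expressed as simple arithmetic on the same indicator at baseline and at follow-up. The numbers below are invented, continuing the ANC counseling example:

```python
# Quantifying a gap as the difference between desired and actual levels,
# measured on the same indicator before and after interventions
# (illustrative numbers, not real data).

desired = 90          # % of ANC clients counseled on exclusive breastfeeding
baseline_actual = 40  # measured in Step 3.2
followup_actual = 75  # re-measured after interventions

gap_at_baseline = desired - baseline_actual   # 50 percentage points
gap_at_followup = desired - followup_actual   # 15 percentage points
progress = gap_at_baseline - gap_at_followup  # 35 points of the gap closed
print(gap_at_baseline, gap_at_followup, progress)  # 50 15 35
```

Because both measurements use the same indicator and method, the difference directly shows how much of the gap the interventions helped close.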

Step 3.3.1: Describe the performance gaps and strengths.

Using the same measures that were used to describe desired and actual performance outcomes, describe the performance gaps and strengths and communicate this information to stakeholders. See Describing Performance Gaps and Strengths-Examples in the Tools section.

Step 3.3.2: Decide whether to work on each gap/strength.

It is important for stakeholders to pause and decide as a group whether each gap or strength is worth the effort that will be required to close it or to scale it up. This stage also involves prioritizing which performance gaps and strengths to address or in what sequence to address them, as more than one will likely be identified. Criteria for prioritizing or sequencing could be the following:

  • Seriousness/urgency (e.g., involves safety or priority health indicators): Contemplate how the gap/strength affects client outcomes.
  • Scope or frequency: Consider whether the gap is wide or narrow, frequent or infrequent.
  • Alignment with organizational goals: Take into account whether the gap/strength seriously affects the potential of the organization to reach its goals.
  • Gender equality: Consider whether the gap or strength affects conditions for women and men to equally realize their full rights and potential, participate in the workforce, be healthy, contribute to health development, and benefit from the results.
  • Time and resources required: Weigh the amount of time and resources necessary to overcome potential obstacles and close the gap or scale up the strength.

Often it helps to prioritize gaps or strengths by using a 5-point scale to rate each gap or strength according to the criteria. See sample scale for Rating Gaps and Strengths in the Tools section.
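One way to apply such a 5-point scale is to score each gap or strength against the criteria and rank by total score. The gap names, scores, and criterion keys below are hypothetical, not taken from the Tools section:

```python
# Hedged sketch: rating each gap/strength on a 5-point scale against the
# prioritization criteria listed above, then ranking by total score.
# All names and ratings are invented for illustration.

criteria = ["seriousness", "scope", "alignment", "gender_equality", "feasibility"]

ratings = {
    "ANC counseling gap":     {"seriousness": 5, "scope": 4, "alignment": 5,
                               "gender_equality": 4, "feasibility": 3},
    "Stockout reporting gap": {"seriousness": 3, "scope": 2, "alignment": 3,
                               "gender_equality": 1, "feasibility": 4},
}

totals = {name: sum(scores[c] for c in criteria)
          for name, scores in ratings.items()}

# Print gaps from highest to lowest priority.
for name, total in sorted(totals.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: {total}/25")
```

Stakeholders could also weight the criteria differently (for example, doubling seriousness/urgency) before summing, if the group agrees some criteria matter more.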

Some gaps may not deserve further attention. Efforts to close some gaps may need to be postponed while you work on more serious or urgent gaps.

You may continue to use the Performance Specification Form in the Tools section by filling in the third column. See also Performance Specification Form Examples.