Stage 7: Monitor and evaluate
Throughout the OPQ process, workplace performance should be monitored, documented, and evaluated to measure how performance gaps have narrowed, or high-performing areas have expanded, as a result of the process.
Monitoring: The routine tracking of data that measure progress toward achieving the objectives of a program or intervention. The purposes of monitoring are:
- To ensure activities are implemented according to plan and timeline
- To identify activities or resource allocation in need of adjustment or improvement to achieve desired results
- To provide information for decision-making and program evaluation
- To meet reporting requirements
- To facilitate advocacy
Evaluation: The process of collecting and analyzing data to measure how well a program or intervention has met its expected objectives, and/or the extent to which changes in outcomes can be attributed to the program or intervention as opposed to other factors. The purpose of evaluation is to confirm that the adopted strategies and available funding produced the desired results, and to assist stakeholders in decision-making about future program improvement and implementation by:
- Providing an objective and reliable assessment of the activities
- Providing feedback to local organizers and other stakeholders about:
  - The outcomes of the activities
  - Strengths and weaknesses
  - Other influencing factors
  - Suggested measures for improvement
To guide any monitoring and evaluation (M&E) activity, a framework is needed to illustrate the steps and levels at which measurements will be carried out. The figure below provides an illustration of such a framework.
A typical M&E framework contains the levels illustrated in this figure. Monitoring activities generally measure indicators related to inputs, processes or activities, and outputs. Evaluation activities usually measure indicators related to the effects or results of the inputs, processes, or outputs, and sometimes measure the long-term impact of these results. (See Tools section for Suggested Questions to Address During Monitoring and Evaluation.)
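As a rough illustration of these levels, the framework can be sketched as a mapping from each M&E level to the kind of indicators measured there. This is a hedged sketch only; the indicator names below are hypothetical, not drawn from the source.

```python
# Illustrative sketch of a typical M&E framework. Monitoring measures
# inputs, processes, and outputs; evaluation measures outcomes (effects)
# and, sometimes, long-term impact. Indicator names are hypothetical.
FRAMEWORK = {
    "inputs":    ["funds disbursed", "staff trained"],
    "processes": ["training sessions held"],
    "outputs":   ["workers reached by the intervention"],
    "outcomes":  ["change in size of the performance gap"],
    "impact":    ["improved health outcomes", "increased productivity"],
}

MONITORING_LEVELS = ("inputs", "processes", "outputs")

def measured_by(level: str) -> str:
    """Return which M&E activity typically measures a given level."""
    return "monitoring" if level in MONITORING_LEVELS else "evaluation"
```

For example, `measured_by("outputs")` returns "monitoring", while `measured_by("impact")` returns "evaluation", mirroring the division of labor described above.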
Step 7.1: Develop the M&E plan
(See M&E Plan Format and Sample M&E Plan in Tools section.)
The OPQ team develops an evaluation plan that can be integrated into workplace processes to serve as an ongoing feedback device for workers and managers to measure changes in performance and quality.
Note that the M&E plan should be developed during Stage 3, after defining desired performance. The M&E plan will:
- Identify the purpose, users, resources and timelines of monitoring and evaluation
- Select the key monitoring and evaluation questions and indicators and the best design to measure intended results
- Sequence monitoring and evaluation activities (such as completing baseline documentation, tracking progress toward milestones, and conducting materials pretests, participant follow-ups, project reviews, and special studies)
- Prepare data collection and data analysis plans, including cost as well as results or program data
- Plan for communication, dissemination and use of evaluation results
- Identify the technical competencies needed on the monitoring and evaluation team(s)
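To make these plan elements concrete, each key indicator in the M&E plan could be captured as a simple record noting what is measured, against what baseline and target, from which source, how often, and by whom. This is a hypothetical sketch; the field names and sample values below are illustrative, not part of the OPQ materials.

```python
from dataclasses import dataclass

@dataclass
class IndicatorPlan:
    """One row of a hypothetical M&E plan (illustrative fields only)."""
    indicator: str    # key M&E question restated as a measurable indicator
    baseline: float   # value documented before interventions begin
    target: float     # desired value defining the performance goal
    data_source: str  # where the data will come from
    frequency: str    # how often the data will be collected
    responsible: str  # who on the M&E team collects and analyzes it

# Hypothetical example rows:
plan = [
    IndicatorPlan("tasks completed to standard (%)", 62.0, 85.0,
                  "supervisor checklists", "monthly", "M&E officer"),
    IndicatorPlan("women in supervisory roles (%)", 18.0, 30.0,
                  "personnel records", "quarterly", "HR focal point"),
]
```

Keeping baseline and target side by side in each row makes Step 7.4's comparison against the baseline a routine lookup rather than a separate data-reconstruction exercise.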
Step 7.2: Monitor routinely and make adjustments
Monitoring is done on an ongoing basis, at every stage of the process, so that progress can be tracked and documented, and changes can be made as needed during the implementation or at the next cyclic phase. Monitoring during implementation of activities is discussed in Stage 6.
Step 7.3: Repeat the baseline data collection using the same indicators and instruments
An evaluation should again measure the performance levels of the workers or organization and assess the extent to which gaps, including gaps in gender equality, have been closed and strengths have been expanded as a result of the interventions. It should also delineate any broader impact, such as improved health outcomes and/or increased productivity.
For the evaluation of effects, the evaluation team will typically conduct a data-gathering exercise similar to the collection of baseline data to describe actual performance. Using the same sampling tools and indicators makes it possible to contrast performance levels before and after the interventions and to determine whether there have been demonstrable changes.
If the evaluation design uses a control group, the team will compare changes in the intervention group with changes (if any) in the control group to arrive at net effects (i.e., changes in the intervention group minus changes in the control group over a similar period of time). Ideally, the control group should be very similar to the intervention group, so that, all other things being equal, any net effects can be more easily attributed to the intervention.
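The net-effect arithmetic described above (change in the intervention group minus change in the control group) amounts to a difference-in-differences calculation, sketched here with hypothetical performance scores:

```python
def net_effect(intervention_before: float, intervention_after: float,
               control_before: float, control_after: float) -> float:
    """Difference-in-differences: the change observed in the intervention
    group minus the change observed in the control group over the same
    period. (Illustrative sketch; scores are hypothetical.)"""
    intervention_change = intervention_after - intervention_before
    control_change = control_after - control_before
    return intervention_change - control_change

# Hypothetical scores (e.g., % of tasks performed to standard):
# the intervention group improved by 20 points and the control group
# by 5, so the net effect attributable to the intervention is 15 points.
effect = net_effect(60, 80, 58, 63)
```

Subtracting the control group's change nets out background trends that would have occurred anyway, which is why a well-matched control group strengthens the attribution claim.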
Step 7.4: Compare results with baseline
Where goals were met, celebrate and recognize team members’ contributions! Where goals were not met, analyze reasons and cycle through the OPQ process again.
Step 7.5: Report and communicate evaluation results
In many cases, a written report of evaluation results is required. This step includes:
- Writing a report describing the methodology, findings, and conclusions
- Selecting appropriate graphics to communicate summary findings
- Formulating recommendations based on conclusions and consultations with the client
The report should present findings so that the audience can clearly see:
- Changes in performance
- How these changes can be attributed to the interventions
- Cost of the interventions
If the evaluation design warrants, the report should also present the effects (if any) of alternative interventions or the absence of interventions in control areas and discuss differences between those areas and the intervention area.
The ultimate goal of evaluation is the use of results to:
- Demonstrate the validity of a new approach (i.e., “what works”)
- Identify areas to be strengthened in future project designs (i.e., “what didn’t work and what to do differently next time”)