CHAPTER 32: Evaluation of Programs and Projects
Programs within the occupational health service must be evaluated regularly to ensure that they are cost-effective, successful in meeting their objectives, and managed optimally. Programs that are not successful or that meet no real need should be phased out. Programs that once met a real need might have to be adapted as needs change. Every program can be improved, but blind interference with a program in the absence of solid information is more likely to hurt than to help.
A methodical, objective evaluation provides management with the information it needs to continue, increase, or withdraw support from a program. It provides participants with a guide to their own performance and to how they might improve their services. It gives the client using the service a voice to indicate whether their needs were met. A good evaluation takes planning and preparation from the outset of the program. This chapter outlines a general approach to evaluating individual programs and projects within an occupational health service.
The process of evaluation is invaluable for answering important questions about the delivery of services. The principles of evaluation are simple to grasp but often exceedingly difficult to apply, because programs are usually set up to provide a specific service rather than to serve as controlled experiments. The evaluation component is seldom an overriding consideration in the design of a program and is often an afterthought. Consideration should be given, in advance of beginning the program, to the information that will be needed to evaluate it. Records and information systems can then be designed to retrieve and process the data quickly, efficiently, and without disrupting normal activities. An evaluation that requires a halt to operations, is disruptive, or can be achieved only at great expense is unlikely to be carried out at all, or will be resented by all participants.
An evaluation is carried out to answer several basic questions, among them: Is the program cost-effective? Is it successful in meeting its objectives? Is it being managed optimally?
An informal evaluation based on a "feel" for the program on the part of someone involved in its implementation or operation may be very astute and perfectly adequate; it is more likely, however, to be uncritical. A formal evaluation by an outsider with an objective viewpoint is best. The person or team performing the evaluation should be somewhat detached from its operation, especially if the stakes are high. Evaluation is always subject to bias, but bias can be controlled by collecting as much objective data as needed, by recognizing potential sources of bias before conclusions are drawn, and by balancing points of view through input from all interested parties.
Programs for the provision of health services are seldom true experiments. A true experiment would involve, at a minimum, an experimental group receiving the treatment and a control group treated identically except that the treatment is withheld. This is almost never possible in an occupational health program. The closer an evaluation method comes to an experimental design, however, the more objective and therefore less biased it is likely to be. Study designs that come close to a true experiment are called quasiexperimental. Quasiexperimental evaluation methods are based on comparing the experience of program participants with one of three benchmarks: 1) a similar comparison group, not identical but matched closely enough to yield useful information, followed over the same time period; 2) a before-after comparison, in which the group's experience is compared with its own experience before the program being evaluated began or in its earlier phases; or 3) an after-only comparison group, in which the outcome of the group served by the program is compared with that of another, very similar group not served by the program.
Each of these methods has serious limitations. Before-after comparisons cannot account for changes during the same period that are unrelated to the program, such as changes in the incidence of a disease in the community, changes in education and public attitudes as a result of news media attention, and changes in the composition of the workforce as a result of personnel turnover. Before-after comparisons and after-only comparison groups may also be misleading because members of the groups may be self-selected; individuals who cannot tolerate a job may simply quit and find another. Both the comparison-group and the after-only models depend on close matching between the groups on all characteristics affecting the outcome other than the program itself. A major limitation of all three approaches is that they are poorly suited to comparisons of very small groups. Despite their limitations, these quasiexperimental models are standard techniques in the evaluation of programs.
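To make the three benchmarks concrete, the following is a minimal Python sketch of the comparison each design permits. The injury rates and group labels are hypothetical assumptions for illustration only, not data from the text.

    # Hypothetical annual injury rates (injuries per 100 workers).
    program = {"before": 12.0, "after": 8.0}        # group served by the program
    comparison = {"before": 11.5, "after": 10.9}    # similar, unserved group

    # 1) Comparison group followed over the same period: compare the change
    #    in each group.
    print((program["after"] - program["before"])
          - (comparison["after"] - comparison["before"]))  # about -3.4

    # 2) Before-after comparison within the served group alone; cannot
    #    separate the program's effect from community-wide trends.
    print(program["after"] - program["before"])            # -4.0

    # 3) After-only comparison against the similar unserved group;
    #    vulnerable to self-selection of who joins or stays.
    print(program["after"] - comparison["after"])          # about -2.9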
A set of versatile techniques is adaptable to small groups and is thus potentially very powerful for the evaluation of occupational health programs and of the effectiveness of interventions in the workplace. The simplest of these techniques is time series analysis, an adaptation of the before-after comparison approach. Applied to programs, time series analysis may deal with specific organizational groups (departments or work teams), plants, or small companies. The measurement may include such variables as frequency of absence from work, reported injuries, or number of complaints of low back pain. To be valid, the changes in trend must be consistent across the different groups, with the same intervention introduced in each group at a different time. Comparison groups are not necessary. Time series analysis has many advantages. It can be applied to individuals or to groups. It is applicable to any clinical setting in which the subject or patient can be followed over time. The technique is suitable for relatively small numbers of subjects and does not require the intervention to be instituted for all subjects at the same time. It can be used when the intervention cannot be withdrawn or is irreversible. Time series analysis does have some drawbacks, however. It cannot be used when all subjects receive the intervention at one particular time, because the variation in the timing of the intervention is what controls for outside influences on the outcome. The technique requires repeated measurements before and after the intervention. If the measurement is very subjective (such as a measurement of satisfaction) or depends on learning, degree of training, or the age of the subject, time series analysis may not be applicable, because a change may occur over time regardless of the intervention. Time series data are also not easily analyzed by conventional statistical tests. Despite these drawbacks, time series analysis can be extremely useful for the program manager.
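Because the staggered timing of the intervention is what gives the method its power, a minimal Python sketch may help. The departments, monthly injury counts, and intervention months below are hypothetical assumptions, not data from the text.

    # Staggered before-after comparison underlying time series analysis.
    from statistics import mean

    # Monthly reported injuries per department (hypothetical).
    counts = {
        "assembly":  [9, 8, 10, 9, 4, 5, 4, 3, 4, 5],
        "finishing": [7, 8, 7, 6, 7, 8, 3, 2, 3, 2],
    }
    # The same intervention introduced at different times in each group;
    # the staggering controls for outside influences on the outcome.
    start_month = {"assembly": 4, "finishing": 6}

    for dept, series in counts.items():
        t = start_month[dept]
        before, after = series[:t], series[t:]
        print(f"{dept}: before={mean(before):.1f}, after={mean(after):.1f}, "
              f"change={mean(after) - mean(before):+.1f}")

A consistent drop following each group's own start date, rather than a single calendar date, is what supports attributing the change to the intervention.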
Selection of an appropriate method depends on what information is accessible and whether a comparison group is available. No model is perfect for all purposes; the most valid model that will use the pertinent information available should be chosen. Generally speaking, the most valid model is the comparison group, followed by time series analysis and the before-after comparison. The after-only comparison should be used only if no other approach is possible, as with programs begun in the past in which no attempt was made to collect information systematically. The validity of a model is only part of the story, however: the data must also be accurate and the measurements meaningful.
What to Measure
The selection of a variable to measure in order to evaluate a program or intervention depends on the setting and the nature of the process. Four categories of measurements or comparisons may be made:
These measurements are all useful in different ways. Combined, they piece together a picture of the program as a whole and help spot problems that can be solved through changes in the structure or procedures of the program.
Planning and Evaluation
Effective evaluation depends on good record-keeping and the cooperation of all concerned. The following are some practical measures that facilitate effective evaluation.
Evaluation of the Occupational Health Service as a Whole
At some point in the future, perhaps all medical practices will monitor and evaluate their experience using computers. At present this is difficult but becoming easier as new techniques are developed that can be adapted for office use. Evaluation allows the physician or nurse-manager to pinpoint problems and to act to correct them. It allows the service to adapt by adding equipment, staff, or space or by cutting back in some areas for more efficient operation. It permits the physician and nurse to plan for continuing education of the staff based on the needs of the practice or weaknesses in their own preparation. It helps reduce liability by highlighting problem areas which require better control, safer procedures, referral, or further training on the part of the provider.
The methodology of evaluation rests on comparing experience against a benchmark, usually the experience of the immediate past, a comparison group, predicted results, or expectations. The experience can be expressed in such terms as patient visits, case fatality, positive test results, recovery rates, dollars expended, or other quantifiable measures. A set of criteria is established against which the experience is compared, preferably using statistical analysis that takes into account random error and variability. These criteria are broadly classified as relating to process (how the care was provided) or outcome (what the result of care turned out to be). For example, one may choose to evaluate how frequently routine blood pressure checks were performed (process) or how well blood pressure was controlled in hypertensive patients (outcome). A useful set of criteria to begin with is given in Appendix 1 in the form of an audit questionnaire.
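To illustrate how such a comparison can take random variability into account, the following is a minimal Python sketch applying a simple two-proportion z-test to the blood pressure outcome above. The patient counts, the year-to-year framing, and the choice of this particular test are illustrative assumptions, not part of the original text.

    # Compare the proportion of hypertensive patients under control in two
    # periods; counts are hypothetical.
    from math import sqrt, erf

    def two_proportion_z(x1, n1, x2, n2):
        """z statistic and two-sided p-value for a difference of proportions."""
        p1, p2 = x1 / n1, x2 / n2
        pooled = (x1 + x2) / (n1 + n2)
        se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
        z = (p1 - p2) / se
        p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
        return z, p_value

    # Outcome criterion: patients controlled last year (62 of 100) vs.
    # this year (75 of 100).
    z, p = two_proportion_z(62, 100, 75, 100)
    print(f"z = {z:.2f}, two-sided p = {p:.3f}")

A test of this kind guards against reading meaning into a difference that random variation alone could produce.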
Table 32.1 Criteria for Performance Evaluation (table not reproduced; the surviving entry is "Workers' Compensation Advocate")
The first step in evaluation is to obtain a profile of the service. This might mean checking appointment sheets or keeping track of first reports to determine the frequency of new patients with occupational disorders and the distribution of cases by problem (musculoskeletal injuries, back strain, dermatitis, and so forth), by employer in the case of group practices or clinics (which companies are using the service, and what industries do they represent?), by occupation of the employee (are all such patients truck drivers, or is there a mix?), by service (acute care, consultation, periodic screening, and so forth), and by geographic area (are most patients coming from one part of town?). The second step is to identify changes by comparing similar data from previous years or by continuously monitoring utilization of the service.
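A minimal Python sketch of this profiling step may make it concrete; the visit records, field names, and values below are hypothetical assumptions about what a clinic log might contain.

    # Tabulate visits along several axes to profile the service.
    from collections import Counter

    visits = [
        {"problem": "back strain", "employer": "Acme Trucking", "occupation": "driver"},
        {"problem": "dermatitis",  "employer": "Brite Metals",  "occupation": "plater"},
        {"problem": "back strain", "employer": "Acme Trucking", "occupation": "driver"},
    ]

    for axis in ("problem", "employer", "occupation"):
        print(axis, Counter(v[axis] for v in visits).most_common())

Run year over year, the same tabulation supports the second step: spotting changes in the mix of problems, employers, and occupations served.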
Once the basic descriptive data are available, other methods of analysis can be applied to develop the evaluation. For example, a sample of medical records can be audited to determine how cases were handled and how they were resolved. Financial records can be used to determine whether payment was complete or whether a few cases ran up disproportionate charges. Groups with a common occupation or exposure, such as welders or asbestos workers, may be examined to determine whether they are showing consistent findings and whether their screening evaluations are up-to-date. Cases of nonoccupational problems may be reviewed to see whether consistent criteria were used in certifying absence from work or fitness to return to work.
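For the chart audit, a random sample keeps the review manageable and reduces selection bias. The following is a minimal Python sketch, assuming 1,200 charts on file and a 5% sample; both figures are hypothetical.

    # Draw a 5% random sample of medical records for audit.
    import random

    record_ids = list(range(1, 1201))                  # e.g., 1,200 charts on file
    sample_size = max(1, int(0.05 * len(record_ids)))  # 5% sample
    audit_sample = random.sample(record_ids, sample_size)
    print(sorted(audit_sample)[:10], "...")            # charts to pull for review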
An Approach to Evaluation
The problem of evaluating the effectiveness and cost of programs is made much simpler by gathering appropriate data at the outset for later comparison. The evaluation outline in this section is designed for an occupational health service providing services to a number of employers.
The essential first step in designing an evaluation is to be clear about the purposes of the exercise. Employees and sponsors of a program will understandably wish to present its progress to their advantage; this can distort the picture and obscure the identification of changes that must be made. The usual objectives for evaluating an occupational medicine service are the following:
1. To determine the effectiveness of the program in reducing job-related employee health problems: Effectiveness.
2. To determine the cost of the program to participating employers, both directly and in secondary costs: Cost.
3. To determine whether employees feel that their needs are being met by the program: Acceptance.
4. To determine whether employers feel that their needs are being met by the program: Satisfaction.
5. To identify those components of the program that are the most cost-effective and most valued: Specification.
The ultimate use of the evaluation is to modify the program to optimize Effectiveness, Acceptance, and Satisfaction while minimizing Cost, using Specification to direct changes to the individual components.
Specific objectives are the yardstick for evaluation. They may be grouped under each purpose in the form of an explicit general statement of the goal and then as specific objectives:
1.2.2 To provide rapid and appropriate care to the impaired worker.
2.1.3 To deliver services at a significantly smaller cost per employee than would be the case were services obtained on a piecemeal basis.
2.2 To reduce the cost to the employers of secondary costs related to medical services.
2.2.1 To reduce employer insurance premiums.
2.2.2 To reduce employer assessments for workers' compensation.
3.1 To provide services in such a way that the workers subjectively feel that their needs are being met.
3.1.1 To establish a degree of confidence on the part of the employees in the care delivered.
3.1.2 To establish a sense of overall satisfaction on the part of the employees with the personal attention of the clinic staff.
3.2 To ensure that workers' preferences and reactions are heard by the staff.
4.1 To provide services in such a way that the employers feel that their needs are being met.
4.1.1 To establish confidence in care delivered to their employees.
4.1.2 To establish overall satisfaction on the part of the employer with the attention of the clinic to the employer's special needs and desires.
5.1 To identify those components of the program which are the most cost-effective.
5.1.1 To determine those components which are the most effective by the criterion of Effectiveness.
5.1.2 To determine the specific per employee cost of each program.
5.2 To determine which components of the program are perceived as the most effective or desirable.
5.2.1 To determine those components of the program which are rated highest by the criterion of Acceptance by the employees.
5.2.2 To determine those components of the program which are rated highest by the criterion of Satisfaction by the employers.
Any number of other objectives could be formulated, but these are the essential ones that lend themselves to measurement. The design of the evaluation must then identify outcome measurements for the objectives and compare them against one of three standards: 1) the experience of a control group, 2) improvement compared with performance before participation, or 3) arbitrary, predetermined criteria.
In the real-world setting of contracting for services, a nonparticipating company is hardly likely to release its balance sheets and offer its employees for interview. Thus, a control group is not realistic except in the case of similar companies that choose different components of the program for their employees within the same facility; this is not a common situation. The criterion of improvement compared with performance before participation assumes that no major unrelated activities affecting health are implemented during the study period. For this reason, the study period must be kept brief, perhaps three years after entry into the program. Arbitrary criteria are the least desirable but may be the only alternative for some relatively subjective measurements, such as Acceptance.
Outcome measurements must be quantifiable and accessible. Clinic-acquired data are most useful, supplemented by data obtained from the employer, which are necessary to assess preparticipation costs. Specific outcome measurements for each objective might be as follows, tabulated by participating company and calculated per 1000 employee-hours (a sample calculation follows the list below):
1.1 Protection of employees
1.1.1 Reported occupational injuries and illnesses during the year preceding and during the third year of participation in the program.
1.1.2 Rating of injuries on initial clinic visit on a standard scale of disability determined during the first six months and the last six months of participation.
1.2.2 Continuous chart audits with a medical standards review of a 5 to 10% sample.
2.2.2 Workers' compensation assessment before and during last year of participation.
3.1 Employee acceptance
Questionnaire responses for employees using clinic program components during the first six months (covering the previous arrangement and expectations of the new services) and the last six months (covering interim experience and a subjective rating of care) of employer participation during the study period.
3.2 Acceptance of program components
Questionnaire responses, in like manner, directed at the acceptance of each program component.
4.1 Employer satisfaction
Questionnaire to employers covering each component of the program, similar to the Acceptance questionnaire but directed toward the employer's opinion.
4.2 Employer satisfaction over program components.
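As a sample of the per-1000-employee-hour tabulation referred to above, the following minimal Python sketch computes the rate for objective 1.1.1. The employee counts, annual hours, and injury counts are hypothetical assumptions.

    # Rate of reported injuries per 1000 employee-hours.
    def rate_per_1000_hours(events, employees, hours_per_employee):
        """Events per 1000 employee-hours worked."""
        return events / (employees * hours_per_employee) * 1000

    # Year preceding vs. third year of participation (objective 1.1.1).
    before = rate_per_1000_hours(events=84, employees=500, hours_per_employee=2000)
    third = rate_per_1000_hours(events=51, employees=520, hours_per_employee=2000)
    print(f"before: {before:.3f}, third year: {third:.3f} per 1000 employee-hours")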
The data accumulated for the foregoing would then be examined by program component and analyzed for cost-effectiveness, cost-acceptance, and cost-satisfaction. The employees of a company are not independent in their attitudes and responses to services; each program should therefore be evaluated on the basis of its success or failure in meeting its objectives for the majority of participating companies, not on the basis of all employees served.
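A minimal Python sketch of this component-by-component analysis follows; the components, per-employee costs, and effect measures are hypothetical assumptions, not figures from the text.

    # Rank program components by cost-effectiveness (cost per unit of effect).
    components = {
        # component: (annual cost per employee, injuries prevented per 1000 employees)
        "periodic screening": (40.0, 2.0),
        "acute care":         (65.0, 5.0),
        "back school":        (15.0, 1.5),
    }

    for name, (cost, effect) in sorted(components.items(),
                                       key=lambda kv: kv[1][0] / kv[1][1]):
        print(f"{name}: ${cost / effect:.2f} per injury prevented per 1000 employees")

The same tabulation can be repeated with acceptance or satisfaction scores in place of the effect measure to yield the cost-acceptance and cost-satisfaction comparisons.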
This is a program evaluation, not a controlled epidemiologic study. Problems are inevitable in evaluating programs that exist in a competitive and political milieu. By anticipating data needs at the outset and designing a general strategy for evaluation before the initiation of service, conflicts and friction with program personnel can be avoided. Program evaluations solve important problems in the real world, and it is in the real world that one identifies and manages occupational disorders.