615 F.Supp. 1574 (1985)
Charlie F. WADE; Laverne Y. Lindsey; Bryan Keith Lindsey, Bryon Kevin Lindsey, by their mother and next friend, Laverne Y. Lindsey; Gwendolyn Durham, Joann Durham, Ja Nel Durham, Arnold Guyton Durham, by their mother and next friend, Odell Durham; and Howard T. Bailey, individually and on behalf of all others similarly situated, Plaintiffs,
v.
MISSISSIPPI COOPERATIVE EXTENSION SERVICE, et al., Defendants.
Civ. A. No. EC 70-29-K.
United States District Court, N.D. Mississippi, E.D.
August 29, 1985.
Jere Krakoff, Lawyers' Committee for Civil Rights Under Law, Jackson, Miss., for plaintiffs.
James M. Ward, Starkville, Miss., Thomas E. Childs, Jr., Fulton, Miss., Edwin L. Pittman, Atty. Gen. of Miss., Jackson, Miss., for defendants.
MEMORANDUM OF DECISION
KEADY, District Judge.
This case involves another chapter in long-standing litigation in which the Mississippi Cooperative Extension Service (MCES) petitions the court to approve the performance evaluation instrument (PEI) and procedure previously submitted and implemented pursuant to the stipulation of the parties. MCES seeks approval to implement the instrument and procedure fully, both to grant pay increases to employees meriting them and to use the performance evaluation score as an additional factor in determining the objective qualifications of personnel seeking promotions. The plaintiff *1575 class, consisting of MCES black professional employees, opposes the relief sought.
The background of the present controversy may be found in Wade v. Mississippi Cooperative Extension Service, 372 F.Supp. 126 (N.D. Miss. 1974), aff'd but rev'd on other grounds, 528 F.2d 508 (5th Cir.1976), in which it was determined that MCES's professional employees would be paid and promoted in accordance with their job category, academic training and degree, technical knowledge, and tenure of employment with MCES. The court struck down a performance evaluation form then used by MCES on the ground that it was racially discriminatory. MCES was directed "to devise an objective, valid and nondiscriminatory instrument which would reliably predict job success, and which shall be developed pursuant to the standards required by the Equal Employment Opportunity Commission's Guidelines on Employee Selection Procedures," 29 C.F.R. § 1607, et seq., and Griggs v. Duke Power Co., 401 U.S. 424, 91 S.Ct. 849, 28 L.Ed.2d 158 (1971). The court further ordered that, at MCES's option, an evaluation instrument validated in accordance with such standards might be used as an acceptable criterion for promotions and for merit pay, but only after the court approved the use of such form and procedure.
Since 1975, MCES has utilized, for purposes of merit pay and promotions, the court-approved criteria of (a) academic training and degree; (b) tenure of employment by MCES; and (c) technical knowledge derived from in-service training and scores thereon. However, MCES has exercised its option to develop, or attempt to develop, a PEI and procedure designed as an additional factor for determining pay increases and promotions of professionals. The PEI factor would be equal in weight to all of the court-approved criteria.
On August 26 and 27, 1985, the court conducted an evidentiary hearing on the sufficiency of a new PEI and implementing procedure, received oral and documentary evidence from the parties, heard oral argument of counsel, and now makes findings of fact and conclusions of law pursuant to Rule 52(a), Fed.R.Civ.P.
I. Findings of Fact
In June 1975, MCES contracted with Roy B. Mefferd, Jr., and Timothy G. Sadler, consulting psychologists affiliated with Birkman-Mefferd Research Foundation (Foundation), of Houston, Texas, to conduct a job analysis of all MCES employees in county-level positions. The consultants began with MCES professionals at the state level to determine the purposes and missions of MCES at the county level. The Foundation then conducted extensive interviews with four groups of county-level professionals, for which forty-three black agents and ninety-nine white agents, of both sexes, were selected. The interviewers were out-of-state experts fully qualified to conduct such a survey. As a result of these interviews, on September 21, 1976, the Foundation submitted to MCES a document entitled "Job Analysis and Validation of a Performance Evaluation Procedure of County Level and Area Level Agents of the Mississippi Cooperative Extension Service" (D-2). The study concluded that, while it was not possible to develop quantitative measures to establish criterion validity, forty-two elements were nevertheless ascertainable and identifiable as fairly measuring the knowledge and skills of MCES professionals and as being necessary and relevant to job performance. These elements fit into five broad categories as follows:
I. THE IDENTIFICATION AND ASSESSMENT OF NEEDS
1. The agent maintains updated knowledge of the relevant Extension needs of targeted clients.
2. The agent identifies and assesses needs on a regular basis throughout the year.
3. The agent identifies needs which have long-term implications.
4. The agent is open and responsive to changing needs.
5. The agent coordinates the local need assessment with the District Staffs to ensure that state-wide goals are considered.
*1576 6. The agent coordinates his/her local need assessment with other County Extension Agents and when appropriate with the District Staff to ensure a coordinated program which includes state-wide goals.
7. The agent organizes information about needs in a way to permit analyses of their relative significance, relevance, timeliness, urgency, equitability, and feasibility.
II. PROGRAM PLANNING
1. The agent completes the Annual Plan of Work and other planning and programming on time.
2. The agent plans the timely implementation of programs to meet unexpected needs.
3. The agent's programs have realistic and measurable goals and provisions for evaluating the results.
4. The priorities, levels of effort and scheduling determined by the agent allow attainment of the goals of programs.
5. The agent plans for the use of resource people to augment (but not provide) his/her programs.
6. The agent plans programs for a cross-section of targeted clients, county-wide or within program-defined groups as required by the job assignment.
7. The agent coordinates immediate and long-term plans with those of the other county Extension agents, and when appropriate, with District Staff, so as to participate in a team effort.
III. PROGRAM IMPLEMENTATION
1. The agent is responsive to county, area, or targeted-group needs in his/her area and level of responsibility.
2. The agent implements programs of two types: (1) those that emphasize economic improvement, and (2) those that emphasize quality of life improvement as the job assignment requires.
3. The agent implements state-wide programs in accordance with MCES policy.
4. The agent adheres to his/her Annual Plan of Work while accommodating emergency or new needs.
5. The agent regularly uses non-extension resources and resource people to augment his/her program in meeting goals.
6. The agent regularly demonstrates proficiency in the use of media to reach audiences as the job assignment requires.
7. The agent utilizes both group and individual client contacts in teaching and program promotion, keeping a balance between the two.
8. The agent augments his/her own program efforts by utilizing Extension specialists (i.e., specialists are used to augment service, not to provide the service), as the job assignment requires.
9. The agent motivates others in Extension programs.
10. The agent exhibits a personal commitment to team efforts involving the other county professionals.
11. The agent utilizes team efforts among his/her clientele to accomplish his/her goals.
12. The agent uses regular program evaluations for the purpose of making prompt changes to improve his/her programs.
13. The agent maintains a regular program for self-improvement of his/her teaching, communication, and program promotion skills.
14. The agent maintains a regular program for self-improvement of the technical aspects of his/her job assignment.
IV. PROGRAM EVALUATION
1. The agent evidences a continuing commitment to the systematic evaluations of the impact of his/her programs.
2. The agent makes systematic and documented evaluations of his/her personal effectiveness in the delivery of the programs.
*1577 3. The agent regularly uses client feedback for the assessment of impact of his/her programs on economic and qualitative aspects of life.
4. The agent maintains regular documented progress reports for formal reviews of major or long-term programs.
5. The agent uses program evaluation information to modify the goals of his/her long-term programs.
6. The agent reviews and evaluates program activities promptly for their immediate impact, using measures such as attendance, client inquiries, post-event publicity, etc.
7. The agent coordinates his/her program reviews with other agents in the county.
V. GENERAL DUTIES AND REQUIREMENTS
1. The agent is punctual in the performance of his/her duties with the public.
2. The agent plans and allocates his/her personal time so as to meet the requirements of both programmed and unscheduled duties.
3. The agent completes the necessary routine paperwork and office activities required by the job assignment without detracting from his/her on-going program work.
4. The agent maintains an orderly work area in the county offices.
5. The agent maintains public relations with clients which are in accord with his/her job assignment.
6. The agent regularly supports cooperative involvement with other county professionals.
7. The agent resolves conflict tactfully.
(D-10).
The Foundation determined the weight to be given the forty-two items by ascertaining how much time a professional spent performing particular tasks. Time studies showed, for example, that program planning and implementation together consumed 50% of an agent's time, while various other tasks consumed as little as 10%. This accounts for the greater weight given to the program planning and implementation elements.
The Foundation submitted a final report to the court on August 31, 1977. The parties, however, delayed presenting the matter to the court pending further experience with the plan.
On December 5, 1984, the Foundation submitted a comprehensive ten-year review of the development of the PEI (D-4). This report stated that each of the forty-two elements in the PEI represents a critical job requirement as determined by the MCES professionals themselves, that the elements uniformly cover the full range of the job, and that the PEI is both content valid and reliable. The Foundation's report concluded that PEI scores "are sensitive to changes in individual performance as well as to programmatical changes of MCES," and reaffirmed that there were no professionally acceptable alternatives, such as criterion-based scores, to a content valid PEI process for measuring current performance. The report concluded that the PEI is measuring actual professional performance of field-level extension agents.
Since 1979, MCES has unofficially utilized the PEI and has implemented a system of professional employee evaluation as a continuous process throughout the year. It is important to MCES to monitor the extension program, which is carried on by a professional staff dispersed throughout the eighty-two counties of the state, in order to be sure that the services embodied in MCES's programs are being effectively carried out. Hence, a cycle for each year encompasses a county professional's production of a detailed pre-plan of work, followed by a plan of work, its implementation, and, finally, a post-plan of work. In each step, there is extensive coordination of efforts among county professionals, district program leaders, and state program leaders, with review by district agents.
Documentation is maintained on an agent's personal calendar of activities and events, in minutes of staff conferences he attended and other committee minutes, in copies of mass media messages, and various reports. There is continuous feedback to the affected professional. After the *1578 plan of work has been executed, review is made by district program leaders, district agents, and state program leader auditors. The procedure for evaluating county professionals is accomplished in the following manner. The twelve district program leaders for agriculture, home economics and 4-H youth work, which are the three fields of MCES activity, initially evaluate and score the county professionals under their individual supervision in accordance with the performance evaluation sheet (D-10) for each of the forty-two job-related elements. This preliminary scoring by a district program leader is held in abeyance until all program leaders from the four Mississippi districts meet at Mississippi State University where they discuss consistency in scoring each element. These discussions, which last for a week, are held with the state program leaders present. The consistency scoring is fully documented (D-6, P-1). In such meetings, each program leader seeks to justify or defend the scoring methodology which he has tentatively applied. If changes are found applicable, they are made. The results of the scoring are then tabulated and submitted to the district agent for his review. Next, the district program leader presents the results to the professional being evaluated and discusses the evaluative scores with him.
As regards actual scoring, the procedure is to rate the professional on a scale of 1 to 5, with 1 being below expectation and 5 being above expectation. Detailed definitions for a score of 3 are contained in the Guidebook for Performance Evaluation (D-3). If the score on any element is less than 3, the program leader reviews the agent's work with him and suggests improvements. District program leaders are instructed "to professionally draw a line to separate 5, 4, 3, 2, 1. Scores of 1 and 2 and sometimes 3 are easier to derive in most cases and should be derived first" (D-6). The maximum score is 244, and the minimum is 1.[1] The court finds as a fact that, as a practical matter, in evaluating scores in terms of "below expectation" and "above expectation," district program leaders make interim memoranda throughout the year rating performance as poor, fair, satisfactory, good, or excellent, with a score of 3 on any element denoting satisfactory performance.
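For illustration only, the scoring arithmetic described above can be sketched as follows. This is a minimal sketch, not part of the record: the per-category weights are hypothetical (the record states only that weights were derived from time spent on tasks and that the composite maximum is 244), while the 1-to-5 rating scale and the element counts per category are taken from the opinion.

```python
# Illustrative sketch of the PEI scoring arithmetic described above.
# Hypothetical: the per-category weights below. From the record: ratings
# run 1 (below expectation) to 5 (above expectation) on each of the 42
# elements, and the composite maximum is 244.

CATEGORY_WEIGHTS = {                   # hypothetical time fractions; sum to 1.0
    "needs_assessment": 0.20,
    "program_planning": 0.20,
    "program_implementation": 0.30,    # planning + implementation = 50%
    "program_evaluation": 0.20,
    "general_duties": 0.10,
}
MAX_COMPOSITE = 244.0  # stated maximum composite score
MAX_RATING = 5

def composite_score(ratings):
    """Weighted sum of element ratings, scaled so an all-5 profile scores 244."""
    total = 0.0
    for category, element_ratings in ratings.items():
        # Spread the category's share of the 244 points evenly over its elements.
        per_element = MAX_COMPOSITE * CATEGORY_WEIGHTS[category] / len(element_ratings)
        total += sum(per_element * r / MAX_RATING for r in element_ratings)
    return total

# An agent rated 3 ("satisfactory") on every element scores 244 * 3/5 = 146.4
# under any weighting that sums to one.
all_threes = {
    "needs_assessment": [3] * 7,
    "program_planning": [3] * 7,
    "program_implementation": [3] * 14,
    "program_evaluation": [3] * 7,
    "general_duties": [3] * 7,
}
print(composite_score(all_threes))  # 146.4
```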
As heretofore noted, the function of the district program leaders is a strategic part of the evaluative process, although it is a responsibility shared with their peers and is subject to supervision of the state program leader. During the period under review, there were nine white and three black program leaders. Since 1979, approximately 250 agents have been evaluated on the PEI each year, resulting in some 1,263 personnel evaluations in the past five years.
The evidence shows that in 1979 and 1980 MCES leaders made a conscious effort to rate black professionals with higher scores to justify an increased salary procedure and promotions necessary to meet requirements of the court order directing such relief. Black professionals were then scored separately from white professionals. Hence, little weight and attention need be given to the results of those two years in which the evaluative process was skewed. Since 1981, however, MCES has maintained detailed data on personnel evaluations from which the Foundation has drawn significant conclusions. Plaintiffs' expert, Dr. Erich Prein, an industrial psychologist from Memphis, Tennessee, draws the bases for his critical comments from that same data. The findings of these experts will be discussed in some detail, since the correct factual determination must necessarily be reached on close scrutiny of their testimony.
As reflected in their comprehensive report (D-4) and in their trial testimony, Mefferd and Sadler found that the criteria of level of academic training and degree, tenure of MCES employment (four years or more), and in-service training grades were factors, or correlates, of job performance. They postulated that agents with advanced education should be better prepared, that agents with greater experience or tenure *1579 should be better prepared, and that agents who scored higher on in-service training tests should be better prepared, thereby enhancing their job performance. Mefferd and Sadler ascertained that the score levels of whites were consistently above those of blacks in 1981, 1982, and 1983, and that the differences were statistically significant, ranging from 5.2% to 6.7% of the average. They proceeded to determine whether the differences between the two groups of county professionals were related only to differences in job performance or to irrelevant factors such as race.
The principal conclusions of Mefferd and Sadler are summarized as follows.
1. The race of the rater was considered, bearing in mind that there were three blacks and nine whites who served as raters in the 1981-83 period. Black raters, as well as white raters, consistently rated the white agents higher than the black agents, although the group differences were greater with white raters (D-4, 2, Appendix B, Table 4).
2. For agents having both in-service scores and PEI scores in the three-year period, the two measures were correlated: the agents who made higher in-service training scores were the same agents who scored higher on the PEI (D-4, pp. 52-53).
3. Agents with four or more years' experience with MCES scored higher on the PEI than the average of those with less experience, with the experience effect being somewhat greater among whites than among blacks (D-4, 3, 4, Appendix B, Table 5).
4. Both black and white agents with better academic preparation scored higher on the PEI, although the effect was more substantial for black agents (D-4, 6, Appendix B, Table 6).
5. Black agents who scored higher on in-service tests and had advanced degrees from predominantly white universities compared favorably with whites having similar qualifications (D-4, 7, Appendix B, Table 7).
6. Mefferd and Sadler concluded that after all factors were taken into account, the mean scores of black agents as a group remained consistently lower than those of white agents but that
this difference between groups does not appear to be due to the irrelevant factor of race, but rather due to a factor of differential preparation. This conclusion is supported by the facts that newer Black and White agents are rated similarly (newer Black agents have typically been educated since school system integration has occurred), that Blacks and Whites with advanced degrees are rated similarly, and that Black and White agents equated for high quality of preparation have exactly the same average scores on the PEI.
Thus, the PEI is content valid, reliable and used in a fair manner.
(D-4, p. 66).
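The analysis summarized in conclusions 3 through 6 amounts to comparing mean PEI scores by race, first overall and then within strata matched on preparation. A schematic sketch of that kind of comparison follows; the data and field names are hypothetical, and the record does not disclose the Foundation's actual computations.

```python
# Schematic illustration of the "differential preparation" analysis described
# above: compare mean PEI scores by race, overall and within a stratum matched
# on preparation. All data below are hypothetical.
from statistics import mean

# Each record: (race, has_advanced_degree, years_with_mces, pei_score)
agents = [
    ("black", True, 6, 172.0), ("white", True, 7, 171.5),
    ("black", False, 2, 148.0), ("white", False, 3, 156.0),
    ("black", True, 10, 180.0), ("white", True, 9, 179.5),
]

def mean_score_by_race(records):
    scores = {}
    for race, _, _, score in records:
        scores.setdefault(race, []).append(score)
    return {race: round(mean(vals), 1) for race, vals in scores.items()}

print("all agents:", mean_score_by_race(agents))

# Stratum matched on preparation: advanced degree and four or more years'
# experience (the tenure threshold the Foundation used).
matched = [a for a in agents if a[1] and a[2] >= 4]
print("matched stratum:", mean_score_by_race(matched))
```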
Dr. Prein, plaintiffs' expert, stated that he examined the PEI scores for 1981-83 and found that there were differences between the scores of black and white agents that were significant both statistically and in a practical sense. He pointed out that, on a review of 254 evaluations, two blacks and twenty-two whites would be in the top 10% of the scores for 1980-81; four blacks and twenty whites would be in the top 10% for 1981-82; and one black and twenty-four whites would be in the top 10% for 1982-83. He regarded these scoring results as having a substantial impact on black professionals, both statistically and in a numerical sense. He took issue with the Mefferd-Sadler conclusion that the differences between the PEI scores of each race were due to differences in preparation. Prein further stated that the defendants' experts incorrectly assumed that education and experience were valid predictors of job performance, and he criticized in-service training and the scores thereon as having the same infirmity. On cross-examination, however, Prein acknowledged that he did not make a job analysis for MCES professionals and did not interview any program leader at the state, district or county level. Nor did he review any documents in the files of MCES. It was brought out that Prein conducted his own tests on the correlation of *1580 PEI scores with the academic training and experience of black and white agents for the years in issue and that his tests showed significant correlation between education, experience and the PEI score. He further acknowledged that there was no material difference between the ratings of black agents and white agents by either white raters or black raters. Dr. Prein was unable to account for the statistical difference between the scores of white professionals and black professionals other than to suggest that the samples of 254, 250 and 240 professionals during the years in issue were too small and that there were three times as many white raters as black raters.
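The selection-rate arithmetic underlying Dr. Prein's impact testimony can be illustrated as follows. This sketch assumes group sizes of roughly sixty-three black and one hundred ninety white agents per year (drawn from the merit-pay figures reported below, not from his testimony); the four-fifths comparison from the EEOC's Uniform Guidelines, 29 C.F.R. § 1607.4(D), is offered only as an illustration of how such rates are conventionally compared, not as a finding in this case.

```python
# Selection rates into the top 10% of PEI scores, using the counts cited by
# Dr. Prein. Group sizes are an assumption (roughly 63 black and 190 white
# agents per year, taken from the merit-pay figures reported below).
TOP_10_COUNTS = {
    "1980-81": {"black": 2, "white": 22},
    "1981-82": {"black": 4, "white": 20},
    "1982-83": {"black": 1, "white": 24},
}
GROUP_SIZES = {"black": 63, "white": 190}  # assumed, not from the record

for year, counts in TOP_10_COUNTS.items():
    rates = {group: counts[group] / GROUP_SIZES[group] for group in counts}
    # The EEOC's "four-fifths" rule of thumb (29 C.F.R. § 1607.4(D)) treats a
    # lower-to-higher selection-rate ratio below 0.8 as evidence of adverse
    # impact.
    ratio = min(rates.values()) / max(rates.values())
    print(year, {g: f"{r:.1%}" for g, r in rates.items()}, f"ratio={ratio:.2f}")
```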
The experts of the parties also clashed on whether the implementation of the PEI procedure was objective. Mefferd and Sadler testified that the definitions contained in the MCES guidebook and consistency scoring sheets for most of the forty-two elements within the five categories were sufficiently specific and detailed to be objective, and that the rater could thereby apply an accurate measure of job performance. Prein testified that the PEI lacked objectivity in that the scoring procedure of 1 to 5 was vague and that the definitions contained in the guidebook and other materials related only to a score of 3, without defining scores of 1, 2, 4, and 5. He was further critical of terms used in the definition section such as "quality," "variety," and "appropriate." In his opinion, the raters did not know what was meant by those terms, and each rater would have the opportunity, if not the necessity, of filling in his own ideas to accomplish the rating.
On this conflicting evidence, the court finds as a fact that the PEI is not racially discriminatory, even though black agents on the average may score lower than white agents. Although this difference is statistically significant, it is due to differences in overall preparation. To the extent that blacks have the same academic training, advanced degrees, and experience as whites, the differences are minimal. The court further finds as a fact that the mission of MCES is to deliver important educational services to the citizens of Mississippi in a variety of fields that are important to their welfare and that a trained corps of professionals is essential to accomplish that task. The defendants have, at a cost of $195,000 to the state, engaged professionals to develop the PEI, and five years' experience with unofficial use of the PEI shows that it has a high degree of correlation to job performance. The court further finds as a fact that while the definitions contained in the MCES guidebook are elaborate in detail, they do not and cannot cover every eventuality in an educational program that MCES seeks to deliver. Such programs are subject to changing needs each year. Future program changes will, of course, necessitate definitional changes. The court further finds as a fact that the PEI, its definitions, and the procedure whereby the system is implemented are objective and enable independent observers to reach the same result.
The evidence reflects that MCES has heretofore administratively granted merit pay increases and promotions to professionals scoring within the top 25%. Under this formula, merit pay increases for the fiscal year 1982-83 would have produced the following results: seven black agents out of sixty-three (11.1%) scored within the top 25%, and fifty-six white agents out of one hundred ninety (29%) scored within the top 25%. For the fiscal year 1983-84, nine black agents out of sixty-two (14%) scored within the top 25%, and fifty-two white agents out of one hundred eighty (29%) scored within the top 25%.
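For reference, the selection rates implied by these counts, computed to one decimal place, are:

```latex
% Selection rates implied by the merit-pay counts above, to one decimal place.
\[
\text{FY 1982--83:}\quad \tfrac{7}{63} \approx 11.1\%, \qquad \tfrac{56}{190} \approx 29.5\%
\]
\[
\text{FY 1983--84:}\quad \tfrac{9}{62} \approx 14.5\%, \qquad \tfrac{52}{180} \approx 28.9\%
\]
```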
II. Conclusions of Law
MCES's attempt to effectuate the remedial measures previously ordered by this court must pass muster under the Title VII standards enunciated in Griggs v. Duke Power Co., 401 U.S. 424, 91 S.Ct. 849, 28 L.Ed.2d 158 (1971), which forbid the use of employment tests that are discriminatory in effect unless the employer carries "the burden of showing that any given requirement [has] ... a manifest relationship to the employment in question." Id. at 432, 91 S.Ct. at 854. See Wade, 528 F.2d at 518. At the outset, plaintiffs concede that the PEI was developed in accordance with EEOC guidelines and is content valid for *1581 use in promoting employees to county-level agent positions. However, they contend that the PEI is not objective and nondiscriminatory and further assert that it is not content validated for use in promotion decisions concerning district-level agent positions.
This court concludes that MCES has carried the burden of establishing that the PEI is nondiscriminatory in both form and application and is sufficiently objective to provide an accurate measure of county agents' on-the-job performance. We accept the findings of Mefferd and Sadler which indicate that the mean score differential between blacks and whites for the years 1981-1983 is not because of race, but rather is attributable to disparities in educational preparation and experience. As to objectivity, MCES has established definitions, guidelines, and procedures which ensure that the scoring of agents by district program leaders is consistent and as non-subjective as is humanly possible. The concrete definition provided for a score of "3" on each of the forty-two elements, when considered in conjunction with the consistency scoring procedures and the district program leaders' personal reviews of scores with each agent, adequately safeguards against intrusion of any rater's personal whim or fancy into the scoring process. Moreover, MCES has demonstrated that it has the expertise to devise such future definitional changes as the programs of MCES may require. The PEI is so constructed that independent persons rating a particular agent would reach the same conclusion.
The court further concludes that the PEI is objective, nondiscriminatory, and content valid for use in promotion to district agent positions. It was developed through a job analysis of all MCES professional positions and through extensive interviews with agents at all levels regarding the requirements of their jobs. A county agent's scores on the PEI are reasonable factors to consider in determining whether or not he is qualified for promotion to the district level.
Having determined that the PEI developed by MCES is objective, nondiscriminatory, and content valid, we approve it as a job-related and necessary measure of job performance, which may be implemented by MCES in making promotion and merit pay decisions for county and area levels of the agency's employees.
Let an order issue accordingly.
NOTES
[1] A professional's score on job performance is proposed to be equal in weight, to a maximum of 57½, to academic training and degree, tenure of employment, and in-service training.