Evaluation of Administrators by University of Michigan Faculty

John T. Lehman, AAUP Executive Committee Member

The University of Michigan Faculty Senate conducted a precedent-setting campus-wide evaluation of academic administrators during December 2004, and results were formally reported to the faculty during the March 2005 meeting of the Senate Assembly.

The evaluation process had been mandated by vote of the University of Michigan Faculty Senate in March 2004.  The Senate called for creation of a new standing committee, the Administrator Evaluation Committee (AEC), and for construction of an on-line evaluation system to begin operating during Fall Term 2004.  Accordingly, the AEC developed electronic questionnaires, designed and developed software and information systems, and instituted reporting practices novel to faculty governance for the purposes of this evaluation.

A total of 864 individuals participated in the evaluations (28% of all eligible from the Ann Arbor campus).  Each of these individuals typically was eligible to submit evaluations for their chair, dean, provost, etc.; 2511 evaluation forms were submitted (20% of all possible).  By way of simple comparison, election records from the City of Ann Arbor (http://www.ewashtenaw.org/government/clerk_register/elections/election_results/cto_annarbor.pdf) report that in the 4 November 2003 general election, 21,660 ballots were cast by 82,834 registered voters (26% of all possible).  Experience with and feedback received from the inaugural round of evaluations have been positive and constructive, and future evaluation efforts will be beneficial to faculty governance.
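The turnout comparison above is simple percentage arithmetic; a minimal sketch, using only figures quoted in this article, makes the computation explicit:

```python
def participation_rate(submitted, possible):
    """Participation as a percent of all possible, rounded to the nearest whole percent."""
    return round(100 * submitted / possible)

# Ann Arbor general election, 4 November 2003:
# 21,660 ballots cast by 82,834 registered voters
print(participation_rate(21660, 82834))  # prints 26
```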

Critical findings identified across all units are reported below.

Evaluations were conducted of the president, provost, all deans, and all department chairs from the Ann Arbor campus.  Complete evaluation results for all administrators are available to all University Senate members at http://aec.umich.edu.

The call for evaluation of U-M administrators began as a grassroots effort stemming from dissatisfaction with a lack of accountability by administrators to the governing faculty of the university, and was articulated in a Faculty Perspectives Page article (‘Administrative Accountability,’ The University Record, 12 Jan 04).  Three months later, faculty turned out in force for a meeting of the University Senate at which a well-publicized resolution calling for administrator evaluation was the chief item of business.

The evaluation process and the AEC itself had been opposed by a number of faculty governance representatives on two influential faculty governance committees, the Academic Affairs Advisory Committee (AAAC) and the Senate Advisory Committee on University Affairs (SACUA).  Both committees meet regularly with university executive officers, and the SACUA chair had expressed the view that the AEC could “harm the special relationship” the committee members enjoyed with the executive officers.  In the final accounting, however, grassroots sentiment prevailed; the vote was 87 to 11 in favor of evaluation.

The on-line questionnaires included a mix of “core” and “topical” subjects that differed somewhat according to administrator rank.  However, a common set of core questions appeared on every administrator’s questionnaire.

Faculty participants were also given the opportunity of submitting text comments for each administrator they evaluated.  Text comments were electronically shuffled into random order and were transmitted to the administrators automatically along with responses to any questions posed by the administrators themselves.  The AEC retained no records of either type of response.  The AEC reported to faculty and archived only the anonymous results of responses to the core and topical questions that it had designed. 
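The shuffling step described above is straightforward to implement.  The sketch below is illustrative only (the AEC's actual code is not reproduced here); it shows how free-text comments can be reordered randomly before transmission so that submission order reveals nothing about who wrote what:

```python
import random

def anonymize_comments(comments, seed=None):
    """Return the free-text comments in random order so that submission
    order cannot be traced back to a respondent."""
    rng = random.Random(seed)
    shuffled = list(comments)  # work on a copy; the originals are never mutated
    rng.shuffle(shuffled)
    return shuffled
```

A fixed seed makes the ordering reproducible for testing; in production the seed would be omitted so the order is unpredictable.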

Thanks to the talents of AEC members from Electrical Engineering and Computer Science, the AEC developed a sophisticated system to ensure anonymity and confidentiality for the participants.  The security measures were thoroughly described in the AEC’s report to the faculty dated 15 March 2005, and the software is open for inspection by interested parties.  It is also available free for adoption by other groups or institutions.
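The AEC's actual security design is documented in its 15 March 2005 report; as one purely hypothetical illustration of the general technique, a system can reject duplicate ballots without ever storing identities by deriving a keyed hash of the respondent's ID as a submission token:

```python
import hashlib
import hmac

def submission_token(uniqname: str, term_secret: bytes) -> str:
    """Derive a stable, non-reversible token from a respondent's identity.

    The server can reject a second ballot bearing the same token, while the
    stored ballots never contain the identity itself."""
    return hmac.new(term_secret, uniqname.encode(), hashlib.sha256).hexdigest()

secret = b"fall-2004-term-secret"  # hypothetical per-term key, discarded after the survey closes
t1 = submission_token("alice", secret)
t2 = submission_token("alice", secret)
t3 = submission_token("bob", secret)
print(t1 == t2, t1 == t3)  # prints True False
```

Discarding the per-term key after the survey closes makes the tokens permanently unlinkable to individuals.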

Results for Specific Administrators

President -  The overall response rate was 16%.  The highest median score concerned representing the University to the outside constituency; the lowest median score concerned consulting with faculty.  Schools and Colleges that gave low marks to their deans tended to give lower marks to the President, too.

Provost -  The overall response rate was 18%.  The highest median score concerned promoting a scholarly environment; the lowest concerned consulting with faculty.  Schools and Colleges that gave low marks to their deans tended to give lower marks to the Provost, too.

Deans -  Response rates ranged from a high of 60% (Social Work) to a low of 7% (Law). Two units with response rates below 15% (Law and Pharmacy) were omitted from the subsequent analysis.

Department chairs -  Response rates ranged from a high of 58% (Chemical Engineering) to 0% (nine departments: four from Music, two from Dentistry, and one each from Medicine, Pharmacy, and LS&A).  The 32 departments with response rates below 15% were omitted from subsequent analysis.

Survey results indicate strong faculty sentiment that administrators at every level should improve their consultation with the faculty.  The AEC also identified a series of steps that could be taken to improve response rates in the future.  Its recommendations to the University Senate on this subject involve (1) the timing of the evaluation process during the academic term, (2) assurances about survey security, (3) outreach to eligible Senate members, and (4) demonstration by faculty governance representatives that faculty opinion can be translated into action.

AEC members are continuing a statistical analysis of the evaluation results.  One approach has been to compare the responses segregated by unit to the overall responses summed across all units as a common reference.  Many statistically significant results are emerging from this analysis.  For example, evaluation scores assigned to deans are highly associated with response rates across units (only 6 chances in 1000 that the pattern is caused by random chance alone); lower response rates are strongly associated with higher scores for the deans.  The units for which multiple index questions scored deans significantly above the university-wide reference (Type I error less than 0.05) are LSA, Medicine, Business, Social Work, Public Policy, and the Library.  Units in which deans were rated significantly below the university-wide reference are Engineering, Music, Dentistry, Education, and Nursing.
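A chance probability like the "6 in 1000" figure above can be estimated with a permutation test: shuffle one variable many times and count how often the shuffled association is as strong as the observed one.  The sketch below uses invented illustrative numbers, not the AEC's actual data:

```python
import random
from statistics import mean

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    mx, my = mean(x), mean(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

def permutation_p(x, y, trials=10000, seed=0):
    """Two-sided p-value: fraction of random pairings whose |r| meets or
    exceeds the observed |r|."""
    rng = random.Random(seed)
    observed = abs(pearson(x, y))
    y_perm = list(y)
    hits = 0
    for _ in range(trials):
        rng.shuffle(y_perm)
        if abs(pearson(x, y_perm)) >= observed - 1e-12:
            hits += 1
    return hits / trials

# Invented example: units with lower response rates show higher median dean scores
response_rates = [60, 45, 38, 30, 22, 15, 10, 7]           # percent, hypothetical
median_scores = [2.9, 3.1, 3.3, 3.4, 3.6, 3.8, 4.0, 4.2]   # hypothetical
r = pearson(response_rates, median_scores)
p = permutation_p(response_rates, median_scores)
print(r < 0, p)
```

With strongly anti-correlated data like this, the estimated p-value is small, mirroring the kind of result the AEC reported.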


The 2004 evaluations served as a useful diagnostic for both the university administration and the governing faculty on which administrators are perceived by participating faculty to be most – and least – effective. Experience obtained during the inaugural round of evaluations provides a basis for improvement of the instrument and interpretations in future iterations of the process.  The variations observed in median response scores to different questions indicate that participating faculty gave serious thought to their responses, and those variations provide specific guidance to administrators toward self-improvement.
