
2000 STUDY DESIGN, CONTENT AND ADMINISTRATION

 
STUDY DESIGN 

     The 2000 National Election Study entailed both a pre-election interview
and a post-election re-interview.  A freshly drawn cross-section of the
electorate yielded 1807 cases.  The 65-minute pre-election survey
went into the field September 5th, nine weeks before election day.  The
65-minute post-election study, unique in the time series in that no
president-elect was named for several days after the vote, went into the
field the day after the election, November 8th, and remained in the field
until December 18th.

     Because of the study's most innovative feature, a carefully designed mode
experiment, the data represent two presidential studies in 2000, side by side.
The core study preserves our past commitment to probability area sampling and
face-to-face interviewing: 1006 respondents were interviewed face to face
prior to the election and 694 were re-interviewed face to face after the
election.  Supporting the core study, we used the efficiencies of RDD sampling
and telephone interviewing: 801 respondents were interviewed by phone prior to
the election and 862 respondents were interviewed by phone after the election.
The experiment will sharply define the differences between the two modes and
allow us to learn what a shift to telephone interviewing would mean for the
NES time series.  Further details of the administration of the surveys are
given in "Study Administration," below.


STUDY CONTENT

Substantive themes

     The content for the 2000 Election Study reflects its double duty, both as
the traditional presidential election year time-series data collection and as
a mode study.  Substantive themes represented in the 2000 questionnaires
include:

*  interest in the political campaigns; concern about the outcome; and 
   attentiveness to the media's coverage of the campaign      
*  information about politics
*  evaluation of the presidential candidates and placement of presidential 
   candidates on various issue dimensions  
*  knowledge of the religious background of the major Presidential and Vice-
   Presidential candidates 
*  partisanship and evaluations of the political parties    
*  vote choice for President, the U.S. House, and the U.S. Senate, including
   second choice for President    
*  political participation:  turnout in the November general election; other
   forms of electoral campaign activity
*  personal and national economic well-being
*  positions on social welfare issues including:  government health insurance;
   federal budget priorities, the budget surplus, and the role of the
   government in the provision of jobs and a good standard of living
*  position on campaign finance and preference for divided government
*  positions on social issues including:  gun control, abortion; women's
   roles; the rights of homosexuals; the death penalty; school vouchers;
   environmental policy
*  Clinton legacy
*  knowledge of George Bush Sr. and his previous administration
*  fairness in elections; satisfaction with democracy; and the value of voting
*  racial and ethnic stereotypes; opinions on affirmative action; attitudes
   towards immigrants
*  opinions about the nation's most important problem
*  values and predispositions:  moral traditionalism; political efficacy; 
   egalitarianism; humanitarianism; individualism; trust in government
*  social altruism and social connectedness       
*  feeling thermometers on a wide range of political figures and political 
   groups; affinity with various social groups  
*  social networks, shared information and expertise on politics
*  detailed demographic information and measures of religious affiliation and
   religiosity.   

Several new concepts are addressed in the 2000 study: 

SOCIAL TRUST: Over the last decade, research on social trust has exploded. In
order to allow NES to contribute to this research effort, we developed a
series of new measures that approach the problem from a new angle. With
supplementary funding from the Russell Sage Foundation, we developed measures
addressed not to the trustworthiness of people in general, but to the
trustworthiness of neighbors and co-workers. Our 2000 Special Topic Pilot
Study showed that the new measures gauge trust reliably, that neighborhood and
workplace trust are related to but distinct from general social trust, and
that they contribute independently to participation in politics. We included
these measures in the 2000 NES, again, with support from the Russell Sage
Foundation. Together with an expanded set of questions on participation in
civic life that are also part of the 2000 study, we expect to see a wide range
of exciting new investigations on trust and participation. 

VOTER TURNOUT: A particularly vexing problem for NES has been over-reporting
of voter turnout. Over the years we have sponsored a series of investigations
trying out possible remedies, without much success. But now it seems that we
may have a solution in hand, based on the source monitoring theory of recall.
The notion here is that some people may remember having voted sometime in the
past but confuse the source of that memory, accidentally misassigning it to
the most recent election, when it actually derives from a prior election. We
are therefore implementing a new item, with expanded response categories to
help respondents be more accurate in determining whether they did in fact vote
in November of 2000. 

POLITICAL KNOWLEDGE: The 2000 study also sees a slight change in the way 
political knowledge is measured. In the past, we have encouraged respondents
to say they "don't know" the answer to our information questions, partly to
avoid embarrassment. But research shows that this differentially encourages
"don't know" responses from some people who may actually know the correct
answer but lack the confidence to say so. As a consequence, the standard way
of putting these questions may underestimate levels of knowledge. In the 2000
study we are therefore encouraging respondents to take their best guesses when
answering the political knowledge questions.

SOCIAL NETWORKS: The reality of citizenship is that individuals seldom go it
alone when they engage in political activities.  Preferences, choices, and 
levels of engagement are contingent on the location of individuals within 
particular social settings.  The 2000 study incorporates a social network 
battery.  The battery is based entirely on the perceptions of survey
respondents regarding the characteristics of their identified discussants.

COGNITIVE STYLE: The 2000 NES includes two brief but reliable measures of 
cognitive style: need for cognition and need to evaluate. The first 
differentiates among people in the care they give to thinking through
problems; the second differentiates among people in their tendency to evaluate
objects as good or bad. Both are associated with extensive literatures in
psychology, which led to their audition in the 1998 NES Pilot Study. Because
of their success there in clarifying turnout, knowledge about politics, voter
decision-making, and more, they were added to the 2000 NES.

SURVEY MODE: Perhaps the most important single feature of the 2000 NES is a
mode experiment, which supplies the ability to compare interviews taken in
person (as we've taken them for the past fifty years) with interviews taken
over the phone. This carefully designed mode experiment, driven by theoretical
and practical interest, allows scholars to test the consequences of survey
mode on data quality and reliability.  Moreover, it allows the community to
assess what such a change in mode would mean for the NES time
series.  The 2000 study incorporates numerous experiments to look at the
effects of mode on 7-point scales and branching, response order, don't-know
filters, and social desirability.

Congressional Ballot Cards and Incumbent Bias

     In 2000, NES redesigned the Congressional ballot card used in face
to face interviewing in an attempt to combat over-reporting of votes for
incumbents.  The ballot redesign was based on the research of
Box-Steffensmeier, Jacobson, and Grant (later published in POQ, 2000).
Moreover, the change in ballot form was intended to eliminate the measurement
error in vote report that has concerned numerous scholars (Wright 1993; Gow
and Eubank 1984; Jacobson and Rivers 1993; Jackson and Carsey 2001).  Based
on three experiments during the 1996 elections - the Ohio Union Study, the
National Black Election Study, and the Texas Post Election Study - NES
concluded that a modification to the 1982-style ballot was in order.

     The new ballot cards are intended to give respondents two cues in 
recalling their vote - party identification and name of candidate.  Based on
the findings of Box-Steffensmeier et al., party is the predominant cue in the 
revised ballot.  To randomly distribute that cue, each respondent had two 
ballots printed for the interview - one with the Republican listed first, and
one with the Democrat listed first.  Based on a randomly generated number, 
interviewers were instructed via CAPI to show the respondent the gold or the
blue card.  Examples of the redesigned ballot cards are available on the 2000
Election Study Page: http://www.umich.edu/~nes/studyres/nes2000/nes2000.htm.
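The gold/blue card assignment described above can be sketched as follows.
This is an illustrative reconstruction, not the actual CAPI program; the
function name and seed are ours:

```python
import random

# Illustrative sketch of the ballot-card assignment described above: each
# respondent has two printed ballots (gold: Republican listed first,
# blue: Democrat listed first), and CAPI tells the interviewer which one
# to show based on a randomly generated number.
def assign_ballot_card(rng: random.Random) -> str:
    """Return which card the interviewer is instructed to show."""
    return "gold" if rng.random() < 0.5 else "blue"

# Over many interviews each ordering is shown about half the time,
# distributing the party-order cue evenly across respondents.
rng = random.Random(2000)  # hypothetical seed, for reproducibility only
cards = [assign_ballot_card(rng) for _ in range(10_000)]
share_gold = cards.count("gold") / len(cards)
```

Because the draw is independent for each case, neither candidate ordering
is systematically tied to any respondent characteristic.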

     In another effort to combat incumbent bias, the vote report question 
was placed earlier in the interview than in previous studies to avoid any 
possible contamination from thermometers, which ask R to rate their member 
of Congress.


Features of a CAI questionnaire

     Using the capabilities of computer-assisted interviewing (CAI) in the
2000 NES enabled the introduction of several features that are not
feasible using a paper-and-pencil questionnaire.  The most significant of
these for users of these data are: randomization within batteries or sequences
of questions; application of half-sampling to some questions; and random order
of presentation of blocks of questions.  Randomization within batteries refers
to presenting, in a randomly determined order, a series of questions about the
same objects (or people).  An example would be the questions about the
respondent's likes and dislikes of the four main Presidential candidates
where the names of Gore, Bush, Buchanan, and Nader were inserted randomly as
the first, second, third or fourth person to be asked about in this series.  
Randomization of names/objects in this way avoids ordering effects that might
be obtained if, for example, the candidates were always asked about in the
same order in every series of questions where a parallel question is asked
about each of the four.  Questions where randomization of order within a
series was in force are clearly identified in the codebook.  Randomization
variables, which allow the user to identify the order of presentation, are
provided for all instances of randomized presentation.  A few questions,
primarily open-ended questions, were half-sampled, so that a randomly selected
half of respondents were asked the question.  Finally, an order experiment,
where a sequence of closed-ended questions was asked early in the interview
for a random half of respondents and late in the interview for the other half,
was included as part of the mode comparison experiment described below.  For
both of these features, the relevant codebook entries contain explanatory
notes.  All random selections were programmed into the computer application of
the questionnaire and occurred automatically and independently of other
circumstances of the interview.  CAI also eliminates the need to prepare a
paper-and-pencil version of the questionnaire, which would previously have
been published in the codebook.
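As a rough illustration of the first two features - randomized order within
a battery and half-sampling - the sketch below mimics what the CAI
application did automatically for each case.  The names and structure are
ours, not the actual instrument code:

```python
import random

# The four main Presidential candidates asked about in the likes/dislikes
# battery (see the surrounding text).
CANDIDATES = ["Gore", "Bush", "Buchanan", "Nader"]

def battery_order(rng: random.Random) -> list[str]:
    """Randomly determined order in which the candidates are asked about.
    The realized order would be stored in a randomization variable so
    analysts can identify the order of presentation."""
    order = CANDIDATES[:]
    rng.shuffle(order)
    return order

def half_sampled(rng: random.Random) -> bool:
    """Randomly select whether this respondent receives a half-sampled
    (typically open-ended) question."""
    return rng.random() < 0.5

# One simulated case: an order for the battery and a half-sample flag.
rng = random.Random(42)  # hypothetical seed, for reproducibility only
order = battery_order(rng)
asked = half_sampled(rng)
```

Each respondent gets an independent draw, so over the whole sample every
candidate appears in every position roughly equally often, and about half
the respondents receive each half-sampled item.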

     Candidate information (names, gender, and candidate codes) was 
"pre-loaded" into the application to be used during the interview.  
The pre-loaded information is included in the released data.  However,
since paper candidate lists are no longer utilized as field materials, 
there is no "Candidate List" appended to this codebook, although the 
term 'Candidate List' continues to be used in the codebook as a reference 
to the candidate information available to the interviewer (CAPI preload).


STUDY ADMINISTRATION: MODE EXPERIMENT

     NES election studies are traditionally based on personal, face to face 
interviewing rather than telephone interviewing in order to preserve the
quality of sampling and survey response.  Given questions that have been
raised within the research community about the relatively high expense of
face-to-face interviewing compared with the more widely used telephone mode,
the NES Board of Overseers authorized a series of efforts to investigate
possibilities for maximizing the use of telephone interviewing.  The 1996 and
1998 election studies included smaller mode experiments to test the
consequences of mode on survey quality and reliability.  The design and
administration of the mode experiment in 2000 was guided by the work of a blue
ribbon committee and the commission of two reports (available at
http://www.umich.edu/~nes/) comparing face to face with telephone surveys. 
The issues included sample coverage, non-response, item non-response, social
desirability bias, and satisficing.  Several experiments were designed in the
2000 NES to gather more evidence on those effects.  Those experiments are
labeled in the question tags by the letter "E".


Question wording experiments for mode effects

In assessing possible mode effects, the NES Board of Overseers along with the
2000 Planning committee implemented a number of experiments to analyze
response order effects, satisficing, and other possible fatigue effects of
phone interviewing.

The experiments, placed almost exclusively in the pre-election survey, are: 
G6, G7, G8, G9, G10, H1, H2, H4, H11, H12, L3, L6, M4, and P1, with K2 in the
post-election survey.  Question tags identify experimental questions with the
letter "E".  The table below specifies the type of experiment, concept and
question number, and the altered wording.

Concept                                         Experiment
===============================                 =============================

Liberal/Conservative - G6, G7, G8, G9, G10      Branching vs. scale format
-----------------------------------------------------------------------------
Where would you place yourself on this scale, or haven't you thought much
about this?
Do you usually think of yourself as extremely liberal, liberal, slightly
liberal, moderate or middle of the road, slightly conservative, conservative
or extremely conservative?
Do you usually think of yourself as a liberal, a conservative, a moderate
or haven't you thought much about this?  Strong or not strong?

Economy - H1                                    Response order effects
-----------------------------------------------------------------------------
...gotten better, stayed about the same, or gotten worse
...worse, stayed about the same, or gotten better

Economic Conditions - H2                        Response order effects
-----------------------------------------------------------------------------
...or gotten easier for people to find enough work
...or gotten harder for people to find enough work

Economic Expectations - H4                      Response order effects
-----------------------------------------------------------------------------
...to get better, stay about the same, or get worse
...to get worse, stay about the same, or get better

Policy Positions on Imports - H11               Don't know effects by mode
-----------------------------------------------------------------------------
...placing new limits on imports, or haven't you thought much about this?
...Do you favor or oppose placing new limits on imports?

Isolationism - H12                              Agree/Disagree format
-----------------------------------------------------------------------------
...Do you agree or disagree with this statement
...stay at home or try to solve problems

Govt v. Private Health Care - L3                Response order effects         
----------------------------------------------------------------------------- 
Some people feel that there should be a govt insurance plan....suppose these
people are at one end of the scale, at point 1.  Others feel that all medical
expenses should be paid by individuals...

Affirmative Action - L6                         Balancing and mode effects
-----------------------------------------------------------------------------
Should companies that have discriminated against blacks have to have an
affirmative action program?
Should companies that have discriminated ... or should companies not have
to have an affirmative action program?

Tradeoff: Environment v. Jobs - M4              Don't know effects by mode
-----------------------------------------------------------------------------
Where would you place yourself on this scale, or haven't you thought much
about this?
Where would you place yourself on this scale, or haven't you thought much?

Women's Rights - P1                             Don't know effects by mode
------------------------------------------------------------------------------
Where would you place yourself on this scale, or haven't you thought much?
Where would you place yourself on this scale?

Political Knowledge - K2                        Don't know effects by mode
------------------------------------------------------------------------------
The first name is Trent Lott.  What job or political office does he now hold?
[DON'T PROBE DON'T KNOWS]
The first name is Trent Lott.  What job or political office does he now hold?
[PROBE DON'T KNOWS WITH, "WELL, WHAT'S YOUR BEST GUESS?"] 


Telephone wording

     Because the questions asked by NES over the last fifty years have been 
administered in person, the question text, which we are careful not to alter, 
reflects the context of that traditional face to face interview.  To
understand what such a change in mode would mean for the time series, we
implemented the RDD study with a questionnaire that reflected the necessary
changes in mode.  The overlap between the two questionnaires is approximately
75%.
Where questions were to be read differently, question tags are identified with
the letter "T".

Pre-election study: administration

     Interviewing for the pre-election survey began on September 5, 2000
and concluded on November 6, 2000.  A total of 1807 interviews were conducted
prior to the election - 1006 face to face and 801 by telephone.  The average
length of interview was 68.1 minutes - 70.5 minutes in face to face interviews
and 65.1 minutes in telephone interviews.  The overall response rate was
61.2% - 64.8% for the face to face interviewing and 57.2% for the telephone 
interviewing.

     In an effort to improve response rates, respondents received a
pre-notification packet by two-day mail, which included a brochure on the
study, a "Mont Blanc"-style pen with the University of Michigan seal, and
a letter notifying them that we would be contacting them and would offer
them payment of 20 dollars for their time.  Toward the end of the study, NES
staff became concerned that the production goals would not be met by election
day.  This concern motivated a number of interventions:  refusal conversion
training for interviewers having difficulty, refusal conversion packets
mailed by two-day mail, interviewer incentives, and increased respondent
incentives.  Interviewers were given ten dollars for every interview
conducted after 10/26/00, and respondent incentives were increased 
from $20 to $40.  To take account of those changes, variable V000139a
identifies those cases where interviewers received an incentive per 
completed case, and variable V00016 identifies those cases where R 
received the increased incentive.

Post-election study: administration

     In an effort to cut rising costs while in the field, segment areas 
of the face to face sample were randomly selected to receive post-election
interviews by telephone.  By randomly selecting forty-seven segments for
telephone post interviews, 200 cases were removed from the strict mode
experiment.

     Respondents again received a prenotification letter.  Respondents
were informed that they would receive $20 as payment for their time.
Incentives were not increased for those who had received $40 in the
pre-election study.

     Interviewing began on November 8, 2000 and concluded on December 18,
2000.  A total of 1555 interviews were conducted after the election -
693 face to face and 862 by telephone.  The average length of interview
was 63.7 minutes - 66.6 minutes in face to face interviews and 61.4 minutes
in telephone interviews.  The overall response rate was 86% - 86.1% face to
face and 85.8% by telephone.

     The day after the election, it remained unclear who would be President,
and issues of fairness were increasingly being raised.  To take advantage of
this historic moment, NES promptly included additional content on the fairness
of the election, the importance of one's vote, and whether R was satisfied
with democracy.

Evaluation of problems in study implementation

     Two implementation problems arose in the post-election fieldwork.
The first involves randomization and the second
involves the mode treatment.  On 11/16/00 it was discovered that the seed 
used to generate randomization in the instrument application was not properly
assigned within the CAPI program.  Consequently, interviews conducted prior 
to the correction of this error (or, for interviews started before and
completed after correction of this error, portions of interviews) did not have
randomization functioning for interview logic. Cases conducted without
randomization in the logic were administered as if only one choice were
available at each point where logic was intended to make a random selection
among two or more choices: most of these cases have an identical choice made
at each point where randomization was to have been effected.  The Form
description variables V000127a and V000127b and the randomization variables
documented in V001752-V001810 describe the Post randomizations affected.

     The second problem involves the 200 FTF Pre cases randomly selected to 
be switched to Phone administration in the Post (see "Post-election
study: administration," above).  Post interviews were completed for 
168 of these cases.  Among these 168 Post interviews, 5 were mistakenly
administered by interviewers face-to-face instead of by phone.  These 
5 cases are flagged in the Post administration variable describing mode 
(V000126) as code 7; note that in 3 of these 5 cases, the IWR actually 
identified the case as Phone at the start of the interview (although it was 
being administered face-to-face), and telephone logic was followed by the CAPI
survey instrument as the interview was conducted: telephone versions of 
questions were produced for the interviewer to administer.  In the 4th case, 
the interviewer identified the case at the start of the interview as a
face-to-face interview, and FTF logic was used.


RESPONSE RATES

     The final result codes for the face to face and telephone sample were
used to calculate the two response rates below.  The pre-election face to face 
response rate (the ratio of completed interviews to the total number of 
potential respondents) for the study was 64.8%.  The pre-election telephone 
response rate was 57.2%.  The overall re-interview response rate in the 
post-election interviewing was 86%.  The response rate in the face to face 
mode was 86.1% and for telephone it was 85.8%.

2000 Election Study: Response Rates

Face to Face       completed interviews     response rate    cooperation rate
-----------------------------------------------------------------------------
Pre-election             1006                   64.8%              86.4%
Post-election            693                    86.1%              96.9%

Telephone
-----------------------------------------------------------------------------
Pre-election             801                    57.2%              77.4%
Post-election            862                    85.8%**            95.5%

Summary
----------------------------------------------------------------------------
Pre-election             1807                   61.2%              82.1%
Post-election            1555                   86.0%              96.1%
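The response-rate arithmetic used throughout this section (completed
interviews divided by total eligible respondents) can be sketched as
follows.  The eligible-case count below is back-calculated from the
published completes and rate, so it is an approximation, not an NES field
count:

```python
# Sketch of the response-rate definition used above: the ratio of
# completed interviews to the total number of potential respondents,
# expressed as a percentage.
def response_rate(completed: int, eligible: int) -> float:
    """Response rate as a percentage of eligible cases."""
    return 100.0 * completed / eligible

# Pre-election face-to-face: 1006 completes at a published 64.8% rate
# implies roughly 1552 eligible cases (back-calculated approximation).
approx_eligible = round(1006 / 0.648)
rate = response_rate(1006, approx_eligible)
```

The same arithmetic, applied to the telephone and post-election counts,
reproduces the other rates in the table to within rounding.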


     The field and study staff implemented a number of strategies to bolster 
response rates, including respondent incentives, interviewer incentives, 
carefully written appeals to respondents sent express mail, special 
non-response training for interviewers, and extensive refusal conversion
attempts.  Most of these strategies were implemented during the pre-election
study.  The post-election study, which occurred during a unique time for the
country, was marked by the willingness of our respondents to be 
re-interviewed.  The overall refusal rate (the ratio of cases in which a 
respondent refused to do an interview to the total eligible respondents 
contacted) for the post-election study was 4%.

**The 200 cases from the face to face sample that were assigned for telephone
interviewing in the post had a response rate of 84.5%.  The response rate for
all cases minus the 200 "reassigned mode" cases is 86.3%.


Walter Mebane
Mon Nov 19 01:34:04 EST 2001