
                      CONTINUOUS MONITORING STUDY DESIGN


          The Continuous Monitoring study was intended to capture the
     dynamics of an election campaign. Understanding the impact of a
     campaign from a voter's perspective--how perceptions, beliefs, and
     preferences develop--required the collection of survey evidence
     as the campaign unfolded. The interview emphasized those elements
     of electoral choice most likely to be affected by the campaign and
     by external events that intrude upon it.

          Since events which can affect a campaign may take place at any
     time, it was desirable to monitor the electorate on a continuous
     basis. Hence, Continuous Monitoring began January 11th, 1984. That
     start date was chosen to provide a number of interviews before the
     stimulus of the Iowa caucuses and the New Hampshire primary.
     Monitoring continued past election day, with the last interview taken
     on December 7th.

          The study includes 46 small, independent, consecutively
     administered cross-sections. Each such cross-section sample is
     designated as a different sample "week". The average sample size is
     76 cases. The interviews were taken by telephone. Respondents were
     selected by random digit dialing. (See Sample Design, below.)

          WEEKS AND SAMPLES. Because of the difficulty of obtaining an
     adequate response rate in a short period of time, the sample "week"
     is actually a 17 day interviewing period. The goal was to take
     two-thirds of the interviews in the first seven days of interviewing,
     with a 10 day grace period for picking up the remaining one-third of
     the interviews.

          Each sample week began on a Wednesday, a day selected because
     Tuesdays were primary days. After 17 full days of interviewing, the
     sample week ended at midnight on a Friday. On Wednesdays, Thursdays,
     and Fridays, interviews were therefore being conducted for three
     distinct sample weeks: the sample begun on that Wednesday, the
     sample begun on the previous Wednesday and now entering its "grace"
     period, and the sample begun two Wednesdays earlier, in the last
     three days of its grace period.
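          The overlapping field-period schedule described above can be
     sketched as follows. This is an illustration only: it assumes that
     Sample Week 1 opened on the January 11th start date, that each
     subsequent sample week opened exactly seven days later, and that
     every field period ran the full 17 days; the function name is
     hypothetical.

```python
from datetime import date, timedelta

WEEK1_START = date(1984, 1, 11)   # assumed: Sample Week 1 opens on the study's start date, a Wednesday
FIELD_DAYS = 17                   # 7-day target period plus 10-day grace period

def open_sample_weeks(d, n_weeks=46):
    """Return the sample-week numbers whose 17-day field period covers date d."""
    weeks = []
    for w in range(1, n_weeks + 1):
        start = WEEK1_START + timedelta(days=7 * (w - 1))   # each week opens 7 days after the last
        end = start + timedelta(days=FIELD_DAYS - 1)        # inclusive final day (a Friday)
        if start <= d <= end:
            weeks.append(w)
    return weeks

print(open_sample_weeks(date(1984, 1, 25)))   # a Wednesday: three sample weeks in the field
print(open_sample_weeks(date(1984, 1, 28)))   # a Saturday: only two sample weeks remain open
```

     Under these assumptions the sketch reproduces the pattern in the
     text: three sample weeks are simultaneously in the field on
     Wednesday through Friday, and two on the remaining days.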

          Variable 104 denotes the sample "week"; this variable should
     be used when one is interested in comparing the samples as such.
     Two other variables record the actual 7-day calendar week in which
     the interview was taken. One (variable 113) records the week in
     which the interview was begun. The other (variable 114) records the
     calendar week in which the interview was completed. Any difference
     between these two variables is due to "break-offs" (see variables
     22-29). The user should note that an interview taken in any one of
     the SAMPLE "weeks" could have been taken in any one of THREE
     calendar weeks.
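          The distinction between sample week and calendar week can be
     checked directly in the data. The snippet below uses a fabricated
     extract purely for illustration; in the actual dataset the pairs
     would come from variable 104 (sample week) and variable 114
     (calendar week of completion).

```python
from collections import Counter

# Hypothetical (sample_week, calendar_week_completed) pairs; real values
# come from variables 104 and 114 in the dataset.
interviews = [
    (5, 5), (5, 5), (5, 6),   # most cases completed within the first 7 days
    (5, 6), (5, 7),           # grace-period cases spill into later calendar weeks
]

# Tabulate the calendar weeks in which Sample Week 5 interviews finished.
spread = Counter(cal for samp, cal in interviews if samp == 5)
print(sorted(spread))
```

     In this toy extract, interviews from a single sample week fall in
     three distinct calendar weeks, which is exactly the pattern the
     17-day field period makes possible.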

          VERSIONS AND SAMPLES. The survey instrument was intended to be
     very much the same from one sample week to the next. At the same
     time, the design allowed for the addition of new questions as
     campaign events made them necessary, and for the deletion of
     questions no longer relevant as the campaign unfolded. From time to
     time, it did
     prove necessary to add and delete questions. For example, the
     original coverage of Gary Hart was very thin, and a number of
     questions about him were added immediately after the New Hampshire
     primary. Similarly, questions about John Glenn were dropped from the
     survey four weeks after he dropped out of the race. Versions are
     defined by question additions or deletions. Each time one such change
     took place, a new version was created. There were eventually THIRTEEN
     versions, many of them reflecting the addition or deletion of only
     one or two variables. The INAP codes for each variable clearly
     indicate for which version(s) the question was asked. (Please see
     "Questions & Versions," below, for a detailed listing of differences
     between versions.)

          PLEASE NOTE that there is not an exact correspondence between
     version beginning dates and sample weeks. With one notable exception,
     version changes were made "across the board," i.e., a question was
     added or dropped for all open sample weeks. Thus, when the
     thermometer rating for Alan Cranston was dropped (the only
     difference between versions 2 and 3), this was done not only for
     Sample Week 12, which opened on the day the new version was
     implemented, but also for interviews from Weeks 10 and 11 which
     were still in the field at that time. The switch between Versions 1
     & 2 is an exception to this procedure. In this instance, when a set
     of new questions about Gary Hart was added on the day following the
     New Hampshire primary, it was added only for Sample Week 8, not the
     still-open Sample Weeks 6 & 7.

          VERSIONS AND MISSING DATA. In all releases of all NES studies,
     the codebook and dictionary treat certain code values for most
     variables as "missing data." Don't know, Not ascertained and Not
     asked (INAP) codes are almost always treated as missing data.
     However, the analyst has the responsibility of determining if these
     missing data assignments are appropriate for his/her research.
     Missing data code assignments such as INAP should be read carefully
     before analysis is begun. This is particularly important for the
     Continuous Monitoring dataset, where the distinction between versions
     is carried in the INAP codes (as well as a Version variable, see
     variable 117).
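          A minimal sketch of the recommended practice: screen out the
     missing-data codes for a variable before analysis. The code values,
     variable name, and helper function below are all hypothetical; the
     actual codes for any variable must be read from its codebook entry,
     and the version in which it was asked can be checked against
     variable 117.

```python
# Hypothetical missing-data codes; the actual codes for each variable
# must be taken from its codebook entry before analysis.
MISSING_CODES = {0, 8, 9}   # e.g. INAP, Don't know, Not ascertained

def valid_cases(records, var, codes=MISSING_CODES):
    """Drop cases whose value on var is a missing-data code."""
    return [r for r in records if r[var] not in codes]

# Illustrative extract: the Cranston thermometer was dropped after
# version 2, so later-version cases carry an INAP code on that item.
sample = [
    {"version": 2, "cranston_therm": 60},
    {"version": 3, "cranston_therm": 0},   # INAP: item not asked in this version
]
print(len(valid_cases(sample, "cranston_therm")))
```

     Screening this way before tabulation prevents version-driven INAP
     codes from being mistaken for substantive responses.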


Walter Mebane
Mon Dec 3 06:08:49 EST 2001