Explaining Information Effects in Collective Preferences

Authors: Althaus, Scott.
Page 6 of 53



of format characteristics known to prejudice responses in survey questions. If the gaps between
surveyed and simulated opinions are manifestations of question wording and order effects introduced
by the survey questionnaires themselves, then the consistent patterns of ideological bias revealed in
simulation studies (e.g., Althaus 1998; Althaus 2003) could be meaningless artifacts of the survey
process rather than meaningful evidence that information inequalities cause certain points of view to
be systematically underrepresented.
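The simulation logic at issue here can be sketched in miniature. The recipe in this literature is to regress surveyed opinions on political knowledge and demographics (with knowledge-demographic interactions), then re-predict every respondent's opinion with knowledge set to the scale maximum; the gap between the surveyed and re-predicted aggregates is the estimated information effect. The toy data-generating model, variable names, and coefficients below are illustrative assumptions, not figures from the paper:

```python
import math
import random

random.seed(0)
n = 1500

# Toy respondents: one demographic trait and a 0-1 political knowledge score.
educ = [random.randint(0, 1) for _ in range(n)]
know = [random.random() for _ in range(n)]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical "true" response model: knowledge raises support for a policy,
# and the size of the shift differs across demographic groups.
y = [1.0 if random.random() < sigmoid(-0.5 + 1.0 * e + 1.5 * k - 1.0 * e * k)
     else 0.0 for e, k in zip(educ, know)]

# Design matrix rows: intercept, demographic, knowledge, interaction.
X = [[1.0, e, k, e * k] for e, k in zip(educ, know)]

# Fit a logistic regression by plain gradient ascent (stdlib only).
beta = [0.0, 0.0, 0.0, 0.0]
for _ in range(400):
    grad = [0.0] * 4
    for xi, yi in zip(X, y):
        resid = yi - sigmoid(sum(b * x for b, x in zip(beta, xi)))
        for j in range(4):
            grad[j] += resid * xi[j]
    beta = [b + 0.5 * g / n for b, g in zip(beta, grad)]

# Counterfactual: set every respondent's knowledge to the scale maximum,
# keep demographics fixed, and re-predict each opinion.
X_full = [[1.0, e, 1.0, e * 1.0] for e in educ]

surveyed = sum(sigmoid(sum(b * x for b, x in zip(beta, xi))) for xi in X) / n
informed = sum(sigmoid(sum(b * x for b, x in zip(beta, xi))) for xi in X_full) / n
gap = informed - surveyed  # the estimated "information effect"
print(f"surveyed {surveyed:.3f}  fully informed {informed:.3f}  gap {gap:+.3f}")
```

The worry raised in the text translates directly into this sketch: if question format rather than the question's topic drives some of the observed responses in `y`, the fitted coefficients, and hence the simulated gap, absorb that measurement artifact.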
A program of research by Jon Krosnick and colleagues (Alwin and Krosnick 1991; Krosnick
1991, 1999a, 1999b; Krosnick and Fabrigar 1997, forthcoming) has detailed how many question
wording and order effects arise from “satisficing” behavior: strategic responses to the cognitive
demands placed on survey respondents with lower levels of cognitive ability or motivation.² Lacking
the ability or motivation to engage in systematic processing of the survey response, these respondents
become inclined to choose the first acceptable response they are offered (weak satisficing) or even to
disengage from thoughtful responses altogether by carelessly choosing any answer that appears to be
an acceptable choice (strong satisficing). The effects of satisficing range from biases in retrieving
information from long-term memory to selecting answers randomly or responding with “don’t know”
answers merely to avoid effortful processing of the survey question. Rather than suggesting that
respondents who satisfice are mistaken about their opinions, favoring a policy, for instance, when
with greater knowledge they might oppose it, this literature suggests that opinion convergence may
also result from relatively mindless responses to the attributes of survey questions rather than to the
topics of these questions. Since as much as 40% of the variance in attitude measures is due to
systematic error introduced by the survey instrument itself (Cote and Buckley 1987), identifying
features of survey questions that contribute to information effects might help survey researchers
design questions that minimize the gaps between surveyed and “fully informed” opinion.
² Several other dual-process models have been developed to explain the survey response process (e.g., Cannell,
Miller, and Oksenberg 1981; Tourangeau, Rips, and Rasinski 2000). Unlike most models of the survey response
which typically focus on psychological processes alone (e.g., Zaller 1992a; Zaller and Feldman 1992), Krosnick’s
satisficing theory (adapted from Simon 1957) makes specific predictions about the impact of question attributes and
respondent characteristics on the likelihood for systematic processing (“optimizing,” in the parlance of Krosnick’s
model) in a way that can account for a variety of well-documented response effects. The power of the model lies in
its potential for explaining a wide range of disparate and seemingly inconsistent findings on response effects dating
back to the beginning of the 20th century.



