

Showing 1 through 5 of 1,996 records.
2008 - NCA 94th Annual Convention || Pages: 28 || Words: 6,429
1. Levine, Kenneth, and Violanti, Michelle. "Rating My Group Members and Their Rating of Me: Is There a Relationship Between Ratings, Social Loafing and Group Cohesion?" Paper presented at the annual meeting of the NCA 94th Annual Convention, TBA, San Diego, CA, Nov 20, 2008. Online <PDF>. Retrieved 2019-09-14.
Publication Type: Conference Paper/Unpublished Manuscript
Abstract: Previous research on group cohesion has treated it as an independent variable. This study set out to determine whether group cohesion could instead be predicted as the dependent variable. A total of 43 groups (474 undergraduate students) enrolled in an introduction to communication studies course completed a group project that accounted for more than 20 percent of their grade. After completing the project, students rated themselves and the other group members and answered 23 items on group functioning. Exploratory factor, confirmatory factor, and path analyses produced a path model of the relationships among how group members rated each other, how they were rated by other group members, individual-level input satisfaction, group-level input satisfaction, and cohesion. The study concluded that cohesion can be predicted as a dependent variable, and that students who work, as well as those who socially loaf or freeload, know who they are and reflect that in their ratings.

2015 - SRCD Biennial Meeting || Pages: unavailable || Words: unavailable
2. Soliday Hong, Sandra, Burchinal, Margaret, and Sabol, Terri. "Do Quality Rating and Improvement System Ratings Work in Different Settings? Ratings, Quality, and Child Outcomes." Paper presented at the annual meeting of the SRCD Biennial Meeting, Pennsylvania Convention Center and the Philadelphia Marriott Downtown Hotel, Philadelphia, PA, Mar 19, 2015. <Not Available>. Retrieved 2019-09-14.
Publication Type: Individual Poster
Review Method: Peer Reviewed
Abstract: Starting about 20 years ago, states created Quality Rating and Improvement Systems (QRIS) as market-based incentive systems in an effort to improve ECE quality and children’s school-readiness among other goals. “Validation” studies that examined the association between QRIS ratings and child outcomes have, to date, been limited to analyses of data from state pre-k programs (Hong et al., 2014; Sabol & Pianta, 2014; Sabol et al., 2013). These programs tend to have higher quality standards than the community-based programs that typically volunteer to participate in QRIS. Therefore, this study will examine the extent to which QRIS ratings of ECE programs with and without standards are associated with differences in ECE quality, and children’s school-readiness skills by comparing replicated analyses across studies of varied ECE program types.

QRIS ratings were simulated using secondary data from three large studies of child care quality. All three studies collected data on quality indicators widely used in QRIS and measured school readiness (see Table 1). The sample included two nationally representative studies of Head Start, the Head Start Family and Child Care Experiences Survey (FACES) 2006 (n = 127 programs and 2,710 children) and 2009 (n = 108 centers and 1,986 children); federal Head Start guidelines set standards for programs, and a triennial onsite review monitors them. The third study, the Early Childhood Longitudinal Study-Birth Cohort (ECLS-B; n = 1,400 centers and 700 children), is a nationally representative cohort study that included a sample of the child care settings in which the children were enrolled as 4-year-olds. We focused on center care, which represented 77% of the settings; of these, 42% were community-based programs and 58% were Head Start. We conducted all analyses using multi-level models, accounting for the nesting of children in programs and including the child's fall score and child and family demographics as covariates. Grand-mean standardization (M = 0, SD = 1) of all variables means that regression coefficients can be interpreted as effect sizes. Multiple imputation accounted for missing data. Parallel analyses were conducted using the data from each study, and coefficients were combined using meta-analytic techniques.
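The two numerical steps named in the abstract, grand-mean standardization and meta-analytic pooling of per-study coefficients, can be sketched in a few lines. This is a minimal illustration, not the authors' code; the coefficient and standard-error values are hypothetical, and the pooling shown is a simple fixed-effect (inverse-variance) combination, one common meta-analytic choice.

```python
import math

def standardize(xs):
    """Grand-mean standardize (M = 0, SD = 1) so that regression
    coefficients on these variables read as effect sizes."""
    m = sum(xs) / len(xs)
    sd = math.sqrt(sum((x - m) ** 2 for x in xs) / (len(xs) - 1))
    return [(x - m) / sd for x in xs]

def combine_fixed_effect(coefs, ses):
    """Fixed-effect (inverse-variance) pooling of per-study regression
    coefficients, e.g. combining FACES and ECLS-B estimates for one
    quality indicator. Returns the pooled coefficient and its SE."""
    weights = [1.0 / s ** 2 for s in ses]          # weight = 1 / variance
    pooled = sum(w * c for w, c in zip(weights, coefs)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled_se

# Hypothetical per-study coefficients for one indicator:
b, se = combine_fixed_effect([0.12, 0.08, 0.10], [0.05, 0.04, 0.06])
```

More precisely estimated studies (smaller SEs) pull the pooled coefficient toward their own value, which is why the 0.08 study, with the smallest SE, dominates here.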

Results are shown in Table 2. Overall, we found very little evidence that the quality indicators or the simulated QRIS scores related differently to observed quality or child outcomes in the studies of Head Start only (FACES) and in the study of programs from multiple auspices (ECLS-B). Only two indicators showed different patterns of association across the 31 comparisons conducted: teacher education was a stronger predictor of ECERS scores, and group size a stronger predictor of social skills, in ECLS-B than in FACES. The general lack of differences in associations across the two sets of studies occurred despite less variability in director education, group size, and ECERS scores in FACES than in ECLS-B, probably due to higher program standards in Head Start than in community child care. Across the two studies, findings suggested that ECERS scores were related as expected to the simulated QRIS scores and to specific quality indicators. Residualized gains in child outcomes were related to some quality indicators: teacher education, director education, and group size.

2010 - Theory vs. Policy? Connecting Scholars and Practitioners || Words: 37
3. Kim, Moohong. "Cultural Dimension of Credit Rating: Emergence of Domestic Credit Rating Agencies and Regionalization of Credit Rating in Asia." Paper presented at the annual meeting of Theory vs. Policy? Connecting Scholars and Practitioners, New Orleans Hilton Riverside Hotel and The Loews New Orleans Hotel, New Orleans, LA, Feb 17, 2010. <Not Available>. Retrieved 2019-09-14.
Publication Type: Conference Paper/Unpublished Manuscript
Abstract: This paper explores the cultural dimension of credit rating with special reference to Domestic Credit Rating Agencies (DCRAs) which have been largely neglected in IPE studies despite their prominent emergence and increasing role in emerging markets since

2005 - American Association for Public Opinion Research || Words: 304
4. Davern, Michael, Thiede Call, Kathleen, Brown Good, Meg, and Ziegenfuss, Jeanette. "Are Lower Response Rates Hazardous for Your Health? Do Higher Response Rates Translate Into Better Estimates of Health Insurance Coverage and Access to Care?" Paper presented at the annual meeting of the American Association for Public Opinion Research, Fontainebleau Resort, Miami Beach, FL. <Not Available>. Retrieved 2019-09-14.
Publication Type: Paper/Poster Proposal
Abstract: Response rates for random digit dial surveys have been falling in recent years. Recent Pew studies (Pew Research Center 2004) have found that national surveys with response rates as low as 27 percent can be as representative as surveys with 51 percent response rates on opinion, civic engagement, and attitude items. These studies point to a non-response mechanism that meets the criteria of "missing at random" as opposed to "missing completely at random" (Little and Rubin 1987). We examine whether this holds for health insurance and health care access variables from statewide surveys, since low response rates may lead to biased estimates of state health insurance coverage and access. We examine two recent surveys conducted by the University of Minnesota for the states of Oklahoma (n = 5,847, AAPOR response rate #4 = 45%) and Minnesota (n = 13,512, AAPOR response rate #4 = 56%). Using these data, we estimate the probability of being uninsured, having different types of insurance coverage, and lacking access to care, by whether the household refused to participate during a previous call and by whether the household took 5 or more days to complete the survey. Although certain demographic characteristics, such as age, varied significantly between the two groups (showing the data were not "missing completely at random"), there are no statistically significant differences in multivariate models predicting key health access and health insurance coverage estimates after controlling for the demographic differences (i.e., our data meet the criteria for "missing at random"). Excluding the initial refusals and the surveys completed after 5 days would cut the response rates in half but would not affect the quality of our estimates once weighting controls for demographic variables are imposed. Thus we should consider developing additional summary measures of survey quality that are related to the estimates generated from the survey.
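The "AAPOR response rate #4" cited above is a standard disposition-based formula: completes plus partials over all eligible cases, with unknown-eligibility cases counted at an estimated eligibility rate e. A minimal sketch follows; the case counts are purely illustrative, not the Oklahoma or Minnesota dispositions.

```python
def aapor_rr4(I, P, R, NC, O, UH, UO, e):
    """AAPOR Response Rate 4.
    I = complete interviews, P = partial interviews,
    R = refusals, NC = non-contacts, O = other eligible non-interviews,
    UH/UO = unknown-eligibility cases (household / other),
    e = estimated proportion of unknown cases that are eligible."""
    return (I + P) / ((I + P) + (R + NC + O) + e * (UH + UO))

# Illustrative (hypothetical) case counts:
rate = aapor_rr4(I=5000, P=847, R=2500, NC=1500, O=200,
                 UH=3000, UO=500, e=0.6)
```

Because RR4 counts partial interviews in the numerator and discounts unknown-eligibility cases by e, it typically sits above the stricter RR1 for the same field outcome.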

2003 - American Association for Public Opinion Research || Words: 266
5. Krosnick, Jon, Thomas, Randall, and Shaeffer, Eric. "How Does Ranking Rate? A Comparison of Ranking and Rating Tasks." Paper presented at the annual meeting of the American Association for Public Opinion Research, Sheraton Music City, Nashville, TN, Aug 16, 2003. <Not Available>. Retrieved 2019-09-14.
Publication Type: Conference Paper/Unpublished Manuscript
Review Method: Peer Reviewed
Abstract: Survey authors commonly have respondents rate or rank a series of items along some dimension of judgment. Alwin and Krosnick (1985) indicated that the two tasks lead to significantly different latent structures among the variables.


Respondents: 1,882 respondents participated, randomly drawn from the Harris Poll Online panel.


Design factors:
- Target assigned (more often visited or less often visited grocery store)
- Number of elements to evaluate (5 versus 10)
- Evaluative task:
  - Absolute rating of quality
  - Comparative rating of quality
  - Importance rating
  - Likelihood of influence
  - Quality ranking
  - Importance ranking

Procedure:
1. Asked about grocery stores they visited.
2. Assigned one store to rate.
3. Rated familiarity with the store and with each element.
4. Rated criteria (e.g., overall evaluation).
5. Evaluated the store on the elements (e.g., the store's prices).
6. Answered 2 questions on task difficulty and accuracy.


Importance ratings paralleled importance rankings in terms of order. Rating means did not change between the 5- and 10-element conditions, but there were significant shifts in the ranking means.

The Absolute Rating of Quality and Comparative Rating of Quality groups had significantly higher average correlations with the criteria (.48 and .45, respectively) than did the Absolute Ranking of Quality group (whose highest average correlation for a single element was .13).

Respondents evaluating 10 elements perceived the task as more difficult, and felt less accurate, than those evaluating 5. Respondents assigned rating tasks perceived the task as easier, and felt more accurate doing it, than those assigned ranking tasks.


The ranks obtained seemed to depend on the presence or absence of other elements, so the selection of those other elements appears to be a critical, yet understudied, issue.
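The set-dependence of ranks noted above follows directly from the fact that a rank is relative to the comparison set while an absolute rating is not. A toy sketch makes the point; the store elements and scores below are hypothetical, not the study's stimuli.

```python
def rank_of(target, scores):
    """1-based rank of `target` when the elements in `scores`
    are ordered by descending score (higher score = better)."""
    ordered = sorted(scores, key=scores.get, reverse=True)
    return ordered.index(target) + 1

# The absolute rating of "price" is fixed at 7 in both conditions...
ratings = {"price": 7, "cleanliness": 9, "selection": 8,
           "parking": 5, "service": 6}

five = rank_of("price", ratings)  # rank among 5 elements
ten = rank_of("price", {**ratings, "bakery": 9, "deli": 8,
                        "hours": 8, "location": 9, "staff": 8})
# ...but its rank drops once five more elements join the comparison set.
```

This mirrors the finding that rating means were stable across the 5- and 10-element conditions while ranking means shifted: adding elements cannot change an item's rating, but it can change its rank.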


©2019 All Academic, Inc.