Showing 1 through 5 of 1,994 records.
2010 - Theory vs. Policy? Connecting Scholars and Practitioners || Words: 37 words
1. Kim, Moohong. "Cultural Dimension of Credit Rating: Emergence of Domestic Credit Rating Agencies and Regionalization of Credit Rating in Asia" Paper presented at the annual meeting of the Theory vs. Policy? Connecting Scholars and Practitioners, New Orleans Hilton Riverside Hotel, The Loews New Orleans Hotel, New Orleans, LA, Feb 17, 2010 <Not Available>. 2019-06-25 <http://citation.allacademic.com/meta/p415820_index.html>
Publication Type: Conference Paper/Unpublished Manuscript
Abstract: This paper explores the cultural dimension of credit rating with special reference to Domestic Credit Rating Agencies (DCRAs) which have been largely neglected in IPE studies despite their prominent emergence and increasing role in emerging markets since …

2008 - NCA 94th Annual Convention || Pages: 28 pages || Words: 6429 words
2. Levine, Kenneth. and Violanti, Michelle. "Rating My Group Members and their Rating of Me: Is there a Relationship Between Ratings, Social Loafing and Group Cohesion?" Paper presented at the annual meeting of the NCA 94th Annual Convention, TBA, San Diego, CA, Nov 20, 2008 Online <PDF>. 2019-06-25 <http://citation.allacademic.com/meta/p259748_index.html>
Publication Type: Conference Paper/Unpublished Manuscript
Abstract: Previous research on group cohesion has treated it as an independent variable. This study set out to determine whether group cohesion could be predicted as the dependent variable. A total of 43 groups (474 undergraduate students) enrolled in an introduction to communication studies course completed a group project that accounted for more than 20 percent of their grade. After completing the group project, they filled out ratings of themselves and the other group members as well as 23 items on group functioning. Exploratory factor, confirmatory factor, and path analyses created a path model showing the relationships among how group members rated each other, were rated by other group members, individual-level input satisfaction, group-level input satisfaction, and cohesion. Conclusions from this research were that cohesion can be predicted as the dependent variable, and students who work and those who social loaf or freeload know who they are and include that in their ratings.

2015 - SRCD Biennial Meeting || Pages: unavailable || Words: unavailable
3. Soliday Hong, Sandra., Burchinal, Margaret. and Sabol, Terri. "Do Quality Rating and Improvement System Ratings Work in Different Settings? Ratings, Quality, and Child Outcomes" Paper presented at the annual meeting of the SRCD Biennial Meeting, Pennsylvania Convention Center and the Philadelphia Marriott Downtown Hotel, Philadelphia, PA, Mar 19, 2015 <Not Available>. 2019-06-25 <http://citation.allacademic.com/meta/p958932_index.html>
Publication Type: Individual Poster
Review Method: Peer Reviewed
Abstract: Starting about 20 years ago, states created Quality Rating and Improvement Systems (QRIS) as market-based incentive systems in an effort to improve ECE quality and children’s school-readiness among other goals. “Validation” studies that examined the association between QRIS ratings and child outcomes have, to date, been limited to analyses of data from state pre-k programs (Hong et al., 2014; Sabol & Pianta, 2014; Sabol et al., 2013). These programs tend to have higher quality standards than the community-based programs that typically volunteer to participate in QRIS. Therefore, this study will examine the extent to which QRIS ratings of ECE programs with and without standards are associated with differences in ECE quality, and children’s school-readiness skills by comparing replicated analyses across studies of varied ECE program types.

QRIS ratings were simulated using secondary data from three large studies of child care quality. All studies collected data on quality indicators widely used in QRIS, and measured school-readiness (see Table 1). The sample included: two nationally representative studies of Head Start, the Head Start Family and Child Care Experiences Survey (FACES) 2006 (n = 127 programs and 2,710 children) and 2009 (n = 108 centers and 1,986 children). Federal Head Start guidelines set standards for programs, and a triennial onsite-review monitors them. The other study, Early Childhood Longitudinal Survey-Birth Cohort (ECLS-B; n = 1,400 centers and 700 children) is a nationally representative cohort study, and included a sample of child care settings in which the children enrolled as 4-year-olds. We focused on center-care that represented 77% of the settings, of which 42% were community-based programs, 58% were Head Start. We conducted all analyses using multi-level models, accounting for nesting of children in programs and including the child’s fall score and child and family demographics as covariates. Grand-mean standardization (M=0, SD=1) of all variables means regression coefficients can be interpreted as effect sizes. Multiple imputation accounted for missing data. Parallel analyses were conducted using the data from each study, and coefficients were combined using meta-analytic techniques.

Results are shown in Table 2. Overall, we found very little evidence that the quality indicators or the simulated QRIS scores related to observed quality or child outcomes differently in studies of Head Start only (FACES) and in the study of programs from multiple auspices (ECLS-B). Only two indicators showed different patterns of association in the 31 comparisons that were conducted: teacher education was a stronger predictor of ECERS scores, and group size of social skills, in ECLS-B than in FACES. The general lack of differences in associations across the two sets of studies occurred despite less variability in director education, group size, and ECERS scores in FACES than in ECLS-B, probably due to higher program standards in Head Start than for community child care. Across the two studies, findings suggested that ECERS scores were related as expected to the simulated QRIS scores and to specific quality indicators. Residualized gains in child outcomes were related to some quality indicators: teacher education, director education, and group size.

2017 - ICA's 67th Annual Conference || Pages: unavailable || Words: unavailable
4. Lee, Stella., Liu, Jiaying., Gibson, Laura. and Hornik, Robert. "Using Crowd-Sourced Labelling to Rate the Valence of Media Texts: Rating Instructions for Achieving Valid Results" Paper presented at the annual meeting of the ICA's 67th Annual Conference, Hilton San Diego Bayfront, San Diego, USA, May 25, 2017 Online <APPLICATION/PDF>. 2019-06-25 <http://citation.allacademic.com/meta/p1234251_index.html>
Publication Type: Session Paper
Review Method: Peer Reviewed
Abstract: The task of quantifying the valence of news coverage is an integral part of communication research. With the advent of crowd-sourcing platforms such as Amazon Mechanical Turk, it has become possible to ask multiple raters to rate the valence of media texts. This study aimed to empirically determine the most appropriate instructions and definitions for rating valence using crowd-sourced raters that would yield the most consistent and unbiased ratings. Raters were randomly assigned to four conditions that varied instructions with regard to the reference point from which raters were to make a judgment. Results indicated that the condition where raters were instructed to rate valence by referring to their own understanding yielded the most consistent and unbiased ratings. Implications for crowd-sourced rating are discussed.

2005 - American Association For Public Opinion Association || Words: 304 words
5. Davern, Michael., Thiede Call, Kathleen., Brown Good, Meg. and Ziegenfuss, Jeanette. "Are Lower Response Rates Hazardous for Your Health? Do Higher Response Rates Translate Into Better Estimates of Health Insurance Coverage and Access to Care?" Paper presented at the annual meeting of the American Association For Public Opinion Association, Fontainebleau Resort, Miami Beach, FL, <Not Available>. 2019-06-25 <http://citation.allacademic.com/meta/p16939_index.html>
Publication Type: Paper/Poster Proposal
Abstract: Response rates for random digit dial surveys have been falling over recent years. Recent Pew studies (Pew Research Center 2004) have found that national surveys with response rates as low as 27 percent can be as representative as surveys with 51 percent response rates on opinion, civic engagement and attitude items. These studies have pointed to a non-response mechanism that meets the criteria of "missing at random" as opposed to "missing completely at random" (Little and Rubin 1987). We examine whether this holds for health insurance and health care access variables from statewide surveys. Low response rates may lead to biased estimates of state health insurance coverage and access. We examine two recent surveys conducted by the University of Minnesota for the states of Oklahoma (n=5,847, AAPOR response rate #4=45%) and Minnesota (n=13,512, AAPOR response rate #4=56%). Using these data we estimate the probability of being uninsured, having different types of insurance coverage, and lacking access to care by whether the household refused to participate during a previous call, and whether the household took 5 or more days to be completed. Although certain demographic characteristics, such as age, varied significantly between the two groups (showing the data were not "missing completely at random"), there are no statistically significant differences in multivariate models predicting key health access and health insurance coverage estimates controlling for the demographic differences (i.e., our data meet the criteria for "missing at random"). Not including the initial refusals and surveys completed after 5 days would result in response rates that are half of the actual rates but would not affect the quality of our estimates after imposing weighting controls for demographic variables. Thus we should consider developing additional summary measures of survey quality that are related to the estimates generated from the survey.


©2019 All Academic, Inc.