
Are two data points worth two million dollars? Re-examining our approach to building evidence in education




Abstract:

This presentation puts forward an argument for more inclusive and transparent treatment of evidence and a call for greater contextualization of evaluations by integrating systems-thinking methods into evaluation design and interpretation. Specific recommendations based on new analyses will be presented to address these challenges.

Rigorous evaluations of education programs in developing countries have grown exponentially over the past decade. Much of this growth can be attributed to efforts by organizations such as the Center for Global Development, which convened the Evaluation Gap Working Group to address the lack of rigorous evidence in the health and education sectors. The 2006 publication stemming from this initiative, "When Will We Ever Learn? Improving Lives through Impact Evaluation", put forward a strong call to action and a roadmap for increasing the number of high-quality impact evaluations to drive better programming and policy decisions. The emergence of organizations such as the Abdul Latif Jameel Poverty Action Lab (J-PAL), the International Initiative for Impact Evaluation (3ie), and the World Bank’s Strategic Impact Evaluation Fund (SIEF) gave life to this movement, making a significant expansion of impact evaluations in developing countries possible. Bilateral funders, including the United Kingdom’s Department for International Development (DFID) and the United States Agency for International Development (USAID), began to expand their commitment to and funding for evaluations of their programs. Private foundations such as the Bill & Melinda Gates Foundation and the William and Flora Hewlett Foundation played a critical role through both their thought leadership and financial support. Fast forward to the present day, and we have a steady supply of well-designed and well-executed independent impact evaluations of education programs in developing countries.

A natural by-product of this increase in impact evaluations has been the need to synthesize these disparate evaluations, which measure different outcomes, in different ways, across different contexts, into something meaningful and actionable, all while maintaining transparency, addressing issues of comparability, and acknowledging limitations with respect to external validity. Enter an expanding set of systematic reviews, guided by protocols put forward by initiatives such as the Campbell and Cochrane Collaborations and by thought leaders including Patrick McEwan, Dave Evans, Rachel Glennerster, Paul Glewwe, and Michael Kremer. These systematic reviews are close cousins of the impact evaluation in terms of rigor and transparency, and significant effort has been made to translate their findings into program and policy recommendations.

While these developments certainly signal advances in evidence building for the education sector, challenges remain. Study selection biases (geographic, publication, etc.), insufficiently detailed categorization of interventions, the scarcity of evaluations that build more nuanced evidence through multiple treatment arms, disparate measurements and methods, and the difficulty of striking the right balance when assessing the generalizability of findings top the list of recognized vulnerabilities in systematic reviews. We are also leaving a great deal of important information and learning on the cutting-room floor. Roughly 90% of the evaluations funded by USAID, for example, are not impact evaluations, owing to program designs that do not lend themselves to impact evaluation, contextual challenges, resource constraints, timing, and other factors; and a range of development organizations, both northern and southern, invest considerably in “internal evaluations”. We need to find a way to be more inclusive of these evaluations while maintaining sufficient transparency about methods and about threats to internal and external validity.

This presentation will offer specific strategies for improving inclusivity in systematic reviews without compromising transparency and rigor, and for more effectively contextualizing evaluation findings by integrating systems-thinking methods, specifically causal-loop diagramming, into evaluation design and interpretation.

Association:
Name: Comparative and International Education Society Conference
URL: http://www.cies.us


Citation:
URL: http://citation.allacademic.com/meta/p1354435_index.html

MLA Citation:

Beggs, Christine. "Are two data points worth two million dollars? Re-examining our approach to building evidence in education" Paper presented at the annual meeting of the Comparative and International Education Society Conference, Hilton Mexico City Reforma Hotel, Mexico City, Mexico, <Not Available>. 2018-10-15 <http://citation.allacademic.com/meta/p1354435_index.html>

APA Citation:

Beggs, C. "Are two data points worth two million dollars? Re-examining our approach to building evidence in education" Paper presented at the annual meeting of the Comparative and International Education Society Conference, Hilton Mexico City Reforma Hotel, Mexico City, Mexico <Not Available>. 2018-10-15 from http://citation.allacademic.com/meta/p1354435_index.html

Publication Type: Panel Paper


 