2019 Federal Standard of Excellence


Millennium Challenge Corporation

Score
8
Leadership

Did the agency have senior staff members with the authority, staff, and budget to build and use evidence to inform the agency’s major policy and program decisions in FY19?

1.1 Did the agency have a senior leader with the budget and staff to serve as the agency’s Evaluation Officer (or equivalent)? (Example: Evidence Act 313)
  • The Monitoring and Evaluation (M&E) Managing Director serves as the Millennium Challenge Corporation (MCC) Evaluation Officer. The Managing Director is a career civil service position with the authority to execute M&E’s budget, an estimated $24.6 million in due diligence funds in FY19, with a staff of 28 people. In accordance with the Foundations for Evidence-Based Policymaking Act, MCC designated an Evaluation Officer.
1.2 Did the agency have a senior leader with the budget and staff to serve as the agency’s Chief Data Officer (or equivalent)? (Example: Evidence Act 202(e))
  • The Director of Product Management in the Office of the Chief Information Officer is MCC’s Chief Data Officer. The Chief Data Officer manages a staff of 7 and an estimated FY19 budget of $900,000 in administrative funds. In accordance with the Foundations for Evidence-Based Policymaking Act, MCC designated a Chief Data Officer.
1.3 Did the agency have a governance structure to coordinate the activities of its evaluation officer, chief data officer, statistical officer, and other related officials in order to inform policy decisions and evaluate the agency’s major programs?
  • The MCC Evaluation Management Committee (EMC) oversees decision-making and quality control for the agency’s evaluations and related program decisions. The EMC is intended to integrate evaluation with program design and implementation so that evaluations are designed and carried out in a manner that increases their utility to both MCC and in-country stakeholders. The EMC includes the agency’s Evaluation Officer, Chief Data Officer, representatives from M&E, the project lead, the economist, and gender and environmental safeguards staff.
Score
5
Evaluation & Research

Did the agency have an evaluation policy, evaluation plan, and learning agenda (evidence-building plan), and did it publicly release the findings of all completed program evaluations in FY19?

 2.1 Did the agency have an agency-wide evaluation policy? (Example: Evidence Act 313(d))
2.2 Did the agency have an agency-wide evaluation plan? (Example: Evidence Act 312(b))
  • MCC’s rigorous Policy for Monitoring and Evaluation (M&E) requires every MCC investment to have a comprehensive M&E Plan with two main components. The monitoring component lays out the methodology and process for assessing progress toward the investment’s objectives. The evaluation component identifies and describes the evaluations that will be conducted, the key evaluation questions and methodologies, and the data collection strategies that will be employed. Each country’s M&E Plan serves as the evaluation plan and learning agenda for that country’s set of investments.
2.3 Did the agency have a learning agenda (evidence-building plan) and did the learning agenda describe the agency’s process for engaging stakeholders including, but not limited to the general public, state and local governments, and researchers/academics in the development of that agenda? (Example: Evidence Act 312)
  • For FY19, in an effort to advance MCC’s evidence base and respond to the Evidence Act, MCC is pursuing learning agendas at the country, sector, and agency levels. At the country level, each country’s Monitoring and Evaluation Plan contains the context-specific learning that MCC hopes to advance during the life cycle of its investments. At the sector level, communities of practice around key MCC sectors have been formed to create, capture, and advance sector-level learning; in FY19, communities of practice in education, water, and sanitation will produce learning reports based on their findings on key research questions for each sector. Finally, at the agency level, MCC is embarking on an agency-wide learning agenda to better understand how MCC develops, implements, monitors, and evaluates the policy and institutional reforms (PIRs) it undertakes alongside capital investments. The PIR learning agenda focuses on building better evidence to inform methodological guidance for economists and sector practices, supporting the expanded use of cost-benefit analysis across the PIRs that MCC supports. The purpose is to make investments in PIR more effective by holding them to the same investment criteria as other interventions MCC considers; to make assumptions and risks more explicit for all investments that depend on improved policies or institutional performance; and to help inform the design of PIR programs so that they have a high economic rate of return. In developing each of these learning agendas, MCC consults with internal staff, technical experts, partner country governments, beneficiaries, and MCC stakeholders.
2.4 Did the agency publicly release all completed program evaluations?
  • MCC publishes the independent evaluation of every project, underscoring the agency’s commitment to transparency, accountability, learning, and evidence-based decision-making. All independent evaluations and reports are publicly available on the MCC Evaluation Catalog. As of August 2019, MCC has contracted or is planning 195 independent evaluations; to date, 109 Interim and Final Reports have been finalized and released to the public.
  • In FY19, MCC also began to publish Evaluation Briefs that distill key findings and lessons learned from MCC’s independent evaluations. MCC will produce Evaluation Briefs for each evaluation moving forward, and is in the process of writing Evaluation Briefs for the backlog of all completed evaluations.
2.5 What is the coverage, quality, methods, effectiveness, and independence of the agency’s evaluation, research, and analysis efforts? (Example: Evidence Act 315, subchapter II (c)(3)(9))
  • Once a compact or threshold program is in Implementation, Monitoring and Evaluation (M&E) resources are used to procure evaluation services from external independent evaluators, who directly measure high-level outcomes in order to assess the attributable impact of MCC’s programs. MCC sees its independent evaluation portfolio as an integral tool for remaining accountable to stakeholders and the general public, demonstrating programmatic results, and promoting internal and external learning. Through the evidence generated by monitoring and evaluation, the M&E Managing Director, Chief Economist, and Vice President for the Department of Policy and Evaluation are able to continuously compare estimates of expected impacts against actual impacts to inform future programmatic and policy decisions. In FY19, MCC began or continued comprehensive, independent evaluations for every compact or threshold project at MCC, a requirement stipulated in Section 7.5.1 of MCC’s Policy for M&E. All evaluation designs, data, reports, and summaries are available on MCC’s Evaluation Catalog.
2.6 Did the agency use rigorous evaluation methods, including random assignment studies, for research and evaluation purposes?
  • MCC employs rigorous, independent evaluation methodologies to measure the impact of its programming, evaluate the efficacy of program implementation, and determine lessons learned to inform future investments. As of August 2019, 37% of MCC’s evaluation portfolio consists of impact evaluations and 63% consists of performance evaluations. All MCC impact evaluations use random assignment to determine which groups or individuals receive an MCC intervention; random assignment establishes a counterfactual, allowing results to be attributed to MCC’s project and enabling MCC to measure its impact in a fair and transparent way (a minimal sketch of this design follows below). Each evaluation is conducted according to the program’s Monitoring and Evaluation (M&E) Plan, in accordance with MCC’s Policy for M&E.
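  • For illustration, the following is a minimal sketch of a randomized evaluation design with a simple difference-in-means impact estimate. The sample size, effect size, and variable names are hypothetical assumptions for illustration only; they are not drawn from any MCC evaluation, which are designed and carried out by independent evaluators.

    # Minimal sketch of a randomized impact evaluation (illustrative only).
    import random
    import statistics

    random.seed(42)

    # Hypothetical evaluation units (e.g., households or communities).
    units = [f"unit_{i}" for i in range(200)]

    # Random assignment: each unit has an equal chance of receiving the intervention.
    assignment = {u: random.random() < 0.5 for u in units}

    # Synthetic outcomes: a common baseline level, noise, and an assumed +5 effect
    # for treated units (purely for illustration).
    outcomes = {
        u: 50 + (5 if treated else 0) + random.gauss(0, 10)
        for u, treated in assignment.items()
    }

    treated_outcomes = [outcomes[u] for u in units if assignment[u]]
    control_outcomes = [outcomes[u] for u in units if not assignment[u]]

    # Because assignment was random, the control group approximates the counterfactual,
    # so the difference in mean outcomes estimates the program's impact.
    impact_estimate = statistics.mean(treated_outcomes) - statistics.mean(control_outcomes)
    print(f"Estimated impact: {impact_estimate:.2f}")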
Score
9
Resources

Did the agency invest at least 1% of program funds in evaluations in FY19? (Examples: Impact studies; implementation studies; rapid cycle evaluations; evaluation technical assistance, rigorous evaluations, including random assignments)

3.1. ____ (Name of agency) invested $____ on evaluations, evaluation technical assistance, and evaluation capacity-building, representing __% of the agency’s $___ billion FY19 budget.
  • MCC invested $26.3 million in evaluations, evaluation technical assistance, and evaluation capacity-building, representing 3.7% of the agency’s $905 million FY19 budget (with staff/salary expenses excluded from the calculation).
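  • As a rough arithmetic check of the figures above (assuming the 3.7% is computed against the appropriation net of staff and salary expenses, a base not stated explicitly here):

    \[
      \frac{\$26.3\text{M}}{\$905\text{M}} \approx 2.9\%,
      \qquad
      \frac{\$26.3\text{M}}{0.037} \approx \$711\text{M}\ \text{(implied net-of-salary base)}.
    \]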
3.2 Did the agency have a budget for evaluation and how much was it? (Were there any changes in this budget from the previous fiscal year?)
  • MCC budgeted $26.3 million for monitoring and evaluation in FY19, an increase of $11.2 million over its FY18 budget of $15.1 million.
3.3 Did the agency provide financial and other resources to help city, county, and state governments or other grantees build their evaluation capacity (including technical assistance funds for data and evidence capacity building)?
  • In support of its emphasis on country ownership, MCC provides intensive and ongoing capacity building to partner country Monitoring and Evaluation staff in every country in which it invests. As part of this, MCC provides training and ongoing mentorship in the local language, and publishes select independent evaluations, Evaluation Briefs, and other documentation in the country’s local language. Disseminating local-language publications extends MCC’s reach to partner country governments and members of civil society, enabling them to fully reference and use evidence and learning beyond the program.
Score
5
Performance Management / Continuous Improvement

Did the agency implement a performance management system with outcome-focused goals and aligned program objectives and measures, and did it frequently collect, analyze, and use data and evidence to improve outcomes, return on investment, and other dimensions of performance in FY19?
(Example: Performance stat systems, frequent outcomes-focused data-informed meetings)

4.1 Did the agency have a strategic plan with outcome goals, program objectives (if different), outcome measures, and program measures (if different)?
4.2 Does the agency use data/evidence to improve outcomes and return on investment?
  • MCC is committed to using high-quality data and evidence to drive its strategic planning and program decisions. The Monitoring and Evaluation Plans for all programs, along with tables of key performance indicators for all projects, are available online by compact and threshold program and by sector, for use by both partner countries and the general public. Prior to investment, MCC performs a Cost-Benefit Analysis to assess the potential impact of each project and estimates an Economic Rate of Return (ERR). MCC uses a 10% ERR hurdle to more effectively prioritize and fund projects with the greatest opportunity for maximizing impact (the hurdle is sketched after these bullets). MCC then recalculates ERRs at investment closeout, drawing on information from MCC’s monitoring data (among other data and evidence), to test original assumptions and assess the cost-effectiveness of MCC programs.
  • In addition, MCC produces periodic reports that capture the results of MCC’s learning efforts in specific sectors and translate that learning into actionable evidence for future programming. At the start of FY18, MCC published a Principles into Practice (PiP) report on its investments into roads; this report demonstrated MCC learning around the implementation and evaluation of its roads projects, and critically assessed how MCC was changing its practice as a result of this learning. In FY19, MCC will publish a PiP report on its technical and vocational education training activities in the education sector. MCC is also midway through research related to learning in the water, sanitation, and hygiene sector, with a PiP capturing the results of that learning expected in FY20.
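  • For reference, a minimal sketch of the ERR hurdle described above, in standard cost-benefit notation (the symbols are illustrative and are not drawn from MCC’s published guidance): the ERR is the discount rate r* at which the present value of a project’s net economic benefits equals zero, and the investment criterion requires r* to be at least 10%:

    \[
      \sum_{t=0}^{T} \frac{B_t - C_t}{(1 + r^*)^t} = 0,
      \qquad \text{fund the project if } r^* \ge 10\%,
    \]

    where B_t and C_t are the estimated economic benefits and costs in year t over the analysis horizon T.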
4.3 Did the agency have a continuous improvement or learning cycle processes to identify promising practices, problem areas, possible causal factors, and opportunities for improvement? (Examples: stat meetings, data analytics, data visualization tools, or other tools that improve performance)
  • In FY18, MCC implemented a new reporting system that enhanced MCC’s credibility around results, transparency, learning, and accountability. The Star Report and its associated quarterly business process capture key information to provide a framework for results and improve the ability to promote and disseminate learning and evidence throughout the compact and threshold program lifecycle. For each compact and threshold program, evidence is collected on performance indicators, evaluation results, partnerships, sustainability efforts, and learning, among other elements. Critically, this information will be available in one report after each program ends, with each country’s Star Report published roughly seven months after completion. In FY19, MCC released its first Star Reports, covering its recently completed investments in Cabo Verde and Indonesia. Four additional Star Reports, covering MCC’s programs in Honduras, Zambia, Georgia, and Malawi, are in progress and expected to be published in FY20.
  • MCC also supports the creation of multidisciplinary country teams to manage the development and implementation of each compact and threshold program. Teams meet frequently to gather evidence, discuss progress, make project design decisions, and solve problems. Prior to moving forward with a program investment, teams are encouraged to use the lessons from completed evaluations to inform their work going forward.
  • Continual learning and improvement are key aspects of MCC’s operating model. MCC monitors progress toward compact and threshold program results on a quarterly basis using performance indicators specified in the Monitoring and Evaluation (M&E) Plan for each country’s investments. The M&E Plans specify indicators at all levels (process, output, and outcome) so that progress toward final results can be tracked. Every quarter, each partner country submits an Indicator Tracking Table that shows actual performance on each indicator relative to the baseline established before the activity began and the performance targets established in the M&E Plan (an illustrative sketch follows below). Key performance indicators and their accompanying data by country are updated every quarter and published online. MCC management and the relevant country team review this data in a formal Quarterly Performance Review meeting to assess whether results are being achieved and integrate this information into project management and implementation decisions.
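  • For illustration, a minimal sketch of how an Indicator Tracking Table row might be represented and reviewed. The field names and figures are hypothetical assumptions, not MCC’s actual schema or data.

    from dataclasses import dataclass

    @dataclass
    class IndicatorRecord:
        """One illustrative row of an Indicator Tracking Table."""
        indicator: str   # process, output, or outcome indicator
        baseline: float  # value before the activity began
        target: float    # end-of-program target from the M&E Plan
        actual: float    # most recent quarterly value

        def percent_of_target(self) -> float:
            """Progress from baseline toward the target, as a percentage."""
            span = self.target - self.baseline
            return 100.0 * (self.actual - self.baseline) / span if span else 0.0

    # Hypothetical quarterly submission for one investment.
    itt = [
        IndicatorRecord("Kilometers of road upgraded", baseline=0, target=150, actual=95),
        IndicatorRecord("Households with improved water access", baseline=12_000, target=40_000, actual=21_500),
    ]

    for row in itt:
        print(f"{row.indicator}: {row.percent_of_target():.0f}% of target")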
Score
6
Data

Did the agency collect, analyze, share, and use high-quality administrative and survey data – consistent with strong privacy protections – to improve (or help other entities improve) outcomes, cost-effectiveness, and/or the performance of federal, state, local, and other service providers programs in FY19? (Examples: Model data-sharing agreements or data-licensing agreements; data tagging and documentation; data standardization; open data policies; data-use policies)

5.1 Did the agency have a strategic data plan, including an open data policy? (Example: Evidence Act 202(c), Strategic Information Resources Plan)
  • As detailed on MCC’s Digital Strategy and Open Government pages, MCC promotes transparency to provide people with access to information that facilitates their understanding of MCC’s model, MCC’s decision-making processes, and the results of MCC’s investments. Transparency, and therefore open data, is a core principle for MCC because it is the basis for accountability, provides strong checks against corruption, builds public confidence, and supports informed participation of citizens.
  • As a testament to MCC’s commitment to and implementation of transparency and open data, the agency was the highest-ranked U.S. government agency in the 2018 Publish What You Fund Aid Transparency Index for the fifth consecutive year. In addition, the U.S. government is part of the Open Government Partnership, is a signatory to the International Aid Transparency Initiative, and must adhere to the Foreign Aid Transparency and Accountability Act. These initiatives require foreign assistance agencies to make data easier to access, use, and understand, and they create further impetus for MCC’s work in this area by establishing specific goals and timelines for adopting transparent business processes.
  • Additionally, MCC convened an internal Data Governance Board, an independent group consisting of representatives from departments throughout the agency, to streamline MCC’s approach to data management and advance data-driven decision-making across its investment portfolio.
5.2 Did the agency have an updated comprehensive data inventory? (Example: Evidence Act 3511)
  • MCC makes extensive program data, including financial and results data, publicly available through its Open Data Catalog, which includes an “enterprise data inventory” of all data resources across the agency for release in open, machine-readable formats. The Department of Policy and Evaluation leads the MCC Disclosure Review Board process for publicly releasing the de-identified microdata that underlies the independent evaluations on the Evaluation Catalog, following MCC’s Microdata Management Guidelines to ensure an appropriate balance between transparency and the protection of human subjects’ confidentiality.
5.3 Did the agency promote data access or data linkage for evaluation, evidence-building, or program improvement? (Examples: Model data-sharing agreements or data-licensing agreements; data tagging and documentation; data standardization; downloadable machine-readable, de-identified tagged data; Evidence Act 3520(c))
  • MCC’s Data Analytics Program (DAP) enables enterprise data-driven decision-making through the capture, storage, analysis, publishing, and governance of MCC’s core programmatic data. The DAP streamlines the agency’s data lifecycle, facilitating increased efficiency, and promotes agency-wide coordination, learning, and transparency. For example, MCC has developed custom software applications to capture program data, established the infrastructure for consolidated storage and analysis, and connected robust data sources to end-user tools that power up-to-date, dynamic reporting and streamline content maintenance on MCC’s public website. As part of this effort, the Monitoring and Evaluation team has developed an Evaluation Pipeline application that provides up-to-date information on the status, risk, cost, and milestones of the full evaluation portfolio for better performance management.
5.4 Did the agency have policies and procedures to secure data and protect personal, confidential information? (Example: differential privacy; secure, multiparty computation; homomorphic encryption; or developing audit trails)
  • MCC’s Disclosure Review Board ensures that data collected from surveys and other research activities is made public according to relevant laws and ethical standards that protect research participants, while recognizing the potential value of the data to the public. The board is responsible for: reviewing and approving procedures for the release of data products to the public; reviewing and approving data files for disclosure; ensuring de-identification procedures adhere to legal and ethical standards for the protection of research participants; and initiating and coordinating any necessary research related to disclosure risk potential in individual, household, and enterprise-level survey microdata on MCC’s beneficiaries.
  • The Microdata Management Guidelines inform MCC staff and contractors, as well as other partners, on how to store, manage, and disseminate evaluation-related microdata. This microdata is distinct from other data MCC disseminates because it typically includes personally identifiable information and sensitive data required for the independent evaluations. With this in mind, MCC’s Guidelines govern how to manage three competing objectives: sharing data for verification and replication of the independent evaluations, sharing data to maximize usability and learning, and protecting the privacy and confidentiality of evaluation participants. The Guidelines were established in 2013 and updated in January 2017. Following these Guidelines, MCC has publicly released 76 de-identified public-use microdata files for its evaluations. MCC’s experience with developing and implementing this rigorous process for data management and dissemination while protecting human subjects throughout the evaluation life cycle is detailed in Opening Up Evaluation Microdata: Balancing Risks and Benefits of Research Transparency.
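  • As an illustration of one step in such a de-identification process, the following minimal sketch removes hypothetical direct identifiers from a survey microdata file before public release. The column names and file names are assumptions; actual disclosure-risk review involves recoding, suppression, and risk analysis beyond simply dropping columns.

    import csv

    # Hypothetical direct identifiers to drop before a public-use release.
    DIRECT_IDENTIFIERS = {"respondent_name", "phone_number", "gps_latitude", "gps_longitude"}

    def deidentify(in_path: str, out_path: str) -> None:
        """Write a copy of a survey microdata file with direct identifiers removed."""
        with open(in_path, newline="") as src, open(out_path, "w", newline="") as dst:
            reader = csv.DictReader(src)
            kept = [name for name in reader.fieldnames if name not in DIRECT_IDENTIFIERS]
            writer = csv.DictWriter(dst, fieldnames=kept)
            writer.writeheader()
            for row in reader:
                writer.writerow({name: row[name] for name in kept})

    # Usage (hypothetical file names):
    # deidentify("survey_raw.csv", "survey_public_use.csv")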
5.5 Did the agency provide assistance to city, county, and/or state governments, and/or other grantees on accessing the agency’s datasets while protecting privacy?
  • Both MCC and its partner in-country teams produce and provide data that is continuously updated and accessed. MCC’s website is routinely updated with the most recent information, and in-country teams are required to do the same on their respective websites. As such, all MCC program data is publicly available on MCC’s website and individual MCA websites for use by MCC country partners, in addition to other stakeholder groups. As a part of each country program, MCC provides resources to ensure data and evidence are continually collected, captured, and accessed. In addition, each project’s evaluation has an Evaluation Brief that distills key learning from MCC-commissioned independent evaluations. Select Evaluation Briefs have been posted in local languages, including Mongolian, Georgian, French, and Romanian, to better facilitate use by country partners.
  • MCC also has a partnership with the President’s Emergency Plan for AIDS Relief (PEPFAR), referred to as Data Collaboratives for Local Impact (DCLI). This partnership is improving the use of data analysis for decision-making within PEPFAR and MCC partner countries by working toward evidence-based programs that address challenges in HIV/AIDS and health, empowerment of women and youth, and sustainable economic growth. Data-driven priority setting and insights gathered from citizen-generated data and community mapping initiatives contribute to improved allocation of resources in target communities to address local priorities such as job creation, access to services, and reduced gender-based violence. DCLI continues to inform and improve the capabilities of PEPFAR activities through projects such as the Tanzania Data Lab, which has trained nearly 700 individuals, nearly 50% of whom are women, and has hosted a one-of-a-kind “Data Festival.” The Lab recently announced a partnership with the University of Virginia Data Science Institute and catalyzed the launch of the first Master’s in Data Science program in East Africa, in partnership with the University of Dar es Salaam.
Score
5
Common Evidence Standards / What Works Designations

Did the agency use a common evidence framework, guidelines, or standards to inform its research and funding purposes; did that framework prioritize rigorous research and evaluation methods; and did the agency disseminate and promote the use of evidence-based interventions through a user-friendly tool in FY19? (Example: What Works Clearinghouses)

6.1 Did the agency have a common evidence framework for research and evaluation purposes?
  • For each investment, MCC’s Economic Analysis (EA) division undertakes a Constraints Analysis to determine the binding constraints to economic growth in a country. To determine the individual projects in which MCC will invest in a given sector, MCC’s EA division combines root cause analysis with a cost-benefit analysis. The results of these analyses allow MCC to determine which investments will yield the greatest development impact and return on MCC’s investment. Every investment also has its own set of indicators for monitoring during the lifecycle of the investment and an evaluation plan for determining the results and impact of a given investment. MCC’s Policy for Monitoring and Evaluation details MCC’s evidence-based research and evaluation framework. Per the Policy, each completed evaluation requires a summary of findings, now called the Evaluation Brief, to summarize the key components, results, and lessons learned of the evaluation. Evidence from previous MCC programming is considered during the development of new programs. Per the Policy, “monitoring and evaluation evidence and processes should be of the highest practical quality. They should be as rigorous as practical and affordable. Evidence and practices should be impartial. The expertise and independence of evaluators and monitoring managers should result in credible evidence. Evaluation methods should be selected that best match the evaluation questions to be answered. Indicators should be limited in number to include the most crucial indicators. Both successes and failures must be reported.”
6.2 Did the agency have a common evidence framework for funding decisions?
  • MCC uses a rigorous evidence framework to make every decision along the investment chain, from country partner eligibility to sector selection to project choices. MCC uses evidence-based selection criteria, generated by independent, objective third parties, to select countries for grant awards. To be eligible for selection, World Bank-designated low- and lower-middle-income countries must first pass the MCC 2019 Scorecard – a collection of 20 independent, third-party indicators that objectively measure a country’s policy performance in the areas of economic freedom, investing in people, and ruling justly. An in-depth description of the country selection procedure can be found in the annual Selection Criteria and Methodology report.
6.3 Did the agency have a user friendly tool that disseminated information on rigorously evaluated, evidence-based solutions (programs, interventions, practices, etc.) including information on what works where, for whom, and under what conditions? 
  • All evaluation designs, data, reports, and summaries are made publicly available on MCC’s Evaluation Catalog. To further the dissemination and use of MCC’s evaluations’ evidence and learning, in May 2019 the Agency launched Evaluation Briefs, a new product to capture and disseminate the results and findings of its independent evaluation portfolio. An Evaluation Brief will be produced for each evaluation and offers a succinct, user-friendly, systematic format to better capture and share the relevant evidence and learning from MCC’s independent evaluations. These accessible products will take the place of MCC’s Summaries of Findings. Evaluation Briefs will be published on the Evaluation Catalog and will complement the many other products published for each evaluation, including evaluation designs, microdata, survey questionnaires, baseline findings, interim reports, and final reports from the independent evaluator.
6.4 Did the agency promote the utilization of evidence-based practices in the field to encourage implementation, replication, and application of evaluation findings and other evidence?
  • From FY18-FY19, MCC conducted internal research and analysis to understand where and how its published evaluations, datasets, and knowledge products are utilized. The forthcoming results of this analysis will guide future efforts on evidence-based learning such as which sectors MCC prioritizes for evidence generation and publication and what types of products best disseminate MCC’s evidence and learning. Evaluation Briefs are in part a result of MCC’s findings around evaluation user metrics. MCC finalized baseline metrics around evidence and evaluation utilization in April 2018 and is continuing to track global use of its knowledge products on a quarterly basis with a goal of expanding the base of users of MCC’s evidence and evaluation products.
  • MCC has also used in-country evaluation dissemination events to ensure further results and evidence building. For example, a dissemination event was held in Zambia in June 2019 to disseminate results from two studies by an independent evaluator (Centers for Disease Control and Prevention): (i) a baseline survey of 12,000 (future) beneficiary and control groups, and (ii) water quality data along the utility’s water distribution system and in people’s homes. Attendees of the event included 60+ stakeholders representing water sector ministries, local community leaders, the water utility, the post-compact implementation unit, and donor partners.
  • To further bring attention to MCC’s evaluation and evidence, MCC periodically publishes an evaluation newsletter called Statistically Speaking. This newsletter highlights recent evidence and learning from MCC’s programs with a special emphasis on how MCC’s evidence can offer practical policy insights for policymakers and development practitioners in the United States and in partner countries. It also seeks to familiarize a wider audience with the evidence and results of MCC’s investments.
Score
6
Innovation

Did the agency have staff, policies, and processes in place that encouraged innovation to improve the impact of its programs in FY19? (Examples: Prizes and challenges; behavioral science trials; innovation labs/accelerators; performance partnership pilots; demonstration projects or waivers with rigorous evaluation requirements) 

7.1 Did the agency engage leadership and staff in its innovation efforts?
  • In 2014, MCC developed an internal “Solutions Lab” that was designed to encourage innovation in program development and implementation by engaging staff to come up with creative solutions to some of the biggest challenges MCC faces. MCC promotes agency-wide participation in its Solutions Lab through an internal portal. To further encourage staff who pursue innovative ideas throughout the compact lifecycle, MCC launched the annual MCC Innovation Award as a part of the Agency’s Annual Awards Ceremony held each summer. The Innovation Award recognizes individuals who demonstrate “exemplary” leadership integrating innovation in project design, project implementation, and/or systems functionality and efficiency. Selections for the Innovation Award are based on a demonstrated ability to lead and implement innovative strategies from project conception that foster sustained learning and collaboration and add value to MCC and/or country partnerships.
 7.2 Did the agency have policies that promote innovation?
  • MCC’s approach to development assistance hinges on its innovative and extensive use of evidence to inform investment decisions, guide program implementation strategies, and assess and learn from its investment experiences. As such, MCC’s Office of Strategic Partnerships offers an Annual Program Statement (APS) opportunity that allows MCC divisions and country teams to tap the most innovative solutions to new development issues. In FY19, the Monitoring and Evaluation division, using MCC’s APS and traditional evaluation firms, has been piloting partnerships with academics and in-country think tanks to leverage innovative, lower cost data technologies across sectors and regions. These include:
    • using satellite imagery in Sri Lanka to measure visible changes in investment on land to get early indications if improved land rights are spurring investment;
    • leveraging big data and cell phone applications in Colombo, Sri Lanka to monitor changes in traffic congestion and the use of public transport;
    • independently measuring power outages and voltage fluctuations using cell phones in Ghana, where utility outage data is unreliable, and where outage reduction is a critical outcome targeted by the Compact;
    • using pressure loggers on piped water at the network and household levels to get independent readings on access to water in Dar es Salaam, Tanzania; and
    • using remote sensing to measure water supply in water kiosks in Freetown, Sierra Leone.
7.3 Did the agency have processes, structures, or programs to stimulate innovation?
  • MCC recently launched an internal Millennium Efficiency Challenge (MEC) designed to tap into the extensive knowledge of MCC’s staff to identify efficiencies and innovative solutions that can shorten the compact and threshold program development timeline while maintaining MCC’s rigorous quality standards and investment criteria.
  • In September 2014, MCC’s Monitoring and Evaluation division launched the agency’s first Open Data Challenge, which continued into FY19. The Open Data Challenge initiative is intended to facilitate broader use of MCC’s U.S.-taxpayer funded data, encourage innovative ideas, and maximize the use of data that MCC finances for its independent evaluations.
  • MCC regularly engages in implementing test projects as part of its overall compact programs. A few examples include: (1) in Morocco, an innovative pay-for-results mechanism to replicate or expand proven programs that provide integrated support; (2) a “call-for-ideas” in Benin for information regarding potential projects that would expand access to renewable off-grid electrical power; (3) a regulatory strengthening project in Sierra Leone that includes funding for a results-based financing system; and (4) an Innovation Grant Program in Zambia to encourage local innovation in pro-poor service delivery in the water sector.
7.4. Did the agency evaluate its innovation efforts, including using rigorous methods?
  • Although MCC rigorously evaluates all program efforts, MCC takes special care to ensure that innovative or untested programs are thoroughly evaluated. In addition to producing final program evaluations, MCC is continuously monitoring and evaluating all programs throughout the program lifecycle, including innovation efforts, to determine if mid-program course-correction actions are necessary. This interim data helps MCC continuously improve its innovation efforts so that they can be most effective and impactful. Although 37% of MCC’s evaluations use random-assignment methods, all of MCC’s evaluations – both impact and performance – use rigorous methods to achieve the three-part objectives of accountability, learning, and results in the most cost-effective way possible.
Score
13
Use of Evidence in 5 Largest Competitive Grant Programs

Did the agency use evidence of effectiveness when allocating funds from its 5 largest competitive grant programs in FY19? (Examples: Tiered-evidence frameworks; evidence-based funding set-asides; priority preference points or other preference scoring for evidence; Pay for Success provisions)

8.1 What were the agency’s 5 largest competitive programs and their appropriations amount (and were city, county, and/or state governments eligible to receive funds from these programs)?
  • MCC awards all of its agency funds through two competitive grant programs: (1) the compact program ($631.5 million in FY19; eligible grantees: developing countries) and (2) the threshold program ($45.0 million in FY19; eligible grantees: developing countries).
8.2 Did the agency use evidence of effectiveness to allocate funds in 5 largest competitive grant programs? (e.g., Were evidence-based interventions/practices required or suggested? Was evidence a significant requirement?)
  • For country partner selection, as part of the compact and threshold competitive programs, MCC uses 20 different indicators within the categories of economic freedom, investing in people, and ruling justly to determine country eligibility for program assistance. These objective indicators of a country’s performance are collected by independent third parties.
  • When considering a second compact, MCC further considers whether countries have (1) exhibited successful performance on their previous compact; (2) improved Scorecard performance during the partnership; and (3) exhibited a continued commitment to further sector reform efforts in any subsequent partnership. As a result, the MCC Board of Directors applies an even higher standard when selecting countries for subsequent compacts. Per MCC’s Compact Development Guidance (p. 6): “As the results of impact evaluations and other assessments of the previous compact program become available, the partner country must use this data to inform project proposal assessment, project design, and implementation approaches.”
8.3 Did the agency use its 5 largest competitive grant programs to build evidence? (e.g., requiring grantees to participate in evaluations) 
  • Per its Policy for Monitoring and Evaluation (M&E), MCC requires independent evaluations of every project to assess progress in achieving outputs and outcomes and program learning based on defined evaluation questions throughout the lifetime of the project and beyond. As described above, MCC publicly releases all these evaluations on its website and uses findings, in collaboration with stakeholders and partner countries, to build evidence in the field so that policymakers in the United States and in partner countries can leverage MCC’s experiences to develop future programming. In line with MCC’s Policy for M&E, MCC projects are required to submit quarterly Indicator Tracking Tables showing progress toward projected targets.
8.4 Did the agency use evidence of effectiveness to allocate funds in any competitive grant program?
  • MCC uses evidence of effectiveness to allocate funds in all its competitive grant programs as noted above.
8.5 What are the agency’s 1-2 strongest examples of how competitive grant recipients achieved better outcomes and/or built knowledge of what works or what does not?  
  • A rigorous impact evaluation found that MCC’s compact in Burkina Faso improved educational infrastructure by renovating 396 classrooms in 132 primary schools and funding ancillary educational needs for students (e.g., latrines, school supplies, and food) and adults (e.g., teachers’ housing and gender-sensitivity training). Intervention schools saw overall student enrollment increase by 6%, with girls’ enrollment increasing by 10.3%, along with higher test scores, higher primary school graduation rates, and lower early marriage rates. In completing this program, MCC learned that addressing the factors that specifically threaten female education helps girls access and remain in school. Additionally, addressing schools’ weak educational quality (e.g., curriculum, faculty, management), coupled with improving the quality of students’ access to and facilities for education, should further improve students’ learning. This learning has since been applied in current education investments.
 8.6 Did the agency provide guidance which makes clear that city, county, and state government, and/or other grantees can or should use the funds they receive from these programs to conduct program evaluations and/or to strengthen their evaluation capacity-building efforts?
  • As described above, MCC develops a Monitoring & Evaluation (M&E) Plan for every grantee, which describes the independent evaluations that will be conducted, the key evaluation questions and methodologies, and the data collection strategies that will be employed. As such, grantees use program funds for evaluation.
  • MCC’s Policy for Monitoring and Evaluation stipulates that the “primary responsibility for developing the M&E Plan lies with the MCA [grantee] M&E Director with support and input from MCC’s M&E Lead and Economist. MCC and MCA Project/Activity Leads are expected to guide the selection of the indicators at the process and output levels that are particularly useful for management and oversight of activities and projects.” The M&E policy is intended primarily to guide MCC and partner country staff decisions to utilize M&E effectively throughout the entire program life cycle in order to improve outcomes. All MCC investments also include M&E capacity-building for grantees.
Score
N/A
Use of Evidence in 5 Largest Non-Competitive Grant Programs

Did the agency use evidence of effectiveness when allocating funds from its 5 largest non-competitive grant programs in FY19? (Examples: Evidence-based funding set-asides; requirements to invest funds in evidence-based activities; Pay for Success provisions)

MCC does not administer non-competitive grant programs (score for criteria #8 applied).

Score
6
Repurpose for Results

In FY19, did the agency shift funds away from or within any practice, policy, or program that consistently failed to achieve desired outcomes? (Examples: Requiring low-performing grantees to re-compete for funding; removing ineffective interventions from allowable use of grant funds; incentivizing or urging grant applicants to stop using ineffective practices in funding announcements; proposing the elimination of ineffective programs through annual budget requests; incentivizing well-designed trials to fill specific knowledge gaps; supporting low-performing grantees through mentoring, improvement plans, and other forms of assistance; using rigorous evaluation results to shift funds away from a program)

10.1 Did the agency shift funds/resources away from ineffective practices or interventions used within programs or by grantees?
  • In a number of cases, MCC has repurposed investments based on real-time evidence. In MCC’s first compact with Lesotho, MCC cancelled the Automated Clearing House Sub-Activity within the Private Sector Development Project after monitoring data determined that it would not accomplish the economic growth and poverty reduction outcomes envisioned during compact development; the remaining $600,000 in the sub-activity was transferred to the Debit Smart Card Sub-Activity, which targeted expanding financial services to people living in remote areas of Lesotho. In Tanzania, the $32 million Non-Revenue Water Activity was re-scoped after final design estimates on two of the activity’s infrastructure investments indicated higher costs that would significantly reduce their economic rates of return; as a result, $13.2 million was reallocated to the Lower Ruvu Plant Expansion Activity, $9.6 million to the Morogoro Water Supply Activity, and $400,000 to other environmental and social activities. In all of these country examples, the funding was either reallocated to activities with continued evidence of results or returned to MCC for investment in future programming.
10.2 Did the agency shift funds/resources away from ineffective policies used within programs or by grantees? 
  • MCC also consistently monitors the progress of compact programs and their evaluations across sectors, using the learning from this evidence to make changes to MCC’s operations. For example, as part of MCC’s Principles into Practice initiative, in November 2017 MCC undertook a review of its portfolio of road investments in an effort to better design, implement, and evaluate them. Through evidence collected across 16 countries with road projects, MCC uncovered seven key lessons, including the need to prioritize and select projects based on a road network analysis, to standardize the content and quality of road data collection across projects, and to consider cost and the potential for learning in determining how road projects are evaluated. In FY19, the lessons from this analysis are being applied to road projects in the Côte d’Ivoire and Nepal compacts, as MCC’s roads investments shift toward increased maintenance investments. Critically, the evidence also prompted MCC to change how it undertakes road evaluations, leading to a new request for proposals and re-bid of MCC’s roads evaluations based on new guidelines and principles.
10.3 Did the agency shift funds/resources away from ineffective grantees?
  • MCC has established a Policy on Suspension and Termination that lays out the reasons for which MCC may suspend or terminate assistance to partner countries, including if a country “engages in a pattern of actions inconsistent with the MCC’s eligibility criteria,” by failing to achieve desired outcomes such as:
    • A decline in performance on the indicators used to determine eligibility;
    • A decline in performance not yet reflected in the indicators used to determine eligibility; or
    • Actions by the country which are determined to be contrary to sound performance in the areas assessed for eligibility for assistance, and which together evidence an overall decline in the country’s commitment to the eligibility criteria.
  • Of the compacts that have been selected by MCC’s Board of Directors, 12 have had their partnerships ended due to concerns about country commitment to MCC’s eligibility criteria. MCC’s Policy on Suspension and Termination also allows MCC to reinstate eligibility when countries demonstrate a clear policy reversal, a remediation of MCC’s concerns, and an obvious commitment to MCC’s eligibility indicators, including achieving desired results. For example, in March 2012, MCC suspended Malawi’s compact prior to Entry into Force as MCC determined that the Government of Malawi had engaged in a pattern of actions inconsistent with MCC’s eligibility criteria. Thereafter, the new Government of Malawi took a number of decisive steps to improve the democratic rights environment and reverse the negative economic policy trends of concern to MCC, which led to a reinstatement of eligibility for assistance in June 2012.
10.4 Did the agency shift funds/resources away from ineffective programs? (e.g., eliminations or legislative language in budget requests)
  • MCC essentially operates only two programs, compact and threshold, which are so tied to core service delivery that they could not be eliminated as such.
10.5 Did the agency shift funds/resources away from consistently ineffective products and services?
  • No examples available.