2019 Federal Standard of Excellence
U.S. Agency for International Development
Leadership (Score: 8)
Did the agency have senior staff members with the authority, staff, and budget to build and use evidence to inform the agency’s major policy and program decisions in FY19?
1.1 Did the agency have a senior leader with the budget and staff to serve as the agency’s Evaluation Officer (or equivalent)? (Example: Evidence Act 313)
- The Director of the Office of Learning, Evaluation, and Research (LER) serves as the USAID evaluation officer. In compliance with the Foundations for Evidence-Based Policymaking Act, the Administrator of USAID designated the LER Director as the Agency’s Evaluation Officer through an internal Executive Message that was shared with the Agency on June 4, 2019.
- USAID’s Office of Learning, Evaluation, and Research (LER) in the Bureau for Policy, Planning, and Learning (PPL) helps the Agency build a body of evidence from which to learn and adapt programs. The LER Director is a senior staff member with the authority, staff, and budget to ensure agency evaluation requirements are met, including that all projects are evaluated at some level, and that decision-making is informed by evaluation and evidence. The LER Director oversaw approximately 25 staff and an estimated $4.6 million budget in Fiscal Year (FY) 2018.
- USAID has proposed creating a Bureau for Policy, Resources, and Performance (PRP), which would align policy, resources, and evidence-based programming, and elevate the evaluation function by creating an Office for Learning and Evaluation to manage the Agency’s Evaluation Policy. The office would also create and update the Agency Learning and Evaluation Plans and commission or conduct cross-cutting evaluations. If approved by Congress, the estimated timeline for establishing the bureau is approximately a year and a half. In the meantime, working groups for each new office are developing work plans and focus areas to ensure PRP will be able to meet its mandate.
1.2 Did the agency have a senior leader with the budget and staff to serve as the agency’s Chief Data Officer (or equivalent)? (Example: Evidence Act 202(e))
- The Agency’s Chief Data Officer (CDO) reports to the Chief Information Officer in the Bureau for Management. In compliance with the Foundations for Evidence-Based Policymaking Act, the Administrator of USAID designated the Chief Data Officer through an internal Executive Message that was shared with the Agency on June 4, 2019. The CDO manages the USAID Data Services team which focuses exclusively on improving the usage of data and information to ensure the Agency’s development outcomes are supported and enhanced by evidence. The CDO’s team includes several direct hire data science and IT professionals along with a budget for contract professionals who provide a comprehensive portfolio of data services in support of the Agency’s mission. The CDO oversaw approximately 55 staff and an estimated $12.5 million budget in 2019. The CDO is a senior career civil servant, and the USAID Data Services team is regularly called upon to generate products and services to support the Agency’s highest priorities. USAID also invests in roles including the Chief Innovation Officer, Chief Geographer, Chief Economist, Chief Scientist, and other key roles that drive the use of evidence across the agency.
1.3 Did the agency have a governance structure to coordinate the activities of its evaluation officer, chief data officer, statistical officer, and other related officials in order to inform policy decisions and evaluate the agency’s major programs?
- The Agency currently uses several governance structures and processes, and will update these in accordance with OMB guidance related to the Foundations for Evidence-Based Policymaking Act. Two notable current examples include:
- InfoGov: Agency policy, ADS 579 – USAID Development Data, establishes an Information Governance Committee (InfoGov) under USAID’s Management Operations Council (MOC). First issued in 2014, this policy is undergoing a comprehensive rewrite that will establish a Data Administration and Technical Advisory (DATA) Board, replacing InfoGov, to address the full data management lifecycle and enhance the Agency’s data governance structure in line with more recent legislation, including the Evidence Act and the Open Government Data Act. The DATA Board supports the work of the Agency Evaluation Officer by directing data services to facilitate evaluations.
- Management Operations Council: USAID also uses the Management Operations Council (MOC) as the platform for Agency leadership to assess progress toward achieving the strategic objectives in USAID’s Strategic Plan, cross-agency priority goals, and additional management issues. Established in 2014, the MOC provides Agency-wide leadership for initiatives and investments to reform USAID business systems and operations worldwide. The MOC also provides a platform for senior leaders to learn about and discuss improving organizational performance, efficiency, and effectiveness. The Agency’s Performance Improvement Officer and Chief Operating Officer co-chair the MOC, and membership includes, among others, all the Agency’s chief executive officers (e.g., the Senior Procurement Executive, Chief Human Capital Officer, Chief Financial Officer, Chief Information Officer, and Project Management Improvement Officer). Depending on the agenda, it also includes the Chief Data Officer, the Agency Evaluation Officer, and (once in place) the Agency Senior Statistical Official.
Evaluation & Research (Score: 8)
Did the agency have an evaluation policy, evaluation plan, and learning agenda (evidence-building plan), and did it publicly release the findings of all completed program evaluations in FY19?
2.1 Did the agency have an agency-wide evaluation policy? (Example: Evidence Act 313(d))
- The agency-wide USAID Evaluation Policy, published in January 2011 and updated in October 2016, incorporates changes that better integrate with USAID’s Program Cycle Policy and ensure compliance with the Foreign Aid Transparency and Accountability Act (FATAA). The 2016 changes to the evaluation policy updated evaluation requirements to simplify implementation and increase the breadth of evaluation coverage, dissemination, and utilization.
2.2 Did the agency have an agency-wide evaluation plan? (Example: Evidence Act 312(b))
- USAID has an agency-wide evaluation registry that collects information on all evaluations planned to commence within the next three years (as well as tracking ongoing and completed evaluations). Currently, this information is used internally and is not published. To meet the Evidence Act requirement, USAID will publish an agency-wide evaluation plan in the Agency’s Annual Performance Plan/Annual Performance Report in future years.
- In addition, USAID’s Office of Learning, Evaluation, and Research works with bureaus to develop internal annual Bureau Monitoring, Evaluation and Learning Plans that review evaluation quality and evidence building and use within each bureau, and identify challenges and priorities for the year ahead.
2.3 Did the agency have a learning agenda (evidence-building plan) and did the learning agenda describe the agency’s process for engaging stakeholders including, but not limited to the general public, state and local governments, and researchers/academics in the development of that agenda? (Example: Evidence Act 312)
- USAID has an agency-wide learning agenda called the Self-Reliance Learning Agenda (SRLA). The SRLA prioritizes evidence needs related to the Agency’s mission to foster country self-reliance, which covers all development program/sector areas, humanitarian assistance and resilience, and agency operations. This vision and mission are articulated in USAID’s Policy Framework, which reorients the Agency’s programs, operations, and workforce around the vision of self-reliance, or ending the need for foreign assistance.
- USAID used a strongly consultative process to develop the SRLA, as described in the SRLA Fact Sheet. First, the Agency compiled learning questions from a number of feedback processes, initially capturing 260 questions, which through consultations were reduced to a final thirteen that represent the Agency’s priority learning needs related to self-reliance.
- Now that the questions are published, USAID will partner with internal and external stakeholders to generate and gather evidence and facilitate the utilization of learning. These stakeholders include USAID’s implementing partners, other U.S. agencies, private coalitions and think tanks, researchers and academics, bilateral/multilateral organizations, and local actors and governments in the countries where USAID works.
2.4 Did the agency publicly release all completed program evaluations?
- All final USAID evaluation reports are published on the Development Experience Clearinghouse (DEC), except for a small number of evaluations that receive a waiver to public disclosure (typically less than 5 percent of the total completed in a fiscal year). The process to seek a waiver to public disclosure is outlined in the document Limitations to Disclosure and Exemptions to Public Dissemination of USAID Evaluation Reports and includes exceptions for circumstances such as those when “public disclosure is likely to jeopardize the personal safety of U.S. personnel or recipients of U.S. resources.”
- To increase awareness of available evaluation reports, USAID has created infographics showing the number and type of evaluations completed in FY2015, FY2016, and FY2017. These include short narratives that describe findings from selected evaluations and how that information informed decision-making. The information for FY2018 is being finalized.
2.5 What is the coverage, quality, methods, effectiveness, and independence of the agency’s evaluation, research, and analysis efforts? (Example: Evidence Act 315, subchapter II (c)(3)(9))
- USAID recognizes that sound development programming relies on strong evidence that enables policymakers and program planners to make decisions, improve practice, and achieve development outcomes.
- USAID staff review evaluation quality on an ongoing basis, and the Agency is in the process of commissioning an external study to assess the current coverage, quality, methods, effectiveness, and independence of the Agency’s evaluation, research, and analysis efforts. In the meantime, several studies have looked at parts of this question over the previous several years. These include GAO reports, such as Agencies Can Improve the Quality and Dissemination of Program Evaluations; From Evidence to Learning: Recommendations to Improve Foreign Assistance Evaluations; reviews by independent organizations, such as the Center for Global Development’s Evaluating Evaluations: Assessing the Quality of Aid Agency Evaluations in Global Health (Working Paper 461); and studies commissioned by USAID, such as the Meta-Evaluation of Quality and Coverage of USAID Evaluations 2009–2012. These studies generally show that USAID’s evaluation quality is improving over time, with room for continued improvement.
2.6 Did the agency use rigorous evaluation methods, including random assignment studies, for research and evaluation purposes?
- USAID uses rigorous evaluation methods, including randomized controlled trials (i.e., random assignment studies) and quasi-experimental methods, for research and evaluation purposes. For example, in FY2018, USAID completed 21 impact evaluations, 12 of which used randomized controlled trials.
- The Development Innovation Ventures (DIV) program makes significant investments using randomized controlled trials to provide evidence of impact for pilot approaches to be considered for scaled funding. USAID is also experimenting with cash benchmarking—using household grants to benchmark traditional programming. USAID conducted five randomized controlled trials (RCTs) of household grants or “cash lump sum” programs, and three RCTs of more traditional programs with household grant elements.
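To illustrate the kind of analysis behind a random assignment study, here is a minimal sketch of a two-arm RCT impact estimate. It is generic, not USAID code: the outcome data, group sizes, and effect size are all invented for the example.

```python
# Minimal sketch of a two-arm RCT impact estimate (hypothetical data).
# Random assignment lets a difference in group means be read as a causal
# effect; the standard error quantifies uncertainty around that estimate.
import random
import statistics

random.seed(42)

# Hypothetical outcome data (e.g., household income) for each arm.
control = [random.gauss(100, 15) for _ in range(500)]
treatment = [random.gauss(108, 15) for _ in range(500)]  # true effect ~8

effect = statistics.mean(treatment) - statistics.mean(control)
se = (statistics.variance(treatment) / len(treatment)
      + statistics.variance(control) / len(control)) ** 0.5

print(f"Estimated impact: {effect:.2f} "
      f"(95% CI: {effect - 1.96 * se:.2f} to {effect + 1.96 * se:.2f})")
```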
Resources (Score: 8)
Did the agency invest at least 1% of program funds in evaluations in FY19? (Examples: Impact studies; implementation studies; rapid cycle evaluations; evaluation technical assistance, rigorous evaluations, including random assignments)
3.1. ____ (Name of agency) invested $____ on evaluations, evaluation technical assistance, and evaluation capacity-building, representing __% of the agency’s $___ billion FY19 budget.
- USAID invested $195.6 million on evaluations, evaluation technical assistance, and evaluation capacity-building, representing 1% of the agency’s $18.8 billion FY18 budget.
3.2 Did the agency have a budget for evaluation and how much was it? (Were there any changes in this budget from the previous fiscal year?)
- In FY18, USAID operating units reported investing approximately $191 million on evaluations that were completed or ongoing in that fiscal year. LER’s budget for evaluation technical assistance and evaluation capacity-building in FY18 was $4.6 million, coming to a total of $195.6 million. This represents 1% of the Agency’s $18.8 billion FY18 budget.¹ This total does not include other research, studies, analysis or other data collection that is often used for evaluation, such as USAID’s investment in the Demographic Health Survey or some of the assessments done by third-parties across USAID’s innovation portfolio. It also does not include funding by agency sub-components for evaluation technical assistance.
- USAID Missions and Operating Units (OUs) reported completing 189 evaluations with resources totaling approximately $70 million. In addition, Missions/OUs are currently managing another 209 ongoing evaluations, many of which span more than one year, with total ongoing evaluation budgets estimated to reach almost $121 million. Overall, USAID’s spending on evaluations completed or ongoing in FY18 was $191 million, a reduction from the FY17 total of $252 million. LER’s FY18 budget was $4.6 million, down from $8.9 million in FY17, due in part to an overall Agency program budget decline from $19.6 billion in FY17² to $18.8 billion in FY18. Despite these reductions, the overall proportion the Agency invested in evaluations remained at 1% of program funds.
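The totals above can be reproduced with simple arithmetic. A sketch, restating only figures already reported in this section:

```python
# Reproducing the FY18 evaluation-spending share reported above.
completed = 70_000_000       # 189 completed evaluations (approx.)
ongoing = 121_000_000        # 209 ongoing evaluations (approx.)
ler_budget = 4_600_000       # LER technical assistance and capacity-building

evaluation_total = completed + ongoing + ler_budget   # ~$195.6 million
agency_budget = 18_763_542_000                        # see footnote 1

share = evaluation_total / agency_budget
print(f"${evaluation_total / 1e6:.1f}M of ${agency_budget / 1e9:.1f}B "
      f"= {share:.1%} of program funds")
```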
3.3 Did the agency provide financial and other resources to help city, county, and state governments or other grantees build their evaluation capacity (including technical assistance funds for data and evidence capacity building)?
- While specific data on this is limited, USAID estimates that investment in contracts or grants that provide support to build local organizational or governmental capacity in data collection, analysis, and use could be as high as $250 million.
- For example, USAID’s Data for Impact (D4I) activity helps low- and middle-income countries—primarily in sub-Saharan Africa—increase their capacity to use available data and generate new data to build evidence for improving health programs and policies and for decision-making. D4I’s goal is to help low-resource countries gather and use information to strengthen their health policies and programs and improve the health of their citizens.
- In another example, the MEASURE Evaluation project, funded by USAID, has a mandate to strengthen health information systems (HIS) in low-resource settings. The Project enables countries to improve lives by strengthening their capacity to generate and use high-quality health information to make evidence-informed, strategic decisions at local, subregional, and national levels.
¹Source for FY2018 Agency budget: FY 2020 Congressional Budget Justification. Page 2. Bilateral Economic Assistance total ($24,433,542,000) minus State’s Global Health Programs ($5,670,000,000) is $18,763,542,000.
²Source for FY2017 Agency budget: FY 2019 Congressional Budget Justification. Page 2. Bilateral Economic Assistance total ($25,316,492,000) minus State’s Global Health Programs ($5,670,000,000) is $19,646,492,000.
Performance Management / Continuous Improvement (Score: 9)
Did the agency implement a performance management system with outcome-focused goals and aligned program objectives and measures, and did it frequently collect, analyze, and use data and evidence to improve outcomes, return on investment, and other dimensions of performance in FY19?
(Example: Performance stat systems, frequent outcomes-focused data-informed meetings)
4.1 Did the agency have a strategic plan with outcome goals, program objectives (if different), outcome measures, and program measures (if different)?
- USAID partners with the U.S. Department of State to jointly develop and implement clear strategic goals, strategic objectives, and performance goals, which are articulated in the FY 2018 – 2022 U.S. Department of State – USAID Joint Strategic Plan (JSP). The Agency measures progress towards its own strategic goals, strategic objectives, and performance goals using data from across the Agency, including from annual Performance Plan and Reports (PPRs) completed by operating units, and uses that information to report on performance externally through the Annual Performance Plan/Annual Performance Report (APP/APR) and the Agency Financial Report.
- To aggregate and track performance in key sectors, USAID works with the U.S. Department of State to develop and manage nearly 200 standard foreign assistance indicators that have common definitions and defined collection methods. Once finalized, USAID publishes indicator data on a publicly available dashboard known as Dollars to Results. Finally, USAID reports on Agency Priority Goals (APG) and Cross Agency Priority (CAP) goal progress on www.performance.gov.
4.2 Does the agency use data/evidence to improve outcomes and return on investment?
- Many of USAID’s innovation or co-created programs, and those done in partnership, reflect a data-driven “pay for results” model, in which milestones are agreed upon by all parties and payments are made when milestones are achieved. This means that, for some programs, if a milestone is unmet, funds may be re-applied to an innovation or intervention that is achieving results. This rapid and iterative performance model means that USAID more quickly understands what isn’t working and can move resources away from it and toward what is working (a minimal, hypothetical sketch of this milestone-based disbursement logic appears after this list).
- Approaches such as prizes, Grand Challenges, and ventures can also be constructed as “pay for results only,” using instruments such as Development Impact Bonds so that USAID pays only for outcomes, not for inputs or attempts. The Agency believes this model will pave the way for much of USAID’s work to be aligned with a “pay for results” approach. USAID is also piloting the use of the impact per dollar of cash transfers as a minimum standard of cost-effectiveness for applicable program designs.
- Additionally, USAID Missions develop Country Development Cooperation Strategies (CDCSs) with clear goals and objectives and a Performance Management Plan (PMP) that identifies expected results, performance indicators to measure those results, plans for data collection and analysis, and regular reviews of performance measures to use data and evidence to adapt programs for improved outcomes. USAID also promotes operations performance management to ensure that the Agency achieves its development objectives and aligns resources with priorities. USAID uses its Management Operations Council (MOC) to conduct an annual Strategic Review of progress toward achieving the strategic objectives in the JSP.
- To improve linkages and break down silos, USAID continues to develop the Development Information Solution (DIS)—an enterprise-wide management information system that will enable USAID to collect, manage, and visualize performance data across units, along with budget and procurement information, to more efficiently manage and execute programming.
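As a conceptual illustration of the “pay for results” model described above, the following sketch models milestone-verified disbursement. The award, milestones, amounts, and verification step are all hypothetical; this is not USAID’s actual payment system.

```python
# Conceptual sketch of milestone-based ("pay for results") disbursement.
# Funds are released only when a milestone is verified; funds tied to
# unmet milestones remain available to redirect toward what is working.
from dataclasses import dataclass

@dataclass
class Milestone:
    name: str
    payment: int
    verified: bool  # set by an external verification step

def disburse(milestones: list[Milestone]) -> tuple[int, int]:
    """Return (amount paid out, amount freed for reallocation)."""
    paid = sum(m.payment for m in milestones if m.verified)
    freed = sum(m.payment for m in milestones if not m.verified)
    return paid, freed

# Hypothetical award with three pre-negotiated milestones.
award = [
    Milestone("1,000 farmers trained", 250_000, verified=True),
    Milestone("Yields up 15% vs. baseline", 500_000, verified=False),
    Milestone("Local co-op operating independently", 250_000, verified=True),
]
paid, freed = disburse(award)
print(f"Paid out: ${paid:,}; available to redirect: ${freed:,}")
```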
4.3 Did the agency have a continuous improvement or learning cycle processes to identify promising practices, problem areas, possible causal factors, and opportunities for improvement? (Examples: stat meetings, data analytics, data visualization tools, or other tools that improve performance)
- USAID’s Program Cycle policy (ADS 201.3.2.18) requires that Missions conduct at least one portfolio review per year that focuses on progress toward strategy-level results. Missions must also conduct a CDCS mid-course stocktaking at least once during the course of implementing their Country Development Cooperation Strategy, which typically spans five years.
- USAID developed an approach called Collaborating, Learning, and Adapting (CLA) to explicitly ensure adaptation through learning. It is incorporated into USAID’s Program Cycle guidance (ADS 201.3.5.19), which states: “Strategic collaboration, continuous learning, and adaptive management link together all components of the Program Cycle.” Through CLA, USAID ensures its programming is coordinated with others, grounded in a strong evidence base, and iteratively adapted to remain relevant throughout implementation.
- In addition to this focus through its programming, USAID has two senior bodies that oversee Enterprise Risk Management and meet regularly to improve the accountability and effectiveness of USAID programs and operations through holistic risk management. USAID tracks progress toward strategic goals and annual performance goals during data-driven reviews at Management Operations Council meetings.
Data (Score: 8)
Did the agency collect, analyze, share, and use high-quality administrative and survey data – consistent with strong privacy protections – to improve (or help other entities improve) outcomes, cost-effectiveness, and/or the performance of federal, state, local, and other service providers’ programs in FY19? (Examples: Model data-sharing agreements or data-licensing agreements; data tagging and documentation; data standardization; open data policies; data-use policies)
5.1 Did the agency have a strategic data plan, including an open data policy? (Example: Evidence Act 202(c), Strategic Information Resources Plan)
- USAID’s data-related investments and efforts are guided by its Information Technology Strategic Plan. This includes support for the Agency’s Development Data Policy—USAID’s open data policy—which provides a framework for systematically collecting Agency-funded data, structuring the data to ensure usability, and making the data public while ensuring rigorous protections for privacy and security. In addition, this policy sets requirements for how USAID data is tagged, submitted, and updated. The Development Data Library (DDL) is the Agency’s repository of USAID-funded, machine-readable data created or collected by the Agency and its implementing partners. The DDL, as a repository of structured and quantitative data, complements the DEC, which publishes qualitative reports and information.
5.2 Did the agency have an updated comprehensive data inventory? (Example: Evidence Act 3511)
- Launched in November 2018 as part of the Development Information Solution (DIS), USAID’s public-facing Development Data Library (DDL) provides a comprehensive inventory of data assets available to the Agency. The DDL’s data catalog is also harvested on an ongoing basis—via the Agency’s public data.json catalog file—for further distribution on the federal Data.gov website.
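Because the DDL catalog is federated to Data.gov, its datasets can be discovered programmatically. A sketch follows: Data.gov runs on CKAN, whose public package_search endpoint is used here; the free-text query string is an assumption chosen for illustration and may need adjusting to whatever organization filter Data.gov applies to USAID’s entries.

```python
# Sketch: searching Data.gov's public CKAN API for USAID datasets.
# The endpoint is CKAN's standard search API; the query string is an
# assumption for illustration, not an official USAID identifier.
import json
import urllib.parse
import urllib.request

params = urllib.parse.urlencode(
    {"q": "USAID Development Data Library", "rows": 5}
)
url = f"https://catalog.data.gov/api/3/action/package_search?{params}"

with urllib.request.urlopen(url) as resp:
    result = json.load(resp)["result"]

print(f"{result['count']} matching datasets; first few titles:")
for package in result["results"]:
    print("-", package["title"])
```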
5.3 Did the agency promote data access or data linkage for evaluation, evidence-building, or program improvement? (Examples: Model data-sharing agreements or data-licensing agreements; data tagging and documentation; data standardization; downloadable machine-readable, de-identified tagged data; Evidence Act 3520(c))
- The Data Services team—located in USAID’s Management Bureau’s Office of the Chief Information Officer (M/CIO)—manages a comprehensive portfolio of data services in support of the Agency’s mission. This includes enhancing the internal and external availability and ease of use of USAID data and information via technology platforms such as the USAID Economic Analysis and Data Services platform, broadening global awareness of USAID’s data and information services, and bolstering the Agency’s capacity to use data and information via training and the provision of demand-driven analytical services.
- The Data Services Team also manages and develops the Agency’s digital repositories, including the Development Data Library (DDL), the Agency’s central data repository. USAID and external users can search for and access datasets from completed evaluations and program monitoring by country and sector.
- USAID staff also have access to an internal database of over 100 standard foreign assistance program performance indicators and associated baseline, target, and actual data reported globally each year. This database and reporting process, known as the Performance Plan and Report (PPR), promotes evidence building and informs internal learning and decisions related to policy, strategy, budgets, and programs.
- The United States is a signatory to the International Aid Transparency Initiative (IATI)—a voluntary, multi-stakeholder initiative that created a data standard for publishing foreign assistance spending data in machine-readable format. The standard links an activity’s financial data to its evaluations. USAID continues to improve and add to its published IATI data, and is exploring ways to use these data as a best practice—including using them to populate partner country systems, fulfill transparency reporting as part of the U.S. commitment to the Grand Bargain, and inform internal decisions, including by examining what other development actors are doing through the newly launched Development Cooperation Landscape tool (a parsing sketch of the IATI format follows this list).
- The Landscape tool enables USAID staff to better understand cooperation partners’ priorities and identify potential areas of alignment. This data source is contributing to more robust cooperation strategy development and decision-making, and is helping USAID use cooperation resources more effectively and efficiently. USAID also created the Global Innovation Exchange, which shares information about development innovations with hundreds of other industry partners and governments.
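Since IATI data is machine-readable XML, linked evaluation documents can be pulled out of a published activity file with a few lines of code. A sketch under stated assumptions: “activities.xml” is a placeholder for a real IATI file, and the document category code “A07” (Evaluation) should be confirmed against the current IATI codelist.

```python
# Sketch: listing evaluation document links from an IATI activity file.
# "activities.xml" is a placeholder; category code "A07" (Evaluation)
# is assumed from the IATI document-category codelist.
import xml.etree.ElementTree as ET

tree = ET.parse("activities.xml")
for activity in tree.getroot().findall("iati-activity"):
    ident = activity.findtext("iati-identifier", default="(no id)")
    for doc in activity.findall("document-link"):
        category = doc.find("category")
        if category is not None and category.get("code") == "A07":
            title = doc.findtext("title/narrative", default="(untitled)")
            print(f"{ident}: {title} -> {doc.get('url')}")
```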
5.4 Did the agency have policies and procedures to secure data and protect personal, confidential information? (Example: differential privacy; secure, multiparty computation; homomorphic encryption; or developing audit trails)
- USAID’s Privacy Program directs policies and practices for protecting personally identifiable information and data, while several policy references (ADS303maz and ADS302mbj) provide guidance for protecting information to ensure the health and safety of implementing partners. USAID’s Development Data Policy (ADS Chapter 579) details a data publication process that provides governance for data access and data release in ways that ensure protections for personal and confidential information. As a reference to the Development Data Policy, ADS579maa explains USAID’s foreign assistance data publications and the protection of any sensitive information prior to release. USAID applies statistical disclosure control on all public data before publication or inclusion in the DDL.
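Statistical disclosure control takes many forms; one of the simplest is suppressing small cells in tabular releases so that rare combinations of attributes cannot identify individuals. The sketch below illustrates that general idea only—the threshold and table are invented, and this is not a description of USAID’s actual procedure.

```python
# Generic illustration of one disclosure-control technique: suppress
# any table cell with fewer than THRESHOLD respondents before release.
# The threshold and the table below are hypothetical examples.
THRESHOLD = 5

counts = {
    ("Region A", "female"): 142,
    ("Region A", "male"): 131,
    ("Region B", "female"): 3,    # small cell: potentially identifying
    ("Region B", "male"): 118,
}

released = {
    cell: (n if n >= THRESHOLD else "suppressed")
    for cell, n in counts.items()
}
for cell, value in released.items():
    print(cell, "->", value)
```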
5.5 Did the agency provide assistance to city, county, and/or state governments, and/or other grantees on accessing the agency’s datasets while protecting privacy?
- While specific data on this is limited, USAID does invest in contracts or grants that provide support to build local organizational or governmental capacity in data collection, analysis, and use. In addition, to date, more than 245 USAID data assets are available to the public via USAID’s DDL. These assets include microdata related to USAID’s initiatives that provide partner countries and development partners with insight into emerging trends and opportunities for expanding peace and democracy, reducing food insecurity, and strengthening the capacity to deliver quality educational opportunities for children and youth around the globe. Grantees are encouraged to use the data on the DDL, which provides an extensive User Guide to aid in accessing, using, securing, and protecting data. The Data Services team conducts communication and outreach to expand awareness of websites with development data, how to access the data, and how to contact the team for support. In addition, the Data Services team has developed a series of videos to show users how to access the available data. The [email protected] mail account responds to requests for assistance and guidance on a range of data services from within the Agency, from implementing partners, and from the public.
Common Evidence Standards / What Works Designations (Score: 2)
Did the agency use a common evidence framework, guidelines, or standards to inform its research and funding purposes; did that framework prioritize rigorous research and evaluation methods; and did the agency disseminate and promote the use of evidence-based interventions through a user-friendly tool in FY19? (Example: What Works Clearinghouses)
6.1 Did the agency have a common evidence framework for research and evaluation purposes?
- USAID’s evidence standards are embedded within its policies and include requirements for the use of evidence in strategic planning, project design, activity design, program monitoring, and evaluation. USAID has a Scientific Research Policy that sets out quality standards for research across the Agency. USAID’s Program Cycle Policy requires the use of evidence and data to assess the development context, challenges, potential solutions, and opportunities in all of USAID’s country strategies. Specific programs, such as Development Innovation Ventures (DIV), use evaluation criteria related to evidence and cost-effectiveness to determine funding decisions to test and scale innovations.
6.2 Did the agency have a common evidence framework for funding decisions?
- While there is no single evidence framework for all funding decisions, there are general guidelines for evaluation and scientific research, and some specific types of programs do use evidence frameworks or standards to make funding decisions.
- Development Innovation Ventures (DIV) uses a tiered funding system to test and scale evidence-based innovations, making funding decisions based on its evaluation criteria: evaluation and impact; cost-effectiveness; evidence and evaluation; implementation; sustainability and pathway to scale; and project team (see page 6 in DIV’s most recent Annual Program Statement for the evaluation criteria). DIV’s expectations vary by stage, but every awardee must report against a set of pre-negotiated key performance indicators and nearly all grants are structured in a pay-for-performance model.
- For large scale Stage 2 DIV grants of $500,000 or more, DIV requires evidence of impact that must be causal and rigorous – the grantee must either have rigorous underlying evidence already established, use this funding to run an evaluation with an evaluation partner, or run an evaluation with its own funding during the grant period. There must be significant demonstrated demand for the innovation.
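To make the tiered logic concrete, here is a hypothetical encoding of the one threshold stated above: Stage 2 awards of $500,000 or more must carry, or commit to produce, rigorous causal evidence of impact. The data structure and check are illustrative only, not DIV’s actual review process.

```python
# Hypothetical encoding of DIV's stated Stage 2 evidence threshold.
# Only the $500,000 figure and the evidence options come from the text;
# the rest is invented for illustration.
from dataclasses import dataclass

STAGE2_THRESHOLD = 500_000

@dataclass
class Application:
    name: str
    amount: int
    has_rigorous_causal_evidence: bool  # e.g., an existing RCT
    will_run_rigorous_evaluation: bool  # with a partner or own funding

def meets_evidence_bar(app: Application) -> bool:
    if app.amount < STAGE2_THRESHOLD:
        return True  # smaller awards face proportionally lighter requirements
    return app.has_rigorous_causal_evidence or app.will_run_rigorous_evaluation

app = Application("Solar irrigation pilot", 750_000, False, True)
print(meets_evidence_bar(app))  # True: commits to a rigorous evaluation
```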
6.3 Did the agency have a user-friendly tool that disseminated information on rigorously evaluated, evidence-based solutions (programs, interventions, practices, etc.), including information on what works where, for whom, and under what conditions?
- USAID does have an Agency-wide repository for development information (including evaluation reports and other studies) which is available to the public at the Development Experience Clearinghouse. In addition, USAID uses the International Initiative for Impact Evaluations (3ie) database of impact evaluations relevant to development topics (including over 4,500 entries to date), knowledge gap maps, and systematic reviews that pull the most rigorous evidence and data from across international development donors. 3ie also houses a collection of institutional policies and reports that examine findings from its database of impact evaluations on overarching policy questions to help policymakers and development practitioners improve development impact through better evidence.
- USAID’s Agency Programs and Functions policy designates technical bureaus responsible for being the repository for latest information in the sectors they oversee; prioritizing evidence needs and taking actions to build evidence; and disseminating that evidence throughout the agency for those sectors. Several USAID bureaus and sectors have created user friendly tools to disseminate information on evidence-based solutions. These include, but are not limited to:
- CLIMATELINKS: A global knowledge portal for climate change and development practitioners
- EDUCATIONLINKS: Shares innovations and lessons learned on implementation of the USAID Education Policy
- Natural Resources Management and Development Portal
- URBANLINKS: USAID’s sharing platform for resources on sustainable urban development
6.4 Did the agency promote the utilization of evidence-based practices in the field to encourage implementation, replication, and application of evaluation findings and other evidence?
- USAID’s approach to Collaborating, Learning, and Adapting (CLA) helps ensure that evidence from evaluation of USAID programming is shared with and used by our staff, partners, and stakeholders in the field. USAID requires a dissemination plan and post-evaluation action plan for each evaluation, and USAID field staff are encouraged to co-create evaluation action plans with key stakeholders based on evaluation evidence. USAID collects examples through the CLA Case Competition, held annually, which recognizes implementers, stakeholders, and USAID staff for their work generating and sharing technical evidence and learning from monitoring and evaluation. It is another way that the Agency encourages evidence-based practices among its stakeholders.
- In one example of USAID’s efforts to promote evidence utilization, the Mission in Senegal worked with a government ministry to co-create evaluation recommendations and an action plan based on those recommendations. Another example shows how an implementer adapted its approach based on findings from a mid-term evaluation for a project in Cambodia. The project maintains a collaborative and responsive relationship with the Mission and utilizes continuous learning and improvement supported by evidence for better development results.
- USAID also periodically holds large learning events with partners and others in the development community around evidence, including, but not limited to, Evaluation Summits, engagement around the Self-Reliance Learning Agenda, and Moving the Needle. These gatherings are designed to build interest in USAID’s evidence, build capacity around applying that evidence and learning, and elicit evidence and learning contributions.
Innovation (Score: 8)
Did the agency have staff, policies, and processes in place that encouraged innovation to improve the impact of its programs in FY19? (Examples: Prizes and challenges; behavioral science trials; innovation labs/accelerators; performance partnership pilots; demonstration projects or waivers with rigorous evaluation requirements)
7.1 Did the agency engage leadership and staff in its innovation efforts?
- In FY2019, USAID appointed a new Chief Innovation Officer to advocate for innovation throughout development and national security strategies across USAID, the U.S. Government, and the international community. The Chief Innovation Officer promotes opportunities for entrepreneurs, implementing partners, universities, donors, and others to test and scale innovative solutions and approaches to development problems around the world. In FY2019, the U.S. Global Development Lab also engaged USAID leadership and Mission staff from around the world at the Mission Directors Conference, the Contracting Officer and Controller Conference, the Foreign Service National Conference, and the Private Sector Engagement Forum.
- For innovations specific to a particular sector, Agency leadership has supported technical staff in surfacing groundbreaking ideas, such as how the Bureau for Global Health’s Center for Innovation and Impact (CII) used open innovation approaches to issue the Saving Lives at Birth Grand Challenge and identify promising, life-saving maternal and newborn health innovations.
7.2 Did the agency have policies that promote innovation?
- In December 2018, USAID released its first Acquisition and Assistance Strategy to guide changes to policy and practice to increase flexibility, creativity, and adaptability in USAID partnering approaches. The strategy states: “USAID will increase our use of collaborative and co-creative approaches by 10 percentage points in terms of total dollars and awards in FY 2019.” Building on past years of experimentation and innovation, USAID will challenge design and procurement officers to engage a much wider range of practices that emphasize collaboration and co-creation. USAID will diversify approaches to design, solicitation, and awards—designing activities less prescriptively and more collaboratively; simplifying access for new and local partners; and increasing usage of awards that pay for results, as opposed to presumptively reimbursing for costs.
- In the same month, USAID released the first Private Sector Engagement Policy to transform how USAID engages the private sector throughout core operations across all sectors. The policy represents an intentional shift to pursue market-based approaches, investment, and private sector expertise and innovation to accelerate country journeys to self-reliance.
7.3 Did the agency have processes, structures, or programs to stimulate innovation?
- Since 2014, the U.S. Global Development Lab has advanced science, technology, innovation, and partnerships to accelerate development impact. In FY 2019, the Lab’s budget was $75 million with 76 direct hire staff (Civil Service and Foreign Service). The Lab is home to numerous innovation teams and programs, including those housed in the Center for Development Innovation.
- In addition, the Center for Innovation and Impact (CII)—the Bureau for Global Health’s dedicated innovation office—takes a business-minded approach to fast-tracking the development, introduction, and scale-up of health innovations that address the world’s most important health challenges, and assessing and adopting cutting-edge approaches (such as unmanned aerial vehicles and artificial intelligence).
- USAID and its partners have launched eleven Grand Challenges for Development since 2011. Across the Grand Challenges portfolio, partners have jointly committed over $535 million ($155 million from USAID) in grants and technical assistance for over 528 innovators in 107 countries. To date, more than $614 million in follow-on funding has been catalyzed from external sources, a key measure of success.
- Feed the Future Partnering for Innovation partners with agribusinesses to help them commercialize and scale new agricultural innovations to help improve the livelihoods of smallholder farmers, increasing their productivity and incomes. To date the program has worked with 59 partners in 20 different countries, investing more than $43 million in new technologies and services, and leveraging nearly $100 million in private sector investment. The program has helped commercialize over 118 innovations, which resulted in an estimated $99 million in sales.
7.4. Did the agency evaluate its innovation efforts, including using rigorous methods?
- Within the U.S. Global Development Lab, the MERLIN program works to innovate on traditional approaches to monitoring, evaluation, research and learning. While innovative in themselves, these approaches can also be better suited to evaluating an innovation effort. Two examples include Developmental Evaluation, which aims to provide ongoing feedback to managers on implementation through an embedded evaluator, and Rapid Feedback, which allows implementers to test various methods to reach certain targeted results (more quickly than through traditional midterm or final evaluations).
- Many of the agency’s programs, such as Grand Challenges and Development Innovation Ventures (DIV), have been reviewed through formal audits and other performance and impact assessments. DIV is USAID’s tiered, evidence-driven open innovation program. It awards grants for innovative solutions to any development problem on the basis of rigorous evidence of impact, cost-effectiveness, and a pathway to scale via the public and/or private sectors. The DIV model is designed to source breakthrough solutions, minimize risk and maximize impact by funding according to outcomes and milestones, rigorously evaluate impact and cost-effectiveness, and scale proven solutions. Since 2010, DIV has supported 197 innovations in 45 countries with approximately $118 million. It has generated experimental or quasi-experimental evaluation studies of more than a third of those innovations, and a forthcoming working paper rigorously assesses the social rate of return of DIV’s early portfolio.
Use of Evidence in 5 Largest Competitive Grant Programs (Score: 10)
Did the agency use evidence of effectiveness when allocating funds from its 5 largest competitive grant programs in FY19? (Examples: Tiered-evidence frameworks; evidence-based funding set-asides; priority preference points or other preference scoring for evidence; Pay for Success provisions)
8.1 What were the agency’s 5 largest competitive programs and their appropriations amount (and were city, county, and/or state governments eligible to receive funds from these programs)?
- USAID’s top five program accounts, based on actual appropriation amounts in FY18, are listed below. For each account, eligible grantees are any U.S. or non-U.S. organization, individual, nonprofit, or for-profit entity that meets the requirements described in ADS 303.
1) International Disaster Assistance ($4.3 billion)
2) Economic Support Fund ($3.9 billion)
3) Migration and Refugee Assistance ($3.4 billion)
4) Global Health (USAID) ($3 billion)
5) Development Assistance ($3 billion)
- See the U.S. Foreign Assistance Reference Guide for more information on each of these accounts. More information can also be found in the FY2020 Congressional Budget Justification (page 2, column 4). USAID generally does not limit eligibility when awarding grants and cooperative agreements; eligibility may be restricted for an individual notice of funding opportunity in accordance with the procedures in ADS 303.
8.2 Did the agency use evidence of effectiveness to allocate funds in 5 largest competitive grant programs? (e.g., Were evidence-based interventions/practices required or suggested? Was evidence a significant requirement?)
- USAID is committed to using evidence of effectiveness in all of its competitive contracts, cooperative agreements, and grants, which comprise the majority of the Agency’s work. USAID’s Program Cycle Policy ensures evidence from monitoring, evaluation and other sources informs funding decisions at all levels, including during strategic planning, project and activity design, procurement and implementation.
- USAID’s Senior Obligation Alignment Review (SOAR) helps to ensure the Agency is using evidence to design and approve funding for innovative approaches to provide long-term sustainable outcomes and provides oversight on the use of grant or contract mechanisms and proposed results.
- USAID weights past performance at 30 percent of the non-cost evaluation criteria for contracts. As part of determining grant awards, USAID’s policy requires an applicant to provide a list of all its cost-reimbursement contracts, grants, or cooperative agreements involving similar or related programs during the past three years. The grant Selection Committee chair must validate the applicant’s past performance reference information based on existing evaluations to the maximum extent possible, and make a reasonable, good-faith effort to contact all references to verify or corroborate how well an applicant performed.
- For assistance, as required by 2 CFR 200, USAID also does a risk assessment to review an organization’s ability to meet the goals and objectives outlined by the agency. Internal procedures for conducting the risk assessment are found in ADS 303.3.9, with guidance on how to look for evidence of effectiveness from potential grantees. Per the ADS, this can be done through reviewing past performance and evaluation/performance reports such as the Contractor Performance Assessment Reporting System (CPARS).
- Even though there is no federal requirement (as there is with CPARS), USAID has recently created a process, detailed in ADS 303 (p. 66), to assess grantee past performance for use when making funding decisions. Per USAID’s ADS 303 policy, before making an award of any grant or cooperative agreement the Agreement Officer must state in the memorandum of negotiation that the applicant has a satisfactory record of performance. When making the award, the Agreement Officer may consider withholding authority to proceed to the next phase of a grant until provided evidence of acceptable performance within a given period.
- USAID was recognized by GAO in its recent report published on September 5, 2018, Managing for Results: Government-wide Actions Needed to Improve Agencies’ Use of Performance Information in Decision Making (GAO-18-609SP), as one of four agencies (out of 23 surveyed) with proven practices for using performance information. USAID was also the only CFO Act agency with a statistically significant increase in the Agency Use of Performance Information Index since 2007.
8.3 Did the agency use its 5 largest competitive grant programs to build evidence? (e.g., requiring grantees to participate in evaluations)
- Grantees are required to report on the progress of activities through documentation such as Activity Monitoring, Evaluation, and Learning (MEL) Plans, periodic performance reporting, and external and internal evaluation reports (if applicable). These reports help USAID remain transparent and accountable and also help the Agency build evidence of what does and doesn’t work in its interventions. Any internal evaluation undertaken by a grantee must also be provided to USAID for learning purposes. All datasets compiled under USAID-funded projects, activities, and evaluations are to be submitted by grantees to the USAID Development Data Library. All final evaluation reports must also be submitted to the Agency’s Development Experience Clearinghouse (DEC), unless they receive a waiver to the USAID’s public dissemination requirements. These are rare and require the concurrence of the Director of the Office of Learning, Evaluation, and Research.
8.4 Did the agency use evidence of effectiveness to allocate funds in any competitive grant program?
- USAID is actively engaged in utilizing evidence of effectiveness to allocate funds. For example, Development Innovation Ventures (DIV) invests in innovations that demonstrate evidence of impact, cost-effectiveness, and a viable pathway to scale. DIV provides four types of grants: 1) proof of concept, 2) positioning for scale, 3) scaling proven solutions, and 4) evidence grants.
- The more funding requested (up to $5 million), the more DIV requires in an innovation’s evidence base, the deeper the due diligence process, and the greater the expectation that the applicant will be able to demonstrate development impact and potential to scale. After a decision is made to allocate funding, 98% of all DIV awards are structured as fixed-amount, pay-for-performance grants, ensuring that awards maximize the impact of U.S. taxpayer dollars. Over the past 8 years, DIV has invested $118 million in nearly 200 innovations across 45 countries.
8.5 What are the agency’s 1-2 strongest examples of how competitive grant recipients achieved better outcomes and/or built knowledge of what works or what does not?
- A mid-term, whole-of-project evaluation of USAID/Rwanda’s Community Health and Improved Nutrition (CHAIN) project found positive results from its approach to coordination and collaboration among the implementing partners carrying out CHAIN’s activities. For example, as a result of collaboration, many implementing partners exceeded their targets, found cost savings from carrying out joint trainings, and improved relationships with local government officials, given their work toward a common goal. CHAIN’s Project Management Team continues to support learning and to adapt the collaboration approach based on evaluation findings.
- The Better Coffee Harvest project was a public-private partnership that worked to improve the livelihoods of smallholder coffee farmers in El Salvador and Nicaragua. Through Collaborating, Learning, and Adapting (CLA) approaches, the project more effectively leveraged monitoring and evaluation activities to engage stakeholders and share information, ideas, and connections with sector actors. This approach helped the project meet its key performance targets and improve coordination across the larger coffee industry. A team-led focus on data for reflection and learning meant project leadership had the inputs, feedback, and time necessary to make informed decisions. The project’s efforts to convene stakeholders to discuss findings also had a significant effect on the sector. After participating in roundtables where project results were discussed, industry stakeholders signed new sales contracts, formed new partnerships, and launched private-sector initiatives to address long-standing challenges in the sector.
8.6 Did the agency provide guidance which makes clear that city, county, and state government, and/or other grantees can or should use the funds they receive from these programs to conduct program evaluations and/or to strengthen their evaluation capacity-building efforts?
- USAID’s Program Cycle Policy states that “[f]unding may be dedicated within a project or activity design for implementing partners to engage in an internal evaluation for institutional learning or accountability purposes.”
- USAID’s Development Innovation Ventures (DIV) specifically references evaluations and rigorous evidence in the official solicitation: “Larger scale Stage 2 innovations (over $500,000) must include or test the evidence of impact of an innovation. This evidence of impact must be causal and rigorous—the grantee must either have rigorous underlying evidence already established, use this funding to run an evaluation with an evaluation partner, or run an evaluation with its own funding during the grant period.” More on DIV’s funding framework can be found in its evaluation criteria (see page 6 in DIV’s most recent Annual Program Statement for the evaluation criteria).
Use of Evidence in 5 Largest Non-Competitive Grant Programs (Score: N/A)
Did the agency use evidence of effectiveness when allocating funds from its 5 largest non-competitive grant programs in FY19? (Examples: Evidence-based funding set-asides; requirements to invest funds in evidence-based activities; Pay for Success provisions)
- USAID does not administer non-competitive grant programs (the score for criterion #8 is applied).
Repurpose for Results (Score: 7)
In FY19, did the agency shift funds away from or within any practice, policy, or program that consistently failed to achieve desired outcomes? (Examples: Requiring low-performing grantees to re-compete for funding; removing ineffective interventions from allowable use of grant funds; incentivizing or urging grant applicants to stop using ineffective practices in funding announcements; proposing the elimination of ineffective programs through annual budget requests; incentivizing well-designed trials to fill specific knowledge gaps; supporting low-performing grantees through mentoring, improvement plans, and other forms of assistance; using rigorous evaluation results to shift funds away from a program)
10.1 Did the agency shift funds/resources away from ineffective practices or interventions used within programs or by grantees?
- Through the Program Cycle, USAID encourages managing projects and activities adaptively, responding to rigorous data and evidence and shifting design and/or implementation accordingly. For example, USAID’s Lowland Water, Sanitation, and Hygiene (WASH) activity works to accelerate access to improved WASH in three rural lowland regions in Ethiopia. A mid-activity data review highlighted several disappointing results that prompted the program team (USAID/Ethiopia staff and the implementing partner) to rethink their approach. The team used pause-and-reflect sessions, strategic collaboration, adaptive management, and monitoring and evaluation for learning. Using this intentional CLA process, the team initiated a virtuous cycle of learning. Soon the team realized that, in these communities dominated by (semi-)pastoralist groups, the operating conditions for effective, sustained behavior change are highly variable. The CLA approach helped the program team pivot and re-design activities to improve program effectiveness.
10.2 Did the agency shift funds/resources away from ineffective policies used within programs or by grantees?
- In April 2019, USAID released its new Policy Framework which articulates USAID’s approach to providing development and humanitarian assistance and the Agency’s programmatic and operational priorities that follow from it. These priorities and approaches are based on evidence and inform issue-specific development policies, strategies, and vision papers; budget requests and allocations; country and regional strategic plans; good-practice documents and project designs; evaluations and learning agendas; and overall engagement with partners.
- The Framework requires a reorientation of USAID’s programmatic approach to foster self-reliance more effectively. This approach marks a new direction for USAID but draws on deep experience and lessons learned. It is grounded in three principles that underpin why the Agency provides assistance to each country, what assistance will be most effective, and how it can ensure the sustainability of its results. These principles are Advance Country Progress, Invest for Impact, and Sustain Results. Ultimately, the Framework serves to shift policy to improve program effectiveness in supporting partner countries in their journey to self-reliance.
10.3 Did the agency shift funds/resources away from ineffective grantees?
- USAID shifts funds away from ineffective grantees. For example, the Securing Water for Food Grand Challenge is designed with a Technical Assistance Facility to consult and work with grantees to identify specific growth barriers, and then connect them with vetted service providers that bring expertise and capabilities to help these grantees overcome their strategic barriers. The Technical Assistance Facility provides tailored financial and acceleration support to help these grantees improve their market-driven business development, commercial growth, and scaling.
- If a grantee is unable to meet specific performance targets, such as number of customers or product sales, further funding is not granted and the grantee is re-categorized into the program’s group of unsuccessful alumni. The Securing Water for Food Grand Challenge used milestone-based grants to terminate 15 awards that were not meeting their annual milestones and shifted that money to both grants and technical assistance for the remaining 25 awards in the program.
10.4 Did the agency shift funds/resources away from ineffective programs? (e.g., eliminations or legislative language in budget requests)
- The Agency continuously works to improve upon its programming based on the best-available evidence. In one example, the findings of a mid-term evaluation of Reproductive, Maternal, Neonatal, Child, and Adolescent Health (RMNCH+A) activities resulted in shifting resources towards a more effective implementation of the interventions. Taking lessons from the first phase of the project, and based on the evaluation recommendation, the project decided to focus on limited technical areas where activities have sufficient feedback loops and opportunities for learning and continuous improvements. For example, the project will continue to cover maternal, newborn, and child health areas, but the adolescent health interventions will be done through other USAID funded projects.
10.5 Did the agency shift funds/resources away from consistently ineffective products and services?
- USAID uses rigorous evaluations to maximize its investments. A recent independent study found that 71 percent of USAID evaluations have been used to modify and/or design USAID projects. Below are some examples where USAID has shifted funds and/or programming decisions based on performance:
- An evaluation of an education project in El Salvador revealed the need to improve effectiveness by better matching interventions with individual target school and community needs to serve out-of-school youth. As a result, the project changed how it is working with the Ministry of Education.
- USAID acceleration programs use a pilot-and-pivot approach to test different services and vendors to help its innovators accelerate their work. These services and vendors are held to high customer-satisfaction minimums; if a vendor does not consistently score 90/100 or higher, it is removed from the service offering.
- USAID’s INVEST program is designed for constant feedback loops around partner performance. Not only are under-performing partners dropped, but new partners can be added dynamically, based on demand. This greatly expands USAID’s new-partner base and raises the performance standard across the board.