2019 Federal Standard of Excellence


Common Evidence Standards / What Works Designations

Did the agency use a common evidence framework, guidelines, or standards to inform its research and funding purposes; did that framework prioritize rigorous research and evaluation methods; and did the agency disseminate and promote the use of evidence-based interventions through a user-friendly tool in FY19? (Example: What Works Clearinghouses)

Score
7
Administration for Children and Families (HHS)
6.1 Did the agency have a common evidence framework for research and evaluation purposes?
  • ACF has established a common evidence framework adapted for the human services context from the framework for education research developed by the U.S. Department of Education and the National Science Foundation. The ACF framework, which includes the six types of studies delineated in the ED/NSF framework, aims to (1) inform ACF’s investments in research and evaluation and (2) clarify expectations for potential grantees and others regarding different types of studies.
6.2 Did the agency have a common evidence framework for funding decisions?
  • While ACF does not have a common evidence framework across all funding decisions, certain programs do use a common evidence framework for funding decisions. For example:
    • The Family First Prevention Services Act (FFPSA) enables states to use funds for certain evidence-based services. In April 2019, ACF published the Prevention Services Clearinghouse Handbook of Standards and Procedures, which provides a detailed description of the standards used to identify and review programs and services in order to rate them as promising, supported, or well-supported practices.
    • The Personal Responsibility Education Program Competitive Grants were funded to replicate effective, evidence-based program models or substantially incorporate elements of projects that have been proven to delay sexual activity, increase condom or contraceptive use for sexually active youth, and/or reduce pregnancy among youth. Through a systematic evidence review, HHS selected 44 models that grantees could use, depending on the needs and age of the target population of each funded project.
6.3 Did the agency have a user-friendly tool that disseminated information on rigorously evaluated, evidence-based solutions (programs, interventions, practices, etc.), including information on what works where, for whom, and under what conditions?
  • ACF sponsors several user-friendly tools that disseminate and promote evidence-based interventions. Several evidence reviews of human services interventions do so by rating the quality of evaluation studies and presenting results in a user-friendly, searchable format. Reviews to date have covered teen pregnancy prevention; home visiting; marriage education and responsible fatherhood; and employment and training, and they include both ACF-sponsored and other studies. ACF is currently developing two new websites that will disseminate information on rigorously evaluated, evidence-based solutions:
    • The Pathways to Work Evidence Clearinghouse will be a user-friendly website (expected to launch in Spring 2020) that will report on “projects that used a proven approach or a promising approach in moving welfare recipients into work, based on independent, rigorous evaluations of the projects.”
    • ACF’s Title IV-E Prevention Services Clearinghouse project launched a website in June 2019 that is easily accessible and searchable, allowing users to navigate the site and find information about mental health and substance abuse prevention and treatment services, in-home parent skill-based programs, and kinship navigator services designated as “promising,” “supported,” or “well-supported” practices by an independent systematic review.
6.4 Did the agency promote the utilization of evidence-based practices in the field to encourage implementation, replication, and application of evaluation findings and other evidence?
  • OPRE’s evaluation policy states that it is important for evaluators to disseminate research findings in ways that are accessible and useful to policymakers and practitioners, and that OPRE and program offices will work in partnership to inform potential applicants, program providers, administrators, policymakers, and funders by disseminating evidence from ACF-sponsored and other good-quality evaluations. OPRE has a robust dissemination function that includes the OPRE website, an OPRE e-newsletter, and a social media presence on Facebook and Twitter. OPRE also hosts two major biennial conferences, the Research and Evaluation Conference on Self-Sufficiency and the National Research Conference on Early Childhood, to share research findings with researchers, program administrators, and policymakers at all levels.
Score
5
Administration for Community Living (HHS)
6.1 Did the agency have a common evidence framework for research and evaluation purposes?
  • ACL defines evidence-based programs on its website. ACL’s National Institute on Disability, Independent Living, and Rehabilitation Research (NIDILRR) uses a stages of research framework (SORF) to classify and describe its funded grants and the research projects within those grants. The four stages of the SORF are exploration and discovery, intervention development, intervention efficacy, and scale-up evaluation. Using the SORF, NIDILRR gains insight into what is known and unknown about a problem; whether it is time to develop interventions to address a particular problem; whether it is time to test the efficacy of interventions; and whether it is time to scale up interventions for broader use.
6.2 Did the agency have a common evidence framework for funding decisions?
  • The Older Americans Act requires the use of evidence-based programming in Title III-D-funded activities (Disease Prevention and Health Promotion Services). In response, ACL developed a definition of the term “evidence-based” and created a website containing links to a range of resources for evidence-based programs. This serves as a common evidence framework for Older Americans Act-funded activities. For programs that are not legislatively required to use evidence-based models, ACL requires, through its funding process, that all programs provide clear justification and evidence (where available) that proposed projects will achieve their stated outcomes.
6.3 Did the agency have a user-friendly tool that disseminated information on rigorously evaluated, evidence-based solutions (programs, interventions, practices, etc.), including information on what works where, for whom, and under what conditions?
6.4 Did the agency promote the utilization of evidence-based practices in the field to encourage implementation, replication, and application of evaluation findings and other evidence?
Score
2
U.S. Agency for International Development
6.1 Did the agency have a common evidence framework for research and evaluation purposes?
  • USAID’s evidence standards are embedded within its policies and include requirements for the use of evidence in strategic planning, project design, activity design, program monitoring, and evaluation. USAID has a Scientific Research Policy that sets out quality standards for research across the Agency. USAID’s Program Cycle Policy requires the use of evidence and data to assess the development context, challenges, potential solutions, and opportunities in all of USAID’s country strategies. Specific programs, such as Development Innovation Ventures (DIV), use evaluation criteria related to evidence and cost-effectiveness to make funding decisions to test and scale innovations.
6.2 Did the agency have a common evidence framework for funding decisions?
  • While there is no single evidence framework for all funding decisions, there are general guidelines for evaluation and scientific research, and some specific programs do use evidence frameworks or standards to make funding decisions.
  • Development Innovation Ventures (DIV) uses a tiered funding system to test and scale evidence-based innovations, making funding decisions based on its evaluation criteria: evaluation and impact; cost-effectiveness; evidence and evaluation; implementation; sustainability and pathway to scale; and project team (see page 6 in DIV’s most recent Annual Program Statement for the evaluation criteria). DIV’s expectations vary by stage, but every awardee must report against a set of pre-negotiated key performance indicators and nearly all grants are structured in a pay-for-performance model.
  • For large scale Stage 2 DIV grants of $500,000 or more, DIV requires evidence of impact that must be causal and rigorous – the grantee must either have rigorous underlying evidence already established, use this funding to run an evaluation with an evaluation partner, or run an evaluation with its own funding during the grant period. There must be significant demonstrated demand for the innovation.
6.3 Did the agency have a user-friendly tool that disseminated information on rigorously evaluated, evidence-based solutions (programs, interventions, practices, etc.), including information on what works where, for whom, and under what conditions?
  • USAID does have an Agency-wide repository for development information (including evaluation reports and other studies) which is available to the public at the Development Experience Clearinghouse. In addition, USAID uses the International Initiative for Impact Evaluations (3ie) database of impact evaluations relevant to development topics (including over 4,500 entries to date), knowledge gap maps, and systematic reviews that pull the most rigorous evidence and data from across international development donors. 3ie also houses a collection of institutional policies and reports that examine findings from its database of impact evaluations on overarching policy questions to help policymakers and development practitioners improve development impact through better evidence.
  • USAID’s Agency Programs and Functions policy designates technical bureaus as responsible for serving as the repository for the latest information in the sectors they oversee; prioritizing evidence needs and taking actions to build evidence; and disseminating that evidence throughout the agency for those sectors. Several USAID bureaus and sectors have also created user-friendly tools to disseminate information on evidence-based solutions.
6.4 Did the agency promote the utilization of evidence-based practices in the field to encourage implementation, replication, and application of evaluation findings and other evidence?
  • USAID’s approach to Collaborating, Learning, and Adapting (CLA) helps ensure that evidence from evaluations of USAID programming is shared with and used by USAID staff, partners, and stakeholders in the field. USAID requires a dissemination plan and post-evaluation action plan for each evaluation, and USAID field staff are encouraged to co-create evaluation action plans with key stakeholders based on evaluation evidence. USAID collects examples through the annual CLA Case Competition, which recognizes implementers, stakeholders, and USAID staff for their work generating and sharing technical evidence and learning from monitoring and evaluation; it is another way the Agency encourages evidence-based practices among its stakeholders.
  • In one example of USAID’s efforts to promote evidence utilization, the Mission in Senegal worked with a government ministry to co-create evaluation recommendations and an action plan based on those recommendations. Another example shows how an implementer adapted its approach based on findings from a mid-term evaluation for a project in Cambodia. The project maintains a collaborative and responsive relationship with the Mission and utilizes continuous learning and improvement supported by evidence for better development results.
  • USAID also periodically holds large learning events with partners and others in the development community around evidence, including, but not limited to, Evaluation Summits, engagement around the Self-Reliance Learning Agenda, and Moving the Needle. These gatherings are designed to build interest in USAID’s evidence, build capacity for applying that evidence and learning, and elicit evidence and learning contributions.
Score
6
Corporation for National and Community Service
6.1 Did the agency have a common evidence framework for research and evaluation purposes?
  • CNCS uses the same standard scientific research methods and designs for all of its studies and evaluations, following the model used by clearinghouses such as the Department of Education’s What Works Clearinghouse, the Department of Labor’s Clearinghouse for Labor Evaluation and Research, and the Department of Health and Human Services’ Home Visiting Evidence of Effectiveness project.
6.2 Did the agency have a common evidence framework for funding decisions?
  • CNCS has a common evidence framework for funding decisions in the Senior Corps and AmeriCorps State and National programs. This framework, which is articulated in the AmeriCorps State and National program notice of funding, includes the following evidence levels: pre-preliminary, preliminary, moderate, and strong.
6.3 Did the agency have a user-friendly tool that disseminated information on rigorously evaluated, evidence-based solutions (programs, interventions, practices, etc.), including information on what works where, for whom, and under what conditions?
  • The CNCS Evidence Exchange is a virtual repository of reports and resources intended to help CNCS grantees and other interested stakeholders find information about evidence- and research-based national service programs. R&E is working with a contractor to enhance the functionality and usability of the repository. Examples of the types of resources available in the Evidence Exchange include research briefs that describe the core components of effective interventions in areas such as education, economic opportunity, and health.
  • R&E also creates campaigns and derivative products to distill complex report findings and increase their utility for practitioners (for example, this brief on a study about the health benefits of Senior Corps). R&E has categorized reports according to their research design, so that users can easily search for experimental, quasi-experimental, or non-experimental studies, and those that qualify for strong, moderate, or preliminary evidence levels.
6.4 Did the agency promote the utilization of evidence-based practices in the field to encourage implementation, replication, and application of evaluation findings and other evidence?
  • As part of the AmeriCorps State and National FY19 application process, CNCS provided technical assistance to grantees on using evidence-based practices through webinars and calls. R&E and AmeriCorps conducted a process evaluation of grantees with varied replication experiences to produce a series of products designed to help grantees implement evidence-based interventions (including a forthcoming article in The Foundation Review). Senior Corps continues to encourage and support the use of evidence-based programs, as identified by the HHS’s Administration for Community Living, by its grantee organizations.
Score
9
U.S. Department of Education
6.1 Did the agency have a common evidence framework for research and evaluation purposes?
  • ED has an agency-wide framework for evidence that is based on ratings of studies’ internal validity. ED evidence-building activities are designed to meet the highest standards of internal validity (typically randomized controlled trials) when causality must be established for policy development or program evaluation purposes. When random assignment is not feasible, rigorous quasi-experiments are conducted. The framework was developed and is maintained by IES’s What Works Clearinghouse™ (WWC).
  • Since 2002, ED—as part of its compliance with the Information Quality Act and OMB guidance—has required that all “research and evaluation information products documenting cause and effect relationships or evidence of effectiveness should meet the quality standards that will be developed as part of the What Works Clearinghouse” (see Information Quality Guidelines). Those standards, currently in their fourth version, are maintained on the WWC website. A stylized representation of the standards can be found here, along with information about how ED reports findings from research and evaluations that meet these standards.
6.2 Did the agency have a common evidence framework for funding decisions?
  • ED’s evidence standards for its grant programs, as outlined in the Education Department General Administrative Regulations (EDGAR), build on ED’s What Works Clearinghouse™ (WWC) research design standards. ED employs these same evidence standards in all of its discretionary grant competitions that use evidence to direct funds to applicants that are proposing to implement projects that have evidence of effectiveness and/or to build new evidence through evaluation.
6.3 Did the agency have a user-friendly tool that disseminated information on rigorously evaluated, evidence-based solutions (programs, interventions, practices, etc.), including information on what works where, for whom, and under what conditions?
  • ED’s What Works Clearinghouse™ (WWC) identifies studies that provide valid and statistically significant evidence of effectiveness of a given practice, product, program, or policy (referred to as “interventions”), and disseminates summary information and reports on the WWC website. The WWC has reviewed more than 10,600 studies, which are available in a searchable database, and has committed to reviewing all publicly available evaluation reports generated under i3 grants. In spring 2019, the WWC tagged each study in its database to indicate whether study findings met EDGAR (and therefore ESSA) Tier 1/Strong Evidence or Tier 2/Moderate Evidence standards to make it easier for users to identify evidence-based interventions.
6.4 Did the agency promote the utilization of evidence-based practices in the field to encourage implementation, replication, and application of evaluation findings and other evidence?
  • IES has funded two projects to study and promote knowledge utilization in education, including the Center for Research Use in Education and the National Center for Research in Policy and Practice.
  • The Evidence Leadership Group has coordinated the development of revised evidence definitions and related selection criteria for competitive programs that align with ESSA to streamline and clarify provisions for grantees. These revised definitions align with ED’s suggested criteria for states’ implementation of ESSA’s four evidence levels, included in ED’s non-regulatory guidance, Using Evidence to Strengthen Education Investments. ED also developed a fact sheet to support internal and external stakeholders in understanding the revised evidence definitions. This document has been shared with internal and external stakeholders through multiple methods, including the Office of Elementary and Secondary Education ESSA technical assistance page for grantees.
  • WWC Practice Guides are based on reviews of research and experience of practitioners. These guides are designed to address challenges in classrooms and schools. The WWC released two new Practice Guides in FY19: Using Technology to Support Postsecondary Student Learning and Improving Mathematical Problem Solving in Grades 4 through 8. WWC began three more guides in FY19: Assisting Students Struggling in Mathematics; Career and Technical Education Programs in Community College Settings; and Supporting Prosocial and Positive Behavior.
  • IES manages the Regional Educational Laboratory (REL) program, which supports districts, states, and boards of education throughout the United States to use research and evaluation in decision making. The research priorities are determined locally, but IES approves the studies and reviews the final products.
Score
2
U.S. Dept. of Housing & Urban Development
6.1 Did the agency have a common evidence framework for research and evaluation purposes?
  • PD&R’s Program Evaluation Policy defines standards that prioritize rigorous methods for research and evaluation, covering impact evaluations, implementation or process evaluations, descriptive studies, outcome evaluations, and formative evaluations, and encompassing both qualitative and quantitative approaches. It also provides for dissemination of such evidence to stakeholders in a timely fashion.
6.2 Did the agency have a common evidence framework for funding decisions?
  • HUD seeks to employ tiered evidence in funding decisions by embedding implementation and impact evaluations in funding requests for program initiatives, including major program demonstrations that employ random assignment methods. These include the Moving To Work Expansion demonstration, the Rental Assistance Demonstration, the Rent Reform Demonstration, the Family Self-Sufficiency Demonstration, and the Housing Counseling Demonstration. Such trials provide robust evidence to inform scale-up funding decisions.
  • In FY17, HUD developed and piloted a new standardized data collection and reporting framework for its discretionary grant programs called Standards for Success. The framework consists of a repository of data elements that participating programs use in their grant reporting, creating common definitions and measures across programs for greater analysis and coordination of services.
6.3 Did the agency have a user-friendly tool that disseminated information on rigorously evaluated, evidence-based solutions (programs, interventions, practices, etc.), including information on what works where, for whom, and under what conditions?
  • HUD provides resources and assistance to support community partners in evidence-based practice through the HUD Exchange web portal. PD&R provides the public, policymakers, and practitioners with evidence of what works through the Regulatory Barriers Clearinghouse and HUD USER, which is a portal and web store for program evaluations, case studies, and policy analysis and research.
6.4 Did the agency promote the utilization of evidence-based practices in the field to encourage implementation, replication, and application of evaluation findings and other evidence?
  • HUD provides resources and assistance to support community partners in evidence-based practice through the HUD Exchange web portal. PD&R provides the public, policymakers, and practitioners with evidence of what works primarily through HUD USER, a portal and web store for program evaluations, case studies, and policy analysis and research; the Regulatory Barriers Clearinghouse; and initiatives such as Innovation of the Day, Sustainable Construction Methods in Indian Country, and the Consumer’s Guide to Energy-Efficient and Healthy Homes. This content is designed to provide current policy information, elevate effective practices, and synthesize data and other evidence in accessible formats such as Evidence Matters. Through these resources, researchers and practitioners can see the full breadth of work on a given topic (e.g., rigorous established evidence, case studies of what has worked in the field, and new innovations currently being explored) to inform their work.
Score
9
U.S. Department of Labor
6.1 Did the agency have a common evidence framework for research and evaluation purposes?
  • DOL uses the Cross-agency Federal Evidence Framework for evaluation planning and dissemination. Additionally, DOL collaborates with other agencies (the U.S. Department of Health and Human Services (HHS), the U.S. Department of Education’s Institute of Education Sciences (IES), the National Science Foundation (NSF), and the Corporation for National and Community Service (CNCS)) on refining cross-agency evidence guidelines and developing technological procedures to link and share reviews across clearinghouses. The Interagency Evidence Framework conveys the categories of evaluations, the quality review of evaluation methodologies and results, and the use of evaluation findings; the framework is accepted Department-wide. Additionally, the Clearinghouse for Labor Evaluation and Research’s (CLEAR) evidence guidelines, which describe quality standards for different types of studies, are applied to all independent evaluations, including all third-party evaluations of DOL programs, determined eligible for CLEAR’s evidence reviews across different topic areas. Requests for proposals also indicate that these CLEAR standards should be applied to all Chief Evaluation Office (CEO) evaluations when considering which designs are the most rigorous and appropriate to answer specific research questions.
6.2 Did the agency have a common evidence framework for funding decisions?
  • DOL uses the CLEAR evidence guidelines and standards to make decisions about discretionary program grants awarded using evidence-informed or evidence-based criteria. The published guidelines and standards are used to identify evidence-based programs and practices and to review studies to assess the strength of their causal evidence or to do a structured evidence review in a particular topic area or timeframe to help inform agencies what strategies appear promising and where gaps exist.
6.3 Did the agency have a user-friendly tool that disseminated information on rigorously evaluated, evidence-based solutions (programs, interventions, practices, etc.), including information on what works where, for whom, and under what conditions?
  • DOL’s CLEAR is an online evidence clearinghouse. CLEAR’s goal is to make research on labor topics more accessible to practitioners, policymakers, researchers, and the public more broadly, so that it can inform their decisions about labor policies and programs. CLEAR identifies and summarizes many types of research, including descriptive statistical studies and outcome analyses, implementation studies, and causal impact studies. For causal impact studies, CLEAR assesses the strength of the design and methodology in studies that look at the effectiveness of particular policies and programs. CLEAR’s study summaries and icons, found in each topic area, can help users quickly and easily understand what studies found and how much confidence to have in the results.
6.4 Did the agency promote the utilization of evidence-based practices in the field to encourage implementation, replication, and application of evaluation findings and other evidence?
  • DOL promotes the utilization of evidence-based practices in a variety of ways. For example, the Employment & Training Administration (ETA) maintains a user-friendly technical assistance tool that promotes state and local service providers’ use of evidence-based interventions through Workforce System Strategies, a comprehensive database of over 1,000 profiles that summarize a wide range of findings from reports, studies, technical assistance tools, and guides that support program administration and improvement. Additionally, recognizing that research over the past four decades has found that subsidized on-the-job training strategies like apprenticeship improve participants’ employment and earnings outcomes, DOL has awarded or announced several apprenticeship grant opportunities this fiscal year in addition to the State Apprenticeship Expansion Grants awarded in 2018. These include ETA’s Scaling Apprenticeship Through Sector-Based Strategies and Apprenticeships: Closing the Skills Gap opportunities and the Women’s Bureau’s Women in Apprenticeship and Nontraditional Occupations grant program.
Score
5
Millennium Challenge Corporation
6.1 Did the agency have a common evidence framework for research and evaluation purposes?
  • For each investment, MCC’s Economic Analysis (EA) division undertakes a Constraints Analysis to determine the binding constraints to economic growth in a country. To determine the individual projects in which MCC will invest in a given sector, MCC’s EA division combines root cause analysis with a cost-benefit analysis. The results of these analyses allow MCC to determine which investments will yield the greatest development impact and return on MCC’s investment. Every investment also has its own set of indicators for monitoring during the lifecycle of the investment and an evaluation plan for determining the results and impact of a given investment. MCC’s Policy for Monitoring and Evaluation details MCC’s evidence-based research and evaluation framework. Per the Policy, each completed evaluation requires a summary of findings, now called the Evaluation Brief, to summarize the key components, results, and lessons learned of the evaluation. Evidence from previous MCC programming is considered during the development of new programs. Per the Policy, “monitoring and evaluation evidence and processes should be of the highest practical quality. They should be as rigorous as practical and affordable. Evidence and practices should be impartial. The expertise and independence of evaluators and monitoring managers should result in credible evidence. Evaluation methods should be selected that best match the evaluation questions to be answered. Indicators should be limited in number to include the most crucial indicators. Both successes and failures must be reported.”
6.2 Did the agency have a common evidence framework for funding decisions?
  • MCC uses a rigorous evidence framework to make every decision along the investment chain, from country partner eligibility to sector selection to project choices. MCC uses evidence-based selection criteria, generated by independent, objective third parties, to select countries for grant awards. To be eligible for selection, World Bank-designated low- and lower-middle-income countries must first pass the MCC 2019 Scorecard – a collection of 20 independent, third-party indicators that objectively measure a country’s policy performance in the areas of economic freedom, investing in people, and ruling justly. An in-depth description of the country selection procedure can be found in the annual Selection Criteria and Methodology report.
6.3 Did the agency have a user-friendly tool that disseminated information on rigorously evaluated, evidence-based solutions (programs, interventions, practices, etc.), including information on what works where, for whom, and under what conditions?
  • All evaluation designs, data, reports, and summaries are made publicly available on MCC’s Evaluation Catalog. To further the dissemination and use of MCC’s evaluations’ evidence and learning, in May 2019 the Agency launched Evaluation Briefs, a new product to capture and disseminate the results and findings of its independent evaluation portfolio. An Evaluation Brief will be produced for each evaluation and offers a succinct, user-friendly, systematic format to better capture and share the relevant evidence and learning from MCC’s independent evaluations. These accessible products will take the place of MCC’s Summaries of Findings. Evaluation Briefs will be published on the Evaluation Catalog and will complement the many other products published for each evaluation, including evaluation designs, microdata, survey questionnaires, baseline findings, interim reports, and final reports from the independent evaluator.
6.4 Did the agency promote the utilization of evidence-based practices in the field to encourage implementation, replication, and application of evaluation findings and other evidence?
  • From FY18-FY19, MCC conducted internal research and analysis to understand where and how its published evaluations, datasets, and knowledge products are utilized. The forthcoming results of this analysis will guide future efforts on evidence-based learning such as which sectors MCC prioritizes for evidence generation and publication and what types of products best disseminate MCC’s evidence and learning. Evaluation Briefs are in part a result of MCC’s findings around evaluation user metrics. MCC finalized baseline metrics around evidence and evaluation utilization in April 2018 and is continuing to track global use of its knowledge products on a quarterly basis with a goal of expanding the base of users of MCC’s evidence and evaluation products.
  • MCC has also held in-country evaluation dissemination events to promote the use of results and further evidence building. For example, a dissemination event was held in Zambia in June 2019 to share results from two studies by an independent evaluator (the Centers for Disease Control and Prevention): (i) a baseline survey of 12,000 (future) beneficiary and control groups, and (ii) water quality data collected along the utility’s water distribution system and in people’s homes. Attendees of the event included more than 60 stakeholders representing water sector ministries, local community leaders, the water utility, the post-compact implementation unit, and donor partners.
  • To further bring attention to MCC’s evaluation and evidence, MCC periodically publishes an evaluation newsletter called Statistically Speaking. This newsletter highlights recent evidence and learning from MCC’s programs with a special emphasis on how MCC’s evidence can offer practical policy insights for policymakers and development practitioners in the United States and in partner countries. It also seeks to familiarize a wider audience with the evidence and results of MCC’s investments.
Score
4
Substance Abuse and Mental Health Services Administration
6.1 Did the agency have a common evidence framework for research and evaluation purposes?
  • There is great diversity across SAMHSA programming, ranging from community-level prevention activities to residential programs for pregnant and post-partum women with substance misuse issues. While this diversity allows SAMHSA to be responsive to a wide set of vulnerable populations, it limits the utility of a common evidence framework for the entire agency. Within Centers (the Center for Substance Abuse Prevention, the Center for Substance Abuse Treatment, and the Center for Mental Health Services), consistent evidence frameworks are in use and help to shape the process of grant-making (e.g., Center staff are familiar with the pertinent evidence base for their particular portfolios).
  • In 2011, based on the model of the National Quality Strategy, SAMHSA developed the National Behavioral Health Quality Framework (NBHQF). With the NBHQF, SAMHSA proposes a set of core measures to be used in a variety of settings and programs, as well as in evaluation and quality assurance efforts. The proposed measures are not intended to be a complete or total set of measures a payer, system, practitioner, or program may want to use to monitor quality of its overall system or the care or activities it provides. SAMHSA encourages such entities to utilize these basic measures as appropriate as a consistent set of indicators of quality in behavioral health prevention, promotion, treatment, and recovery support efforts across the nation.
6.2 Did the agency have a common evidence framework for funding decisions?
  • SAMHSA includes universal language about using evidence-based practices (EBPs) in its Funding Opportunity Announcements (FOAs), in a section entitled Using Evidence-Based Practices (EBPs). This language acknowledges that “EBPs have not been developed for all populations and/or service settings,” thus encouraging applicants to “provide other forms of evidence” that a proposed practice is appropriate for the intended population. Specifically, the language states that applicants should: (1) document that the EBPs chosen are appropriate for intended outcomes; (2) explain how the practice meets SAMHSA’s goals for the grant program; (3) describe any modifications or adaptations needed for the practice to meet the goals of the project; (4) explain why the EBP was selected; (5) justify the use of multiple EBPs, if applicable; and (6) discuss training needs or plans to ensure successful implementation. Lastly, the language includes resources the applicant can use to understand EBPs. Federal grants officers work in collaboration with the SAMHSA Office of Financial Resources to ensure that grantee funding announcements clearly describe the evidence standard necessary to meet funding requirements.
  • SAMHSA developed a manual, Developing a Competitive SAMHSA Grant Application, which explains information applicants will likely need for each section of the grant application. The manual has two sections devoted to evidence-based practices (p. 8, p. 26), including: 1) A description of the EBPs applicants plan to implement; 2) Specific information about any modifications applicants plan to make to the EBPs and a justification for making them; and 3) How applicants plan to monitor the implementation of the EBPs. In addition, if applicants plan to implement services or practices that are not evidence-based, they must show that these services/practices are effective.
6.3 Did the agency have a user-friendly tool that disseminated information on rigorously evaluated, evidence-based solutions (programs, interventions, practices, etc.), including information on what works where, for whom, and under what conditions?
  • Until 2018, SAMHSA regarded the National Registry of Evidence-based Programs and Practices (NREPP) as the primary online, user-friendly tool for identifying evidence-based programs for grantee implementation. In January 2018, SAMHSA announced that it was “moving to EBP [evidence-based practice] implementation efforts through targeted technical assistance and training that makes use of local and national experts and that will assist programs with actually implementing services….” NREPP was taken offline in August 2018. In August 2019, the Pew-MacArthur Results First Initiative announced it had restored users’ access to this information, which can be found in the Results First Clearinghouse.
6.4 Did the agency promote the utilization of evidence-based practices in the field to encourage implementation, replication, and application of evaluation findings and other evidence?
  • To date SAMHSA has produced 11 Evidence-Based Practice Knowledge Informing Transformation (KIT) guides to help move the latest information available on effective behavioral health practices into community-based service delivery. The KITs contain information sheets, introductory videos, practice demonstration videos, and training manuals. Each KIT outlines the essential components of the evidence-based practice and provides suggestions collected from those who have successfully implemented them.
  • In April 2018, SAMHSA launched the Evidence-Based Practices Resource Center (Resource Center) that aims to provide communities, clinicians, policy-makers and others in the field with the information and tools they need to incorporate evidence-based practices into their communities or clinical settings. The Resource Center contains a collection of science-based resources, including Treatment Improvement Protocols, toolkits, resource guides, and clinical practice guidelines, for a broad range of audiences. The Resource Center also directs users to nine issue-based or regionally-organized Technical Assistance projects, all of which promote the use of evidence-based practices in some way. For example, the purpose of the Mental Health Technology Transfer Center (MHTTC) Network is disseminating and implementing evidence-based practices for mental disorders into the field.