Artificial intelligence (AI), including machine learning, natural language processing, and large language models, may support implementation practice and research in tasks such as evidence synthesis, determinant assessment, strategy selection, monitoring, adaptation, and theory development. However, these applications of AI do not form a single, uniform category. They span a continuum from practice-facing applications that support local implementation work to research- and methods-facing applications that support evidence generation and synthesis. The guidance on how to classify, evaluate, and report these uses of AI remains limited. The AI Methods for Implementation Science (AIM-IS) program aims to develop, validate, and maintain a suite of products to guide the responsible use of AI across implementation practice, implementation research, and bridging use cases. AIM-IS is a multi-phase, multi-method methodological development program. The unit of analysis is the AI-for-implementation use case: a specific AI capability supporting a defined implementation practice or research task within a workflow, decision point, and governance context. Phase 1 is a living scoping review mapping published AI use cases in implementation science, including how they are evaluated and what risks they raise. Phase 2 is a qualitative interview study with implementation researchers, practitioners, AI experts, community members, and data infrastructure and governance experts to refine use cases and identify feasibility constraints, outcome priorities, and reporting needs. Phase 3 will integrate findings from Phases 1 and 2 to develop the draft AIM-IS products, including a framework, a taxonomy of use cases, guardrails for responsible use, a practical guide, outcome domains, and reporting items. Phase 4 will use an eDelphi process and consensus meeting to refine and finalize these products. 
Phase 5 will conduct usability testing to improve clarity and ease of use, resulting in the finalized AIM-IS products. AIM-IS is informed by implementation science, sociotechnical systems, equity, and responsible AI frameworks, and includes a living-update approach to support ongoing refinement. The AIM-IS program will deliver a suite of products, including a framework, toolkit, and reporting standard, to support the specification, governance, evaluation, and reporting of AI in implementation science. Together, these products aim to strengthen transparency, comparability, accountability, and attention to equity in how AI is used by implementation practitioners and researchers over time. Open Science Framework, March 15, 2026: https://doi.org/10.17605/OSF.IO/BX35K.
Understanding the effectiveness of implementation strategies to support uptake of evidence-based interventions (EBIs) requires examining activation of the mechanisms targeted by those strategies. This study uses data from the TEAMS (Translating Evidence-Based Interventions for Autism) hybrid type III implementation-effectiveness trial to examine whether leader-level and provider-level implementation strategies, when paired with provider training in AIM HI (An Individualized Mental Health Intervention for Autism) in mental health programs (Study 1) and CPRT (Classroom Pivotal Response Teaching) in schools (Study 2), successfully activated proposed implementation mechanisms (three for the leader-level strategy and two for the provider-level strategy). We also examined whether any of the identified mechanisms associated with the leader-level strategy mediated the previously reported effect of the strategy on implementation and child outcomes. Organizations were randomized to receive a leader-level strategy (TEAMS Leadership Institute [TLI]), a provider-level strategy, both strategies, or neither strategy (EBI provider training only). Leader participants were recruited from enrolled programs/districts and then supported recruitment of provider/child dyads. Children ranged in age from 3 to 13 years. The combined sample included 65 programs/districts, 95 TLI leaders, and 385 provider/child dyads. Multi-level modeling was used to test hypotheses. The hypothesized mechanisms were implementation leadership, implementation climate, and implementation support strategies for TLI, and EBI attitudes and motivation for training for TIPS. The leader-level strategy engaged the most proximal of the three hypothesized mechanisms (implementation support strategies). The provider-level intervention did not engage any of the hypothesized mechanisms.
There was an interaction between the leader-level and provider-level strategies on the implementation climate and provider motivation mechanisms, favoring groups that received both implementation strategies over those that received only the provider-level strategy. No mechanisms significantly mediated the effect of the leader-level strategy on implementation or clinical outcomes. This study provides evidence that a brief implementation leadership and climate training, TLI, increases leader use of specific actions to promote autism EBIs across two public service systems, children's mental health and public education. However, this mechanism did not fully account for strategy effects on fidelity or clinical outcomes. Findings advance the study of implementation mechanisms by examining how leadership training might work and identifying a clear need to focus on leader-level implementation strategies in these systems of care. ClinicalTrials.gov Identifier: NCT03380078.
Implementation strategies are methods or techniques to improve the adoption, implementation, sustainment, and scale-up of evidence-based interventions. Limited guidance exists on feasible processes for selecting and tailoring implementation strategies in community (non-clinical) settings. The Implementation Strategies Applied in Communities (ISAC) compilation is accompanied by a pragmatic matching process (ISAC Match). This study expands on ISAC Match by providing additional detail and potential approaches to complete the four-step matching process, including a case study from work in a state Cooperative Extension System. IMPLEMENTATION STRATEGIES APPLIED IN COMMUNITIES MATCHING PROCESS (ISAC MATCH): ISAC Match is intended to be applied within integrated research-practice partnerships or similar models. Before beginning the ISAC Match process, participants should have identified a new or existing evidence-based intervention they are interested in integrating (or improving the integration of) and have the power and scope to influence implementation. ISAC Match includes four steps: 1) reviewing available information on evidence-based intervention integration and conducting contextual inquiry, if needed, to understand barriers and facilitators; 2) identifying existing implementation strategies used in the implementing organization; 3) using recommended guidance tools to select relevant implementation strategies to overcome barriers and capitalize on facilitators; and 4) tailoring strategies to fit the settings in which they will be used. These steps are completed with health equity considerations in mind to ensure that implementation strategies are designed to improve adoption, implementation, and maintenance in ways that seek to narrow existing health disparities.
To illustrate the use of ISAC Match, this study applied the four-step ISAC Match process to select and tailor implementation strategies to increase Montana State University Extension Agents' adoption of built environment approaches that facilitate physical activity. The ISAC Match process was developed for community settings because of a lack of guidance on rapid, relevant methods for selecting and tailoring implementation strategies to overcome barriers and capitalize on facilitators. Future work is needed to determine whether the ISAC Match process is more efficient, and whether its results are more impactful, than other matching processes that are less specific to community settings.
Over the past two decades, implementation science has developed a strong conceptual foundation through the proliferation and widespread use of theories, models, and frameworks (TMFs). These have provided coherence, shared vocabulary, and methodological discipline across a rapidly expanding field. However, this success has also produced an unintended consequence: increasing reliance on deductive modes of inquiry, in which a limited set of established TMFs are repeatedly applied as analytic templates across diverse empirical contexts. This tendency toward early deductivism risks constraining theoretical development, reducing sensitivity to heterogeneity, complexity, and temporality inherent in implementation, and reinforcing methodological circularity. In this conceptual paper, we argue for an inductive renewal of implementation science that rebalances deduction with stronger inductive and abductive forms of reasoning. Rather than abandoning established TMFs, we propose reframing them as evolving heuristics - resources for structuring inquiry that remain open to refinement, extension, and selective reconfiguration through empirical engagement. We clarify the complementary roles of induction, abduction, and deduction in theory development, emphasizing abductive iteration as a mechanism for translating empirical discovery into cumulative conceptual advancement. We outline strategies for advancing this agenda across three interdependent levels. At the study level, this involves treating TMFs as provisional heuristics, re-embracing qualitative discovery, and using abductive reasoning to refine theory through engagement with unexpected findings. At the field level, shared infrastructures for synthesis and longitudinal learning are needed to support cumulative, context-sensitive theorizing and to account for the temporal dynamics of implementation. 
Institutionally, journals and funders must recalibrate incentives to value theory development, adaptation, and transparency alongside theory application. Drawing on examples from research on knowledge brokering and implementation scale-up, we show how theoretically informative contributions emerge when empirical surprises, temporal dynamics, and analytic tensions are used to interrogate and refine existing TMFs rather than being absorbed into pre-specified categories. A mature implementation science must move beyond asking which TMF best fits a study, toward examining how empirical phenomena challenge, extend, and reshape theory. Sustaining this balance is essential for theoretical coherence and continued conceptual innovation.
Qualitative methods are central to implementation research. Qualitative research provides rich contextual insight into lived experiences of health and illness, healthcare systems and care delivery, and complex implementation processes. However, quantitative methods have historically been favored by editors and reviewers who serve as gatekeepers to scientific knowledge. Thus, we underscore that editors and reviewers must be familiar with the underlying principles and strengths of qualitative methods to avoid perpetuating inappropriate evaluation criteria that hinder qualitative research dissemination and funding opportunities. We aim to help authors and researchers provide sufficient detail to dispel misperceptions, and to help editors and reviewers better evaluate studies using qualitative methods, to maximize dissemination of high-impact implementation research. We convened a panel of six researchers with extensive experience in: designing, conducting, and reporting on qualitative research in implementation science and other healthcare research; training and mentoring others on qualitative methods; and serving as journal editors and manuscript/grant peer reviewers. We reviewed existing literature, published and unpublished reviewer critiques of qualitative grants and manuscripts, and discussed challenges facing qualitative methodologists when disseminating findings. Over the course of one year, we identified candidate topics, ranked each by priority, and used a consensus-based process to finalize the inventory and develop written guidance for handling each topic. We identified and dispelled 10 common misperceptions that limit the impact of qualitative methods in implementation research. Five misperceptions were associated with the application of inappropriate quantitative evaluation standards (subjectivity, sampling, generalizability, numbers/statistics, interrater reliability).
Five misperceptions were associated with overly prescribed qualitative evaluation standards (saturation, member checking, coding, themes, qualitative data analysis software). For each misperception, we provide guidance on key considerations, responses to common critiques, and citations to appropriate literature. Unaddressed misperceptions can impede the contributions of qualitative methods in implementation research. We offer a resource for editors, reviewers, authors, and researchers to clarify misunderstandings and promote more nuanced and appropriate evaluation of qualitative methods in manuscripts and grant proposals. This article encourages a balanced assessment of the strengths of qualitative methods to enhance understanding of key problems in implementation research, and, ultimately, to strengthen the impact of qualitative findings.
Up to 50% of individuals with musculoskeletal road traffic injury (RTI) develop chronic pain, resulting in substantial individual and societal burden. Integrated psychological and physical care, such as StressModex, improves patient outcomes compared to physical treatment alone. However, StressModex is not routinely implemented in physiotherapy practice, due to limited training access and physiotherapists' lack of confidence in delivering psychological care. To address this gap, we developed a blended learning implementation strategy-Physiotherapist bIopsyChosocial On-line Training (PICOT)-guided by the integrated-Promoting Action on Research Implementation in Health Services (i-PARIHS) framework. The aims of this trial are to compare: (1) the effectiveness of PICOT versus in-person training on the reach of StressModex in routine community private physiotherapy practice; (2) the effectiveness of PICOT versus in-person training on adoption, implementation fidelity, sustainability, and maintenance of StressModex; (3) the effectiveness of PICOT versus in-person training on patient health outcomes; and (4) the cost-effectiveness of PICOT versus in-person training. Trial outcomes are informed by the RE-AIM framework. This is a hybrid type III implementation-effectiveness, cluster randomised, superiority trial with embedded economic and qualitative process evaluations. Thirty primary care physiotherapy clinics across Australia will be randomly assigned to either the PICOT or traditional 2-day in-person training. PICOT includes a 6-week online program, 6 weeks (once/week) of real-time online group training with individualised feedback, then three online clinical supervision sessions (once per fortnight). All online sessions are co-facilitated by a clinical psychologist and expert physiotherapist. Following training, physiotherapists will deliver StressModex to eligible patients (≥ 18 years, ≤ 12 weeks of musculoskeletal spinal pain post RTI, and at risk of poor recovery).
The primary implementation outcome is reach, defined as the proportion of eligible patients treated with StressModex over 8 months. Secondary outcomes include adoption (training participation and initial uptake), implementation (dose, fidelity, and sustainability of delivery), patient health outcomes (collected at Time 1, 8 weeks, and 6 and 12 months), and cost-effectiveness. This trial will provide critical evidence on scalable training models for embedding integrated psychological and physical care into physiotherapy practice. Findings will inform strategies to improve the implementation and sustainment of evidence-based interventions for musculoskeletal RTIs. ACTRN12624001268538. Registered on 18 October 2024. https://www.anzctr.org.au/Trial/Registration/TrialReview.aspx?id=388006&showOriginal=true&isReview=true.
Implementation science has a history of drawing from other fields to advance its science, yet understanding how approaches from marketing might enhance the field remains a largely untapped area of theoretical and methodological potential. Social marketing (i.e., applying commercial marketing to solve social or health problems) is a branch of marketing that shares many conceptual features with implementation science (e.g., behaviour change), but remains an unrealized opportunity for synergy. This review aimed to 1) describe studies that have tested social marketing interventions in controlled designs; 2) describe these interventions including their context, mechanism, and outcome; and 3) propose social marketing approaches that might be usefully applied to implementation science. This scoping review, with a team consensus discussion, followed JBI (formerly the Joanna Briggs Institute) methodological guidance and included a team of researchers and practitioners in implementation, marketing, and social marketing. Twelve databases were searched. Studies were included that 1) utilized a randomized or non-randomized controlled intervention design; and 2) tested a social marketing intervention as defined by five essential social marketing criteria. Two reviewers independently completed all screening and extraction. Variables extracted included intervention details per social marketing criteria and the intervention's context, mechanism, and outcome. Team consensus discussions of the scoping review results were used to determine approaches that might be usefully applied more broadly across implementation science. Screening of 4,867 citations yielded 28 included studies published from 1999-2023. 
All topics were from the health field and included nutrition (13, 46%), sexual health/family planning (6, 21%), physical activity (3, 11%), child safety (1, 4%), cancer screening (1, 4%), fall prevention (1, 4%), worksite safety (1, 4%), sanitation (1, 4%), and substance abuse (1, 4%). Novel theories identified included 'Exchange Theory' and 'Consumer Information Processing Model'. Proposed approaches to consider for application included: leverage emotions; design for appeal; consider what your audience values; understand the price; understand the place; emphasize competitive advantage; and use branding. This review examined the application of social marketing theories and approaches to implementation science. Applying social marketing approaches could invigorate novel and creative thinking in implementation science. Open Science Framework Registration link: osf.io/6q834.
In the U.S., racial and ethnic disparities in hypertension control contribute to disparities in cardiovascular mortality. Evidence-based practices (EBPs) for improving hypertension control have not been consistently applied across patient subgroups, especially in safety-net settings, contributing to observed disparities. The Los Angeles County Department of Health Services serves racially and ethnically diverse, low-income patients with hypertension and represents a valuable setting for research to reduce disparities. We designed a hybrid Type 3 effectiveness-implementation study using a three-arm, crossover randomized controlled trial to compare the effects of patient- and provider-focused strategies and a usual implementation strategy on key implementation and clinical outcomes. We will enroll 27 primary care clinics. Patient-focused implementation strategies aim to increase patient access to culturally and linguistically tailored educational materials on hypertension and improve patient engagement in hypertension care. Provider-focused strategies include training in culturally tailored hypertension care and activities to strengthen clinic workflows for home blood pressure monitoring, medication titration, referral to nurse-directed blood pressure clinics, and social needs screening and referral. Implementation facilitators provide support for these EBPs. The primary implementation outcome is provider EBP adoption clustered at the clinic level, based on a scoring system using medical records, clinic observation, and webinar participation. The primary health-related outcome is the proportion of patients in a clinic with controlled hypertension by race and ethnicity. We will use the constrained generalized Poisson mixed-effects model to compare changes in event rate of provider EBP adoption between the usual implementation strategy and either provider- or patient-focused strategies.
We will use constrained logistic mixed-effects models to assess the effect on change in blood pressure control. We will record implementation progress using the Stages of Implementation Completion tool and identify costs and resource use using the Cost of Implementing New Strategies tool. Our study contributes to the implementation science literature on cardiovascular health equity by examining alternative implementation strategies to increase use of culturally and linguistically tailored hypertension EBPs and social needs screening and intervention. Findings from our study will build evidence for implementation of hypertension EBPs in safety-net and other health systems serving racial and ethnic minority patients. ClinicalTrials.gov NCT06359691, registered April 10, 2024.
Stroke risk screening using transcranial Doppler (TCD) is a critical evidence-based tool for children with sickle cell anemia (SCA) that has been poorly implemented in the United States. The Dissemination and Implementation of Stroke Prevention Looking at the Care Environment (DISPLACE) study was designed to improve rates of stroke risk screening for SCA using interventions informed by an extensive multi-level barriers and facilitators assessment. This report describes the final outcomes of a large, randomized implementation trial comparing two intervention arms: 1) an application designed to track TCD implementation, ProviderMinder™, versus 2) ProviderMinder™ plus a single coordinator intervention. All sites additionally received a rebranding and educational intervention. The primary outcome was the difference in stroke risk screening rates between intervention arms. The intervention group was compared to four sites that did not implement either intervention and to their baseline rates as secondary outcomes. The initial part of DISPLACE included 28 sites from which 16 sites with poor stroke risk screening implementation were included in the trial and randomized to intervention arms. All sites entered patient data into a secure, customized electronic database and were required to use ProviderMinder™ for stroke risk screening data entry. Three sites were unable to adopt ProviderMinder™ and a fourth site from the original DISPLACE cohort was added to this group, resulting in thirteen intervention sites and four non-implementing sites (NIS). NIS collected data retrospectively for the same period as the implementation trial. A generalized quasi-likelihood Poisson mixed effects regression model compared screening rates between groups and timepoints while controlling for baseline screening rates and site size. Unadjusted stroke risk screening rates were also compared via two-proportion Z-tests for all outcomes. 
The intervention-by-timepoint interaction indicated statistically significant improvement for the ProviderMinder™ arm relative to the combined intervention arm (difference of 10.0%) and for the intervention group (both arms) compared to NIS (difference of 15.9%). Screening rates increased by 28.0% from baseline to intervention, with an overall rate of 76.8%. Our intervention approach in DISPLACE significantly improved stroke risk screening for children with SCA, with procedure-patient tracking emerging as an important component for improving care. Clinical trial number: ClinicalTrials.gov; NCT04173026; 6/4/2020; https://clinicaltrials.gov/study/NCT04173026?cond=NCT04173026&rank=1.
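The unadjusted comparisons described above can be sketched in code. The following is a minimal, standard-library implementation of a pooled two-proportion Z-test; the counts are hypothetical illustrations, not figures from the DISPLACE trial.

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z(x1: int, n1: int, x2: int, n2: int) -> tuple[float, float]:
    """Pooled two-proportion Z-test; returns (z statistic, two-sided p-value)."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)                     # pooled proportion under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))       # two-sided
    return z, p_value

# Hypothetical counts (illustrative only): 76.8% vs. 48.8% screened.
z, p = two_proportion_z(768, 1000, 488, 1000)
print(z, p)
```

With samples of this hypothetical size, a 28-percentage-point difference yields a large Z statistic and a p-value far below conventional thresholds.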
Severe hospital-acquired neonatal infections and resistant bacterial colonization in neonatal care are a worldwide challenge. Preventing the spread of resistant bacteria in neonatal intensive care units (NICUs) is specifically challenging, for example, due to multi-patient rooms, spaces being crowded with equipment, and high antibiotic use in these settings. Kangaroo care (KC), a practice that involves skin-to-skin contact between newborn infants and caregivers, is a promising, low-cost intervention that has been associated with reduced morbidity and mortality in low-birthweight infants. Despite these and other health benefits, KC has not been implemented systematically or consistently in NICUs following current WHO guidelines. The NeoIPC project aims to optimize KC practices in NICUs and to determine the effect on severe neonatal infections and resistant bacterial colonization among high-risk infants. Within the NeoIPC project, NeoDeco is a multi-center, parallel-group, cluster-randomized type 2 hybrid effectiveness-implementation study with 24 NICUs from five European countries representing clusters. NeoImplement, comprising the implementation elements of NeoDeco, focuses on (1) providing implementation support to sites in the intervention arm of the study and (2) evaluating the implementation of optimized KC in intervention sites. Implementation support consists of core implementation strategies that are offered to all intervention sites as well as the co-design of tailored implementation strategies for each individual site. Innovative methods supporting this co-design process are presented in this protocol. The implementation evaluation comprises a mixed-methods longitudinal study evaluating barriers and facilitators and various implementation outcomes, including a comprehensive economic evaluation.
NeoImplement focuses on implementing optimized KC in participating NICUs by offering and co-developing strategies that can be sustained beyond the duration of the study. The accompanying implementation evaluation will provide insights into the effectiveness, feasibility, acceptability, sustainability, and cost-effectiveness of strategies targeting the implementation of optimized KC in European NICUs. A long-term goal of the study is to develop strategies for implementing KC that can be applied by NICUs beyond this study and to present an approach for how KC champions in NICUs themselves can develop context-sensitive implementation strategies. NCT05993442, December 27, 2024, https://classic.clinicaltrials.gov/ct2/history/NCT05993442.
Randomized rollout trial designs, including stepped wedge designs, are commonly used to examine how well an evidence-based intervention or package is being implemented in community or healthcare settings. The multitude of implementation research questions and specific hypotheses suggests the need for diverse randomized rollout implementation trial designs, assignment principles and procedures, and statistical modeling. We separate key research questions and identify mixed effects models for randomized implementation rollout trials involving 1) a single implementation strategy, testing how this strategy varies over time and/or with the resources that are allocated; 2) comparison of two distinct implementation strategies; and 3) three distinct strategies or components tested in a single trial. Appropriate rollout designs, optimal assignment methods, and other design and analysis considerations are discussed for trials of up to three distinct implementation strategies. To examine improvement in implementation outcomes, we present a Fixed-Length Staggered Rollout Trial Design that assesses how well a sustainment period continues to produce outcomes; the Rollout Implementation Optimization (ROIO) methodology illustrates testing for quality improvement. For comparing an existing strategy to a new one, we focus on a Stepped Wedge design, and for comparing two new strategies we describe a Head-to-Head Rollout trial design. To test for synergy between two components, we introduce a Head-to-Head Rollout trial design, and for testing an existing strategy against a new one followed by a sustainment period, we recommend a Three-Phase Sequential Rollout Implementation trial design. Modeling choices are described, including options for specifying random effects that capture variation across sites and clustering. We discuss comparisons of superiority versus non-inferiority testing and multiple contrasts. To support use of these six designs and analyses, we provide computational code.
The large class of randomized rollout implementation trial designs provides rich opportunities to address research questions posed by implementation scientists. Balancing sites across cohorts is important before randomly assigning each cohort's time of transition to a new implementation strategy. Specific hypotheses are tested with mixed effects models in which fixed effects capture comparisons of implementation conditions and random effects account for variation across sites and clustering.
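The idea of balancing sites across cohorts before randomizing transition times can be illustrated with a short sketch. This is a minimal illustration under assumed inputs, not the authors' assignment procedure: hypothetical site sizes are snake-ordered into cohorts of similar total size, and the cohorts' transition periods are then randomly ordered, as in a stepped wedge rollout.

```python
import random

def balanced_rollout_schedule(site_sizes: dict[str, int], n_cohorts: int,
                              seed: int = 7) -> dict[str, int]:
    """Assign sites to cohorts balanced on size, then randomize the order
    in which cohorts transition to the new strategy (stepped wedge).
    Returns a mapping {site: transition_period}."""
    rng = random.Random(seed)
    # Snake-order sites by size so each cohort receives a similar size mix.
    ranked = sorted(site_sizes, key=site_sizes.get, reverse=True)
    cohorts = [[] for _ in range(n_cohorts)]
    for i, site in enumerate(ranked):
        lap, pos = divmod(i, n_cohorts)
        idx = pos if lap % 2 == 0 else n_cohorts - 1 - pos
        cohorts[idx].append(site)
    # Randomly order the cohorts' transition periods 1..n_cohorts.
    periods = list(range(1, n_cohorts + 1))
    rng.shuffle(periods)
    return {site: period
            for cohort, period in zip(cohorts, periods)
            for site in cohort}

# Hypothetical sites with caseload sizes (illustrative only).
sizes = {"A": 120, "B": 95, "C": 80, "D": 60, "E": 45, "F": 30}
schedule = balanced_rollout_schedule(sizes, n_cohorts=3)
print(schedule)
```

Here the six hypothetical sites form three cohorts with totals of 150, 140, and 140 patients, so any cohort-by-period comparison is not confounded with site size.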
Learning Health Systems (LHSs) link research and health service delivery by generating evidence to guide decision-making and continuous improvement. Although various LHS frameworks exist, there is limited practical guidance for how LHSs can improve implementation. This systematic review aimed to consolidate existing guidance to identify the infrastructure (pillars) and improvement processes (steps) required to support an LHS cycle that improves the implementation (including scale up or sustainment) of health programs, policies, or practices. We searched five databases and grey literature for documents describing an LHS model, process, process model, guideline, or tool (i.e., guidance) intended to improve the quality of implementation, scale-up, and/or sustainment of health interventions. Title, abstract, and full-text screening were conducted independently by two reviewers. Data were synthesised separately for pillars and steps. Framework synthesis identified pillars and steps, informed by an existing LHS framework and refined iteratively; thematic synthesis explored patterns within each. From 12,151 records and 25 websites, 96 guidance documents were included. Six Pillars were identified as important to operationalise LHS improvement processes: 1) Interest holder engagement; 2) Workforce development and capacity; 3) Evidence surveillance and synthesis; 4) Data collection and management; 5) Governance and organisational processes; and 6) Cross-cutting infrastructure. The improvement process comprised 10 'Steps' across three LHS phases: Phase 1) Knowledge to Practice: Identify and understand the problem; Decide and plan for action; Assess and build capacity; Pilot; Phase 2) Practice to Data: Execute the action; Collect data; Monitor and respond; Phase 3) Data to Knowledge: Analyse and evaluate; Disseminate; and Decide (continue, adapt, or cease improvement efforts).
Despite the diversity in purpose and context across included documents, the consolidated steps and pillars were conceptually consistent, suggesting a shared foundation. Some contextual variation in emphasis and operationalisation was noted, particularly among guidance focused on scale-up or sustainment. This review consolidated LHS pillars and improvement steps to better implement, scale, or sustain health interventions. Findings provide a structured yet adaptable approach for operationalising implementation-focused learning cycles within LHSs, inform forthcoming WHO guidance, and support more systematic, responsive use of evidence in health systems. The review protocol was prospectively registered on the Open Science Framework (https://doi.org/10.17605/OSF.IO/V4JRC).
Monitoring systems are important for evaluating key outcome measures, identifying opportunities for improvement, and informing public health investment. While disease surveillance systems are highly developed and widely used, monitoring systems for the implementation of public health programs and policies in community settings are less established. To address this gap, this review aimed to: 1) describe the scope of the literature on implementation monitoring systems and their operational features; and 2) synthesise this literature to produce features and suggested actions for system design. A systematic search of five databases and grey literature sources was conducted to identify systems, frameworks, or guidance for monitoring the implementation of public health programs or policies in community settings. Studies focused on clinical healthcare or disease surveillance were excluded. Two authors independently screened titles, abstracts, and full texts for eligibility. Included documents were then categorised by 'Case' (one or more documents exploring the same monitoring system, framework, or topic). For Research Aim 1, characteristic data for each Case were extracted and narratively summarised. For Research Aim 2, Best Fit Framework Synthesis was employed using an a priori framework informed by disease surveillance models. Full texts of included documents were coded, and the framework was iteratively modified, to develop a new framework with key features and suggested actions relevant to implementation monitoring systems in community settings. Ninety-seven documents were identified, describing 75 distinct real-world implementation monitoring Cases. Aim 1: Most Cases were intended for use in high-income countries (64%) and focused on monitoring programs (81%) rather than policy. The most common topic areas related to nutrition (36%) and reproductive, HIV, and sexual health (28%).
Primary responsibility for monitoring systems was most often held by national-level agencies (43%). Aim 2: Synthesis led to 13 key features of monitoring systems, with corresponding suggested actions across five broad Action Areas: 1) planning and preparation; 2) data collection activities; 3) system appraisal; 4) partner engagement; and 5) system revision. Findings emphasise that monitoring systems require attention across multiple Action Areas, including planning and resourcing, data collection, partner engagement, and system improvements (through both proactive appraisals and real-time response). This manuscript offers a foundational framework to guide policymakers and practitioners in monitoring the implementation of community-based public health policy or programs.
Implementation scientists increasingly recognize the value of multiple strategies to improve the adoption, fidelity, and scale up of an evidence-based intervention (EBI). However, with this recognition comes the need for alternative and innovative methods to ensure that the package of implementation strategies works well within constraints imposed by the need for affordability, scalability, and/or efficiency. The aim of this article is to illustrate that this can be accomplished by integrating principles of intervention optimization into implementation science. We use a hypothetical example to illustrate the application of the multiphase optimization strategy (MOST) to develop and optimize a package of implementation strategies designed to improve clinic-level adoption of an EBI for smoking cessation. We describe the steps an investigative team would take using MOST for an implementation science study. For each of the three phases of MOST (preparation, optimization, and evaluation), we describe the selection, optimization, and evaluation of four candidate implementation strategies (training, treatment guide, workflow redesign, and supervision). We provide practical considerations and discuss key methodological points. Our intention in this methodological article is to inspire implementation scientists to integrate principles of intervention optimization in their studies, and to encourage the continued advancement of this integration.
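In a MOST optimization phase, candidate strategies are typically treated as on/off factors in a factorial experiment so that each strategy's main effect can be estimated. As an illustration only (the abstract provides no code, and the component names here simply mirror its hypothetical example), a minimal Python sketch of enumerating the experimental conditions for four such factors:

```python
from itertools import product

# Four hypothetical candidate implementation strategies, each treated as
# an on/off factor in a full factorial optimization experiment.
COMPONENTS = ["training", "treatment_guide", "workflow_redesign", "supervision"]

def factorial_conditions(components):
    """Enumerate every on/off combination of the components."""
    return [dict(zip(components, levels))
            for levels in product([False, True], repeat=len(components))]

conditions = factorial_conditions(COMPONENTS)
# A 2^4 factorial yields 16 conditions; clinics would be randomized
# across them to estimate each strategy's contribution.
```

The key design point this makes concrete is efficiency: 16 conditions suffice to estimate all four main effects simultaneously, rather than running four separate two-arm trials.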
The increased prevalence of autism spectrum disorder creates a sense of urgency to improve outcomes for this population in publicly funded education systems, the primary setting in which autistic children receive behavioral health services in the United States. Important barriers to progress include a lack of feasible clinical interventions that address autistic children's externalizing behaviors in schools and major challenges sustaining fidelity to newly implemented programs over time. This trial addresses these gaps by (1) testing the clinical effectiveness of the Research Units on Behavioral Interventions in Educational Settings (RUBIES) program relative to educator psychoeducation on externalizing behaviors of autistic children in public elementary schools, and (2) testing the effects of adding a leadership-focused organizational implementation strategy, Helping Educational Leaders Mobilize Evidence (HELM), to educator coaching in RUBIES on RUBIES sustainment. In a cluster-randomized, hybrid type 2 effectiveness-implementation trial, schools will be randomized to one of three arms: 1) educator coaching in RUBIES and school participation in HELM; 2) educator coaching in RUBIES only; or 3) a control condition incorporating an active clinical comparator, educator psychoeducation. We will enroll 42 schools and 126 educators yoked to 126 elementary-aged autistic children. Depending on arm, educators will complete study instruments up to six times: 1) Spring semester prior to the year of school and student enrollment (implementation baseline; arms 1-2); 2) Fall semester Year 1 (clinical baseline; arms 1-3); 3) 16 weeks (arms 1-3); 4) 24 weeks (arms 1-3); 5) 52 weeks (arms 1-2); and 6) 76 weeks (arms 1-2). The primary clinical outcome compares arms 1 and 2 vs. arm 3 on change in autistic children's externalizing behavior from clinical baseline to 24 weeks. The primary implementation outcome compares arm 1 vs. 
arm 2 on RUBIES sustainment, operationalized as educators' average RUBIES fidelity at 52 and 76 weeks. Generating evidence for the clinical effectiveness of RUBIES addresses a significant gap in educator-delivered interventions to minimize highly prevalent externalizing behaviors among autistic children in public schools. Simultaneously, testing the effectiveness of HELM on sustainment of RUBIES will inform future efforts to successfully implement and sustain new innovations for autistic youth in public schools. ClinicalTrials.gov NCT07276750. Date of registration: 12/10/25. URL of trial registry record: https://clinicaltrials.gov/study/NCT07276750?cond=Autism&intr=RUBIES&rank=1.
Process evaluations are considered an essential component in conducting and reporting complex interventions, such as those studied in randomised controlled trials (RCTs) of implementation interventions, to explain the effect of implementation interventions. Given the growth of RCTs of implementation interventions with embedded process evaluations, it is timely to review the explanatory learnings to date. This scoping review explores process evaluations of RCTs of implementation interventions to examine how studies are conducted and what insights can be offered about how and why implementation interventions achieve (or not) their intended impacts. The scoping review was conducted in accordance with the JBI methodology. MEDLINE, CINAHL, Scopus, Web of Science and PsycINFO were searched. Articles were screened and data were extracted by two independent reviewers. Of the 5857 studies screened, 81 process evaluations were included. Two process evaluations reported on the same trial, resulting in a final sample of n = 80 independent studies. Nearly half of the studies (48%) reported on implementation trials with no demonstrated effect on the primary outcome (null), while n = 32 (40%) reported on trials where the intervention group demonstrated positive changes in the primary outcome (positive). Seven studies (9%) had mixed findings and n = 3 (4%) studies had no reported trial outcomes. When comparing process evaluation findings from positive and null trials, few discernible patterns that clearly explained the difference in outcomes were identified. Education and training was the most common strategy used in implementation interventions, yet one of the most common implementation barriers reported related to knowledge and self-efficacy, which could indicate a misalignment. 
Availability of resources was the most prominent barrier for both positive and null trials and there was little evidence that implementation interventions were tailored to context despite prominent barriers and enablers at the inner and outer setting level. Process evaluation studies embedded in RCTs of implementation interventions are recommended as an important method to explain whether and how interventions produce their intended effect. This review suggests a need to further optimise the design and evaluation of implementation interventions, including the conduct and reporting of process evaluations, to continue advancing the science and practice of implementation. Protocol published in Open Science Framework, May 10 2022 (Collyer et al., Process evaluations in randomised trials of implementation interventions in health care: a scoping review protocol. In Open science framework, 2022).
Integrated knowledge translation (IKT) is an approach facilitating collaboration between researchers and decision-makers towards evidence-informed decision-making. Although IKT is increasingly evaluated in various contexts, less is known about its implementation process, including in low- and middle-income countries. The Collaboration for Evidence-based Healthcare and Public Health in Africa (CEBHA+) developed, implemented and evaluated an IKT approach across five countries. Here, we examined how the IKT approach was implemented in the African-German multi-country research consortium, investigating project-level context; implementation process, strategy, and outcomes; and exploring intervention core components. This process evaluation used a mixed-methods comparative case study design. Following a previously published protocol, the main authors of this paper surveyed and interviewed African CEBHA+ researchers and their partners from policy and practice in 2020/2021 and 2022/2023 and identified relevant IKT-related documents. We drew on our programme theory and implementation science frameworks to undertake qualitative content analysis of interview data and documents. Data were analysed within sites, integrated with descriptively analysed quantitative survey data, and subsequently compared across sites. We enrolled 36 researchers and 19 decision-makers and analysed 92 IKT-related documents. IKT was implemented at the five sites in Ethiopia, Malawi, Rwanda, South Africa, and Uganda. In our cross-site analysis of fidelity and adaptability of IKT, we identified three core components of the IKT approach: (i) continuous tailored engagement between researchers and decision-makers, (ii) researchers' commitment to research impact, and (iii) linking to existing KT routines. 
The context analysis revealed that IKT implementation was facilitated by local KT structures, pre-existing knowledge translation routines and relationships with decision-makers, senior leadership motivation, and funder support including a dedicated budget for IKT activities. Feasibility of IKT implementation was reduced by administrative challenges, overall project complexity, and conflicting priorities. This research leveraged a unique opportunity to study a systematic IKT approach implemented across sites in five African countries in the context of a large international research consortium. The findings can inform IKT design and implementation in other multi-site and multi-country projects. Particularly, the identified core components can guide adaptation and refinement of IKT in contextually diverse settings, including low- and middle-income countries.
The Consolidated Framework for Implementation Research (CFIR) is a determinant framework that includes constructs from many implementation theories, models, and frameworks; it is used to predict or explain barriers and facilitators to implementation success. CFIR is among the most widely applied implementation science frameworks, and after 15 years of use in the field, the framework was updated based on user feedback obtained via literature review and survey. Dissemination of the updated CFIR and accompanying outcomes addendum resulted in hundreds of requests from users for further guidance in applying the framework. In addition, observations of potential and actual misuse of CFIR in grant reviews and published manuscripts were the catalyst for the development of this user guide. As a result, the objective of this article is to provide a user guide and essential tools and templates for using CFIR in implementation research. This user guide was generated from the combined wisdom and experience of the CFIR Leadership Team, which includes the lead developers of the original and updated CFIR (LJD, CMR), and has collectively used CFIR in more than 50 projects. The five steps as well as the tools and templates were finalized via consensus discussions. The five steps below guide users through an entire research project using CFIR and include 1) Study Design; 2) Data Collection; 3) Data Analysis; 4) Data Interpretation; and 5) Knowledge Dissemination. In addition, the article provides a Frequently Asked Questions (FAQs) section based on user queries and six tools and templates: 1) CFIR Construct Example Questions; 2) CFIR Construct Coding Guidelines; 3) Inner Setting Memo Template; 4) CFIR Construct Rating Guidelines; 5) CFIR Construct x Inner Setting Matrix Template; and 6) CFIR Implementation Research Worksheet. 
This user guide details how to use CFIR in implementation research, from the design of the study through dissemination of findings, answers frequently asked questions, and offers essential tools and templates. We hope this guidance will facilitate appropriate and consistent application of the framework as well as generate feedback and critique to advance the field.
Homeless-experienced Veterans (HEVs) have higher rates of substance use disorders (SUDs) than housed Veterans, which impairs their ability to retain housing. The Department of Housing and Urban Development-VA Supportive Housing (HUD-VASH) initiative, which provides subsidized permanent housing and supportive services, contributed to the 50% reduction in Veteran homelessness over the past decade. However, ~40% of Veterans exit HUD-VASH within two years, often due to untreated SUDs. We will use two strategies to support the implementation of Medications for Addiction Treatment (MAT) and Cognitive Behavioral Therapy for Substance Use Disorders (CBT-SUD) in 12 HUD-VASH sites; conduct an evaluation of this implementation effort; and generate an implementation playbook to support continued spread of MAT and CBT-SUD in HUD-VASH. We will use Replicating Effective Programs (REP) to implement MAT and CBT-SUD at 12 sites over 18 months. After 9 months of REP alone, half (n = 6) of these sites will also receive Consumer Engagement (CE) for 9 months, activating HEVs to adopt these practices via peer coaching. We will conduct a type 3 hybrid cluster-randomized trial to compare the impacts of REP versus REP + CE. Randomization will occur at two levels: implementation start date (3 cohorts) and the implementation strategy (REP versus REP + CE). We will use stratified block randomization to balance site size among sites receiving each strategy across cohorts. We will use mixed methods to assess the impacts of REP versus REP + CE on implementation outcomes (reach [primary outcome], adoption, and sustainment); Veteran outcomes (primarily housing); provider and Veteran experiences; and costs and budget impacts. We hypothesize that REP + CE will have higher implementation costs than REP but result in improved MAT and CBT-SUD implementation and Veteran outcomes, leading to a business case for REP + CE. Implementing MAT and CBT-SUD within HUD-VASH can improve HEVs' housing and health. 
By identifying effective strategies to support the implementation of these practices, we aim to inform other implementation efforts of behavioral health practices in homeless service settings. This project was registered with ClinicalTrials.gov as "Coordinated Access for Addiction Recovery and Equity in VA Supportive Housing." Trial registration NCT07141394, registered 8/26/2025 (https://clinicaltrials.gov/study/NCT07141394?term=CARE-VASH&rank=1).
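Stratified block randomization, as described in the trial above, allocates sites within each stratum (here, site size) in small permuted blocks so that arms stay balanced within every stratum. As an illustration only (the protocol publishes no code; all identifiers below are hypothetical), a minimal Python sketch:

```python
import random

def stratified_block_randomize(sites, strata_key, arms, seed=0):
    """Assign sites to arms using permuted blocks within each stratum,
    keeping each stratum balanced across arms."""
    rng = random.Random(seed)
    assignment = {}
    strata = {}
    for site in sites:
        strata.setdefault(site[strata_key], []).append(site["id"])
    for stratum_sites in strata.values():
        rng.shuffle(stratum_sites)
        for i, site_id in enumerate(stratum_sites):
            # Start a fresh shuffled block of arms at each block boundary.
            if i % len(arms) == 0:
                block = list(arms)
                rng.shuffle(block)
            assignment[site_id] = block[i % len(arms)]
    return assignment

# Twelve hypothetical sites stratified by size, split between REP and REP+CE.
sites = [{"id": f"site{n}", "size": "large" if n < 6 else "small"}
         for n in range(12)]
alloc = stratified_block_randomize(sites, "size", ["REP", "REP+CE"])
```

Because each block contains every arm exactly once, each size stratum ends up with an equal number of sites per arm, which is the balance property the trial relies on.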
People with serious mental illness die 10-20 years earlier than the overall population, mainly from cardiovascular disease. Although effective interventions to manage cardiovascular disease risk in this population exist, they have not been widely implemented in community settings. IDEAL Goals is an empirically supported, cardiovascular risk reduction program tailored for people with serious mental illness (i.e., "clients") and designed to be delivered by clinicians and staff in community mental health settings. In this trial, we use Replicating Effective Programs (REP) as the foundational implementation strategy to test the effects of two additional strategies, Coaching and Facilitation, on improving the number of IDEAL Goals sessions clients receive in community mental health organizations in Maryland and Michigan. This cluster-randomized hybrid Type 3 effectiveness-implementation trial will use a non-restricted sequential multiple assignment randomized trial (SMART) design that randomizes organizations at two points, months 0 and 6, of the 18-month IDEAL Goals intervention. Organizations will receive one of four sequences of implementation strategies: (1) REP only; (2) REP + Coaching; (3) REP + Facilitation; or (4) REP + Coaching + Facilitation. The primary aim is to determine the effect of the most intensive sequence of strategies (REP + Coaching + Facilitation) versus REP only on the number of IDEAL Goals sessions clients receive over 18 months. The secondary aim is to determine the marginal effects of Coaching and Facilitation on the number of IDEAL Goals sessions clients receive over 18 months. 
Exploratory aims include: (1) assessing tailoring variables to inform a future adaptive implementation intervention to scale IDEAL Goals; (2) estimating the cost of delivering IDEAL Goals and implementation strategies; and (3) examining the effects of different sequences of implementation strategies on: clients' receipt of cardiovascular disease risk factor management processes and outcomes over 18 months; and clients' receipt of IDEAL Goals over 30 months. Qualitative efforts will explore implementation strategy mechanisms, adaptations, and participants' experience of delivering and receiving IDEAL Goals. To meaningfully reduce premature mortality for people with serious mental illness, it is imperative to test strategies that can facilitate optimal uptake and continued sustainability of cardiovascular risk reduction programs in community settings. ClinicalTrials.gov identifier: NCT06674616, registered on November 1, 2024.
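The non-restricted SMART design above randomizes each organization twice, at month 0 and month 6, and the two independent randomizations jointly produce the four embedded strategy sequences. As an illustration only (the protocol publishes no code; all names below are hypothetical, and real allocation would use balanced rather than simple randomization), a minimal Python sketch of the two-stage assignment:

```python
import random

def smart_assign(org_ids, seed=0):
    """Two-stage randomization: month 0 decides whether Coaching is added
    to REP; month 6 independently decides whether Facilitation is added.
    Together the two decisions yield the four embedded sequences."""
    rng = random.Random(seed)
    sequences = {}
    for org in org_ids:
        coaching = rng.random() < 0.5       # month-0 randomization
        facilitation = rng.random() < 0.5   # month-6 randomization
        parts = ["REP"]
        if coaching:
            parts.append("Coaching")
        if facilitation:
            parts.append("Facilitation")
        sequences[org] = " + ".join(parts)
    return sequences

sequences = smart_assign([f"org{i}" for i in range(40)])
```

The sketch makes the "non-restricted" feature visible: the month-6 randomization happens for every organization regardless of its month-0 arm or interim response, which is what lets the trial estimate the marginal effects of Coaching and Facilitation separately.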