BACKGROUND: Many of society's health problems require research-based knowledge acted on by healthcare practitioners together with implementation of political measures from governmental agencies. However, there has been limited knowledge exchange between implementation science and policy implementation research, which has been conducted since the early 1970s. Based on a narrative review of selective literature on implementation science and policy implementation research, the aim of this paper is to describe the characteristics of policy implementation research, analyze key similarities and differences between this field and implementation science, and discuss how knowledge assembled in policy implementation research could inform implementation science. DISCUSSION: Following a brief overview of policy implementation research, several aspects of the two fields were described and compared: the purpose and origins of the research; the characteristics of the research; the development and use of theory; determinants of change (independent variables); and the impact of implementation (dependent variables). The comparative analysis showed that there are many similarities between the two fields, yet there are also profound differences. Still, important learning may be derived from several aspects of policy implementation research, including issues related to the influence of the context of implementation and the values and norms of the implementers (the healthcare practitioners) on implementation processes. Relevant research on various associated policy topics, including The Advocacy Coalition Framework, Governance Theory, and Institutional Theory, may also contribute to improved understanding of the difficulties of implementing evidence in healthcare. Implementation science is at a relatively early stage of development, and advancement of the field would benefit from accounting for knowledge beyond the parameters of the immediate implementation science literature. SUMMARY: There are many common issues in policy implementation research and implementation science. Research in both fields deals with the challenges of translating intentions into desired changes. Important learning may be derived from several aspects of policy implementation research.
BACKGROUND: The movement of evidence-based practices (EBPs) into routine clinical usage is not spontaneous, but requires focused efforts. The field of implementation science has developed to facilitate the spread of EBPs, including both psychosocial and medical interventions for mental and physical health concerns. DISCUSSION: The authors aim to introduce implementation science principles to non-specialist investigators, administrators, and policymakers seeking to become familiar with this emerging field. This introduction is based on published literature and the authors' experience as researchers in the field, as well as extensive service as implementation science grant reviewers. Implementation science is "the scientific study of methods to promote the systematic uptake of research findings and other EBPs into routine practice, and, hence, to improve the quality and effectiveness of health services." Implementation science is distinct from, but shares characteristics with, both quality improvement and dissemination methods. Implementation studies can either assess naturalistic variability or measure change in response to a planned intervention. Implementation studies typically employ mixed quantitative-qualitative designs, identifying factors that impact uptake across multiple levels, including patient, provider, clinic, facility, organization, and often the broader community and policy environment. Accordingly, implementation science requires a solid grounding in theory and the involvement of trans-disciplinary research teams. The business case for implementation science is clear: as healthcare systems work under increasingly dynamic and resource-constrained conditions, evidence-based strategies are essential to ensure that research investments maximize healthcare value and improve public health. Implementation science plays a critical role in supporting these efforts.
BACKGROUND: Evidence, in multiple forms, is a foundation of implementation science. For public health and clinical practice, evidence includes the following: type 1 evidence on etiology and burden; type 2 evidence on effectiveness of interventions; and type 3 evidence on dissemination and implementation (D&I) within context. To support a vision for development and use of evidence in D&I science that is more comprehensive and equitable (particularly for type 3 evidence), this article aims to clarify concepts of evidence, summarize ongoing debates about evidence, and provide a set of recommendations and tools/resources for addressing the "how-to" in filling evidence gaps most critical to advancing implementation science. MAIN TEXT: Because current conceptualizations of evidence have been relatively narrow and insufficiently characterized in our opinion, we identify and discuss challenges and debates about the uses, usefulness, and gaps in evidence for implementation science. A set of questions is proposed to assist in determining when evidence is sufficient for dissemination and implementation. Intersecting gaps include the need to (1) reconsider how the evidence base is determined, (2) improve understanding of contextual effects on implementation, (3) sharpen the focus on health equity in how we approach and build the evidence base, (4) conduct more policy implementation research and evaluation, and (5) learn from audience and stakeholder perspectives. We offer 15 recommendations to assist in filling these gaps and describe a set of tools for enhancing the evidence most needed in implementation science. CONCLUSIONS: To address our recommendations, we see capacity as a necessary ingredient to shift the field's approach to evidence. Capacity includes the "push" for implementation science, where researchers are trained to develop and evaluate evidence that should be useful and feasible for implementers and reflect community or stakeholder priorities. Equally important, there has been inadequate training and too little emphasis on the "pull" for implementation science (e.g., training implementers, practice-based research). We suggest that funders and reviewers of research should adopt and support a more robust definition of evidence. By critically examining the evolving nature of evidence, implementation science can better fulfill its vision of facilitating widespread and equitable adoption, delivery, and sustainment of scientific advances.
BACKGROUND: Implementation science has a core aim - to get evidence into practice. Early in the evidence-based medicine movement, this task was construed in linear terms, wherein the knowledge pipeline moved from evidence created in the laboratory through to clinical trials and, finally, via new tests, drugs, equipment, or procedures, into clinical practice. We now know that this straight-line thinking was naïve at best, and little more than an idealization, with multiple fractures appearing in the pipeline. DISCUSSION: The knowledge pipeline derives from a mechanistic and linear approach to science, which, while delivering huge advances in medicine over the last two centuries, is limited in its application to complex social systems such as healthcare. Instead, complexity science, a theoretical approach to understanding interconnections among agents and how they give rise to emergent, dynamic, systems-level behaviors, represents an increasingly useful conceptual framework for change. Herein, we discuss what implementation science can learn from complexity science, and tease out some of the properties of healthcare systems that enable or constrain the goals we have for better, more effective, more evidence-based care. Two Australian examples, one largely top-down, predicated on applying new standards across the country, and the other largely bottom-up, adopting medical emergency teams in over 200 hospitals, provide empirical support for a complexity-informed approach to implementation. The key lessons are that change can be stimulated in many ways, but a triggering mechanism is needed, such as legislation or widespread stakeholder agreement; that feedback loops are crucial to continue change momentum; that extended sweeps of time are involved, typically much longer than believed at the outset; and that taking a systems-informed, complexity approach, having regard for existing networks and socio-technical characteristics, is beneficial. CONCLUSION: Construing healthcare as a complex adaptive system implies that getting evidence into routine practice through a step-by-step model is not feasible. Complexity science forces us to consider the dynamic properties of systems and the varying characteristics that are deeply enmeshed in social practices, whilst indicating that multiple forces, variables, and influences must be factored into any change process, and that unpredictability and uncertainty are normal properties of multi-part, intricate systems.
BACKGROUND: The science of implementation has offered little toward understanding how different implementation strategies work. To improve outcomes of implementation efforts, the field needs precise, testable theories that describe the causal pathways through which implementation strategies function. In this perspective piece, we describe a four-step approach to developing causal pathway models for implementation strategies. BUILDING CAUSAL MODELS: First, it is important to ensure that implementation strategies are appropriately specified. Some strategies in published compilations are well defined but may not be specified in terms of their core components that can have a reliable and measurable impact. Second, linkages between strategies and mechanisms need to be generated. Existing compilations do not offer mechanisms by which strategies act, or the processes or events through which an implementation strategy operates to affect desired implementation outcomes. Third, it is critical to identify proximal and distal outcomes the strategy is theorized to impact, with the former being direct, measurable products of the strategy and the latter being one of eight implementation outcomes (1). Finally, articulating effect modifiers, like preconditions and moderators, allows for an understanding of where, when, and why strategies have an effect on outcomes of interest. FUTURE DIRECTIONS: We argue for greater precision in the use of terms for factors implicated in implementation processes; development of guidelines for selecting research designs and study plans that account for practical constructs and allow for the study of mechanisms; psychometrically strong and pragmatic measures of mechanisms; and more robust curation of evidence for knowledge transfer and use.
BACKGROUND: Like many new fields, implementation science has become vulnerable to instrumentation issues that potentially threaten the strength of the developing knowledge base. For instance, many implementation studies report findings based on instruments that do not have established psychometric properties. This article aims to review six pressing instrumentation issues, discuss the impact of these issues on the field, and provide practical recommendations. DISCUSSION: This debate centers on the impact of the following instrumentation issues: use of frameworks, theories, and models; role of psychometric properties; use of 'home-grown' and adapted instruments; choosing the most appropriate evaluation method and approach; practicality; and need for decision-making tools. Practical recommendations include: use of consensus definitions for key implementation constructs; reporting standards (e.g., regarding psychometrics, instrument adaptation); when to use multiple forms of observation and mixed methods; and accessing instrument repositories and decision aid tools. SUMMARY: This debate provides an overview of six key instrumentation issues and offers several courses of action to limit the impact of these issues on the field. With careful attention to these issues, the field of implementation science can potentially move forward at the rapid pace that is respectfully demanded by community stakeholders.
Centuries of experience make it clear that establishing the effectiveness of a clinical innovation is not sufficient to guarantee its uptake into routine use. The relatively new field of implementation science has developed to enhance the uptake of evidence-based practices and thereby increase their public health impact. Implementation science shares many characteristics, and the rigorous approach, of clinical research. However, it is distinct in that it attends to factors in addition to the effectiveness of the clinical innovation itself, to include identifying and addressing barriers and facilitators to the uptake of evidence-based clinical innovations. This article reviews the definition, history, and scope of implementation science, and places the field within the broader enterprise of biomedical research. It also provides an overview of this Special Issue of Psychiatry Research, which introduces the principles and methods of implementation science to mental health researchers.
A person who wants to find a solution to a public health problem has a different task than someone who wants to create or test a theory. (Eldredge, Markham, Ruiter, Kok, & Parcel, 2016, p. 8) The challenges in improving health care are considerable, as are the efforts made to develop and deliver best practice (Grol, Wensing, Eccles, & Davis, 2013). Different interventions with evidence of effectiveness are continuously made available for potential improvement of health care. However, the difficulties in implementing and using such evidence are well known (Greenhalgh, Robert, Macfarlane, Bate, & Kyriakidou, 2004). The knowledge-practice gap in health care refers to the gap between scientific knowledge and its application in routine healthcare practice. Implementation science developed in the 2000s in response to this gap, with the ambition to generate knowledge to promote a better uptake of evidence for improvements in the quality and safety of health care. The body of implementation knowledge comprises a rapidly growing number of empirical studies as well as countless theories, frameworks, and models, contributing to an understanding of factors associated with successful implementation of evidence-based interventions within a variety of settings (Tabak, Khoong, Chambers, & Brownson, 2012). The multitude of empirical implementation studies, as well as theories, models, and frameworks developed in implementation science, reflects a growing evidence base concerning implementation (Brownson, Colditz, & Proctor, 2018). However, despite the rapid progress of implementation science, the knowledge-practice gap in health care is still substantial, as shown in studies that describe difficulties in achieving desirable change in healthcare practice. Low rates of adoption and limited use of evidence-based interventions are persistent problems. Thus, the challenges of reducing the knowledge-practice gap still remain after more than two decades of research. The aim of this editorial is to address the knowledge-practice gap by means of increasing awareness of a parallel knowledge-practice gap (i.e., the somewhat paradoxical gap between scientific knowledge concerning implementation and actual real-life implementation and use of this knowledge in healthcare practice). This editorial is based on findings and conclusions presented in a doctoral thesis by the first author, which investigated the resemblance between available scientific knowledge on implementation and implementation strategies used in healthcare practice in three large improvement efforts in Sweden (Westerlund, 2018). An overall conclusion of the thesis was that there exists a parallel knowledge-practice gap between scientific knowledge on implementation and the use of this knowledge in implementation efforts in healthcare practice (Westerlund, 2018; Westerlund et al., 2017). The findings showed that implementation knowledge was not transferred to healthcare practice (and practitioners) to a sufficient extent, thus restricting the systematic use of implementation knowledge in practice. Implementation science has a twofold aim: to produce knowledge sufficiently generalizable to contribute to scientific knowledge accumulation and to produce knowledge applicable for improved practice (Fixsen, Blase, & Van Dyke, 2019).
The question of use, applicability, and impact of implementation science has been highlighted previously, and the need to make implementation science knowledge more relevant and widely disseminated has been called for in the literature (Armson, Roder, Elmslie, Khan, & Straus, 2018; McIsaac et al., 2018). Implementation knowledge is not taught in healthcare practitioners’ basic training and only seldom in continuing professional education. Although the literature on evidence-based implementation is expanding and courses are increasingly being made available, these do not focus on practical issues or guidance on how to actually use implementation science knowledge in implementation endeavors (Nilsen, Neher, Ellström, & Gardner, 2017). Ovretveit, Mittman, Rubenstein, and Ganz (2017) have noted that healthcare practitioners are not expected to be knowledgeable about implementation science. Although implementation science is widely considered an applied science, the extent to which knowledge produced in this field is actually used by practitioners is not known. There are few empirical studies concerning if or how scientific knowledge on implementation is being used in healthcare practice (Armson et al., 2018). As implementation researchers, we need to ask ourselves if our research findings and evidence on implementation have reached the world of practice to a sufficient degree. There are many analytical tools aimed at supporting researchers’ use of implementation science in their research endeavors (Simpson et al., 2013). When approaching the implementation knowledge field, phrases such as the following are frequently encountered: "Theories and frameworks enhance implementation research" and "inform study design and execution" (Tabak et al., 2012, p. 6) or "Scholars seeking to study implementation have over 60 conceptual frameworks to guide their work" (Birken et al., 2017, p. 2). The impression is that models and frameworks are developed to "help advance implementation science" (Damschroder et al., 2009, p. 2). Recently, the ImpRes tool was developed with the stated purpose to "support research teams in the process of designing implementation research" (King's Improvement Science, 2018, p. 1). These observations raise the questions of whether other researchers are the primary target audience of implementation science knowledge and the extent to which the knowledge produced in the field actually reaches beyond academia. To a large extent, knowledge produced in implementation science still seems to belong to the scientific community rather than to the practitioners who could use it to improve outcomes in health care (Armson et al., 2018; Ovretveit et al., 2017; Westerlund, 2018). Considering the vast amount and variation of empirical studies of implementation efforts in many different healthcare settings, there is no question that the field of implementation science has produced knowledge on implementation of great relevance for potential use in health care. It seems highly plausible that a conscious and systematic use of scientific knowledge on implementation would be beneficial in change efforts in health care and would likely increase adoption and use of research-informed interventions to improve the quality of care. Hence, applying scientific knowledge on implementation in healthcare practice may help bridge the knowledge-practice gap in health care.
So-called "action models" such as Knowledge-to-Action (Graham et al., 2006) and the Quality Implementation Framework (QIF; Meyers, Durlak, & Wandersman, 2012) have been developed to guide the translation of research into practice. The originators of the QIF introduced the concept of "practical implementation science," which refers not only to the translation of implementation science knowledge into user-friendly resources but also to research and actions based on this translation. Meyers and colleagues stated that one of their goals was to "outline practical implications for improving future implementation efforts in the world of practice" (Meyers, Durlak, et al., 2012, p. 464). Deriving from the QIF, Meyers and colleagues developed what they referred to as a "practical implementation tool," the Quality Implementation Tool. The aim was to assist practitioners and those providing support to practitioners in implementing interventions with better quality (Meyers, Durlak et al., 2012; Meyers, Katz et al., 2012). However, efforts like these with the explicit goal of narrowing the gap between the science and practice of implementation may not be sufficiently practice-friendly or ready to use. We do not know, because studies regarding their utility and usability do not exist. In many ways, making use of implementation science knowledge could be viewed as an important implementation strategy with the potential to reduce the knowledge-practice gap in health care. However, studies are needed to explore and assess this assumption. We strongly recommend research efforts focusing on further development of the concept of "practical implementation science." There is a need for research on the applicability and use of models and frameworks as well as additional focus on the question of how to develop and evaluate more user-friendly tools. The rapidly growing body of evidence for implementation has the potential to bridge the knowledge-practice gap in health care. However, implementation science knowledge is still predominantly in the domain of researchers. For knowledge on implementation to facilitate bridging the knowledge-practice gap, it needs to be translated into user-friendly tools that are actually used by healthcare practitioners. With this editorial, we hope to have raised awareness of the need for the implementation science community to reflect upon the question of how we can support the systematic use of implementation science knowledge among leaders and other practitioners in healthcare settings. Implementation science was born out of a desire to bridge the knowing-doing gap (i.e., the gap between what is known and what is actually done in health care). It is a paradox if the knowledge produced in this field fails to reach the world of practice. For the practice of implementation to be furthered, we as researchers have an obligation to contribute to improved utilization and translation of the knowledge produced in the implementation science field.
BACKGROUND: Theories provide a synthesizing architecture for implementation science. The underuse, superficial use, and misuse of theories pose a substantial scientific challenge for implementation science and may relate to challenges in selecting from the many theories in the field. Implementation scientists may benefit from guidance for selecting a theory for a specific study or project. Understanding how implementation scientists select theories will help inform efforts to develop such guidance. Our objective was to identify which theories implementation scientists use, how they use theories, and the criteria used to select theories. METHODS: We identified initial lists of uses and criteria for selecting implementation theories based on seminal articles and an iterative consensus process. We incorporated these lists into a self-administered survey for completion by self-identified implementation scientists. We recruited potential respondents at the 8th Annual Conference on the Science of Dissemination and Implementation in Health and via several international email lists. We used frequencies and percentages to report results. RESULTS: Two hundred twenty-three implementation scientists from 12 countries responded to the survey. They reported using more than 100 different theories spanning several disciplines. Respondents reported using theories primarily to identify implementation determinants, inform data collection, enhance conceptual clarity, and guide implementation planning. Of the 19 criteria presented in the survey, the criteria used by the most respondents to select theory included analytic level (58%), logical consistency/plausibility (56%), empirical support (53%), and description of a change process (54%). The criteria used by the fewest respondents included fecundity (10%), uniqueness (12%), and falsifiability (15%). CONCLUSIONS: Implementation scientists use a large number of criteria to select theories, but there is little consensus on which are most important. Our results suggest that the selection of implementation theories is often haphazard or driven by convenience or prior exposure. Variation in approaches to selecting theory warns against prescriptive guidance for theory selection. Instead, implementation scientists may benefit from considering the criteria that we propose in this paper and using them to justify their theory selection. Future research should seek to refine the criteria for theory selection to promote more consistent and appropriate use of theory in implementation science.
BACKGROUND: Scientists have developed evidence-based interventions that improve the symptoms and functioning of youth with psychiatric disorders; however, these interventions are rarely used in community settings. Eliminating this research-to-practice gap is the purview of implementation science, the discipline devoted to the study of methods to promote the use of evidence-based practices in routine care. METHODS: We review studies that have tested factors associated with implementation in child psychology and psychiatry, explore applications of social science theories to implementation, and conclude with recommendations to advance implementation science through the development and testing of novel, multilevel, causal theories. RESULTS: During its brief history, implementation science in child psychology and psychiatry has documented the implementation gap in routine care, tested training approaches and found them to be insufficient for behavior change, explored the relationships between variables and implementation outcomes, and initiated randomized controlled trials to test implementation strategies. This research has identified targets related to implementation (e.g., clinician motivation, organizational culture) and demonstrated the feasibility of activating these targets through implementation strategies. However, the dominant methodological approach has been atheoretical and predictive, relying heavily on a set of variables from heuristic frameworks. CONCLUSIONS: Optimizing the implementation of effective treatments in community care for youth with psychiatric disorders is a defining challenge of our time. This review proposes a new direction focused on developing and testing integrated causal theories. We recommend implementation scientists: (a) move from observational studies of implementation barriers and facilitators to trials that include causal theory; (b) identify a core set of implementation determinants; (c) conduct trials of implementation strategies with clear targets, mechanisms, and outcomes; (d) ensure that behaviors that are core to EBPs are clearly defined; and (e) agree upon standard measures. This agenda will help fulfill the promise of evidence-based practice for improving youth behavioral health.
BACKGROUND: Health disparities are differences in health or health care between groups based on social, economic, and/or environmental disadvantage. Disparity research often follows 3 steps: detecting (phase 1), understanding (phase 2), and reducing (phase 3) disparities. Although disparities have narrowed over time, many remain. OBJECTIVES: We argue that implementation science could enhance disparities research by broadening the scope of phase 2 studies and offering rigorous methods to test disparity-reducing implementation strategies in phase 3 studies. METHODS: We briefly review the focus of phase 2 and phase 3 disparities research. We then provide a decision tree and case examples to illustrate how implementation science frameworks and research designs could further enhance disparity research. RESULTS: Most health disparities research emphasizes patient and provider factors as predominant mechanisms underlying disparities. Applying implementation science frameworks like the Consolidated Framework for Implementation Research could help disparities research widen its scope in phase 2 studies and, in turn, develop broader disparity-reducing implementation strategies in phase 3 studies. Many phase 3 studies of disparity-reducing implementation strategies are similar to case studies, whose designs are not able to fully test causality. Implementation science research designs offer rigorous methods that could accelerate the pace at which equity is achieved in real-world practice. CONCLUSIONS: Disparities can be considered a "special case" of implementation challenges - when evidence-based clinical interventions are delivered to, and received by, vulnerable populations at lower rates. Bringing together health disparities research and implementation science could advance equity more than either could achieve on their own.
BACKGROUND: There is growing urgency to tackle issues of equity and justice in the USA and worldwide. Health equity, a framing that moves away from a deficit mindset of what society is doing poorly (disparities) to one that is positive about what society can achieve, is becoming more prominent in health research that uses implementation science approaches. Equity begins with justice: health differences often reflect societal injustices. Applying the perspectives and tools of implementation science has potential for immediate impact to improve health equity. MAIN TEXT: We propose a vision and set of action steps for making health equity a more prominent and central aim of implementation science, thus committing to conduct implementation science through equity-focused principles to achieve this vision in U.S. research and practice. We identify and discuss challenges in current health disparities approaches that do not fully consider social determinants. Implementation research challenges are outlined in three areas: limitations of the evidence base, underdeveloped measures and methods, and inadequate attention to context. To address these challenges, we offer recommendations that seek to (1) link social determinants with health outcomes, (2) build equity into all policies, (3) use equity-relevant metrics, (4) study what is already happening, (5) integrate equity into implementation models, (6) design and tailor implementation strategies, (7) connect to systems and sectors outside of health, (8) engage organizations in internal and external equity efforts, (9) build capacity for equity in implementation science, and (10) focus on equity in dissemination efforts. CONCLUSIONS: Every project in implementation science should include an equity focus. For some studies, equity is the main goal of the project and a central feature of all aspects of the project. In other studies, equity is part of a project but not the singular focus. In these studies, we should, at a minimum, ensure that we "leave no one behind" and that existing disparities are not widened. With a stronger commitment to health equity from funders, researchers, practitioners, advocates, evaluators, and policy makers, we can harvest the rewards of the resources being invested in health-related research to eliminate disparities, resulting in health equity.
BACKGROUND: Implementation science in resource-poor countries and communities is arguably more important than implementation science in resource-rich settings, because resource poverty requires novel solutions to ensure that research results are translated into routine practice and benefit the largest possible number of people. METHODS: We reviewed the role of resources in the extant implementation science frameworks and literature. We analyzed opportunities for implementation science in resource-poor countries and communities, as well as threats to the realization of these opportunities. RESULTS: Many of the frameworks that provide theoretical guidance for implementation science view resources as contextual factors that are important to (i) predict the feasibility of implementation of research results in routine practice, (ii) explain implementation success and failure, (iii) adapt novel evidence-based practices to local constraints, and (iv) design the implementation process to account for local constraints. Implementation science for resource-poor settings shifts this view from "resources as context" to "resources as primary research object." We find a growing body of implementation research aiming to discover and test novel approaches to generate resources for the delivery of evidence-based practice in routine care, including approaches to create higher-skilled health workers (through tele-education and telemedicine), to free up higher-skilled health workers (through task-shifting and new technologies and models of care), to increase laboratory capacity (through new technologies), and to increase the availability of medicines (through supply chain innovations). In contrast, only a few studies have investigated approaches to change the behavior and utilization of healthcare resources in resource-poor settings. We identify three specific opportunities for implementation science in resource-poor settings. First, intervention and methods innovations thrive under constraints. Second, reverse innovation transferring novel approaches from resource-poor to resource-rich settings will gain in importance. Third, policy makers in resource-poor countries tend to be open to close collaboration with scientists in implementation research projects aimed at informing national and local policy. CONCLUSIONS: Implementation science in resource-poor countries and communities offers important opportunities for future discoveries and reverse innovation. To harness this potential, funders need to strongly support research projects in resource-poor settings, as well as the training of the next generation of implementation scientists working on new ways to create healthcare resources where they are lacking most and to ensure that those resources are utilized to deliver care that is based on the latest research results.
BACKGROUND: The relevance of context in implementation science is reflected in the numerous theories, frameworks, models and taxonomies that have been proposed to analyse determinants of implementation (in this paper referred to as determinant frameworks). This scoping review aimed to investigate and map how determinant frameworks used in implementation science were developed, what terms are used for contextual determinants of implementation, how context is conceptualized, and which context dimensions can be discerned. METHODS: A scoping review was conducted. MEDLINE and EMBASE were searched from inception to October 2017, and supplemented with implementation science textbooks and known published overviews. Publications in English that described a determinant framework (theory, model, taxonomy or checklist), of which context was one determinant, were eligible. Screening and inclusion were done in duplicate. Extracted data were analysed to address the study aims. A qualitative content analysis with an inductive approach was carried out concerning the development and core context dimensions of the frameworks. The review is reported according to the PRISMA guidelines. RESULTS: The database searches yielded a total of 1113 publications, of which 67 were considered potentially relevant based on the predetermined eligibility criteria and retrieved in full text. Seventeen unique determinant frameworks were identified and included. Most were developed based on the literature and/or the developers' implementation experiences. Six of the frameworks explicitly referred to "context", but only four frameworks provided a specific definition of the concept. Instead, context was defined indirectly by description of various categories and sub-categories that together made up the context. Twelve context dimensions were identified, pertaining to different aggregation levels. The most widely addressed context dimensions were organizational support, financial resources, social relations and support, and leadership. CONCLUSIONS: The findings suggest variation with regard to how the frameworks were developed and considerable inconsistency in the terms used for contextual determinants, how context is conceptualized, and which contextual determinants are accounted for in frameworks used in implementation science. Common context dimensions were identified, which can facilitate research that incorporates a theory of context, i.e. assumptions about how different dimensions may influence each other and affect implementation outcomes. A thoughtful application of the concept and a more consistent terminology would enhance transparency, simplify communication among researchers, and facilitate comparison across studies.
PURPOSE: Patient-reported outcome and experience measures (PROMs/PREMs) are well established in research for many health conditions, but barriers persist for implementing them in routine care. Implementation science (IS) offers a potential way forward, but its application has been limited for PROMs/PREMs. METHODS: We compare similarities and differences for widely used IS frameworks and their applicability for implementing PROMs/PREMs through case studies. Three case studies implemented PROMs: (1) pain clinics in Canada; (2) oncology clinics in Australia; and (3) pediatric/adult clinics for chronic conditions in the Netherlands. The fourth case study is planning PREMs implementation in Canadian primary care clinics. We compare case studies on barriers, enablers, implementation strategies, and evaluation. RESULTS: Case studies used IS frameworks to systematize barriers, to develop implementation strategies for clinics, and to evaluate implementation effectiveness. Across case studies, consistent PROM/PREM implementation barriers were technology, uncertainty about how or why to use PROMs/PREMs, and competing demands from established clinical workflows. Enabling factors in clinics were context specific. Implementation support strategies changed during pre-implementation, implementation, and post-implementation stages. Evaluation approaches were inconsistent across case studies, and thus, we present example evaluation metrics specific to PROMs/PREMs. CONCLUSION: Multilevel IS frameworks are necessary for PROM/PREM implementation given the complexity. In cross-study comparisons, barriers to PROM/PREM implementation were consistent across patient populations and care settings, but enablers were context specific, suggesting the need for tailored implementation strategies based on clinic resources. Theoretically guided studies are needed to clarify how, why, and in what circumstances IS principles lead to successful PROM/PREM integration and sustainability.
BACKGROUND: Many interventions found to be effective in health services research studies fail to translate into meaningful patient care outcomes across multiple contexts. Health services researchers recognize the need to evaluate not only summative outcomes but also formative outcomes to assess the extent to which implementation is effective in a specific setting, prolongs sustainability, and promotes dissemination into other settings. Many implementation theories have been published to help promote effective implementation. However, they overlap considerably in the constructs included in individual theories, and a comparison of theories reveals that each is missing important constructs included in other theories. In addition, terminology and definitions are not consistent across theories. We describe the Consolidated Framework for Implementation Research (CFIR), which offers an overarching typology to promote implementation theory development and verification about what works where and why across multiple contexts. METHODS: We used a snowball sampling approach to identify published theories that were evaluated to identify constructs based on strength of conceptual or empirical support for influence on implementation, consistency in definitions, alignment with our own findings, and potential for measurement. We combined constructs across published theories that had different labels but were redundant or overlapping in definition, and we parsed apart constructs that conflated underlying concepts. RESULTS: The CFIR is composed of five major domains: intervention characteristics, outer setting, inner setting, characteristics of the individuals involved, and the process of implementation. Eight constructs were identified related to the intervention (e.g., evidence strength and quality), four constructs were identified related to outer setting (e.g., patient needs and resources), 12 constructs were identified related to inner setting (e.g., culture, leadership engagement), five constructs were identified related to individual characteristics, and eight constructs were identified related to process (e.g., plan, evaluate, and reflect). We present explicit definitions for each construct. CONCLUSION: The CFIR provides a pragmatic structure for approaching complex, interacting, multi-level, and transient states of constructs in the real world by embracing, consolidating, and unifying key constructs from published implementation theories. It can be used to guide formative evaluations and build the implementation knowledge base across multiple studies and settings.
PURPOSE: Evidence is not always used in practice, and many examples of problematic implementation of research into practice exist. The aim of this paper is to provide an introduction and overview of current developments in implementation science and to apply these to nursing. METHODS: We discuss a framework for implementation, describe common implementation determinants, and provide a rationale for choosing implementation strategies using the available evidence from nursing research and general health services research. FINDINGS: Common determinants for implementation relate to knowledge, cognitions, attitudes, routines, social influence, organization, and resources. Determinants are often specific for innovation, context, and target groups. Strategies focused on individual professionals and voluntary approaches currently dominate implementation research. Strategies such as reminders, decision support, use of information and communication technology (ICT), rewards, and combined strategies are often effective in encouraging implementation of evidence and innovations. Linking determinants to theory-based strategies, however, can facilitate optimal implementation plans. CONCLUSIONS: An analytical, deliberate process of clarifying implementation determinants and choosing strategies is needed to improve situations where suboptimal care exists. Use of theory and evidence from implementation science can facilitate evidence-based implementation. More research, especially in the area of nursing, is needed. This research should be focused on the effectiveness of innovative strategies directed to patients, individual professionals, teams, healthcare organizations, and finances. CLINICAL RELEVANCE: Implementation of evidence-based interventions is crucial to professional nursing and the quality and safety of patient care.
BACKGROUND: Strategies are central to the National Institutes of Health's definition of implementation research as "the study of strategies to integrate evidence-based interventions into specific settings." Multiple scholars have proposed lists of the strategies used in implementation research and practice, which they increasingly are classifying under the single term "implementation strategies." We contend that classifying all strategies under a single term leads to confusion, impedes synthesis across studies, and limits advancement of the full range of strategies of importance to implementation. To address this concern, we offer a system for classifying implementation strategies that builds on Proctor and colleagues' (2013) reporting guidelines, which recommend that authors not only name and define their implementation strategies but also specify who enacted the strategy (i.e., the actor) and the level and determinants that were targeted (i.e., the action targets). MAIN BODY: We build on Wandersman and colleagues' Interactive Systems Framework to distinguish strategies based on whether they are enacted by actors functioning as part of a Delivery, Support, or Synthesis and Translation System. We build on Damschroder and colleagues' Consolidated Framework for Implementation Research to distinguish the levels that strategies target (intervention, inner setting, outer setting, individual, and process). We then draw on numerous resources to identify determinants, which are conceptualized as modifiable factors that prevent or enable the adoption and implementation of evidence-based interventions. Identifying actors and targets resulted in five conceptually distinct classes of implementation strategies: dissemination, implementation process, integration, capacity-building, and scale-up. In our descriptions of each class, we identify the level of the Interactive Systems Framework at which the strategy is enacted (actors), the level and determinants targeted (action targets), and the outcomes used to assess strategy effectiveness. We illustrate how each class would apply to efforts to improve colorectal cancer screening rates in Federally Qualified Health Centers. CONCLUSIONS: Structuring strategies into classes will aid reporting of implementation research findings, alignment of strategies with relevant theories, synthesis of findings across studies, and identification of potential gaps in current strategy listings. Organizing strategies into classes also will assist users in locating the strategies that best match their needs.
PURPOSE: This article introduces implementation science, which focuses on research methods that promote the systematic application of research findings to practice. METHOD: The narrative defines implementation science and highlights the importance of moving research along the pipeline from basic science to practice as one way to facilitate evidence-based service delivery. This review identifies challenges in developing and testing interventions in order to achieve widespread adoption in practice settings. A framework for conceptualizing implementation research is provided, including an example to illustrate the application of principles in speech-language pathology. Last, the authors reflect on the status of implementation research in the discipline of communication sciences and disorders. CONCLUSIONS: The extant literature highlights the value of implementation science for reducing the gap between research and practice in our discipline. While having unique principles guiding implementation research, many of the challenges and questions are similar to those facing any investigators who are attempting to design valid and reliable studies. This article is intended to invigorate interest in the uniqueness of implementation science among those pursuing both basic and applied research. In this way, it should help ensure the discipline's knowledge base is realized in practice and policy that affects the lives of individuals with communication disorders.
BACKGROUND: Artificial intelligence (AI), particularly generative AI, has emerged as a transformative tool in healthcare, with the potential to revolutionize clinical decision-making and improve health outcomes. Generative AI, capable of generating new data such as text and images, holds promise for enhancing patient care, revolutionizing disease diagnosis, and expanding treatment options. However, the utility and impact of generative AI in healthcare remain poorly understood, with concerns around ethical and medico-legal implications, integration into healthcare service delivery, and workforce utilization. Moreover, there is no clear pathway for implementing and integrating generative AI into healthcare delivery. METHODS: This article aims to provide a comprehensive overview of the use of generative AI in healthcare, focusing on the utility of the technology and its translational application, and highlighting the need for careful planning, execution, and management of expectations when adopting generative AI in clinical medicine. Key considerations include factors such as data privacy, security, and the irreplaceable role of clinicians' expertise. Frameworks like the technology acceptance model (TAM) and the Non-Adoption, Abandonment, Scale-up, Spread and Sustainability (NASSS) model are considered to promote responsible integration. These frameworks help anticipate and proactively address barriers to adoption, facilitate stakeholder participation, and responsibly transition care systems to harness generative AI's potential. RESULTS: Generative AI has the potential to transform healthcare through automated systems, enhanced clinical decision-making, and democratization of expertise, with diagnostic support tools providing timely, personalized suggestions. Generative AI applications across billing, diagnosis, treatment, and research can also make healthcare delivery more efficient, equitable, and effective. However, integration of generative AI necessitates meticulous change management and risk mitigation strategies. Technological capabilities alone cannot shift complex care ecosystems overnight; rather, structured adoption programs grounded in implementation science are imperative. CONCLUSIONS: It is strongly argued in this article that generative AI can usher in tremendous healthcare progress if introduced responsibly. Strategic adoption based on implementation science, incremental deployment, and balanced messaging around opportunities versus limitations help promote safe, ethical generative AI integration. Extensive real-world piloting and iteration aligned to clinical priorities should drive development. With conscientious governance centered on human wellbeing over technological novelty, generative AI can enhance the accessibility, affordability, and quality of care. As these models continue advancing rapidly, ongoing reassessment and transparent communication around their strengths and weaknesses remain vital to restoring trust, realizing positive potential and, most importantly, improving patient outcomes.