Since its beginning in the early 2000s, the Campbell Collaboration has acknowledged the important role that methodology plays in producing systematic reviews. Indeed, the Methods Coordinating Group (CG) was one of the original groups in Campbell, alongside crime and justice, education, and social welfare. During the past two decades, numerous statisticians and methodologists have contributed to early forms of methods guidance and methods recommendations (it would be impossible to name them all, and it would be unfair to mention only some). In recent years, with the expansion of the Campbell CGs, the types of questions asked and evidence used among CGs have diversified. These changes bring new challenges, and with new challenges new opportunities arise as well. More recently, with the intention of better serving the rest of the CGs, the Methods CG, with the support of the board, the Editor in Chief, and the CEO, started to produce discussion papers, methods guidance, and methods policy. These early efforts are collected as Methods Research Papers in the Campbell Systematic Reviews journal. Methods articles are open-access, peer-reviewed, multidisciplinary manuscripts. The primary goal of these articles is to provide methodological support for the systematic reviews and evidence and gap maps that the journal also publishes. The journal accepts seven types of methods manuscripts: Innovative Methods Papers, Research Methods Guides, Translations, Research Methodology Discussion Papers, Systematic Reviews of Methods, Guidance Papers, and Policy Papers.
Methods manuscripts published in the Campbell Systematic Reviews journal are anchored in addressing practical problems of designing, conducting, reporting, and implementing systematic reviews and their results. We welcome manuscripts that use different forms of inquiry and traditions. Because we want to provide transparent and accessible methodological information to researchers, we encourage authors to submit any data and source code used in the manuscript prior to peer review. When a quantitative manuscript reports findings from a simulation study, the source code of the simulation should be provided. Although not required, we encourage all submissions that use software for analyses (quantitative or qualitative) to rely on open-source software. Finally, the Methods CG will disseminate its guidance and policy through Guidance Papers and Policy Papers.
We welcome Innovative Methods Papers that introduce a novel approach to any of the stages of a systematic review, compare the use of different known methods, or demonstrate the accuracy or inaccuracy of a known method. Innovative Methods Papers must be relevant to at least one of the stages of Campbell systematic reviews or evidence and gap maps. These manuscripts must provide examples (when feasible, with real data), and the data and source code used in the examples must be submitted at the time of submission. If a simulation study is conducted, the source code of the simulation must be included with the submission. For an example of an Innovative Methods Paper, see Polanin and Nuijten (2018).
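To make this expectation concrete, the following is a minimal, hypothetical sketch of the kind of self-contained, reproducible simulation code that could accompany a simulation-based submission. It is written in base R; the scenario (bias of the inverse-variance pooled estimate under heterogeneity) and all parameter values are invented for illustration, not drawn from any published paper.

```r
# A minimal, self-contained simulation in base R: empirical bias of the
# inverse-variance (common-effect) pooled estimate when true effects are
# heterogeneous. All parameter values are invented for illustration.
set.seed(2023)  # fix the seed so results are reproducible

simulate_once <- function(k = 20, mu = 0.4, tau2 = 0.1, n = 50) {
  theta <- rnorm(k, mean = mu, sd = sqrt(tau2))   # true study-level effects
  vi    <- 2 / n + theta^2 / (4 * n)              # approx. SMD sampling variances
  yi    <- rnorm(k, mean = theta, sd = sqrt(vi))  # observed effect sizes
  sum(yi / vi) / sum(1 / vi)                      # inverse-variance pooled estimate
}

pooled <- replicate(5000, simulate_once())
mean(pooled) - 0.4  # empirical bias of the pooled estimate
sd(pooled)          # empirical standard error across replications
```

A real submission would cover the full factorial design of the simulation and report all conditions; the point here is simply that the code is complete, seeded, and runnable by reviewers as-is.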
Research Methods Guides are papers that demonstrate, illustrate, and teach researchers undertaking systematic reviews how to use specific methods. Whether focusing on a specific technique or software, these papers must be written in accessible language in order to reach a broader audience interested in learning the practical use of the technique or software. Because the goal of the tutorial is to aid other researchers in learning something that they have not yet mastered, particular attention should be given to the demonstration component of the manuscript. Papers must clearly articulate the objectives of the guide. Although not required, authors of Research Methods Guides are encouraged to create an appendix with a set of questions and answers to help readers test their understanding of the content. For an example of a Research Methods Guide, see Papakonstantinou et al. (2020).
Translations are short (two to three pages) papers aimed at stakeholders and consumers of systematic reviews. These papers aim to communicate, in plain language, the reason that a particular method is used and why it matters for stakeholders and consumers. Translation Papers answer questions such as: What should a stakeholder know about a particular method? Why should stakeholders care that systematic reviews use a specific method? For example, what should a stakeholder know about heterogeneity? Why should stakeholders care that systematic reviews account for dependence?
Research Methodology Discussion Papers are relatively short papers that discuss a specific method-related topic. Discussion Papers are not intended to be an exhaustive review of a topic or to introduce a new method. This type of manuscript highlights aspects of methods, such as specific advantages of a method or tool (e.g., software), and explains the possible implications of implementing the discussed methods or tools. This type of paper is similar to commentary papers in other journals. For an example of a Discussion Paper, see Haddaway et al. (2020).
Systematic Reviews of Methods answer questions related, but not limited, to: How is a particular method being used? What is reported when a particular method is used? What is the overall bias or precision of a method, as assessed by synthesizing multiple simulation studies? For examples of Systematic Reviews of Methods, see Villar and Waddington (2019) and Wang et al. (2021).
The Methods CG encourages the constituents of the Campbell Collaboration to write Guidance Papers. Guidance Papers are recommendations on how to approach a particular issue. Although it may be considered good practice to follow specific guidance, it is not mandated or expected that all Campbell reviews comply with it. Guidance Papers are sent for consultation with all CGs as part of the peer review process. The guidance on information retrieval (Kugley et al., 2017) and on evidence and gap maps (White et al., 2020) are examples.
Policies are, typically, previous guidance that has been elevated to policy. All Campbell reviews are expected to comply with policies. As part of the peer review process, Policy Papers are sent for consultation to all CGs. In addition, policies need to be approved by a technical panel. The Methodological Expectations for Campbell Collaboration Intervention Reviews (MECCIR) represent the Campbell policy on expectations (MECCIR Conduct Standards).
The inclusion of methods articles in the Campbell Systematic Reviews journal is another step that the Campbell Collaboration is taking to support the publication of high-quality systematic reviews and evidence and gap maps.
For questions related to the methods articles, please contact us at [email protected].
The Campbell Collaboration was established in 2001 to promote positive social and economic change through supporting the conduct of high-quality systematic reviews and promoting their use in decision making (Welch, 2018). Wang et al. (2021) found that the methodological quality of Campbell reviews of intervention effectiveness published between 2011 and 2018 improved over time, particularly after the introduction of the 2014 Methodological Expectations for Conducting Campbell Intervention Reviews (MECCIR). Of the 96 systematic reviews published between 2011 and 2018 and assessed with the AMSTAR tool, 16 (17%) were rated as high quality, 40 (42%) as moderate, 24 (25%) as low, and 16 (17%) as critically low (Wang et al., 2021). Based on this assessment, Campbell provided feedback to all editorial teams on the quality of reviews and areas for improvement. We decided to conduct a follow-up analysis to evaluate the quality of Campbell reviews published since 2018 and compare the findings with the baseline assessment to identify areas where improvements are still needed.
We conducted the quality assessment of Campbell systematic reviews of intervention effectiveness published in the past 5 years (February 2018 to November 2022) using the AMSTAR 2.0 tool (Shea et al., 2017). A total of 77 intervention reviews were included. All analyses were conducted using R software. The items least often adequately addressed were: sources of funding for the included studies (26 reviews, 34%; AMSTAR item 10); assessment of the potential impact of risk of bias in individual studies on the results of the meta-analysis or other evidence synthesis (40 reviews, 52%; AMSTAR item 12); and a list of excluded studies with justifications (46 reviews, 60%; AMSTAR item 7).
Compared with the reviews published before 2018, the overall methodological quality of the recent reviews has generally improved (Figure 1). The proportion of high-quality reviews has more than doubled (17% to 39%), while the proportion of moderate-quality reviews has been reduced by more than half (42% to 16%). However, there was little difference in the percentage of reviews rated as low (25% vs. 27%) and critically low (17% vs. 18%). Since the baseline assessment of Campbell reviews published between 2011 and 2018, some previously deficient items have improved and are now reported in over 70% of the reviews: justifying the choice of eligible study designs, explaining heterogeneity in results, and discussing the impact of publication bias (Figure 2). However, reporting the sources of funding and the impact of risk of bias in individual studies on the results of the meta-analysis remained inadequately addressed, although both were observed more frequently in the last 5 years (15% to 34% and 33% to 52%, respectively). Of note, fewer reviews in the last 5 years reported the list of excluded studies with justifications than in the 2011–2018 sample (92% to 60%). Because this is a critical item in the AMSTAR scale, its omission leads to lower quality ratings.
Although there has been continuous improvement in the quality of Campbell reviews, there is a need to improve the reporting of excluded studies, sources of funding for studies, the impact of risk of bias on the meta-analysis, and the assessment of the impact of publication bias. To address these shortcomings, the Campbell editorial board has implemented three strategies going forward and will monitor the quality of reviews annually. First, all Campbell authors have access to RevmanWeb for authoring their Campbell reviews and evidence and gap maps.
Campbell's template for reviews of intervention effectiveness has been modified to mention each of the 16 AMSTAR items in the guidance for authors as they write their reviews. This aims to raise awareness of items that influence methodological quality during the conduct of the review. Second, an internal Campbell editor assesses each Campbell review before sending it for external review. Campbell has included the AMSTAR items in the internal editorial checklists and feedback forms. This will help editors to assess whether all AMSTAR items are reported and to provide feedback to authors during the editorial process. Third, although implementation of the MECCIR expectations led to improved quality from 2014 to 2018, the checklists are burdensome for both authors and editors (with 79 items in the MECCIR for conduct and 102 items in the MECCIR for reporting). Campbell is currently updating MECCIR to create a unified checklist with the goal of making it easier for authors and editors to ensure that methodologic standards are met. Furthermore, this updated guidance aims to include all relevant items of AMSTAR and PRISMA 2020 (Page et al., 2021) in this unified checklist. This updated guidance will be available by Fall 2023.
Systematic reviews have a special importance for decision making. They aim to summarize the best available evidence on a specific research question to inform practice guidelines and reveal knowledge gaps to guide future research initiatives in a wide range of sectors (Collaboration, 2018; Li et al., 2021; Yang, Li, & Bai, 2018). The trustworthiness of a systematic review depends on its methodological rigor and reporting quality (Pussegoda et al., 2017). We welcome feedback on these measures to continuously improve the quality of Campbell systematic reviews.
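As a small illustration, the headline comparison above can be reproduced from the published figures alone. The following sketch uses R (which the assessment itself used); the baseline counts come from Wang et al. (2021), while the recent-period proportions are taken directly from the percentages reported in this editorial, so any derived counts are approximate.

```r
# Back-of-the-envelope check of the comparison reported above.
# Baseline: 96 reviews (2011-2018), counts from Wang et al. (2021);
# recent: 77 reviews (2018-2022), proportions from the reported percentages.
quality <- data.frame(
  rating   = c("high", "moderate", "low", "critically low"),
  baseline = c(16, 40, 24, 16) / 96,   # counts converted to proportions
  recent   = c(0.39, 0.16, 0.27, 0.18)
)
round(100 * quality[, c("baseline", "recent")])  # percentages side by side
with(quality, recent / baseline)  # ratio per category: 'high' more than doubles
```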
The global severe acute respiratory syndrome coronavirus 2 pandemic strikingly shows the need for rigorous evidence to inform decisions. During such times of crisis, many decisions are made across multiple sectors, and trillions of dollars are spent to deal with consequences that affect all aspects of economic and societal life. Given the scale of human suffering, thoughtfully designing effective policies and carefully spending scarce resources on interventions that work become crucial during crisis management and recovery. However, in many areas of decision making, the use of robust and reliable evidence is not the norm. This has dire consequences: evidence from impact evaluations in different sectors shows that about 80% of policy interventions are not effective (White, 2019). Equally, reliance on an individual study or model rather than evidence synthesis commonly leads to misinformed policy and outright harm. For example, the retracted study on hydroxychloroquine for COVID-19 led to public harm as well as public mistrust (Mehra, Ruschitzka, & Patel, 2020). Now, more than ever, public policy needs to be informed by the most rigorous, comprehensive, and up-to-date evidence possible. We, at the Campbell Collaboration, are working on both providing this rigorous evidence and promoting its use to inform decisions about social and public policy.
Campbell systematic reviews provide a wealth of rigorous evidence to support the social and economic response. These reviews highlight what is known and actionable, and point to critical questions decision makers need to ask in planning and implementing social and economic responses. Campbell systematic reviews follow carefully structured, peer-reviewed procedures to produce high-quality, theory-based evaluations of social and economic policies and programmes. They address real-world problems, often in partnership with relevant stakeholders, and seek to answer what works, why, and for whom. Our 12 coordinating groups provide broad coverage of social issues, including ageing, business and management, climate solutions, crime and justice, disability, education, international development, knowledge translation and implementation, methods, nutrition and food systems, and social welfare. Our international editorial board supervises the process to produce rigorous evidence syntheses, and strategic partnerships encourage their timely consideration for policy. Campbell systematic reviews have influenced national policy discussions on over 40 topics. They inform international guidelines and support the design and scaling-up of dozens of evidence-based social and economic policies and programmes (Campbell Collaboration, 2020). Campbell also publishes evidence and gap maps, which provide a thorough overview of the body of evidence. They allow decision makers and planners to quickly identify the best available evidence on a topic, remaining evidence gaps, as well as suitable areas to be converted into living evidence reviews (Thomas et al., 2017). For example, the Campbell evidence and gap map on people with disabilities may help inform decisions about health, social engagement, and employment for people with disabilities (Saran, White, & Kuper, 2020) in the aftermath of COVID-19 stringency measures. With this editorial, we provide a virtual issue of 50 Campbell systematic reviews to inform the social and economic response to COVID-19 (Figure 1).
Some reviews have immediate relevance, including how to promote handwashing (De Buck et al., 2017), distribute cash in emergency settings, provide nutrition outreach, intervene for the safety of women and children, and implement evidence-based policing. Lockdown measures put pressure on families. We can learn from the large number of reviews on family functioning, such as promoting the well-being of children exposed to intimate partner violence (Latzman, Casanueva, Brinton, & Forman-Hoffman, 2019). Reviews provide guidance to support vulnerable populations, including the elderly and others needing assistance with daily living. Other reviews cover programmes to strengthen the social safety net, for example, in food security, cash transfers, and care homes. As economies reopen, Campbell reviews offer ideas on how best to get people back to work, including labour activation measures such as youth employment (Kluve et al., 2017), promoting entrepreneurship, and providing vocational training. With global shutdowns in food processing plants and agriculture, we need to increase food production and availability through transport, improving retail access, and outreach to difficult-to-reach areas such as urban slums. Campbell reviews highlight the effects of technological support for farmers, training, and contract farming. Campbell reviews also inform how to restructure government services such as schools, community services, and prisons to support continued social distancing. New evidence syntheses are needed in some areas to answer questions directly related to COVID-19 policies; for example, evidence on the impacts of reopening schools on disease burden, learning and achievement, and family well-being would be most helpful. Reviews provide evidence on alternatives to prison, such as noncustodial sentences (Villettaz, Gillieron, & Killias, 2015), noncustodial employment programmes, and court diversion programmes to keep youth out of the justice system.
Campbell is also taking direct action in response to the pandemic: a partnership with Evidence Aid to produce COVID-19-relevant summaries of Campbell systematic reviews (Evidence Aid Coronavirus COVID-19, 2020); highlighting COVID-19-relevant Campbell reviews with blogs and editorials; a partnership with the COVID-END network to coordinate evidence synthesis initiatives; a fast-track editorial process for COVID-19-relevant articles; development of methods to register rapid systematic reviews, followed by living reviews, to address high-priority questions with rapidly emerging evidence bases (ongoing); and initiatives within practitioner and policy communities, such as priority-setting, webinars, and training.
Campbell Systematic Reviews welcomes registration of new reviews, with a fast-track editorial process, to inform the global COVID-19 social and economic response. Our methodological standards protect against bias and potentially misleading findings. Registration with Campbell protects against research waste since titles and protocols are publicly available and searchable. As the world continues to respond to the COVID-19 crisis, the policy community needs rigorous evidence on options and alternatives. Evidence from Campbell systematic reviews shows what is known on social and economic policies and programmes. Reviews identify the uncertainties to address via policy experiments, pilot tests, and trials. And they identify questions to be answered with further evidence synthesis or primary research.
Donald Campbell's vision of an Experimenting Society (Campbell, 1991), which conducts and learns from policy experiments, is needed now more than ever.
Background: The Campbell Collaboration undertakes systematic reviews of the effects of social and economic policies (interventions) to help policymakers, practitioners, and the public make well-informed decisions about policy interventions. In 2010, the Cochrane Collaboration and the Campbell Collaboration developed a voluntary co-registration policy, with the rationale of making full use of the shared interests and diverse expertise of the different review groups within the two organizations. To promote the methodological quality and transparency of Campbell intervention reviews, the Methodological Expectations of Campbell Collaboration Intervention Reviews (MECCIR) were introduced in 2014 to guide Campbell reviewers. However, there has not been a comprehensive review of the methodological quality and reporting characteristics of Campbell reviews. Objectives: This review aimed to assess the methodological and reporting characteristics of Campbell intervention reviews and to compare the methodological quality and reporting completeness of Campbell reviews published before and after the implementation of MECCIR. A secondary aim was to compare the methodological quality and reporting completeness of reviews registered with Campbell only versus those co-registered with Cochrane and Campbell. Search Methods: We searched the Campbell Library to identify all completed intervention reviews published between 1 January 2011 and 31 January 2018. Selection Criteria: One researcher downloaded and screened all the records to exclude non-intervention reviews based on the reviews' titles and abstracts. A second researcher checked the full text of all the excluded records to confirm the exclusion. In case of discrepancies, the two researchers jointly agreed on the final decision. Data Collection and Analysis: We developed the abstraction form based on mandatory reporting items for methods, results, and discussion from the MECCIR reporting standards Version 1.1, and on additional epidemiological characteristics identified in a similar study of systematic reviews in health. Additionally, we judged the methodological quality and completeness of reporting of each included review. For methodological quality, we used the AMSTAR 2 (A MeaSurement Tool to Assess systematic Reviews 2) instrument; for reporting completeness, we used the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) checklist. We rated reporting as either complete/partial or not reported. We described characteristics of the included reviews with frequencies and percentages, and medians with interquartile ranges (IQRs). We used Stata version 12.0 to conduct multiple linear regressions for continuous data and ordered logistic regressions for ordered data to investigate associations between prespecified factors and both methodological quality and completeness of reporting. Main Results: An increasing trend over time was observed for both the percentage of reviews of high and moderate methodological quality and the median number of PRISMA items reported. Authors' Conclusions: Many features expected in systematic reviews were present in Campbell reviews most of the time. Methodological quality and reporting completeness were both significantly higher in reviews published after the introduction of MECCIR in 2014 compared with those published before.
However, this may also reflect general improvement in the reporting of systematic review methodology over time, or associations with other characteristics that were not assessed, such as funding or the experience of review teams. Reviews co-registered with Cochrane had higher methodological quality and more complete reporting than reviews registered only with Campbell.
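For readers who want to see what the modelling described in this abstract looks like in practice, the following is a hypothetical sketch in R (the study itself used Stata 12.0). The data frame and variable names are invented for illustration and are not the authors' dataset.

```r
# Hypothetical sketch of an ordered logistic regression of AMSTAR 2
# quality rating on review characteristics. The data are simulated;
# variable names are illustrative only.
library(MASS)  # provides polr() for ordered logistic regression

set.seed(1)
reviews <- data.frame(
  rating        = factor(sample(c("critically low", "low", "moderate", "high"),
                                96, replace = TRUE),
                         levels  = c("critically low", "low", "moderate", "high"),
                         ordered = TRUE),
  post_meccir   = rbinom(96, 1, 0.5),  # published after MECCIR (2014)?
  co_registered = rbinom(96, 1, 0.3)   # co-registered with Cochrane?
)

fit <- polr(rating ~ post_meccir + co_registered, data = reviews, Hess = TRUE)
summary(fit)     # coefficients on the log-odds scale, with standard errors
exp(coef(fit))   # odds ratios: odds of a higher quality rating per predictor
```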
With this first double issue of Campbell Systematic Reviews launching our publishing partnership with John Wiley & Sons, the Campbell Collaboration moves one step closer to adhering to the FAIR principles (Findable, Accessible, Interoperable, Reusable) of research (https://www.force11.org/group/fairgroup/fairprinciples). The FAIR principles are central to ensuring reproducibility, transparency, and accountability in science. Since its beginning in 2001, the Campbell Collaboration has held the principle of accessibility at its heart through open-access publication of both the plan for the review (the protocol) and the full review. We publish peer-reviewed, transparent, rigorous systematic reviews in the social sciences as well as methodological guidance. With Wiley, we will continue to publish open access, using a CC-BY 4.0 licence where authors retain the copyright to their own material. We will also continue to welcome copublication with specialty journals which reach specialty practitioners and policy-makers such as police departments, teachers, and school boards. One example is our long-standing relationship with the journal Research on Social Work Practice.
Findability and discoverability were key deciding factors in our motivation to partner with Wiley in January 2019. In our first year, Wiley will ensure indexing in relevant social science databases as well as supporting our communication of review findings within its global social science networks, particularly in Asia. So far, we have documented over 20 policy influence stories of how Campbell systematic reviews have been used in policy documentation and legislation (https://campbellcollaboration.org/blog/must-try-harder-policy-influence-from-campbell-reviews.html). Altmetrics enabled through the Wiley platform will allow us to identify additional press and media stories as well as citations in policy documents. Wiley will also ensure a permanent archive of all content.
Interoperability refers to the ability of data to be used in different ways, for example, by making meta-data such as included references and tables machine readable. The entire Campbell Systematic Reviews collection will be published on the Wiley platform as full-text HTML, which will allow the content of each article to be more easily searched and will also facilitate searching of meta-data. In 2017, Campbell decided to transition to using Revman as its authoring tool, generously provided by the Cochrane Collaboration, a long-time and valued partner. Revman provides a consistent structure for systematic reviews and an efficient way to ensure version control for multiple-author systematic reviews.
Reusability is linked to interoperability, since data need to be interoperable to be reusable. However, data also need to be shared to be reusable. Repositories for systematic review data already exist, such as the SR data repository (Brown University), and similar capabilities will soon be available within the R metafor package. These repositories enable sharing not only of outcome data but also of the coding of study characteristics such as methods, populations, and risk of bias. In the coming years, we encourage Campbell authors to make their data available on platforms such as these, ideally where the data can be cited. Looking ahead, we are expanding article types to better address questions from our communities of practitioners and policy-makers, including qualitative evidence synthesis, evidence and gap maps, reviews of reviews, methods research studies, and guidance.
We are also developing editor tools and training to build capacity in these new article types. We are advancing relevant methods standards; for example, we are forming a working group to consider which types of nonrandomized studies to include, how to synthesize them with randomized trials, and how to appraise their risk of bias. We also expect our qualitative evidence synthesis working group to finalize draft guidance this fall. And we are improving the efficiency, transparency, and accountability of systematic reviews. This will involve trialing automation in relevant steps of the systematic review process, with human verification. We also continue to aim to prevent wasteful overlap or duplication of prior systematic reviews, while at the same time recognizing that replication of systematic reviews may be valuable for policy and practice, for example, to confirm findings, test assumptions, or understand reasons for controversy. With an international working group, we are collaborating on policy guidance on when to replicate systematic reviews and when not to, which will be available in early 2020. We are part of a common movement for greater research transparency, and plan to make more of the (meta)data from our reviews available. I am confident that our move to publish with Wiley and our focus on supporting the publication of high-quality, leading-edge systematic reviews that are relevant for policy and practice will help improve our impact on social policies and the lives of people affected by these policies. Join us in this exciting period of innovation! I would love to hear your ideas to make Campbell systematic reviews faster, more robust, and better fit for purpose. Let's produce better evidence for a better world together.
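To illustrate the kind of reuse that shared, machine-readable data enables, the following brief sketch uses the R metafor package mentioned above to re-analyze one of the example datasets that ships with the package. This is a generic illustration of data reuse, not an analysis of Campbell repository data.

```r
# A small sketch of the reuse that shared, machine-readable data enables:
# metafor ships with classic example datasets (here dat.bcg, the BCG
# vaccine trials), which can be re-analyzed in a few lines.
library(metafor)

dat <- escalc(measure = "RR", ai = tpos, bi = tneg, ci = cpos, di = cneg,
              data = dat.bcg)   # compute log risk ratios and their variances
res <- rma(yi, vi, data = dat)  # random-effects meta-analysis
summary(res)                    # pooled estimate, tau^2, heterogeneity tests
forest(res)                     # forest plot built directly from shared data
```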
Searching for studies in systematic reviews is a critical step that lays the foundation for the remaining stages of the review and synthesis. Searching in the social sciences and other disciplines covered by the Campbell Collaboration comes with added complexities and challenges related to finding and organizing evidence across a rich diversity of sources. To assist Campbell authors and information specialists supporting Campbell reviews in this process, we recently published new guidance (MacDonald et al., 2024) based on the previous guidance document, originally published in 2010 and updated in 2017. The guide was revised to reflect current Campbell Collaboration areas of practice and recommendations in the recently updated Methodological Expectations of Campbell Collaboration Intervention Reviews (MECCIR) (Dewidar et al., 2024), to capture evolving practice and strategies for searching, and to update links and descriptions of individual bibliographic and other resources. It includes helpful templates, lists, and checklists to assist authors in meeting the expectations for the conduct and reporting of Campbell systematic review searches. Here, we provide an overview and highlight some of the key changes and new additions.
The new guidance includes several new sections. Section 1.0, About this Guide, describes who the guide is for: both review authors and information specialists. Also new is Section 2.0, Working with an Information Specialist, which explains the role of the information specialist in the systematic review process. Searching for and retrieving information is a key component of systematic reviews, and information specialists, as experts in search, can play a supporting or collaborative role in the production of these reviews. In Section 4.0, Sources to Search, the list of sources has been placed in an appendix on the Open Science Framework (OSF). The list can now be updated frequently so that accurate and up-to-date information is available to researchers. In this edition, preprint repositories have also been added to the list of potential sources of studies. Section 5.0, Planning the Search, has a new subsection on using seed articles, or benchmarking studies, to help in the construction and validation of the search strategy. Using a set of seed articles can help identify search terms and ensure the search strategy finds relevant studies. Also new to subsection 5.3, Search updates, is the practice of checking for retracted studies. While guidance on how to deal with retracted studies is still under debate (Faggion, 2019), checking for retractions, corrections, errata, and other areas of concern related to included studies should be a routine step in any review. The author team updated Section 6.0, Designing Search Strategies, with a new subsection on identifying search terms (both controlled vocabulary and keywords) and on how to use text mining for selecting terms. Also new is a discussion of predatory publications, providing guidance on deciding how to deal with potentially predatory publications. Subsection 6.5.7, Adapting search strategies across databases, is another addition to this version of the guide, complete with examples. Subsection 6.6, previously called Additional strategies, has been updated and renamed Supplementary search techniques, in keeping with the TARCiS statement by Hirt et al. (2023). A new subsection, 6.8 Peer review of search strategies, has been added.
Peer review of search strategies occurs during standard peer review processes. However, search strategies are complex, and minor typos or syntax errors can have drastic implications for search results and thus for review findings. For this reason, it is recommended that the search, in particular, be peer reviewed as an added checkpoint before manuscripts are submitted. We have also added subsection 6.9, When to stop searching. In searching for studies in the social sciences, especially when included study designs are diverse and much of the research may be found in grey literature, identifying when ‘enough is enough’ can be particularly challenging. This subsection addresses this challenge and provides some considerations for stopping rules when it comes to searching and search strategy development. A new section, 8.0 Selecting Studies, was added to this version of the guide, similar to the Cochrane Handbook. While the selection of studies is not strictly part of the searching step of reviews, there are important information management considerations in the screening phase that the author team, as librarians and information specialists, felt would be helpful to address. Section 9.0, Documenting and Reporting the Search, was updated to include the recently released MECCIR standards (Dewidar et al., 2024) and the PRISMA-S reporting guideline (Rethlefsen et al., 2021). A total of five appendices can be found on OSF: a list of databases by subject, grey literature sources by geography, documenting and reporting templates, a peer review checklist for searches, and a list of abbreviations and definitions used in the guide. We hope that researchers will find these appendices useful for their own systematic searches.
In conclusion, this new document, providing guidance along with templates and checklists, should be a go-to resource for any new or seasoned Campbell review author. In a recent assessment of Campbell systematic reviews, we found that only about 10% of reviews published since 2017 had cited the previous Campbell searching guidance (Young et al., 2024). We hope that the updated version of the Campbell searching guidance will become a routine reference document for all Campbell authors moving forward. We also encourage authors new to conducting systematic review searches in the social sciences to take the Campbell Collaboration's online course on systematic reviews and meta-analysis, which, as of the writing of this editorial, is freely available through the Open Learning Initiative (Valentine et al., 2022); Unit 3 covers searching and is an excellent companion resource to the search guidance. With the support of these resources, and by involving a trained information specialist, researchers will be well equipped to produce thorough, robust, and transparent searches to support high-quality evidence synthesis and contribute to building a credible and trustworthy evidence base.
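As a concrete, if simplified, illustration of the term-identification step discussed in the new subsection on search terms and text mining, the following hypothetical sketch mines a handful of seed article titles for frequent terms in base R. The titles and stopword list are invented; real strategies would use dedicated text-mining tools and much larger seed sets.

```r
# Hypothetical sketch: simple text mining over seed article titles to
# surface candidate keywords for a search strategy (base R; the titles
# below are invented for illustration).
titles <- c(
  "Effects of cash transfers on child poverty: a systematic review",
  "Unconditional cash transfer programmes and household welfare",
  "Cash-based interventions and food security in low-income settings"
)

tokens    <- tolower(unlist(strsplit(titles, "[^[:alpha:]-]+")))
stopwords <- c("a", "and", "of", "on", "the", "in", "for")  # tiny stopword list
keep      <- tokens[!tokens %in% stopwords & nchar(tokens) > 2]
sort(table(keep), decreasing = TRUE)  # frequent terms suggest search keywords
```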
We identified reviews by searching the journal's tables of contents from January 2017 to March 2024. We included all systematic reviews published since 2017. We excluded other types of evidence synthesis (e.g., evidence and gap maps), updates to systematic reviews when search methods were not changed from the original pre-2017 review, and systematic reviews that did not conduct their own original searches. We developed a data extraction form based in part on the conduct and reporting items in MECCIR and PRISMA. In addition, we extracted information about the general quality of searches based on the use of Boolean operators, keywords, database syntax, and subject headings. Data extraction included information about the reporting of sources searched, some aspects of search quality, the use and reporting of supplementary search methods, reporting of the search strategy, the involvement of information specialists, the date of the most recent search, and citation of the Campbell search methods guidance. Items were rated as fully, partially, or not conducted or reported. We cross-walked our data extraction items to the 2019 MECCIR standards and the 2020 PRISMA guidelines and provide descriptive analyses of the conduct and reporting of searches in Campbell systematic reviews, indicating the level of adherence to standards where applicable.
We included 111 Campbell systematic reviews across all coordinating groups published from 2017 up to the search date. Almost all (98%) included reviews searched at least two relevant databases, and all reported the databases searched. All reviews searched grey literature, and most (82%) provided a full list of grey literature sources. Detailed information about databases, such as platform and date range coverage, was lacking in 16% and 77% of the reviews, respectively. In terms of search strategies, most used Boolean operators, search syntax, and phrase searching correctly, but subject headings in databases with controlled vocabulary were used in only about half of the reviews. Most reviews reported at least one full database search strategy (90%), with 63% providing full search strategies for all databases. Most reviews conducted some supplementary searching, most commonly searching the references of included studies, whereas handsearching of journals and forward citation searching were less commonly reported (51% and 62%, respectively). Twenty-nine percent of reviews involved an information specialist co-author, and about 45% did not mention the involvement of any information specialist. When information specialists were co-authors, there was a concomitant increase in adherence to many reporting and conduct standards and guidelines, including reporting website URLs, reporting methods for forward citation searching, using database syntax correctly, and using subject headings. No longitudinal trends in adherence to conduct and reporting standards were found, and the Campbell search methods guidance published in 2017 was cited in only 12 reviews. We also found a median time lag of 20 months between the most recent search and the publication date.
In general, the included Campbell systematic reviews searched a wide range of bibliographic databases and grey literature, and conducted at least some supplementary searching, such as searching the references of included studies or contacting experts. Reporting of mandatory standards was variable, with some items frequently unreported (e.g., website URLs and database date ranges) and others well reported in most reviews.
For example, database search strategies were reported in detail in most reviews. For grey literature, source names were well reported but search strategies were less so. The findings will be used to identify opportunities for advancing current practices in Campbell reviews through updated guidance, peer review processes and author training and support.
The importance of sex and gender considerations in research is being increasingly recognized. Evidence indicates that sex and gender can influence intervention effectiveness. We assessed the extent to which sex/gender is reported and analyzed in Campbell and Cochrane systematic reviews. We screened all the systematic reviews in the Campbell Library (n = 137) and a sample of systematic reviews from 2016 to 2017 in the Cochrane Library (n = 674). We documented the frequency of sex/gender terms used in each section of the reviews. We excluded five Cochrane reviews because they were withdrawn or were published and updated within the same period, as well as 4 Campbell reviews and 114 Cochrane reviews that included only studies focused on a single sex. Our analysis includes 133 Campbell reviews and 555 Cochrane reviews. We assessed reporting of sex/gender considerations for each section of the systematic review (Abstract, Background, Methods, Results, Discussion). In the methods section, 83% of Cochrane reviews (95% CI 80–86%) and 51% of Campbell reviews (95% CI 42–59%) reported on sex/gender. In the results section, less than 30% of reviews reported on sex/gender. Of these, 37% (95% CI 29–45%) of Campbell reviews and 75% (95% CI 68–82%) of Cochrane reviews provided a descriptive report of sex/gender, while 63% (95% CI 55–71%) of Campbell reviews and 25% (95% CI 18–32%) of Cochrane reviews reported analytic approaches for exploring sex/gender, such as subgroup analyses, exploring heterogeneity, or presenting data disaggregated by sex/gender. Our study indicates that sex/gender reporting in Campbell and Cochrane reviews is inadequate.
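To make the analytic approaches named above concrete, the following is a hypothetical sketch of a subgroup (moderator) analysis by sex/gender using the R metafor package. The data are invented for illustration and do not come from the reviews assessed in this study.

```r
# Hypothetical sketch of one analytic approach named above: a subgroup
# (moderator) analysis of effect sizes disaggregated by sex/gender,
# using metafor on invented data.
library(metafor)

set.seed(42)
dat <- data.frame(
  yi  = rnorm(12, mean = rep(c(0.2, 0.5), each = 6), sd = 0.15),  # effect sizes
  vi  = runif(12, 0.01, 0.05),                                    # sampling variances
  sex = rep(c("female", "male"), each = 6)
)

res <- rma(yi, vi, mods = ~ sex, data = dat)  # mixed-effects moderator model
summary(res)  # the 'sexmale' coefficient is the estimated subgroup difference
```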
This is the protocol for a Campbell review. The aim of this study is to comprehensively assess the quality and nature of the search methods and reporting across Campbell systematic reviews. The search methods used in systematic reviews provide the foundation for establishing the body of literature from which conclusions are drawn and recommendations made. Searches should be comprehensive, and reporting of search methods should be transparent and reproducible. Campbell Collaboration systematic reviews strive to adhere to the best methodological guidance available for this type of searching. The current work aims to provide a comprehensive assessment of the quality of the search methods and reporting in Campbell Collaboration systematic reviews. Our specific objectives are: to examine how searches are currently conducted in Campbell systematic reviews; to identify any machine learning or automation methods used, or emerging and less commonly used approaches to web searching; and to examine how search strategies, search methods, and search reporting adhere to the Methodological Expectations of Campbell Collaboration Intervention Reviews (MECCIR) and PRISMA guidelines. The findings will be used to identify opportunities for advancing current practices in Campbell reviews through updated guidance, peer review processes, and author training and support.
The Campbell Collaboration is one organization providing standards for education-related systematic reviews. Librarians are often involved in search strategy development or as research team members for Campbell reviews, which allows us to investigate librarians' impact. This study examines protocols and reviews published by Campbell's Education Coordinating Group for adherence to the search standards from the Methodological Expectations of Campbell Collaboration Intervention Reviews (MECCIR) and the Peer Review of Electronic Search Strategies (PRESS) checklist for evaluating searches. Eligible studies included reviews with protocols published between October 2014 and January 2019. Nineteen matched protocols and reviews were evaluated independently by two authors using a form based on MECCIR and PRESS. We compared adherence differences between protocols and reviews, and adherence differences due to librarian involvement. Overall, the protocols and reviews generally adhered to search standards, with greater adherence for reviews reporting librarian involvement. Implications for education librarians include: encouragement to become familiar with the systematic review process; selecting and following appropriate guidelines and standards for conducting and reporting reviews; developing proficiency with search strategy development and reporting to reduce bias and increase transparency and reproducibility; and advocating for acknowledgement or authorship in publications to give credit for expertise and contributions to SR projects.
This is the protocol for a Campbell systematic review. The objectives are as follows: To identify methods used to assess the risk of outcome reporting bias (ORB) in studies included in recent Campbell systematic reviews of intervention effects. The review will answer the following questions: What proportion of recent Campbell reviews included assessment of ORB? How did recent reviews define levels of risk of ORB (what categories, labels, and definitions did they use)? To what extent and how did these reviews use study protocols as sources of data on ORB? To what extent and how did reviews document reasons for judgments about risk of ORB? To what extent and how did reviews assess the inter-rater reliability of ORB ratings? To what extent and how were issues of ORB considered in the review's abstract, plain language summary, and conclusions?
BACKGROUND: Studies published in languages other than English are often neglected when research teams conduct systematic reviews. Literature on how to deal with non-English studies when conducting reviews has focused on the importance of including such studies, while less attention has been paid to the practical challenges of locating and assessing relevant non-English studies. We investigated the factors that might predict the inclusion of non-English studies in systematic reviews in the social sciences, to better understand how, when, and why these are included or excluded. METHODS: We appraised all Campbell Collaboration systematic reviews (n = 123) published to July 2016, categorising each by its language inclusiveness. We sought additional information from review authors via a questionnaire and received responses concerning 47 reviews. Data were obtained for 17 factors, and we explored correlations with the number of non-English studies in the reviews via statistical regression models. Additionally, we asked authors to identify factors that support or hinder the inclusion of non-English studies. RESULTS: Of 123 reviews, 108 did not explicitly exclude non-English studies, and of these, 17 included them. One factor correlated with the number of included non-English studies across all models: the number of countries in which the members of the review team work (B = 0.56; SE B = 0.24; 95% CI = 0.07-1.03; p = 0.02). This indicates that reviews which included non-English studies were more likely to be produced by international review teams. Our survey showed a dominance of researchers from English-speaking countries (52.9%) and of review teams consisting only of members from these countries (65.9%). The most frequently mentioned challenge to including non-English studies was a lack of resources (funding and time), followed by a lack of language resources (e.g. professional translators). CONCLUSION: Our findings may indicate a connection between the limited inclusion of non-English studies and a lack of resources, which forces review teams to rely on their limited language skills rather than the support of professional translators. If unaddressed, review teams risk ignoring key data and introducing bias into otherwise high-quality reviews. However, the validity and interpretation of our findings should be further assessed if we are to tackle the challenges of dealing with non-English studies.
Aim: This study examines the use and impacts of systematic reviews produced by the Campbell Collaboration’s Social Welfare Coordinating Group (SWCG) on practice, policy, and research. Methods: A mixed-method research design was used to examine the impacts of 52 systematic reviews published by the SWCG. We conducted author surveys and retrieved multiple sources of bibliometric data. Results: Campbell SWCG reviews were downloaded 136,356 times and cited 3,184 times. Most reviews did not receive significant attention in alternative outlets (i.e., social media). Review authors provided evidence that reviews were used directly to make changes in policy or practice or to inform future research. Discussion: Assessing the use and impacts of research is challenging. While downloads and citations provide evidence that these reviews receive attention, it was more difficult to determine the extent to which the reviews were used to influence practice or policy. More work is needed to better track and assess the impacts of Campbell reviews.
This is the protocol for a Campbell systematic review. The study has three main objectives: (1) to examine the time from title registration to publication of the protocol for a Campbell systematic review and to publication of the completed Campbell systematic review; (2) to describe publication times in relation to the characteristics of the reviews, including year of publication, type of review, number of authors, number of collaborating institutions, the time gap between the date the search was conducted and review publication, and the length and complexity of the review (including the number of pages, the number of tables and figures, the number of studies included in the review, the number and type of analyses undertaken, and the number of references); and (3) to describe differences in publication times between Campbell review groups.
Systematic reviews of relevant controlled experiments are required to set the results of individual studies in proper context, and to assess 'what works' in particular areas of social, psychological or educational intervention. In order to minimise bias, people preparing systematic reviews must identify as high a proportion as possible of the potentially eligible studies, but this phase of data collection is extremely tedious because potentially relevant studies are scattered and often very difficult to locate. This paper describes early progress in creating The Campbell Collaboration Social, Psychological, Educational & Criminological Trials Register (C2-SPECTR) to help those preparing and maintaining systematic reviews in these fields. The register currently contains over 10,000 records, including over 300 references to existing systematic reviews.
Practitioners working in social welfare, education, judicial circuits, psychology, and many other domains of the human sciences decide daily on the best treatments for their clients. The authors expect those practitioners to base their decisions on evidence from scientific research. The Campbell Collaboration is an international nonprofit organization that supports the systematic evaluation of the effects of existing and newly arising interventions in the social sciences. In November 2005, 20 local volunteers launched the Belgian Campbell group. The most important tasks of this group are (a) to organize course programs on systematic reviews and (b) to assist Belgian authors willing to contribute to the Campbell Collaboration in writing their protocols and systematic reviews. In this article, the authors introduce the concept of a systematic review and present the first achievements of the Belgian Campbell group, along with its current strengths, weaknesses, opportunities, and threats.
In order to determine what works in reducing crime, systematic reviews of the literature are needed. Systematic reviews have explicit objectives, explicit criteria for including or excluding studies, extensive searches for eligible evaluation studies from all over the world, careful extraction and coding of key features of studies, and a structured and detailed report of the methods and conclusions of the review. The Campbell Collaboration Crime and Justice Group has been established to prepare, continually update, and electronically disseminate systematic reviews of criminological topics. Its international Steering Committee has identified key topics for the first reviews and is moving forward to obtain these reviews and to expand the activities of the Crime and Justice Group.
Current Campbell Collaboration policy on specific methods for use in Campbell systematic reviews of intervention effects
In April 2020, members of the Campbell Collaboration Methods Group and Campbell leadership met to discuss options for creating flexible training opportunities for Campbell reviewers. It was not a coincidence that this meeting occurred at the beginning of the Covid-19 pandemic. But in truth, conversations about how Campbell might increase the effectiveness and reach of Campbell training started at least a decade earlier. Training in systematic review methods has always been important to Campbell: we have a Methods sub-group that is focused on training, and training opportunities have been part of every Campbell annual meeting. In addition, Campbell typically offers one or two standalone workshop sessions a year to outside groups seeking training in systematic review methods. We have never had a good way of evaluating the effectiveness of these one-off training experiences, and in addition, we worried about the cost and access issues associated with in-person training. Further, we could not help but notice that when Campbell training sessions are accessible to a broad audience, they tend to be very popular. As an example of this latter point, David Wilson's presentation on effect sizes and basic issues in meta-analysis, which was part of a training workshop at the Campbell Colloquium in 2011, has been viewed over 49,000 times (Wilson, 2011) as of December 2022.
As we investigated options for addressing questions regarding training effectiveness, resource efficiency, and access equity, we searched for platforms that would allow us to host training materials in an online environment, that could be accessed at little or no cost to users, and that have tools for assessing learning. Ultimately, we chose to work with the Open Learning Initiative (OLI) at Carnegie Mellon University (https://oli.cmu.edu/). Over the course of the next 30 months, a team of seven individuals with expertise in systematic reviews and meta-analysis, led by Jeff Valentine, Julia Littell, and Sarah Young, plus Greg Bunyea, a learning engineer from OLI, devoted thousands of hours to creating a course titled Systematic reviews and meta-analysis: A Campbell Collaboration online course (Valentine et al., 2022). We were ably assisted in this work by Mark Englebert, Jennifer Hanratty, Terri Pigott, and Zahra Premji. The remainder of this essay is devoted to describing the scope of the course, its primary audience, its organization, and the principles we adopted during development.
Systematic reviews and meta-analysis: A Campbell Collaboration online course is aimed at Campbell reviewers and others who want to learn how to find, assess, and synthesize the results of relevant studies to inform policy, practice, and future research; in other words, people who want to learn how to conduct systematic reviews and meta-analyses. We assume that learners will have prior graduate training in research methodology and statistics. We designed the course to be suitable for both classroom and independent learning and view it as equivalent to a textbook or an introductory, graduate-level course in systematic reviewing. It should also work well as an adjunct to in-person workshop training. The content on systematic review methods is relevant to systematic reviews regardless of the nature of the specific research hypotheses being investigated, but the content addressing synthesis methods is focused on the synthesis of quantitative data (meta-analysis).
The course is organized into eight units: Introduction; Problem formulation; Searching the literature; Screening potentially eligible studies; Data extraction and coding; Introduction to effect sizes; Introduction to meta-analysis; and Completing systematic reviews and exploring other synthesis methods. Units are the primary organizing framework, and each unit contains multiple learning modules. For example, Unit 3, Searching the Literature, has modules on the importance of working with an information specialist, how to identify sources to search, how to design database searches, and how to search the grey literature, among others. Most modules have multiple pages that serve to break the material into smaller chunks. For example, the module on Designing Database Searches has pages on combining terms and concepts, using subject headings, and the role of database limiters, among others. In alignment with our design principles, described below, most pages begin with specific learning objectives and end with formative assessment exercises. Student performance on these formative assessment exercises provides critical information about how well the materials are leading students to successfully meet the learning objectives. This feedback will support continual improvement of the course by allowing us to identify where we have been more and less effective.
When we set out to design the course, we committed to a set of principles informed by research and theory regarding how humans learn. We used an outcome-driven curriculum design method known as “backwards design” (Richards, 2013; Wiggins & McTighe, 2005). This method involves beginning by articulating learning objectives, then determining how we will assess whether the learning objectives have been met, and only then creating content. By starting with learning outcomes and intentionally designing curriculum around those outcomes, the course content and assessments are aligned in helping learners achieve the learning goals. We also employ principles of active learning by providing opportunities for practice and formative assessments to test knowledge throughout the course (Koedinger et al., 2015). The practice and assessment activities include meaningful feedback, and in some cases hints, that challenge the learner to think critically about their answers. All assessments are linked in the OLI system to learning outcomes and skills. Thus, a well-designed formative assessment can tell us something about how students are understanding or misunderstanding material, which we can then address through iterations on the course content.
We announced the availability of a pilot version of this course (https://oli.cmu.edu/courses/systematic-reviews-and-meta-analysis/) at the What Works Global Summit in October 2022. We plan on releasing the full course in the early part of 2023. After launch, we will continue to make data-driven improvements in course content and assessments. In the future, we intend to create summative assessments for self-paced and classroom use, and we will explore the feasibility of expanding the course into a certificate program.
BACKGROUND: Systematic literature searching is recognised as a critical component of the systematic review process. It involves a systematic search for studies and aims for a transparent report of study identification, leaving readers clear about what was done to identify studies and how the findings of the review are situated in the relevant evidence. Information specialists and review teams appear to work from a shared and tacit model of the literature search process. How this tacit model has developed and evolved is unclear, and it has not been explicitly examined before. The purpose of this review is to determine whether a shared model of the literature searching process can be detected across systematic review guidance documents and, if so, how this process is reported in the guidance and supported by published studies. METHOD: A literature review. Two types of literature were reviewed: guidance documents and published studies. Nine guidance documents were identified, including the Cochrane and Campbell handbooks. Published studies were identified through 'pearl growing', citation chasing, a search of PubMed using the systematic review methods filter, and the authors' topic knowledge. The relevant sections within each guidance document were then read and re-read with the aim of determining key methodological stages. Methodological stages were identified and defined. These data were reviewed to identify agreement and areas of unique guidance across the guidance documents. Consensus across multiple guidance documents was used to inform the selection of 'key stages' in the process of literature searching. RESULTS: Eight key stages were determined, relating specifically to literature searching in systematic reviews: who should literature search, the aims and purpose of literature searching, preparation, the search strategy, searching databases, supplementary searching, managing references, and reporting the search process. CONCLUSIONS: Eight key stages in the process of literature searching in systematic reviews were identified. These key stages are consistently reported in the nine guidance documents, suggesting consensus on the key stages of literature searching, and therefore on the process of literature searching as a whole, in systematic reviews. Further research to determine the suitability of using the same process of literature searching for all types of systematic review is indicated.