Initiatives in responsible conduct of research (RCR) have often been ineffective because they rest on several problematic assumptions: (1) that integrity issues in biomedical research serve as paradigm cases for research in general, (2) that the primary cause of research misconduct is individual researchers' behavior, (3) that educational interventions alone can prevent research misconduct, and (4) that RCR can be addressed at the level of individual institutions. However, the research ecosystem comprises various partners, including funding agencies, research institutions, professional societies, and accreditation bodies. This study employs a literature review and critical reflection to analyze how the partners comprising the research ecosystem shape research environments, and makes policy recommendations on that basis. Research misconduct should be understood as resulting from misaligned incentives throughout the research ecosystem. Just as institutional cultures shape individuals, the policies of the partners comprising the research ecosystem shape institutional cultures. An ecosystems approach to RCR consists in understanding how these partners depend on each other and using these relations to ensure that each holds the others accountable for promoting the production of valid and reliable research. Viewing RCR through an ecosystems lens highlights the need for coordinated accountability among research partners.
In many parts of the world, an increasing number of clinical healthcare services are delivered through corporations. These corporations are also increasingly required to shape and undertake vital medical research. In this paper we outline the challenges of setting research priorities in corporatised clinics and ensuring that researchers are accountable to society and alert to the broader societal impacts of their work. We propose that the approach to research governance known as "Responsible Innovation" might provide a useful framework for selecting and shaping corporate research priorities so that they are grounded in population health priorities and wider social benefit. Responsible innovation also provides guidance for engaging patients, consumers, regulators and payers in constructive collaboration with researchers; encouraging ethical reflection by both corporations and individual scientists; and promoting responsiveness to contingencies in the processes, outcomes, and reception of research.
Open Access (OA) agreements were introduced to remove financial barriers to scientific dissemination and promote equity in knowledge access. As Article Processing Charges (APCs) have shifted from individual researchers to institutions, access to OA publishing has become an institutional asset, unevenly distributed across institutions, countries, and career stages. This article introduces and defines value extraction in OA - the use of access to APC coverage as leverage to obtain authorship or corresponding authorship without proportional intellectual contribution - and examines it as a structurally enabled integrity risk distinct from previously described forms of authorship abuse. We conduct a conceptual and normative analysis of the mechanisms by which OA agreements interact with metric-driven academic evaluation systems and existing research integrity frameworks, identifying governance gaps and distributional inequities produced by these interactions. Value extraction in OA is enabled by the convergence of three factors: centralized APC control within institutions, performance metrics that privilege publication counts and corresponding authorship, and integrity frameworks that treat publishing infrastructure as an ethically neutral background condition. Researchers at less-resourced institutions, early-career researchers, and scholars in the Global South face heightened vulnerability. Existing authorship guidelines fail to address mechanisms in which infrastructural access - rather than hierarchy or prestige - functions as leverage for academic credit. Safeguards are needed at institutional, publisher, and systemic levels, including procedural firewalls between APC decisions and authorship documentation, publisher-level monitoring of authorship patterns, and reform of evaluation frameworks to decouple infrastructural access from academic credit. 
Future research should investigate the prevalence of value extraction using bibliometric and network-based screening approaches.
In this commentary, I integrate Bjørn Hofmann's thorough analysis of polarization in research with two considerations. First, Hofmann defines polarization as characterized by incommensurable positions. This makes his definition too strict, as hardly any disagreement in modern science, including the cases he discusses, is based on genuine incommensurability. Polarization in research is better characterized in terms of perceived incommensurability between opposite groups. This is not a mere terminological issue. In the absence of genuine incommensurability, talking about incommensurability to describe polarized debates only risks exacerbating them. Second, Hofmann reviews several explanations of polarization but includes only value differences in his definition. Because values are ubiquitous in research, the role of values in polarization should be better qualified. Hofmann's current definition risks suggesting that values are a special feature of polarization, rather than a common feature of scientific research. Switching from the incommensurability to the perceived incommensurability criterion would make Hofmann's definition more precise. Better qualifying the role of values in polarization would make it more consistent with the values in science literature and his own analysis. Both tweaks will help forestall possible risks in communication that could hinder attempts to smooth over polarized debates, including those attempts reviewed by Hofmann.
Since the term predatory publishing was coined in the early 2010s, a significant research literature has emerged that carries warnings about journals issued by such publishers while signaling the virtues of mainstream publishing. Three narratives that support the negative framing of predatory journals were identified: (1) they prioritize profit over scholarship; (2) they are assessed using qualitative warning signs rather than robust quality indicators; and (3) they are seldom named in editorial interventions, generating uncertainty about the domain of predatory publishing. Challenges of differentiating between the quality standards of mainstream and other journals are examined by applying a "warning list" of criteria to a grey publisher representative of the boundary between legitimate and illegitimate publishing, then analyzing editorial and production attributes of a cross-section of health science journals, with a range of impact factors, indexed within a bibliometric database. Use of predatory "warning signs" has hampered progress in evaluating the relative qualities of mainstream and other journals, and has meant that innovations associated with some non-mainstream journals have been overlooked (e.g. process efficiencies in peer review and the sharing of production process data). Sources of editorial and production practice data for comparing all journals are incomplete and dispersed. More complete quality indicators for all journals, including authors' experiences of publishing, need to be openly shared and externally validated. Research funders can influence publishers' behavior by making open access funding contingent upon journals meeting both quality and timeliness indicators for peer review.
In efforts to improve replication rates across the sciences, graduate student training can foster an understanding of best practices. One consideration is to identify the psychological underpinnings that motivate early-career researchers to avoid questionable research practices (QRPs) and engage in transparent research behaviors. Recent findings demonstrate the efficacy of leveraging identificatory processes, that is, how researchers identify with ethical science. This study examined whether the extent to which individuals incorporate ethical scientific principles into their identities can discourage engagement in QRPs. As part of a baseline data collection effort for a systemic ethics training program at a Carnegie R1 institution, graduate students provided initial measures assessing endorsement of scientific values as outlined by the National Academies of Sciences, Engineering, and Medicine (NASEM) and the extent to which those values are part of their identity. They also reported their perceptions of the defensibility of various QRPs and their willingness to engage in them. Greater endorsement of NASEM values was associated with less endorsement of QRPs. This association was mediated by the inclusion of these values in one's own identity. Results provide initial evidence for how institutions can foster the psychological profile of an ethical researcher when developing training modules for graduate students.
Despite the importance of showcasing research achievements and safeguarding research integrity, our understanding of how Chinese universities navigate these potentially competing priorities remains limited. In response, this study investigated 579 Chinese universities on the 2024 Stanford lists of the world's top 2% scientists (WTSs) and operationalized their fulfillment of the dual priorities in terms of institutional visibility (i.e. public institutional responses to the release of the 2024 Stanford lists of WTSs and to the government requirements for safeguarding research integrity) and institutional responsiveness (i.e. promptness in publishing news reports featuring WTSs and releasing annual research integrity reports). In this connection, three types of publicly accessible official documents were analyzed: 1) news reports featuring WTSs, 2) academic integrity webpages, and 3) annual research integrity reports disclosing integrity investigations. Among these universities, 28.5% published news reports featuring WTSs, 52.8% maintained academic integrity webpages, and 16.8% released annual research integrity reports. Furthermore, significant variations were found across four contextual factors: university prestige (elite universities vs. non-elite universities), retraction status (universities hosting retraction-afflicted WTSs vs. universities hosting retraction-free WTSs), the number of WTSs, and the prevalence of retraction-afflicted WTSs.
Authorship remains the primary currency of academic credit and a cornerstone of research integrity, yet current practices often fail to reflect the collaborative and interdisciplinary nature of modern science and questionable authorship practices persist. We argue that addressing these shortcomings is a collective responsibility shared by researchers, journals, research funders, scholarly societies, and research institutions. We examined authorship guidelines issued by journals and research institutions and found that their recommendations to researchers are highly variable. We propose that fostering a responsible authorship culture requires a shared, principle-based framework grounded in transparency, credit, and accountability. These three interconnected principles highlight when authorship practices are questionable and offer a framework for constructive reflection on the meaning of authorship. We outline practical ways research leaders can embed these principles into everyday practice by initiating early, inclusive, and fair authorship discussions and ensuring transparent description of contributions. Research institutions have a unique opportunity to inculcate good practices and lead this culture change with harmonized guidance, education, fair conflict resolution, and reform of researcher assessment. Anchoring authorship in transparency, credit, and accountability will strengthen the credibility of individual research, the fairness of recognition systems, and, ultimately, the trustworthiness of science itself.
A number of proposals across different fields have suggested incorporating "independent" actors into the research process as a way to manage potential bias. For example, in response to allegations of bias in psychedelic science, some have suggested the idea of independent auditors for adverse events, as well as the incorporation of independent researchers into the research teams of psychedelic trials. However, despite growing interest in these methods, the concept of independence itself remains frequently undefined. Moreover, although introducing independent actors may seem like a prima facie beneficial solution to help reduce bias and improve the scientific rigor of research, it may come with significant drawbacks as well. Here, we argue that the sense of independence on which these proposals for independent actors implicitly rely is freedom from any influence that might alter the actors' choices in a way that reduces the trustworthiness or accuracy of research findings. Whether it is possible to identify and involve such actors without incurring trade-offs with other scientific desiderata (e.g. due to the risk of inadequate expertise) is then further explored. We conclude by providing two models in law and science that may be helpful to draw upon when seeking to incorporate independent actors.
In this article, we discuss the growing problem of hallucinated citations produced by Generative Artificial Intelligence (GenAI) in scholarly research and writing. We argue that GenAI hallucinated citations might qualify as a provable instance of research misconduct under the U.S. federal regulations when a) the researcher uses a GenAI tool to produce hallucinated (i.e., nonexistent) citations for a research document; b) the citations function as data because they directly support research findings, as in, for example, review articles or bibliometric studies; and c) the researcher demonstrates indifference to the risk of fabrication of the data (i.e. citations) because they did not check the GenAI's output for veracity and accuracy. Other types of problematic citations such as bibliometrically incorrect citations, or contextually inaccurate citations, are indicative of poor scholarship and irresponsible behavior, but do not qualify as research misconduct. Recognizing that GenAI hallucinated citations could be regarded as research misconduct in certain cases will hopefully encourage researchers to take this problem more seriously than they do now. In partnership with scientific institutions, funders and professional societies, the scholarly community should work on establishing, promoting, and enforcing standards for responsible use of AI in research, including standards pertaining to citation practices.
This study investigates the awareness, perceptions, and responses of library and information science (LIS) researchers toward retracted papers, aiming to inform the improvement of research integrity governance. A questionnaire survey of 280 LIS researchers examined their sources of retraction information, understanding of causes, perceived consequences, and attitudes toward evaluation. The influence of academic background, publication volume, and discipline was also explored. Findings indicate generally low retraction awareness and a primary reliance on informal channels. Critically, the analysis reveals several nuanced patterns: (1) Significant disciplinary differences exist in perceiving retraction causes; (2) Opinions are sharply divided on including retraction records in research evaluation, reflecting concerns about uniform responsibility attribution; (3) A considerable proportion of researchers mistakenly view retraction's impact as reversible. These attitudes are strongly associated with educational background and publication experience. In response, this paper proposes five key recommendations: establishing authoritative retraction platforms, improving journal retraction mechanisms, differentiating retraction types in evaluation, strengthening integrity education, and building a coordinated governance framework. These measures contribute to fostering a more transparent, fair, and sustainable scholarly correction ecosystem.
Scientific fraud, particularly within medical journals, is a critical and complex issue due to its potential consequences for public health. Medical research plays a crucial role in informing scientific innovation, guiding clinical practice, and driving advancements in treatments. False or misleading research can influence future discovery, lead to ineffective or harmful treatments, waste valuable resources, and erode public trust. A troubling increase in misconduct, including data fabrication and falsification, has recently been noted, although it is unclear whether this partially reflects the development of better methods of detection. Addressing the issue of scientific fraud in medical journals requires a concerted effort from all stakeholders involved, including researchers, journal editors, peer reviewers, funding agencies, and regulatory bodies. Implementing robust measures for detecting and preventing fraud, promoting transparency and accountability in research practices, and fostering a culture of integrity and ethical conduct are all essential steps toward safeguarding the integrity of medical research.
The lack of diversity in research participation poses a threat to health equity and the ethical principle of justice, yet few evidence-based interventions exist. This study compared two educational programs for research teams designed to build capacity for inclusive recruitment practices. This parallel cluster randomized trial compared outcomes generated by an anti-bias focused educational workshop and one emphasizing pro-diversity learning. The evaluation consisted of pre-/post-intervention (n = 124) and 3-month follow-up surveys (n = 83). Regression analysis was employed to evaluate program efficacy and the adoption of simple behaviors, comparing groups at follow-up while controlling for pre-intervention measurement levels and using propensity weights. Interviews (n = 33) with participants explored their post-intervention experiences. There were no statistically significant differences in outcomes between the test and control groups. Both workshop versions increased participants' self-efficacy and simple behaviors, including "thinking about community perspectives" and "identifying ways to increase community voice," at follow-up. Participants in the test group were the only ones to show a significant within-group increase in "making suggestions" to their teams about using inclusive strategies (p = .02) and in increasing community voice (p = .00). Qualitative data indicate that pro-diversity activities provided participants with concrete ideas for suggestions and revealed persistent barriers participants faced post-intervention.
Black individuals are more likely to die from colorectal cancer (CRC) and experience more treatment-related side effects compared to White individuals. Physical activity (PA) has been associated with decreased side effects, improved CRC treatment completion rates and responses, and survival. However, Black survivors of CRC are 60% less likely to engage in PA than White survivors. The Physical Activity Centers Empowerment (PACE) study is testing an intervention specifically designed to increase PA among Black individuals diagnosed with CRC. This study outlines the protocol for a randomized controlled trial. The study aims to test the feasibility of PACE and will use the reach, effectiveness, adoption, implementation, and maintenance (RE-AIM) framework. The PACE study was developed in partnership with a community advisory board consisting of Black cancer advocates and survivors of cancer. The study aims to recruit 72 participants aged >18 years from North Carolina who have been diagnosed with CRC. These participants will be randomized in a 1:1 ratio to an intervention or control group. During the 12-week intervention, all participants will receive a wearable activity tracker and informational materials from the American College of Sports Medicine's "Moving through Cancer" program. The intervention group will also receive additional PACE theory-guided intervention components, including personalized daily adaptive step goals, access to the PACE video library, and optional video chat meetings for PA support. Data will be collected at 3 time points: baseline, after the intervention (3 months), and 6 months after the intervention (9 months). Using the RE-AIM framework, the study aims to evaluate the intervention's reach, effectiveness, adoption, implementation, and maintenance. The National Institute on Minority Health and Health Disparities funded this study in 2021. Study enrollment began in August 2024 and is anticipated to conclude in December 2024.
This study will advance our understanding of effective behavioral strategies to increase PA and help advance the use of PA as a form of complementary cancer treatment, with the aim of improving health outcomes for Black survivors of CRC. ClinicalTrials.gov NCT06411756; https://clinicaltrials.gov/study/NCT06411756. DERR1-10.2196/65804.
Generative Artificial Intelligence (GenAI) significantly enhances medical research efficiency but raises ethical concerns regarding research integrity. The lack of systematic guidelines for its ethical use underscores the need to investigate GenAI's impact on researchers' awareness and behavior concerning integrity. A cross-sectional survey of 718 valid responses from Chinese medical researchers assessed GenAI's impact on research integrity using an extended Unified Theory of Acceptance and Use of Technology (UTAUT) model. The findings reveal that performance expectancy, effort expectancy, technical environment, trust in technology, and supporting conditions positively influence researchers' awareness of research integrity. Conversely, GenAI anxiety and perceived risks exert a significant negative impact. Furthermore, both supporting conditions and integrity awareness are positively associated with integrity behavior, while GenAI anxiety negatively affects such behavior. The stakeholders in the medical research ecosystem should develop comprehensive guidelines for the responsible use of GenAI. Emphasis should be placed on optimizing the technical environment, enhancing trust and support structures, and embedding integrity safeguards, thereby promoting the synergistic development of technological innovation and ethical research practices.
The integration of generative artificial intelligence (GAI) in research raises concerns about transparency, accountability, and task delegation. While frameworks such as CRediT and the NIST AI Use Taxonomy address contributions to research, they either exclude AI-assisted input (CRediT) or do not provide a stage-specific approach (NIST). A structured taxonomy is needed to delineate GAI's contributions across research stages while preserving human oversight and research integrity. This study introduces the Generative AI Delegation Taxonomy (GAIDeT), informed by existing contributor role taxonomies, peer-reviewed literature, and an iterative consensus-building approach. It categorizes GAI's contributions at macro and corresponding micro levels, specifying the degree of human oversight required. GAIDeT provides a structured framework for documenting GAI's role in scholarly research. It classifies research activities into key domains - conceptualization, literature review, methodology, data analysis, writing, supervision, and ethical review - ensuring transparency and human accountability. A GitHub-based interactive tool - the GAIDeT Declaration Generator - was developed to help researchers document delegation choices transparently. By standardizing GAI task delegation, GAIDeT enhances research integrity and transparency. Future work should focus on empirical validation, cross-disciplinary adaptability, and policy implications for GAI governance.
The 2024 Stanford career-long list of the world's top 2% scientists (WTSs) by citation impact, which for the first time includes retraction data, offers a unique opportunity to explore research integrity within this group of elite researchers. This study examines the retraction data across multiple dimensions, including countries/regions, institutions, research domains, fields, and subfields, using three key metrics: the prevalence of WTSs with retractions, the retraction rate, and the citation rate of retracted publications. Our analysis reveals significant variations in these retraction metrics by country/region income level, level of seniority in academic publishing, and research domain. Significant differences were also observed between China and the USA. Based on these findings, we argue that elite researchers should be held accountable and sanctioned for their retractions. Accordingly, we propose a ranking-based sanction framework for identifying and ranking WTSs with retractions, and this framework is applied to the 2024 Stanford career-long list to illustrate its practical applicability. We discuss the findings and their implications for addressing retractions among elite researchers, as well as strategies for refining the ranking-based sanction framework.
Public trust in research depends in part on the capacity of the system to detect and correct errors in the research record. In Australia, this task is largely entrusted to research institutions through a self-regulatory framework. The present article seeks to contribute to ongoing conversations about whether the Australian framework is fit for purpose. Here, we assemble and analyze two sources of information that have each been analyzed previously: research misconduct investigation policies at Australian Group of Eight universities (a group that purports to comprise Australia's leading research-intensive universities) and published decisions and appeals arising from workplace disputes involving allegations of research misconduct. Together, these materials support existing concerns that universities are not adopting robust policies for reporting findings of misconduct and correcting the record, and that they sometimes fail to follow their own policies. Claims that the current self-regulatory approach is sufficient are not supported by our evidence. These findings provide a foundation for reform, including revisions to the existing guidelines and the creation of an independent oversight body with adequate enforcement powers.
Three oversight bodies review research proposals to help ensure the safe and responsible conduct of biomedical research, each focusing on unique aspects of research ethics: institutional review boards (IRBs), institutional biosafety committees (IBCs), and institutional animal care and use committees (IACUCs). The role of artificial intelligence (AI) in research oversight is rapidly expanding, specifically in preparing and reviewing applications. Although using AI may reduce administrative costs and burdens, it may also create new concerns, since AI tools can make mistakes of fact and reasoning and are susceptible to bias. Furthermore, outsourcing the ethical planning and oversight of research to AI could compromise ethical understanding. Although the arguments for/against using AI in the preparation or review of IRB, IBC, or IACUC applications differ fundamentally from those concerning AI use in manuscript writing/peer review, there is currently minimal guidance about the responsible use of AI in research oversight from government agencies, professional organizations, universities, hospitals, and other entities that conduct research. We argue that 1) to minimize the risks of using AI in research oversight, additional guidance is urgently needed; and 2) humans must always be the final decision-makers, because ethical planning and oversight involve value judgments that should not be outsourced to AI.
Despite a robust literature on the relationship between protective behavioral strategies, also known as harm reduction strategies (HRS), and alcohol-specific harm reduction (Cox et al., 2024; Peterson et al., 2021), limited formative research has been conducted in the last decade to update the types of HRS currently being implemented. This study utilized qualitative data from open-ended questionnaires to identify HRS recommended by college students who drink heavily. Qualitative data were collected from 179 heavy-drinking college students (61% women, 49% White). Students responded in writing to a computer-delivered, open-ended prompt soliciting suggestions for preventing alcohol-related consequences of drinking. Coders independently coded the written responses, resolving discrepancies via consensus, and then used thematic analysis to identify key themes in the data. Essays ranged from 37 to 700 words (M = 261.96, SD = 130.84), revealing 28 distinct HRS. While many strategies aligned with items on commonly used protective behavioral measures, new strategies were also revealed (e.g., social accountability and bodily awareness). Findings revealed that HRS can be organized according to when they might occur: before, during, and/or after a drinking event. Additionally, some HRS required social support, while others could be implemented independently. The results suggest a novel framework for understanding the HRS adopted by heavy-drinking college students. The temporal and social dimensions of the HRS described in this study differ from the many assessments that typically concentrate on strategies a drinker can use during drinking events. Prevention efforts could benefit from expanding the pool of potential HRS.