Abstract

The traditional publication process delays dissemination of new research, often by months, sometimes by years. Preprint servers decouple dissemination of research papers from their evaluation and certification by journals, allowing researchers to share work immediately, receive feedback from a much larger audience, and provide evidence of productivity long before formal publication. Launched in 2013 as a non-profit community service, the bioRxiv server brought preprinting to the life sciences and recently posted its 310,000th manuscript. The server now receives around ten million views per month and hosts papers spanning all areas of biology. Initially dominated by evolutionary biology, genetics/genomics and computational biology, bioRxiv has been increasingly populated by papers in neuroscience, cell and developmental biology, and many other fields. bioRxiv and its sister server, medRxiv, also played a critical role during the pandemic, rapidly disseminating new discoveries in immunology, virology and epidemiology related to the SARS-CoV-2 virus and its effects. Changes in journal and funder policies that encourage preprint posting helped drive adoption, as did the development of bioRxiv technologies that allow authors to transfer papers easily between bioRxiv and journals. A recent user survey found that 30% of authors post their preprints weeks to months before submitting to a journal, whereas 55% post around the time of journal submission. Authors are motivated by a desire to share work early; they value the feedback they receive and very rarely experience any negative consequences of preprint posting. Rapid dissemination via bioRxiv is also encouraging new initiatives that experiment with the peer review process and the development of novel approaches to literature filtering and assessment.
In an era of evidence-based medicine, peer review is an engine and protector of that evidence. Such evidence, vetted by and surviving the peer review process, serves to inform clinical decision-making, providing practitioners with the information to make diagnostic and therapeutic decisions. Unfortunately, there is recent and growing pressure to prioritize the speed of research dissemination, often at the expense of careful peer review. It is timely to remind readers and the public of the value brought by peer review, its benefits to patients, how much the public trust in science and medicine rests upon peer review, and how these have become vulnerable.

Peer review has been the foundation of scholarly publishing and scientific communication since the 1665 publication of the Philosophical Transactions of the Royal Society. The benefits and advantages of peer review in scientific research, and particularly medical research, are manifold and manifest.1 Journals, editors, and peer reviewers hold serious responsibility as stewards of valid information, with accountability to the scientific community and an obligation to maintain the public trust. Anesthesiology states its aspiration and its responsibility on the cover of every issue: Trusted Evidence. Quality peer review (more specifically, closed or single-blind peer review, in which the identity of reviewers is confidential) is a foundational tenet of Anesthesiology.

Peer review grounds the public trust in the scientific and medical research enterprise, as well as the substantial public investment in scientific research. Peer review affords patients some degree of comfort in placing their trust in practitioners, knowing that they should be informed by the best possible, vetted evidence.

Quality peer review enriches and safeguards the scientific content, transparency, comprehensibility, and scientific integrity of published articles.
It can enhance published research importance, originality, authenticity, scientific validity, adherence to experimental rigor, and correctness of results and interpretations and can identify errors in research execution. Peer review can help authors improve reporting quality, presentation clarity, and transparency, thereby enhancing comprehension and potential use by clinicians and scientists. Careful scrutiny can identify whether research has adhered to appropriate ethical principles, obtained regulatory approvals, maintained compliance, and achieved equitable inclusion of both sexes. Peer review should consider the appropriateness of authorship and can detect duplicate publication, fabrication, falsification, plagiarism, and other misconduct.

Peer review should serve as a tempering factor on overenthusiastic authors and overstated conclusions, unwarranted extrapolations, conflation of association with causality, unsupported clinical recommendations, and spin. Spin is a well-known, unfortunately common, and often insidious bias in the presentation and interpretation of results that seeks to convince readers that the beneficial effect of an experimental treatment exceeds what has actually been found or that minimizes untoward effects.2–4

Manuscripts often change substantially between the initial submission and the revised and improved published version. Improvement during the peer review process is not apparent to readers, who only see the final, published article, but is well known to authors, reviewers, and editors. Peer review is a defining difference in an era of proliferating predatory journals and other forms of research dissemination.
Anesthesiology reviewers and editors devote considerable effort in service to helping authors improve their scientific communications, whether ultimately published in this journal or elsewhere.

In the domain of clinical research, peer review does not change the scientific premise of an investigation, the hypothesis, or the study design, although it frequently improves their communication. Peer review does not change clinical research data, although it often corrects, enhances, or strengthens the statistical analysis of those data and can markedly improve their presentation and clarity. More importantly, peer review can assess, correct, and improve the interpretation, meaning, importance, and communication of research results, and confirm that conclusions emanate strictly from those results. Peer review may occasionally fundamentally revise or even reverse clinical research interpretations and recommendations. Each of these many functions enhances reader understanding and should ultimately improve patient care.

Peer review is not a guarantee of truth, and it can be imperfect. Medical history provides many examples of peer-reviewed research that was later found to be incorrect, typically through error or occasionally from misconduct. However, peer review certainly was and remains an essential initial check and quality control that has weeded out, or corrected before publication, innumerable reports of research of insufficient quality or veracity that otherwise would have been published and thereby become publicly accessible. Additionally, science should be “self-correcting,” and peer review remains one of the most important mechanisms by which medical science achieves the self-correction that drives progress.

Quality peer review does take time. So also do the initial preparation of manuscripts and the modifications made by authors in response to peer review.
Anesthesiology endeavors to provide both quality and timely peer review. Our time to first decision averages only 16 days.

The increasing emphasis on fast research dissemination, often absent quality peer review, stems mostly, but not exclusively, from the immediacy of the internet and broader media and societal trends. In an era in which the companies whose major product is the immediacy of information are the economic leaders (Facebook, Twitter, Google, and Apple), it is unsurprising that immediacy is challenging quality as the value proposition in the research marketplace. Nevertheless, fast is not synonymous with good. We believe that sacrificing quality on the altar of speed is unwise, benefits no one (except perhaps authors), and may ultimately diminish trust in medical research and possibly even worsen clinical care.

Another recent societal problem is the growing spillover of political and media communication trends into scientific communication. Almost half of Americans believe that science researchers overstate the implications of their research, and three in four think “the biggest problem with news about scientific research findings is the way news reporters cover it.”5 Scientific conclusions may be perverted through internet-based campaigns of disinformation and misinformation and dissemination of misleading and biased information.6 This threatens the public trust in the scientific enterprise and scientific knowledge.7 Social media has made science and health vulnerable to strategic manipulation.7,8 It is also “leaving peer-reviewed communication behind as some scientists begin to worry less about their citation index (which takes years to develop) and more about their Twitter response (measurable in hours).”8 Peer-reviewed journals cannot reverse these trends, but they can at least ensure that scientific conclusions when presented are correct and clearly stated.

In addition to the premium on dissemination speed versus peer
review quality, a new variant of rapid clinical research dissemination has emerged that abrogates peer review entirely: preprints. Preprints are research reports that are posted by authors in a publicly accessible online repository in place of or before publication in a peer-reviewed scholarly journal. The preprint concept is decades old, rooted in physics and mathematics, in which authors traditionally sent their hand- or typewritten manuscript draft to a few colleagues for feedback before submitting it to a journal for publication. With the advent of the internet, this process was replaced by preprint servers and public posting. With the creation of a preprint server for biology and the life sciences (bioRxiv.org), the posting of unreviewed manuscripts by basic biomedical scientists has exploded in popularity and practice. Next came the creation of medRxiv.org, a publicly accessible preprint server for disseminating unpublished and unreviewed clinical research results in their “preliminary form”9 and, moreover, a call for research funders to require mandatory posting of their grantees’ research reports first on preprint servers before peer-reviewed publication.10

Lack of peer review is the hallmark of preprints. The main arguments offered by proponents of preprints are the free and near-immediate access to research results, claimed acceleration of the progress of research by immediate dissemination without peer review, and the assumption that articles will be improved by feedback from a wider group of readers alongside formal review by a few experts.
Specifically claimed advantages of preprints are that they bypass the peer review process that adversely delays the dissemination of research results and “lifesaving cures” and “the months-long turnaround time of the publishing process and share findings with the community more quickly.”11 In addition, it is claimed that preprints address “researchers recently becoming vocally frustrated about the lengthy process of distributing research through the conventional pipelines, numerous laments decrying increasingly impractical demands of journals and reviewers, complicated dynamics at play from both authors and publishers that can affect time to press” and enable “sharing papers online before (or instead of) publication in peer-reviewed journals.”11

Preprints for clinical research have been justifiably criticized.2,12–15 Most importantly, medical preprints lack safeguards afforded by peer review and increase the possibility of disseminating wrong or incorrectly interpreted results. Related concerns are that preprints are unnecessary for and potentially harmful to scientific progress and a significant threat with potential consequence to patient health and safety. Preprint server proponents “assume that most preprints would subsequently be peer reviewed,”10 possibly before or after formal publication (if published), thus enabling correction or improvement (before or after publication). However, it is estimated that careful peer review of a manuscript takes 5 to 6 h.1,16 It seems highly unlikely that busy scientists will surf the web in search of preprints on which to spend half a day providing concerted informative peer review.

Preprint enthusiasts claim that peer review after posting will provide scholarly input, facilitate preprint improvement, and enhance research quality. In fact, such peer review has been scant with biologic preprints, and it seems naïve to expect it with medical preprints.
In reality, most preprints receive few comments, even fewer formal reviews, and many comments that are “counted” to support the notion that preprints do undergo peer review actually come through social media; a tweet is hardly a substantive review. The idea that comments on servers will replace quality peer review is not happening now and seems unlikely to transpire. Moreover, a survey found that the lack of peer review was an important reason why authors deliberately choose to post via preprint.17 Additionally, postdissemination peer review takes longer than traditional prepublication peer review, and there remains concern by authors who do value peer review about the quality of the post-preprint peer review process and the quality of posted preprints.17

Preprint server proponents state “the work in question would be available to interested readers while these processes (peer review) take place, which is more or less what happens in physics today.”10 The lives of patients are different from the lives of subatomic particles. Preprint posting deliberately “decouples the dissemination of manuscripts from the much slower process of evaluation and certification.”10 However, it is exactly that coupling that validates clinical research, benefits patients, improves health, and engenders public trust.

The potential for free and unfettered distribution of raw, unvetted, and potentially incorrect information to be consumed by clinicians and patients cannot be called a medical advance. Use of such information by news outlets and online web services to promote “new” and “latest” research further misinforms the public and patients and is a disservice.

Relegating peer review to the realm of option and afterthought is not in the interest of research quality and integrity or of patients and public health.
There is no apparent value in abrogating peer review of clinical research and all its many attendant benefits in ensuring the quality of clinical research available to practitioners and patients. Practitioners and patients have historically not seen the unreviewed manuscript submissions that eventually become revised peer-reviewed publications. Doing so now, by providing the public with unreviewed preprints, seems to carry considerable risk, given the sizable fraction of clinical research manuscripts that are rejected for publication and the substantial changes made to most that are published.

An additional problem is that the same research report can be posted on several preprint servers or websites, or multiple versions may exist on the same preprint site. Various versions may be the same or different, and the final peer-reviewed published article (if it ever exists) may bear little resemblance to the various posted versions, which remain freely available. Which version is correct? Availability of various differing reports of the same research risks competing or incorrect information and can only generate confusion. Scientific publishing decades ago banned publication of the same research in multiple journals owing to concerns about data integrity and inappropriate reuse. Restarting this now, via preprints, seems unwise, especially in medicine.

The public cannot and should not be expected to differentiate between posting and peer-reviewed publication. Unfortunately, and worse, even some practitioners do not understand the difference. Posting is often referred to erroneously as publication. Indeed, even the world’s most prestigious scientific journals refer to posting as publication.18 Such conflation blurs the validity of information. That peer-reviewed publications and preprints both receive digital object identifiers further blurs their distinction and may give the latter more apparent credibility in the eyes of the lay public.
The preprint community (servers and scientists) continues to claim simultaneously that preprints are and are not publications, depending on which claim suits its purposes. Although the bioRxiv server contains the disclaimer “readers should be aware that articles on bioRxiv have not been finalized by authors, might contain errors, and report information that has not yet been accepted or endorsed in any way by the scientific or medical community” on a web page,19 it is not on the preprint itself for readers to see (perhaps this disclaimer, and the one below, should appear on the cover page of every preprint and as a footnote on every page). Fortunately, the medRxiv home page (http://www.medrxiv.org) states the following disclaimer: “Preprints are preliminary reports of work that have not been certified by peer review. They should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.” Then why bother?

The popularity of preprints in the basic science world has exploded in the last 5 yr, with the number of documents posted to preprint servers increasing exponentially.20 While acknowledging the noble reasons given by preprint servers and authors for the dissemination of research by posting, three other apparent reasons are less noble. The first is competition for research funding. Major research funders (e.g., the National Institutes of Health) do not allow citation of unpublished manuscripts in grant applications but do allow citation of preprints.21,22 The second is the preoccupation of authors with the speed of availability. There is a growing (and disappointing) trend of authors perceiving a need to claim priority (“we are the first to report…”), grounded perhaps on fear of being “scooped.” The third is the pursuit of academic promotion, which is based largely on the number of peer-reviewed publications listed on a curriculum vitae.
We now see faculty listing preprints in the peer-reviewed research publications section of their curriculum vitae. All these drivers (priority, science advancement, reputational reward, and financial return)7 are investigator-centric. They are neither quality-centric nor patient-centric.

Who benefits if clinical research quality is sacrificed at the altar of speed? Certainly, it is not patients, public health, or the public trust in science, medicine, and the research enterprise. Enthusiasm for preprints seems to be emanating mostly from investigators, presumably because of academic or other incentives,23 including the desire for prominence and further funding. Is this why we do medical research? Should we be investigator- or patient-centric?

Little in the argumentation espoused by proponents of clinical preprints attends to their benefit to patients. Indeed, posted preprints without all the scrutiny and benefits of peer review may lack quality and validity and may report flawed data and conclusions, which may hurt patients.17,23 As stated previously, “clinical studies of poor quality can harm patients who might start or stop therapy in response to faulty data, whereas little short-term harm would be expected from an unreviewed astronomy study.”12

The importance of peer review in clinical research and the downside of its absence in posted preprints are illuminated by the COVID-19 pandemic. As of this date (October 1, 2020), there are 9,222 unreviewed COVID-19/SARS-CoV-2 preprints posted: 7,257 on medRxiv and 1,965 on bioRxiv.24 To date, 33 COVID-19 articles have been retracted (0.37%), and 5 others have been temporarily retracted or have expressions of concern.25 Of the 33 retractions, 11 (33%) were posted on an Rxiv server. The overall retraction rate in the general peer-reviewed literature is 0.04%.26

Based upon one of the unreviewed COVID-19 medical preprints,27 the Commissioner of the U.S.
Food and Drug Administration (the government agency entrusted more than any other to protect public health) and the President of the United States announced that convalescent plasma from COVID-19 survivors was “safe and very effective” and had been “proven to reduce mortality by 35%.”28 Although the Commissioner later, after scientific uproar over that misinformation, “corrected” his comment in a tweet (a back page retraction to a front page headline),29 the preprint was used to justify a Food and Drug Administration decision to issue an emergency use authorization for convalescent plasma to treat severe COVID-19. Would these errors have been prevented by peer review? We will never know.

Even if priority in clinical (and basic) research is valued (compared to the unquestionable value of quality), clinical preprints are of questionable necessity for establishing precedence in contemporary times. Clinical trials registration, which makes fully public the existence of all such research, establishes both who is doing what and when. Some investigators may even publish their entire clinical protocol, further making their studies known, and by whom and when.

For hundreds of years, patent medicines (exotic concoctions of substances, often addicting and sometimes toxic) were claimed to prevent or cure a panoply of illnesses, without any evidence of effectiveness or safety or warning of potential harm. These medical elixirs, the magic potions of snake oil salesmen and charlatans, were heavily advertised and promoted to ailing, sometimes desperate, and thoroughly unsuspecting citizens, all without any oversight, regulation, quality control, or peer review. It was not until the 20th century that medical peer review and the requirement for evidence of effectiveness and safety reined in the “Wild West” and launched the modern era of medicine, yielding the scientific discovery, progress, and improvement in human health seen today.
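The retraction figures quoted earlier (33 COVID-19 retractions, a 0.37% rate, versus 0.04% in the general peer-reviewed literature) lend themselves to a quick back-of-the-envelope check. The short Python sketch below recomputes the quoted percentages; note that the denominator behind the 0.37% rate is not stated in the editorial, so the article count derived here is an inference from the quoted numbers, not a reported figure.

```python
# Back-of-the-envelope check of the retraction figures quoted above
# (counts as of October 1, 2020, per the editorial's references 24-26).
covid_retractions = 33      # retracted COVID-19 articles
rxiv_retractions = 11       # of those, posted on an Rxiv server
covid_rate = 0.37 / 100     # quoted COVID-19 retraction rate
general_rate = 0.04 / 100   # quoted rate in the general peer-reviewed literature

# Denominator implied by the quoted count and rate (inferred, not reported)
implied_articles = covid_retractions / covid_rate

print(f"Implied COVID-19 article count: ~{implied_articles:,.0f}")
print(f"Share of retractions on an Rxiv server: {rxiv_retractions / covid_retractions:.0%}")
print(f"COVID-19 vs. general retraction rate: roughly {covid_rate / general_rate:.0f}-fold")
```

The roughly ninefold gap between the two retraction rates is the substance of the comparison the editorial draws between unreviewed pandemic-era output and the peer-reviewed literature at large.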
This era rests on the bedrock of peer review, the quality ideal, and the evidence that constitutes the foundation for evidence-based medicine.

Will clinical preprints become the patent medicines of the new millennium? Do they portend the unrestricted and unregulated spillage of anything claimed as research, by anyone, and absent the quality control afforded by peer review? Like the patent medicines of a bygone era, which were heavily promoted by the newly developed advertising industry, will “posted” clinical research become fodder for the medical advertising industry and media at large, pushing who knows what information and claims on practitioners and a public already deluged with endless promotions and claims with which they cannot keep up or verify? An unsuspecting public is incapable of differentiating between the “posting” of any research observation by anyone with access to a computer and scholarly publication of peer-reviewed results and conclusions. This is particularly true of vulnerable patients with severe illness, who may grasp at any promise of help. Moreover, claims of benefit and “proven” cures based on preprints, refuted or retracted after peer review, further erode public trust in the scientific enterprise as a whole. This can foster the perception that clinical science is unreliable and might be a source of misinformation instead of valid evidence. Over the past century, laws have been enacted and government agencies have been created to protect the public and maintain their trust in the medicines they take; abandoning peer review in clinical research would squander the lessons of the patent medicine era.

Mindful of such concerns, the editors of several journals announced a policy that they will not consider clinical research manuscripts that had been posted to a preprint server. The rationale was that the benefit of preprint servers in clinical research does not outweigh the potential harm to patients and scientific integrity. Major concerns include the following: Preprints may be viewed by some (and used by others less scrupulous) as evidence, even though the studies have not gone through peer review, and the public may not be able to distinguish an unreviewed preprint from a peer-reviewed article in a journal. It seems unlikely that the degree of prepublication review that has taken place in other academic fields will take place in medicine, because the circumstances are very different. Preprints may lead to multiple, and perhaps even conflicting, versions of the same article being available online at the same time, which can confuse readers and undermine the integrity of medical information. A few minutes of casual review of a posting’s findings do not make a peer review, and speed of dissemination is no substitute for quality. These concerns remain, with as much if not more potential for harm to patients and to the public trust.

The difference between unregulated preprints and peer-reviewed publication is profound. Indeed, the posting of preprints is often incorrectly called publication. Peer-reviewed publication versus posting risks becoming a distinction without a difference. Moreover, authors cannot have it both ways. They cannot claim a preprint as a publication for purposes of a grant application (and now, in some institutions, potentially for purposes of academic promotion), yet claim it is not a publication for purposes of submission to a peer-reviewed journal that does not allow prior publication.

More importantly, the role of peer review in clinical research, and the part it plays in research quality, the evidence base, and patient care, constitutes an obligation to patient safety that cannot and should not be abrogated. Peer review, clinical research quality, and the public trust in clinical research all now face an inflection point. Quality peer review is a foundational tenet of Anesthesiology and the Trusted Evidence we publish, and peer review will continue to be a hallmark of the journal, in service to readers, patients, and the public.
When Cold Spring Harbor Laboratory announced the launch of bioRxiv in November 2013, only the braver or more radical amongst us would have predicted that it heralded a lasting change within the science publishing ecosystem. What started with a bold idea – an open access preprint repository for the biological sciences (see original news post) – sparked a cultural change within the field, transforming the way research is shared, reviewed and accessed. The emergence and expansion of preprint review platforms, funder policy changes and community advocacy for preprint adoption continues to support this change.

The Company of Biologists was one of the earlier advocates among publishers for the integration of preprints into existing publishing workflows. Furthermore, through initiatives such as preLights, the Company pro-actively helps to amplify the reach and impact of preprints. This Editorial explores the transformation of the preprint landscape since 2013 and how the Company has adapted to and embraced this (new) way of sharing research.

Whereas preprinting in the physical sciences has a much longer history (Tennant et al., 2018), the adoption of preprints in biology took a significant step forward in 2013. After a few unsuccessful attempts to start a preprint server that would encompass the life sciences (e.g. Nature Precedings in 2007), the successful launch of bioRxiv meant that life scientists now had a dedicated platform to rapidly share their findings (Sever et al., 2019a,b preprint). Of note, other preprint servers (e.g. PeerJ Preprints) or related platforms (e.g. F1000Research) were launched around the same time, offering researchers more options to share their work. The Center for Open Science launched the Open Science Framework (OSF) Preprints platform in 2016 to link preprint servers that were starting to appear across the scientific landscape (Tennant et al., 2018).
bioRxiv quickly became the most popular preprint server for the biological sciences and has only recently been challenged for this title by Research Square, the preprint server owned by Springer Nature (see recent statistics collated by Europe PMC).

In 2015, Ron Vale noted that it was taking University of California San Francisco (UCSF) graduates longer and longer to achieve a first-author publication; this observation inspired him to found the preprint-focused organisation Accelerating Science and Publication in Biology, better known as ASAPbio. This non-profit organisation has been instrumental in pushing the wider acceptance of preprints within the biological sciences. Today, ASAPbio continues to play a pivotal role in the preprint ecosystem, advocating for widespread changes (involving the adoption of preprints) that increase accessibility, inclusion and fairness.

Prior to the launch of bioRxiv, The Company of Biologists journals' policies were incompatible with preprint posting. However, the Editors-in-Chief and Board of Directors quickly realised that bioRxiv, unlike previous attempts, was likely to gain traction – at least in some of our communities. In early 2014, Development and Biology Open began to welcome the submission of preprinted manuscripts. By the end of 2016, not only had all our journals introduced this policy, but we had also integrated with bioRxiv to enable bi-directional manuscript transfer between the two sites. It is noteworthy that different areas of the life sciences have embraced preprinting to different extents – with cell and developmental biologists being much more likely to post preprints than comparative physiologists (Nelson and Marshall, 2025).
Adoption of preprint-related policies has therefore reflected the needs and wishes of the communities our journals serve.

In line with the policy changes at The Company of Biologists, nearly half of publishers had developed preprint-related policies by 2017 (da Silva and Dobránszki, 2019). At this point, it was also becoming clear that researchers were increasingly finding bioRxiv (and other preprint servers) an invaluable resource for accessing the latest scientific findings. In response, our community site the Node (which serves the developmental biology field) began to collate monthly lists of relevant preprints – making it easier for researchers to find the content they were interested in. From an initial list of just 20 preprints in June 2016, the monthly list now typically includes over 150 articles, and these posts on the Node are consistently among its most-read content. The popularity of these posts prompted the Company to consider other ways of helping the community to access and digest the preprint literature, leading to the birth of preLights – a preprint highlighting service that allows early-career researchers (ECRs) to highlight and discuss preprints that spark interest or debate in the scientific community (Brown and Pourquié, 2018; for more information about preLights, see Box 1).

Box 1. preLights (2018 to now)
Designed as a community-driven platform, supported by The Company of Biologists and a dedicated Community Manager, preLights allows early-career researchers (ECRs) to highlight and discuss preprints that are of interest to them as well as the wider biological community (Brown and Pourquié, 2018). A typical preLights post contains a summary of the key findings presented in the selected preprint, the reasons it was selected and the preLighter's thoughts on its significance.
Inspired by a published study supported directly by the preLights community (Brierley et al., 2022), ‘postLights’ is a recently added feature that tracks how an article changed between the preprinted and published version. In addition, two-thirds of the >1600 preLights posts include a response from the preprint authors, making preLights a unique platform for encouraging discussion around preprints. Encouraging such discussion is also the main aim of its associated podcast series, ‘spotLights’. preLights is indexed on EuropePMC, bioRxiv, PubPeer and Sciety, and all posts receive a DOI. As such, it not only promotes the visibility of preprints but also supports ECRs in building their expertise, writing skills and networks.

We weren't the only ones to realise that there was a need for collation and review of preprints; around the same time, several preprint review initiatives started to emerge – most notably PREreview and Peer Community In. These sites provide more formal peer review of preprints than preLights but – like preLights – there is a strong focus on involving ECRs and diversifying the pool of researchers that can contribute to the review of the scientific literature. Thus, a new community began to coalesce, led by ASAPbio, around the preprint ecosystem – involving researchers, publishers, funders and other relevant parties.

Besides preprint review platforms, there was also a noticeable increase in so-called overlay journals. These journals combine the available preprint infrastructure (e.g. bioRxiv) with existing peer-review policies and journal structures (Rousi and Laakso, 2024). Within the biological sciences, a particularly prominent example of such a journal is JMIRx | Bio, which, rather than an overlay journal, defines itself as a ‘Superjournal’.
Although journals like JMIRx | Bio could, in theory, bridge preprints and publishing workflows, it is interesting to note that, in practice, overlay journals have not gained significant traction to date. By 2018, the number of life science preprints was growing at a rate ten times faster than that for traditional journal articles (Levchenko et al., 2024). However, journal policies regarding preprints varied widely, and it was not always straightforward for authors to find out what they could do in the preprint space without compromising potential publication in their target journal. To address this issue, the Transpose database was launched in 2019 to provide more clarity on journal policies regarding preprinting, peer review and co-reviewing, as well as preprint policies relating to media coverage, licensing, versions, citation and platforms (this database is no longer actively maintained, perhaps reflecting the fact that journal policies are now less divergent than previously). The success of bioRxiv meant that many publishers had to consider the degree to which the new culture of preprinting might threaten journal publishing or, conversely, how they could synergise with this ecosystem. Such discussions led to two major initiatives in the biological sciences – the launch of Review Commons, and the Preprint Review trial at eLife – which laid the groundwork for eLife's current publishing model (see next section). Review Commons, launched by the European Molecular Biology Organization (EMBO) in collaboration with ASAPbio, provides researchers with the option of receiving journal-independent, high-quality peer review of their preprints and/or manuscripts prior to submission to a journal (Lemberger and Pulverer, 2019). The result of the Review Commons process is a ‘Refereed Preprint’ comprising the manuscript, reviewer reports and any author responses to those reports.
Initially, public posting of the refereed preprint was optional, but Review Commons now requires authors to post manuscripts as preprints first (see EMBO news post), and the peer reviews and author response are posted on the preprint server by default (as detailed in a Review Commons news post). What has remained the same since the launch of Review Commons is that the authors can choose to submit their work, along with the reviews and responses, to one of the affiliate journals, which use these evaluations to make informed decisions without restarting peer review (Lemberger and Pulverer, 2019). Of note, the journals of The Company of Biologists were among the initial 17 affiliate journals – a group that has now grown to 28. In response to (and adding to) the evolving publishing landscape, two impactful papers appeared in PLOS Biology in 2019 that proposed extensive changes to the way scientific research is disseminated (Stern and O'Shea, 2019; Sever et al., 2019a,b preprint). In one of these, Bodo M. Stern and Erin K. O'Shea proposed a ‘publish first, curate second’ approach to academic publishing (Stern and O'Shea, 2019). This approach – perhaps better known as the ‘Publish, Review, Curate’ (PRC) model of publishing – aims to separate the dissemination and curation of scientific work. Importantly, it puts the authors in the driver's seat by allowing them to decide when and what to publish (Stern and O'Shea, 2019). Peer review then follows publication, thereby avoiding delays in dissemination. In line with this kind of thinking, another paper, which appeared at the same time, argued for funder preprint mandates (Sever et al., 2019a,b preprint). This plan, proposed by the co-founders of bioRxiv and the co-founder of PLOS, was termed Plan U (for ‘universal’) and aims to ensure accessibility of scientific literature and to support the implementation of new peer review and research evaluation initiatives, like the PRC model (Sever et al., 2019a,b preprint).
These papers, and their proposals, have been hugely influential in further shaping the preprint and wider publishing ecosystem. The outbreak of the COVID-19 pandemic at the end of 2019 further emphasised the importance of preprints for rapid information dissemination. The pandemic drove a surge in preprints, with 32% of COVID-19 papers listed on the National Institutes of Health (NIH)'s portfolio being preprints (as reported by ASAPbio). During this time, PubMed started a pilot experiment, as part of which they indexed preprints of NIH-funded authors (Funk et al., 2024 preprint; more information has also been provided by the National Library of Medicine). Various preprint initiatives were launched during the COVID-19 pandemic, including Rapid Reviews: COVID-19, which received the 2022 PROSE Award for Innovation in Journal Publishing. It later evolved into a true overlay journal: Rapid Reviews\Infectious Diseases. Both Early Evidence Base (EEB) and Sciety were launched in 2020 as platforms focused on aggregating peer-reviewed preprints. As winner of the ASAPbio PreprintSprint, the main goal of EEB was to ensure that both expert evaluations and author responses are openly available, enabling readers to assess the findings critically (see EMBO news post). Similarly, the goal of Sciety (part of eLife) is to support preprint peer-review communities in openly sharing their efforts, allowing multiple groups, rather than a single journal, to participate in the review and curation of scientific literature. Following the pandemic, the adoption and visibility of preprints within the biological sciences have been steadily growing. In 2021, bioRxiv introduced a dashboard providing links to scientific discussion and evaluation of bioRxiv preprints (see bioRxiv news post).
preLights, which had further developed into an invaluable resource by this time, with >1000 posts by early 2021, was fully integrated with the bioRxiv dashboard, featuring under the Community Reviews tab (see preLights news post). preLights was not the only way in which The Company of Biologists helped to increase the reach and visibility of preprints: the journal Development introduced ‘In preprints’ articles highlighting key preprints in the field, sparking discussions and guiding readers to significant new research (Briscoe and Grewal, 2022). In 2022, EMBO announced that refereed preprints would be recognised as an eligibility criterion for the EMBO Postdoctoral Fellowships. This move reflected a wider acceptance of preprints. Other funders had already started to include preprints in the evaluation of applicants, such as the Australian Research Council and European Research Council (see examples of funding agencies with changes to policies surrounding using preprints). In fact, some funders, such as Alex's Lemonade Stand Foundation, started to mandate the posting of preprints (see policy document). In March 2024, the Bill & Melinda Gates Foundation announced a future policy requiring all grantees to share their research as preprints – another big step toward the normalisation of preprints in scientific communication. In 2023, the proportion of life sciences research disseminated as preprints grew to 10.7%, underscoring their sustained adoption (Levchenko et al., 2024). In response, eLife made the bold move of switching altogether to a model in which they would only publish ‘Reviewed Preprints’ (Eisen et al., 2022; see eLife news post). This created considerable ripples in the publishing ecosystem, and the impact of this drastic change of policy is still being evaluated (Behrens et al., 2024). Preprints are here to stay, and The Company of Biologists remains determined to be at the forefront of this ever-growing movement.
At Biology Open, the vision is to accelerate the dissemination of biological research by requiring that all journal submissions be posted as preprints (Adhikari et al., 2025). Platforms like preLights are more relevant than ever, offering curated and community-driven insights into the rapidly growing body of preprinted literature. Still, it's clear that preprinting and preprint peer review haven't (yet) replaced traditional journal publishing. The stranglehold of journal metrics and reputation still means that researchers want and need formal publications for career progression. More positively, however, there is also recognition that journals provide important services to the community – not only in coordinating expert peer review, but also in helping to ensure ethical integrity and in collating and disseminating research results in an accessible format. Thus, preprints and journal articles complement each other – the one providing rapid access to research results, the other ensuring the long-term integrity and preservation of the scientific record. Moving forwards, we at The Company of Biologists will continue to experiment with our preprint-related activities and policies – in line with the ever-evolving needs of our communities.
The ASBMR Publication Committee, together with the Editors-in-Chief of JBMR, JBMR Plus, and members of the editorial boards, has implemented a new policy that allows manuscripts that have been posted on noncommercial preprint servers to be submitted to the journals for consideration. The use of preprint servers is becoming more common, but, because it is still rare in the field of bone and mineral research, the concept is most likely unfamiliar to many readers of JBMR and JBMR Plus. To clarify the new policy decision, this editorial provides a brief overview of preprint servers and describes their role in biomedical research. Picture this common scenario: a young biomedical researcher has spent years collecting data for a very exciting project and is finally ready to submit a manuscript. This researcher is also applying for an NIH grant. Do they (i) hold off on submitting the grant application, knowing that they will have a greater chance of receiving funding once their paper is accepted; (ii) submit the application right away, even though they are missing the crucial citation; or (iii) upload a draft of their manuscript to a preprint server, using the citation for their application while they wait for a response from the journal? Many researchers are unaware of option iii, but it demonstrates one of the primary utilities of preprint servers: rapid dissemination of results within the scientific community. Preprint servers, such as bioRxiv, PeerJ Preprints, and arXiv q-bio, provide a way to publish and distribute scientific papers online with minimal hassle. Posting to a preprint server is free (with noncommercial servers), fast (within a few days, sometimes within 24 hours), and easy. Once submitted, a draft is checked for scientific content and plagiarism, and then added to the server archive without peer review, revisions, or editing. Servers assign each manuscript a persistent link (DOI), allowing the work to be searched and cited.
Authors can use their DOI as a citation in grant applications, progress reports, and on their curriculum vitae (CV). Some servers (eg, bioRxiv) make the process of submitting the manuscript to the journal more streamlined by allowing authors to transfer their manuscripts directly from the server to a journal, a functionality we hope to integrate in the future. Once published in a journal, most preprint servers will update the DOI link to the publication of record, so that versioning of the paper is fully transparent. Preprint servers are open; anyone can read the articles without paying for a subscription. Readers who frequent the servers tend to be researchers in the field. Posting a paper, like publishing in a peer-reviewed journal, draws feedback and criticism from the community, which can inspire new experiments and foster collaborations. It can also help authors strengthen their work before they submit their manuscript to a peer-reviewed journal. In this way, preprint servers provide fast delivery of the latest information within the scientific community, while also taking advantage of peer input to improve a manuscript before publication. As part of the scientific record, preprint servers can be viewed as an early step in the refinement process of reporting new discoveries. Despite their appeal, preprint servers had a bit of a rocky start in the biomedical research community. The concept of preprints is not new to the field. In the 1960s, the NIH began circulating papers in Information Exchange Groups (IEGs) to facilitate the exchange of ideas among experts in select subspecialties.1 At the time, the relationship between preprint and canonical publication was not a positive one; in fact, some commercial journals viewed IEGs as competitors.
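The DOI-updating behaviour described above can be checked programmatically. As a rough sketch only: the endpoint path and the "collection"/"published" response fields are assumptions based on bioRxiv's public details API, and the DOIs in the mocked payload below are invented for illustration.

```python
import json
from urllib.request import urlopen

def published_doi(preprint_doi, fetch=None):
    """Return the journal DOI a bioRxiv preprint resolved to, or None
    if it is still preprint-only. `fetch` lets callers substitute a
    canned response instead of a live HTTP request."""
    url = f"https://api.biorxiv.org/details/biorxiv/{preprint_doi}"
    raw = fetch(url) if fetch else urlopen(url).read()
    records = json.loads(raw).get("collection", [])
    if not records:
        return None
    # The most recent record carries the published DOI, or "NA".
    published = records[-1].get("published", "NA")
    return None if published in ("NA", "", None) else published

# Offline illustration with a mocked response (all values invented):
sample = json.dumps({"collection": [
    {"doi": "10.1101/000000", "published": "10.1000/example.0001"}]})
print(published_doi("10.1101/000000", fetch=lambda url: sample))
```

A live call would simply omit the `fetch` argument; separating the HTTP fetch from the JSON parsing keeps the logic testable without network access.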
Worried about losing novelty, in 1966 the American Association of Immunologists (AAI) banned manuscripts that had been circulated in an IEG from submission to their journal. Thirteen other journals soon followed suit, and Nature and Science both published editorials criticizing IEGs, implying that they encouraged a dangerous disregard for scientific integrity by distributing work without peer review. Many of the IEGs were dissolved soon after. In 1999, another attempt was made by the NIH to create a preprint server, called E-biomed, but it once again met with backlash from publishers and was shut down while still in the planning phase. Since then, opinions on the use of open-source preprint servers have changed. Nature Publishing Group (NPG) modified its submission policy to allow preprints in 1997,2 and even launched its own preprint server, Nature Precedings, which was open from 2007 to 2012. In 2013, PeerJ Preprints and the Cold Spring Harbor–run bioRxiv were established. Use of both servers has grown over the past 4 years. Part of this trend may be connected with frustration regarding the slow process of review and revision required to publish a paper in a high-tier journal. Initiatives such as Accelerating Science and Publication in Biology (ASAPBio) are attempting to shift the culture of biology to encourage acceptance of preprint servers. Proponents point out that preprints aren't just good for the author; they're good for the scientific community, and by natural extension good for the public. Even with these potential advantages, acceptance of preprint servers in the biomedical community still lags behind other fields of science. In contrast to physicists, who have used the preprint server arXiv for over 25 years, biologists are still hesitant to share their results in preprint. This is due to multiple factors—fear of being scooped and lack of awareness being the most prominent. 
However, new support from funding agencies and journals is shifting the attitude of the biomedical research community in favor of preprints. The NIH, for example, recently adopted a policy accepting preprint citations in grant applications and progress reports; this has created a compelling reason for researchers, particularly young investigators, to turn to the rapid distribution of online servers. But the debate is far from over. Preprint servers will require wider acceptance in the community to find their place in the publication pipeline. Publishers weigh the benefits and risks involved in accepting papers for submission that have appeared in preprint. Below are some of the key arguments. Because younger scientists are more likely to post preprints, journals that will not consider preprinted manuscripts may seem less appealing to new researchers. Feedback from the scientific community in response to preprints encourages authors to refine their work, leading to high-quality initial submissions. This has the potential to minimize revisions and to make the editorial process run more smoothly. Community excitement surrounding a paper is a good indicator of its potential impact, and could also draw more publicity for the final, peer-reviewed version when it is published. Journals can benefit from publishing the “definitive” version of a groundbreaking work, and preprint servers generally support this by updating their DOI to direct readers to the published version. Journals, as part of the scientific community, benefit from changes in science that accelerate the rate of discovery. Preprint servers allow rapid distribution of information, which has the potential to generate new hypotheses and foster collaborations at a faster rate. Preprint servers also widen the breadth of information that can be shared; eg, negative results rarely find acceptance in canonical publications but are not discouraged from preprint distribution. 
Top-tier journals such as Nature, Science, and Cell have made themselves compatible with prepublication in established preprint servers. In addition, prominent journals in the bone and mineral research field, including Bone, Endocrine Reviews, and others, have policies allowing submission of manuscripts that have appeared as preprints. Preprint servers do not have a peer-review process, meaning everything printed on these servers is “self-policed” for rigor and accuracy. Misinterpretation and overstatement, both of which are often checked in the formal peer-review and editorial process, can make their way into a preprint. Preprints can attract publicity, which some journals view as a strike against their novelty, one of the key metrics for acceptance. Under the Ingelfinger Rule, The New England Journal of Medicine (NEJM) views draft preprints as prior publication and thus unacceptable as manuscript submissions.3 Although most journals do not take this view, there are more subtle caveats at work that should be considered by both authors and journal editors. According to the Science editorial policies, “reporting the main findings of a paper in the mass media can compromise the novelty of the work and thus its appropriateness for Science.”4 Funding for preprint servers is provided by nonprofit agencies and universities: arXiv relies primarily on Cornell University and the Simons Foundation, and bioRxiv is funded by Cold Spring Harbor Laboratory, the Lourie Foundation, and the Chan Zuckerberg Initiative. Costs to run and maintain preprint servers are diminished by the absence of peer-review, editing, and printing costs, keeping them streamlined and fast. Because of this, some fear that commercial journals will suffer a loss of submissions, particularly if scientists begin to view preprint as a sufficient endpoint for publication. Preprint servers are still in their infancy, and there is no consensus for citing preprint work.
Preprints are not universally accepted as intellectual priority, leading to worries about infringement and potential conflicts among authors that could pull journal editors into the fray. Many of the arguments against preprints are entangled in the culture of biomedical research. This means change will happen slowly, but it also means that the key players—researchers, societies, funding agencies, and publishers—have a large amount of control over how these changes develop. For example, a shift in attitude toward preprints has occurred among publishers, from rejection in the 1960s to current acceptance. Many publishers now believe preprint and publication complement, rather than compete with, each other. Preprints allow researchers to share new information quickly, but peer review is still recognized as a hallmark of quality. Nature has argued for a synergistic relationship between preprint servers and canonical peer-review, arguing that “rapid dissemination in a preprint server and high-quality peer review and promotion through publication in a scientific journal should, in our view, go hand in hand.”5 Perhaps the most evocative argument against submission to preprint servers (other than the ubiquitous fear of being “scooped”) is the loss of novelty that may come with posting a preprint, and the impact that this could have on the final editorial decision. Most journals are heavily invested in the novelty of the papers they publish, and have media embargos that prevent authors from discussing their work prior to publication. Some, like the NEJM, consider preprints to be on par with discussing the work with the media prior to submission. However, practical experience shows that preprint servers have more in common with a scientific conference than a press conference; they provide a venue for researchers to share their work within a small community of specialists. 
The media is unlikely to pay special attention to preprint servers, particularly because, unlike media announcements, preprint drafts make no special effort to “translate” the work for a lay audience. Journals have the potential advantage of producing the definitive, peer-reviewed version of a manuscript that, despite having lost some novelty, will have generated enough attention in the field to garner a high number of citations. In summary, although preprint servers are appealing because they offer a rapid pipeline for new information in the research community, journals still represent the bastion of scientific quality because of their peer review and editorial processes. Although current participation in preprint servers from biologists is low, there is a clear trend toward adopting the practice. In the past year, the approximate number of preprints submitted each month to bioRxiv doubled from 500 to 1000.6 If consensus is amicably reached among scientists, funding sources, and publishers, preprint servers can become a complementary mechanism that supports, rather than bypasses, the traditional peer-review process. Preprint servers are new for many of us and will continue to be an ongoing topic of discussion. As always, we welcome feedback from ASBMR members regarding our policy decision.
Erin Bove-Fenderson, Ph.D., Endocrine Unit, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
Katie Duffy, Director of Publications, American Society for Bone and Mineral Research, Washington, DC, USA
Michael Mannstadt, M.D., Chief, Endocrine Unit, Massachusetts General Hospital and Harvard Medical School; Chair, ASBMR Publications Committee, Boston, MA, USA
We are excited to announce the launch of preLights (https://prelights.biologists.com/), a new service from The Company of Biologists. preLights is a community platform for selecting, highlighting and commenting on recent preprints from across the biological sciences. With this service, we aim to help researchers discover the most interesting and relevant preprints from the growing archive of manuscripts deposited on bioRxiv and other preprint servers.So what is the rationale behind preLights, and how will it work? While preprints have been around for a long time in the physics community, it was only when bioRxiv (https://www.biorxiv.org/) came on the scene in late 2013 that they began to take off in the biological sciences. Since then, we have seen an almost exponential growth in the number of preprints posted, as more and more researchers recognise the value in making their research available at an early stage, and as journals become increasingly open to considering papers that have already been posted on a preprint server. Here at Development, we quickly changed our policy on preprint deposition after the launch of bioRxiv, and have allowed authors to submit papers that were already available on preprint servers since Spring 2014. Since early 2016, all of the Company's journals have been open to considering manuscripts deposited as preprints. Not only that, but we actively facilitate posting of submitted manuscripts through our bidirectional transfer portal with bioRxiv (see http://dev.biologists.org/content/news#biorxiv for more details). Overall, The Company of Biologists and Development see a synergy between preprint servers and the more traditional peer review and publishing model we operate. 
Preprints allow authors to get their work out quickly and readers to access the latest research, pre-publication peer review helps to ensure the rigour and quality of the work, and publication in a field-specific journal like Development helps the community find and digest the most relevant research in their area in an accessible format. Since 2016, our community blog the Node has played an active role in helping developmental biologists find the most relevant preprints – providing a monthly round-up of the latest preprints in developmental biology and related fields (see http://thenode.biologists.com/tag/preprints/). This initiative has proved hugely popular: the post is generally our most-read post each month, and many people have told us how valuable they find the listing. Moreover, the list is getting longer each month, and is in danger of becoming unwieldy, even with careful curation from our community manager. Given these trends, we started thinking about what more we could do to help people navigate the ever-growing preprint literature – across the range of fields covered by the Company's journals. From these discussions, the idea of preLights was born. At its heart, preLights is a community of around 80 researchers, mainly (but not exclusively) postdocs and early-stage PIs, many of whom have been nominated by our editors or editorial board members. Their research interests span the range of fields covered by our journals and beyond – from morphogenesis to neuroethology, from autophagy to cancer immunology. Each month, our preLights team members will select the preprint or preprints that they feel are most worthy of comment, and will provide a personal perspective on why they have chosen each article.
Through our online platform, we also hope to encourage other members of the community to comment on those preprints selected by our team, and to engage the authors of the selected preprints in the discussion as well – thus facilitating the exchange of ideas and opinions. Hopefully, this will also help authors improve and revise their papers as they make their way towards formal publication. Although the platform has been designed and will be hosted by The Company of Biologists, we see preLights as a community-run service, where we provide logistical support, but the content is driven by our team of selectors and the broader community.Posts will be categorised and tagged by topic to make it easy for you to search for those preprints most relevant to you, and we'll also highlight the most popular posts across all fields to give you a flavour of the latest work that's attracting attention. We're also planning to feature content from the preLights site in the journal, further helping the developmental biology community to discover the newest research in the field. And to reassure those of you who have become used to browsing the Node's monthly list, we're not planning to stop doing this. Finally, an additional aim is to support and promote our team of selectors – giving them the platform and profile to get their opinions ‘out there’ and helping them to expand their networks, both within their specific fields and beyond.We are of course aware that commenting on preprints and papers has not really taken off in a big way in the biological sciences. preLights is therefore something of an experiment, but we've been hugely encouraged by the enthusiasm with which the idea was met by those we approached as potential contributors. 
While we hope that preLights will provide a venue where such discussion can happen more freely, we also believe that there is significant value in the selection and highlighting of a subset of preprints that will be of particular interest to our communities. Over time, we expect that the preLights site, and the team of selectors, will evolve – and we welcome your suggestions and feedback on how we can make it better. For now, though, we invite you to browse the first set of posts on preLights and to join in the discussion there. We hope you find this new initiative valuable!
A public preprint server such as arXiv allows authors to publish their manuscripts before submitting them to journals for peer review, offering the chance to establish priority by making results available upon completion. This article presents the arXiv section Quantitative Biology and investigates the advantages of preprint publication in terms of reception, which can be measured by means of citations. The paper focuses on publication and citation delays, citation counts and the authors publishing their e-prints on arXiv, and discusses the benefits for scientists as well as publishers. The results, based on 12 selected journals, show that submitting preprints to arXiv has become more common in the past few years, but the number of papers submitted to Quantitative Biology is still small and represents only a fraction of the total research output in biology. A major advantage of arXiv is that it overcomes the long publication delay resulting from peer review. Although preprints are visible prior to the officially published articles, a significant citation advantage was found only for the Journal of Theoretical Biology.
The establishment of bioRxiv facilitated the rapid adoption of preprints in the life sciences, accelerating the dissemination of new research findings. However, the sheer volume of preprints published daily can be overwhelming, making it challenging for researchers to stay updated on the latest developments. Here, I introduce biorecap, an R package that retrieves and summarizes bioRxiv preprints using a large language model (LLM) running locally on nearly any commodity laptop. biorecap leverages the ollamar package to interface with the Ollama server and API endpoints, allowing users to prompt any local LLM available through Ollama. The package follows tidyverse conventions, enabling users to pipe the output of one function as input to another. Additionally, biorecap provides a single wrapper function that generates a timestamped CSV file and HTML report containing short summaries of recent preprints published in user-configurable subject areas. By combining the strengths of LLMs with the flexibility and security of local execution, biorecap represents an advancement in the tools available for managing the information overload in modern scientific research. The biorecap R package i
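biorecap itself is an R package, but the pipeline it describes – fetch recent preprints, build a summarization prompt, send it to a local model – can be sketched in Python. This is a rough analogue, not biorecap's actual code: the request shape follows Ollama's documented `/api/generate` endpoint, while the model name and the `title`/`abstract` metadata keys are illustrative assumptions.

```python
import json
from urllib.request import Request, urlopen

def build_prompt(preprint, n_sentences=2):
    """Compose a summarization prompt from preprint metadata.
    The 'title'/'abstract' keys are assumed field names."""
    return (f"Summarize this bioRxiv preprint in {n_sentences} sentences.\n"
            f"Title: {preprint['title']}\n"
            f"Abstract: {preprint['abstract']}")

def summarize_with_ollama(preprint, model="llama3.2",
                          host="http://localhost:11434"):
    """Send the prompt to a locally running Ollama server and return
    the generated summary (requires `ollama serve` and a pulled model)."""
    payload = json.dumps({"model": model,
                          "prompt": build_prompt(preprint),
                          "stream": False}).encode()
    req = Request(f"{host}/api/generate", data=payload,
                  headers={"Content-Type": "application/json"})
    return json.loads(urlopen(req).read())["response"]
```

Keeping prompt construction separate from the network call mirrors the local-execution design the package emphasises: the prompt can be inspected or batched before any model is invoked.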
The PreprintToPaper dataset connects bioRxiv preprints with their corresponding journal publications, enabling large-scale analysis of the preprint-to-publication process. It comprises metadata for 145,517 preprints from two periods, 2016-2018 (pre-pandemic) and 2020-2022 (pandemic), retrieved via the bioRxiv and Crossref APIs. We selected the two periods to capture preprint-publication dynamics before and during the COVID-19 pandemic while avoiding transitional years. Each record includes bibliographic information such as titles, abstracts, authors, institutions, submission dates, licenses, and subject categories, alongside enriched publication metadata including journal names, publication dates, author lists, and further information. In addition to the main dataset, a version-history subset provides all available versions of preprints within the two selected periods, enabling analysis of how preprints evolve over time. Preprints are categorized into three groups: Published (formally linked to a journal article), Preprint Only (posted on a preprint server), and Gray Zone (potentially published in a journal but unlinked). To enhance reliability, title and author similarity scores w
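The excerpt above does not specify how the title and author similarity scores were computed; as a minimal stdlib sketch of how such scores might work (the function names and normalization choices are illustrative, not the dataset's actual method):

```python
from difflib import SequenceMatcher

def title_similarity(a, b):
    """Case- and whitespace-insensitive similarity ratio in [0, 1]."""
    norm = lambda s: " ".join(s.lower().split())
    return SequenceMatcher(None, norm(a), norm(b)).ratio()

def author_overlap(authors_a, authors_b):
    """Jaccard overlap of normalized author names, in [0, 1]."""
    sa = {a.strip().lower() for a in authors_a}
    sb = {b.strip().lower() for b in authors_b}
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

# A preprint/paper pair differing only in capitalization scores 1.0:
print(title_similarity("A Study of Preprints", "a study of preprints"))
```

Scores like these, thresholded, would let a pipeline flag "Gray Zone" records: pairs whose titles and author lists are similar enough to suggest publication despite the missing Crossref link.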
Systems biology has emerged in recent years as a holistic approach based on a global understanding of cells, rather than a focus on their individual parts (genes or proteins), to better understand the complexity of human cells. Since systems biology still does not provide the most accurate answers to our questions, owing to the complexity of cells and the limited quality of the information available for a good gene/protein map analysis, we have created simpler models to allow easier analysis of the map that represents the human cell. Accordingly, a virtual organism has been designed according to the main physiological rules for humans in order to replicate the human organism and its vital functions. This toy model was constructed by defining the topology of its genes/proteins and the biological functions associated with them. There are several examples of such toy models that emulate natural processes to perform analysis of virtual life in order to design the best strategy for understanding real life. The strategy applied in this study combines topological and functional analysis, integrating knowledge about the relative position of a node among the others in th
The world continues to face a life-threatening viral pandemic. The virus underlying the Coronavirus Disease 2019 (COVID-19), Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2), has caused over 98 million confirmed cases and 2.2 million deaths since January 2020. Although the most recent respiratory viral pandemic swept the globe only a decade ago, the way science operates and responds to current events has experienced a cultural shift in the interim. The scientific community has responded rapidly to the COVID-19 pandemic, releasing over 125,000 COVID-19-related scientific articles within 10 months of the first confirmed case, of which more than 30,000 were hosted by preprint servers. We focused our analysis on bioRxiv and medRxiv, two growing preprint servers for biomedical research, investigating the attributes of COVID-19 preprints, their access and usage rates, as well as characteristics of their propagation on online platforms. Our data provide evidence for increased scientific and public engagement with preprints related to COVID-19 (COVID-19 preprints are accessed more, cited more, and shared more on various online platforms than non-COVID-19 preprints), as well as changes in the use of preprints by journalists and policymakers. We also find evidence for changes in preprinting and publishing behaviour: COVID-19 preprints are shorter and reviewed faster. Our results highlight the unprecedented role of preprints and preprint servers in the dissemination of COVID-19 science and the impact of the pandemic on the scientific communication landscape.
Performance in web applications is a key aspect of user experience and system scalability. Among the techniques used to improve web application performance, caching is one of the most widely adopted. While caching has been extensively explored in the web performance optimization literature, there is a lack of experimental work examining the effect of simple in-memory caching in small-scale web applications. This paper fills that research gap by experimentally comparing the performance of two server-side web application configurations: one without caching and one with in-memory caching and a fixed time-to-live. The performance evaluation was conducted using a lightweight web server framework, and response times were measured using repeated HTTP requests under identical environmental conditions. The results show a significant reduction in response time for cached requests, and the findings of this paper provide valuable insights into the effectiveness of simple server-side caching in improving web application performance, making it suitable for educational environments and small-scale web applications where simplicity and reproducibility are critical.
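The cached configuration described above can be sketched in a few lines. The paper's actual framework and parameters are not specified here; the class name, TTL value, and request handler below are illustrative assumptions, not the authors' implementation.

```python
import time

class TTLCache:
    """Minimal in-memory cache with a fixed time-to-live (seconds)."""

    def __init__(self, ttl: float):
        self.ttl = ttl
        self._store = {}  # key -> (value, expiry timestamp)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires = entry
        if time.monotonic() >= expires:  # entry is stale: evict, report miss
            del self._store[key]
            return None
        return value

    def set(self, key, value):
        self._store[key] = (value, time.monotonic() + self.ttl)

cache = TTLCache(ttl=30.0)  # hypothetical fixed TTL

def handle_request(path, render):
    """Serve a fresh cached response when available; otherwise
    render the response and cache it for subsequent requests."""
    cached = cache.get(path)
    if cached is not None:
        return cached
    body = render(path)
    cache.set(path, body)
    return body
```

Repeated requests to the same path within the TTL window skip the `render` step entirely, which is the source of the response-time reduction the experiment measures; after the TTL expires, the next request pays the full rendering cost again.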
These data and chart present an approximate calculation of the proportion of preprints in biology when compared to publications in PubMed, based on monthly figures and incorporating monthly preprint submissions (or counts) across a selection of servers relevant to biology. Version 1.0 of these data represents data from January 2007 until May 31, 2019 for preprint servers: arXiv q-bio, Nature Precedings, F1000Research*, PeerJ Preprints*, bioRxiv**, Winnower*, preprints.org, Wellcome Open Research*. * Counts may not be specific to biology preprints only; ** Counts may include all versions posted that month, so may be an overestimate for version 1 submissions. From January 2019, data have been gathered manually by the authors, as per the methods described in the .csv here, and are included here in 'Preprints_per_month_direct_2019-01to05.csv'. Until December 2018, monthly preprint submissions data are based on those contributed by Jordan Anaya (ORCID: https://orcid.org/0000-0002-6166-4113) for PrePubMed (source: https://raw.githubusercontent.com/OmnesRes/prepub/master/analyses/preprint_data.txt; Github repository: https://github.com/OmnesRes/prepub; website: http://www.prepubmed.org). Data are not included here; they are provided from the source linked above under the MIT license associated with the website code: https://github.com/OmnesRes/prepub/blob/master/LICENSE. A live version of these data and the chart are available from this GSheet: https://docs.google.com/spreadsheets/d/1bkGEcfQcL0LpIanVqNHci1ZFY6oVNGz7IQbEugzkqkU/edit?usp=sharing. Between version updates here, please refer to this sheet for updated counts and method updates, e.g. to include more servers and ensure only version 1 submissions are counted. For more information, please contact naomi.penfold@asapbio.org. When presenting these data and/or chart, please attribute to ASAPbio (https://asapbio.org, twitter: @ASAPbio_).
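The proportion calculation described above amounts to dividing summed monthly preprint counts by monthly PubMed publication counts. A minimal sketch, using invented placeholder figures (the real numbers live in the linked CSVs and GSheet):

```python
# Hypothetical monthly counts, for illustration only; real figures come
# from the per-server CSVs and PrePubMed data referenced in the text.
preprints_per_month = {"2019-03": 4200, "2019-04": 4450}   # summed across servers
pubmed_per_month = {"2019-03": 120000, "2019-04": 118500}  # PubMed publications

def preprint_share(month: str) -> float:
    """Approximate proportion of biology preprints relative to
    PubMed publications for one month."""
    return preprints_per_month[month] / pubmed_per_month[month]

for month in preprints_per_month:
    print(f"{month}: {preprint_share(month):.1%}")
```

Note the caveats flagged in the source: some server counts are not biology-specific, and counts that include later versions overestimate version-1 submissions, so the resulting share is approximate.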
The adoption of open science has quickly changed how artificial intelligence (AI) policy research is distributed globally. This study examines regional trends in the citation of preprints, focusing on the impact of two major disruptive events, the COVID-19 pandemic and the release of ChatGPT, on research dissemination patterns in the United States, Europe, and South Korea from 2015 to 2024. Using bibliometric data from the Web of Science, this study tracks how global disruptive events influenced the adoption of preprints in AI policy research and how such shifts vary by region. By marking the timing of these disruptive events, the analysis reveals that while all regions experienced growth in preprint citations, the magnitude and trajectory of change varied significantly. The United States exhibited sharp, event-driven increases; Europe demonstrated institutional growth; and South Korea maintained consistent, linear growth in preprint adoption. These findings suggest that global disruptions may have accelerated preprint adoption, but the extent and trajectory are shaped by local research cultures, policy environments, and levels of open science maturity. This paper
In this paper, we propose and study several inverse problems of determining unknown parameters in nonlocal nonlinear coupled PDE systems, including the potentials, nonlinear interaction functions and time-fractional orders. In these coupled systems, we enforce non-negativity of the solutions, aligning with realistic scenarios in biology and ecology. There are several salient features of our inverse problem study: the drastic reduction in measurement/observation data due to averaging effects, the nonlinear coupling between multiple equations, and the nonlocality arising from fractional-type derivatives. These factors present significant challenges to our inverse problem, and such inverse problems have never been explored in previous literature. To address these challenges, we develop new and effective schemes. Our approach involves properly controlling the injection of different source terms to obtain multiple sets of mean flux data. This allows us to achieve unique identifiability results and accurately determine the unknown parameters. Finally, we establish a connection between our study and practical applications in biology, further highlighting the relevance of our work in real-
Preprints are preliminary research reports that have not yet been peer-reviewed. They have been widely adopted to promote the timely dissemination of research across many scientific fields. In August 1991, Paul Ginsparg launched an electronic bulletin board intended to serve a few hundred colleagues working in a subfield of theoretical high-energy physics, thus launching arXiv, the first and largest preprint platform. Additional preprint servers have since been implemented in different academic fields, such as bioRxiv (2013, Biology; www.biorxiv.org) and medRxiv (2019, Health Science; www.medrxiv.org). While preprint availability has made valuable research resources accessible to the general public, thus bridging the gap between academic and non-academic audiences, it has also facilitated the spread of unsupported conclusions through various media channels. Issues surrounding a journal's preprint policies must ultimately be addressed by editors; these include the acceptance of preprint manuscripts, allowing the citation of preprints, maintaining a double-blind peer review process, changes to a preprint's content and author list, scooping priorities, commenting on preprints, and preventing the influence of social media. Editors must be able to deal with these issues adequately to maintain the scientific integrity of their journal. In this review, the history, current status, and strengths and weaknesses of preprints, as well as ongoing concerns regarding journal articles with preprints, are discussed. An optimal approach to preprints is suggested for editorial board members, authors, and researchers.
Building circuits and studying their behavior in cells is a major goal of systems and synthetic biology. Synthetic biology enables the precise control of cellular states for systems studies, as well as the discovery of novel parts, control strategies, and interactions for the design of robust synthetic systems. To the best of our knowledge, there are no literature reports of synthetic circuit construction for protozoan parasites. This paper describes the construction of a genetic circuit for the targeted enzyme inositol phosphorylceramide synthase of the protozoan parasite Leishmania. To explore the dynamic nature of the designed circuit, simulation was performed, followed by circuit validation using qualitative and quantitative approaches. The genetic circuit designed for inositol phosphorylceramide synthase shows responsiveness, oscillatory and bistable behavior, together with intrinsic robustness.
Though it goes without saying that linear algebra is fundamental to mathematical biology, polynomial algebra is less visible. In this article, we will give a brief tour of four diverse biological problems where multivariate polynomials play a central role -- a subfield that is sometimes called "algebraic biology." Namely, these topics include biochemical reaction networks, Boolean models of gene regulatory networks, algebraic statistics and genomics, and place fields in neuroscience. After that, we will summarize the history of discrete and algebraic structures in mathematical biology, from their early appearances in the late 1960s to the current day. Finally, we will discuss the role of algebraic biology in the modern classroom and curriculum, including resources in the literature and relevant software. Our goal is to make this article widely accessible, reaching the mathematical biologist who knows no algebra, the algebraist who knows no biology, and especially the interested student who is curious about the synergy between these two seemingly unrelated fields.
Information is a key concept in evolutionary biology. Information is stored in biological organisms' genomes, and used to generate each organism as well as to maintain and control it. Information is also "that which evolves". When a population adapts to a local environment, information about this environment is fixed in a representative genome. However, when an environment changes, information can be lost. At the same time, information is processed by animal brains to survive in complex environments, and the capacity for information processing also evolves. Here I review applications of information theory to the evolution of proteins as well as to the evolution of information processing in simulated agents that adapt to perform a complex task.
Understanding the biological mechanisms of disease is crucial for medicine, and in particular, for drug discovery. AI-powered analysis of genome-scale biological data holds great potential in this regard. The increasing availability of single-cell RNA sequencing data has enabled the development of large foundation models for disease biology. However, existing foundation models only modestly improve over task-specific models in downstream applications. Here, we explored two avenues for improving single-cell foundation models. First, we scaled the pre-training data to a diverse collection of 116 million cells, which is larger than those used by previous models. Second, we leveraged the availability of large-scale biological annotations as a form of supervision during pre-training. We trained a family of six transformer-based state-of-the-art single-cell foundation models with 70 million, 160 million, and 400 million parameters. We vetted our models on several downstream evaluation tasks, including identifying the underlying disease state of held-out donors not seen during training, distinguishing between diseased and healthy cells for disease conditions and