In the field of medicine, there has been a growing understanding of the impact of social and economic inequities on patients' health outcomes. Social medicine was established with the intention of addressing these social and economic drivers of health when caring for patients. Physicians who practise social medicine aim to take an interdisciplinary and interprofessional approach to patient care with an emphasis on the promotion of health equity and patient advocacy. As the effects of social determinants of health (SDOH) on health outcomes have become more widely appreciated, medical professional organisations and accrediting bodies have advocated for formal education on the impact of SDOH in undergraduate and graduate medical curricula. The goal of this scoping review is to examine how undergraduate and graduate medical education programmes in the USA have implemented social medicine concepts into their curricula. The proposed scoping review will be conducted in accordance with the Joanna Briggs Institute methodology for scoping reviews. The review team worked with a medical librarian, who created a unique search for five databases (PubMed, Embase, Cochrane CENTRAL Register of Controlled Trials, ERIC and the Web of Science Core Collection). Additionally, we will conduct a grey literature search that includes medical school and residency programme websites, as well as Association of American Medical Colleges (AAMC), Council of Residency Directors in Emergency Medicine (CORD), Alliance for Academic Internal Medicine (AAIM) and Society for Academic Emergency Medicine (SAEM) conference abstracts. Two independent reviewers will assess all articles for eligibility. Data will be extracted using the Covidence data extraction tool. We will present the results of the extraction in tabular form. Themes identified during the full-text review and data extraction process will be discussed. 
Data will be gathered from publicly accessible sources, so ethics approval is not necessary. The results will be disseminated through a peer-reviewed journal and reported at conferences related to medical education and social medicine. This protocol is registered on OSF (https://doi.org/10.17605/OSF.IO/7PZ8U).
Objective: This exploratory post hoc analysis examined the safety/efficacy of the aripiprazole once-monthly 400 mg (AOM 400) long-acting injectable (LAI) in Black/African American patients diagnosed with bipolar I disorder (BP-I). Methods: Data were from a 52-week, open-label trial of AOM 400 maintenance treatment in patients diagnosed with DSM-IV-TR-defined BP-I. Outcomes included treatment-emergent adverse events (TEAEs), clinician-rated extrapyramidal symptoms (EPS), patient stability, Young Mania Rating Scale (YMRS), Montgomery-Asberg Depression Rating Scale (MADRS), and Clinical Global Impression-Bipolar Version-Severity (CGI-BP-S) scores, functioning, and quality of life (QoL). Data analyses comprised descriptive statistics and a mixed model for repeated measures. Results: Outcomes were analyzed in 464 patients (Black/African American, n=104; White, n=255; Asian, n=94; other racial groups, n=11). No notable increase in TEAEs, serious TEAEs, or TEAEs leading to discontinuation was observed in Black/African American patients vs those in other racial groups. Rates of akathisia, tremor, increased weight, or hypertension in Black/African American patients were lower than or similar to those in other racial groups; changes in EPS scores were minimal in all groups. At the last visit, 90.1% of Black/African American patients were stable, similar to other racial groups. Small changes in YMRS total score occurred in all groups, with MADRS total score and CGI-BP-S scores largely unchanged. Functioning and QoL improved in Black/African American patients, to a similar or greater degree than in other racial groups. Limitations include the open-label design, prior aripiprazole stabilization, and sparse metabolic laboratory data, constraining causal inference and metabolic conclusions. Conclusion: The safety and efficacy of AOM 400 were comparable between Black/African American and non-Black/African American patients with BP-I.
The data provide valuable evidence supporting second-generation LAI antipsychotic use in these patients. Trial Registration: ClinicalTrials.gov identifier: NCT01710709.
While surgical repair is standard for acute Achilles tendon ruptures, the optimal technique remains debated. This study compares clinical, functional, and ultrasonographic outcomes between minimally invasive and open surgical approaches, with particular focus on: (1) patient-reported recovery, (2) tendon healing dynamics, and (3) the utility of ultrasound in postoperative monitoring. This retrospective study analyzed 108 consecutive patients undergoing surgical repair for acute Achilles tendon ruptures between 2015 and 2023, comparing minimally invasive (n = 58; ring forceps technique) and open approaches (n = 50; Krackow technique). Functional outcomes were assessed using American Orthopaedic Foot and Ankle Society (AOFAS), Patient-Reported Outcomes Measurement Information System (PROMIS), and Madrid Sonographic Enthesitis Index (MASEI) scores at standardized 6-, 12-, and 24-month follow-ups, while ultrasonographic evaluations quantified tendon thickness at rupture and insertion sites relative to contralateral tendons. Complication rates and demographic variables were systematically reviewed, with all patients receiving identical postoperative rehabilitation protocols. A total of 108 patients were included in the study, with a mean age of 41.56 ± 13.98 years (range, 18-68). Minimally invasive surgery was performed in 58 patients (53.7%), while the remaining 50 patients (46.3%) underwent open surgical repair. The mean follow-up duration was 2.4 years (minimum of 2 years of follow-up). Patients in the minimally invasive group reported significantly higher PROMIS scores compared to those in the open surgery group (P < .001). However, no significant differences were observed in AOFAS or MASEI scores between the groups (P > .05).
Ultrasonographic evaluation revealed that the mean tendon thickness at the rupture site was significantly greater in the minimally invasive group (1.04 cm; range, 0.93-1.15) than in the open surgery group (0.87 cm; range, 0.77-0.93) (P < .001). Furthermore, the operated-to-intact tendon thickness ratio was 2.13 in the minimally invasive group and 1.78 in the open surgery group, a difference that was also statistically significant (P = .006). Minimally invasive Achilles tendon repair was associated with potential advantages compared to open techniques, including more favorable patient-reported outcomes (median PROMIS score 80 vs. 76, P < .001), increased tendon thickness (19% greater, P < .001), a potential indicator of differential healing patterns, and lower wound complication rates, while importantly achieving equivalent high-level function as measured by the AOFAS and MASEI scores. The main limitations of this study include its retrospective design and the potential for unmeasured confounding. Ultrasound serves as a critical postoperative tool, objectively quantifying healing progression and informing return-to-sports decisions. These findings suggest potential advantages of minimally invasive approaches and support their consideration as a viable alternative to open repair in selected patients; however, the choice of technique should be individualized based on surgeon experience and patient-specific factors, and these associative findings require validation in randomized trials. Cite this article as: Yigit O, Erdogan MK, Canbaz SB, et al. Minimally invasive (ring forceps) versus open achilles tendon repair: A retrospective comparison of ultrasonographic and functional outcomes. Acta Orthop Traumatol Turc., 2025;59(6):394-404.
The purpose of this study was to survey sport medicine physicians globally to evaluate how they treat patients with a first-time shoulder dislocation (FTSD), specifically exploring the most common management strategies, the evidence or guidelines guiding these decisions, and the influence of demographic factors on these strategies and perceptions. A cross-sectional survey was developed and distributed globally from 14 October 2024 through 22 March 2025 to sport medicine physicians involved in the management of shoulder instability. The questionnaire assessed respondents' demographics, preferred management strategies for FTSDs, and perceptions regarding the evidence supporting various treatment approaches. Descriptive statistics were used to summarize the data, and regression analyses were conducted to explore associations between demographic variables and management practices and perceptions. A total of 326 respondents completed the survey, predominantly fellowship-trained orthopaedic surgeons from North America and Europe. The most common management strategy was immobilization for 1-3 weeks, followed by physical therapy incorporating strengthening and proprioception exercises. Imaging practices varied geographically, with North American respondents preferring radiographic assessment and Europeans favouring magnetic resonance imaging at initial presentation. Surgical management or consideration for surgical intervention was most influenced by the presence of bony injury (81%), patient age (69%) and participation in contact sports (63%). Younger physicians favoured arthroscopic stabilization, while older respondents leaned towards open approaches. While variability in the management of FTSDs persists, the results of this study demonstrate emerging trends towards consensus on critical factors for consideration in the management of such patients. 
However, despite the prevalence of guideline-based management, a substantial proportion of respondents, particularly those over 55 years old, continue to base their practices on personal experience and training rather than standardized protocols. The development and dissemination of evidence-based guidelines remain essential to standardize clinical practices and optimize patient outcomes. Level IV.
Complex ventral hernias are a surgical challenge associated with high morbidity and healthcare costs. Component separation techniques have improved over the years with better outcomes, although the optimal approach remains debated. Robotic surgery has shown promising outcomes as an alternative to open repair, although data from large multicenter studies remain limited. A retrospective cohort study was conducted using the American College of Surgeons National Surgical Quality Improvement Program (ACS-NSQIP) database. Adult patients undergoing component separation for ventral hernia repair were identified using CPT and ICD codes. Outcomes included 30-day surgical, wound, medical, and overall complications, as well as length of stay and readmission. Multivariable logistic regression and propensity score matching were applied to adjust for baseline differences. A total of 6,207 patients were included, of whom 4,443 (71.6%) underwent the open technique and 1,764 (28.4%) the robotic technique. After propensity matching (n = 5,259), robotic repair was independently associated with significantly lower overall complication rates (4.8% vs. 19.6%, aOR 0.193, 95% CI 0.140-0.265, p < 0.001), including wound (2.2% vs. 10.2%, aOR 0.164, p < 0.001), surgical (2.9% vs. 10.0%, aOR 0.271, p < 0.001), and medical complications (2.0% vs. 7.0%, aOR 0.229, p < 0.001). Robotic surgery was also associated with shorter length of stay (1.34 vs. 3.86 days, p < 0.001) and lower readmission rates (4.4% vs. 9.1%, p < 0.001). Robotic component separation for ventral hernia repair is associated with lower postoperative complication rates, shorter length of stay, and fewer readmissions compared to the open approach. These benefits remained significant after multivariable analysis and propensity score matching, supporting the robotic technique as an effective strategy.
Prospective studies are warranted to evaluate long-term outcomes, including recurrence, and to assess cost-effectiveness to optimize evidence-based surgical decision-making.
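As a minimal illustrative sketch (not the study's adjusted model, which controls for baseline covariates via logistic regression and propensity matching), the unadjusted odds ratio implied by the overall complication proportions above can be computed as:

```python
def odds_ratio(p_exposed: float, p_control: float) -> float:
    """Unadjusted odds ratio comparing two event proportions."""
    return (p_exposed / (1 - p_exposed)) / (p_control / (1 - p_control))

# Overall 30-day complications: robotic 4.8% vs open 19.6%
unadjusted_or = odds_ratio(0.048, 0.196)
print(round(unadjusted_or, 3))  # 0.207
```

The unadjusted value (about 0.207) sits close to the reported adjusted OR of 0.193; covariate adjustment accounts for the remaining gap.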
Pain, fatigue, and impaired health-related quality of life are common manifestations of rheumatoid arthritis. The aim of this study was to compare the effects of active conventional treatment with three different biological disease-modifying antirheumatic drugs (DMARDs) on patient-reported outcomes after 48 weeks in patients with early rheumatoid arthritis, using data from the NORD-STAR trial. NORD-STAR was an investigator-initiated, open-label, randomised controlled trial done at 29 rheumatology centres across Denmark, Finland, Iceland, Norway, Sweden, and the Netherlands. Newly diagnosed patients aged 18 years or older, with rheumatoid arthritis (according to the 2010 American College of Rheumatology-European Alliance of Associations for Rheumatology classification criteria for rheumatoid arthritis), symptom duration of less than 24 months, and who were naïve to DMARDs were randomly assigned (1:1:1:1) to receive active conventional treatment, certolizumab pegol, abatacept, or tocilizumab. The patient-reported outcomes assessed at baseline and weeks 4, 8, 12, 16, 24, 32, 40, and 48 included pain, patient's global assessment of disease activity, Health Assessment Questionnaire Disability Index, fatigue, Short Form-36 (SF-36, reflecting health-related quality of life), morning stiffness, and patient's acceptable symptom state. Linear mixed regression and logistic regression analyses were adjusted for sex, country, baseline patient-reported outcome values, anti-citrullinated protein antibody status, and treatment group. Proportions of patients reporting improvements greater than or equal to the minimal clinically important difference (MCID) were assessed. There was lived experience involvement in the design and implementation of the study. This trial was registered with ClinicalTrials.gov, NCT01491815, and EudraCT, 2011-004720-35.
Between Dec 14, 2012, and Dec 11, 2018, 812 patients were enrolled and randomly assigned; after exclusion of 17 patients not receiving tocilizumab due to administrative issues, the intention-to-treat population consisted of 795 patients (200 [25%] received active conventional treatment, 203 [26%] received certolizumab pegol plus methotrexate, 204 [26%] received abatacept plus methotrexate, and 188 [24%] received tocilizumab plus methotrexate). 547 (69%) of 795 patients were female, 248 (31%) were male, and the mean age was 54 years (SD 15). Between baseline and week 48, large and clinically relevant improvements in patient-reported outcomes were observed in all treatment groups. At 48 weeks, the biological DMARD groups had larger improvements in pain, fatigue, and the physical component score and bodily pain domain of the SF-36 compared with the active conventional treatment group. For pain, improvement exceeding the MCID was reported by 155 (76%) of 203 patients with certolizumab pegol plus methotrexate and 162 (79%) of 204 patients with abatacept plus methotrexate, compared with 136 (68%) of 200 patients in the active conventional treatment group. In the tocilizumab plus methotrexate group, 132 (70%) of 188 patients reported pain improvement exceeding the MCID. The absolute differences between the biological DMARD groups and the active conventional treatment group were otherwise generally marginal. All treatment groups showed substantial improvements in patient-reported outcomes over time. Biological DMARDs produced somewhat greater gains in pain, fatigue, and physical quality-of-life measures than conventional treatment, though overall differences between groups were small. The results highlight that early treatment and effective disease control in rheumatoid arthritis lead to strong patient-reported benefits regardless of therapy type.
Stockholm County Council, Swedish Medical Research Council, Swedish Rheumatism Association, Academy of Finland, Finska Läkaresällskapet, South-Eastern Health Region Norway, HUS Institutional grant, Icelandic Society for Rheumatology, Interregional grant from all health regions in Norway, NordForsk, Regionernes Medicinpulje, The Research Fund of University Hospital Reykjavik, UCB, Bristol Myers Squibb.
Tripterygium glycoside tablets (TGTs), a traditional Chinese medicine-based therapy and a type of conventional synthetic disease-modifying antirheumatic drug (csDMARD), have shown promise as a cost-effective alternative for rheumatoid arthritis (RA). However, there is limited evidence regarding the most effective combinations with other csDMARDs, such as methotrexate, leflunomide, and hydroxychloroquine. This study aims to evaluate the 12-week clinical efficacy and safety of csDMARD combination regimens based mainly on TGTs in patients with moderately active RA, using multicenter clinical data, and to explore the clinical characteristics of TGTs and identify the optimal combination therapy through a randomized controlled trial. This multicenter, open-label randomized controlled trial recruited 188 participants (47 per group) from 3 hospitals. Eligible patients were stratified by study center and randomized in a 1:1:1:1 ratio to receive TGT monotherapy, TGT plus methotrexate, TGT plus leflunomide, or TGT plus hydroxychloroquine. The primary outcome was the American College of Rheumatology 20% improvement response rate. Secondary outcome measures included the Disease Activity Score-28 (DAS28) using erythrocyte sedimentation rate and DAS28 using C-reactive protein response rates, the Clinical Disease Activity Index, the Simplified Disease Activity Index, the pain visual analog scale (0 to 10 points), the patient global assessment, the medical doctor global assessment, quality of life (36-Item Short Form Health Survey), and the Health Assessment Questionnaire Disability Index. All enrolled patients were followed up every 4 weeks for a total of 12 weeks. Adverse events were recorded during the observation period of the study. All patients randomized in this study will be included in the intention-to-treat analysis. This study was funded in February 2024.
Funding was provided by the Beijing Tongzhou District Science and Technology Planning Project. Recruitment of participants commenced on September 1, 2024, and concluded on December 31, 2025. A total of 181 patients were enrolled. Final data collection has been completed, and data cleaning is currently underway. Statistical analyses have not yet been performed. The primary results are expected to be submitted for publication by the end of 2026. This trial uses multicenter clinical data to provide robust evidence on the efficacy and safety of TGTs in combination with csDMARDs for the treatment of RA with moderate disease activity.
Breast cancer is a leading cause of mortality and morbidity among females worldwide. As part of the Global Burden of Diseases, Injuries, and Risk Factors Study (GBD) 2023, we provided an updated comprehensive assessment of the epidemiological trends, disease burden, and risk factors associated with breast cancer globally, regionally, and nationally from 1990 to 2023. Breast cancer incidence, mortality, prevalence, years lived with disability (YLDs), years of life lost (YLLs), and disability-adjusted life-years (DALYs) were estimated by age and sex for 204 countries and territories from 1990 to 2023. Mortality estimates were generated using GBD Cause of Death Ensemble models, leveraging data from population-based cancer registration systems, vital registration systems, and verbal autopsies. Mortality-to-incidence ratios were calculated to derive both mortality and incidence estimates. Prevalence was calculated by combining incidence and modelled survival estimates. YLLs were established by multiplying age-specific deaths with the GBD standard life expectancy at the age of death. YLDs were estimated by applying disability weights to prevalence estimates. The sum of YLLs and YLDs equalled the number of DALYs. Breast cancer burden attributable to seven risk factors was examined through the comparative risk assessment framework. The GBD forecasting framework was used to forecast breast cancer incidence and mortality from 2024 to 2050. Age-standardised rates were calculated for each metric using the GBD 2023 world standard population. In 2023, there were an estimated 2·30 million (95% uncertainty interval [UI] 2·01 to 2·61) breast cancer incident cases, 764 000 deaths (672 000 to 854 000), and 24·1 million (21·3 to 27·5) DALYs among females globally. 
In the World Bank low-income group, where a low age-standardised incidence rate (ASIR) was estimated (44·2 per 100 000 person-years [31·2 to 58·4]), the age-standardised mortality rate (ASMR) was the highest (24·1 per 100 000 [16·8 to 31·9]). The highest ASIR was in the high-income group (75·7 per 100 000 [67·1 to 84·0]), and the lowest ASMR was in the upper-middle-income group (11·2 per 100 000 [10·2 to 12·3]). Between 1990 and 2023, the ASIR in the low-income group increased by 147·2% (38·1 to 271·7), compared with a 1·2% (-11·5 to 17·2) change in the high-income group. The ASMR decreased in the high-income group, changing by -29·9% (-33·6 to -25·9), but increased by 99·3% (12·5 to 202·9) in the low-income group. The increase in age-standardised DALY rates followed that of ASMRs. Risk factors such as dietary risks, tobacco use, and high fasting plasma glucose contributed to 28·3% (16·6 to 38·9) of breast cancer DALYs in 2023. The risk factors with a decrease in attributable DALYs between 1990 and 2023 were high alcohol use and tobacco. By 2050, the global incident cases of breast cancer among females were forecast to reach 3·56 million (2·29 to 4·83), with 1·37 million (0·841 to 2·02) deaths. The stable incidence and declining mortality rates of female breast cancer in high-income nations reflect success in screening, diagnosis, and treatment. In contrast, the concurrent rise in incidence and mortality in other regions signals health system deficits. Without effective interventions, many countries will fall short of the WHO Global Breast Cancer Initiative's ambitious target of achieving an annual reduction of 2·5% in age-standardised mortality rates by 2040. The mounting breast cancer burden, disproportionately affecting some of the world's most vulnerable populations, will further exacerbate health inequalities across the globe without decisive immediate action. Gates Foundation, St Jude Children's Research Hospital.
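The burden metrics reported above follow the arithmetic stated in the methods: YLLs are deaths multiplied by the standard life expectancy at age of death, YLDs are prevalence weighted by disability, and DALYs are their sum. A minimal sketch with purely illustrative (hypothetical) inputs:

```python
def ylls(deaths: float, life_expectancy_at_death: float) -> float:
    # Years of life lost: deaths x GBD standard life expectancy at age of death
    return deaths * life_expectancy_at_death

def ylds(prevalent_cases: float, disability_weight: float) -> float:
    # Years lived with disability: prevalent cases weighted by disability
    return prevalent_cases * disability_weight

def dalys(yll: float, yld: float) -> float:
    # Disability-adjusted life-years: the sum of YLLs and YLDs
    return yll + yld

# Hypothetical single age-sex stratum: 1,000 deaths at a remaining life
# expectancy of 30 years, plus 50,000 prevalent cases at weight 0.1
total = dalys(ylls(1_000, 30.0), ylds(50_000, 0.1))
print(total)  # 35000.0
```

In the real GBD pipeline each term is estimated with uncertainty intervals and summed across ages, sexes, and locations; the sketch only shows the accounting identity.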
Smoking increases retear rates after rotator cuff repair. However, the cessation duration required to achieve outcomes comparable to those of nonsmokers remains unclear. To determine the cessation duration required for former smokers to achieve retear rates comparable to those of nonsmokers after arthroscopic rotator cuff repair. Cohort study; Level of evidence, 3. The study included 1902 patients who underwent arthroscopic rotator cuff repair for full-thickness tears between March 2012 and October 2023. Patients were categorized as nonsmokers (1172 patients); former smokers stratified by cessation duration of <1 year, 1 to <3 years, 3 to <5 years, and ≥5 years (454 patients); or current smokers (276 patients). After 1:1:1 propensity score matching based on age, employment status, tear size, and fatty infiltration, the records of 276 patients per group were analyzed. The visual analog scale, Subjective Shoulder Value, American Shoulder and Elbow Surgeons score, University of California at Los Angeles score, and range of motion were used to compare functional outcomes. Six-month postoperative magnetic resonance imaging assessed structural integrity using the Sugaya classification. At the 2-year (range, 23-27 months; mean, 24.5 ± 1.0 months) follow-up evaluation, clinical scores and range of motion had significantly improved (P < .001 for all) in all groups without significant intergroup differences. However, retear rates differed significantly: 17.8% in nonsmokers, 25.4% in former smokers, and 29.3% in current smokers (P = .005). Former smokers demonstrated progressively decreasing retear rates with longer cessation: 28.6% at <1 year, 27.0% at 1 to <3 years, 20.1% at 3 to <5 years, and 19.4% at ≥5 years. Patients with ≥3 years' cessation achieved rates comparable to those of nonsmokers. Multivariable analysis identified smoking status, pack-years (cutoff, 14), and cessation duration (cutoff, 44 months) as independent predictors. 
The combined cessation duration/pack-year model demonstrated superior predictive performance (AUC, 0.716). Sustained smoking cessation significantly lowers retear rates after rotator cuff repair, with at least 3 years of abstinence required to achieve rates comparable to those of nonsmokers. Pack-years and duration of cessation serve as independent predictors of tendon healing.
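The AUC quoted for the combined model is the area under the ROC curve. As a hedged sketch of the rank-based (Mann-Whitney) estimator commonly used for such discrimination statistics, with toy scores (all numbers hypothetical, not study data):

```python
def roc_auc(pos_scores, neg_scores):
    """Mann-Whitney AUC estimate: the probability that a randomly chosen
    positive case (e.g., a retear) is scored above a random negative case."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5  # ties count as half
    return wins / (len(pos_scores) * len(neg_scores))

# Toy risk scores: higher score = predicted retear
print(round(roc_auc([0.9, 0.8, 0.6], [0.7, 0.5, 0.4]), 3))  # 0.889
```

An AUC of 0.716, as reported for the cessation-duration/pack-year model, means a randomly chosen patient who retore would outrank a randomly chosen patient who healed about 72% of the time.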
Octogenarians undergoing surgical management for benign prostatic obstruction (BPO) present unique challenges because of age-related comorbidities and frailty, which may impact perioperative outcomes. This study evaluates 30-day complications in octogenarian patients undergoing laser enucleation of the prostate (LEP), robotic simple prostatectomy (RSP), and open simple prostatectomy (OSP) as reported in a large U.S. national database. The American College of Surgeons National Surgical Quality Improvement Program (NSQIP) database was queried for octogenarian patients who underwent BPO surgery from 2010 to 2023. Procedures were categorized using Current Procedural Terminology codes. Outcomes assessed included 30-day complications, readmission, and reoperation. Of 136,991 patients undergoing BPO surgery, 1,798 were octogenarians (LEP = 1,222; RSP = 93; OSP = 483). LEP patients had shorter median operative times (94 vs 123 and 153 minutes, p < 0.0001), shorter hospital stays (1 vs 3-6 days, p < 0.0001), and higher rates of discharge to home (96.8% vs 85.3%, p = 0.016). Transfusions were more frequent after OSP (27.5% vs 3.2%, p < 0.0001). Urinary tract infections were more common following RSP (9.7% vs 3.7-5.8%, p = 0.009). OSP was associated with higher rates of return to the operating room (4.6% vs 1.1-2.3%, p = 0.023) and prolonged hospitalization beyond 30 days (3.4% vs 0-1.5%, p = 0.016). Independent predictors of complications included higher ASA class, dependent functional status, longer operative time, and undergoing OSP vs LEP. LEP demonstrated superior perioperative outcomes with shorter operative times, lower transfusion rates, and a higher likelihood of discharge to home, supporting its preferential use in octogenarians. RSP provided intermediate outcomes, whereas OSP was associated with the highest morbidity.
Nutrition insecurity is a major driver of poor cardiovascular health in Indigenous communities. Medically tailored meals may improve heart failure outcomes and quality of life. There is growing momentum among Indigenous communities to reclaim traditional precontact foods to improve cardiovascular health. In this context, utilizing community-based participatory methods, we designed MUTTON-HF (Medically Utilized Tailored Traditional Foods to Optimize Nutrition in Heart Failure)-an Indigenous culturally and medically tailored meals program that locally sources Native produce and meat and incorporates traditional Diné (Navajo) foods and recipes. The MUTTON-HF trial is a 2-center, pragmatic, open-label randomized controlled trial to determine the efficacy and safety of MUTTON-HF among Diné patients with heart failure and a hospitalization or emergency room visit in the last 12 months. Two hundred and four subjects at 2 Indian Health Service sites will be randomized in a 1:1 stratified fashion by sex, age (<65 versus ≥65 years), and left ventricular ejection fraction (<50% versus ≥50%). Study subjects will receive either culturally and medically tailored meals that incorporate traditional Diné recipes and foods or usual dietary advice for 8 weeks. Data analysts will be blinded to group assignment until study completion. The primary efficacy end point is the proportion of patients with a hospitalization or emergency room visit (all-cause) within 90 days. Secondary outcomes will include hospitalizations or emergency room visits for heart failure specifically; Kansas City Cardiomyopathy Questionnaire scores for health-related quality of life; measures of food insecurity; Indigenous nourishment; diet quality; financial strain; and clinical biomarkers from study enrollment to 8 weeks. Community-based interventions that leverage the protective assets of Indigenous communities are needed to improve cardiovascular health among Native populations.
This pragmatic randomized controlled trial will target a rural tribal population and be the first to evaluate the efficacy of an Indigenous 'food is medicine' intervention to improve Indigenous cardiovascular health outcomes for patients with heart failure. URL: https://www.clinicaltrials.gov; Unique identifier: NCT06549699.
Artificial intelligence (AI) models are emerging as rapid, low-cost tools for predicting targetable genomic alterations directly from routine pathology slides. Although these approaches could accelerate treatment decisions in lung cancer, little is known about whether their performance is consistent across diverse patient populations and tissue contexts. To evaluate the performance and generalizability of 2 open-source AI pathology models for predicting EGFR mutation status in lung adenocarcinoma (LUAD) across independent cohorts and ancestral subgroups. This cohort study included patients with LUAD from 2 cohorts: Dana-Farber Cancer Institute (DFCI) from June 2013 to November 2023, and a European-based trial (TNM-I) from August 2016 to February 2022. All patients had paired next-generation sequencing data and hematoxylin-eosin-stained whole-slide images. In the DFCI cohort, genetic ancestry was inferred using germline genotype data. Data analyses were performed from July 2025 to September 2025. The primary outcome was model performance for predicting EGFR mutation status, measured as the area under the receiver operating characteristic curve (AUC), evaluated overall and across ancestry subgroups and sample types. Overall, 2098 patients with LUAD were included (mean [SD] age, 66.6 [10.3] years; 1315 female individuals [63%] and 783 male individuals [37%]). In the DFCI cohort (n = 1759; 54 African, 101 American, 95 Asian, 1465 European), EGFR mutations were detected in 432 patients (25%). One AI-pathology model achieved an AUC of 0.83 (95% CI, 0.81-0.85) compared with 0.68 (95% CI, 0.65-0.70) for the other model. In the TNM-I cohort (n = 339), EGFR mutations were detected in 50 patients (15%), with AUCs of 0.81 (95% CI, 0.74-0.88) and 0.75 (95% CI, 0.68-0.83), respectively. 
In ancestry-stratified analyses of the DFCI cohort, AUCs for the higher-performing model were 0.84 (95% CI, 0.81-0.86) in patients of European ancestry, 0.85 (95% CI, 0.72-0.94) in African ancestry, and 0.68 (95% CI, 0.55-0.78) in Asian ancestry. In sample type analyses, performance declined in pleural (AUC, 0.66; 95% CI, 0.56-0.76) compared with lung specimens (AUC, 0.86; 95% CI, 0.83-0.88). AI-guided triage analyses showed a potential 57% reduction in rapid EGFR testing, while maintaining sensitivity of 0.84 and specificity of 0.99. This cohort study found that AI-based pathology tools may serve as preliminary adjuncts for EGFR prediction in lung cancer, though performance differences by ancestry warrant careful interpretation.
Taxane-induced peripheral neuropathy (TIPN) affects quality of life and the ability to complete cancer treatment, and has limited effective interventions for prevention and treatment. To develop and validate a TIPN risk prediction model. SWOG S1714 was a prospective observational cohort study conducted at sites in the National Cancer Institute National Community Oncology Research Program between March 1, 2019, and November 15, 2021, with 3 years of follow-up. The study included evaluable participants 18 years or older with stage I to III lung, breast, or ovarian, fallopian tube, or primary peritoneal cancer who were starting taxane-based treatment. Statistical analysis was conducted from December 2023 to June 2024. Exposures were taxane-based regimens including paclitaxel or docetaxel. The primary end point was occurrence of TIPN by 24 weeks. TIPN was assessed using the patient-reported European Organisation for Research and Treatment of Cancer Quality of Life Questionnaire-Chemotherapy-Induced Peripheral Neuropathy 20-item scale (CIPN-20) at baseline and weeks 4, 8, 12, and 24. Occurrence of TIPN was defined as an increase of 8 points or more over baseline in the CIPN-20 sensory subscale score. With a 60% random sample of evaluable participants, best-subset selection using logistic regression and k-fold cross-validation identified a best model based on demographic factors, baseline comorbid conditions, and treatment factors. Adverse risk factors were summed, generating a score, which was split at the median and tested in the remaining 40% of evaluable participants. The target difference was 12% between high-risk vs low-risk groups. A total of 1336 participants enrolled in S1714. Of 1278 evaluable participants (median age, 55.0 years [range, 23.0-84.0 years]; 1264 women [98.9%]; 1164 with breast cancer [91.1%]), 804 (62.9%) experienced TIPN by week 24.
Using the training set of 768 participants, a risk prediction model for TIPN was developed that included 5 adverse risk factors: receipt of paclitaxel; stage II or III disease; planned taxane duration of more than 12 weeks; diabetes, autoimmune disease, moderate kidney disease, or a neurologic condition; and self-identified race and ethnicity (Black, Hispanic, Native American, Pacific Islander, multiple races, or unknown race or ethnicity). In the test set of 510 participants, TIPN was more common in high-risk (235 of 345 [68.1%]) vs low-risk (84 of 165 [50.9%]) groups (absolute difference, 17.2%), exceeding the 12% target. In this cohort study of participants with early-stage cancer receiving a taxane regimen, a set of baseline risk factors stratified TIPN risk. A risk prediction model may guide treatment decision-making, symptom monitoring, and enrollment in interventional trials for TIPN prevention and treatment.
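The risk-score construction described above (sum binary adverse risk factors, dichotomize at the median, compare observed TIPN rates between groups) can be sketched as follows; the data, factor prevalences, and effect sizes here are synthetic, not S1714 values.

```python
# Illustrative sketch of a count-based risk score: sum binary adverse
# risk factors, split at the median, compare outcome rates by group.
# All data are synthetic; this is not the S1714 model.
import numpy as np

rng = np.random.default_rng(1)
n = 510
factors = rng.integers(0, 2, size=(n, 5))     # 5 binary adverse risk factors
score = factors.sum(axis=1)                   # risk score, 0 to 5

high_risk = score > np.median(score)          # dichotomize at the median

# Synthetic outcome whose probability rises with the score
tipn = rng.random(n) < (0.40 + 0.08 * score)
rate_high = tipn[high_risk].mean()
rate_low = tipn[~high_risk].mean()
print(f"TIPN rate: high-risk {rate_high:.2f} vs low-risk {rate_low:.2f}")
```

A count-based score like this trades a little discrimination for transparency: each factor contributes one point, so the rule can be applied at the bedside without a calculator.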
Studies have demonstrated lower odds of survival from out-of-hospital cardiac arrest (OHCA) during nighttime hours, but this has not been studied in North America since 2013, and it is unclear what factors might explain this survival difference. To identify whether OHCA survival during nighttime hours remains lower than during daytime hours using contemporary data and whether the difference can be explained by variable patient physiology or emergency care factors. This cohort study included adults (aged ≥18 years) with OHCA in the Cardiac Arrest Registry for Enhanced Survival from 2013 to 2024. Daytime was defined as 7:00 am to 10:59 pm, and nighttime was defined as 11:00 pm to 6:59 am. Primary outcomes were sustained return of spontaneous circulation (ROSC) and neurologically favorable survival (Cerebral Performance Category score of 1 or 2). A multilevel mixed-effects logistic regression model with prehospital agency as a random effect and patient or treatment characteristics as fixed effects was used. A similar analysis of postresuscitation survival was performed among patients with sustained ROSC, adjusting for the time-to-cardiopulmonary resuscitation interval and defibrillation status. A mediation analysis was performed to identify whether the prehospital response interval mediates the association. Of 1 151 845 patients in the registry, 874 415 were eligible and included in the analysis. The median (IQR) age was 64 (52-75) years; 557 515 patients (63.8%) were male, and 181 878 (20.8%) were Black or African American, 146 352 (16.7%) were Hispanic or Latino, and 447 646 (51.2%) were White.
A minority of OHCA responses occurred at nighttime (241 967 [27.7%]), and the odds of sustained ROSC and neurologically favorable survival were lower at nighttime than daytime (sustained ROSC: 62 548 [25.8%] vs 193 486 [30.6%]; adjusted odds ratio [aOR], 0.85; 95% CI, 0.84-0.86; neurologically favorable survival: 16 234 [6.7%] vs 58 542 [9.3%]; aOR, 0.84; 95% CI, 0.82-0.86). Among those with sustained ROSC, the odds of postresuscitation survival at nighttime were also lower than daytime (aOR, 0.93; 95% CI, 0.90-0.95). The prehospital response interval partially mediated the nighttime survival disadvantage, with approximately 12.6% of the total effect mediated by the response interval. In this cohort study of OHCA, nighttime response was associated with lower adjusted odds of sustained ROSC, neurologically favorable survival, and postresuscitation survival. Emergency care factors accounted for only a portion of the decreased odds of survival at nighttime.
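The mediation result above (roughly 12.6% of the nighttime effect mediated by the response interval) is a proportion-mediated estimate. One simple way to obtain such a figure is the difference method, sketched below on synthetic data; this is a crude approximation (logistic coefficients are not strictly collapsible) and is not the registry data or the study's exact model.

```python
# Illustrative difference-method mediation sketch on synthetic data:
# proportion mediated = (total effect - direct effect) / total effect,
# comparing the nighttime coefficient with and without the mediator
# (prehospital response interval). Not the study's data or model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 20_000
night = rng.integers(0, 2, size=n)
# Mediator: response interval (minutes), longer at night on average
interval = 7.0 + 2.0 * night + rng.normal(0.0, 2.0, size=n)
# Outcome: survival, with a direct nighttime harm plus interval-mediated harm
logit = -1.0 - 0.2 * night - 0.15 * interval
survived = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

total = LogisticRegression().fit(night[:, None], survived)
direct = LogisticRegression().fit(np.column_stack([night, interval]), survived)

b_total = total.coef_[0, 0]    # total nighttime effect (log-odds)
b_direct = direct.coef_[0, 0]  # direct effect, adjusting for the mediator
prop_mediated = (b_total - b_direct) / b_total
print(f"proportion mediated ~ {prop_mediated:.2f}")
```

The attenuation of the nighttime coefficient once the mediator enters the model is what "partially mediated" quantifies.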
Open reduction and internal fixation (ORIF) is a long-established and accepted technique for the treatment of lateral malleolar fractures. However, minimally invasive percutaneous plate osteosynthesis (MIPPO) may be preferred in cases with compromised soft tissue conditions or in the presence of comorbidities, and it is increasingly being used in the treatment of long bone fractures. The aim of this study was to compare the clinical, functional and radiological outcomes of the MIPPO technique for lateral malleolus fractures with those of ORIF. This prospective randomized controlled study analysed patients who underwent surgery for lateral malleolus fractures treated with ORIF or MIPPO between 2022 and 2024. The study included patients with lateral malleolus fractures of Danis-Weber types B and C requiring surgical intervention. Postoperative pain, the American Orthopaedic Foot and Ankle Society (AOFAS) score, hospital stay, surgical time, fluoroscopy time, and radiological measurements were compared. The study included 68 patients (ORIF n = 34, MIPPO n = 34). Follow-up ranged from 12 to 24 months (mean, 18.2 months). The postoperative pain scores at 8 h (p = 0.001), 24 h (p < 0.001) and 2 weeks (p = 0.003) were lower in the MIPPO group. Pain scores were similar between groups at the preoperative assessment and at 4 weeks, 6 weeks, 3 months, 6 months and 1 year postoperatively. AOFAS scores were similar between both groups at 3 months, 6 months and 1 year postoperatively. Surgical time and the postoperative hospitalisation period were shorter in the MIPPO group (p < 0.001 and p = 0.005, respectively). Fluoroscopy exposure was higher in the MIPPO group (p = 0.001). Radiological measurements were similar between groups. The MIPPO technique leads to lower pain scores in the short term, shorter surgery times, and earlier discharge.
In the long term, it provides similar clinical, functional, and radiological results to those obtained with ORIF.
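The abstract reports p values for the pairwise pain-score comparisons without naming the test; for ordinal pain scores in groups of this size, a rank-based test such as the Mann-Whitney U is a common choice. A sketch on synthetic scores (hypothetical values, not the study's data or necessarily its test):

```python
# Illustrative sketch: comparing early postoperative pain scores between
# two surgical groups with a Mann-Whitney U test. Scores are synthetic;
# the study's actual test and data are not reproduced here.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(3)
orif = rng.normal(5.0, 1.2, size=34)    # hypothetical 8-hour pain scores, ORIF
mippo = rng.normal(3.5, 1.2, size=34)   # hypothetical 8-hour pain scores, MIPPO

stat, p = mannwhitneyu(mippo, orif, alternative="two-sided")
print(f"U = {stat:.0f}, p = {p:.4g}")
```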
The increasing cost of lumbar fusion has invited payment reforms, such as mandatory price limits by Medicare in 2026. To examine the cost, utilization, and procedural case-mix trends for different types of lumbar fusion from 2002 to 2023 in the United States. This cross-sectional analysis used survey-weighted data from the 2002 to 2023 National Inpatient Sample (NIS) and the 2016 to 2022 Nationwide Ambulatory Surgical Sample (NASS). From this nationally representative sample of inpatient and hospital-owned outpatient discharges, information on US adults aged 20 years and older undergoing lumbar fusion for any indication from January 2002 to December 2023 was included. Lumbar fusion of any type (1-disc level or multilevel, as well as single vertebral column or both anterior-posterior columns) was examined, with nonfusion surgery as a comparison. The main outcomes were the survey-weighted annual total of procedures, the mean age of patients undergoing lumbar fusion, the inflation-adjusted hospital costs, and the annual procedure rates per 100 000 population. A total of 5 033 772 lumbar fusion admissions between 2002 and 2023 were included. In 2023, the cohort of patients undergoing 274 750 procedures had a mean (SD) age of 63.2 (12.9) years, with 142 815 (52.0%) female patients. Excluding 54 620 complex fusions, which were mostly multilevel anterior-posterior column fusions, there were 164 105 (50.1%) multilevel fusions and 109 130 (51.3%) combined anterior-posterior column fusions. The age-adjusted population rate of inpatient fusion procedures increased from 60.1 (95% CI, 58.8-90.3) per 100 000 in 2002 (148 823 admissions) to a peak of 89.9 (95% CI, 89.6-90.3) in 2016 (284 180 admissions), before declining to 80.0 (95% CI, 79.7-80.4) by 2023 (273 235 admissions). Lumbar fusion performed in hospital-owned outpatient facilities was minimal in 2016 (6132 procedures, or 2.1% of total lumbar fusions) but reached 6.9 per 100 000 (27 331 procedures, or 9.8% of total lumbar fusions) by 2022.
Adjusted inpatient hospital costs increased 265.3% from $3.86 (95% CI, $3.81-$3.92) billion in 2002 to $14.1 (95% CI, $13.9-$14.2) billion in 2023, and mean inpatient per-procedure cost increased from $25 849 (95% CI, $25 684-$26 015) in 2002 to $45 458 (95% CI, $45 207-$45 709) in 2023. Lumbar fusion primarily shifted from single column at 1 or 2 disc levels in 2002 (mean cost, $24 515; 95% CI, $24 361-$24 669) to multilevel anterior-posterior column fusion in 2023 (mean cost, $55 034; 95% CI, $54 420-$55 650). In this cross-sectional study, lumbar fusion trends were marked by greater utilization of procedures overall, and especially involving multilevel and combined anterior-posterior column approaches and by greater use in the outpatient setting. Costs also increased at both the national and per-procedure levels.
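The headline cost-growth figure can be checked with simple percent-change arithmetic on the reported totals:

```python
# Percent-change check on the reported national inpatient costs
# (totals taken from the abstract; arithmetic only).
cost_2002 = 3.86e9   # $3.86 billion in 2002
cost_2023 = 14.1e9   # $14.1 billion in 2023

pct_increase = (cost_2023 - cost_2002) / cost_2002 * 100
print(f"{pct_increase:.1f}% increase")   # matches the reported 265.3%
```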
We assembled a workgroup within the Pediatric Research Collaborative on Critical UltraSound (PeRCCUS), a subgroup of the Pediatric Acute Lung Injury and Sepsis Investigators (PALISI), to define early guidance for point-of-care ultrasound (POCUS) institutional practice and foster future comprehensive guidelines for its broad adoption in pediatric critical care medicine. A modified Delphi method was used to create the statements. The first meeting was an open proposal session in which workgroup members suggested items for consideration. This was followed by rounds of voting on levels of agreement along a 7-point Likert-type scale: items receiving a score of 6 or greater progressed to the next stage of voting, lower-scoring items were reconsidered, and additional rounds continued until consensus was reached. Multi-institutional, multidisciplinary workgroup of experts on POCUS organized within PeRCCUS as a subgroup of PALISI. None. Consensus was obtained for 25 recommendations across five domains: clinical application, quality assurance, equipment, education, and research. We report consensus recommendations for institutions on clinical use, educational programs, quality assurance, technical requirements, and future research opportunities for the adoption of pediatric critical care medicine POCUS.
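The Delphi screening rule described above (items scoring 6 or greater on the 7-point scale advance; lower-scoring items are reconsidered) amounts to a simple threshold filter; a sketch with hypothetical item names and scores:

```python
# Illustrative sketch of the Delphi screening step: items with an
# agreement score >= 6 on the 7-point scale advance to further voting;
# lower-scoring items are set aside for reconsideration.
# Item names and scores are hypothetical.
scores = {
    "probe disinfection protocol": 6.4,
    "image archiving requirement": 6.8,
    "minimum annual scan count": 4.9,
    "credentialing pathway": 6.1,
}
THRESHOLD = 6.0

advanced = {item: s for item, s in scores.items() if s >= THRESHOLD}
reconsidered = {item: s for item, s in scores.items() if s < THRESHOLD}
print(f"{len(advanced)} advanced, {len(reconsidered)} reconsidered")
```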
Preeclampsia is a leading cause of maternal and perinatal morbidity and mortality, yet its unpredictable onset and rapid progression hinder timely management. Existing prediction tools often rely on specialized biomarkers, static assessments, or limited study cohorts, impeding clinical utility and generalizability. To develop and validate machine learning models for dynamic, short-term prediction of preeclampsia onset using longitudinal electronic health record (EHR) data. This retrospective, multisite cohort study included pregnancies delivered between October 1, 2020, and May 31, 2025, at 3 NewYork-Presbyterian hospitals: Weill Cornell Medical College (WCMC), Lower Manhattan Hospital (LMH), and Brooklyn Methodist Hospital (BMH). Extreme gradient boosting models were developed to predict preeclampsia onset within 1, 2, and 4 weeks. Performance was assessed using nested cross-validation at the training site and external validation via direct transfer, fine-tuning, and retraining. The study included pregnancies among individuals 18 years or older (35 895 at WCMC, 8664 at LMH, and 14 280 at BMH). Routine information captured within the EHR was used, including blood pressure, maternal characteristics, and routine laboratory test results. The main outcome was development of preeclampsia within specified prediction windows. Model performance was evaluated using the area under the receiver operating characteristic curve, as well as specificity and positive predictive value at 90% sensitivity.
Among 58 839 pregnancies (mean [SD] maternal age, 33.3 [5.3] years; 10 196 [17.3%] Asian, 6525 [11.1%] Black or African American, 32 675 [55.5%] White, and 9443 [16.0%] other [ie, those who were races other than Asian, Black, or White] or unknown race), individuals who developed preeclampsia were older (median [IQR] age, 35.0 [31.0-38.0] vs 34.0 [31.0-37.0] years in the WCMC group [P < .001], 35.0 [31.0-38.0] vs 34.0 [32.0-37.0] years in the LMH group [P = .003], and 33.0 [28.0-36.0] vs 31.0 [26.0-35.0] years in the BMH group [P < .001]) and more frequently Black (335 of 2227 [15.0%] vs 2178 of 3668 [6.5%] in the WCMC group [P < .001], 117 of 792 [14.8%] vs 566 of 7872 [7.2%] in the LMH group [P < .001], and 455 of 1088 [41.8%] vs 2874 of 13 192 [21.8%] in the BMH group [P < .001]). Predictive performance increased from 28 to 34 weeks' gestation and peaked at 34 weeks' gestation (areas under the receiver operating characteristic curves, 0.863 at training and 0.808-0.834 at validation). The positive predictive values increased from approximately 0.001 to 0.002 at 28 weeks' gestation to peak values at 36 weeks (mean [SD], 0.057 [0.012] at LMH and 0.046 [0.007] at BMH), whereas the negative predictive values remained greater than 0.993. Blood pressure was the most informative predictor; laboratory measures (including albumin, alkaline phosphatase, and hematologic indexes) contributed most at earlier gestational ages, with demographic and obstetric factors increasing in importance later. In this retrospective, multisite cohort study of pregnancies in late gestation, dynamic short-term prediction of preeclampsia was feasible using routinely available clinical and laboratory data. These results suggest that this approach provides opportunities for earlier intervention and may be adaptable across diverse health care settings.
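Reporting specificity and positive predictive value "at 90% sensitivity" means the decision threshold is chosen at the 90%-sensitivity operating point of the ROC curve. A sketch with synthetic scores (not the study's models or data):

```python
# Illustrative sketch: choose the score threshold that reaches 90%
# sensitivity, then read off specificity and PPV at that threshold.
# Labels and scores are synthetic, not the study's data.
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(4)
y = rng.integers(0, 2, size=5000)
scores = np.clip(0.5 * y + rng.normal(0.25, 0.2, size=5000), 0.0, 1.0)

fpr, tpr, thresholds = roc_curve(y, scores)
idx = int(np.argmax(tpr >= 0.90))   # first operating point with >=90% sensitivity
thr = thresholds[idx]

pred = scores >= thr
sens = (pred & (y == 1)).sum() / (y == 1).sum()
spec = (~pred & (y == 0)).sum() / (y == 0).sum()
ppv = (pred & (y == 1)).sum() / pred.sum()
print(f"threshold {thr:.2f}: sens {sens:.3f}, spec {spec:.3f}, PPV {ppv:.3f}")
```

Fixing sensitivity this way is common for screening use cases: missed cases are costly, so the threshold is set to catch 90% of them and the achievable specificity and PPV follow from that choice (and from outcome prevalence, which is why the reported PPVs are low).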
The integration of artificial intelligence (AI), the rise of mega-journals, and the manipulation of impact factors present challenges to scientific integrity. These trends threaten the core principles of objectivity, reproducibility, and transparency. This editorial highlights two categories of threats: (1) external pressures, such as AI misuse and metric-driven publishing models, and (2) internal systemic flaws, including the 'publish or perish' culture and methodological fragility. Mega-journals, characterized by high-volume publishing and broad interdisciplinary scopes, improve accessibility and accelerate dissemination. However, the emphasis on publication volume might weaken the rigor of peer review. To navigate these challenges, the authors propose a balanced approach that harnesses innovation without compromising scientific integrity. Proposed solutions include mandating AI transparency through frameworks like CONSORT-AI and redefining impact metrics to emphasize reproducibility, mentorship, and societal impact alongside citations. Scientific journals should promote career opportunities based less on publication quantity and more on quality. Global cooperation, via initiatives like the San Francisco Declaration on Research Assessment (DORA) and the Committee on Publication Ethics (COPE), is essential to standardize ethics and address resource disparities. This editorial proposes solutions for researchers, journals, and policymakers to realign academic incentives and uphold the ethical foundation of science. By fostering transparency, accountability, and equity, the scientific community can preserve its ethical foundations while embracing transformative tools, ultimately advancing knowledge and serving society. FOR CLINICAL TRIALS: n/a. LEVEL OF EVIDENCE: V.
Minimally invasive surgery has been associated with reduced postoperative morbidity compared with traditional open approaches, suggesting that it may be advantageous for patients with frailty. However, its effect on individuals with frailty and intraductal papillary mucinous neoplasms (IPMNs) undergoing distal pancreatectomy (DP) remains unclear. This study aimed to evaluate the association between surgical approach and postoperative outcomes in the context of patient frailty. Using the American College of Surgeons National Surgical Quality Improvement Program database, 1120 patients with nonmalignant IPMN who underwent DP between 2019 and 2023 were identified. Frailty was defined as a modified frailty index (mFI) of ≥2, calculated using 5 variables: diabetes mellitus, hypertension, functional dependency, chronic obstructive pulmonary disease, and congestive heart failure. Patients were categorized according to their frailty status and surgical approach (minimally invasive surgery [MIS] vs open surgery). Postoperative outcomes, including complications, major complications, readmission, reoperation, and mortality, were compared between the groups using univariate and multivariate analyses. Patients with frailty constituted 27.7% of the cohort (n = 310) and were more likely to experience complications (35.1% vs 28.4% in patients without frailty; P = .042) and a longer hospital stay (mean, 5.9 days vs 5.3 days in patients without frailty; P = .009). In the overall cohort, frailty independently predicted higher odds of complications (odds ratio [OR], 1.44 [95% CI, 1.05-1.97]) and readmission (OR, 1.68 [95% CI, 1.16-2.45]), whereas male sex and older age were associated with increased mortality. MIS was not associated with reduced odds of complications, readmission, reoperation, or mortality in populations with or without frailty. Frailty is an independent predictor of complications and readmission after DP for IPMN.
However, MIS does not seem to confer benefits over open surgery in patients with or without frailty.
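The frailty definition above is the 5-item modified frailty index (mFI-5): one point for each of the five comorbidities, with a score of 2 or more classed as frail. A minimal sketch (hypothetical patient records, not NSQIP data):

```python
# Illustrative sketch of the 5-item modified frailty index (mFI-5)
# described above: one point per comorbidity present, with frailty
# defined as a score of 2 or more. Patient records are hypothetical.
MFI5_ITEMS = (
    "diabetes_mellitus",
    "hypertension",
    "functional_dependency",
    "copd",
    "congestive_heart_failure",
)

def mfi5(patient: dict) -> int:
    """Number of mFI-5 comorbidities recorded as present."""
    return sum(bool(patient.get(item, False)) for item in MFI5_ITEMS)

def is_frail(patient: dict) -> bool:
    """Frailty as defined in the study: mFI-5 of 2 or more."""
    return mfi5(patient) >= 2

patient = {"diabetes_mellitus": True, "hypertension": True}
print(mfi5(patient), is_frail(patient))   # → 2 True
```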