Invasive breast cancer (IBC) is the most prevalent malignant tumor in women globally and a leading cause of female mortality, with increasing incidence and death rates. Recent advancements in machine learning (ML) have shown significant potential in IBC prediction. This study aimed to assess different ML strategies to develop an optimal model for predicting IBC based on routine clinical examination indicators. We collected routine blood parameters, serum tumor marker indicators, and age data from 1,175 IBC patients at the Affiliated Dazu Hospital of Chongqing Medical University. From these datasets, we identified 26 key routine clinical examination indicators, including 23 routine blood parameters, 2 tumor marker indicators, and age. We constructed IBC prediction models using 10 ML algorithms. The performance of these models was evaluated using the test set and internal validation set, with evaluation metrics including accuracy, positive predictive value (PPV), negative predictive value (NPV), sensitivity, specificity, F1 score, and area under the curve (AUC). Ultimately, an optimal web tool for predicting IBC was developed based on these models. In the internal testing cohort, we assessed ten ML models. The XGBoost-based web tool emerged as the optimal choice, achieving an AUC exceeding 0.970 on both the test set and the internal validation cohort. Interpretability analysis using Shapley additive explanations (SHAP) revealed that basophils, platelet distribution width (PDW), and age ranked highly in the feature importance of the XGBoost model for IBC prediction, highlighting the importance of incorporating routinely collected clinical data into IBC prediction models. The ML-based web tool developed using 26 routine clinical examination indicators has shown considerable promise in predicting IBC. Among the models, the XGBoost algorithm exhibited the highest performance, making it a reliable predictive tool that can enhance clinical decision-making and improve the accuracy of IBC diagnoses.
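As a rough illustration of the modelling pipeline described above, the following minimal sketch trains a gradient-boosted classifier and ranks features with SHAP. The file name, column names, and hyperparameters are placeholders, not the study's actual data or configuration.

```python
# Hypothetical sketch: binary XGBoost classifier on routine indicators,
# held-out AUC, and SHAP-based feature importance (assumed data layout).
import pandas as pd
import shap
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
from xgboost import XGBClassifier

df = pd.read_csv("routine_indicators.csv")          # 26 indicators + label (placeholder file)
X, y = df.drop(columns=["ibc_label"]), df["ibc_label"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

model = XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.05,
                      eval_metric="logloss")
model.fit(X_train, y_train)

# Discrimination on the held-out split (the study reports AUC > 0.970)
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"test AUC = {auc:.3f}")

# SHAP feature importance, as used for the interpretability analysis
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)
shap.summary_plot(shap_values, X_test, plot_type="bar")
```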
Rehabilitation robotics has accumulated evidence for improving motor outcomes, yet adoption in routine care remains uneven across services and health systems. This evidence-to-practice gap reflects not only clinical considerations but also variation in how robotic rehabilitation is organized and delivered. Relevant service features include staffing and supervision patterns, scheduling rules, device placement, maintenance and support arrangements, governance, and documentation workflows. Current studies often describe the technology and clinical protocol while reporting service delivery features inconsistently, which limits transferability and weakens interpretation of implementation and economic findings. This perspective proposes a pragmatic reporting standard for service models in robotic rehabilitation. Its purpose is to make delivery configurations measurable and comparable across settings, while distinguishing the steady-state service delivery model from the time-limited implementation strategies used to establish, adapt, or sustain it. The standard includes a taxonomy of common service delivery archetypes, a Minimum Service Model Dataset for Rehabilitation Robotics (MSMD-RR) specifying must-report variables with operational definitions and units, and a reporting checklist, robotic rehabilitation service reporting (ROBOT-SERV), designed to complement established implementation science and health economic reporting guidance. MSMD-RR variables were selected for feasibility in routine care, cross-context interpretability, and plausible links to key drivers of real-world value, particularly utilization hours, throughput, therapist time, and downtime. A service model logic model is also provided to link inputs and processes to outputs and implementation, organizational, and economic outcomes. The proposed standard aims to support benchmarking, pragmatic evaluation, and health technology assessment.
Stroke disparities persist globally and are driven in part by diagnostic delays and uneven access to advanced neuroimaging. Cerebrovascular ultrasound offers a portable, bedside modality that can optimize stroke evaluation across diverse clinical environments, yet formal training in its use remains inconsistent. This perspective argues that structured, competency-based ultrasound education within vascular neurology fellowships represents a practical, equity-aligned intervention to address diagnostic gaps. While training alone cannot resolve structural drivers of inequity, standardizing ultrasound proficiency can equip clinicians with adaptable diagnostic skills, support timely decision-making, and promote more equitable stroke care delivery across practice settings.
Placental evaluation during obstetric ultrasound commonly includes assessment of placental location, umbilical cord insertion, and relationship to the internal cervical os. However, placental volume measurement is not routinely incorporated into standard ultrasound protocols despite potential clinical applications. The authors of this study aimed to assess sonographer familiarity with, utilization of, and perceptions regarding placental volume measurement in clinical practice. A cross-sectional, web-based survey was administered to practicing sonographers. Items assessed demographics, practice characteristics, familiarity with placental volume measurement, and involvement in education. The primary outcome was familiarity with and routine use of placental volume measurement. Secondary outcomes included institutional protocols and teaching involvement. Fifteen sonographers completed the survey. Most worked in hospital settings (66.7%, n = 10), and all were White (100%, n = 15). The majority performed 6-10 scans daily (73.3%, n = 11), and 40.0% (n = 6) had >20 years of experience. Placental volume measurement was not required in any laboratory (100%, n = 15). Most respondents reported limited familiarity, with 66.7% (n = 10) indicating they do not routinely perform it. Standard protocols most commonly included placental location (100%, n = 15), relationship to the internal os (72.7%, n = 8), and cord insertion (63.6%, n = 7). Most reported teaching sonography students (78.6%, n = 11). Placental volume measurement is rarely incorporated into routine obstetric ultrasound practice, and sonographers report limited familiarity with the technique. Study limitations include small sample size and convenience sampling. Increased education and standardized protocols may improve adoption of advanced placental imaging methods.
Understanding endoscopists' perspectives and routine practice offers opportunities to improve bowel cleansing for colonoscopy. To elucidate Italian endoscopists' perceptions of bowel preparation quality, focusing on defining high-quality cleansing (HQC) and its perceived benefits in clinical practice and for diagnostic outcomes. Nationwide, cross-sectional, web-based survey. A nationwide, web-based cross-sectional survey was undertaken in Italy between August and September 2024 among gastroenterologists with special interest in endoscopy. Participants were recruited via telephone screening; of 498 gastroenterologists contacted, 150 respondents completed an online questionnaire; analyses were descriptive. The survey results revealed that all respondents (100%) routinely evaluate and document cleansing in the endoscopy report and almost all (99%) used validated scales. The majority (72%) of endoscopists aimed for HQC, which they defined as a segment score of ⩾8-9 on the Boston Bowel Preparation Scale or 'excellent' on the Aronchick scale. Almost all (93%) considered HQC important in every colonoscopy regardless of indication. All respondents considered that HQC allows higher identification rates for adenomas and sessile serrated lesions, reduces procedure time, and improves overall clinical efficiency; 99% considered that HQC allows for more appropriate surveillance intervals. On a scale of 1-10 to rate confidence with the diagnostic reliability of the exam (1 = not at all confident, 10 = very confident), the respondents' levels of confidence improved with high-quality bowel preparation; mean scores were 2.1 with inadequate preparation, 6.6 with good cleansing and 9.2 with high-quality bowel cleansing. The survey revealed that the vast majority of Italian endoscopists consider HQC essential across all clinical indications. The results support the transition from 'good' to 'high-quality' cleansing as the new standard in clinical colonoscopy practice.
Identifying individuals at risk for incident chronic kidney disease (CKD; estimated glomerular filtration rate [eGFR] <60 mL/min/1.73 m2) could aid in prevention and disease surveillance. We aimed to develop and validate prediction equations to identify individuals at risk of incident CKD using routinely collected administrative data with and without urine albumin-to-creatinine ratio (ACR). This is a retrospective cohort study using administrative data, conducted in Manitoba and Ontario, Canada. This study included 413 948 adults (18 or older) with an eGFR > 70 mL/min/1.73 m2 from Manitoba (derivation cohort; 2006-2016), with external validation in 7 747 513 adults from Ontario, Canada. Routinely available variables (demographics, comorbidities, laboratory values) in administrative data sets were used to predict the outcome of incident CKD (stage G3+), defined by a single outpatient eGFR measure <60 mL/min/1.73 m2 during up to 10 years of follow-up. In an additional analysis, we defined incident CKD using repeat eGFR measures. Time-to-event models, accounting for the competing risk of death, were used to predict new-onset CKD from one to nine years with a data-driven model reduction. Prediction equations stratifying individuals with and without ACR measurements were derived, then internally and externally validated. Among individuals from Manitoba [53% women, mean (SD) age 51 (17) years, mean (SD) baseline eGFR 95 (14) mL/min/1.73 m2, median (interquartile range) ACR 0.7 mg/mmol (1-3)], incident CKD occurred in 11.4% during a median follow-up of 4.5 (Q1 = 2.3, Q3 = 7.6) years. The final model included six variables (age, sex, baseline eGFR, hemoglobin, hypertension, and diabetes) and yielded a five-year area under the curve of 86.0 (no ACR) and 80.2 (with ACR). Model performance was excellent in external validation. Only individuals with measures of all model predictors (complete case analysis) were included. Equations using routinely collected population-level administrative data variables can accurately predict the onset of CKD with or without ACR.
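The modelling idea can be sketched as follows with a simplified cause-specific Cox fit on the six final predictors. The study itself used time-to-event models that account for the competing risk of death, which this sketch omits, and all file and column names are assumed.

```python
# Simplified sketch (not the study's actual model): Cox proportional hazards on the
# six final predictors, ignoring the competing risk of death handled in the paper.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("ckd_cohort.csv")   # one row per adult with baseline eGFR > 70 (assumed)
covariates = ["age", "sex", "egfr_baseline", "hemoglobin", "hypertension", "diabetes"]

cph = CoxPHFitter()
cph.fit(df[covariates + ["followup_years", "incident_ckd"]],
        duration_col="followup_years", event_col="incident_ckd")
cph.print_summary()

# Predicted 5-year risk of incident CKD for the first few individuals
surv_5y = cph.predict_survival_function(df[covariates].head(), times=[5])
print(1 - surv_5y)
```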
Post-operative sepsis in neonates is a serious problem that may be challenging to diagnose. It is standard practice at our Neonatal Intensive Care Unit (NICU) in Pakistan to perform routine blood cultures (BLCS) and C-reactive protein (CRP) to screen for post-operative sepsis. We aimed to review this practice to investigate its effectiveness at screening for post-operative sepsis. All neonates admitted to the NICU post-operatively at our center from 2017 to 2022 were included. Relevant clinical and demographic data were collected. The sensitivity of BLCS was calculated for each post-operative day (POD), and an ROC curve was constructed for overall CRP values to quantify their screening value. A total of 109 post-operative neonates were included (median gestational age 37 weeks, birth weight 2.4 kg). Thirteen (12.6%) developed sepsis. Only two patients had pathological microbe growth on POD 0 or 1, both having growth preoperatively. BLCS sensitivity increased significantly after POD 2. CRP performed poorly at discriminating post-operative sepsis (AUROC = 0.55). Routine BLCS performed immediately after surgery did not predict the onset of post-operative sepsis. CRP performed poorly at discriminating post-operative sepsis, likely due to physiologic inflammation in post-operative neonates. Unnecessary screening tests represent a significant financial burden in low- and middle-income countries (LMICs), with little clear clinical benefit.
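The two analyses described, per-day blood-culture sensitivity and an ROC curve for CRP, could look roughly like the sketch below; the file layout and column names are assumed, not taken from the study.

```python
# Illustrative sketch of the screening-value analyses (assumed data layout).
import pandas as pd
from sklearn.metrics import roc_auc_score, roc_curve

df = pd.read_csv("postop_neonates.csv")   # one row per culture/CRP draw (placeholder)

# Sensitivity of blood culture by post-operative day, among neonates with sepsis
septic = df[df["sepsis"] == 1]
sens_by_pod = septic.groupby("pod")["culture_positive"].mean()
print(sens_by_pod)

# AUROC of CRP for discriminating post-operative sepsis (the study reports 0.55)
auc = roc_auc_score(df["sepsis"], df["crp"])
fpr, tpr, thresholds = roc_curve(df["sepsis"], df["crp"])
print(f"AUROC = {auc:.2f}")
```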
Malnutrition is an important prognostic factor in patients with acute coronary syndrome (ACS), but it remains under-recognized in routine practice, particularly in Thailand, where local data are limited, and no population-specific nutritional screening tool has been validated. The Prognostic Nutritional Index (PNI) and Nutritional Risk Index (NRI) have been associated with mortality and major adverse cardiovascular events (MACEs) in patients with ACS, but their clinical usefulness in Thai patients remains unclear. This study aimed to determine the prevalence of malnutrition among patients with ACS undergoing coronary angiography (CAG) at Rajavithi Hospital, Bangkok, Thailand, and to assess the clinical usefulness of PNI and NRI in this setting. The secondary objective was to evaluate 1-year all-cause mortality and the occurrence of MACEs according to nutritional status. This study included 244 adult patients with ACS who were admitted between January 2023 and December 2024, underwent CAG, and completed a 1-year follow-up. Nutritional status was assessed using PNI and NRI, and categorized as severe, moderate, or no malnutrition. The primary outcome was 1-year all-cause mortality, while the secondary outcome was MACEs, defined as a composite of cardiovascular death, non-fatal myocardial infarction, non-fatal ischemic stroke, hospitalization for heart failure, and hospitalization for unstable angina. Associations between nutritional status and outcomes were examined using logistic regression. According to PNI, 43.8% of patients were malnourished, including 27.0% with severe malnutrition and 16.8% with moderate malnutrition. In contrast, NRI classified 99.6% of patients as severely malnourished. The 1-year all-cause mortality rate was 28.3%, and the MACE rate was 28.7%. Based on PNI, severe and moderate malnutrition were associated with higher mortality than no malnutrition (62.1% and 31.7% vs 10.9%, respectively). Severe malnutrition was associated with 13.34-fold higher odds of death (odds ratio [OR] 13.34; 95% CI: 6.41-27.71), while moderate malnutrition was associated with 3.78-fold higher odds (OR 3.78; 95% CI: 1.61-8.82). Severe and moderate malnutrition were also associated with higher odds of MACEs (OR 2.65; 95% CI: 1.38-5.06 and OR 2.89; 95% CI: 1.36-6.11, respectively). Malnutrition was common among Thai patients with ACS undergoing CAG and was strongly associated with adverse 1-year outcomes. Compared with NRI, PNI provided more clinically meaningful stratification in this cohort. Although formal comparative performance analyses were not performed, PNI may be a practical tool for nutritional risk assessment in routine ACS care.
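A minimal sketch of the analysis logic follows, computing PNI and estimating odds ratios by logistic regression. The PNI formula and malnutrition cut-offs shown are the commonly cited ones and may not match this study's exact definitions; column names are placeholders.

```python
# Hypothetical sketch: PNI categorization and odds ratios for 1-year mortality.
# PNI = 10 x albumin (g/dL) + 0.005 x lymphocyte count (/mm3); cut-offs assumed.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("acs_cohort.csv")   # albumin in g/dL, lymphocytes per mm3 (assumed)
df["pni"] = 10 * df["albumin"] + 0.005 * df["lymphocytes"]
df["malnutrition"] = pd.cut(df["pni"], bins=[-np.inf, 35, 38, np.inf],
                            labels=["severe", "moderate", "none"])

# Unadjusted logistic model; the study may have used different covariate adjustment
model = smf.logit("death_1y ~ C(malnutrition, Treatment(reference='none'))",
                  data=df).fit()
odds_ratios = np.exp(model.params)        # OR per category vs no malnutrition
conf_int = np.exp(model.conf_int())       # 95% confidence intervals
print(pd.concat([odds_ratios, conf_int], axis=1))
```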
Steatotic liver disease is increasingly recognised in people living with HIV. Non-invasive fibrosis screening strategies commonly rely on the FIB-4 score, but individuals with intermediate values (1.3-2.67) fall into a diagnostic grey zone where additional risk stratification may be required. Identifying simple and effective approaches to guide further assessment is particularly important in routine HIV care settings. We analysed two independent cohorts of people living with HIV undergoing transient elastography (TE): a development cohort from London (n = 229) and an external validation cohort from Madrid (n = 188). Among individuals with intermediate FIB-4 scores, we evaluated the performance of established non-invasive scores, including APRI, and several machine learning models for predicting liver stiffness thresholds associated with significant fibrosis (≥7 kPa and ≥8 kPa). Model performance was assessed using sensitivity, specificity, and predictive values. In both cohorts, the prevalence of significant fibrosis was low. APRI demonstrated consistently high sensitivity and strong negative predictive value for identifying individuals without significant fibrosis. Machine learning models showed modest discrimination and tended to favour negative predictions, reflecting the low prevalence of fibrosis in this screening population. Across models, no machine learning approach demonstrated clear improvement over APRI in identifying individuals at risk of fibrosis. In people living with HIV with intermediate FIB-4 scores, APRI may provide a simple and effective strategy to further stratify fibrosis risk in routine clinical practice. In this real-world screening population, machine learning models did not outperform established non-invasive scores.
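For reference, the two index scores mentioned above can be computed as follows; these are the standard published formulas rather than anything specific to this paper, and the example values are purely illustrative.

```python
# Standard FIB-4 and APRI calculations (not study-specific code); units assumed.
import numpy as np

def fib4(age_years, ast_u_l, alt_u_l, platelets_10e9_l):
    """FIB-4 = (age x AST) / (platelets x sqrt(ALT)); grey zone roughly 1.3-2.67."""
    return (age_years * ast_u_l) / (platelets_10e9_l * np.sqrt(alt_u_l))

def apri(ast_u_l, ast_uln_u_l, platelets_10e9_l):
    """APRI = (AST / upper limit of normal) / platelets x 100."""
    return (ast_u_l / ast_uln_u_l) / platelets_10e9_l * 100

# Example: a person in the FIB-4 grey zone with a reassuring (low) APRI
print(fib4(52, 34, 30, 210))   # ~1.54, intermediate
print(apri(34, 40, 210))       # ~0.40, below a commonly used cut-off
```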
The albumin-to-globulin ratio (AGR) is an inexpensive and routinely available laboratory index that may reflect nutritional status and systemic immune activity. However, its association with Hashimoto's thyroiditis (HT) and thyroid autoantibodies has not been well characterized in population-based samples. We investigated the associations of AGR with HT and thyroid autoantibody levels in US adults. We analyzed 10,423 participants from the 2007-2012 National Health and Nutrition Examination Survey (NHANES). Multivariable logistic regression was used to evaluate the association between AGR and HT. Smooth curve fitting was applied to assess the shape of the associations. We examined relationships between AGR and thyroid autoantibodies and performed sensitivity and subgroup analyses to assess robustness. In fully adjusted models, each one-unit increase in AGR was associated with a lower likelihood of HT (OR = 0.70; 95% CI: 0.54-0.91; p < 0.05). Curve-fitting analyses supported a linear inverse association between AGR and HT. AGR was also inversely associated with thyroid peroxidase antibody (TPOAb) levels (β = -11.33; 95% CI: -17.91 to -4.75; p = 0.0007), whereas the association with thyroglobulin antibody (TgAb) was not statistically significant (β = -4.23; 95% CI: -10.88 to 2.43; p = 0.2129). Sensitivity and subgroup analyses yielded consistent results. Lower AGR was associated with higher odds of HT and higher TPOAb levels in a nationally representative US sample. As a routinely available composite index, AGR may be useful for HT risk assessment and early identification at the population level. These findings are observational and hypothesis-generating; prospective and mechanistic studies are warranted to confirm temporality and clarify underlying pathways.
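A minimal sketch of the exposure definition and the main model is given below; the AGR formula is the usual albumin / (total protein − albumin), and the simplified unweighted logistic model stands in for the study's fully adjusted, survey-weighted analysis. Variable names are placeholders for the NHANES fields.

```python
# Hypothetical sketch: AGR computed from routine chemistry and a simplified
# logistic model for Hashimoto's thyroiditis (survey weights omitted).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("nhanes_thyroid.csv")   # placeholder extract of 2007-2012 NHANES
df["agr"] = df["albumin"] / (df["total_protein"] - df["albumin"])

model = smf.logit("hashimoto ~ agr + age + sex + bmi", data=df).fit()
print(np.exp(model.params["agr"]))   # odds ratio per one-unit increase in AGR
```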
During the COVID-19 pandemic, nasal swabs became a routine test for infection. This non-invasive procedure is typically regarded as benign and rarely associated with complications. However, physical stimulation of the nasopharynx and trigeminal nerve can trigger the trigeminocardiac reflex (TCR), a vagally mediated cardioinhibitory response that can lead to sinus arrest. In this study, a clinical trial participant experienced a 3.4 s episode of sinus arrest which was captured on electrocardiogram telemetry during a routine COVID-19 nasal swab test. Diagnosis of the TCR was based upon the plausibility and reversibility criteria, which are hallmarks of this condition. Healthcare providers need to be aware of this phenomenon as well as other potential complications when performing nasopharyngeal swabs.
Antibiotic susceptibility testing (AST) results are needed more rapidly to support antimicrobial stewardship and improve patient outcomes. Diagnostic microbiology laboratories receive hundreds of urine samples daily from patients with suspected urinary tract infection (UTI) in both community and inpatient settings. Bacteriostatic boric acid preserves microbial contents during transport but may interfere with rapid AST methods. This study aimed to assess the accuracy of rapid microcapillary direct-from-urine (RMD) AST with urine from patients with suspected UTI, and to determine whether RMD AST is affected by boric acid. The overall accuracy of RMD AST was assessed with 352 diagnostic remnant urine samples collected with boric acid, for seven first-line antibiotics (ampicillin, amoxicillin/clavulanic acid, trimethoprim, nitrofurantoin, ciprofloxacin, cefalexin and cefoxitin). A further 90 urine samples were tested in duplicate with or without the addition of boric acid. RMD AST showed a concordance with the reference method of 572/590 bacteria/antibiotic combinations (96.95%) for urine samples containing a single organism. The mean time to AST result was 5.85 h. When duplicate samples with or without boric acid were directly compared, there was a categorical agreement of 158/160 (98.75%). The overall high accuracy of RMD AST for determining antimicrobial susceptibility to seven first-line antibiotics for UTI shows this method can deliver rapid results, without requiring additional processing, for urine samples routinely collected with boric acid from patients with suspected UTI. The close agreement between duplicates with or without boric acid confirms that this rapid direct method is unaffected by collection in bacteriostatic boric acid.
Oxidative stress, inflammation, and endothelial dysfunction contribute to perioperative morbidity following total knee arthroplasty (TKA). Vitamin C (ascorbic acid), an essential antioxidant cofactor, has been proposed to mitigate these pathways. This systematic review evaluates current evidence on perioperative vitamin C supplementation in TKA and its effects on pain, inflammation, blood loss, and postoperative recovery. A systematic search of PubMed, Embase, Scopus, and Web of Science was conducted from database inception through July 2025, following PRISMA 2020 guidelines. Randomized controlled trials (RCTs) assessing perioperative vitamin C use in primary TKA were included. Methodologic quality was appraised using the Cochrane Risk-of-Bias tool (RoB 2). Owing to heterogeneity in dosing, timing, and outcomes, results were synthesized narratively. Ten RCTs involving 1,364 patients met the inclusion criteria. Vitamin C administration varied substantially in dose, route, and timing. Across studies, findings for postoperative pain, inflammatory markers, blood loss, and functional recovery were inconsistent. Several trials reported numerical trends favoring vitamin C, but most outcomes lacked statistical significance or were supported by a single study. Evidence for reduced complex regional pain syndrome (CRPS) was more consistent but still limited by small sample sizes. No major safety concerns were identified. Current evidence does not support a definitive benefit of perioperative vitamin C supplementation in TKA. While isolated studies suggest potential reductions in inflammation, blood loss, or pain, these findings are not consistent across trials and often lack statistical significance. Larger, methodologically sound RCTs with standardized dosing protocols are needed before recommending vitamin C as a routine perioperative supplement.
Feeding difficulties can significantly disrupt family routines and contribute to caregiver stress, yet limited research has explored the lived experiences of Australian caregivers navigating this challenge. The objective of this exploratory qualitative study was to investigate the experiences of six Australian caregivers raising children aged 2-18 years with feeding difficulties. Six semi-structured interviews were conducted and analysed using framework analysis. Four themes emerged: (1) caregiver concern for child wellbeing, including nutrition, emotional health, and social functioning; (2) impacts on the family, including disrupted mealtimes, strained relationships, and intergenerational tension; (3) challenges and strategies used to manage feeding difficulties; and (4) needs for support, highlighting service gaps, barriers to care, and mixed views on telehealth. Findings highlight the emotional and logistical burden of feeding difficulties on families and the importance of responsive, family-centred approaches. These findings provide preliminary qualitative insight to inform future research and service development. Health professionals should consider both the psychosocial context and practical needs of caregivers when supporting feeding concerns.
Enteral nutrition is commonly practiced for ischemic stroke survivors with dysphagia. In Eastern Asia, nasogastric and oro-esophageal tubes are the mainstream options. However, there is a lack of rigorous clinical evidence on the effects of these two feeding methods on swallowing-related rehabilitation outcomes and clinical relevance. This study is clinically oriented and aims to assess the effect of nasogastric versus oro-esophageal tube feeding on the degree and speed of dysphagia improvement and on aspiration symptoms. This multicenter randomized controlled trial will include 422 ischemic stroke patients with dysphagia who require tube feeding. Stratified randomization will be performed to assign participants 1:1 to the oro-esophageal or nasogastric groups. All participants will receive 15 days of routine rehabilitation care and nasogastric or oro-esophageal feeding, according to their group assignment. The primary outcome is dysphagia severity assessed using the Dysphagia Outcome and Severity Scale (DOSS). The secondary outcomes include time to improvement of one level from the baseline DOSS, time to oral intake, accumulation of secretions assessed using the Murray Secretion Scale, pharyngeal residue after swallowing assessed using the Yale Pharyngeal Residual Severity Rating Scale, and airway protection assessed using the Penetration-Aspiration Scale. Aspiration symptoms will be monitored for 6 weeks. This study aims to provide evidence-based support for the comprehensive effects of tube feeding on swallowing-related rehabilitation outcomes. ClinicalTrials.gov, identifier NCT07386834.
Oceanic submesoscale currents dominate the vertical exchanges of heat, biological nutrients and carbon between the shallow and the deep ocean and strongly influence the lateral dispersion of biogeochemical tracers and pollutants. Observing these surface intensified currents, however, has been a long-standing challenge due to their small scales and rapid evolution. Here we introduce Geostationary Ocean Flow (GOFLOW), a deep learning framework that takes advantage of geostationary satellites' contiguous sequences of thermal imagery to produce hourly, high-resolution surface velocity fields that capture submesoscale circulations. Our approach does not assume simplified dynamical balances and inherently filters internal wave noise, both of which limit state-of-the-art satellite altimetry. Applying GOFLOW to the Gulf Stream, we provide satellite-based measurements of submesoscale current statistics, revealing characteristic asymmetries in vorticity and divergence previously documented only in high-resolution circulation models. This ability to routinely map the ocean's energetic submesoscale currents provides a transformative data source to advance Earth system forecasting, to mitigate ocean pollution, to monitor marine ecosystems and to reduce climate model uncertainties.
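As an illustration of the submesoscale statistics referred to above, the following sketch computes Coriolis-normalized vorticity and divergence from a gridded surface velocity field such as a GOFLOW output; the grid spacing, arrays, and latitude are placeholders.

```python
# Sketch: Rossby-number-like vorticity and divergence from a gridded (u, v) field.
# Rows are assumed to run north (y), columns east (x); values are illustrative.
import numpy as np

def vorticity_divergence(u, v, dx, dy, lat_deg=35.0):
    """Return relative vorticity and divergence, normalized by the Coriolis parameter f."""
    f = 2 * 7.2921e-5 * np.sin(np.deg2rad(lat_deg))   # Coriolis parameter (1/s)
    dudy, dudx = np.gradient(u, dy, dx)               # gradients along y then x
    dvdy, dvdx = np.gradient(v, dy, dx)
    zeta = (dvdx - dudy) / f      # vorticity; positive skew is the asymmetry noted above
    delta = (dudx + dvdy) / f     # divergence; negative skew at submesoscale
    return zeta, delta

# Toy example on a 1 km grid
u = np.random.randn(64, 64) * 0.3
v = np.random.randn(64, 64) * 0.3
zeta, delta = vorticity_divergence(u, v, dx=1000.0, dy=1000.0)
```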
Background Shoulder pain is a common musculoskeletal complaint with diverse etiologies involving rotator cuff, labral, and periarticular structures. Accurate imaging is essential for diagnosis and management. Ultrasonography (USG) and magnetic resonance imaging (MRI) are widely used modalities, each with distinct advantages and limitations. The aim of this study was to compare the diagnostic performance of USG and MRI in evaluating shoulder pathologies, particularly rotator cuff tears, using surgical or arthroscopic findings as the reference standard where available, and to assess their complementary roles in clinical practice. Methods This prospective observational study included 75 patients presenting with shoulder pain, restriction of movement, or instability. All patients underwent USG and MRI of the affected shoulder using standardized protocols. USG and MRI examinations were interpreted independently by experienced musculoskeletal radiologists. Readers were blinded to the findings of the other imaging modality at the time of reporting; however, relevant clinical history regarding the symptomatic side was available, consistent with routine diagnostic practice. Radiologists were not blinded to surgical outcomes at the time of final correlation, as surgical findings were used as the reference standard in the operative subgroup. Imaging findings were compared between modalities and correlated with surgical or arthroscopic findings where available. Diagnostic performance parameters, including sensitivity and specificity, were calculated in the surgically verified subgroup (n=34). Interobserver agreement for USG and MRI interpretations was not formally assessed. Statistical analysis was performed using the chi-square test, with p<0.05 considered statistically significant. Results The study population showed a male predominance (52/75; 69.3%) and greater involvement of the right shoulder (54/75; 72.0%). Trauma was the most common etiological factor, observed in 44/75 patients (58.7%). Bursitis or joint effusion (32/75; 42.7%) and rotator cuff tears (29/75; 38.7%) were the most frequently detected pathologies. MRI demonstrated higher sensitivity than USG for detecting overall rotator cuff tears (96.5% vs. 87.0% sensitivity) and full-thickness rotator cuff tears (100% vs. 88.0% sensitivity). MRI also showed higher specificity for rotator cuff tear detection (90.0% vs. 82.0%). Additionally, MRI was superior in detecting labral and instability-related lesions. However, the difference in diagnostic performance between USG and MRI was not statistically significant (χ²=17.07, p=0.105). Conclusion USG is a reliable first-line imaging modality for evaluating common shoulder pathologies, while MRI provides superior characterization of complex soft-tissue and intra-articular lesions. A combined imaging approach optimizes diagnostic accuracy and clinical decision-making.
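A minimal sketch of the diagnostic-performance and chi-square calculations follows, using made-up toy data in place of the surgically verified subgroup.

```python
# Illustrative sketch only: sensitivity/specificity against a reference standard
# and a chi-square comparison; arrays are toy data, not the study's findings.
import numpy as np
from scipy.stats import chi2_contingency

def sens_spec(test_positive, disease_present):
    """Sensitivity and specificity of a binary test against a reference standard."""
    test_positive = np.asarray(test_positive, dtype=bool)
    disease_present = np.asarray(disease_present, dtype=bool)
    tp = np.sum(test_positive & disease_present)
    tn = np.sum(~test_positive & ~disease_present)
    fp = np.sum(test_positive & ~disease_present)
    fn = np.sum(~test_positive & disease_present)
    return tp / (tp + fn), tn / (tn + fp)

# Toy findings (1 = tear reported / confirmed at surgery)
surgery = np.array([1, 1, 1, 1, 0, 0, 0, 0, 1, 0])
mri     = np.array([1, 1, 1, 1, 0, 0, 0, 1, 1, 0])
usg     = np.array([1, 1, 0, 1, 0, 0, 1, 1, 1, 0])
print("MRI sens/spec:", sens_spec(mri == 1, surgery == 1))
print("USG sens/spec:", sens_spec(usg == 1, surgery == 1))

# Chi-square test on a 2x2 table of modality vs. detection
table = np.array([[mri.sum(), len(mri) - mri.sum()],
                  [usg.sum(), len(usg) - usg.sum()]])
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")
```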
Left uterine displacement is presumed to decrease vasopressor requirements during cesarean delivery due to decreased aortocaval compression, but studies have been equivocal. Our primary aim was to determine the median effective dose (ED50) of prophylactic phenylephrine infusion, which remains unknown, in two positions (15° left lateral tilt versus supine) during elective cesarean delivery under combined spinal-epidural anesthesia. We hypothesized that a 15° left tilt would reduce phenylephrine requirements. Eighty pregnant women were randomly allocated to be positioned either at a 15° left tilt (group L) or supine (group S) during elective cesarean delivery. Prophylactic phenylephrine infusion was started immediately after intrathecal injection. In each group, the first patient received a phenylephrine infusion at 0.5 µg/kg/min. Each subsequent patient received an infusion 0.05 μg/kg/min above or below the preceding patient's dose, depending on that patient's response. ED50 values for phenylephrine were calculated using the up-down sequential methodology and compared using relative median potency ratios. The ED50 of the phenylephrine infusion was 0.33 µg/kg/min (95% confidence interval (CI), 0.23 to 0.39 µg/kg/min) in group L and 0.30 µg/kg/min (95% CI, 0.22 to 0.37 µg/kg/min) in group S. The relative median potency for phenylephrine in group L vs. group S was 1.06 (95% CI, 0.86 to 1.45). A 15° left tilt did not significantly alter phenylephrine requirements; therefore, routine use of tilt solely to reduce vasopressor need may not be necessary.
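The up-and-down allocation and a simple ED50 estimate from crossover midpoints can be sketched as follows; the dose sequence and responses are illustrative, not trial data, and the Dixon-Mood-style estimator shown is only one of several ways an ED50 can be derived.

```python
# Illustrative up-and-down sequence: step the infusion down 0.05 ug/kg/min after an
# effective dose and up after a failure; ED50 taken as the mean of crossover midpoints.
import numpy as np

doses = np.array([0.50, 0.45, 0.40, 0.35, 0.30, 0.35, 0.30, 0.25, 0.30, 0.35])
effective = np.array([1, 1, 1, 1, 0, 1, 1, 0, 0, 1])   # 1 = hypotension prevented

# Crossovers are consecutive patients with opposite outcomes
midpoints = [(doses[i] + doses[i + 1]) / 2
             for i in range(len(doses) - 1) if effective[i] != effective[i + 1]]
print(f"ED50 estimate: {np.mean(midpoints):.2f} ug/kg/min")
```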
The medical intensive care unit (ICU) can be a challenging place for patients to get adequate sleep. Poor sleep quality before ICU admission could have significant consequences for sleep during ICU admission and for critical illness recovery. The objective of this study was to characterize the pre-admission sleep of patients admitted to the medical ICU using sleep questionnaires obtained during a larger randomized controlled trial. Enrolled participants were interviewed regarding their habitual sleep routines prior to their ICU admission via the Pittsburgh Sleep Quality Index (PSQI) and the Epworth Sleepiness Scale (ESS). Demographic and medical information were also collected. We enrolled 40 medically critically ill participants who reported a mean sleep duration (± standard deviation, SD) of 6.9 ± 1.8 hours and a mean sleep latency of 34.2 ± 36.5 minutes. The mean global PSQI score was 7.7 ± 4.0, with 61% of patients having a global PSQI score >5. The mean ESS score was 9.5 ± 5.1, with 36% of patients reporting an abnormal score >10. Patients admitted to the ICU appear to have poor quality habitual sleep prior to hospitalization. The PSQI and ESS tools may provide insight into pre-ICU sleep in selected patient populations. These tools could be leveraged to expand the evidence base regarding pre-hospital sleep in critically ill patients and potentially inform interventions to improve sleep in the ICU.
Hepatocellular carcinoma (HCC) remains a major cause of cancer-related mortality, and transarterial chemoembolization (TACE) is the standard therapy for intermediate-stage disease. However, response to TACE is variable, and reliable quantitative imaging biomarkers are needed to support early treatment decision-making. This study aimed to evaluate the predictive value of the delayed percentage attenuation ratio (DPAR) measured from pre-TACE multiphasic computed tomography (CT) in forecasting early therapeutic response. A retrospective cross-sectional study was conducted involving patients with a definitive diagnosis of HCC who underwent their first TACE session and had complete multiphasic CT imaging before and after treatment. Quantitative washout parameters (DPAR, absolute washout [WOAbs], and relative washout [WORel]) were measured using standardized region of interest (ROI) placement by three radiologists. Treatment response was assessed four to six weeks post-TACE based on modified Response Evaluation Criteria in Solid Tumors (mRECIST) criteria and classified into responders and non-responders. Diagnostic performance was evaluated using receiver operating characteristic (ROC) analysis, and interobserver reliability was assessed using intraclass correlation coefficient (ICC) and Cohen's κ. A total of 49 HCC patients were included and analyzed. Responders demonstrated significantly higher DPAR values compared with non-responders (median 134.5 vs 113.0; p<0.001). DPAR showed the strongest discriminative performance with an area under the curve (AUC) of 0.898, outperforming WOAbs (AUC 0.689) and WORel (AUC 0.704). The optimal DPAR threshold of ≥120.5 provided 84.4% sensitivity and 88.2% specificity to predict early post-TACE treatment response. Interobserver reliability was excellent for all washout parameters (ICC 0.98-0.99), and agreement for mRECIST classification was also excellent (κ=0.867). In conclusion, pre-TACE DPAR is a robust and reproducible quantitative imaging biomarker that accurately predicts early response to TACE in HCC. A threshold value of ≥120.5 may assist in treatment planning and patient selection in routine clinical practice.
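A minimal sketch of the threshold analysis, an ROC curve for DPAR with the cut-off chosen by the Youden index, is given below; the data file and column names are placeholders rather than the study's measurements.

```python
# Hypothetical sketch: ROC analysis of DPAR for mRECIST response and Youden-index cut-off.
import pandas as pd
from sklearn.metrics import roc_auc_score, roc_curve

df = pd.read_csv("hcc_tace.csv")          # dpar per lesion, responder 0/1 (assumed layout)
auc = roc_auc_score(df["responder"], df["dpar"])

fpr, tpr, thresholds = roc_curve(df["responder"], df["dpar"])
youden = tpr - fpr
best = youden.argmax()
print(f"AUC = {auc:.3f}")
print(f"optimal DPAR cut-off ~ {thresholds[best]:.1f} "
      f"(sensitivity {tpr[best]:.1%}, specificity {1 - fpr[best]:.1%})")
```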