This study investigated perioperative Th1/Th2 cytokine polarization and biochemical changes in cats undergoing ovariohysterectomy under three different anesthesia protocols. Twenty-four healthy mixed-breed female cats were randomly allocated into three groups (n = 8/group): propofol-isoflurane (Group I), xylazine-ketamine hydrochloride (Group II), and xylazine-isoflurane (Group III). Venous blood samples were collected preoperatively (0 h) and at 6 and 12 h postoperatively. Serum concentrations of tumor necrosis factor (TNF)-α, interferon (IFN)-γ, and interleukins (IL)-2, -4, -5, and -10 were quantified using feline-specific ELISA kits, and biochemical variables were measured with an automated analyzer. In Group I, TNF-α increased postoperatively (12.64 ± 2.28 ng/mL) compared to preoperative levels (6.79 ± 0.74 ng/mL; p < 0.05). In Group II, both IFN-γ and IL-2 levels decreased after the operation compared to pre-op 0 h (p < 0.05). Group III showed a postoperative decrease in IL-4 and IL-10 (124.21 ± 23.45 pg/mL and 106.05 ± 5.25 ng/mL, respectively) compared to pre-op 0 h (197.28 ± 14.40 pg/mL and 113.14 ± 11.19 ng/mL, respectively; p < 0.05). IL-5 concentrations did not change significantly in any group. Postoperative biochemical changes were transient and remained within reference ranges, with no clinical complications observed. These findings indicate that anesthesia protocols are associated with distinct short-term perioperative immune profiles in cats undergoing elective ovariohysterectomy: the xylazine-ketamine combination induced the greatest cytokine fluctuations, whereas xylazine-isoflurane best preserved Th1/Th2 balance. Anesthetic selection may therefore help minimize perioperative immune modulation, and the xylazine-isoflurane protocol may be a preferable option for feline ovariohysterectomy, particularly in patients where immune stability is clinically relevant. The observed differences should nonetheless be interpreted as preliminary and hypothesis-generating, and larger studies with standardized pain assessment and longer follow-up are warranted to clarify clinical implications.
Alkaline phosphatase (ALP) activity in gingival crevicular fluid (GCF) reflects localized periodontal and alveolar bone remodeling during orthodontic tooth movement. Although ALP changes have been investigated during leveling and alignment or active orthodontic movement, the biological impact of tooth extraction timing relative to orthodontic force application remains insufficiently characterized. This randomized controlled trial aimed to compare ALP activity in GCF between different extraction protocols in maxillary canines over a 6-week period. The primary outcome was the between-group difference in ALP activity across repeated time points. Sixty patients with moderate maxillary crowding were randomly allocated, using concealed allocation, into three equal groups: Group A (leveling and alignment only with delayed extraction), Group B (extraction only), and Group C (simultaneous extraction with leveling and alignment). GCF samples were collected weekly from mesial and distal sites of maxillary canines from baseline to week six. ALP activity was quantified using a spectrophotometric method. Data distribution was assessed prior to analysis; because distributions were predominantly non-normal, non-parametric tests were primarily applied, including Friedman's test for intragroup comparisons and the Kruskal-Wallis test for intergroup comparisons, with appropriate post-hoc analyses. Distinct temporal patterns of ALP activity were observed across the three groups. Group B (extraction only) demonstrated the highest ALP activity during the early time points, particularly at weeks 1-3, while Group C showed intermediate responses and Group A exhibited consistently lower levels. At the first time point (T1), significant intergroup differences were observed. At mesial sites, Group C showed higher ALP activity than Group A (mean difference: 0.59 IU/L), followed by Group B (mean difference: 0.28 IU/L compared with Group A; p < 0.05). At distal sites, Group B demonstrated the highest ALP activity compared to Group A (mean difference: 1.00 IU/L; p < 0.01), with Group C showing intermediate values compared with Group A (mean difference: 0.73 IU/L). Tooth extraction was associated with increased ALP activity in GCF during the early phase of orthodontic treatment. Variations in ALP patterns between extraction protocols indicate that extraction timing modulates early biochemical responses in periodontal tissues.
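A minimal sketch of the nonparametric testing workflow described above, using scipy with simulated ALP values; the group sizes, means, and noise levels here are illustrative stand-ins, not trial data:

```python
# Sketch of the Friedman (intragroup, repeated time points) and
# Kruskal-Wallis (intergroup, single time point) tests on simulated ALP data.
import numpy as np
from scipy.stats import friedmanchisquare, kruskal

rng = np.random.default_rng(0)
weeks = 7  # baseline (T0) through week 6
# Simulated weekly ALP readings (IU/L), 20 patients per group x 7 time points
group_a = rng.normal(1.0, 0.3, size=(20, weeks))
group_b = rng.normal(1.5, 0.3, size=(20, weeks))
group_c = rng.normal(1.3, 0.3, size=(20, weeks))

# Intragroup comparison across repeated time points (Friedman's test)
stat, p = friedmanchisquare(*[group_b[:, t] for t in range(weeks)])
print(f"Friedman (Group B across weeks): chi2={stat:.2f}, p={p:.3f}")

# Intergroup comparison at a single time point, e.g. T1 (Kruskal-Wallis)
stat, p = kruskal(group_a[:, 1], group_b[:, 1], group_c[:, 1])
print(f"Kruskal-Wallis (three groups at T1): H={stat:.2f}, p={p:.3f}")
```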
Human biomonitoring (HBM) is crucial for evaluating exposure to diet-related contaminants, whose effects may pose substantial health risks. Saliva is recognized as a promising non-invasive biological matrix due to its ease of collection and potential to reflect both external and systemic exposure; however, its suitability for monitoring dietary hazardous compounds remains uncertain. This review aimed to assess the potential of saliva as a biomonitoring matrix for diet-related contaminants, identify compounds with robust diet-related associations, and highlight knowledge gaps. A systematic literature review of 104 studies was conducted to screen over 500 diet-related contaminants analyzed in saliva. Detailed information was extracted only for contaminants quantitatively measured in saliva, including concentration ranges, sample sizes, and analytical methods. Evidence of correlations with systemic concentrations, exposure pathways, and individual or lifestyle factors was compiled into a FAIR database, providing the most comprehensive resource to date for saliva-based biomonitoring and an integrated evaluation of saliva's potential. Only a limited subset of contaminant groups, including nitrite/nitrate, heavy metals, bisphenols, polycyclic aromatic hydrocarbons (PAHs), biogenic amines, pesticides, advanced glycation end products (AGEs), perchlorate, microplastics (MPs), parabens, and phthalates, have been quantitatively measured in saliva. Compounds such as nitrate, arsenic, AGEs, pesticides, and perchlorate demonstrate moderate to strong correlations between salivary and systemic levels, supporting saliva's potential to estimate exposure. Conversely, substances such as PAHs, MPs, phthalates, and parabens generally show weak or no correlation, reflecting recent or localised exposures rather than cumulative burden. Salivary composition is influenced by intrinsic and extrinsic factors, including diet, oral microbiota, physiology, and sampling conditions, resulting in high interindividual variability. Despite challenges such as low salivary concentrations and the lack of standardized collection protocols, saliva offers advantages for biomonitoring vulnerable populations, such as children and pregnant women. Harmonized collection procedures, validated sensitive methods, and predictive models accounting for interindividual variability and exposure context are urgently needed and could establish saliva as a reliable complementary or alternative matrix for assessing human exposure to dietary and environmental contaminants, enabling more accurate and ethical monitoring of vulnerable populations.
Heart failure remains a leading global cause of morbidity and mortality, with limited capacity for myocardial regeneration following infarction. Human induced pluripotent stem cell-derived cardiomyocytes (hiPSC-CMs) have become a promising therapeutic resource due to their scalability, differentiation potential, and immunologic adaptability. Engineered cardiac patches, three-dimensional constructs of hiPSC-CMs combined with supporting cells and scaffolds, offer a strategy to deliver organized myocardium directly to injured hearts, overcoming the limitations of cell injection therapies. This review synthesizes evidence from 2010 to early 2025, spanning rodent, porcine, and non-human primate models, as well as the first clinical trials of hiPSC-CM patches. We highlight recent advances in maturation protocols, vascularization strategies, and scaffold engineering, while discussing two distinct translational paradigms: short-term paracrine support versus long-term remuscularization under sustained immunosuppression. Preclinical studies show that engineered patches improve graft survival, with engraftment rates ranging from 5 to 15%, alongside enhanced vascularization, electrical coupling, and left ventricular function. In large animal models, patches scaled to clinically relevant sizes achieved durable integration and improved hemodynamics. Of note, arrhythmogenic risk was lower than in intramyocardial injection models. Early human trials in Japan and Germany confirm feasibility and safety, with preliminary evidence of efficacy, including improved left ventricular ejection fraction and improvement in NYHA functional class. Immunogenicity, graft maturation, and manufacturing scalability remain key hurdles, though innovations such as gene-edited hypoimmunogenic lines, multipronged maturation strategies, and bioreactor-based production offer potential solutions. Engineered hiPSC-CM cardiac patches represent a rapidly advancing frontier in regenerative cardiology. While early data indicate technical feasibility and measurable functional benefits, broader adoption will depend on resolving challenges of immune compatibility, arrhythmia prevention, and large-scale manufacturing. With coordinated progress in science, engineering, and regulation, cardiac patches may evolve into a transformative therapy for heart failure.
The use of extracorporeal membrane oxygenation (ECMO) has expanded for severe respiratory and circulatory failure. Acute kidney injury (AKI) is one of its most frequent complications: up to 85% of ECMO patients develop AKI, and approximately half of them require renal replacement therapy (RRT), making a comprehensive understanding of both therapies and their interaction essential for patient management. However, evidence to guide ECMO-specific RRT strategies remains limited. ECMO-related AKI arises from a complex interplay between patient and circuit factors that promote inflammation, endothelial injury, and tubular damage. Timing of RRT initiation in ECMO patients often relies on criteria used in the general critically ill population. Fluid overload is consistently associated with worse outcomes, and observational data suggest that earlier RRT, primarily to control fluid balance, may improve survival, although randomized trials in ECMO patients are lacking. All RRT modalities can theoretically be used, but continuous techniques are preferred. RRT can be delivered via a parallel circuit using a dedicated venous catheter or integrated into the ECMO circuit. Integrated configurations reduce the need for additional vascular access and may prolong filter lifespan but require careful pressure management and team expertise to avoid alarms, hemolysis, and air embolism. Anticoagulation strategies must balance bleeding and thrombosis risk across both circuits; unfractionated heparin remains standard, while regional citrate anticoagulation can safely extend filter lifespan in selected patients, although data are lacking in patients under veno-arterial ECMO. RRT during ECMO is associated with higher short-term mortality and an increased burden of chronic kidney disease among survivors. The management of AKI in ECMO patients remains a major clinical challenge. While RRT is often required in this population, optimal strategies for its initiation, modality selection, and integration with ECMO circuits are still evolving. Current evidence underscores the need for individualized approaches based on patient characteristics. Future research should focus on defining standardized protocols for RRT implementation in ECMO, the use of regional citrate anticoagulation, optimizing patient selection, and evaluating long-term renal and survival outcomes.
Leech therapy is used in diverse ways in the medical field, such as for flap salvage and for the treatment of osteoarthritis. While bleeding is a recognized adverse effect, severe coagulopathy is rare and underreported, especially when compounded by concurrent anticoagulant use. We present the case of a 76-year-old female undergoing leech therapy in combination with systemic heparin for free flap salvage following partial glossectomy and neck dissection. Shortly after initiating both therapies, the patient developed marked coagulopathy, evidenced by prolonged activated partial thromboplastin time (aPTT), elevated thrombin time, and persistent bleeding from the surgical site. Laboratory results suggested synergistic anticoagulant effects of heparin and hirudin, the potent direct thrombin inhibitor secreted by medicinal leeches. Despite discontinuation of systemic heparin, bleeding persisted until leech therapy was halted. The patient received 8 units of packed red blood cells (pRBCs) during the treatment course. Coagulation markers normalized after cessation of leech therapy, with no further bleeding events. This case highlights a potentially underrecognized risk of coagulopathy in patients undergoing combined leech and systemic anticoagulation therapy. The mechanistically distinct pathways of heparin and hirudin may act synergistically to impair both free and clot-bound thrombin activity. Given the absence of formal guidelines for coagulopathy monitoring in such cases, we propose daily assessment of coagulation markers, including aPTT and PT/INR, together with a one-time measurement of thrombin time and anti-Xa levels after treatment initiation. This report underscores the need for standardized protocols and further research to guide the safe implementation of leech therapy, particularly in patients receiving systemic anticoagulation.
Tuberculosis (TB) remains a significant public health challenge in Northeast Iran, yet longitudinal data evaluating regional transmission patterns following COVID-19 disruptions remain limited. This cross-sectional study, conducted from 2017 to 2023 at Qaem University Hospital, Mashhad, Iran, analyzed 14,572 patients with suspected TB using smear microscopy, culture, and PCR to characterize the test positivity rate (TPR), temporal shifts, and diagnostic challenges. We identified a 10.3% TPR (1,494 out of 14,572), with significant demographic disparities: females accounted for 51.7% of cases (OR = 0.74, 95% CI: 0.665-0.824; p < 0.001), although the association strength was weak (Cramér's V = 0.05). Adults aged 65 and older represented 51.6% of the cases. The COVID-19 pandemic led to a 33.3% decline in diagnoses from 2019 to 2020, with outpatient recovery lagging behind inpatient services. Time-series analysis identified a significant structural break in March 2020 (p < 0.001), statistically confirming the sharp decline in diagnoses due to the pandemic. Bronchoalveolar lavage showed the highest positivity rate at 54.7%, identifying 135 smear-negative/culture-positive cases. Seasonal peaks in spring (27.8%) are hypothesized to result from post-winter vitamin D troughs and social gatherings during Nowruz. These findings emphasize the importance of geriatric-focused screening, multimodal diagnostic protocols, and pandemic-resilient TB surveillance. Regional policies should focus on integrated respiratory screening and community-based interventions to reduce seasonal transmission.
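As a toy illustration of the kind of structural-break analysis described, the following sketches a Chow test on a synthetic monthly diagnosis series; the counts, trend, and break index (placed at roughly March 2020) are invented for illustration and are not the study's data or necessarily its exact method:

```python
# Chow test for a structural break in a monthly count series (synthetic).
import numpy as np
import statsmodels.api as sm
from scipy.stats import f

rng = np.random.default_rng(1)
n = 84                      # monthly observations, 2017-2023
t = np.arange(n)
break_idx = 38              # index 0 = Jan 2017, so 38 ~ March 2020
y = 200 - 0.3 * t + rng.normal(0, 10, n)
y[break_idx:] -= 60         # simulated pandemic-era level drop

X = sm.add_constant(t)
rss_pooled = sm.OLS(y, X).fit().ssr                       # one regime
rss_1 = sm.OLS(y[:break_idx], X[:break_idx]).fit().ssr    # pre-break
rss_2 = sm.OLS(y[break_idx:], X[break_idx:]).fit().ssr    # post-break

k = X.shape[1]
f_stat = ((rss_pooled - (rss_1 + rss_2)) / k) / ((rss_1 + rss_2) / (n - 2 * k))
p_value = f.sf(f_stat, k, n - 2 * k)
print(f"Chow F = {f_stat:.2f}, p = {p_value:.4g}")
```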
Tanzania has adopted artificial intelligence (AI)-assisted chest X-ray screening for tuberculosis (TB), including the use of CAD4TB version 6, which is registered by the Tanzania Medicines and Medical Devices Authority (TMDA). While GeneXpert, the practical reference standard, remains the primary bacteriological confirmatory test in routine practice, there is currently no established national threshold for CAD4TB use in either active case finding (ACF) or passive case finding (PCF) settings. This study evaluates the implementation and operational use of CAD4TB version 6 within mobile TB screening units in Tanzania and highlights challenges affecting its effective use. We conducted a retrospective analysis of screening data from 11,923 individuals collected from mobile clinics equipped with digital X-ray, CAD4TB version 6, and GeneXpert systems. Comparisons were made between manual chest X-ray interpretation, CAD4TB scores, and GeneXpert results within the subset of individuals who underwent confirmatory testing. The findings reveal substantial inconsistencies in screening workflows, including non-uniform use of CAD4TB prior to GeneXpert testing, missing radiological records, and deviations from intended protocols across sites. Descriptive analysis showed that CAD4TB scores generally aligned with GeneXpert-positive cases within the tested subset; however, due to the selective application of GeneXpert and incomplete data, these observations cannot be interpreted as measures of diagnostic accuracy. This study should be interpreted as an implementation and operational assessment of AI-assisted TB screening rather than a diagnostic accuracy or threshold-setting study. The findings highlight important gaps in protocol adherence, data completeness, and workflow standardization, underscoring the need for prospective, protocol-driven studies to establish validated national thresholds for CAD4TB use in Tanzania.
Sleep disorders include a range of common problems that affect the quality of sleep at night and, as a result, impact an individual's daily functioning. Treatment options vary from over-the-counter products to regulated pharmaceuticals. Melatonin and Tasimelteon are two compounds utilized for moderate to severe sleep disorders. This study developed and validated a sensitive, simple bioanalytical LC-MS/MS method for the measurement of Melatonin and Tasimelteon in spiked rat brain tissue. Chromatographic analyses were conducted in isocratic mode, with Citalopram selected as an appropriate internal standard. The Supelco Ascentis® Express Phenyl-Hexyl column was used as the stationary phase, and the mobile phase comprised 0.2% formic acid in a mixture of acetonitrile and water (65:35, v/v). Response surface methodology was applied: a Box-Behnken design was used to optimize the influence of three independent factors (acetonitrile %, formic acid %, and flow rate in mL/min) on the responses. The study focused on identifying the factors most strongly influencing chromatographic separation, namely the resolution between Tasimelteon and Melatonin, as well as the tailing factors of both. Analysis of variance provided the optimal conditions for separating the substances and identified the most influential factors. The analytical method was validated in accordance with the International Council for Harmonisation guideline M10 on bioanalytical method validation. The validated method was precise and linear over 55.00-1650 ng/mL and 20-600 ng/mL for Melatonin and Tasimelteon, respectively; the corresponding lower limits of quantification were 55 and 20 ng/mL. For Melatonin, intraday accuracy (recovery) ranged from 96.53% to 102.68%, and precision (expressed as relative standard deviation) ranged from 0.26% to 0.96%; inter-day accuracy ranged from 96.58% to 103.08%, and inter-day precision from 0.33% to 3.55%. For Tasimelteon, intraday accuracy ranged from 99.61% to 103.75%, and precision from 0.23% to 0.93%; inter-day accuracy was 99.37-103.87%, and inter-day precision was 1.04-2.11%. The total run time was 3 min, with retention times for Melatonin and Tasimelteon of 1.9 and 2.5 min, respectively, achieving effective chromatographic separation under optimum conditions. The Red-Green-Blue 12 (RGB12) whiteness score of the method was 79.2%.
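To make the design-of-experiments step concrete, the following sketch constructs a standard three-factor Box-Behnken design in coded levels and fits a full quadratic response-surface model; the simulated "resolution" response and all coefficient values are illustrative assumptions, not the paper's measurements:

```python
# Three-factor Box-Behnken design (12 edge points + 3 center points)
# and quadratic response-surface fit, on a simulated resolution response.
import itertools
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

pts = []
for i, j in itertools.combinations(range(3), 2):      # each factor pair
    for a, b in itertools.product((-1, 1), repeat=2):  # +/-1 combinations
        p = [0, 0, 0]
        p[i], p[j] = a, b
        pts.append(p)
pts += [[0, 0, 0]] * 3                                 # center points
design = pd.DataFrame(pts, columns=["acn", "fa", "flow"])  # coded levels

# Hypothetical response: resolution between the two analytes
rng = np.random.default_rng(2)
design["res"] = (2.0 + 0.4 * design.acn - 0.2 * design.fa - 0.3 * design.flow
                 - 0.25 * design.acn ** 2 + rng.normal(0, 0.05, len(design)))

# Full quadratic model: main effects, two-way interactions, squared terms
model = smf.ols(
    "res ~ acn + fa + flow + acn:fa + acn:flow + fa:flow"
    " + I(acn**2) + I(fa**2) + I(flow**2)",
    data=design,
).fit()
print(model.params.round(3))
```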
The study aimed to investigate the safety, optimal biologic dose, and preliminary efficacy of topical neostigmine ophthalmic solution (TNOS) in myasthenia gravis (MG) patients with ptosis. A dose-escalation study comprising cohorts of three patients in each step was performed. The first cohort received 1.5 mg/mL TNOS and was evaluated for safety and efficacy. Then, the subsequent cohorts received the selected dose of TNOS according to the safety and efficacy protocols to determine the optimal biologic dose. Vital signs, visual acuity, intraocular pressure, pupillary size, slit-lamp examinations, and abnormal ocular symptoms were recorded as safety measures. Vertical palpebral fissure and self-reported alleviation of ptosis were investigated as efficacy measures. No systemic or ocular adverse effects were observed in any participant. Vertical palpebral fissure increased by more than 2 mm in 83.3% and 33.3% of patients receiving 1.5 and 1.0 mg/mL TNOS, respectively. The 1.5 mg/mL TNOS significantly increased vertical palpebral fissure from baseline (maximal mean difference 3.30 mm; 95% confidence interval 2.32 to 4.27; p < 0.001). All participants in the 1.5 mg/mL group and 33.3% in the 1.0 mg/mL group reported ptosis alleviation. The 1.5 mg/mL TNOS was considered the optimal biologic dose for alleviating myasthenic ptosis without adverse events.
Single-paradigm, single-measure eye-tracking protocols have demonstrated utility in distinguishing between autistic and non-autistic children, although effect sizes and reproducibility vary. There is a need for brief, scalable digital behavioral biomarkers that integrate complementary information from multiple eye-tracking paradigms and achieve improved accuracy in combination. Seventy-four autistic and 63 non-autistic children aged 24-72 months performed five different eye-tracking paradigms (facial emotion processing, gaze-following, dynamic social versus geometric patterns, social interaction, and spinning) lasting 3.25 min in hospital or kindergarten settings. A broad set of fixation-based metrics was extracted from each paradigm. We compared the discrimination performance of single-paradigm models versus a combined multi-paradigm model using random forest (RF) classifiers. In the autistic group, symptom severity was assessed using standardized clinical measures. We further evaluated whether paradigms contributed complementary information, estimated the potential clinical value of multi-paradigm models, and conducted exploratory subgrouping analyses to examine whether eye-tracking-defined profiles aligned with symptom-based groupings. All individual paradigms distinguished between autistic and non-autistic children, but the combined multi-paradigm RF model performed best, achieving an AUC of 95% and an accuracy of 90%. Representational similarity analysis indicated that paradigms contributed partially distinct information rather than reflecting a single redundant dimension of social attention. Decision curve analysis demonstrated that the multi-paradigm model provided added net benefit across clinically relevant threshold probabilities compared with strategies based on treating all or no children as autistic. Clustering of eye-tracking features revealed three autistic subgroups with distinct visual preference profiles, whereas clustering based on clinical symptoms alone identified only two subgroups. Sex imbalance and group differences in developmental quotient may have confounded some effects. Models were only internally cross-validated in a single cohort, and the decision curve analysis relied on an assumed clinic prevalence, so external validation and testing in larger, more diverse and high-risk samples, including other neurodevelopmental conditions, are needed. A brief multi-paradigm eye-tracking battery yields robust case-control discrimination and non-redundant behavioral readouts, suggesting that it may complement symptom-based approaches and provide a scalable behavioral framework for future studies seeking to bridge molecular findings with observable autistic profiles.
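A compact sketch of the cross-validated RF classification and a decision-curve net-benefit helper; the synthetic feature matrix, group effect size, and CV setup are assumptions for illustration, not the study's pipeline:

```python
# Random-forest discrimination with cross-validated AUC, plus the standard
# decision-curve net-benefit formula, on synthetic fixation-based features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_predict, cross_val_score

rng = np.random.default_rng(3)
y = np.array([1] * 74 + [0] * 63)                     # autistic vs non-autistic
X = rng.normal(size=(len(y), 40)) + 0.4 * y[:, None]  # stacked paradigm metrics

clf = RandomForestClassifier(n_estimators=500, random_state=0)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
auc = cross_val_score(clf, X, y, cv=cv, scoring="roc_auc")
print(f"cross-validated AUC: {auc.mean():.2f} +/- {auc.std():.2f}")

def net_benefit(y_true, p_hat, pt):
    """Decision-curve net benefit at threshold probability pt."""
    pred = p_hat >= pt
    tp = np.sum(pred & (y_true == 1))
    fp = np.sum(pred & (y_true == 0))
    return tp / len(y_true) - fp / len(y_true) * pt / (1 - pt)

p_hat = cross_val_predict(clf, X, y, cv=cv, method="predict_proba")[:, 1]
for pt in (0.1, 0.2, 0.3):
    print(f"net benefit at pt={pt}: {net_benefit(y, p_hat, pt):.3f}")
```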
Cross-study inconsistencies in autism spectrum disorder (ASD) blood microRNA biomarker studies suggest that methodological heterogeneity may substantially limit reproducibility. We conducted an exploratory meta-analysis of publicly available ASD blood miRNA datasets from the Gene Expression Omnibus, applying rigorous inclusion criteria and standardized analytical protocols. Three datasets were included (GSE89596, GSE67979, GSE222046) comprising 614 miRNAs across 90 participants (45 ASD, 45 controls). Random-effects meta-analysis was performed using Hedges' g effect sizes, with comprehensive heterogeneity assessment and leave-one-dataset-out cross-validation. No miRNAs survived multiple testing correction (Benjamini-Hochberg FDR < 0.05), though seven candidate signals showed consistent evidence with unadjusted p < 0.01 and large effect sizes. These candidates demonstrated near-zero between-study heterogeneity and consistent directionality across validation analyses. Potential age-related and platform-related differences were observed, with near-zero correlation between adult and pediatric effect sizes (Kendall's τ = -0.022); however, these two sources of variability were fully confounded in the available data and could not be separated. Some miRNAs exhibited extreme between-study variability (I² > 80%), indicating substantial methodological differences. Cross-validation revealed that excluding the single adult dataset reduced sign consistency from 89.9% to 68.9%. Our findings suggest that age-related and methodological factors, including technical platform differences, may contribute to limited reproducibility in ASD blood miRNA research, and that blood-derived signals should be interpreted as potentially reflecting peripheral physiological states rather than central disease mechanisms. A supplementary cross-tissue analysis using post-mortem prefrontal cortex data (GSE59286; n = 45) provided direct empirical support for this interpretation: the majority of blood candidate miRNAs showed no corresponding expression in brain tissue, with only hsa-miR-29c-5p demonstrating directional concordance across both tissues. These findings suggest that age stratification, platform harmonization, and cross-tissue validation should be considered essential prerequisites for reliable ASD miRNA biomarker discovery, rather than optional refinements.
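For readers unfamiliar with the pooling step, the following is a minimal DerSimonian-Laird random-effects implementation for a single miRNA across three datasets; the g values and variances are invented for illustration, not the study's estimates:

```python
# DerSimonian-Laird random-effects pooling of Hedges' g with I^2.
import numpy as np
from scipy.stats import norm

g = np.array([0.85, 0.72, 0.91])   # Hedges' g per dataset (illustrative)
v = np.array([0.09, 0.11, 0.08])   # corresponding sampling variances

w = 1 / v                          # fixed-effect weights
q = np.sum(w * (g - np.sum(w * g) / np.sum(w)) ** 2)   # Cochran's Q
df = len(g) - 1
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (q - df) / c)      # between-study variance (DL estimator)
i2 = max(0.0, (q - df) / q) * 100  # heterogeneity I^2 in percent

w_re = 1 / (v + tau2)              # random-effects weights
g_pooled = np.sum(w_re * g) / np.sum(w_re)
se = np.sqrt(1 / np.sum(w_re))
p = 2 * norm.sf(abs(g_pooled / se))
print(f"pooled g = {g_pooled:.2f} (SE {se:.2f}), I2 = {i2:.1f}%, p = {p:.3g}")
```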
Peri-implantitis remains a challenging biofilm-associated condition, and the comparative effectiveness of adjunctive nonsurgical decontamination approaches is still unclear. This randomized, assessor-blinded, multi-arm clinical trial evaluated the clinical performance of different local submarginal decontamination protocols used alone or in combination with mechanical instrumentation in the management of early peri-implantitis lesions. Eighty implants from 26 patients were allocated to five treatment groups: mechanical instrumentation alone; mechanical instrumentation combined with chlorhexidine irrigation, with ozone application, or with glycine powder air abrasion; and glycine powder air abrasion as monotherapy. Clinical parameters, including probing pocket depth, bleeding on probing, and modified plaque index, were assessed at baseline, 3 months, and 6 months, and analyzed using linear mixed-effects models accounting for clustering at the patient level. All treatment modalities resulted in significant clinical improvements over 6 months. The greatest numerical reductions in probing depth, bleeding on probing, and plaque index were observed when mechanical instrumentation was combined with glycine powder air abrasion; however, no statistically significant differences were detected among treatment groups. These findings suggest that nonsurgical mechanical instrumentation, with or without adjunctive approaches, can provide meaningful short-term improvements in early peri-implantitis, while adjunctive glycine powder air abrasion may offer additional clinical benefits without demonstrating clear overall superiority.
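A sketch of the clustered analysis described, fitting a linear mixed-effects model with a random intercept per patient in statsmodels; the simulated data, per-patient arm assignment, implant counts, and effect sizes below are illustrative assumptions:

```python
# Mixed model for probing pocket depth over time, clustering on patient.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
rows = []
for pid in range(26):
    u = rng.normal(0, 0.5)               # patient-level random intercept
    arm = int(rng.integers(0, 5))        # treatment group (simplified: per patient)
    for implant in range(int(rng.integers(2, 5))):
        for month in (0, 3, 6):
            ppd = 5.5 - 0.25 * month - 0.03 * arm * month + u + rng.normal(0, 0.4)
            rows.append(dict(patient=pid, arm=arm, month=month, ppd=ppd))
df = pd.DataFrame(rows)

# Fixed effects for arm, time, and their interaction; random intercept per patient
m = smf.mixedlm("ppd ~ C(arm) * month", df, groups=df["patient"]).fit()
print(m.summary())
```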
Repetitive noxious stimulation can increase perceived pain intensity, a phenomenon known as Temporal Summation of Pain (TSP), thought to reflect central sensitization via neuronal "wind-up" in the spinal cord. As neuronal wind-up occurs only at stimulation frequencies above 0.2 Hz, we tested whether TSP likewise depends on stimulation frequency, comparing two frequencies using our recently developed TSP protocol in healthy volunteers. In a randomized crossover design, 30 healthy male participants (27 ± 4 years) underwent two experimental sessions involving 90 repetitive heat stimuli applied to the forearm at individually determined pain tolerance temperatures. Stimuli were delivered using a thermode at either 0.4 or 0.15 Hz. Pain intensity was rated using a computerized visual analog scale (0-100). TSP was assessed via a linear mixed-effects model (LMM), with pain intensity as the dependent variable. All participants finished the study. The LMM revealed a significant main effect of stimulation frequency (F(1, 540) = 14.20, p < 0.001), indicating frequency-dependent TSP. Pain intensity was higher at 0.4 Hz than at 0.15 Hz (β = 14.77, 95% confidence interval (CI) 6.87-22.68, p < 0.001). The presence of TSP at 0.4 Hz but not at 0.15 Hz aligns with previous findings on neuronal wind-up, supporting the dependence of TSP on wind-up-mediated central sensitization. These findings enhance our understanding of the physiological basis of TSP and offer a robust platform for future investigations into pain modulation and therapeutic intervention strategies.
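One plausible specification of the mixed model described; the abstract does not state the full fixed-effects structure, so the stimulus-index term and its interaction with frequency are assumptions:

```latex
\mathrm{Pain}_{ijk} = \beta_0 + \beta_1\,\mathrm{Freq}_j + \beta_2\,\mathrm{Stim}_k
  + \beta_3\,(\mathrm{Freq}_j \times \mathrm{Stim}_k) + u_{0i} + \varepsilon_{ijk},
\qquad u_{0i} \sim \mathcal{N}(0,\sigma_u^2), \quad
\varepsilon_{ijk} \sim \mathcal{N}(0,\sigma^2)
```

Here i indexes participants, j the session frequency (0.15 vs. 0.4 Hz), and k the stimulus number (1-90); under this specification, the reported β = 14.77 would presumably correspond to the frequency contrast.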
Discrimination is a psychosocial stressor that, through activation of stress-response pathways, may influence risk of adverse perinatal health outcomes. However, few studies have evaluated the association between discrimination and hypertensive disorders of pregnancy (HDP), and none have explored effect modification by race or paternal support, which may buffer the adverse effects of discrimination. We included 1,847 non-Hispanic Black and White participants from the Maternal and Infant Environmental Health Riskscape (MIEHR) study. Krieger's validated Experiences of Discrimination (EOD) scale was used to assess occurrence (yes/no) and frequency (once, 2-3, ≥ 4 times) of discrimination in nine situations. We created continuous scores representing total situations and frequency and also categorized (0, 1-2, ≥ 3) the situations score. Using electronic health records, we identified participants with HDP (hypertension in pregnancy, preeclampsia, eclampsia, and hemolysis, elevated liver enzymes, and low platelet count (HELLP) syndrome). We applied log-binomial regression to evaluate associations between experiences of discrimination and HDP, adjusted for maternal age, pre-pregnancy body mass index, income, and race. Analyses were stratified by race and self-reported paternal support (none/a little vs. good/excellent) as potential effect modifiers; due to small numbers, the situation score was dichotomized (0, ≥ 1) for these analyses. HDP occurred in 37.5% (n = 430) of Black and 26.0% (n = 182) of White participants. Compared with participants reporting no discrimination, the adjusted risk ratio (aRR) was 1.13 (95% CI: 0.96, 1.33) and 1.11 (95% CI: 0.94, 1.30) for those reporting discrimination in 1-2 and ≥ 3 situations, respectively. By setting, the strongest associations were observed for discrimination in getting a job (aRR: 1.22; 95% CI: 1.14, 1.43), by police (aRR: 1.21; 95% CI: 1.01, 1.44), and getting medical care (aRR: 1.17; 95% CI: 0.94, 1.44). Stratified analyses revealed the strongest associations between experiencing discrimination and HDP among Black participants and those with lower perceptions of paternal support. We found associations between discrimination and HDP among Black participants, who bear the greatest burden of HDP, as well as a potential buffering effect of paternal support. Future research is needed to investigate associations between discrimination and HDP and evaluate whether enhancing paternal support can mitigate the impact of discrimination on HDP risk.
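A minimal log-binomial sketch in statsmodels showing how an adjusted risk ratio of the kind reported above can be estimated; the data are synthetic and the covariates a reduced, illustrative set. Exponentiating the coefficient yields the aRR:

```python
# Log-binomial regression: Binomial family with a log link, so coefficients
# are log risk ratios (synthetic data, simplified covariates).
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n = 1847
df = pd.DataFrame({
    "discrim": rng.integers(0, 2, n),   # any reported discrimination (0/1)
    "age": rng.normal(28, 5, n),
    "bmi": rng.normal(27, 5, n),
})
risk = 0.25 * np.exp(0.12 * df["discrim"])   # simulated true risk of HDP
df["hdp"] = rng.binomial(1, risk)

m = smf.glm("hdp ~ discrim + I(age - 28) + I(bmi - 27)", data=df,
            family=sm.families.Binomial(link=sm.families.links.Log())).fit()
print(f"aRR for discrimination: {np.exp(m.params['discrim']):.2f}")
```

Log-binomial models can fail to converge when predicted risks approach 1; a modified Poisson model with robust standard errors is a common fallback.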
Childhood obesity is common and associated with adverse health outcomes. Fetal programming via epigenetics is a potential mechanism underlying its pathogenesis. We conducted an epigenome-wide association study (EWAS) on cord blood DNA to identify DNA methylation sites that may mediate the association of maternal body mass index (BMI) with offspring adiposity, using data from the Hyperglycemia and Adverse Pregnancy Outcome (HAPO) Study and its follow-up study (HAPO FUS). HAPO was a prospective, multicenter, international observational study that recruited pregnant women between 2000 and 2006 for glucose tolerance testing; cord blood was collected at delivery and newborn anthropometrics were obtained. HAPO FUS was conducted from 2013 to 2016, during which the 10-14-year-old offspring underwent measures of body composition, anthropometrics, and a fasting glucose tolerance test. Eligibility for HAPO FUS included gestational age at delivery ≥ 37 weeks without major neonatal malformations. There were 3,243 samples with cord blood DNA methylation (cbDNAm) data; mean child age at follow-up was 11.5 years. The present study used cord blood DNA for methylation profiling with the Infinium MethylationEPIC 850K BeadChip. Linear regression models were used to test the association between maternal BMI and cbDNAm levels, adjusting for population substructure, cell counts, and maternal and child covariates; multiple testing was accounted for using Bonferroni correction. Mediation analysis tested whether cbDNAm CpG sites associated (Bonferroni P < 0.05) with maternal BMI explained the known association between maternal BMI and child BMI. This analysis included 3,116 mother-child pairs, 48% White, 21% Asian, 19% Black, 12% Hispanic, and < 1% other race/ethnicity as self-identified by the mother; 36% of mothers and 28.3% of children had an overweight or obese BMI. Maternal BMI was associated with DNAm at 7 CpG sites following adjustment: cg00579423, cg07138793, cg12188424, cg19345626, cg20020844, cg02988288, and cg26974062. The two CpG sites located in the TXNIP gene have been identified in previous EWAS of glucose metabolism and diabetes. Cord blood DNA methylation at cg20020844 (SP6) mediated 1.2% of the association between maternal BMI and child BMI z-score. Exposure to maternal obesity in utero and subsequent differential methylation present at birth may contribute to the prenatal programming of childhood obesity.
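To make the association scan concrete, here is a toy per-CpG regression loop with Bonferroni correction; the dimensions, covariate stand-ins, and planted signal are illustrative and far smaller than the 850K array, and the real analysis additionally adjusted for cell counts and population substructure:

```python
# Per-CpG linear regression of methylation on maternal BMI with covariate
# adjustment and Bonferroni correction (synthetic toy-scale data).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
n, n_cpg = 3116, 1000                 # subjects x CpG sites (toy scale)
bmi = rng.normal(26, 5, n)
covars = rng.normal(size=(n, 3))      # stand-ins for maternal/child covariates
meth = rng.normal(size=(n, n_cpg))
meth[:, 0] += 0.02 * bmi              # one truly BMI-associated site

X = sm.add_constant(np.column_stack([bmi, covars]))
pvals = np.array([sm.OLS(meth[:, j], X).fit().pvalues[1]  # BMI coefficient
                  for j in range(n_cpg)])

bonf = np.minimum(pvals * n_cpg, 1.0)  # Bonferroni-adjusted p-values
print("Bonferroni-significant CpG indices:", np.where(bonf < 0.05)[0])
```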
Falls are the leading cause of accidental injury among older adults: 30% of community-dwelling adults aged 65 and over fall each year, with nearly half of these falls occurring outdoors. Outdoor falls are complex, understudied, and insufficiently addressed in current age-friendly city or walkability frameworks. This study aimed to build interdisciplinary consensus on risks, preventive actions, and barriers to fall prevention in outdoor public spaces through a Delphi process. A three-phase Delphi study was conducted with 64 participants in round 1, 60 in round 2, and 49 in round 3, including four expert groups: older adults who had fallen outdoors, health and research professionals, urban planners, and decision-makers (local and regional policy-makers, elected officials, and public-space managers involved in urban planning). Phase one collected open responses on risks, preventive actions (modification of physical layout, public-space management, and behavior-related factors), and barriers to these actions. Responses were synthesized using AI-assisted analysis with systematic human validation. In phases two and three, the relevance of 124 propositions was rated on a 10-point Likert scale. Consensus was defined as ≥ 70% of ratings ≥ 7/10 and an interquartile range ≤ 2.5. Consensus was reached for key intrinsic factors such as gait and balance impairments, visual and vestibular deficits, cognitive decline, and polypharmacy, as well as for environmental factors including irregular or inappropriate surfaces, obstacles, or signage, and crowding. Highly relevant preventive actions included integrating fall prevention into street and sidewalk design, training urban planning professionals, awareness campaigns, systematic maintenance, safer crossings, participatory co-design of public-space adaptations and urban design features with older adults and local stakeholders, and improved data monitoring through surveillance, mapping, and sharing of fall-related and environmental risk information. Main barriers were insufficient budgets, high costs, limited integration of fall prevention into planning priorities, and lack of evaluation of the impact of implemented actions. Outdoor fall prevention is a transversal challenge requiring the integration of public health and urban planning. This Delphi study highlights actionable priorities to embed fall prevention in local and national strategies, in particular in rapidly aging regions.
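The consensus rule stated above is directly computable; a small sketch applying the ≥ 70% of ratings ≥ 7/10 and IQR ≤ 2.5 criteria to hypothetical rating vectors:

```python
# Consensus check per proposition: >= 70% of ratings >= 7/10 and IQR <= 2.5.
import numpy as np

def reaches_consensus(ratings, prop_threshold=0.70, iqr_max=2.5):
    ratings = np.asarray(ratings)
    prop_high = np.mean(ratings >= 7)
    iqr = np.percentile(ratings, 75) - np.percentile(ratings, 25)
    return prop_high >= prop_threshold and iqr <= iqr_max

print(reaches_consensus([8, 9, 7, 7, 10, 6, 8, 9]))  # True: 87.5% high, IQR 2.0
print(reaches_consensus([3, 9, 5, 7, 10, 2, 8, 6]))  # False: only 50% high
```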
Duchenne muscular dystrophy (DMD) is a severe X-linked disorder marked by progressive muscle degeneration and regeneration, inflammation, and fibrosis. Cellular senescence has emerged as a potential driver of chronic muscle damage, yet its temporal dynamics and therapeutic relevance remain unclear. We analyzed senescent cell burden in skeletal and cardiac muscles of the DBA/2-mdx mouse model, which closely mimics features of human DMD. The senolytic combination of dasatinib and quercetin (D + Q) was administered during early or late disease phases to evaluate the impact of senescent cell clearance. Skeletal muscle strength was measured by grip strength and ex vivo force assays, while cardiac function was assessed by echocardiography. Fibrosis and senescence markers were quantified histologically, and transcriptional changes associated with senolysis were identified using bulk RNA sequencing (RNA-seq). In skeletal muscle, senescent cells appear and peak during early stages of disease progression (3-5 months), coinciding with high degeneration and regeneration activity, and then decline with age as fibrosis increases. In contrast, in the heart, senescent cells emerge at late stages of disease progression (around 12 months), correlating with cardiac fibrogenesis. Notably, senolytic intervention in DBA/2-mdx mice promotes a regenerative and antifibrotic gene signature in both tissues. However, the timing of senolytic therapy determines its efficacy: early treatment with D + Q reduces senescent cell burden, decreases fibrosis, and improves fiber size and contractile performance in skeletal muscle, while later treatment reduces cardiac senescence and fibrosis but does not improve skeletal muscle pathology. Cellular senescence is a dynamic and targetable feature in DMD, with tissue- and age-specific patterns. It represents a potentially modifiable therapeutic target, and temporally optimized senolytic strategies could serve as effective adjuncts to current and emerging DMD treatments.
Pediatric femoral neck fractures require stable fixation to avoid complications. It remains unclear whether fixation with the Proximal Humeral Internal Locking System (PHILOS) can serve as an alternative to cannulated screw fixation. The purpose of this study was to compare the biomechanical properties of PHILOS and cannulated screws for stabilizing unstable pediatric femoral neck fractures using a synthetic bone model. Twelve fourth-generation synthetic composite femurs were randomly assigned to screw fixation (Group S) or PHILOS fixation (Group P) (n = 6 each). A standardized vertically oriented Delbet type II osteotomy was created in all specimens. Group S was fixed with three 6.5-mm cannulated screws, whereas Group P received a PHILOS plate with 3.5-mm locking screws. Each specimen underwent a standardized loading protocol using a universal testing machine. Axial stiffness, cyclic displacement, ultimate failure load, and failure modes were recorded and statistically compared between groups. No statistically significant difference was found in axial stiffness between Group P (746 ± 300 N/mm) and Group S (753 ± 256 N/mm) (p = 1.000). Displacement after cyclic loading was significantly greater in Group P (1.42 ± 0.3 mm) than in Group S (0.57 ± 0.2 mm) (p = 0.004). The ultimate failure load was higher in Group S (2378 ± 513 N) than in Group P (1652 ± 206 N) (p = 0.025). Upon reaching ultimate load, all specimens in both groups failed by fracture of the femoral head. The adult PHILOS plate with 3.5-mm locking screws demonstrated inferior biomechanical stability compared with three 6.5-mm cannulated screws in this synthetic composite femur model. Controlled laboratory study.
Insurance fraud is difficult to detect in practice because claims data are often imbalanced across classes, and the information describing claims is multidimensional and heterogeneous. The present research used a unified evaluation framework to assess the predictive and interpretive capabilities of three distinct model families: CatBoost (tree-based ensemble learning), Bi-GRU with Attention (sequence-oriented learning), and TabTransformer (contextual embedding of categorical features). The model families were tested under a standardized experimental protocol. The study's novelty lies in a cross-model interpretability framework that unites Shapley Additive Explanations (SHAP)-based feature attribution with attention-based contextual analysis, enabling a clear comparison of model reasoning across the evaluated frameworks. The experiments were conducted on 4,000 life insurance claims characterized by 83 attributes. Common preprocessing procedures, such as handling missing values, scaling numerical variables, and handling highly correlated variables, were applied before training the models. Experimentally, CatBoost achieved the highest precision on legitimate claims, Bi-GRU the highest recall on fraudulent claims, and TabTransformer the best trade-off between accuracy, interpretability, and computational efficiency. Practically meaningful features such as claim amount, policy tenure, and diagnosis were repeatedly emphasized by both the SHAP and attention analyses. Taken together, this study provides a consistent and explainable benchmark that can support reliable fraud detection research and assist practitioners in choosing models that are both accurate and understandable.
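A minimal sketch of the SHAP attribution step for the CatBoost arm, using CatBoost's native SHAP computation on synthetic claims; the feature names (claim_amount, policy_tenure, diagnosis) and the 10% fraud rate are illustrative assumptions, and the real study used 83 attributes:

```python
# CatBoost classifier on synthetic imbalanced claims, with native SHAP values.
import numpy as np
import pandas as pd
from catboost import CatBoostClassifier, Pool

rng = np.random.default_rng(7)
n = 4000
df = pd.DataFrame({
    "claim_amount": rng.lognormal(8, 1, n),
    "policy_tenure": rng.integers(1, 30, n),
    "diagnosis": rng.choice(["A", "B", "C"], n),
})
y = rng.binomial(1, 0.1, n)  # imbalanced fraud label, as in typical claims data

pool = Pool(df, y, cat_features=["diagnosis"])
model = CatBoostClassifier(iterations=200, verbose=False)
model.fit(pool)

# CatBoost returns per-claim SHAP values; the last column is the expected value
shap_vals = model.get_feature_importance(pool, type="ShapValues")
mean_abs = np.abs(shap_vals[:, :-1]).mean(axis=0)
print(dict(zip(df.columns, mean_abs.round(4))))  # global feature attribution
```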