Advance care planning and respect for patient autonomy are central concerns in contemporary medical ethics, particularly for patients whose decision-making capacity may be impaired. This study aimed to examine and identify ethical challenges in judicial judgments in Japan relating to these patients. We conducted a retrospective observational study of Japanese civil court judgments using Westlaw Japan. We systematically identified judgments addressing physicians' duties of explanation or informed consent for patients with impaired decision-making capacity. Cases were limited to judgments rendered after May 2007, following publication of the national Guidelines on the Decision-Making Process for Medical Care and Care at the End of Life in Japan. We evaluated five items: (1) substantive consideration of patients' wishes and values; (2) individualized assessment of decision-making capacity and underlying conditions; (3) whether family members' wishes were treated merely as surrogate consent; (4) whether a repeated, dialogical decision-making process involving family members and healthcare or care teams was described; and (5) whether advance directives or similar written documents were referenced. Each item was coded dichotomously. We assessed inter-rater agreement between researchers using agreement rates and Cohen's κ statistics. Of 116 identified judgments, 10 civil cases met the inclusion criteria. Patients had a mean age of 79.5 years, and all cases involved impaired decision-making capacity due to conditions such as dementia. Courts explicitly attempted to respect or infer patients' wishes in two cases. Consideration of decision-making capacity and its medical assessment was noted in two cases. In two cases, courts treated family members' wishes as sources for inferring the patient's presumed wishes rather than as surrogate consent. A repeated, dialogical decision-making process was described in four cases. No judgment referred to advance directives. 
Inter-rater agreement was high for all items. The civil court judgments analyzed in this study for patients with impaired decision-making capacity did not sufficiently reflect principles emphasized in ethical guidelines in Japan. This highlights an important challenge in clinical practice and medico-legal evaluation in Japan and underscores the need for future empirical research to evaluate the implementation of advance care planning in clinical practice.
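The Cohen's κ statistic used above to quantify inter-rater agreement corrects the raw agreement rate for agreement expected by chance. A minimal Python sketch of the computation, using hypothetical dichotomous codes (not the study's data):

```python
def cohens_kappa(r1, r2):
    """Cohen's kappa for two raters' categorical (here dichotomous) codes."""
    assert len(r1) == len(r2) and len(r1) > 0
    n = len(r1)
    categories = set(r1) | set(r2)
    # Observed proportion of agreement
    p_obs = sum(a == b for a, b in zip(r1, r2)) / n
    # Chance agreement from each rater's marginal proportions
    p_exp = sum((r1.count(c) / n) * (r2.count(c) / n) for c in categories)
    return (p_obs - p_exp) / (1 - p_exp)

# Hypothetical codes for one dichotomous item across 10 judgments
rater_a = [1, 0, 0, 1, 0, 0, 0, 1, 0, 0]
rater_b = [1, 0, 0, 1, 0, 0, 0, 0, 0, 0]
kappa = cohens_kappa(rater_a, rater_b)
```

Here the raw agreement rate is 0.90, but κ is lower because most codes are 0 and chance agreement is high, which is why κ is reported alongside agreement rates.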
Protein is the most expensive macronutrient worldwide, yet protein-related claims like "high protein" or "protein #1 ingredient" strongly influence dog food purchases. Little is known about how dog owners define protein quality (PQ) and how this knowledge shapes purchasing decisions. This study investigated how perceptions of dietary protein translate from personal to dog feeding habits, the level of PQ knowledge among dog owners, and whether protein source or amount has a greater influence on dog food choice. A 60-question survey was distributed to dog owners across the United States using Qualtrics (Utah, USA) (n = 691). Each respondent answered 12 choice experiment questions in which dog food options varied by protein amount (20%, 35%), protein source (peas, chicken, chicken meal), and price ($80, $95, $110). Descriptive statistics were analyzed in SPSS Statistics (Version 29, IBM Corp.), and a multinomial logit model in STATA (Version BE) was used for choice experiment analysis, with significance set at p < 0.05. Chicken had the greatest influence on purchasing choice, followed by chicken meal and 35% protein, compared to a baseline diet of 20% protein and peas (p < 0.001). A plurality (30%) of respondents equated PQ with protein quantity, while only 18% could correctly define PQ as the ability of an ingredient to meet the indispensable amino acid requirements of an individual. For respondents who correctly defined PQ, all protein sources positively influenced choice, while greater protein quantity negatively influenced choice (p < 0.001). Ultimately, protein source, not amount, drives purchasing behaviour in the absence of protein-related claims on dog food. Protein-related claims like "high protein" strongly influence dog food purchasing decisions, but these claims are not standardized and do not consider protein quality (PQ).
Limited data exist on dog owners' understanding of PQ and whether protein source or quantity plays a greater role in purchasing decisions. This study explored how dog owners' personal nutritional beliefs affect nutritional decisions for their dog, how PQ is defined by owners, and how protein attributes influence purchasing decisions when protein-related claims are absent. A 60-question survey was distributed via Qualtrics to dog owners across the United States (n = 691). Participants completed 12 discrete choice experiments choosing between dog foods that varied by protein source (chicken, chicken meal, peas), amount (20%, 35%), and price ($80, $95, $110 USD). Data were analyzed using SPSS Statistics (Version 29, IBM Corp.) and STATA (Version BE; significance declared at p < 0.05). Overall, dog owners often projected personal dietary protein perceptions onto dogs' food choices. Chicken had the greatest influence on purchasing choice, followed by chicken meal, then 35% protein compared to a product with 20% protein and peas (p < 0.001). An interaction with 35% protein increased preference for chicken meal (p < 0.001), but not chicken or peas. Owners who correctly defined PQ were positively influenced by protein source but negatively by 35% protein (p < 0.025). Evidently, protein sources have a stronger influence on dog food purchases than protein amount.
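The multinomial logit model used for the choice experiments assigns each dog-food alternative a linear utility in its attributes and converts utilities to choice probabilities with a softmax. A minimal sketch with hypothetical part-worths (the coefficient values and attribute coding below are illustrative, not the study's estimates):

```python
import math

# Hypothetical part-worth utilities: chicken and chicken meal relative to a
# pea baseline, a 35%-protein effect, and a price coefficient.
beta = {"chicken": 1.2, "chicken_meal": 0.8, "peas": 0.0,
        "protein_35": 0.3, "price": -0.02}

def utility(alt):
    """Linear utility of one alternative under the hypothetical coefficients."""
    u = beta[alt["source"]] + beta["price"] * alt["price"]
    if alt["protein"] == 35:
        u += beta["protein_35"]
    return u

def choice_probs(alts):
    """Multinomial logit: P(i) = exp(V_i) / sum_j exp(V_j)."""
    exp_u = [math.exp(utility(a)) for a in alts]
    total = sum(exp_u)
    return [e / total for e in exp_u]

choice_set = [
    {"source": "chicken", "protein": 35, "price": 110},
    {"source": "chicken_meal", "protein": 20, "price": 95},
    {"source": "peas", "protein": 20, "price": 80},
]
probs = choice_probs(choice_set)
```

With these illustrative coefficients, the chicken option is predicted to be chosen most often despite its higher price, mirroring the reported dominance of protein source over amount.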
The modeling of response times using sequential sampling models has a long history. Because choices, confidence judgments, and reaction times are closely linked in perceptual decisions, it seems only natural to simultaneously model these three outcome variables of a decision. In the package dynConfiR, we implemented various sequential sampling models of choice, response time, and decision confidence in R. This paper gives an overview of the package, which provides probability density functions as well as high-level functions for fitting parameters to empirical data, prediction of reaction time and response distributions, and simulation of artificial data sets. We describe the mathematical specifications of the implemented models and provide detailed descriptions of the implemented likelihood functions. In addition, we outline the workflow for applying the model to empirical data step-by-step: data preprocessing, model fitting, model prediction, quantitative model comparison, and visual assessment of model predictions. Finally, we present results from parameter and model recovery analyses and assess the precision of probability density calculations, illustrating the robustness of the implemented computations. Offering intuitive usability and high flexibility, the package is targeted at researchers in the fields of decision-making and confidence and does not require expert-level programming skills.
Metacognitive judgments and decisions involve uncertainty and rely on probabilistic cues. Prior research shows that people integrate multiple cues when making judgments of learning (JOLs). The present study examined whether metacognitive control decisions are influenced by multiple cues as well. In each of two experiments, participants studied 60 words varying on two cues (Experiment 1: concreteness, emotionality; Experiment 2: font format, word frequency). In Experiment 1, all participants made restudy choices to maximize later recall, whereas in Experiment 2, half made restudy choices, and the other half provided JOLs. Participants who made restudy choices restudied their selected items, and all participants completed a recall test at the end of the experiment. At the group level, both cues influenced restudy choices in Experiment 1, but only one cue did so in Experiment 2. Individual-level analyses of Experiment 2 revealed that most participants used both cues, yet the direction of cue use differed across participants: Some participants more often selected items with cue values associated with lower JOLs, whereas others more often selected items with cue values associated with higher JOLs. Overall, effect sizes for cue effects on restudy choices were smaller than those for JOLs. These findings suggest that multiple cues guided metacognitive control decisions, but that cue integration and cue use were weaker and varied more across individuals than in metacognitive judgments. This pattern indicates that the alignment between monitoring and control is reduced by other factors influencing restudy choices.
Sample size re-estimation designs using a promising zone framework are widely used adaptive trial methodologies that guide study continuation or modification during interim analyses. Conventional implementations often base interim calculations solely on participants with available primary endpoints, overlooking predictive information from baseline and earlier visits. This underutilization can lead to inefficient interim decision-making. In this work, we adapt semi-parametric efficient estimators that leverage baseline and intermediate data for use within a promising zone sample size re-estimation design. By incorporating information from participants who have not yet reached their primary endpoint, these estimators enable more precise interim estimates while maintaining strict Type I error control through the inverse normal combination function. Using data from the ADAPT study in generalized myasthenia gravis, we illustrate how these methods integrate into a promising zone sample size re-estimation framework. Simulations based on longitudinal profiles of anti-acetylcholine receptor antibody-seronegative participants demonstrate improved operating characteristics compared with the conventional approach, including increased overall power, especially for moderate effect sizes, without inflating the one-sided Type I error. Our findings highlight the practical benefit of applying existing semi-parametric estimators within promising zone sample size re-estimation designs, enabling more efficient and timely interim decision-making in settings with partially observed longitudinal data.
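The inverse normal combination function referred to above combines stage-wise one-sided p-values with weights fixed before the interim analysis, which is what preserves the Type I error rate even after a data-driven sample size change. A minimal sketch using only Python's standard library (the stage-wise p-values below are hypothetical):

```python
import math
from statistics import NormalDist

def inverse_normal_combine(p1, p2, w1, w2):
    """Inverse normal combination of two stage-wise one-sided p-values.
    Weights are pre-specified with w1**2 + w2**2 = 1 (commonly
    w_k = sqrt(n_k / N) from the planned stage sample sizes)."""
    z = NormalDist().inv_cdf
    z_combined = w1 * z(1 - p1) + w2 * z(1 - p2)
    return 1 - NormalDist().cdf(z_combined)

# Hypothetical stage-wise p-values with equal planned stage sizes
p_combined = inverse_normal_combine(0.04, 0.03, math.sqrt(0.5), math.sqrt(0.5))
```

Because the weights are fixed at design time, the combined statistic remains standard normal under the null regardless of how the second-stage sample size was re-estimated.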
Agentic artificial intelligence (AI) systems employing multi-model architectures with iterative reasoning may surpass standard single-model large language models (LLMs) in complex clinical decision-making. Comprehensive comparisons of agentic versus standard LLM deployment against human specialists in critical care remain limited. This simulation study compared the performance of an agentic system combining GPT-5.0 and Gemini 2.0 Flash against two standard LLMs (Gemini 2.0 Flash and GPT-4o) and human specialists in acid-base disorder interpretation and sepsis management using text-based clinical vignettes. Forty-five clinical vignettes (20 acid-base, 25 sepsis) developed by an independent expert panel were evaluated by: (1) Gemini 2.0 Flash (standard single-turn); (2) GPT-4o (standard single-turn); (3) an agentic system combining GPT-5.0 and Gemini 2.0 Flash with multi-step reasoning and cross-verification; and (4) 20 board-certified physicians. Responses were anonymized and assessed by two blinded graders against pre-established gold standards using an explicit scoring rubric. For acid-base disorders, the agentic system achieved 91.0% overall accuracy (95% CI 85.2-96.8%), significantly outperforming GPT-4o (78.0%, P = .002), Gemini (74.5%, P < .001), and human specialists (83.0%, P = .038). Surviving Sepsis Campaign (SSC) hour-1 bundle compliance was 96.8% for the agentic system versus 82.4% for GPT-4o, 79.2% for Gemini, and 90.4% for humans (all P < .05). ROC analysis demonstrated superior discrimination for the agentic system (AUC = 0.932) compared to humans (0.856), GPT-4o (0.814), and Gemini (0.786). Subgroup findings in complex case categories are exploratory given small case numbers.
In this simulation study using text-based clinical vignettes, an agentic AI system combining GPT-5.0 and Gemini 2.0 Flash demonstrated significantly higher performance than standard LLM implementations and human medical specialists in structured tasks of acid-base interpretation and sepsis bundle compliance. These simulation-based findings suggest that agentic architectures may represent a promising direction for structured clinical decision support; prospective validation in real clinical environments with actual patient data is essential before implementation.
Shared decision making (SDM) is a process that actively involves both patients and clinicians in weighing the benefits and risks of a healthcare decision, based on clinical guidelines and the patient's preferences, needs and values. Despite the ethical foundation of SDM, its implementation remains limited. Physician-reported barriers to this limited uptake include insufficient SDM training. Training physicians in SDM could be a part of the puzzle. A recent systematic review showed a shift towards blended training rather than purely live or online learning. We therefore developed and pilot-tested a blended SDM training program for general practitioners (GPs) in Belgium. Acquired skills were evaluated from three viewpoints: observer, patient and physician. In a pre-post study, GPs participated in the blended training program, consisting of an e-learning module and a face-to-face session with simulation patients (SPs). GPs and SPs completed surveys before (T0) and after (T1) the blended training. Consultations were recorded for analysis with observer-reported scales (OPTION12 and the 4SDM scale). Secondary outcomes were SDM-Q9-patient, satisfaction with the consultation, and knowledge of and intentions towards SDM. Ten GPs were included. There was a significant increase in both the OPTION12 score (mean from 19·37 before to 37·70 after training, p = 0·0010, 95% CI [9·65 - 27·02]) and the 4SDM scale (mean (SD) from 9·2 (4·66) before to 17·00 (5·08) after training, p = 0·0001, 95% CI [5·47 - 10·13]), with a large effect after training (Cohen's d = 2·39, 95% CI [1·13 - 3·63]). The blended SDM training improved GPs' skills, knowledge and intentions in SDM in simulated consultations in the short term.
The integration of neurocognitive challenges into biomechanical movement tasks has gained attention due to its potential relevance for assessments of physical function, injury risk, and performance. However, a comprehensive mapping of the chosen methodological approaches, targeted populations, and applied outcome measures is still lacking. This scoping review hence aimed to synthesize the current literature on unplanned movement tasks combining cognitive decision-making and biomechanical outcome measurements. A systematic literature search following the PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses-extension for scoping reviews) guidelines was performed in Web of Science (Core Collection), MEDLINE (PubMed), Cochrane Library, and Google Scholar. Included studies combined unplanned movement tasks (e.g., change-of-direction or stopping) with biomechanical assessments. Eligible articles were analysed in terms of participant characteristics, movement type, unplanned task type, reactive stimulus, and biomechanical outcome variables. Of the 167 included studies, the majority focused on change-of-direction tasks (82%), mostly using standardized angles of 45° and moderate approach speeds (3.9 ± 0.9 m/s). Jump (7%), landing (12%), and/or stopping tasks (3%) were less frequent. Most studies (83%) relied on simple visual cues (e.g., lights or symbols), whereas more ecologically valid stimuli (e.g., videos or real opponents) were rarely applied. Biomechanical analyses predominantly focused on knee angles and moments as well as ground reaction forces, while only 23% of studies included electromyography measurements. Older adults (50+ years) were not represented. Although research on unplanned biomechanical tasks is growing, significant methodological heterogeneity and limited ecological validity may constrain the interpretability and applicability of findings.
Future research should aim for task designs that better reflect real-world conditions and include diverse populations and comprehensive neuromuscular assessments.
Tumour genetic profiling has the potential to significantly improve cancer care by informing targeted treatments and improving patient outcomes. As use increases worldwide, greater attention should be paid to consumer experiences, needs and priorities. This study is consumer-led and aims to inform an equitable and ethical roll-out of future services by exploring consumers' 1) awareness of tumour genetic profiling, 2) experiences with tumour genetic profiling, and 3) priorities for improving access to and delivery of tumour genetic profiling within Victoria, Australia. A consumer reference group was formed and supported by experienced researchers and professional staff of a comprehensive cancer centre alliance to develop and conduct the research study. A cross-sectional survey was conducted between January and May 2024, capturing demographic and disease characteristics, along with questions relating to each aim. Both quantitative and qualitative data were collected. Eligible participants were patients diagnosed with cancer whose treatment teams were based in Victoria, Australia, or caregivers of such patients. Of the 181 respondents (n = 36 carers, n = 145 patients), 23% (n = 44) reported that they (or the person they cared for) had undergone tumour genetic profiling. The majority reported a positive impact, including increased knowledge/understanding (n = 30, 68%) and personalised treatment options (n = 23, 52%), with very low decisional regret (mean: 3/100). However, 14% reported no understanding of the results at all, and confusion was reported as a drawback of testing. Higher education and greater shared decision making were associated with better understanding of results (p = 0.02 and p = 0.04, respectively) and higher education was also associated with greater awareness of genetic tumour profiling (p = 0.008). The primary barriers to uptake were lack of awareness (n = 88, 83%) and lack of perceived benefit from the treatment team (n = 19, 18%).
Key strategies for improvement identified by participants included government-subsidised testing and improved patient and clinician education. This study highlighted gaps in consumer awareness and access to tumour genetic profiling, as well as the benefits of shared decision making. Overall, consumer-led insights emphasise the need for equitable funding, education, and systemic improvements. These findings can inform policies and practices aimed at delivering person-centred cancer care in Victoria and beyond. Future longitudinal research is needed to comprehensively explore these associations and track progress.
Lipid-poor adrenal adenomas (LPAs) and pheochromocytomas (PCCs) are similar tumours, but misdiagnosed LPAs may lead to health risks such as hypertensive crisis due to improper treatment. The aim of this study was to develop an efficient method for classifying LPAs and PCCs on the basis of different CT scans that minimises radiation dose. The patients included in this study were randomly divided into training and validation groups (at a 7:3 ratio). The datasets, comprising 2-phase (plain and venous-enhanced CT scans) or 3-phase CT data, were separately used to construct XGBoost, Gradient Boosted Decision Tree (GBDT), AdaBoost, random forest and decision-tree models. Receiver operating characteristic (ROC) curves were used to evaluate the models, and the DeLong test was used to determine significant differences. The areas under the curve (AUCs) of the XGBoost, GBDT, AdaBoost, random forest and decision-tree models were 0.91, 0.89, 0.85, 0.78, and 0.71, respectively, in the 2-phase CT group, and 0.92, 0.91, 0.89, 0.81, and 0.78, respectively, in the 3-phase CT group. The optimal model in both the 2- and 3-phase groups was XGBoost, which exhibited similar performance in both groups. The DeLong test confirmed no significant difference in XGBoost performance between the two groups. Our XGBoost-based model constructed using 2-phase CT data performs similarly to that constructed using 3-phase CT data; both exhibited good performance in the classification of LPAs and PCCs.
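The AUC values compared above have a direct probabilistic reading: the chance that a randomly chosen PCC case receives a higher predicted score than a randomly chosen LPA case. A minimal sketch of this rank-based (Mann-Whitney) computation, using hypothetical predicted probabilities rather than the study's model outputs:

```python
def auc_mann_whitney(pos_scores, neg_scores):
    """AUC as the Mann-Whitney probability that a random positive case
    scores higher than a random negative case (ties count as 0.5)."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical predicted probabilities of PCC for PCC (positive)
# and LPA (negative) cases
pcc_scores = [0.9, 0.8, 0.7, 0.6]
lpa_scores = [0.4, 0.5, 0.3, 0.7]
auc = auc_mann_whitney(pcc_scores, lpa_scores)
```

The DeLong test builds on exactly this pairwise formulation, estimating the variance of the AUC (and of the difference between two correlated AUCs) from the same positive-negative comparisons.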
Despite the global recognition of advance care planning as a critical component of patient-centred end-of-life care, its implementation remains challenged by skill-based deficiencies (e.g., inadequate training), cultural and communication barriers, and system-level structural impediments within healthcare settings. This study aimed to develop and implement a structured advance care planning communication model to improve nurses' communication practices and facilitate patient engagement in end-of-life care discussions. A participatory action research design with embedded mixed methods was conducted from September 2020 to September 2022 in an oncology palliative care unit at an oncology hospital in Beijing, China. The study integrated the Advance Directive Decision-Making Model with the Meaning-Making Intervention. Data collection included surveys, participant observation, and semi-structured interviews across three phases. Four iterative action cycles were used to co-develop and refine the communication model. Quantitative and qualitative data were triangulated through team debriefings to generate meta-inferences. Initial assessments included surveys and observations. Nurses held a foundational knowledge of advance care planning principles (mean knowledge: 68.52%), but expressed hesitation to initiate end-of-life discussions. Iterative cycles developed a three-step communication model. The steps were: (1) Recognize the Present, (2) Life Review, and (3) Face the Future. Post-action data showed improvements in all areas. Nurses' knowledge increased significantly (mean score increase: 1.90 points). Attitude scores increased (mean increase = 0.90), and behaviour scores also increased (mean increase = 0.57). Paired t-tests confirmed significant differences for all measures (p < 0.001).
Key improvements attributed to the model included the development of time-efficient communication strategies, structured support systems, and adaptive communication techniques tailored to patient needs. The structured three-step advance care planning communication model improves nurse-patient communication and patient engagement in end-of-life decision-making. This model provides a practical framework for initiating and guiding advance care planning conversations in oncology care. Future research is needed to evaluate its applicability in diverse settings and its long-term impact on patient outcomes.
Artificial intelligence (AI) is rapidly transforming surgical practice, with applications spanning preoperative planning, intraoperative guidance, postoperative management, and surgical education. Despite accelerating research activity, the structure, thematic evolution, and funding landscape of AI research in general surgery remain incompletely characterized. This study aimed to systematically evaluate scientific production on AI in general surgery in the United States over the past 5 years using a bibliometric approach. A bibliometric analysis was conducted following the Preliminary Guideline for Reporting Bibliometric Reviews of the Biomedical Literature and Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines using Web of Science. English-language articles published between 2020 and 2025 with a U.S.-affiliated senior author and focused on AI use in general surgery were included. Publications were analyzed across five primary domains: authorship metrics, thematic endpoints, journal characteristics, country of origin, and funding patterns. Bibliometric indicators included H-index, citation counts, Article Influence Score (AIS), and Bradford's Law classification. Funding distribution across endpoints was evaluated using chi-square or Fisher's exact tests, with effect sizes estimated using Cramér's V and odds ratios. Temporal trends in endpoints and keywords were assessed using Poisson and negative binomial regression models. Fifty-nine studies met inclusion criteria, comprising 20 reviews and 39 original investigations. Scientific production increased consistently from one study in 2019 to 17 in 2023 and 16 in 2024, demonstrating sustained growth. Surgical workflow recognition (n = 19) and clinical decision support (n = 18) were the predominant research domains, representing 63% of the included literature.
Temporal analysis demonstrated significant annual growth in reviews (incidence rate ratio [IRR] 2.09, P = .002) and workflow-focused studies (IRR 1.37, P = .031). Keyword analysis revealed sustained prominence of AI and machine learning, with limited emergence of new thematic directions. Most studies reported no funding (57.6%). Although overall funding distribution did not significantly differ across application categories (P = .846), clinically actionable AI applications were significantly more likely to receive funding compared with other research areas (OR 4.0, 95% CI 1.22-13.13; P = .029). AI research in U.S. general surgery is growing but remains concentrated in workflow and decision-support domains. Funding favors clinically actionable applications, highlighting the need for broader, equity-focused AI development.
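The odds ratio and confidence interval reported for funding of clinically actionable applications can be reproduced from a 2×2 table with a Wald interval on the log scale. A minimal sketch with hypothetical counts (not the study's data):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a = exposed & outcome, b = exposed & no outcome,
    c = unexposed & outcome, d = unexposed & no outcome.
    (Here, hypothetically: funded vs. unfunded by clinically
    actionable vs. other application category.)"""
    or_value = (a * d) / (b * c)
    # Standard error of log(OR): sqrt of summed reciprocal cell counts
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(or_value) - z * se_log_or)
    upper = math.exp(math.log(or_value) + z * se_log_or)
    return or_value, lower, upper

# Hypothetical counts for illustration only
or_value, lower, upper = odds_ratio_ci(10, 8, 15, 26)
```

With small cell counts, the interval is wide and asymmetric around the point estimate on the natural scale, which matches the broad CI (1.22-13.13) reported above.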
The COVID-19 pandemic led to unprecedented interest and participation in vaccine trials globally, and a concurrent increase in vaccine hesitancy. Whether this impacted recruitment of healthy volunteers to subsequent non-COVID vaccine trials is not well studied. We explored the impact of the COVID-19 pandemic on motivations for participating in two clinical trials of the same novel anti-plague vaccine, conducted in the United Kingdom (UK) and Uganda in 2021 and 2022. Participants enrolled in PlaVac (UK) and PlaVac Uganda, Phase I trials of ChAdOx1 Plague vaccine, were invited to complete an optional questionnaire and semi-structured interview examining motivations for participating, including questions on the impact of the COVID-19 pandemic on their decision. Questionnaires were self-administered and interviewer-administered for the UK and Uganda studies, respectively. Interviews were conducted in local languages, transcribed in English, and analysed using thematic analysis. Results were compared between studies. Thirty-one of the 45 (68.9%; 25.8% female) UK trial participants and all 36 (100.0%; 27.8% female) of the Uganda trial participants completed questionnaires, and 19 Uganda questionnaire respondents completed interviews. Responses to questions on the impact of the COVID-19 pandemic on volunteering decisions were divergent between countries, with little effect for UK participants but a strong positive effect for Ugandan participants. Themes relating to this effect were "contributor, not cause" in the UK, and in Uganda were preparedness (wanting to contribute to vaccine development to prevent suffering and death from future epidemics), increased awareness (understanding the vaccine development process and seeing rapidly deployed COVID-19 vaccine trials gave them confidence), and personal protection (believing themselves to be protected by the novel plague vaccine).
Participants in both studies expressed trust and confidence in the study vaccine, which shares its adenoviral-vectored platform technology (ChAdOx1) with the COVID-19 vaccine ChAdOx1 nCoV-19 (Vaxzevria, AstraZeneca). For Ugandan participants, COVID-19 and mass vaccination increased knowledge about vaccines and trials and encouraged them to participate in research, but the pandemic had little impact on UK volunteers. There was no evidence that perceptions of the related ChAdOx1 nCoV-19 vaccine negatively affected trial participants' confidence in the novel plague vaccine's safety. Trial registration: ISRCTN41077863 (prospectively registered 19/03/2021) and ISRCTN79243381 (prospectively registered 05/08/2022).
High-quality contraceptive counseling has been associated with increased contraceptive method satisfaction. We aimed to assess this relationship in the postpartum period and to investigate practical aspects of counseling underlying this association. We used data from 219 pregnant individuals aged 21-44 years contemplating tubal sterilization who were recruited to a randomized trial assessing the efficacy of the MyDecision/MiDecisión decision aid. Three months postpartum, participants rated their satisfaction with their chosen contraceptive method and their perception of contraceptive counseling encounter quality on Likert scales. Counseling quality domains included provider demonstration of respect, explanation of methods, pressure toward a method, and response to questions, as well as subjective counseling satisfaction. We used logistic regression analyses to assess the relationship between optimal counseling (both overall and in each domain) and optimal contraceptive method satisfaction, adjusting for randomization arm and demographic covariates significant in bivariate analysis. Participants had a mean (SD) age of 30 (5) years; 42% identified as non-Hispanic White, 25% as non-Hispanic Black, and 26% as Hispanic. Approximately one-third of participants (37%) had a tubal sterilization by study follow-up; 11% reported using no method of contraception. Many participants reported optimal contraceptive counseling (61%) and optimal method satisfaction (65%). Optimal counseling was associated with higher odds of optimal method satisfaction (aOR 1.88, 95% CI 1.03-3.46, p = 0.04). This relationship was sustained across all individual counseling quality domains. Patient-perceived provider demonstration of respect, discussion of contraceptive pros and cons, avoidance of pressure, and answering questions were associated with postpartum contraceptive method satisfaction.
Intrahepatic cholestasis of pregnancy (ICP) complicated by twin pregnancy significantly increases the risk of preterm birth, and no tailored predictive tools for gestational age (GA) at delivery have been developed for this specific population to date. This study aimed to develop and validate a twin-specific nomogram incorporating dynamic total bile acid (TBA) monitoring, medication history and treatment response for preterm birth prediction in this population. A retrospective cohort of 258 twin pregnancies complicated by ICP was enrolled (November 2024-November 2025). The data included demographic, clinical, biochemical (dynamic TBA parameters, liver enzymes), and therapeutic variables (ursodeoxycholic acid (UDCA) usage, combination regimens, and post-treatment TBA response). LASSO regression was used to select predictors, which were incorporated into a logistic regression-based nomogram. The model was validated in terms of discrimination (the area under the receiver operating characteristic curve (AUC)), classification accuracy (sensitivity, specificity, PPV, NPV), calibration (Hosmer-Lemeshow test, calibration curves), and clinical utility (decision curve analysis (DCA)). In this cohort, the incidence of preterm birth was 83.3%. The independent predictors of preterm birth included GA at ICP diagnosis, UDCA usage, GA at TBA peak, TBA severity group at peak, predelivery TBA (TBA end), aspartate aminotransferase (AST), and treatment response (all P < 0.05). The discriminatory performance of the nomogram (AUC) was 0.812 (95% CI: 0.721-0.903) in the training set and 0.740 (95% CI: 0.590-0.889) in the test set. Calibration curves and Hosmer-Lemeshow tests (training set P = 0.1527; test set P = 0.6991) confirmed good agreement between the predicted and actual outcomes. DCA demonstrated net benefits across a clinically relevant range of risk thresholds (0-0.833).
The model exhibited high specificity (93.8%) and negative predictive value (85.7%) in the test set. To our knowledge, this is among the first nomograms for preterm birth prediction in twin pregnancies with ICP that integrate dynamic TBA monitoring and therapeutic variables. The model is intended primarily as a low-risk exclusion tool to support clinical monitoring strategies, rather than to guide high-risk prediction or delivery decisions. Notably, the model predicts a composite preterm birth outcome modified by both biological risk and clinical intervention rather than purely spontaneous preterm birth, and its low sensitivity further restricts its utility for high-risk prediction. Its clinical utility therefore requires rigorous prospective and external validation studies.
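The rule-out framing above rests on specificity and NPV. A minimal stdlib sketch of these confusion-matrix metrics; the counts are hypothetical, chosen only so that the illustrative specificity (93.8%) and NPV (85.7%) match the reported test-set values while sensitivity stays low.

```python
def classification_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV from confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),  # share of preterm births caught
        "specificity": tn / (tn + fp),  # share of term deliveries cleared
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),          # reliability of a "low-risk" call
    }

# Hypothetical counts (not the study's raw data): tp=3, fp=2, fn=5, tn=30
m = classification_metrics(3, 2, 5, 30)
# specificity = 30/32 ~ 0.938, npv = 30/35 ~ 0.857, sensitivity = 3/8 = 0.375
```

A high NPV means a negative (low-risk) prediction is usually correct, which is exactly what a low-risk exclusion tool needs; the low sensitivity is why the authors caution against using it for high-risk prediction.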
Surgical wounds healing by secondary intention occur if a surgical wound is not closed or dehisces following primary closure. Surgical wounds healing by secondary intention are common and adversely affect patients' quality of life. Treatment is often prolonged, complex and expensive. Negative pressure wound therapy applies a controlled vacuum to the wound and is increasingly used to promote healing of surgical wounds healing by secondary intention, despite limited rigorous evidence of its clinical and cost-effectiveness. Assess the clinical and cost-effectiveness of negative pressure wound therapy versus usual care (no negative pressure wound therapy) in treating surgical wounds healing by secondary intention. A pragmatic, two-arm, parallel-group, randomised controlled superiority trial. Twenty-eight UK NHS Trusts randomised adult patients with a surgical wound healing by secondary intention to receive negative pressure wound therapy or usual care (no negative pressure wound therapy). The planned sample size was 696 participants. Participants were followed up for 12 months via weekly telephone contact to collect the primary outcome (time to healing: full cover with no scab in days since randomisation) and clinical secondary outcomes: wound healing, surgical site infection, pain, hospital re-admission, current treatment and reasons for treatment change (if applicable), reoperation, amputation, antibiotic use, death. Patient-reported outcomes (pain, health-related quality of life and resource use) were collected by postal questionnaire at 3, 6 and 12 months. Validation of the Bluebelle Wound Healing Questionnaire, a patient-reported measure of surgical site infection, was also undertaken.
A cost-effectiveness decision model considering all available evidence, and a within-trial cost-utility analysis, were also undertaken to evaluate the cost-effectiveness of negative pressure wound therapy against usual care. Neither participants nor the investigators were blinded to treatment allocation. Between 15 May 2019 and 13 January 2023, 686 participants were recruited, randomised and included in the analysis (negative pressure wound therapy n = 349; usual care n = 337). Most participants had a single surgical wound healing by secondary intention (n = 622, 90.7%), located on the foot (n = 551, 80.3%) or leg (n = 69, 10.1%) arising following vascular surgery (n = 619, 90.2%). Most participants had comorbidities: diabetes (n = 549, 80.0%), cardiovascular disease (n = 446, 65.0%) and/or peripheral vascular disease (n = 349, 50.9%). Median time to healing was 187 days (negative pressure wound therapy) versus 195 days (usual care), with no evidence that negative pressure wound therapy reduced the time to wound healing compared to usual care (hazard ratio 1.08, 95% CI 0.88 to 1.32; p = 0.47). Odds of re-admission, reoperation, surgical site infection and antibiotic use were slightly higher, and odds of amputation or death slightly lower, for negative pressure wound therapy participants. These results were not clinically or statistically significant. Bluebelle Wound Healing Questionnaire, quality of life and wound pain scores were not statistically significantly different at any time point. Serious adverse events were rare (nine negative pressure wound therapy vs. five usual-care participants). Both cost-effectiveness analyses concluded that negative pressure wound therapy generates higher costs and marginally higher quality-adjusted life-years than usual care, although these findings were not statistically significant.
The probability of negative pressure wound therapy being cost-effective was low at the recommended National Institute for Health and Care Excellence cost-effectiveness thresholds. The Bluebelle Wound Healing Questionnaire was acceptable to participants, had low levels of missing data and demonstrated good levels of sensitivity and specificity in the detection of surgical site infection in surgical wounds healing by secondary intention. The trial included a high proportion of participants with diabetes and foot wounds, which may affect study generalisability. Negative pressure wound therapy use for 'wound management', common in certain surgical specialties, was not assessed in this study. Negative pressure wound therapy is neither clinically effective nor cost-effective in augmenting healing in patients with surgical wounds healing by secondary intention, particularly those with comorbidities. Evaluation of methods to treat or prevent infection of surgical wounds healing by secondary intention and evaluation of negative pressure wound therapy for 'wound management' are recommended. This synopsis presents independent research funded by the National Institute for Health and Care Research (NIHR) Health Technology Assessment programme as award number 17/42/94. After an operation, most wounds are closed using stitches or staples. Some wounds cannot be closed and are left open. Some closed wounds may reopen. These 'open' wounds are usually left to heal slowly from the bottom up. Negative pressure wound therapy is commonly used to treat 'open' wounds. Negative pressure wound therapy uses a machine to apply gentle suction to a wound, which removes wound fluid, and may help keep the wound clean and perhaps aid healing. We do not know if negative pressure wound therapy is as good as, better than or worse than standard wound dressings that are also used for healing 'open' surgical wounds. We also do not know if negative pressure wound therapy is good value for money.
There has not been enough high-quality, independent research to enable doctors and nurses to decide on the best treatment. Between May 2019 and January 2023, 686 patients with an open wound agreed to take part and were randomly assigned in equal numbers to standard dressings or negative pressure wound therapy. Most of the wounds were on patients' feet. Most patients had diabetes, and many patients also had conditions affecting their heart and/or blood vessels. We collected wound healing data, treatment information and health outcomes for each participant for a year. We found no clear evidence that negative pressure wound therapy provided any significant benefits for patients, and specifically that negative pressure wound therapy did not reduce the time it took for wounds to heal compared to standard wound care. Negative pressure wound therapy was also more expensive than standard dressings and so was not likely to be a good use of healthcare resources. Patients and doctors will be able to make more informed decisions about which dressing to use to help wounds heal. The National Health Service can save money by recommending the use of standard dressings for open wounds instead of using the more expensive negative pressure wound therapy.
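The trial's cost-effectiveness conclusion can be framed as an incremental net monetary benefit (NMB) calculation against a willingness-to-pay threshold. A minimal sketch with hypothetical numbers (not the trial's estimates), using the commonly cited NICE range of 20,000-30,000 GBP per quality-adjusted life-year (QALY):

```python
def net_monetary_benefit(delta_cost, delta_qaly, threshold):
    """Incremental net monetary benefit: the intervention is cost-effective
    at the given willingness-to-pay threshold (GBP/QALY) when NMB > 0."""
    return threshold * delta_qaly - delta_cost

# Hypothetical: the new therapy costs 1,500 GBP more per patient and
# yields only 0.01 additional QALYs, as in a "higher cost, marginally
# higher QALYs" scenario like the one the trial describes.
for wtp in (20000, 30000):
    nmb = net_monetary_benefit(1500.0, 0.01, wtp)
    # NMB remains negative across the range, i.e. not cost-effective
```

A small QALY gain at a large extra cost leaves the NMB negative at any plausible threshold, which is the arithmetic behind "higher costs and marginally higher quality-adjusted life-years" translating into a low probability of cost-effectiveness.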
Improving the quality of water intake monitoring data is an urgent issue in current water management. Industrial water intake monitoring data obtained during promotion of the National Water Resources Monitoring Capacity Building Project were taken as a sample, the common categories of abnormal water intake monitoring data were summarized, and a "rough screening-fine identification-reconstruction" strategy was proposed. Considering the seasonal fluctuation of water monitoring data, multiscale models for identifying abnormal industrial water monitoring data were constructed based on the segmented 3σ criterion, the wavelet transform, and the Fourier function. Moreover, a least squares support vector machine (LSSVM) model with an adaptive inertia function and particle swarm optimization (PSO) was used to reconstruct and recover the anomalous data. The results indicate that the segmented 3σ criterion performs well for rough processing of water intake monitoring data, identifying 26 data points that fall outside the corresponding threshold intervals. The Fourier function can effectively reduce the information loss associated with the wavelet transform, thereby improving the accuracy of abnormal data identification; based on verification feedback from monitoring users, 31 of the 38 detected abnormal points were confirmed as "demand-driven anomalies," yielding an identification accuracy of 81.6%. Furthermore, the inertia function-particle swarm optimization LSSVM model meets the high-precision requirements for abnormal data reconstruction and recovery, and its reconstruction accuracy is higher than that of the LSSVM, the PSO-LSSVM, and the traditional curve fitting method.
Specifically, the inertia function-particle swarm optimization LSSVM achieves an average fitting error of 0.0286, representing reductions of 46.2% and 44.4% compared with the LSSVM (0.0532) and PSO-LSSVM (0.0514), respectively; moreover, when compared with the ground-truth values obtained from verification feedback, the reconstruction error rate is below 5%. Overall, the proposed multiscale mining and reconstruction strategy for industrial water intake monitoring abnormal data can provide a valuable methodological reference for enhancing the decision support capability of data in the National Water Resources Monitoring Capacity Building Project.
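The rough-screening step described above can be sketched with the segmented 3σ criterion: split the series into fixed-length (e.g. seasonal) segments and flag points outside mean ± 3·std within each segment. A minimal stdlib sketch; the segment length and the illustrative series are assumptions, not the paper's data.

```python
from statistics import mean, stdev

def segmented_3sigma(series, seg_len):
    """Rough screening: split the series into fixed-length segments
    (e.g. per season) and flag indices outside mean +/- 3*std per segment."""
    flagged = []
    for start in range(0, len(series), seg_len):
        seg = series[start:start + seg_len]
        if len(seg) < 2:
            continue  # too short to estimate a spread
        mu, sd = mean(seg), stdev(seg)
        lo, hi = mu - 3 * sd, mu + 3 * sd
        flagged.extend(start + i for i, x in enumerate(seg)
                       if not lo <= x <= hi)
    return flagged

# Illustrative series: flat daily intake with one spurious spike at index 5
series = [10.0] * 20
series[5] = 100.0
```

Here `segmented_3sigma(series, 20)` flags only index 5; the paper's fine-identification stage (wavelet transform plus Fourier function) then distinguishes genuine demand-driven changes from such anomalies before reconstruction.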
Healthcare systems operate within a VUCA (Volatile, Uncertain, Complex, and Ambiguous) environment, shaped by economic, demographic, and systemic transformations. These rapid and unpredictable changes create ethical challenges, resource constraints, and heightened emotional and moral distress for healthcare professionals. The increasing complexity of care delivery, shifting institutional priorities, and external pressures contribute to moral injury, impacting professionals' ability to provide patient-centered care while maintaining their ethical and professional integrity. This qualitative study aimed to explore how healthcare professionals experience and cope with moral injury in a VUCA healthcare ecosystem, drawing on 35 semi-structured interviews. The research uses an abductive analysis guided by the VUCA framework to examine the systemic roots of moral conflict. The analysis identified six themes highlighting how instability, unpredictability, ambiguity, and systemic overload shape clinical decision-making, emotional burden, and ethical distress. Participants described moral injury as emerging from the misalignment between professional values and institutional demands, intensified by resource shortages, role ambiguity, and crisis normalization. These pressures affect professionals' well-being, compromise ethical integrity, and contribute to long-term psychological consequences. The findings emphasize the need to move beyond individual-level resilience strategies and focus on systemic reforms. Strengthening institutional support structures, including ethical leadership, reflective spaces, and alignment between organizational policy and professional ethics, is essential for protecting both clinicians' integrity and care quality in today's complex healthcare landscape.
Operating room nurses (ORNs) are at high risk for compassion fatigue (CF), which significantly impairs individuals' well-being, undermines the stability of the nursing workforce, and jeopardizes patients' safety. The study aimed to analyze the prevalence and symptom characteristics of CF among ORNs, construct and compare predictive models using machine learning, and determine the relative contribution of distinct features to the models. This is a multi-center cross-sectional study. The questionnaires used in the study included a sociodemographic questionnaire, the Professional Quality of Life Scale (ProQoL), the Patient Health Questionnaire-9 (PHQ-9), the Generalized Anxiety Disorder 7-item Scale (GAD-7), and the Pittsburgh Sleep Quality Index (PSQI). LASSO regression was used to select critical variables, and predictive models such as decision tree, logistic regression, random forest, SVM, and XGBoost were constructed and compared. SHapley Additive exPlanations (SHAP) plots were drawn to show the contribution of each feature to the models. SPSS version 26.0 and R software version 4.4.0 were used for statistical analyses. In this study, a total of 1024 ORNs from 20 cities across China were recruited. According to ProQoL, 326 (31.8%) reported severe CF, 311 (30.4%) moderate CF, and the remaining 387 (37.8%) no or mild CF. Among the three dimensions, secondary traumatic stress was most common (95.4%), followed by low compassion satisfaction (61.3%) and burnout (35.0%). Among the five machine learning-based predictive models, the random forest (RF) model stood out with the highest AUC of 0.851 (95%CI: 0.795-0.907) in the testing set. Following closely, the XGBoost model showed favorable efficacy with an AUC of 0.824 (95%CI: 0.769-0.879) in the testing set, outperforming the remaining algorithms.
The results of the two SHAP plots (RF and XGBoost) were consistent: depression, anxiety, self-mental health training, sleep quality, and length of service emerged as the five most significant contributors to the models. This study identified a substantial burden of CF among ORNs, with secondary traumatic stress the most prevalent symptom dimension. The RF model exhibited the best performance in identifying high-level CF among ORNs, and SHAP improved the interpretability of the model. The findings of this study could help medical managers and researchers better understand CF and provide timely interventions for ORNs.
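The AUC values used above to compare models have a simple probabilistic reading: the chance that a randomly chosen nurse with high-level CF receives a higher predicted risk than one without. A minimal stdlib sketch of that rank-based (Mann-Whitney) computation; the scores are hypothetical, not the study's model outputs.

```python
def auc_from_scores(scores, labels):
    """AUC via the Mann-Whitney statistic: the probability that a random
    positive case outscores a random negative case (ties count as half)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

Perfectly separated scores give an AUC of 1.0, chance-level ranking gives 0.5; values such as the reported 0.851 sit in between, reflecting good but imperfect discrimination.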
Peer support workers (PSWs) provide support to others through their personal lived experiences of mental health. However, their work is often undervalued by their colleagues, and they frequently face challenges in the workplace, resulting in occupational stigma. Currently, there are limited insights into how PSWs experience and manage the stigma they face. Therefore, this study examines how PSWs in the UK National Health Service experience and navigate occupational stigma in their roles. Seventy semi-structured interviews were conducted with PSWs and their colleagues. Interviews explored their experiences in the role, workplace interactions, their perceptions and experiences of stigma, and how they dealt with stigmatising experiences. The data were analysed using thematic analysis to identify how stigma manifested and how they navigated it. PSWs reported experiencing stigma both covertly and explicitly. Covert stigma included subtle devaluation of their knowledge and exclusion from decision-making, while explicit stigma involved direct questioning of competence and disrespectful behaviour from colleagues. In response, PSWs navigated stigma through three main strategies. First, they demonstrated commitment to their role via reliability, dedication, and consistent performance, reinforcing the value of their work. Second, PSWs leveraged experiential knowledge as expertise, emphasising practical skills and lived experience in patient care. Third, they used their roles to create reciprocal benefits, where they supported service-users, which in turn helped their own mental health and recovery. Occupational stigma towards PSWs is pervasive, manifesting in both subtle and overt ways that can undermine their role. PSWs actively counter stigma through commitment, expertise, and reciprocal relationships, highlighting their resilience and adaptability. Addressing stigma in healthcare settings is critical for improving team dynamics and ensuring high-quality care.
Going forward, policymakers and organisations that employ PSWs should focus on improving organisational culture, recognition of the role, and collaborative practices, in order to reduce stigma, strengthen workforce sustainability, and recognise the value of lived experience in the workforce.