Aging and increased life expectancy generate growing challenges for end-of-life care in old age, particularly in rural contexts marked by territorial and health inequalities. From the perspective of gerontological geography and the notions of autonomy and agency of older adults, this study aims to generate an understanding of end-of-life as a lived experience from the subjective worlds of and with the people involved. To this end, a qualitative study, with an ethnographic approach and case study strategy, was conducted in the Los Lagos Region of Chile between 2022 and 2023. This included semi-structured interviews and ethnographic observation of rural older adults in the end-of-life stages, their caregivers, and rural health teams. The results show that remaining at home is a central desire and organizes care, sustained primarily by feminized family networks and rural primary care. The home becomes a space of care, and health teams play a key role in providing clinical and relational support at the end of life. It is concluded that end-of-life care in rural areas requires territorial approaches that recognize autonomy in old age and the structural inequalities of these processes.
Loneliness is linked to higher mortality and poorer health worldwide. As the global population of individuals who are unpartnered and childless grows, public health concerns about social isolation have risen. Sociological theories distinguish between having fewer ties, being socially isolated, and being lonely, while gerontological theories emphasize how aging adults prioritize smaller, high-quality relationships. Yet norms about family, friendship, and loneliness also differ across country contexts such as culture and development. Therefore, while close non-family ties (e.g., friendship) might reduce loneliness, particularly among those who lack partners or children, these processes likely differ across country contexts. Consequently, it remains unclear (1) whether lacking partners or children is associated with various dimensions of loneliness, (2) whether friendship buffers the risk of loneliness, and (3) if and how these processes differ across societies. This study examines being unpartnered or childless, friend contact, and loneliness among those aged 45+ (N = 19,289) across 25 countries, using data from the International Social Survey Programme (2017) and country-level indicators from the World Health Organization and World Values Survey. Being unpartnered is particularly associated with loneliness, especially lacking companionship. Yet frequent friend contact appears to buffer loneliness across all outcomes, especially for the unpartnered. Those in countries that place a high value on family, and especially on friendship, are at lower risk of loneliness. These findings are discussed in light of changing family structures, the potential role of friendship in compensating for limited family ties, and differential risks of loneliness by country context.
Modelling approaches that consider system-wide delivery platforms rather than single diseases can be instrumental in economic evaluation and forward-looking policy formulation. This study develops a costing approach tailored to the Thanzi La Onse (TLO) model of Malawi's healthcare system, with general applicability to other health system models. We developed a mixed-method costing approach to estimate the total cost of healthcare delivery (excluding high-level administrative costs) in Malawi using the TLO model, from a healthcare provider perspective. Through iterative adjustments of key parameters, we aligned model-based estimates as closely as possible with real-world expenditure and budget data. Costs were projected for 2023-2030 under alternative scenarios of health system capacity. A comparison with expenditure and budget data suggests our costing method is broadly reliable for the conditions captured by the model, though some mismatches remain owing to data limitations and definitional inconsistencies. Under current system capacity, total healthcare delivery costs for 2023-2030 were estimated at 2.83 billion US dollars [95% uncertainty interval (UI), $2.80-$2.87 billion], excluding non-medical infrastructure and administrative costs, averaging $390.98 million [$385.92-$396.71 million] annually or $16.89 [$16.75-$17.08] per capita. Scenario analysis highlighted strong interdependencies within the health system. Improving consumable availability alone increased consumables costs by 4.63%, while expanding human resources for health (HRH) alone increased them by 1.43%. When both HRH and consumable availability were expanded together, consumable costs rose by 5.93%, a combined effect larger than either change alone, illustrating how bottlenecks in one component constrain the impact of improvements in another. Mixed-method costing using health system models is a feasible and robust method to estimate and forecast healthcare delivery costs. 
Clarifying assumptions and limitations can improve their utility for economic analyses and evidence-based planning in the health sector.
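The scenario interaction this abstract reports (a combined expansion raising consumables costs more than either single change) can be sketched numerically. The percentage figures below are taken from the abstract; the baseline cost itself is an arbitrary placeholder, purely for illustration.

```python
# Illustrative sketch of the reported scenario interaction: percentage change
# in consumables cost relative to an arbitrary (hypothetical) baseline.
baseline = 100.0

def pct_change(cost, base=baseline):
    """Percentage change of a scenario's consumables cost versus baseline."""
    return round((cost - base) / base * 100, 2)

scenarios = {
    "consumables_only": 104.63,  # +4.63% when only consumable availability improves
    "hrh_only": 101.43,          # +1.43% when only HRH expands
    "both": 105.93,              # +5.93% when both expand together
}
changes = {name: pct_change(cost) for name, cost in scenarios.items()}

# The combined change exceeds either single-lever change (though it is below
# their simple sum), consistent with bottlenecks in one component limiting
# the impact of improvements in another.
assert changes["both"] > max(changes["consumables_only"], changes["hrh_only"])
```

This mirrors the abstract's point that capacity levers interact: relieving one constraint increases utilization elsewhere, so costs under joint expansion are not additive.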
Angiotensin receptor-neprilysin inhibitor (ARNi) therapy, combining blockade of the renin-angiotensin-aldosterone system with the inhibition of neprilysin-mediated peptide degradation, is the standard therapy for heart failure with a reduced ejection fraction. Elevated urinary C-peptide (CPR) levels were recently reported in patients receiving ARNi, suggesting changes in the renal handling of CPR. However, a statistical comparison of ARNi users and non-users has not been performed. Therefore, this study aimed to statistically evaluate the impact of ARNi therapy on serum and urinary CPR levels by comparing ARNi users and non-users in a real-world clinical cohort. We retrospectively analyzed 315 patients who underwent 24-h urinary C-peptide (U-CPR) testing at Shinshu University Hospital between January 2022 and March 2024. Demographic data, diabetes status, and medication history, including ARNi use, were extracted from electronic medical records. Group comparisons were performed with the Mann-Whitney U test, correlations were analyzed with Spearman's test, and multiple regression analyses identified independent predictors of serum and urinary CPR. Of 315 patients, 11 were ARNi users. U-CPR, serum CPR, the C-peptide index, and the U-CPR/serum CPR ratio were significantly higher in ARNi users. Serum CPR moderately correlated with U-CPR in both groups; however, the regression slope was steeper in ARNi users, indicating a disproportionate rise in U-CPR. In multiple regression analyses, ARNi therapy was the strongest contributing factor for elevated U-CPR (β = 186.57 nmol/day) and serum CPR (β = 0.59 nmol/mL). ARNi therapy markedly increases U-CPR, likely through inhibition of neprilysin-mediated renal enzymatic degradation. Using U-CPR to assess β-cell function in patients receiving ARNi therapy is not recommended, while serum CPR remains a valid marker of insulin secretion.
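The "steeper regression slope" finding above can be illustrated with a minimal ordinary-least-squares slope comparison. The data pairs below are invented for the sketch and are not from the study; only the qualitative pattern (a larger U-CPR rise per unit of serum CPR in ARNi users) reflects the abstract.

```python
def ols_slope(x, y):
    """Ordinary least-squares slope of y on x (pure Python, no libraries)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    den = sum((xi - mx) ** 2 for xi in x)
    return num / den

# Hypothetical serum CPR (x) vs 24-h urinary CPR (y) pairs for two groups.
non_users = ([1.0, 2.0, 3.0, 4.0], [20.0, 40.0, 60.0, 80.0])      # slope 20
arni_users = ([1.0, 2.0, 3.0, 4.0], [50.0, 100.0, 150.0, 200.0])  # slope 50

# A steeper slope in ARNi users means U-CPR rises disproportionately with
# serum CPR, consistent with reduced renal degradation of filtered C-peptide.
assert ols_slope(*arni_users) > ols_slope(*non_users)
```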
Interstitial lung diseases (ILDs) consist of idiopathic pulmonary fibrosis (IPF) and non-IPF ILDs. While pulmonary complications in IPF are relatively well-studied, there is a need for research on non-IPF ILDs, including connective tissue disease-associated ILDs (CTD-ILDs), and non-pulmonary outcomes of ILDs. We compare hospitalization, infection, and pulmonary/cardiac vascular event outcomes in patients with various ILDs. We used data from 82 healthcare organizations between 2014 and 2023 on the TriNetX Research Network. In addition to IPF, we included patients with rheumatoid arthritis (RA)-ILD, systemic sclerosis (SSc)-ILD, myositis-ILD, hypersensitivity pneumonitis (HP) and pulmonary sarcoidosis. We employed propensity score matching (PSM) and assessed outcomes, including hospitalization, infection, and pulmonary/cardiac vascular events within one year of diagnosis. A total of 66,771 patients met the inclusion criteria, with 15,228 diagnosed with IPF and 51,543 with non-IPF ILDs. Anti-fibrotic agents were used in 30.4% of IPF patients. IPF patients had higher risks of hospitalization, cytomegalovirus disease, aspergillosis, and pulmonary/cardiac vascular events compared to those with non-IPF ILDs. Within CTD-ILDs, RA-ILD was associated with increased risks of sepsis, bacteremia, and pneumonia, while SSc-ILD had higher risks of pulmonary vascular events. Myositis-ILD showed elevated risks of hospitalization and mortality compared to RA-ILD, whereas patients with HP and pulmonary sarcoidosis experienced more favorable outcomes. We identified distinct risk profiles across ILD subtypes, with increased infection risks in RA-ILD and heightened pulmonary/cardiac vascular event risks in SSc-ILD and IPF. These findings emphasize the need for targeted surveillance/management strategies for different ILD subtypes.
Detecting changes with high accuracy in multi-temporal satellite imagery remains a challenge due to sensor limitations, localization errors and misalignment, radiometric noise, and non-linear temporal variations, which have recently motivated registration-aware and semantic-consistency-based change detection frameworks. Traditional CNN- and Transformer-based methods often struggle to effectively model such continuous-time dynamics. This paper proposes LSNN, a Lightweight Liquid Siamese Neural Network with two weight-sharing branches that process the bi-temporal images acquired at times t1 and t2, capturing long-range dependencies and temporal irregularities through liquid time-constant neurons. A cross-feature differencing module is integrated to facilitate the detection of fine-grained structural transitions and to suppress irrelevant variations. Experiments on RGB-RGB and synthetic aperture radar (SAR)-multispectral datasets, with separate train-test splits and extensive statistical analysis, indicate superior performance for the LSNN, with SeK = 87.46%, OA = 96.18%, F1 = 94.38%, IoU = 91.22%, NSE = 0.92, and PBIAS = -1.82%. These results surpass state-of-the-art Transformer- and CNN-based models such as SCaNet. Furthermore, the model's robustness to real-world perturbations is demonstrated by additional evaluations under Gaussian noise, speckle noise, and illumination variations coupled with slight spatial shifts. The LSNN architecture is thus an efficient and trustworthy option for high-accuracy satellite-based change detection, owing to its strong temporal modeling capacity, low operational cost, and steady performance under diverse sensing conditions.
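For binary change detection, the headline metrics reported above (OA, F1, IoU) are computed from a confusion matrix over pixels. The counts below are invented for illustration, and the formulas are the standard binary definitions, not the multi-class SeK variant used in the paper.

```python
def change_metrics(tp, fp, fn, tn):
    """Standard binary change-detection metrics from confusion-matrix counts:
    tp = changed pixels correctly detected, fp = false alarms,
    fn = missed changes, tn = unchanged pixels correctly kept."""
    oa = (tp + tn) / (tp + fp + fn + tn)      # overall accuracy
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    iou = tp / (tp + fp + fn)                 # intersection over union
    return oa, f1, iou

# Hypothetical pixel counts for a scene dominated by unchanged area.
oa, f1, iou = change_metrics(tp=900, fp=60, fn=40, tn=9000)

# For binary classification, F1 and IoU are algebraically linked:
# F1 = 2 * IoU / (1 + IoU), so either determines the other.
assert abs(f1 - 2 * iou / (1 + iou)) < 1e-12
```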
This study introduces an improved model for neutron-gamma density (NGD) measurements, specifically designed to reduce the impact of pair production on density estimations. Using the MCNPX simulation software, the model simulates the interaction of the NGD tool with geological formations, focusing on the energy distribution of neutrons and gamma rays, and the effect of mudcake thickness on gamma flux ratios. The study investigates how varying hydrogen weight fractions, particularly those below 10%, influence gamma and neutron flux. The new model effectively mitigates errors caused by pair production, offering more accurate density estimations, especially for formations with low hydrogen content. Experimental benchmarking with a Cf-252 source was conducted to validate the simulation results, providing a comparison with real-world data. While the model demonstrates improved performance for materials with hydrogen content less than 10%, further refinements may be needed for high hydrogen fraction scenarios.
Albeit rarely recognized as such in existing legislation, violent child discipline is a clear form of domestic violence (DV) with enduring consequences for child well-being. This study merges household survey data from 27 sub-Saharan African countries with World Bank data on law implementation to investigate whether broad DV legislation introduced since the mid-2000s has curbed violent parenting practices. Using a quasi-experimental approach to compare childrearing practices and attitudes between countries with and without anti-DV laws, before and after law implementation, we document a robust increase in violent child discipline-mainly driven by emotional punishment-and a higher endorsement of harsh parenting practices following the introduction of these laws. These adverse effects are attenuated in countries with higher income inequality, where the laws appear to play more of a "protective" role. Our findings underscore the unintended consequences when anti-DV legislation is enacted without a specific target for child protection.
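The quasi-experimental comparison described above (with/without anti-DV laws, before/after implementation) follows a difference-in-differences logic, sketched below. The prevalence figures are invented for illustration; only the sign convention matches the study's finding.

```python
def did(treated_pre, treated_post, control_pre, control_post):
    """Difference-in-differences: the change in the treated group net of
    the change observed in the control group over the same period."""
    return (treated_post - treated_pre) - (control_post - control_pre)

# Hypothetical shares of households endorsing violent discipline (percent):
# countries that adopted anti-DV laws (treated) vs countries that did not.
effect = did(treated_pre=60.0, treated_post=66.0,
             control_pre=58.0, control_post=59.0)
# A positive estimate corresponds to the paradoxical increase in violent
# discipline following law adoption that the study reports.
```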
Atezolizumab plus bevacizumab (A+B) and STRIDE (tremelimumab plus durvalumab) represent two of the approved first-line immunotherapy strategies for unresectable hepatocellular carcinoma (HCC). As no head-to-head trial exists, we assessed temporal differences in survival benefit using complementary analytic frameworks. Three independent analyses were conducted: (1) anchored indirect comparison using reconstructed survival from IMbrave150 and HIMALAYA; (2) a real-world A+B cohort (n = 883) weighted via MAIC to match STRIDE baseline characteristics; and (3) a propensity-matched population-level dataset from TriNetX (n = 309 per arm). Time-interval hazard ratios (HRs), RMST, and conditional landmark survival were calculated. Proportional hazards were assessed by Schoenfeld testing; pooled interval HRs were synthesized through meta-analysis. Across all models, A+B showed a consistent early advantage. In the first 6 months, the pooled HR favored A+B (0.75; 95% CI 0.64-0.88), with absolute survival differences of +5-7%. From 6-12 months, HRs remained numerically favorable but non-significant. Between 12 and 24 months, pooled HRs approached neutrality (0.93-1.06), with confidence intervals crossing unity and no meaningful absolute survival differences. Beyond 24-36 months, the effect reversed in favor of STRIDE (anchored HR 1.41; RWD HR 2.52; TriNetX HR 1.47). Meta-analysis confirmed a progressive late benefit for STRIDE from 36 to 60 months (pooled HR 1.54-1.75). Schoenfeld tests indicated time-dependent effects in all frameworks (p < 0.05), and ΔRMST trajectories were highly concordant (Pearson r > 0.97). A+B is associated with stronger early tumor control, while STRIDE provides more durable long-term survival beyond 2-3 years. These findings support selecting therapy based on temporal treatment objectives and advocate for biomarkers predicting early versus sustained treatment benefit.
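Restricted mean survival time (RMST), one of the measures used above to compare temporal benefit, is the area under the survival curve up to a truncation time tau. A minimal sketch with invented step-curve values (not data from the trials discussed):

```python
def rmst(times, surv, tau):
    """Area under a right-continuous step survival curve S(t) up to tau.
    `times` are event times (ascending); `surv[i]` is the curve's value from
    times[i] onward; S(t) = 1.0 before the first event time."""
    area, prev_t, prev_s = 0.0, 0.0, 1.0
    for t, s in zip(times, surv):
        if t >= tau:
            break
        area += (t - prev_t) * prev_s
        prev_t, prev_s = t, s
    area += (tau - prev_t) * prev_s
    return area

# Hypothetical survival curves in months for two treatment arms.
arm_a = rmst([6, 12, 24], [0.80, 0.60, 0.40], tau=24)  # ~18.0 months
arm_b = rmst([6, 12, 24], [0.90, 0.65, 0.35], tau=24)  # ~19.2 months
delta_rmst = arm_b - arm_a  # positive favours arm B over this window
```

Computing ΔRMST over successive windows (0-6, 6-12, 12-24 months, and so on) is what lets a crossing-hazards comparison like A+B versus STRIDE be summarized interval by interval.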
Monitoring biochemical parameters is an essential component of pharmacological safety and routine clinical practice. Abnormalities in hepatic and renal function observed during hospitalization may reflect pharmacological exposure, underlying disease processes, or their interaction. However, real-world data describing the frequency and distribution of such laboratory abnormalities in hospital settings remain limited. This study aimed to evaluate the prevalence of biochemical abnormalities of hepatic and renal function among patients receiving pharmacological therapy and to assess the frequency of laboratory alterations associated with commonly prescribed drug groups. A retrospective observational study was conducted using laboratory data from the Department of Clinical Biochemistry. The analysis included 3,500 adult patients who underwent biochemical testing while receiving pharmacological therapy between January 2023 and December 2025. The evaluated parameters included alanine aminotransferase, aspartate aminotransferase, serum creatinine, urea, sodium, and potassium. Patients were categorized according to the main pharmacological therapy received, including antibiotics, non-steroidal anti-inflammatory drugs, and antihypertensive medications. Abnormal values were defined according to institutional laboratory reference ranges. Among the 3,500 patients included in the analysis, 52.3% were male and 47.7% were female, with a mean age of 56.8±15.4 years. Antibiotics were prescribed to 41.6% of patients, non-steroidal anti-inflammatory drugs to 33.2%, and antihypertensive medications to 25.2%. Elevated alanine aminotransferase levels were observed in 18.9% of patients, while increased aspartate aminotransferase levels were detected in 15.4%. Hepatic enzyme abnormalities were more frequently observed among patients receiving antibiotics and non-steroidal anti-inflammatory drugs, with statistically significant differences between therapy groups (p<0.05). 
Renal function abnormalities were identified in 14.7% of patients for creatinine and 12.9% for urea, particularly among patients treated with non-steroidal anti-inflammatory drugs. Electrolyte disturbances were less frequent, with hyponatremia observed in 6.1% and hyperkalemia in 4.3% of cases. Overall, 27.6% of patients exhibited at least one clinically relevant biochemical abnormality during hospitalization while receiving pharmacological therapy. A considerable proportion of hospitalized patients receiving pharmacological therapy present with clinically significant biochemical abnormalities affecting hepatic, renal, or electrolyte parameters. Although causality cannot be established in this retrospective design, these findings underscore the importance of systematic laboratory monitoring as part of hospital-based pharmacovigilance and patient safety strategies.
Heterogeneous briquettes made from a rice husk-pine sawdust blend treated with sulfuric acid have the potential to satisfy the increasing global demand for sustainable energy. However, untreated blends are constrained by unfavorable thermochemical properties. In this study, a blend comprising 10 wt% rice husk and 90 wt% pine sawdust was subjected to H2SO4 treatment at 1%, 2%, 3%, 4%, and 5% (v/v) for 1 h at ambient temperature. Briquettes were subsequently formulated, and thermochemical properties were assessed to identify treatment conditions enhancing energy and combustion performance. The briquettes were analyzed for lignocellulose content using acid detergent fiber, acid detergent lignin, and neutral detergent fiber assays; for higher heating value (HHV) using calorimetry; and for thermal efficiency and emissions using a water boiling test (WBT) coupled with a portable emission monitoring system (PEMS) 4000-series sensor. The results demonstrated that sulfuric acid treatment increased the briquettes' HHV from 18.21 ± 0.09 MJ/kg (untreated) up to 18.82 ± 0.03 MJ/kg. The 1% concentration yielded the highest thermal efficiency (71.49 ± 2.05%), accompanied by the lowest CO emissions (0.88 ± 0.03 g/MJd), meeting the ISO 19867-1:2018 Tier 5 limits, and the lowest PM2.5 emissions (100.67 ± 1.53 mg/MJd), within the Tier 3 limit. Overall, the optimal sulfuric acid treatment condition for effective combustion performance and energy delivery was 1%. However, cost-benefit analysis should be incorporated into future studies for practical real-world application.
Seizure forecasting and affective state analysis using EEG-ECG data play a pivotal role in advancing neurological and mental health monitoring. However, existing methods such as Fed-Transformer, Res-1D CNN, and Fed-ESD suffer from privacy risks, inefficient feature extraction, and high computational overhead, limiting their effectiveness in real-world applications. To overcome these challenges, this study proposes NeuroFedSense, a novel Federated Learning-enabled Privacy-Preserving Framework that integrates a Temporal Convolutional Network (TCN) with an Attention Mechanism for accurate seizure forecasting and affective state analysis using EEG-ECG data, ensuring enhanced feature selection, interpretability and efficient decentralized training. The model leverages adaptive attention-based optimization and weighted feature selection to improve classification performance while ensuring data privacy. Implemented using TensorFlow, NeuroFedSense achieves 99.54% accuracy, 99.62% precision, 99.34% recall, and a 99.46% F1-score, outperforming Fed-Transformer (97.10% accuracy), Res-1D CNN (81.62% accuracy), and FML (99.10% accuracy). The ROC-AUC score of 0.99 further establishes its superiority over competing models. Additionally, the federated approach reduces energy consumption per node by 30% and optimizes communication efficiency by minimizing data transmission by 15% over 100 rounds. By ensuring high accuracy, improved privacy, reduced computational overhead, and enhanced energy efficiency, NeuroFedSense sets a new benchmark for decentralized, real-time seizure prediction and affective state monitoring. These findings underscore its potential for deployment in intelligent, privacy-preserving healthcare applications, addressing critical challenges in remote neurological monitoring.
To report clinical outcomes and toxicity of interstitial brachytherapy (IBT), with or without external beam radiotherapy, in a predominantly frail, multimorbid cohort of patients with locally advanced vulvar cancer, analyzed according to treatment intent and clinical setting. We analyzed 47 patients with vulvar malignancies treated between 1998 and 2024 with pulse dose rate-IBT, either postoperatively or with definitive intent. Fifteen patients received pulse dose rate-IBT alone (median total dose of 57.2 Gy; range, 44.7-67.2), while 32 patients underwent combined external beam radiotherapy followed by an IBT boost (median total dose of 61.4 Gy; range, 50.7-68.1). Median age was 67 years (range, 38-90). Survival outcomes were estimated using Kaplan-Meier methods. Median follow-up was 2.2 years (range, 4-210 months). Two- and five-year overall survival rates were 58.2% and 45.7%, and disease-free survival rates were 62.5% and 53.1%, respectively. Concurrent chemotherapy was administered to 22/47 (47%) patients and was not completed in 7/22 (32%). In a predefined subgroup treated with curative intent for primary vulvar cancer, five-year overall survival, disease-free survival, and cumulative local recurrence rate were 61.7%, 66.7%, and 29.3%, respectively. Severe toxicity (grade ≥3), including ulceration, occurred in 14.2% of patients. In this real-world cohort, interstitial brachytherapy was feasible with a low rate of severe toxicity. Despite a frail, multimorbid patient population, acceptable local control was achieved. With appropriate patient selection, IBT may represent a viable option to improve local control while limiting mucosal toxicity.
Chemical esophageal burns in children, especially grade III injuries, frequently lead to cicatricial strictures, persistent dysphagia, and prolonged hospitalization, creating a high risk of complications and disability. Traditional dilation methods (blind bougienage and gastrostomy with string-guided bougienage) remain widely used; however, they are associated with limited effectiveness and a risk of perforation. This study aimed to evaluate real-world outcomes of these traditional approaches and to define their main clinical limitations. A retrospective analysis was performed in 115 children (aged 1-14 years) with chemical esophageal burns treated in a hospital setting. Grade III esophageal burns with subsequent cicatricial stricture formation were diagnosed in 46 patients. The severity of stenosis was assessed using the Yu.I. Gallinger classification. Treatment selection was determined by the severity of deformity: direct bougienage was performed when luminal passage was possible, while gastrostomy with string-guided bougienage was used in cases of severe stenosis, inability to safely pass a bougie, or after ineffective blind bougienage. Final clinical outcomes were assessed during hospitalization (at discharge). Because treatment selection depended on stenosis severity and technical feasibility, between-method comparisons were interpreted descriptively in view of confounding by indication. All patients with grade III burns had cicatricial stenoses of grades II-IV: grade II - 45.6%, grade III - 43.5%, and grade IV - 10.9%. Direct bougienage was feasible as the primary method in 45.7% of cases, whereas gastrostomy with string-guided bougienage was used in 54.3%; this distribution should be interpreted with caution because gastrostomy was preferentially selected for more severe deformity or after failed blind bougienage. Esophageal perforation was recorded in 6.9% of patients. 
In alkali burns, a 1.6-fold higher need for gastrostomy was observed than in acid burns (73.3% vs 45.2%), but the difference did not reach statistical significance (p=0.115). The mean length of hospital stay was 41.1±2.9 bed-days. Final in-hospital clinical outcomes were: good - 67.4%, satisfactory - 19.6%, and unsatisfactory - 13.0%. Traditional treatment methods for grade III chemical esophageal burns in children demonstrate important clinical limitations, including a risk of perforation and a frequent need for gastrostomy in severe cases. Given the retrospective design, selection by indication, and the absence of a direct comparison with visually controlled techniques, further comparative studies are needed to determine whether safer dilation under visual or guidewire control improves outcomes.
Federated learning (FL) has become a highly promising paradigm for privacy-preserving distributed model training, enabling edge devices to train without sharing raw data. In practice, however, edge environments are both non-stationary and asymmetric, with data distributions that vary due to shifts in user behaviour, sensing conditions, and overall environmental dynamics. This causes concept drift (sudden, gradual, and recurrent), leading to poor model performance, slower convergence, and predictive bias. Current FL approaches such as FedAvg and DP-FedAvg do not jointly address drift adaptation, differential privacy (DP), and resource efficiency. To address these constraints, we present FedDriftGuard, a federated learning layer that unifies client-level drift detection, drift-adaptive aggregation, and adaptable differential privacy into a single system compatible with the FLE architecture. The proposed DP-DriftNet model implements attention-based time encoding to capture changing data patterns and drift-directed feature weighting to allow greater flexibility in the presence of distributional changes. A drift-optimal privacy scheduler allocates noise probabilistically, subject to a limited privacy budget, thereby enforcing an appropriate privacy-utility trade-off without weakening formal DP guarantees. In addition, update sparsification, compression, and periodic transmission techniques are used to reduce communication overhead. Extensive experimentation on real-world and synthetic drift datasets shows that FedDriftGuard outperforms baseline FL techniques, achieving accuracy and F1-score gains of 9-14% and 11-17%, respectively, with 28% shorter adaptation latency and 20-35% lower communication cost. These findings are statistically significant and confirm the soundness of the proposed method. FedDriftGuard offers effective, scalable, privacy-preserving learning in drifting edge environments.
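The aggregation step that baselines like FedAvg perform, and that drift-adaptive schemes such as the one above reweight, is a size-weighted average of client parameters. A minimal sketch with made-up client vectors (the drift-aware weighting itself is specific to FedDriftGuard and is not reproduced here):

```python
def fedavg(client_params, client_sizes):
    """Size-weighted average of client parameter vectors (FedAvg-style).
    A drift-adaptive scheme would replace the raw sample counts with
    drift-aware weights, but the averaging machinery is the same."""
    total = sum(client_sizes)
    dim = len(client_params[0])
    return [
        sum(p[i] * n for p, n in zip(client_params, client_sizes)) / total
        for i in range(dim)
    ]

# Three hypothetical clients with 2-parameter models and unequal data sizes.
params = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
sizes = [100, 100, 200]
global_params = fedavg(params, sizes)  # [0.75, 0.75]
```

The larger client (200 samples) pulls the global model toward its parameters, which is exactly why non-stationary or drifting clients can bias the aggregate if weights are left purely size-based.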
Anxiety among adolescents has increased globally during the COVID-19 pandemic. Adolescents in juvenile detention centers (JDCs) may be particularly vulnerable due to restricted liberty and social isolation. Movement-based interventions such as Dance/Movement Therapy (DMT) have been studied as approaches to support emotional regulation. This exploratory study investigated the psychophysiological effects of a structured DMT intervention on anxiety reduction among adolescents in JDCs, focusing on anxiety-related changes in dopamine (DA) levels and body temperature. This quasi-experimental study included 55 female adolescents from a single juvenile detention center. Participants were allocated to either a non-DMT control group (n = 30, 16.17 ± 1.73 years), which maintained their usual institutional routine throughout the 8-week study period, or a DMT group (n = 25, 16.23 ± 1.68 years) that completed 24 DMT sessions over the same period. Anxiety and physiological measures were assessed before and after the intervention. Anxiety was measured using the Beck Anxiety Inventory (BAI). Physiological measures included mean body temperature (mTb), calculated from tympanic (core) and skin temperature measurements, and plasma DA levels measured using high-performance liquid chromatography (HPLC). Following the intervention, the DMT group showed a significant reduction in BAI scores (-16%, p < 0.001), along with significant increases in mTb (0.11 ± 0.07 °C, p < 0.001) and DA levels (+30%, p < 0.001). BAI scores were negatively correlated with mTb and DA levels, whereas mTb was positively correlated with DA levels. This study provides preliminary evidence that DMT may help alleviate anxiety and support psychophysiological regulation among adolescents in JDCs. However, the exploratory design, single-center setting, and all-female sample may limit generalizability.
Future multi-center studies with more diverse samples are needed to confirm these findings and clarify the underlying mechanisms.
Ureteral stents are routinely used following endourological procedures to ensure adequate drainage and prevent obstruction. However, stent-related morbidity remains common, and optimal stent dwell time and removal methods are not well defined. This systematic review aimed to evaluate clinical and procedural factors influencing ureteral stent dwell time and the methods used for stent removal after endourological interventions. A systematic review was conducted in accordance with PRISMA guidelines and registered on PROSPERO. MEDLINE and Embase were searched from inception to October 2025. Randomized controlled trials and comparative observational studies evaluating ureteral stent dwell time and/or removal methods in adults undergoing endourological procedures were included. Risk of bias was assessed using RoB 2 and ROBINS-I tools. Thirty-two studies encompassing 4,373 patients were included. Reported stent dwell times varied widely, most commonly ranging between 10 and 14 days in uncomplicated cases, with longer durations associated with increased rates of encrustation and removal difficulty. Removal techniques included rigid cystoscopy (48.7%), flexible cystoscopy (19.9%), extraction strings (23.5%), and device-assisted methods (7.9%). Less invasive approaches, particularly flexible cystoscopy and extraction-string removal, were consistently associated with reduced pain scores and improved patient comfort, although extraction strings carried a small risk of premature dislodgement. While practice patterns vary, the evidence suggests that a 10-14 day dwell time might be the optimal window to balance healing with the prevention of encrustation. Less invasive removal approaches, particularly flexible cystoscopy and extraction-string techniques, were generally associated with lower pain scores and high procedural success rates in selected patients. 
While these methods appear safe and better tolerated, extraction strings carried a small but consistent risk of premature dislodgement. High-quality prospective studies are needed to define determinant-based, individualized stent management strategies.