Little is known about compliance with intravitreal anti-vascular endothelial growth factor (VEGF) therapy for the treatment of macular and retinovascular diseases among Nigerians and Africans. The objective of this study was to measure compliance with three or more and six or more intravitreal anti-VEGF injections for common macular and retinovascular diseases in Nigerian clinics and evaluate the impact on visual outcomes. We conducted a retrospective multicenter chart review of 622 eyes of 528 patients diagnosed with neovascular age-related macular degeneration (nAMD), diabetic macular edema (DME), retinal vein occlusions (RVOs), including branch, central, and hemiretinal (BRVO, CRVO, and HRVO), and non-AMD choroidal neovascular membrane (CNVM) at five clinics (urban, semi-urban, and rural), collecting demographics, diagnosis, injection type/number, pre-/post-treatment BCVA (converted to LogMAR), and follow-up. Treatments were intravitreal bevacizumab (Avastin), ranibizumab (Patizra), and aflibercept (Eylea). Compliance was defined as receipt of ≥ 3 injections (the standard loading course); a stricter threshold of ≥ 6 injections was also assessed. For all 622 eyes, presenting BCVA was 1.21 ± 0.84 and final BCVA was 0.91 ± 0.80 (P < 0.001). Overall compliance with ≥ 3 injections was 47.5%, and with ≥ 6 injections, 10.1%. Compliance with ≥ 3 injections by diagnosis was as follows: AMD 50.4%, non-AMD CNVM 58.2%, BRVO 44.8%, CRVO 44.0%, HRVO 46.7%, DME 40.6%. Age (P = 0.264) and sex (P = 0.870) did not affect compliance with ≥ 3 injections. Clinic location significantly influenced compliance with ≥ 3 injections (P < 0.001), but not with ≥ 6 injections (P = 0.173). The highest rates of compliance with ≥ 3 injections were observed in urban tertiary centers. Injection type and cost were not significant factors (P = 0.36). Eyes with ≥ 3 injections achieved better vision (≥ 6/18) across all diagnoses; the most notable improvements were in non-AMD CNVM (+41.4%) and BRVO (+35%). Statistically significant LogMAR improvements were seen in CRVO (p = 0.049) and DME (p = 0.043). Postoperative endophthalmitis occurred in 2/622 eyes (0.32%), both after bevacizumab; no other serious adverse events were recorded. Real-world compliance is substantially lower than ideal. Urban and tertiary clinics show better adherence. Receiving the recommended loading doses is associated with improved visual outcomes for most diagnoses. Understanding the reasons for non-compliance through a prospective approach, and addressing them, should improve treatment outcomes for more Nigerian, and presumably African, patients receiving anti-VEGF drugs.
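For context on the LogMAR conversion used in the study above: LogMAR is the base-10 logarithm of the reciprocal of decimal acuity, so a Snellen 6/x measurement maps to log10(x/6). A minimal sketch follows (the function name is ours, not the study's):

```python
# Illustrative sketch: Snellen 6/x acuity to LogMAR, the scale used for the
# pre-/post-treatment BCVA comparison in the study above.
import math

def snellen_to_logmar(numerator: float, denominator: float) -> float:
    # LogMAR = log10(1 / decimal acuity) = log10(denominator / numerator)
    return math.log10(denominator / numerator)

print(snellen_to_logmar(6, 18))  # ~0.48, so the >= 6/18 threshold is LogMAR <= ~0.48
print(snellen_to_logmar(6, 6))   # 0.0, i.e. normal acuity
```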
Penile curvature correction surgeries for Peyronie's disease involve a learning curve before they can be performed with confidence and precision for the patient's optimal benefit. Given that these interventions are infrequent in daily practice, surgical simulation with hyperrealistic models can be a tool for acquiring technical skills correctly and safely. We developed hyperrealistic models with different degrees of curvature that allow complete simulation of Peyronie's disease correction surgeries. These models were evaluated and validated by specialist clinicians and residents in training using a questionnaire designed for this purpose. The penile models have different degrees of curvature and are anatomically accurate, mimicking the corpora cavernosa, urethra, Buck's fascia, neurovascular structures, and skin layer. They were designed with different degrees of consistency and hardness to allow simulation of any of the surgical techniques used to correct penile curvature. In an initial simulation, overall model accuracy was rated 3.8 (standard deviation (SD) 1.1) out of 6, improving to 4.8 (SD 0.9) out of 6 in a second simulation performed after the models and materials had been improved. Participants recommended the model for training in penile curvature correction procedures, with a score of 4.5 (SD 1.3) out of 6. Hyperrealistic penile curvature models produced by additive manufacturing (3D printing) may be an alternative for acquiring experience in penile curvature correction procedures and for improving outcomes and safety in real patient surgeries.
Long-term survival in metastatic colorectal cancer (mCRC) can be achieved by curative-intent metastasectomy. Patients ineligible for upfront (primary) metastasectomy (PM) may become candidates for secondary metastasectomy (SM) following response to primary chemotherapy (PCT), still with curative intent. Most studies focus either on resectable or unresectable mCRC, limiting insights into differential subgroup benefits. Here, real-world data offer a comprehensive evaluation of PM, SM, and PCT. We analyzed real-world data from mCRC patients treated at the certified Comprehensive Cancer Center of the LMU Hospital (CCC MunichLMU) from 2007 to 2021, following the ESMO Guidance for Reporting Oncology real-World evidence (ESMO-GROW). Overall survival (OS) data were matched with the Bavarian Cancer Registry. OS was assessed using Kaplan-Meier estimates and Cox regression analysis, adjusting for prognostic factors, including primary tumor sidedness and number of metastatic sites. Of 840 evaluable patients, 166 (20%) underwent PM, 520 (62%) received PCT, and 109 (13%) became eligible for SM after response to systemic treatment. OS was significantly longer for PM versus PCT (82.3 vs. 41.6 months, HR = 0.47, p < 0.0001). Multivariate analysis confirmed PM benefit across all subgroups, including high-risk patients (e.g., right-sided and multisite mCRC). OS was comparable between PM and SM (82.3 vs. 80.9 months), whereas patients ineligible for local treatments had the worst OS (25 months). Both PM and SM were associated with excellent OS, also for patients with high-risk mCRC. Our findings underscore the importance of initial evaluation and continuous reassessment of resectability throughout the course of treatment against mCRC, as implemented at the CCC MunichLMU.
Safe diagnostic reasoning is a central but challenging competency in general practice training, particularly in frontline settings where information is limited, uncertainty is common, and diagnostic error has direct implications for patient safety. Although Murtagh's diagnostic framework provides a clinically intuitive structure, its educational application and structured assessment in lower-resource, non-English-speaking training environments remain underexplored. The objective of this study was to evaluate a modified Murtagh-style safe diagnostic rubric (MM-SAFE-Dx) and its application within a general practice training context. This study was conducted as a single-centre retrospective real-world implementation study at The First Affiliated Hospital of Harbin Medical University in Northeast China. In 2025, a modified Murtagh-style safe diagnostic training programme was delivered across three voluntary rounds. Eighty-three general practice trainees and residents were included, contributing 145 scored encounters. Baseline and follow-up case-based assessments were analyzed using nonparametric methods. Preliminary real-world validity evidence for the MM-SAFE-Dx rubric was examined in terms of internal consistency, baseline-stratified discrimination, responsiveness, context sensitivity, and practical scoring stability. The overall educational signal was modest but directionally favourable. When separating first-exposure from repeated-use effects, baseline to first exposure showed minimal change, whereas subsequent rounds demonstrated more coherent directional improvements consistent with cumulative learning processes, although not all comparisons reached statistical significance. Sensitivity analyses further indicated that uncorrected baseline stratification yielded counterintuitive patterns dominated by short-term effects among lower-performing learners, whereas longitudinally informed stratification produced more interpretable results. These findings suggest that baseline measurements derived from first-time exposure to a complex diagnostic rubric may not reliably reflect underlying diagnostic competence. In this low-resource, heterogeneous general practice training environment, the MM-SAFE-Dx rubric demonstrated preliminary real-world validity and practical educational utility. Beyond evaluating a specific programme, this study highlights a broader methodological issue: structured diagnostic tools introduced into unfamiliar and resource-variable settings may require both guided implementation and context-sensitive analytical calibration before their educational effect can be meaningfully interpreted.
Background: Cerebrospinal fluid (CSF) pTau181 is used to support Alzheimer's disease (AD) diagnosis but can also rise in amyloid-negative individuals. This CSF profile (Aβ-/pTau181+) lies outside the AD continuum and complicates real-world etiologic diagnosis of neurocognitive disorders. Objective: To determine the prevalence and clinical phenotype associated with the Aβ-/pTau181+ CSF biomarker profile in a real-world memory clinic population. Methods: We screened the Mount Sinai Hospital database (2015-2024) for patients who underwent ADmark CSF biomarker testing. An Aβ-/pTau181+ group was classified using assay cutoffs (Amyloid-Total-Tau Index > 1.2, pTau181 > 54 pg/mL) and compared to an Aβ+ group (Amyloid-Total-Tau Index < 0.8) matched for pTau181 and total-Tau. Clinical variables were extracted via chart review, limited to notes preceding CSF testing and blinded to CSF results. Results: The Aβ-/pTau181+ group included 25 individuals (10.1% of the cohort) and had equally impaired cognition but fewer episodic memory complaints. Diagnosis was more often Lewy body or frontotemporal dementia. On neuroimaging, the Aβ-/pTau181+ group exhibited less white matter hyperintensity burden and temporoparietal atrophy. Conclusions: The CSF Aβ-/pTau181+ profile is frequent in real-world evaluations of cognitive impairment and presents with fewer AD phenotypic features. Further research is required to clarify the underlying biology and clinical trajectory of Aβ-/pTau181+.
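As a concrete reading of the grouping rule stated above, the following sketch applies the quoted assay cutoffs (the function and labels are ours, not from the study):

```python
# Illustrative sketch of the stated ADmark cutoffs; function and labels are ours.
def classify_csf_profile(ati: float, ptau181_pg_ml: float) -> str:
    if ati > 1.2 and ptau181_pg_ml > 54:
        return "Abeta-/pTau181+"   # amyloid-negative, pTau181-positive study group
    if ati < 0.8:
        return "Abeta+"            # amyloid-positive comparison group
    return "unclassified"          # profiles outside both study definitions

print(classify_csf_profile(ati=1.4, ptau181_pg_ml=70.0))  # Abeta-/pTau181+
```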
Decades of conflict, epidemics, and climatic shocks have severely weakened Somalia's health system. The failure of five consecutive rainy seasons in 2022-2023 led to the longest and most severe drought in recent history, resulting in an unprecedented nutrition and food security crisis. These conditions have heightened the risk of disease epidemics, particularly cholera and measles. To assess the effect of interventions on disease transmission in real time, make short-term projections of disease incidence, and inform response efforts, the World Health Organization Somalia Country Office developed the Somalia infectious disease explorer (WHO-SIDE). Here we describe the development and application of WHO-SIDE and demonstrate its potential to identify high-risk areas for targeted public health interventions. WHO-SIDE is an interactive web-based application designed using the Shiny framework in R. It estimates the time-varying reproduction number (Rt), projects case incidence, and evaluates the impact of interventions on transmission. The application uses routine disease surveillance data, offering features such as data visualization, geographic analysis, and customizable epidemic projections. Since its deployment, WHO-SIDE has been used to monitor infectious disease trends, guide targeted interventions, and evaluate public health response efforts in Somalia. Outputs were reviewed jointly by WHO and Ministry of Health staff and formed part of the evidence base used to prioritise response activities. Challenges, including limited access to technology, incomplete surveillance data, and a lack of trained personnel, have hindered its full potential. WHO-SIDE highlights the feasibility and utility of analytical tools for disease monitoring and response in resource-limited, crisis-affected settings. Realising this potential will require sustained investment in three areas: building analytical capacity among local health staff; improving the completeness and timeliness of surveillance data; and developing processes for integrating modelled outputs with field intelligence. Without progress in these areas, tools such as WHO-SIDE risk producing outputs that are difficult to interpret and act upon in operational contexts.
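WHO-SIDE's own code is not published with the abstract; as a rough sketch of the kind of time-varying reproduction number (Rt) estimation such tools implement, the Cori et al. sliding-window estimator can be written as follows, assuming daily incidence counts and a discretized serial-interval distribution (the function, defaults, and toy data are ours):

```python
# Sketch of a Cori-style R_t estimator: posterior mean (a + sum I) / (1/b + sum Lambda)
# over a sliding window, with infection pressure Lambda_t = sum_s I_{t-s} * w_s.
import numpy as np

def estimate_rt(incidence, serial_interval, window=7, a=1.0, b=5.0):
    I = np.asarray(incidence, dtype=float)
    w = np.asarray(serial_interval, dtype=float)
    w = w / w.sum()                              # normalize serial-interval weights
    lam = np.zeros(len(I))
    for t in range(1, len(I)):
        past = I[max(0, t - len(w)):t][::-1]     # I[t-1], I[t-2], ...
        lam[t] = (past * w[:len(past)]).sum()
    rt = np.full(len(I), np.nan)
    for t in range(window, len(I)):
        num = a + I[t - window + 1:t + 1].sum()          # posterior shape
        den = 1.0 / b + lam[t - window + 1:t + 1].sum()  # posterior rate
        rt[t] = num / den
    return rt

# Toy example: growing incidence with a short serial interval pushes R_t above 1.
cases = [5, 6, 8, 10, 13, 17, 22, 28, 36, 46, 59, 75]
print(estimate_rt(cases, serial_interval=[0.2, 0.5, 0.3])[-1])  # > 1
```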
The HFA-ICOS baseline cardiovascular risk stratification framework is widely recommended to guide surveillance in patients receiving potentially cardiotoxic cancer therapies, yet real-world evidence supporting its clinical risk separation remains limited. We conducted a multicenter retrospective study of adults with HER2-positive breast cancer treated at tertiary centers from 2016 to 2023. Baseline cardiovascular risk was classified by the HFA-ICOS framework (low to very high). The primary outcome was composite cardiotoxicity, defined by a decline in left ventricular ejection fraction (LVEF), deterioration in global longitudinal strain (GLS), ECG/arrhythmia events, or elevated cardiac biomarkers. Cardiotoxicity rates and patterns were compared across risk categories. Discrimination and calibration were explored, and logistic regression and Kaplan-Meier analyses were performed. Among 687 patients, 273 (39.7%) developed composite cardiotoxicity during follow-up. Cardiotoxicity occurred across all HFA-ICOS risk categories, with overlap in event rates and timing. Discrimination analyses showed modest performance for LVEF, GLS, ECG, and composite outcomes, with AUCs of 0.51-0.55. Sensitivity was low (0.28-0.37), while specificity was moderate (0.73-0.74). Calibration analyses indicated acceptable risk prediction. Multivariable models adjusted for age, comorbidities, baseline LVEF, prior anthracycline, and radiation therapy showed that baseline HFA-ICOS category was not independently associated with cardiotoxicity (aOR: 0.88; 95% CI: 0.56-1.37). Kaplan-Meier curves showed overlapping event-free survival across risk groups. In this real-world cohort of HER2-positive breast cancer patients, baseline HFA-ICOS stratification showed limited ability to clearly distinguish cardiotoxicity risk across categories. These findings suggest that baseline risk assessment alone may be insufficient for individualized prediction and should be complemented by dynamic, on-treatment surveillance strategies.
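For orientation, the discrimination quantities reported above (sensitivity, specificity, AUC) can be computed from any binary risk flag and continuous risk score as in this minimal sketch; the data here are synthetic placeholders, not study data:

```python
# Minimal sketch of the reported discrimination metrics; all inputs are synthetic.
from sklearn.metrics import roc_auc_score

y_true = [1, 0, 1, 1, 0, 0, 1, 0]        # observed cardiotoxicity (1 = event)
y_flag = [1, 0, 0, 1, 0, 1, 0, 0]        # binary high-risk classification
risk_score = [0.8, 0.2, 0.4, 0.7, 0.3, 0.6, 0.5, 0.1]  # continuous risk score

tp = sum(t and f for t, f in zip(y_true, y_flag))
fn = sum(t and not f for t, f in zip(y_true, y_flag))
tn = sum(not t and not f for t, f in zip(y_true, y_flag))
fp = sum(not t and f for t, f in zip(y_true, y_flag))
print("sensitivity", tp / (tp + fn))      # proportion of events flagged high-risk
print("specificity", tn / (tn + fp))      # proportion of non-events flagged low-risk
print("AUC", roc_auc_score(y_true, risk_score))
```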
Introduction: Trastuzumab deruxtecan (T-DXd)-induced interstitial lung disease/pneumonitis (ILD) represents a clinically significant and potentially fatal toxicity. Discrepancies exist regarding its reported frequency and severity between clinical trials (CTs) and real-world data (RWD). This meta-analysis aims to evaluate the incidence of T-DXd-related ILD and investigate its differences between CTs and RWD. Methods: A systematic review and meta-analysis was conducted in accordance with the PRISMA guidelines. Databases were searched from their inception through January 2026. CTs and real-world studies reporting T-DXd-related ILD were included in the analysis. Pooled incidences for all-grade, grade ≥3, and fatal ILD were calculated using random-effects models. Subgroup analyses comparing CTs and RWD, and meta-regression analyses for relevant outcomes, were performed. Results: Thirty-five studies (19 CTs, 16 RWD) including 6840 patients were analyzed. The pooled incidence was 8.8% for all-grade ILD, 1.6% for grade ≥3 ILD, and 0.26% for fatal ILD. RWD was independently associated with lower reported rates of all-grade and fatal ILD, while prior lines of therapy were the main predictor of grade ≥3 ILD. Conclusion: ILD risk with T-DXd differs by severity and data source. Vigilant monitoring is essential, particularly in heavily pretreated patients.
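The pooled incidences above come from random-effects models; a minimal sketch of one common variant, DerSimonian-Laird pooling of logit-transformed proportions, is shown below (our own illustration; the study's exact model may differ):

```python
# Illustrative sketch: DerSimonian-Laird random-effects pooling of proportions
# on the logit scale, the family of models behind pooled incidences like those above.
import numpy as np

def pooled_proportion(events, totals):
    events, totals = np.asarray(events, float), np.asarray(totals, float)
    p = (events + 0.5) / (totals + 1.0)            # continuity-corrected proportions
    y = np.log(p / (1 - p))                        # logit transform
    v = 1.0 / (events + 0.5) + 1.0 / (totals - events + 0.5)
    w = 1.0 / v                                    # fixed-effect (inverse-variance) weights
    y_fe = (w * y).sum() / w.sum()
    q = (w * (y - y_fe) ** 2).sum()                # Cochran's Q heterogeneity statistic
    tau2 = max(0.0, (q - (len(y) - 1)) / (w.sum() - (w ** 2).sum() / w.sum()))
    w_re = 1.0 / (v + tau2)                        # random-effects weights
    y_re = (w_re * y).sum() / w_re.sum()
    return 1.0 / (1.0 + np.exp(-y_re))             # back-transform to a proportion

# Toy example: three hypothetical studies reporting all-grade ILD counts.
print(pooled_proportion(events=[12, 8, 20], totals=[150, 90, 210]))
```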
Parkinson's disease (PD) remains underdiagnosed in Thailand, and its rising prevalence presents a growing challenge for the healthcare system. The previously validated CheckPD digital population screening platform has been implemented nationally in collaboration with the Thai Red Cross Society (TRCS) and the National Health Security Office (NHSO), enabling integration of digital PD risk screening into preventive health frameworks. This study aimed to evaluate the early phase of a national rollout of the CheckPD platform, focusing on population reach, adoption, predictive performance, exploratory usability, and implementation factors influencing scalability across diverse real-world settings. This RE-AIM-guided implementation study in 10 Thai provinces assessed reach, adoption, completion, system performance, and positive predictive value among neurologist-evaluated screen-positive participants. Preliminary usability was assessed in 30 post-screening completers using the SUS and UEQ-S. Supplementary implementation feedback was collected from Village Health Volunteers and public health officers. Between January 2024 and October 2025, 13,381 of 18,520 users completed screening across the 10 provinces (completion rate: 72.3%). The mean SUS score was 83, with a 92% first-time task completion rate. Programme reach was achieved through multiple channels, including Village Health Volunteers (6,742 participants), community field campaigns (5,207), facilitated online training initiatives (3,448), and self-initiated app downloads (3,123). When compared with neurologists' diagnoses among 730 screen-positive participants who underwent evaluation, the screening demonstrated a positive predictive value of 81.23% (593/730; 95% CI 78.39%-84.07%). Key facilitators of implementation included TRCS endorsement and network support, community volunteer engagement, and user-centred app design. Exploratory multivariable logistic regression analysis identified educational attainment and geographic context as significant predictors of screening completion, with higher educational attainment and residence outside Bangkok associated with a higher likelihood of completing the screening workflow. The CheckPD programme demonstrates that national-scale digital screening for neurological disorders is feasible in a low-to-middle-income country when embedded within trusted institutions, supported by community networks, and aligned with data protection standards. Thailand's experience provides an early, promising, and potentially scalable model for implementing population-level improvements in brain health by enabling earlier detection and assessment of individuals at risk, in alignment with the World Health Organization's Brain Health framework.
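The reported positive predictive value and its confidence interval can be reproduced from the quoted counts with a Wald normal approximation, a useful sanity check:

```python
# Reproducing the reported PPV and 95% CI from the counts quoted above:
# 593 neurologist-confirmed cases among 730 evaluated screen-positive participants.
import math

tp, n = 593, 730
ppv = tp / n                                  # 0.8123 -> 81.23%
se = math.sqrt(ppv * (1 - ppv) / n)           # Wald standard error
lo, hi = ppv - 1.96 * se, ppv + 1.96 * se
print(f"PPV {ppv:.2%}, 95% CI {lo:.2%}-{hi:.2%}")  # ~78.4%-84.1%, matching the text
```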
Polypharmacy is a common problem in older adults with hip fractures and may negatively affect postoperative outcomes. The aim of this study was to evaluate the association between polypharmacy, including severe polypharmacy, and clinical outcomes in older adults admitted to hospital with hip fracture. A real-life observational study was conducted at a tertiary care hospital in Spain, including patients aged ≥ 70 years who underwent hip fracture surgery between January 1, 2017, and December 31, 2018. Data were extracted from electronic medical records, including demographic details, comorbidities, and medication use. Polypharmacy was defined as the use of five or more medications, and severe polypharmacy as the use of ten or more medications. Mortality rates were analyzed at 30 days, 6 months, 1 year, 2 years, and 5 years post-surgery using Kaplan-Meier survival curves and Cox regression analysis. Among 644 patients included (mean age 84.5 years, 70.5% women), 63.8% had polypharmacy and 19.1% had severe polypharmacy. Compared with patients without polypharmacy, those with polypharmacy, regardless of severity, showed higher mortality at 30 days (8.4% and 10.3% vs 3.9%), 6 months (21.3% and 21.4% vs 10.8%), 1 year (26.6% and 33.3% vs 11.6%), 2 years (38.8% and 46.0% vs 14.2%), and 5 years (68.5% and 76.2% vs 26.3%) (all p ≤ 0.05). Crude hazard ratios for 5-year mortality were 3.65 (95% CI 2.73-4.88) for patients taking 5-9 drugs and 4.51 (95% CI 3.26-6.24) for those taking ≥ 10 drugs; after full adjustment, these remained 3.12 and 3.46, respectively. Patients with polypharmacy also had more red blood cell transfusions, major complications, and worse functional recovery. Polypharmacy was associated with worse postoperative morbidity, poorer functional recovery, and higher mortality in older adults with hip fracture. These findings suggest that medication burden may serve as a marker of clinical vulnerability rather than an isolated causal factor. Prospective interventional studies are needed to determine whether medication optimization improves outcomes.
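As a schematic of the crude-versus-adjusted hazard ratio analysis described above, a survival-model sketch using the lifelines library might look as follows (the DataFrame, column names, and values are placeholders, not the study dataset):

```python
# Schematic sketch of crude vs. adjusted Cox models via lifelines; data are placeholders.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "years": [0.1, 2.0, 5.0, 1.2, 4.5, 0.5, 3.3, 5.0, 2.7, 4.0],
    "died":  [1,   0,   0,   1,   0,   1,   1,   0,   1,   0],
    "polypharmacy": [1, 1, 0, 1, 0, 1, 0, 0, 1, 0],   # >= 5 drugs at admission
    "age":   [88, 84, 79, 90, 81, 86, 83, 77, 89, 80],
})

crude = CoxPHFitter().fit(df[["years", "died", "polypharmacy"]],
                          duration_col="years", event_col="died")
adjusted = CoxPHFitter().fit(df, duration_col="years", event_col="died")
print(crude.hazard_ratios_["polypharmacy"])     # crude HR for polypharmacy
print(adjusted.hazard_ratios_["polypharmacy"])  # HR adjusted for age
```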
Trifluridine/tipiracil (FTD/TPI; TAS-102) is an established later-line treatment option for refractory metastatic colorectal cancer (mCRC), and randomized evidence supports the addition of bevacizumab. However, real-world data from Turkey are limited. We conducted a multicenter retrospective cohort study of adults with mCRC who received FTD/TPI plus bevacizumab (combination) or FTD/TPI monotherapy in routine clinical practice between June 2021 and May 2025, with follow-up updated until September 25, 2025. Overall survival (OS) was the primary endpoint, and progression-free survival (PFS), response outcomes in radiologically evaluable patients, and safety were secondary endpoints. Survival was analyzed using Kaplan-Meier estimates and log-rank tests, with hazard ratios estimated using Cox regression. A prespecified multivariable Cox model for OS was adjusted for ECOG performance status, number of metastatic sites, liver metastasis, age group, and treatment group. Seventy-eight patients were included (combination therapy, n = 57; monotherapy, n = 21). Median OS was 8 months (95% CI, 6.17-9.83) in the combination group and 6 months (95% CI, 5.03-6.97) in the monotherapy group (log-rank p = 0.437). Median PFS was 4 months (95% CI, 2.83-5.16) versus 3 months (95% CI, 1.92-4.07) (log-rank p = 0.409). Best radiologic response was evaluable in 65 patients. In univariable Cox regression, treatment group was not significantly associated with OS or PFS. In the multivariable Cox model for OS, treatment group remained not significantly associated with OS (adjusted HR 1.251, 95% CI 0.644-2.427; p = 0.509), whereas ECOG 1 (vs. 0) and liver metastasis (yes vs. no) were associated with worse OS. Hematologic toxicity was the dominant safety signal in both groups, and no unexpected safety signals were observed. In this multicenter Turkish real-world cohort, FTD/TPI-based therapy was feasible with manageable toxicity in heavily pretreated refractory mCRC. Comparative survival estimates between combination therapy and monotherapy were not statistically significant in unadjusted or adjusted analyses and should be interpreted as exploratory in light of baseline imbalances, subgroup-size asymmetry, and residual confounding. These findings complement, rather than challenge, the comparative efficacy estimates established in randomized trials such as SUNLIGHT.
The overlapping prevalence of chronic hepatitis B (CHB) and metabolic dysfunction-associated steatotic liver disease (MASLD) is high, increasing the risk of liver complications. However, data on whether antiviral therapy influences lipid profiles are limited. Therefore, this study aimed to investigate the efficacy and safety of tenofovir alafenamide (TAF) in treatment-naïve CHB patients with MASLD. A retrospective analysis was conducted on 96 treatment-naïve CHB patients with MASLD who received TAF monotherapy. Clinical data were collected at baseline and at 12, 24, 36, and 48 weeks after treatment initiation. Changes in HBV DNA, ALT, AST, HBsAg, TBIL, ALB, liver stiffness measurement (LSM), controlled attenuation parameter (CAP), lipid biomarkers (HDL, LDL, TG, TC), and renal markers (BUN, Cr, β2-microglobulin) were compared before and after treatment. A total of 96 treatment-naïve CHB patients with MASLD (56 males and 40 females) were eligible and enrolled. The mean age was 50.69 ± 10.89 years, with ALT and AST levels of 90.54 ± 17.76 U/L and 74.01 ± 15.09 U/L, respectively. The rates of undetectable HBV DNA at 12, 24, 36, and 48 weeks were 67.71%, 72.92%, 91.67%, and 96.88%, respectively, all significantly different from baseline (P < 0.001). After 48 weeks of TAF treatment, ALT levels decreased significantly from 90.54 ± 17.76 U/L to 30.81 ± 16.06 U/L (P = 0.001), and AST levels decreased from 74.01 ± 15.09 U/L to 24.78 ± 14.95 U/L (P = 0.002). HBsAg quantification decreased from 6061.29 ± 972.96 IU/mL to 3621.11 ± 699.54 IU/mL (P = 0.047). LSM decreased from 6.53 ± 1.52 kPa to 5.96 ± 1.67 kPa (P = 0.01), and CAP decreased from 299.09 ± 26.44 dB/m to 290.04 ± 28.38 dB/m (P = 0.023). No significant changes were observed in TBIL, ALB, lipid biomarkers (HDL, LDL, TG, TC), or renal function markers (BUN, Cr, β2-microglobulin) throughout the treatment period. In treatment-naïve CHB patients with MASLD, TAF effectively suppressed viral replication, significantly reduced liver stiffness and hepatic steatosis, and had no adverse effects on lipid profiles or renal function.
This study aimed to describe and compare patient-reported outcome measures (PROMs) and objective clinical outcome measures (CROMs) in the treatment of age-related macular degeneration (AMD), exploring the concordance between these measures within a value-based healthcare (VBH) framework. This prospective, multicenter, observational, real-world study was conducted at three tertiary referral hospitals specializing in the treatment of neovascular AMD. Clinical outcomes (CROMs) and patient-reported outcomes (PROMs) were analyzed using the National Eye Institute Visual Functioning Questionnaire 25 (NEI VFQ-25) questionnaire as a functional assessment tool. Data were collected at baseline and at three, six, and 12 months following initiation of intravitreal anti-vascular endothelial growth factor (anti-VEGF) therapy. Statistical analysis was primarily descriptive. The comparison between baseline and 12 months in the global NEI VFQ-25 score was performed using the Wilcoxon signed-rank test for paired samples. Concordance between CROMs and PROMs was assessed using the intraclass correlation coefficient (ICC). A total of 235 eyes were included, receiving 2338 intravitreal injections. The mean age of participants was 81 years (SD = 8.57), and 55.8% were female. The mean baseline NEI VFQ-25 score was 67.83 (SD = 10.39). The median best-corrected visual acuity was 63 ETDRS letters (interquartile range [P25 - P75]: 41 - 75) at baseline, increasing to 65 letters at three months and remaining stable through 12 months of follow-up. The comparison between baseline and 12 months revealed a statistically significant difference in visual acuity (Wilcoxon signed-rank test, Z = 4.2; p < 0.001). A reduction in the proportion of patients classified as legally blind was observed, together with an increase in the proportion of patients in the reading-vision and driving-vision categories. At 12 months, 58.7% of patients reported stabilization or improvement in visual function on the NEI VFQ-25 questionnaire. Concordance between the variation in visual acuity and the variation in the global NEI VFQ-25 score showed good agreement between CROMs and PROMs (ICC = 0.76; p < 0.001). The integrated analysis of CROMs and PROMs suggests that anti-VEGF treatment for neovascular AMD is associated with stabilization or improvement in visual acuity and patients' perceived visual function. The implementation of the VBH-AMD model proved feasible in a real-world clinical setting, reinforcing the importance of integrating patient-centered measures into the evaluation of therapeutic outcomes.
The advancement of connected autonomous vehicles (CAVs) enables cooperative decision-making for traffic efficiency and safety. This study proposes a centralized lane-change decision coordination framework for multiple CAVs on highways, extending cooperative driving beyond traditional car-following strategies. The controller jointly optimizes speed adaptation and lane-change decisions over a finite prediction horizon to maximize overall traffic utility while ensuring collision-free maneuvers. Assuming reliable vehicle-to-infrastructure (V2I) communication, the planner computes acceleration, braking, and lane-change commands for all vehicles simultaneously. The underlying decision process is formulated as a mixed-integer optimization problem, which is computationally prohibitive for real-time deployment. To address this challenge, a priority-aware search strategy is developed to evaluate only the most promising lane-change combinations at each time step, integrated within a Model Predictive Control (MPC) framework enhanced with Artificial Potential Fields (APFs) for safety assurance and motion guidance. Simulation results demonstrate that the proposed framework effectively balances safety, mobility, and system-level coordination while achieving real-time feasibility through significantly reduced computational complexity. The approach offers a scalable solution for future intelligent transportation infrastructures and real-time traffic management applications.
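A conceptual sketch of the priority-aware pruning idea follows (ours, not the paper's implementation): rather than solving the full mixed-integer program, only the few most speed-blocked vehicles are allowed to change lanes each step, and the resulting small set of combinations is checked for collisions and scored by a system utility. The constant-speed rollout below stands in for the MPC/APF evaluation described above:

```python
# Conceptual sketch of priority-aware lane-change search; all names/values are ours.
from itertools import product

def collision_free(plan, vehicles, horizon=3.0, dt=0.5, gap=8.0):
    """Constant-speed rollout; reject plans putting same-lane vehicles within `gap` m."""
    for k in range(1, int(horizon / dt) + 1):
        t = k * dt
        for i in range(len(vehicles)):
            for j in range(i + 1, len(vehicles)):
                if vehicles[i]["lane"] + plan[i] == vehicles[j]["lane"] + plan[j]:
                    xi = vehicles[i]["x"] + vehicles[i]["v"] * t
                    xj = vehicles[j]["x"] + vehicles[j]["v"] * t
                    if abs(xi - xj) < gap:
                        return False
    return True

def utility(plan, vehicles):
    """Reward freeing blocked vehicles; charge a small cost per lane change."""
    return sum((v["v_des"] - v["v"]) * (m != 0) - 0.5 * abs(m)
               for m, v in zip(plan, vehicles))

def plan_step(vehicles, n_lanes=3, top_m=2):
    # Priority = speed deficit; only the top_m most blocked vehicles may move,
    # shrinking the combinatorial space from 3^N to 3^top_m candidates.
    prio = sorted(range(len(vehicles)),
                  key=lambda i: vehicles[i]["v"] - vehicles[i]["v_des"])[:top_m]
    options = [(-1, 0, 1) if i in prio else (0,) for i in range(len(vehicles))]
    best, best_u = (0,) * len(vehicles), float("-inf")
    for plan in product(*options):
        if any(not 0 <= v["lane"] + m < n_lanes for m, v in zip(plan, vehicles)):
            continue                       # lane index out of range
        if collision_free(plan, vehicles):
            u = utility(plan, vehicles)
            if u > best_u:
                best, best_u = plan, u
    return best

cars = [{"x": 0.0,  "v": 20.0, "lane": 1, "v_des": 30.0},   # blocked, wants to pass
        {"x": 15.0, "v": 22.0, "lane": 1, "v_des": 22.0},
        {"x": 40.0, "v": 28.0, "lane": 0, "v_des": 28.0}]
print(plan_step(cars))  # e.g. (-1, 0, 0): the blocked car changes lane
```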
Lung transplantation is the definitive therapy for end-stage respiratory diseases. To expand the donor lung pool, ex vivo lung perfusion (EVLP) has been developed for the assessment of marginal donor lungs. However, current evaluation methods remain limited. This study aimed to develop non-invasive imaging and monitoring techniques for the quantitative and early assessment of pulmonary function during EVLP. Three novel approaches were established: (1) lung thermography during the initial reperfusion period to assess pulmonary function, (2) optical oxygen saturation (SaO₂) imaging to assess pulmonary oxygenation, and (3) real-time lung weight measurement as an early indicator of transplant suitability. Lung thermography revealed that lung surface temperature at 8 min after shunt closure was significantly lower in non-suitable cases than in suitable cases (25.1 ± 0.6 °C vs. 27.8 ± 1.2 °C, P < 0.01). Optical SaO₂ imaging demonstrated a strong correlation between lower lobe SaO₂ calculated from SaO₂ imaging and the PaO₂/FiO₂ (P/F) ratio in the lower pulmonary vein (R = 0.855, P < 0.01), with SaO₂ being significantly lower in non-suitable cases. Real-time lung weight measurement showed that lung weight gain increased significantly after 40 min in non-suitable cases compared with suitable cases (51.6 ± 46.0 g vs. -8.8 ± 25.7 g, P < 0.01). These three approaches proved effective for the quantitative and early assessment of pulmonary function during EVLP. This review is based on a translation of the Japanese review published in the Japanese Journal of Artificial Organs in 2024 (Vol. 53, No. 3, pp. 216-220).
This research presents a comprehensive, automated framework for detecting surface cracks and measuring their widths in reinforced concrete (RC) members using a modified YOLO-V11 deep learning (DL) architecture. Manual surface-crack inspection is subjective and labor-intensive, relying heavily on an inspector's skill and on-site conditions, which often leads to inconsistent assessments and longer inspection times. The proposed approach mitigates these limitations by integrating automated crack detection with direct quantitative crack measurement. The framework comprises: (1) a DL crack segmentation model trained on a diverse dataset to enhance generalization to realistic inspection conditions for crack detection and segmentation, (2) a crack width measurement algorithm using a patching-and-stitching method, and (3) a customized image calibration and scaling approach that converts crack dimensions from pixel units to physical units using low-cost imaging devices and tools. Finally, the framework was validated against 230 measured crack points collected from both experimental specimens and existing RC structures. The prediction accuracy reached a coefficient of variation of 16.82% and a mean relative error of 12.65%, confirming the reliability of the proposed framework for crack measurement in RC structures.
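The calibration-and-scaling step described above reduces, at its core, to fixing a physical-units-per-pixel scale from a reference object of known size; a minimal sketch (function names and values are ours, not the paper's):

```python
# Illustrative sketch of pixel-to-physical calibration for crack width measurement.
def mm_per_pixel(ref_length_mm: float, ref_length_px: float) -> float:
    # A reference marker of known physical length fixes the image scale.
    return ref_length_mm / ref_length_px

def crack_width_mm(width_px: float, scale_mm_per_px: float) -> float:
    return width_px * scale_mm_per_px

scale = mm_per_pixel(ref_length_mm=50.0, ref_length_px=412.0)   # e.g. a 50 mm marker
print(f"{crack_width_mm(3.4, scale):.2f} mm")                   # 3.4 px -> ~0.41 mm
```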
Glecaprevir/pibrentasvir (G/P) is a pan-genotypic, interferon-free, direct-acting antiviral regimen approved for chronic hepatitis C (CHC) treatment. While clinical trials have demonstrated its efficacy and safety, real-world data in the Korean population remain limited. This post-marketing surveillance study aimed to evaluate the safety and effectiveness of G/P in Korean patients with CHC in routine clinical practice. A prospective, multicenter observational study was conducted across 56 institutions in Korea from January 2018 to January 2024. Adult and adolescent patients (aged ≥ 12 years) with CHC receiving G/P were enrolled. Safety was assessed through adverse events (AEs), including serious AEs (SAEs) and treatment-related AEs. Effectiveness was assessed by sustained virologic response at 12 weeks post-treatment (SVR12) in evaluable patients. Of 3061 patients enrolled, 51.1% were female and 18.6% had cirrhosis. AEs were reported in 9.7% of patients, with pruritus (2.0%) and headache (1.0%) being the most common. SAEs occurred in 1.2% of patients, and 0.3% discontinued treatment due to AEs. No new safety signals were identified. SVR12 was achieved in 98.2% of the effectiveness population (n = 2434). Among patients whose hepatitis C virus RNA was monitored during therapy, on-treatment virologic failure occurred in 1.3%, while post-treatment relapse was observed in 1.2%. G/P therapy demonstrated a manageable safety profile and high effectiveness in Korean patients with CHC in real-world settings, supporting its continued use and coverage under national health programs. Trial registration: ClinicalTrials.gov (NCT03740230).