The COVID-19 pandemic accelerated telehealth service adoption in physiotherapy, including telerehabilitation (TR). However, the extent of TR use and factors influencing its implementation in Germany remain unclear. This study aimed to evaluate TR use in physiotherapy (during COVID-19 lockdowns, current use, future intentions, conditions treated, content, setting, and session type) and identify barriers and facilitators from physiotherapists' (PTs) perspectives. A mixed-methods sequential explanatory design was employed, combining a cross-sectional online survey (n = 152) with two focus group interviews (n = 9). The survey collected data on demographics, TR use, barriers, and facilitators. Focus groups explored themes emerging from the survey in greater depth. Data were analyzed using descriptive statistics and content analysis; quantitative and qualitative findings were integrated through explanatory linking and a joint display in a comparative analysis table. TR use peaked during the COVID-19 lockdowns (32.26%) but decreased to 18.06% by October 2022, with 26.45% of respondents intending future use and 43.87% considering it. Among TR users, musculoskeletal conditions were most commonly treated (75%), followed by sports (38%), pulmonology (33%), and neurology (27%). The primary barrier was the lack of physical examination (74% agreement). While technical challenges were not reported as a major barrier in the survey, interviews revealed significant concerns about insufficient internet bandwidth and technical infrastructure. Common reasons for using TR included promoting patient self-management (78% agreement) and broadening therapy options (69% agreement). Qualitative data identified additional implementation facilitators, including structured implementation processes, appropriate technical infrastructure, and patient involvement in decision-making. While TR implementation in German physiotherapy shows growth potential, several barriers currently limit its adoption. Successful implementation requires addressing PTs' capabilities, knowledge gaps, professional identity concerns, and environmental factors. Addressing these issues could enhance patient care quality, increase service accessibility, and advance healthcare delivery models.
To assess general practitioners' (GPs) awareness, attitudes, and practices related to non-pharmacological treatments for insomnia. A cross-sectional online survey was conducted between January and June 2024 among 100 general practitioners across urban and rural regions of the Czech Republic. The questionnaire assessed GPs' awareness, attitudes, and practices related to non-pharmacological treatments for insomnia. Most GPs (85%) reported using non-pharmacological approaches as the initial treatment, primarily sleep hygiene education (64%). Only 28% were familiar with Cognitive Behavioral Therapy for Insomnia (CBT-I), and fewer used structured diagnostic tools such as sleep diaries or validated scales. Overuse of hypnotics was reported by 81% of respondents. Urban GPs were more likely to refer patients to sleep specialists and showed greater confidence in non-drug management. Patient interest in non-pharmacological therapies was moderate (31%), and adherence was influenced by the quality of the doctor-patient relationship. While Czech GPs are aware of non-pharmacological treatment options for insomnia, actual implementation - especially of CBT-I and diagnostic tools - is limited. Targeted training and better access to behavioral treatment resources could improve insomnia management in primary care and reduce reliance on pharmacological interventions.
Men with obesity infrequently engage with weight management services. To determine: (1) percentage weight loss at 12 and 24 months for text messages with or without financial incentives compared to control; (2) secondary outcomes; (3) cost-effectiveness; (4) moderators of effectiveness and (5) participant and stakeholder perspectives. Assessor-blinded randomised controlled trial. United Kingdom National Health Service perspective cost-effectiveness over 24 months and modelled lifetime horizon. Mixed-methods process evaluation. Five hundred and eighty-five men with body mass index ≥ 30 kg/m² enrolled (July 2021-May 2022) in Belfast, Bristol and Glasgow; final follow-up June 2024. Random allocation to 12 months of behavioural text messages plus financial incentives (N = 196), the same texts alone (N = 194) or a 12-month waiting list control group offered 3 months of texts between 12 and 15 months (N = 195). A £400 financial incentive was lost if weight loss targets were not met. Weight change as a percentage of baseline weight at 12 and 24 months comparing control with (1) texts with financial incentives and (2) texts alone. Of 585 men (mean age 51 years; mean weight 119 kg), 227 (39%) lived in lower socioeconomic areas, 146 (25%) reported a mental health condition and 253 (40%) had multiple long-term conditions. Follow-up was completed by 426 (73%) at 12 months and 377 (64%) at 24 months. At 12 months, mean percentage weight changes (standard deviation) were -4.8% (6.1) (-5.7 kg), -2.7% (6.3) (-3.0 kg), and -1.3% (5.5) (-1.5 kg) for the incentives, text-only, and control groups, respectively. Compared to control, weight loss was significantly greater with incentives [-3.2% (97.5% confidence interval -4.6 to -1.9; p < 0.001)] but not with texts alone (-1.4%; confidence interval -2.9 to 0.0; p = 0.053). At 24 months, changes were -3.9% (-4.6 kg), -2.6% (-3.1 kg), and -2.2% (-2.6 kg), with no significant between-group differences. Intervention costs were £243 for texts with incentives and £110 for texts alone. There were no significant differences between 24-month costs and quality-adjusted life-years. Long-term modelling found texts with incentives versus control were: quality-adjusted life-year difference (95% confidence interval): 0.02 (0.007 to 0.029); cost difference: £176 (£43; £311); incremental cost-effectiveness ratio: £9748 (£7705 to £11,791). For texts alone versus control: quality-adjusted life-year difference: 0.03 (0.015 to 0.037); cost difference: £16.5 (-£117; £152); incremental cost-effectiveness ratio: £628 (-£5914 to £5384). There were no moderator effects of socioeconomic, health or well-being status for either comparison versus control. The texts-with-incentives group had higher engagement in weight goal setting, food changes, self-weighing, confidence, satisfaction and quality of life compared to the control. Generalisability to women, diverse ethnic groups and people with low literacy is uncertain. The findings are not generalisable to people with no mobile phone access. Retention was lowest in the text-messages-alone group. Texts with financial incentives have a modest but important effect to 12 months, with clinically relevant weight-loss maintenance to 24 months; they are cost-effective and equally effective regardless of socioeconomic or health characteristics. Future work should address implementation, adaptation for women and other cultures, and longer-term follow-up.
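As a worked illustration (not a re-analysis) of how the reported figures fit together: the incremental cost-effectiveness ratio is the cost difference divided by the QALY difference, so the unrounded QALY gains implied by the reported ICERs can be recovered and checked against the rounded values of 0.02 and 0.03 reported above.

\[
\mathrm{ICER}=\frac{\Delta\text{Cost}}{\Delta\text{QALY}},\qquad
\frac{£176}{£9{,}748/\text{QALY}}\approx 0.018\ \text{QALYs},\qquad
\frac{£16.5}{£628/\text{QALY}}\approx 0.026\ \text{QALYs}.
\]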
This synopsis presents independent research funded by the National Institute for Health and Care Research (NIHR) Public Health Research programme as award number NIHR129703. The Game of Stones study aimed to help men lose weight and keep it off for at least 2 years. Five hundred and eighty-five men living with obesity across the United Kingdom were split into three groups by chance: (1) supportive text messages for 1 year plus the opportunity to get money for weight loss; (2) the same text messages alone for 1 year; (3) neither for 1 year, then text messages for 3 months. The first two groups received the same daily text messages about changing weight-related behaviours. Group 1 was told at the start that £400 had been put aside for them and that money would be lost if weight targets were missed. The targets were 5% weight loss at 3 months, 10% at 6 months and maintaining that 10% loss at 12 months. The remaining money was then paid to the men after they were weighed at 12 months. Every man was asked questions about their health, well-being and experiences of being in the study. After 1 year, the men in group 1 lost the most weight (5%, 5.7 kg). The men in group 2 lost some weight (3%, 3.0 kg) but not as much as the first group. The men in group 3 lost the smallest amount of weight (1%, 1.5 kg). On average, men in group 1 received £128 for meeting weight loss targets. One year after the 12-month measures, men in groups 1 and 2 had gained back some weight. Men in group 3 lost a bit more weight between years 1 and 2. Weight loss was similar whether or not men had long-term health conditions, disability, mental health issues or lived in the most deprived areas. This study showed that Game of Stones was a popular, low-cost and modestly effective way of helping men to lose weight.
Torrefied hemp hurd (HH) was investigated as a sustainable filler for poly(lactic acid) (PLA)-based composites. HH, a byproduct of the textile industry, underwent torrefaction at 220, 260, and 300 °C to enhance its compatibility with PLA. The resulting fillers (HH220, HH260, HH300) were characterized in terms of morphology, structure, and thermochemical properties. Composites incorporating 2.5 and 5 wt% of each torrefied filler were prepared via melt mixing and characterized for their structural, thermal, mechanical, and tribological behavior. SEM analysis revealed that higher torrefaction temperatures resulted in improved filler dispersion and adhesion to the PLA matrix. While all composites exhibited reduced ductility (i.e., elongation at break), those containing HH300 demonstrated the largest increases in Young's modulus, tensile strength, hardness, and wear resistance. Notably, the composite containing 5 wt% HH300 showed significant improvements over unfilled PLA, including increased stiffness (+12%) and toughness (+27%), as well as a substantial reduction in the coefficient of friction (-83%) and a wear rate reduced by about three orders of magnitude. Finally, compared to the composite containing untreated hemp hurd, stiffness increased by 24% and toughness by 78%, with the wear rate still reduced by about three orders of magnitude and the friction coefficient lowered by 80%. Torrefied hemp hurd, especially when treated at high temperatures, is thus a promising bio-based reinforcement that improves the performance of PLA composites in structural and tribological applications.
Alterations in plasma sphingomyelin (SM) levels have been reported in Alzheimer's disease (AD), pointing to disturbances in lipid metabolism that may contribute to disease pathogenesis. Neuronal damage in early AD triggers tau release into central and peripheral systems. Despite influence from peripheral contributions, alterations in plasma total-tau (T-tau) remain valuable in indicating AD-related neurodegeneration. Investigating relationships between SM metabolism and tau release during preclinical AD may uncover important biochemical processes and support advancing early non-invasive detection and treatment approaches. This cross-sectional study investigated cognitively unimpaired (CU) older adults from the KARVIAH cohort, grouped by cortical amyloid-β (Aβ) status determined through positron emission tomography (PET) imaging (CU Aβ- and CU Aβ+), and utilised a targeted metabolomics platform (Biocrates) and Single-molecule array (Simoa) technology to quantify plasma levels of SMs and T-tau, respectively. Associations between circulating SMs and T-tau were examined within each group, with T-tau-associated SMs further evaluated for their association with cognitive performance and cortical Aβ burden and their potential to discriminate CU Aβ+ from CU Aβ- individuals. Significant positive correlations were observed between SMs and T-tau levels exclusively in CU Aβ+ individuals, suggesting connections between SM-mediated biochemical pathways and tau release from early neurodegeneration in preclinical AD. Lower SM levels were associated with weaker working memory and executive function, as well as poorer global cognition, indicating their potential predictive value for weaker cognitive performance. Moreover, SMs were also inversely associated with cortical Aβ load in CU Aβ+ individuals, possibly reflecting early SM-mediated neuroprotective responses against AD pathogenesis. Receiver operating characteristic analysis further revealed the significant potential of the SM panel in distinguishing cortical PET-Aβ status and enhancing the predictive performance of plasma T-tau in CU individuals. Therefore, circulating T-tau-associated SMs may serve as promising early biomarkers of lipid-mediated processes in CU older adults with cortical amyloid pathology and tau-related neurodegeneration.
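The final analysis step above is a ROC comparison: does adding an SM panel to plasma T-tau improve discrimination of PET-Aβ status? The sketch below illustrates the idea only; the data, panel size, and variable names are simulated placeholders, not the KARVIAH cohort or the authors' pipeline.

```python
# Illustrative ROC comparison: T-tau alone vs T-tau plus a sphingomyelin
# (SM) panel as predictors of PET-amyloid status. All data are simulated.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 100
abeta_pos = rng.integers(0, 2, n)                      # PET-Abeta status (0/1)
t_tau = 1.5 + 0.8 * abeta_pos + rng.normal(0, 1, n)    # plasma T-tau (arbitrary units)
sm_panel = rng.normal(0, 1, (n, 3)) - 0.5 * abeta_pos[:, None]  # 3 SM species

for label, X in [("T-tau alone", t_tau[:, None]),
                 ("T-tau + SM panel", np.column_stack([t_tau, sm_panel]))]:
    model = LogisticRegression().fit(X, abeta_pos)
    scores = model.predict_proba(X)[:, 1]
    print(f"{label}: AUC = {roc_auc_score(abeta_pos, scores):.2f}")
```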
This study aimed to describe and compare patient-reported outcome measures (PROMs) and objective clinical outcome measures (CROMs) in the treatment of age-related macular degeneration (AMD), exploring the concordance between these measures within a value-based healthcare (VBH) framework. This prospective, multicenter, observational, real-world study was conducted at three tertiary referral hospitals specializing in the treatment of neovascular AMD. CROMs and PROMs were analyzed using the National Eye Institute Visual Functioning Questionnaire 25 (NEI VFQ-25) as the functional assessment tool. Data were collected at baseline and at three, six, and 12 months following initiation of intravitreal anti-vascular endothelial growth factor (anti-VEGF) therapy. Statistical analysis was primarily descriptive. The comparison between baseline and 12 months in the global NEI VFQ-25 score was performed using the Wilcoxon signed-rank test for paired samples. Concordance between CROMs and PROMs was assessed using the intraclass correlation coefficient (ICC). A total of 235 eyes were included, receiving 2338 intravitreal injections. The mean age of participants was 81 years (SD = 8.57), and 55.8% were female. The mean baseline NEI VFQ-25 score was 67.83 (SD = 10.39). The median best-corrected visual acuity was 63 ETDRS letters (interquartile range [P25 - P75]: 41 - 75) at baseline, increasing to 65 letters at three months and remaining stable through 12 months of follow-up. The comparison between baseline and 12 months revealed a statistically significant difference in visual acuity (Wilcoxon signed-rank test, Z = 4.2; p < 0.001). A reduction in the proportion of patients classified as legally blind was observed, together with an increase in the proportion of patients in the reading-vision and driving-vision categories. At 12 months, 58.7% of patients reported stabilization or improvement in visual function on the NEI VFQ-25 questionnaire. Concordance between the variation in visual acuity and the variation in the global NEI VFQ-25 score showed good agreement between CROMs and PROMs (ICC = 0.76; p < 0.001). The integrated analysis of CROMs and PROMs suggests that anti-VEGF treatment for neovascular AMD is associated with stabilization or improvement in visual acuity and patients' perceived visual function. The implementation of the VBH-AMD model proved feasible in a real-world clinical setting, reinforcing the importance of integrating patient-centered measures into the evaluation of therapeutic outcomes.
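The two statistical procedures named in the methods can be sketched as follows: a paired Wilcoxon signed-rank test on baseline versus 12-month acuity, and an ICC(2,1) (Shrout and Fleiss: two-way random effects, absolute agreement, single measures) for CROM-PROM concordance. All data below are simulated placeholders, and the two change scores are standardized so that agreement is assessed on a common scale.

```python
# Minimal sketch of the paired Wilcoxon test and ICC(2,1) concordance
# analysis described above. Data are simulated, not the study's.
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(1)
n = 235
va_baseline = rng.normal(63, 15, n)           # ETDRS letters at baseline
va_12m = va_baseline + rng.normal(2, 5, n)    # ETDRS letters at 12 months
stat, p = wilcoxon(va_baseline, va_12m)
print(f"Wilcoxon signed-rank: W = {stat:.0f}, p = {p:.3g}")

def icc_2_1(ratings):
    """ICC(2,1) from the classical two-way ANOVA decomposition."""
    n_subj, k = ratings.shape
    grand = ratings.mean()
    ss_rows = k * ((ratings.mean(axis=1) - grand) ** 2).sum()
    ss_cols = n_subj * ((ratings.mean(axis=0) - grand) ** 2).sum()
    ss_err = ((ratings - grand) ** 2).sum() - ss_rows - ss_cols
    msr = ss_rows / (n_subj - 1)
    msc = ss_cols / (k - 1)
    mse = ss_err / ((n_subj - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n_subj)

d_crom = va_12m - va_baseline                      # change in acuity (CROM)
d_prom = 0.6 * d_crom + rng.normal(0, 3, n)        # simulated VFQ-25 change (PROM)
z = lambda x: (x - x.mean()) / x.std()             # common scale for agreement
print(f"ICC(2,1): {icc_2_1(np.column_stack([z(d_crom), z(d_prom)])):.2f}")
```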
Ongoing global changes are strongly impacting the distribution and incidence of vector-borne diseases (VBD) affecting both humans and animals. Livestock production is a cornerstone of the economy and food security of many countries, notably in Europe, and VBD represent a major constraint on its development. Furthermore, domestic animals can serve as reservoirs for zoonotic agents, highlighting the need for a One Health approach to anticipate and control VBD. However, research on livestock-associated vectors in Europe, particularly mosquitoes and stable flies at farm level, remains limited. Although ticks are recognized as the most important vectors in Europe, comparative studies between countries and host animal species are still scarce. On the basis of vector presence, this study assesses the entomological risk for livestock on seven farms located in two European countries, Spain and Hungary, characterized by contrasting climates and husbandry practices. During spring 2023 and 2024, as well as autumn 2023, three groups of arthropods, mosquitoes, ticks, and stable flies were collected from seven cattle, sheep, and pig farms in Spain and Hungary. Environmental, climatic, and meteorological data, together with information on management practices and animal characteristics, were collected on-site and obtained from local databases. A total of 1432 mosquitoes, 345 ticks, and 1266 stable flies were collected and identified to species level, representing 37 species in total: 30 mosquito species, 6 tick species, and 1 stable fly species. Among these, 16 species are recognized vectors of pathogens. Hungary consistently exhibited higher arthropod abundance across all groups. Mosquito diversity was also greater in Hungary, with 21 species dominated by Aedes vexans and Culex pipiens pipiens, whereas in Spain, 13 species were recorded, mainly Culex theileri and Anopheles atroparvus. Four tick species were identified in Hungary (Ixodes ricinus, Haemaphysalis concinna, Dermacentor marginatus, Dermacentor reticulatus) while two species were collected in Spain (Hyalomma lusitanicum and Rhipicephalus bursa). The stable flies Stomoxys calcitrans was the only species present in Europe and accounted for all specimens collected, 99% of which were found in Hungary. Seasonal patterns showed spring peaks for mosquitoes and stable flies, and summer/autumn peaks for ticks in Hungary. The originality of this study lies in its multi-vector description of three arthropod communities associated with three livestock species (cattle, sheep, pigs) on farms located in two European countries with contrasting environments and climates. The study demonstrated the coexistence of 16 arthropod species of veterinary and public health relevance in the surveyed farms. Their diversity and abundance were influenced by geographical contrasts between Mediterranean and Central European climates, as well as environmental characteristics, livestock species, and management practices. These findings provide updated information on the diversity of arthropod vectors present on livestock farms, regardless of production type, and highlighted the need for enhanced vector surveillance in livestock systems, which accounts for environmental, farming, and anthropogenic factors. Such efforts are essential to anticipate VBD emergence driven by invasive vectors and circulating pathogens, mitigate impacts on animal health and productivity, and address interconnected risks to both human and animal populations.
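Species richness and Shannon diversity, the community metrics underlying the abundance and diversity comparisons above, can be computed from per-farm capture counts as in this illustrative sketch (the counts and farm labels are invented, not the study's data):

```python
# Illustrative computation of species richness and Shannon diversity (H')
# from per-farm arthropod capture counts. Counts are placeholders.
import numpy as np

def shannon(counts):
    p = np.asarray(counts, dtype=float)
    p = p[p > 0] / p.sum()
    return -(p * np.log(p)).sum()

farms = {
    "Hungary_cattle": [410, 220, 90, 30, 12, 5],   # captures per species
    "Spain_sheep":    [150, 60, 8],
}
for farm, counts in farms.items():
    print(f"{farm}: richness = {len(counts)}, Shannon H' = {shannon(counts):.2f}")
```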
Fungal endophytes are important members of the holobiont of all plants, including that of Scots pine (Pinus sylvestris), potentially affecting host performance. One of the most important pathogens of Scots pine in Europe is Diplodia sapinea, which causes necrotic lesions and is becoming increasingly prevalent in northern regions. Although endophytes are known to affect plant performance, it remains unclear whether naturally established fungal communities in Scots pine shoots can modulate D. sapinea-induced necrosis. Using a field experiment, we tested the hypothesis that exclusion of airborne fungal inoculum shapes the endophytic community in shoots of pine seedlings, and that such alterations in this community influence the necrosis-inducing capacity of D. sapinea. In the field site, airborne fungal inoculum was reduced in half of the saplings by covering shoots with mesh bags. Covered (bagged) and free (unbagged) shoots were transported to the laboratory and inoculated with D. sapinea. The morphology and physiological status of the shoots were monitored using a multispectral 3D scanner, and the necrotic lesion development was assessed. The propagule exclusion resulted in endophytic communities with slightly lower richness, while shoots showed no detectable morphological or physiological differences prior to inoculation. Shoots inoculated with D. sapinea developed clear necrotic lesions, which were significantly larger in covered shoots than in the free ones. Long-read Oxford Nanopore metabarcoding revealed that community shifts following inoculation were more pronounced in covered shoots. Community composition clearly separated necrotic and healthy tissues. Our findings suggest that the structure of the resident fungal endophytic community may influence the extent of necrotic lesions caused by D. sapinea in Scots pine shoots. A more established, diverse fungal community was associated with smaller lesion sizes, whereas shoots exposed to lower propagule pressure developed larger lesions following inoculation. These results highlight the functional role of fungal community assembly in shaping disease outcomes and suggest that endophyte-based approaches may provide new opportunities for improving disease resistance in forest tree species. The results also suggest that endophytic status may need to be considered when lesion size is used to evaluate resistance to pathogens in tree breeding programs.
Porcine circovirus 3 (PCV3) has been detected worldwide in both healthy and diseased pigs, raising questions about its molecular diversity and potential genotype structure. Since its discovery, several genotyping schemes have been proposed; however, most of them rely on arbitrary criteria and often produce inconsistent or conflicting classifications. In this study, all high-quality PCV3 ORF2 sequences available in GenBank to date were analyzed to evaluate whether consistent and reproducible genotyping criteria can be defined. Phylogenetic trees reconstructed using multiple algorithms and statistical comparisons of topological similarity revealed moderate-to-high incongruence among methods, limited phylogenetic signal, and a general decline in bootstrap support toward deeper nodes. Genetic distance distributions showed complete overlap between and within proposed genotypes, further questioning the biological meaning of such partitions. Overall, these findings indicate that the current genetic landscape of PCV3 does not support a robust or standardized genotyping framework. Given the low genetic variability of PCV3 and the absence of experimental evidence for genotype-specific differences in antigenicity, epidemiology, or virulence, the definition of formal genotypes appears challenging, premature, and likely unnecessary at this stage.
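The distance-overlap argument can be made concrete with a short sketch: if pairwise genetic distances within proposed genotypes span the same range as distances between them, the partition carries little information. The distances below are simulated stand-ins for p-distances from an ORF2 alignment, not values from this study:

```python
# Sketch: compare within-genotype vs between-genotype pairwise distance
# ranges. Distances are simulated placeholders.
import numpy as np

rng = np.random.default_rng(2)
within = rng.normal(0.010, 0.004, 500).clip(0)    # simulated within-genotype p-distances
between = rng.normal(0.012, 0.004, 500).clip(0)   # simulated between-genotype p-distances

overlap_lo = max(within.min(), between.min())
overlap_hi = min(within.max(), between.max())
print(f"within:  {within.min():.4f}-{within.max():.4f}")
print(f"between: {between.min():.4f}-{between.max():.4f}")
print(f"overlapping range: {overlap_lo:.4f}-{overlap_hi:.4f}")
```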
Protein biomarkers such as interleukin 18 (IL-18), CXCL9, and the S100A alarmin proteins are increasingly used in the diagnosis and treatment of juvenile idiopathic arthritis (JIA) and Still's Disease (SD). Reported values for these biomarkers vary considerably among different testing platforms at different centers, representing a barrier to their clinical application as well as international research collaborations. We undertook a systematic evaluation of measurement comparability across different platforms in Europe (EU) and North America (NoA). Recombinant proteins including IL-18, CXCL9, S100A8/9, and S100A12 spiked into donor human serum were distributed in blinded fashion to participating centers in NoA and EU for determination of sample concentration on each center's measurement platform. Assay-specific mathematical correction formulas were calculated based on spike recovery data. Next, samples from the Childhood Arthritis and Rheumatology Research Alliance (CARRA) First-line Options for Systemic JIA Treatment (FROST) study were utilized for validation on selected platforms. Comparisons between measurement platforms before and after mathematical correction were analyzed. In the spiked samples, while overall strong correlation was observed for IL-18 measurements across different platforms, the percent recovery revealed significant analytical variation, ranging from approximately 50% to 400%. Similar variation in recovery was also observed for CXCL9, S100A8/9, and S100A12 across different measurement platforms. Analysis of real-world samples from the FROST study revealed strong correlation in IL-18 and CXCL9 values when comparing the same bead-based platforms at two different centers. Significant differences and systematic bias in measurement of FROST samples were uncovered when comparing ELISA or Ella platforms to commercial Luminex platforms. The application of regression-based mathematical correction factors generated from the spike recovery assays could only partly resolve these inter-platform differences. This is the first systematic analysis of the comparability of biomarker measurements used in JIA and SD across different platforms including ELISA, Luminex, and Ella. We demonstrate substantial differences in the quantification of standardized biomarker concentrations both between assay types and across testing sites. In contrast, analyses of real-world samples from sJIA patients showed high concordance between two independent Ella platforms, indicating that this platform could improve clinical and research standardization.
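A regression-based correction of the kind described can be sketched as follows: fit measured against nominal spiked concentrations for one platform, then invert the fit to map that platform's readings onto the nominal scale. The recovery figures here are illustrative assumptions, not study data:

```python
# Sketch of a spike-recovery correction: linear fit of measured vs nominal
# concentrations, then inverse transform. Numbers are illustrative only.
import numpy as np

nominal = np.array([50, 100, 200, 400, 800, 1600])             # spiked IL-18, pg/mL
measured = nominal * 2.1 + np.array([5, -8, 12, 20, -15, 30])  # ~210% recovery platform

slope, intercept = np.polyfit(nominal, measured, 1)  # measured ~ slope*nominal + intercept
corrected = (measured - intercept) / slope           # mathematical correction
print(f"fitted recovery: {slope:.2f}x + {intercept:.1f}")
print("corrected values:", np.round(corrected, 1))
```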
Short-term monocular deprivation transiently shifts ocular dominance in favor of the deprived eye. Here we asked whether the effect of a one-hour monocular deprivation was enhanced by the execution of a task requiring visually guided goal-directed actions, compared to a passive viewing condition where participants watched a replay of their own actions. Although the visual input was the same across conditions, we found a stronger ocular dominance shift when participants actively performed the task. These results provide strong evidence that voluntary, goal-directed actions modulate visual plasticity.
The aim of the study was to examine the relationships between insomnia, stress, and occupational burnout and to identify factors influencing their severity among health care workers. This survey-based study was conducted among 216 health care workers. It was performed using a questionnaire designed by the authors, the Perceived Stress Scale (PSS-10), the Athens Insomnia Scale (AIS), and the Maslach Burnout Inventory (MBI). Analysis of the effects of stress (PSS-10) and insomnia (AIS) on occupational burnout showed that stress correlated significantly (p < 0.05) and positively (r > 0) with all MBI subscales; that is, the more severe the stress, the higher the levels of emotional exhaustion, depersonalization, but also personal accomplishment. Insomnia likewise correlated significantly (p < 0.05) and positively (r > 0) with all MBI subscales: the more severe the insomnia, the higher the levels of emotional exhaustion, depersonalization, but also personal accomplishment. (1) Age and work experience were the sociodemographic variables that determined the occurrence of insomnia among the surveyed participants. (2) The form of employment may be an important factor that determines the level of stress, insomnia, and job burnout. Those employed under permanent contracts had lower levels of stress and job burnout, more often suffered from insomnia, and had higher levels of personal accomplishment. (3) In the study group, stress and insomnia influenced all aspects of job burnout. As stress and insomnia increased, so did levels of depersonalization, emotional exhaustion, but also personal accomplishment.
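The correlation analysis reported above amounts to computing Pearson r between the PSS-10 score and each MBI subscale; a minimal sketch with simulated scores (not the survey data):

```python
# Sketch of the reported correlation analysis: PSS-10 vs MBI subscales.
# Scores are simulated placeholders.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(3)
n = 216
pss10 = rng.normal(20, 6, n)                      # perceived stress scores
mbi = {
    "emotional exhaustion":    0.6 * pss10 + rng.normal(0, 5, n),
    "depersonalization":       0.4 * pss10 + rng.normal(0, 5, n),
    "personal accomplishment": 0.2 * pss10 + rng.normal(0, 6, n),
}
for subscale, scores in mbi.items():
    r, p = pearsonr(pss10, scores)
    print(f"PSS-10 vs {subscale}: r = {r:.2f}, p = {p:.3g}")
```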
Opioid-related mortality is a global issue, and France shows high opioid use disorder indicators. Despite recommendations for wider take-home-naloxone distribution, access remains limited in France, and many primary care professionals lack training. The SINFONI project aims to enhance naloxone education for these providers. A motion-design training program was created and sent to general practitioners (GPs) and pharmacists. A total of 138 GPs and 207 pharmacists participated in the knowledge assessment before and after training. Post-training, significant improvements were observed, and the greatest knowledge gains were in understanding naloxone's safety (P < .001). Our findings primarily identified critical "red flags" that may hinder prescription and dispensing practices.
Artificial intelligence algorithms have been developed to support clinicians in diagnosing fractures, with the intention to improve the diagnostic accuracy of clinicians reviewing X-rays. The purpose of this rapid early value assessment was to identify the existing evidence base for the technology and to assess whether there was a prima facie case for the technology to represent positive outcomes for patients and a value-for-money investment for the National Health Service. This early value assessment assessed the potential value of the use of artificial intelligence to aid clinician diagnosis of fractures in emergency care settings as compared to clinician diagnosis alone. A rapid evidence review was conducted, followed by 'light touch' early economic modelling to explore whether a plausible case could be made for cost-effectiveness at the prices charged by the companies. Evidence searches were conducted in June and July 2024 to identify clinical, diagnostic and service outcomes associated with the technology. A simple decision model incorporating prevalence, sensitivity, specificity and cost per scan for each of the technologies was developed to evaluate plausible cost-effectiveness for detecting ankle and foot, wrist and hand, and hip fractures, selected based on the availability of evidence and their downstream costs and consequences. Sixteen identified studies evaluated the diagnostic accuracy of the technology. None of the included studies were conducted in the United Kingdom and all were associated with limitations. While the studies were not considered able to provide reliable estimates of diagnostic accuracy, there was a trend for the technology to improve sensitivity for detecting fractures. The technology had no discernible impact on the rate of false-positive diagnoses. Overall, most of the evaluated technologies were associated with a positive incremental net health benefit at willingness-to-pay thresholds of £20,000 and £30,000 per quality-adjusted life-year gained. Due to data limitations, it was not possible to compare technologies against each other. The results were mostly robust to scenario analyses. The evidence base for the technology is currently limited to studies evaluating diagnostic accuracy, and it is unclear whether increases in fracture detection would translate into meaningful benefits for patients and services. While there are some fractures that, if missed, can result in significant harm to patients, it is plausible that the technology would improve diagnosis of more subtle fractures that may not require a change in management. Use of the technology would not eradicate the risk of missed fractures, meaning that health services would need to continue to take precautions to avoid the risk of a missed fracture in clinical practice. A simple decision tree analysis suggested that the technology was plausibly cost-effective at conventional National Institute for Health and Care Excellence thresholds. There are significant limitations in the available evidence, leading to uncertainties about the diagnostic accuracy of the technology within NHS settings. Due to the pragmatic nature of the early value assessment and the available evidence base, the economic analysis included many gross assumptions and was unable to produce a definitive estimate of cost-effectiveness. The appraisal resulted in a number of research recommendations for evaluating the technology further.
More detailed modelling in a full formal diagnostic assessment review is required to consider the longer-term costs and consequences of false negatives and positives, and how they are likely to impact the estimates of cost-effectiveness. This study is registered as PROSPERO CRD42024574393. This award was funded by the National Institute for Health and Care Research (NIHR) Evidence Synthesis programme (NIHR award ref: NIHR136024) and is published in full in Health Technology Assessment; Vol. 30, No. 33. See the NIHR Funding and Awards website for further award information. X-rays are the usual method for diagnosing broken bones (fractures) in urgent care settings, including accident and emergency, urgent treatment centres, and minor injuries units. Artificial intelligence technologies have been developed to help identify fractures on X-rays. Peninsula Technology Assessment Group was commissioned to conduct an early value assessment to explore whether licensed artificial intelligence technologies could support fracture detection in urgent care while further evidence is developed. A search was conducted to identify all relevant evidence evaluating artificial intelligence to assist fracture detection using X-rays, including published evidence and confidential data from artificial intelligence companies. Sixteen studies evaluated how accurate artificial intelligence was at assisting diagnosis and five studies reported how artificial intelligence changed the time needed to interpret X-rays. None of the studies were based within the UK, and most involved staff different from those who would normally read X-rays in the NHS. This meant that it was not possible to reliably estimate how accurate artificial intelligence would be in UK urgent care. There were early indications that artificial intelligence could help reduce missed fractures, but further research is needed to confirm this. No studies evaluated how artificial intelligence affected patient outcomes (such as health and mobility) or its impact on NHS resources (e.g. repeat appointments or overall costs). To assess whether artificial intelligence could be cost-effective for the NHS, we developed an economic model that combined information on artificial intelligence accuracy, costs, and what happens to patients once their fracture is correctly – and incorrectly – diagnosed (including quality of life and NHS costs). Our findings suggest most artificial intelligence technologies are reasonably priced for the estimated benefits. We explored uncertainties related to the data and assumptions in our analysis and found that our conclusions did not change most of the time.
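The 'light touch' economic model described above is, in essence, a decision tree: true and false positives and negatives are weighted by prevalence, sensitivity, and specificity, and the resulting cost and QALY differences are combined into an incremental net health benefit at the willingness-to-pay threshold. The sketch below uses invented placeholder inputs, not the assessment's figures:

```python
# Toy decision model: AI-assisted vs clinician-only X-ray reading, combined
# into an incremental net health benefit at a £20,000/QALY threshold.
# All inputs are illustrative assumptions.

def expected_outcome(prev, sens, spec, cost_per_scan,
                     c_tp=500.0, c_fn=3000.0, c_fp=200.0, c_tn=0.0,
                     q_fn_loss=0.05):
    """Return (expected cost, expected QALY loss) per patient scanned."""
    tp = prev * sens
    fn = prev * (1 - sens)
    fp = (1 - prev) * (1 - spec)
    tn = (1 - prev) * spec
    cost = cost_per_scan + tp * c_tp + fn * c_fn + fp * c_fp + tn * c_tn
    qaly_loss = fn * q_fn_loss        # only missed fractures lose QALYs here
    return cost, qaly_loss

threshold = 20_000.0
c_ai, q_ai = expected_outcome(prev=0.3, sens=0.92, spec=0.90, cost_per_scan=1.0)
c_std, q_std = expected_outcome(prev=0.3, sens=0.85, spec=0.90, cost_per_scan=0.0)
inhb = (q_std - q_ai) - (c_ai - c_std) / threshold  # incremental NHB, in QALYs
print(f"incremental net health benefit: {inhb:.5f} QALYs per patient")
```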
The length of the time window used to assess "current drug use" or "number of medications used" will influence the resulting estimates; however, no consensus exists on the optimal width of such time windows. We aimed to explore how the estimated prevalence of drug use in general, and of polypharmacy in particular, is affected by the definitions used. We conducted a drug-utilization study divided into two parts. In the first part, we focused on current drug use. Using population-based registries from Denmark, we identified adults (i.e., individuals aged ≥ 18 years) during 2020-2022, and among them, current use of different drugs, including those with typically chronic or episodic patterns of use. The second part of the study focused on polypharmacy. We estimated its prevalence, based on different definitions, using population-based registries from Denmark in a cohort of older adults (i.e., individuals aged ≥ 65 years) in 2022. We also evaluated the accuracy of different criteria for predicting polypharmacy using simulations. Evaluating current drug use, the proportion of individuals classified as exposed increased with the length of the time window for all drugs, reaching a plateau at a 120-150-day window for statins, glucose-lowering drugs, and selective serotonin reuptake inhibitors, and at a 180-300-day window for opioids, whereas no plateau was reached for non-steroidal anti-inflammatory drugs within 360 days. The prevalence of polypharmacy ranged from 21% (ten different 4th-level Anatomical Therapeutic Chemical (ATC) groups in 1 year) to 92% (two different 4th-level ATC groups in 1 year) depending on the applied definition. In the simulation, the best criterion for identifying polypharmacy required at least two dispensations during the one-year study period for each of at least five drugs, with sensitivity ranging between 0.93 and 1.0, and specificity between 0.72 and 1.0. Time windows up to 120 days are too short to identify baseline drug use in the Danish setting. How polypharmacy is defined significantly influences its estimate, suggesting a need to use multiple definitions in each study. "Current drug use" is estimated in most pharmacoepidemiological studies. However, the impact of using different time window lengths is rarely investigated. The definition of such baseline drug use itself varies substantially; for example, it may be based on prescriptions filled within the 60, 90, or 120 days prior to cohort entry. In this study, we explored how the length of the time window considered influences the estimated prevalence of "current drug use" using Danish registry data. We observed that such prevalence was considerably underestimated when time windows shorter than 120 days were considered. We also investigated how both the number of different concomitant drugs and the time window considered affected the estimated prevalence of polypharmacy. Our findings show that these factors substantially influence the estimated prevalence, underscoring the need to include multiple definitions of polypharmacy within each study.
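The first analysis can be sketched directly: classify each person as a current user if any dispensation falls within the look-back window, and observe prevalence grow with window length. Dispensation timing below is simulated as a stand-in for registry fill dates:

```python
# Sketch of prevalence of "current drug use" as a function of the
# look-back window length. Dispensation dates are simulated.
import numpy as np

rng = np.random.default_rng(4)
n = 10_000
# days since the last dispensation at the index date; chronic-like users
# refill often, episodic-like users rarely (illustrative mixture)
days_since_fill = np.where(rng.random(n) < 0.2,
                           rng.integers(1, 120, n),     # chronic-like pattern
                           rng.integers(1, 720, n))     # episodic-like pattern

for window in (30, 60, 90, 120, 180, 300, 360):
    prevalence = (days_since_fill <= window).mean()
    print(f"{window:>3}-day window: {prevalence:.1%} classified as current users")
```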
Dermatophytes are among the most common causes of superficial fungal infections worldwide. In recent years, an increasing number of cases of sexually transmitted dermatophytosis have been reported in Europe, mainly associated with Trichophyton mentagrophytes genotype VII. Genomic data on circulating strains remain limited, particularly in Southern Europe. In this study, we described three cases of sexually transmitted dermatophytosis caused by T. mentagrophytes genotype VII diagnosed in Italy and characterized using whole genome sequencing. Skin scale samples were collected from genital/perigenital lesions and processed for mycological culture, antifungal susceptibility testing, and genomic analysis. ITS sequencing confirmed the identification of genotype VII in all isolates. Phylogenetic analysis based on ITS sequences and phylogenomic analysis based on core genome single nucleotide polymorphisms consistently showed that the three isolates clustered within the genotype VII lineage and were closely related to previously described European strains. Antifungal susceptibility testing revealed low MIC values for extended-spectrum triazoles (voriconazole and posaconazole) and amphotericin B. Genomic analysis demonstrated that all isolates carried the wild-type SQLE gene, with no mutations associated with terbinafine resistance. Sequence variations were observed in several members of the MEP protease gene family and in the ZafA transcription factor gene. These findings confirm the presence of sexually transmitted T. mentagrophytes genotype VII in Italy and support the hypothesis that this genotype is an emerging cause of dermatophytosis in Europe. The integration of genomic approaches with clinical and epidemiological data may improve the understanding of transmission dynamics and help identify potential outbreaks. We describe three cases of a fungal skin infection in Italy that was likely transmitted through sexual contact. Genetic analysis showed that the infections were caused by closely related types of Trichophyton mentagrophytes (genotype VII), supporting the emergence of this pathogen in Europe.
Administration of high oral doses of sodium trifluoroacetate (NaTFA) to rabbits during pregnancy led to an excessively high trifluoroacetic acid (TFAA) body burden after repeated gavage application. This resulted in rare but apparently reproducible fetal ocular malformations, which were dose-related and exceeded the available range of historical control data (HCD). We propose that these ocular malformations (absent aqueous and vitreous humour with secondary, multiple retinal folding, likely caused by an insufficient expansion of the ocular globe) are a result of peak TFAA concentrations that reached levels high enough to trigger (a) chaotropic reactions and (b) competitive monocarboxylate transporter (MCT) inhibition in ocular tissues. Rabbit-specific characteristics in the excretion and clearance of trifluoroacetate (TFA) resulted in higher systemic exposure than in other species, making rabbits more susceptible to these effects on ocular development. Whilst still potentially relevant, the risk to human development is considered to be low given expected exposure levels.
Outcome assessment after mild traumatic brain injury (mTBI) has largely focused on acute injury severity, while the role of pre-injury health has received comparatively less attention. The American Society of Anesthesiologists (ASA) physical status classification provides a simple measure of pre-existing health and comorbidity that may be relevant to early outcomes after trauma. This study examined whether pre-injury ASA class is independently associated with short-term mortality after mTBI and compared these associations with those observed in trauma patients without brain injury (NTBI). We conducted a nationwide retrospective study using the Swedish Trauma Registry (2018-2023). Adults (≥ 18 years) with Glasgow Coma Scale (GCS) > 12 were included and classified as mTBI (ICD-10 S06.0-S06.9) or NTBI (no S06 diagnosis, AIS head = 0, age-matched). The primary outcome was 30-day all-cause mortality. Associations between clinical variables and mortality were examined using univariable and multivariable logistic regression, with models specified a priori to separate age-ASA effects from injury-related adjustment. A total of 8,670 patients were included in the mTBI cohort and 26,001 in the age-matched cohort. The mTBI cohort had poorer health (ASA ≥ 3: 27% vs 24%) and was more often injured by low-energy falls (28% vs 20%). Mortality was higher in mTBI (5% vs 2.8%), and outcomes were poorer (Glasgow Outcome Scale 1-3: 23% vs 17%). Within the mTBI cohort, mortality was strongly associated with increasing age, higher ASA category, and greater intracranial injury severity. After adjustment for age and injury-related variables, ASA class remained independently associated with 30-day mortality. Specifically, compared to ASA 1 patients, those with ASA 2 had 2.8 times higher odds of 30-day mortality, ASA 3 patients had 4.3 times higher odds, and ASA 4 patients had 21 times higher odds. A graded association between ASA class and mortality was also observed in the NTBI cohort, although the pattern of injury-related covariates differed. Pre-injury health status, measured by ASA class, was independently associated with 30-day mortality in patients with mild traumatic brain injury. Comparable associations in non-TBI trauma suggest that comorbidity and baseline health contribute substantially to early post-trauma mortality among awake trauma patients. ASA class may provide complementary information to traditional injury-based assessments, particularly in older patients, although further studies are needed to evaluate how such information can best inform clinical decision-making.
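A minimal sketch of the kind of multivariable model described, with ASA class entered as a categorical predictor (ASA 1 as reference), age adjustment, and odds ratios recovered by exponentiating the coefficients; the data are simulated under an assumed model, not the Swedish Trauma Registry:

```python
# Sketch: logistic regression of 30-day mortality on ASA class (ASA 1 as
# reference) with age adjustment. Data are simulated placeholders.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 8670
df = pd.DataFrame({
    "age": rng.normal(65, 18, n).clip(18, 100),
    "asa": rng.choice([1, 2, 3, 4], n, p=[0.25, 0.45, 0.25, 0.05]),
})
logit = -6 + 0.04 * df["age"] + 0.9 * (df["asa"] - 1)  # assumed data-generating model
df["died_30d"] = rng.random(n) < 1 / (1 + np.exp(-logit))

X = pd.get_dummies(df["asa"], prefix="asa", drop_first=True).astype(float)
X["age"] = df["age"]
X = sm.add_constant(X)
res = sm.Logit(df["died_30d"].astype(float), X).fit(disp=0)
print(np.exp(res.params).round(2))  # odds ratios vs ASA 1
```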
People who use drugs (PWUD) are at high risk of hepatitis C virus (HCV) infection, and HCV patients often have psychiatric disorders requiring nervous system drugs, including antipsychotics. These medicines can interact with HCV treatment-related metabolic pathways, producing drug-drug interactions (DDIs). This analysis focused on potential DDIs between direct-acting antivirals (DAAs) and concomitant medications used in PWUD/antipsychotic-treated HCV patients, alongside associated adverse events (AEs) and clinical interventions in Spain. Electronic medical records (BIG-PAC® database) were used to analyse adult HCV patients treated with glecaprevir/pibrentasvir (GLE/PIB) or sofosbuvir/velpatasvir (SOF/VEL) between 2017 and 2020. The study included 1,620 HCV patients, 985 identified as PWUD and 187 as antipsychotic users, 75% of whom were also PWUD. In the PWUD cohort, cardiovascular (CV) comorbidities were the most frequent; 22.7% of patients were at risk of DDIs with CV medication, with the risk being higher in GLE/PIB-treated patients (36.8%) than in those on SOF/VEL (13.7%) (p<0.001). Cardiovascular AEs were more common in the GLE/PIB group. In the antipsychotic cohort, quetiapine was the most prescribed antipsychotic comedication (26.2%), followed by paliperidone (17.6%) and olanzapine (17.1%). Fifty-one per cent of those on GLE/PIB were at risk of DDIs versus 23% on SOF/VEL (p<0.001). Two AEs were reported in the GLE/PIB group (n=37): one patient on quetiapine at a dose <300mg/day experienced extrapyramidal symptoms, leading to DAA discontinuation, and another, treated with paliperidone, experienced sedation necessitating a dose reduction. The findings highlight DDI risks in HCV patients on antipsychotics or with substance addiction, particularly with GLE/PIB. Comprehensive clinical follow-up is essential to optimise treatment and improve patient safety.
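As a plausibility check on the reported antipsychotic-cohort comparison (51% vs 23% at DDI risk, p<0.001), a two-proportion z-test can be run; note that the group sizes below are assumptions (the abstract reports n=37 for GLE/PIB, while the SOF/VEL group size is a guess), and with these assumed counts the test indeed yields p below 0.001:

```python
# Two-proportion z-test on assumed counts for the 51% vs 23% comparison.
import numpy as np
from statsmodels.stats.proportion import proportions_ztest

at_risk = np.array([19, 35])    # assumed counts at DDI risk (51% of 37, 23% of 150)
totals = np.array([37, 150])    # assumed group sizes (SOF/VEL size is a guess)
stat, p = proportions_ztest(at_risk, totals)
print(f"z = {stat:.2f}, p = {p:.3g}")
```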
To explore the mediating role of negative resource distribution in the relationship between sense of hopelessness and intensity of self-aggression in problematic substance users. Beck's concept was used to define sense of hopelessness, while Gaś's concept of the psychological aggression syndrome was used to define self-aggressive behavior. Two dimensions of this syndrome were crucial for our research ‒ self-destructive tendencies and hostility towards oneself. The mediating functions of resource distribution consistent with the desperation principle are described based on Hobfoll's Conservation of Resources Theory. The study encompassed 606 individuals; the mean age was 42.58 years. The study used the Beck Hopelessness Scale (BHS) in the Polish adaptation by Oleś and Juros, the Psychological Inventory of Aggression Syndrome (IPSA), and the Polish adaptation of Hobfoll's Conservation of Resources-Evaluation (COR-E). Our research showed a significant positive correlation between sense of hopelessness and intensity of self-destructive behaviors in problematic substance users. Distribution of resources is a mediator of the relationship between feeling of hopelessness and the frequency of self-aggressive behaviors: an increase in the feeling of hopelessness leads to a significantly unfavorable distribution of resources, while an unfavorable distribution of resources co-occurs with an increased risk of self-destructive behaviors. In a group of problematic substance users, the intensity of their sense of hopelessness is correlated with a tendency to self-aggressive behaviors. Distribution of resources mediates this correlation.
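The mediation model tested above (hopelessness to resource distribution to self-aggression) is conventionally estimated as the product of two regression paths, with a bootstrap confidence interval for the indirect effect; a minimal sketch with simulated stand-ins for the BHS, COR-E, and IPSA scores:

```python
# Sketch of a simple mediation analysis: indirect effect a*b with a
# percentile bootstrap CI. All scores are simulated placeholders.
import numpy as np

rng = np.random.default_rng(6)
n = 606
x = rng.normal(0, 1, n)                       # BHS hopelessness (standardized)
m = 0.5 * x + rng.normal(0, 1, n)             # COR-E unfavorable resource distribution
y = 0.4 * m + 0.2 * x + rng.normal(0, 1, n)   # IPSA self-aggression

def ab(idx):
    # path a: X -> M
    a = np.linalg.lstsq(np.column_stack([np.ones(len(idx)), x[idx]]),
                        m[idx], rcond=None)[0][1]
    # path b: M -> Y controlling for X
    b = np.linalg.lstsq(np.column_stack([np.ones(len(idx)), m[idx], x[idx]]),
                        y[idx], rcond=None)[0][1]
    return a * b

boot = [ab(rng.integers(0, n, n)) for _ in range(2000)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect a*b = {ab(np.arange(n)):.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```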