To describe the longitudinal ultrasound features and clinical course of ovarian endometriomas in postmenopausal women, aiming to improve diagnostic accuracy and guide follow-up strategies. This retrospective observational study included postmenopausal women with at least one ovarian endometrioma identified by transvaginal ultrasound (TVUS) and monitored for a minimum of 24 months at the University of Rome Tor Vergata (2018-2023). All had known endometriosis diagnosed premenopausally by TVUS at our Unit. Clinical and ultrasound assessments were conducted at baseline and at 12 and 24 months, recording changes in TVUS characteristics and symptoms. Endometrioma size was classified using the #Enzian classification. Forty-one postmenopausal patients (mean age 53.5 ± 6.4 years) were included. A total of 45 endometriomas were analyzed, all unilocular (100%), most with typical "ground glass" echogenicity (75.6%) and classified as #Enzian O1 (82.2%) at the first postmenopausal scan. A significant early dimensional reduction occurred between pre- and postmenopause: the mean maximum diameter decreased from 29.0 ± 15.2 to 20.6 ± 9.7 mm (p = 0.002), and the mean diameter from 24.5 ± 13.1 to 17.9 ± 8.9 mm (p = 0.006), with continued decline at 12 and 24 months (p < 0.05). In contrast, morpho-structural changes emerged later during follow-up, with the proportion of cysts showing wall irregularities rising from 11.1% in premenopause to 38.6% at 24 months (p = 0.003). Vascularization remained minimal throughout. All serum epithelial tumor markers stayed within normal ranges, no suspicious or malignant transformations were observed, and pain symptoms remained stable during follow-up. Ovarian endometriomas in postmenopausal women exhibit a benign evolution, characterized by early dimensional regression and later structural remodeling without malignant features. Regular ultrasound surveillance remains essential to recognize benign morphologic changes, avoid unnecessary surgery, and promptly identify lesions requiring further evaluation.
To explore the prevalence of discrepancies between estimated and actual orthodontic treatment duration and identify predictors of treatment delays. A total of 96 patients (62.5% female; age = 15.6 ± 6.8 years) who completed orthodontic treatment with pre-adjusted edgewise fixed appliances between 2015 and 2023 were retrospectively included. Differences between actual and estimated treatment duration >3 months were classified as discrepancies and categorized as "overestimation" or "underestimation." Such discrepancies were compared on demographics, COVID period, and orthodontic parameters using Student's t-tests and chi-square tests, as appropriate. Predictors of underestimated treatment duration were assessed with logistic regression analysis. Actual treatment duration significantly differed from the estimated duration (26.5 ± 9.6 vs. 21.6 ± 3.6 months; P < 0.001), with 65.6% of cases exhibiting a treatment discrepancy (P = 0.003) and 61.5% of them being underestimated (P = 0.032). Cases with underestimated durations more commonly displayed posterior crossbite (30.9% vs. 5.4%; P = 0.004), larger SNA angle (83.7 ± 3.7 vs. 78.8 ± 3.8; P = 0.005), and bracket debonding (53.4% vs. 31.4%; P = 0.039; odds ratio [OR] = 2.51, 95% confidence interval [CI] = 1.04-6.04), and were more likely to have been conducted during the COVID period (33.9% vs. 10.8%; P = 0.011; OR = 4.23, 95% CI = 1.31-13.62) compared to overestimated ones. Posterior crossbite (P = 0.006) and the COVID period (P = 0.007) were significant predictors of treatment underestimation. Approximately two-thirds of orthodontic treatments showed discrepancies between estimated and actual duration, with 61.5% being underestimated, especially in the presence of posterior crossbite and during the COVID period. Why does orthodontic treatment take longer than expected? Why was this study done? People who start orthodontic treatment with braces often want to know how long their treatment will last. Knowing the expected treatment time is important because it affects motivation, comfort, and overall satisfaction. However, orthodontic treatment does not always go as planned, and treatment may take longer than expected. When this happens, patients and families may feel frustrated or disappointed. This study aimed to understand how often orthodontic treatment lasts longer than expected and why this happens. What did the researchers want to find out? The researchers wanted to find out which factors are linked to longer-than-expected orthodontic treatment time. They expected that certain bite problems or unexpected events could increase treatment length. What did the researchers do? The research team looked at 96 patients who received orthodontic treatment at a university dental clinic. For each patient, they compared the estimated treatment time given at the start with the actual time it took to complete treatment. They also looked at common features among patients whose treatment lasted longer than expected. What did the researchers find? In approximately two out of three patients, the estimated treatment time was not accurate. In most cases, treatment lasted longer than expected, by about 4 months. These patients were more likely to have a posterior crossbite (a problem with how the back teeth fit together) or to have been treated during the COVID-19 pandemic. What do the findings mean? Estimating orthodontic treatment time in advance is challenging, even for experienced clinicians. Some factors that affect treatment length cannot be predicted at the start of care.
Clear communication with patients and families about the possibility of delays is needed. Setting realistic expectations will improve patient satisfaction.
Despite the comprehensive theoretical discussion on function allocation, empirical evidence on human-automation interaction in practice is scarce. Such evidence, however, is a prerequisite for appraising the applicability of theoretical models in practice and adjusting models where necessary. The present study explores how organizations structure and design human-automation interaction when automating. Based on data from semi-structured interviews, we find a process consisting of three interlocked and iterative phases when automating: the requirements & process analysis phase, the business case phase, and the target process & task design phase. Surprisingly, companies do not consciously design human-automation interaction; instead, this happens implicitly while building various scenarios in the target process & task design phase. The decision as to which task is automated through which scenario is made via a business case. We conclude by proposing a shift in human-automation interaction research towards models that include financial quantification as well as its interplay with additional, non-quantifiable dimensions.
Many proteins contain intrinsically disordered regions (IDRs) that lack stable three-dimensional structure. IDR behavior is poorly understood, leading to challenges for biochemical and computational analysis of IDR-containing proteins. Formins are a diverse set of homodimers containing an IDR - the FH1 domain - that facilitates polymerization of the cytoskeletal protein actin by increasing the local concentration of actin monomers at the actin assembly site. A commonly accepted model of formin-based actin polymerization involves a capture-and-deliver process: one or more binding sites (proline-rich motifs, PRMs) "capture" actin monomers and then "deliver" actin to the actin assembly site. There is evidence that formin FH1 domains are dimerized at both ends, but much research has been performed with formin constructs lacking the N-terminal dimerization site. Here, we ask: What happens when N-terminal dimerization is added to the standard model of formin-mediated actin assembly? We extend the kinetic model of FH1-mediated actin polymerization by incorporating a coarse-grained polymer model of FH1 domain dynamics, modeling the FH1 domain as a freely-jointed chain. We find that N-terminal dimerization can impact polymerization rates by modifying binding site accessibility and/or the local concentration of binding sites (PRMs) at the actin assembly site. Which effect dominates depends on kinetic parameters and formin properties such as FH1 domain length and binding site location. Additionally, we demonstrate that our model can be fit to experimental data and used to make predictions for the effects of N-terminal dimerization on a variety of formin family members.
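To make the freely-jointed-chain picture concrete, here is a minimal sketch (not the authors' actual kinetic model): it samples random chain conformations and estimates how often a PRM at a given position along the chain falls within a capture radius of the anchored end, a rough proxy for the local-concentration effect. The segment count, segment length, capture radius, and helper names (fjc_positions, capture_probability) are illustrative assumptions.

```python
# Minimal freely-jointed-chain (FJC) sketch of an FH1-like tether, illustrating how
# the position of a binding site (PRM) along the chain affects its proximity to a
# fixed point standing in for the actin assembly site. This is NOT the published
# kinetic model; segment count, segment length, and capture radius are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def fjc_positions(n_segments, segment_length, n_samples):
    """Sample monomer positions of a freely-jointed chain anchored at the origin."""
    steps = rng.normal(size=(n_samples, n_segments, 3))          # random directions
    steps /= np.linalg.norm(steps, axis=2, keepdims=True)         # unit-length bonds
    return np.cumsum(steps * segment_length, axis=1)              # (samples, segments, 3)

def capture_probability(n_segments=20, segment_length=1.0, prm_index=10,
                        capture_radius=2.0, n_samples=50_000):
    """Fraction of conformations in which the PRM lies within the capture radius of the anchor."""
    positions = fjc_positions(n_segments, segment_length, n_samples)
    prm_dist = np.linalg.norm(positions[:, prm_index, :], axis=1)
    return np.mean(prm_dist < capture_radius)

if __name__ == "__main__":
    for idx in (2, 10, 18):  # PRM near the anchor, mid-chain, and near the far end
        print(f"PRM at segment {idx}: capture probability ~ {capture_probability(prm_index=idx):.3f}")
```

Varying prm_index and n_segments in such a sketch shows qualitatively why binding-site location and FH1 length can shift which effect dominates.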
Idiopathic rapid gastric emptying (IRGE), also called idiopathic dumping syndrome, is a rare gastrointestinal (GI) motility disorder marked by an abnormally accelerated gastric emptying with no history of gastric surgery, diabetes mellitus, or other structural anomalies of the stomach or small intestine. This disorder has a similar presentation to classic dumping syndrome. Yet, it happens in patients with an intact stomach and pylorus, with no evidence of previous gastric surgeries. Due to its similarity with other endocrine hypoglycemic syndromes and functional GI disorders, IRGE remains misdiagnosed and/or underreported. Thus, it is a significant diagnostic challenge for physicians. We present a case of a 21-year-old female with a chronic history of postprandial diarrhea, carbohydrate intolerance, and symptomatic postprandial hypoglycemia. She had no history of prior gastric surgery, and her investigations, including endoscopy and radiology, demonstrated normal findings. Gastric emptying scintigraphy (GES) revealed 100% gastric emptying at 60 minutes (retained meal value of less than 70% at 30 minutes or less than 30% at 1 hour is indicative of rapid gastric emptying). Based on these findings and after exclusion of other differential diagnoses, a diagnosis of IRGE was established. This case highlights the importance of identifying IRGE as a distinct clinical entity that causes postprandial hypoglycemia and GI symptoms. Early recognition can help in timely diagnosis and targeted therapy.
Sickle cell disease (SCD) is a genetic disorder characterized by chronic hemolysis and vaso-occlusive crises. Desidustat, an oral hypoxia-inducible factor prolyl hydroxylase inhibitor (HIF-PHI), has shown promise in treating SCD by stimulating erythropoietin production in a manner similar to the physiological response to hypoxia, promoting erythropoiesis and improving iron metabolism, leading to increased hemoglobin and RBC indices. This was a phase IIa, double-blind, randomized, placebo-controlled, parallel-group, proof-of-concept, multicenter clinical trial designed to evaluate the efficacy and safety of desidustat in Indian adults with SCD. A total of 24 participants were enrolled based on predefined eligibility criteria, with 8 subjects per dose cohort (50 mg, 100 mg, and 150 mg), randomized in a 3:1 ratio (desidustat:placebo). The study included a screening period of up to 4 weeks, an 8-week treatment phase, and a follow-up visit at week 10, totaling 98 days. Efficacy (hemoglobin response rate) was assessed at week 8 relative to baseline and compared to placebo. Dose adjustment of desidustat at week 4 was guided by hemoglobin levels measured using a calibrated HemoCue instrument. Management of SCD relies on a limited set of treatment modalities, such as hydroxyurea and blood transfusions; these offer benefits, but their limitations underscore the ongoing need for safer, more effective, and accessible treatments. This proof-of-concept trial was conducted to investigate the efficacy and safety of desidustat oral tablets, taken 3 times per week for 8 weeks, in SCD patients. Results from this trial could aid in establishing a cost-efficient therapeutic option for managing rare diseases like SCD. The study was approved by the Central Drugs Standard Control Organization (CDSCO: CT NOC number CT/SND/27/2023, protocol number DESI.23.001, version no. 01, registered on 12 June 2023) and was conducted for a new drug as per the GCP guidelines. The first subject was screened on 7 January 2025 and the last visit of the last subject was conducted on 31 July 2025, followed by clinical study report finalization on 29 September 2025.
OsH5(SiHPh2)(PiPr3)2 (1) catalyzes the monoalcoholysis of diphenylsilane with a variety of alcohols. Density functional theory (DFT) calculations suggest that the reactions occur via a highly ordered transition state resulting from the nucleophilic attack of the alcohol on the silane coordinated to the osmium center in an η1-H-SiHPh2 fashion. The alcoholysis or aminolysis of the Si-H bond of 1 with 2-hydroxypyridine or 2-aminopyridine affords OsH3{κ2-Si,N-(SiPh2-E-py)}(PiPr3)2 (E = O (4), NH (5)). Analogously, OsH4(SiH2Ph)2(PiPr3)2 (2) reacts with 2-hydroxypyridine and 2-aminopyridine to give OsH3{κ2-Si,N-(SiPh(Epy)-E-py)}(PiPr3)2 (E = O (6), NH (7)), as a result of the alcoholysis or aminolysis, respectively, of both Si-H bonds of one of the phenylsilyl ligands. Additionally, 1 catalyzes the tandem hydrosilylation/dehydrogenative silylation of salicylaldehydes with diphenylsilane to afford silacycles. DFT calculations suggest that this process happens via an outer-sphere hydrogenation of the aldehyde moiety to give a diol and the tetrahydride-silylene OsH4(=SiPh2)(PiPr3)2. Next, the silylative dehydrogenation of the phenolic OH function affords a silyl-O-functionalized pentahydride, which, upon the nucleophilic intramolecular attack of the benzylic OH group, gives the silacycle and the intermediate OsH4(η2-H2)(PiPr3)2, which reacts with diphenylsilane, releasing H2 and regenerating OsH5(SiHPh2)(PiPr3)2.
Patients who require major vascular surgery often receive antiplatelet therapy for primary or secondary prevention of cardiovascular disease. Clopidogrel resistance and variability in platelet recovery after drug discontinuation pose clinical challenges, particularly for regional anaesthesia and blood management. The aim of this study was to characterise platelet function and determine the prevalence of antiplatelet resistance using near-patient viscoelastic testing in patients undergoing major, elective non-cardiac vascular surgery. We conducted a single-centre, prospective, observational cohort study at a tertiary vascular surgery centre. Adults scheduled for elective vascular surgery were recruited into four groups: aspirin; clopidogrel; dual antiplatelet therapy; and control (no antiplatelet therapy). Blood samples were obtained at pre-operative assessment and on the day of surgery. Platelet function was assessed using thromboelastography and von Willebrand factor antigen levels. The primary outcome was the proportion of patients with antiplatelet resistance. Eighty patients were enrolled, of whom 64 proceeded to surgery. Antiplatelet resistance was common, affecting 25-70% of patients at baseline depending on regimen and 15-83% on the day of surgery. Clopidogrel resistance was most frequent (70%). Two patients experienced early graft or stent thrombosis, both with evidence of clopidogrel resistance and elevated von Willebrand factor; von Willebrand factor levels exceeded the normal range in two-thirds of patients. In an exploratory analysis, clopidogrel cessation 5-7 days pre-operatively did not result in a statistically significant change in platelet inhibition. High rates of clopidogrel resistance and elevated von Willebrand factor were observed in patients with planned vascular surgery, suggesting that current peri-operative discontinuation guidelines may not restore normal platelet function. Larger multicentre studies that incorporate standardised platelet and near-patient genetic testing are required to validate these findings and guide personalised peri-operative antiplatelet management. We studied people who were having major blood vessel surgery. Many of them were taking medicines that slow blood from clotting, such as aspirin or clopidogrel, sometimes called "blood thinner" drugs. We took blood samples before surgery and on the day of surgery to see how well their platelets (the parts of blood that help it clot) were working. These medicines are important for preventing heart problems, but they can also make surgery more difficult because of bleeding or clotting risks. Sometimes the medicines do not work properly in some people, or their effects do not wear off as expected. We wanted to understand how often this happens so doctors can plan safer care. We found that many patients did not respond to their blood‐thinner medicines as expected, especially those taking clopidogrel. In some people, the blood was still too likely to clot, even after stopping the medicine before surgery. We also found high levels of a blood protein linked to clotting in many patients. This suggests that current advice about when to stop these medicines may not always work, and better ways to check blood clotting before surgery may be needed.
The greatest impact on the burden of CNS infections has resulted from preventive measures, mainly vaccination programs, which have significantly decreased the incidence of many CNS infections. Here, we highlight the main cornerstones of vaccination programs against bacterial and viral CNS infections and point to future opportunities offered by upcoming vaccines. Vaccination programs have significantly decreased the number of cases due to Haemophilus influenzae type B and Neisseria meningitidis. For pneumococcal meningitis, new vaccines that prevent neuroinvasive serotypes have the potential to decrease case numbers in the future. Whereas a vaccine is available for tick-borne encephalitis, prevention of neuroborreliosis is limited to contact precautions. However, vaccination rates against tick-borne encephalitis remain too low in many countries. Another neurotropic virus that can be prevented effectively by vaccination is varicella zoster virus (VZV). Interestingly, vaccination against VZV also seems to reduce the risk of dementia, as recently shown. While vaccination is important in general, patients with immunosuppression benefit most from vaccinations. Finally, recent measles outbreaks impressively underline the efficacy of vaccination programs and demonstrate what can happen if vaccination rates decrease. Vaccination programs have been shown to be effective in reducing the burden of CNS infections. As vaccination rates are generally still too low and new vaccines are on the horizon, it is clear that the potential of vaccines to prevent CNS infections has not yet been fully realized.
Despite attempts to reduce the use of modern secure specialist hospitals (long-stay hospitals) for people with intellectual disabilities and/or autistic people, there remains a limited understanding of what happens after discharge. We conducted a systematic review of outcomes following resettlement from UK long-stay hospitals. Outcomes generally improve following resettlement, although some soon level off and rarely reach the level of comparable populations. We still know little about service costs or about how many readmissions/reconvictions to expect given the complexity of people's needs and such rates in other settings. Above all, most studies fail to meaningfully include the voices of people themselves. Outcomes improve after hospital, but this is not guaranteed, and even improved outcomes may still not be good enough. Future research should aim to fill a number of gaps in current knowledge, but in particular it must draw much more meaningfully on the lived experience of people and families. People with intellectual disabilities and/or autistic people are sometimes kept in hospital for a long time, often in places known as ‘long-stay hospitals’. When people leave long-stay hospitals to live in the community, they usually need support to make sure that they can live an ordinary life. We do not know much about what kind of support helps people to live better lives. We looked at research that has come out since 1994 to find out what is already known about this issue in the UK. We found that people's lives often improve when they leave long-stay hospitals, but not always, and they do not improve as much as they could. We do not know much about how much community care costs, or how often people go back to hospital or end up in prison after moving to the community. Most of the studies we read did not include the voices of people themselves. We suggest that further research is needed in order to understand what life looks like for people after they leave long-stay hospitals, and it should include the voices of people themselves.
Based on recent studies using both field surveys and laboratory experiments, we review a number of central reproductive traits of the androdioecious barnacle Scalpellum scalpellum. For the first time in any scalpellid species, development has been followed from cypris settlement to the mature dwarf male and compared with the early ontogeny of hermaphrodites. Cyprids that settled in preformed receptacles on mature hermaphrodites started to deviate structurally from hermaphrodites almost immediately after attachment. After 14 days they had matured into adult dwarf males that do not resemble any stage in hermaphrodite ontogeny. Therefore, S. scalpellum males are not hermaphrodites arrested in development but the result of a much more profound evolutionary history. Settlement experiments showed that all cyprids are capable of settling and developing into hermaphrodites. Development into males happens only in cyprids attached in receptacles on adult hermaphrodites and must therefore be governed by environmental sex determination (ESD), presumably induced by some chemical factor(s) present only in the receptacle area. We observed mating between hermaphrodites and between a hermaphrodite and its dwarf males. Hermaphrodite-to-hermaphrodite mating resembles that seen in balanomorphan barnacles, except that adjacent specimens often check their environment with their cirri for the presence of a mating partner. Dwarf male mating is by means of a unique penis structure, made almost exclusively of cuticle and extending from inside the male and bending down into the brood chamber of its partner. This male penis is much larger (relative to body size) than the structurally very different penis of the hermaphrodite individuals. The hermaphrodite recognizes when the dwarf male extends its penis and arrests its cirral motion so as not to damage or disturb its tiny partner. For the adult hermaphrodites, we showed that their allocation of resources to male and female functions is in agreement with predictions from sex allocation theory. As predicted, solitary hermaphrodites allocated fewer resources to male function than those settled gregariously, where one or several hermaphrodite partners are within mating distance. The solitary individuals had both a shorter penis and less developed testes than the gregarious ones. As also predicted, solitary hermaphrodites were more likely to carry males than the gregarious ones.
People learn about the world in many ways: from conducting experiments to copying others' behaviour. The choices we make about how to learn can impact the collective understanding of a whole population, were others to learn from us. Alan Rogers developed simulations to study these phenomena, in which agents could learn individually or socially amid a dynamic, uncertain world, and uncovered a surprising result: the availability of cheap social learning yielded no benefit to population fitness over individual learning. Rogers' Paradox spawned decades of work to understand factors that favour social learning and better model human cultural development. But what happens when humans can learn from artificial intelligence (AI) systems that are themselves learning from us? We revisit Rogers' Paradox in the context of human-AI interaction and extend the simulations towards a simplified network of humans and AIs learning together about an uncertain world. We examine the impact of several learning strategies on the equilibrium of a society's 'collective world model' and assess levers available to stakeholders in human-AI interactions to change network dynamics. We then model negative feedback loops that may arise from humans learning socially from AI, and consider other open questions that could be explored in our simulation framework. This article is part of the theme issue 'World models in natural and artificial intelligence'.
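As a point of reference for the simulations discussed above, the sketch below re-implements a Rogers-style toy model (not the authors' exact code): individual learners always track the changing environment but pay a cost, while social learners cheaply copy a random member of the previous generation and risk copying stale behaviour. All parameter values and function names are illustrative assumptions.

```python
# Rogers-style toy model sketch: with a fixed fraction of social learners, copying
# stale information erodes the apparent advantage of cheap social learning.
# Benefit, cost, environment change rate, and population size are made-up values.
import random

def simulate(p_social, benefit=1.0, cost=0.4, change_rate=0.1,
             pop_size=1000, generations=2000, seed=0):
    """Return mean population fitness for a fixed proportion of social learners."""
    rng = random.Random(seed)
    env = 0                                   # current state of the world
    behaviours = [env] * pop_size             # behaviours of the previous generation
    total_fitness = 0.0
    for _ in range(generations):
        if rng.random() < change_rate:        # the environment occasionally shifts
            env += 1
        prev = behaviours
        behaviours, fitness = [], []
        for _ in range(pop_size):
            if rng.random() < p_social:       # social learner: copy someone from last generation
                b = rng.choice(prev)
                fitness.append(benefit if b == env else 0.0)
            else:                             # individual learner: always correct, pays a cost
                b = env
                fitness.append(benefit - cost)
            behaviours.append(b)
        total_fitness += sum(fitness) / pop_size
    return total_fitness / generations

if __name__ == "__main__":
    for p in (0.0, 0.5, 0.9):
        print(f"social-learner fraction {p:.1f}: mean fitness ~ {simulate(p):.3f}")
```

A fuller treatment would let the social-learner fraction evolve rather than fixing it, which is where the paradox (equilibrium fitness no higher than individual learning alone) emerges.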
Glyphosate, a non-selective herbicide, is widely used in US agriculture, raising concerns about its off-site transport into streams. This study investigated glyphosate concentrations in streams within nested agricultural watersheds in the US Midwest, dominated by corn (Zea mays)-soybean (Glycine max) rotations. Over a 7-year period (2005-2011), water samples from three nested watersheds were collected daily or every other day and analyzed for glyphosate. Results showed no detectable glyphosate in over 80% of samples (detection limit: 0.62 µg L-1), with the highest concentrations observed during rainstorms. The maximum glyphosate concentration observed (102 µg L-1) was below the US Environmental Protection Agency drinking water standard (700 µg L-1) and the threshold for protecting aquatic life (12,000 µg L-1). Regressions indicated that glyphosate concentrations correlated strongly with phosphate concentrations, precipitation, and seasonal variations. Additionally, the watershed most susceptible to erosion showed the highest glyphosate off-site transport. These findings highlight the impact of hydrological and chemical factors on glyphosate transport under the conditions evaluated in this study. Glyphosate is a popular herbicide used to kill weeds, especially in corn and soybean fields in the US Midwest. However, there are concerns about what happens to glyphosate and whether it washes off the fields into nearby streams. A study conducted between 2005 and 2011 examined water samples from three streams in the Midwest to detect the presence of glyphosate. Out of over 3000 samples tested, more than 82% had glyphosate concentrations below the detection limit of the testing equipment. The highest levels of glyphosate were found during heavy rainstorms, especially just after it was applied to the fields. It is worth noting that, even during these times, the highest levels detected were still below the levels considered safe for drinking water in the United States. Overall, it appears that seasonal rainfall and soil erosion play a significant role in how much glyphosate ends up in the water.
Antibiotics have saved an untold number of people and animals since penicillin's miraculous discovery in 1928. In the following half-century, progressive discoveries involving antibiotic resistance genes (ARGs), the microorganisms responsible, and their transferrable genetic material have yielded the tools necessary for genetic engineering, birthing the biotechnologies that continue to revolutionize healthcare. After half a century of antibiotic use in the biological sciences, we are, however, faced with an inconvenient question: what happens to residual antibiotics and ARG-containing recombinant DNA after experiments? Using sequencing, we demonstrate that neither severe bleach treatments nor autoclaving completely destroys plasmid-encoded ARGs in bacterial cultures. Furthermore, we show that various bacteria can be transformed using the isolated DNA, confirming that intact plasmids survived these common cell culture disposal methods. This work will catalyze future policy discussions, the development of antibiotic-free selection systems, and continued support for research into the underexplored anthropogenic sources of engineered DNA.
The proton-coupled electron transfer (PCET) process is central to sustainable electrochemical energy conversion (e.g., electrocatalytic nitrate reduction (NO3RR) to ammonia). Decoupling electron transfer (ET) and proton transfer (PT) could suppress the *H dimerization that happens in the Volmer-Tafel process, thereby enhancing activity and product selectivity. Herein, we demonstrate that π-conjugated cyanamide (NCN2-) groups featuring a flexible structure ([N═C═N]2- ⇔ [N≡C─N]2-) function as an electron accelerator and proton relay in NO3RR, leading to a decoupled ETPT process. The electron-withdrawing [N═C═N]2- induces polarization of the [Bi2O2] catalytic layers, with more σ holes and an enhanced surface electrostatic potential (VS(r)), which facilitates ET and forms high-valence NO3(1+δ)- (δ > 1) traps. Simultaneously, the switchable [N≡C─N]2- serves as a proton relay that accelerates PT in the hydrogenation of NOx intermediates. This sequential ETPT process is validated by various in situ experimental characterizations and molecular dynamics modeling. The ETPT-dominant NO3RR process achieves 95.3% Faradaic efficiency for NH3 and operates stably for over 500 h at an industrial current density (500 mA cm-2) in a paired electro-refinery system. This work establishes sequential ETPT regulation as a general strategy for optimizing PCET-mediated sustainable energy conversion systems.
Globally, the incidence of allergic diseases in children has shown a significant upward trend, closely linked to environmental exposure in early life through epigenetic regulation mechanisms. Epigenetics enables dynamic regulation of genetic information via DNA methylation, histone modification, and non-coding RNA without altering the DNA sequence. The critical window from pregnancy to the first 1000 days of life is extremely sensitive to environmental factors exerting epigenetic effects, potentially influencing lifelong allergy susceptibility through fetal programming. This narrative review synthesizes current evidence on how early life air pollution exposure contributes to childhood allergic diseases via epigenetic mechanisms. A literature search was conducted using PubMed, Web of Science, Scopus, and Embase databases (up to December 2025). Studies were selected that provided mechanistic insights, epidemiological evidence from birth cohorts, or experimental findings relevant to epigenetic regulation. This review aims to provide a comprehensive overview of common air pollutant sources, their epigenetic effects at different developmental stages, and potential interventions during key windows to inform prevention and treatment strategies for childhood allergic diseases. Childhood allergies like asthma, hay fever and eczema are becoming more common worldwide. Research shows that air pollution plays an important role, especially when exposure happens during pregnancy and early life. Air pollution comes from many sources. Outdoor pollutants include tiny particles from vehicles and factories, as well as gases like nitrogen dioxide. Indoor pollutants include tobacco smoke, mold from damp homes, and chemicals from new furniture. This review explains how air pollution affects children’s health through epigenetics. Epigenetics works like instructions that tell our genes when to turn on or off. These instructions can be changed by environmental factors, including air pollution, without altering the DNA itself. During pregnancy, harmful particles can cross the placenta and cause epigenetic changes in the developing baby. These changes affect genes related to the immune system, making the child more likely to develop allergies later. Different pollutants affect different genes, and the timing of exposure matters greatly. The most vulnerable periods include the time before conception, during pregnancy, and the first few years of life. Understanding these mechanisms helps identify the most harmful pollutants and critical windows for prevention, guiding better public health policies to protect children’s health.
Encouraging leisure-time physical activity has proved to be a hard-to-solve complex problem. Underpinning this complexity is the range of determinants of sedentary behaviour. We report on a participatory actor mapping exercise which sought to understand the people and organisations in the local system in which a novel exercise referral scheme in a suburban/semi-rural region of England has been introduced. The participatory actor mapping exercise was conducted in two phases in September 2023. The first phase generated a draft actor map visually illustrating the key organisations and individuals (actors) that make up the local physical activity promotion system. The map was aligned to a socio-ecological model in which actors were positioned within a category of determinants. In the second phase, the draft actor map was refined during two workshops with participants (n = 30) representing the key local organisations who sought to promote physical activity. The process of mapping revealed several factors influencing physical activity promotion: (1) differing views on the determinants of physical inactivity, and hence a hesitancy towards addressing sedentary behaviour through a strategic systems approach; (2) an emerging system of actors with interdependent roles which sometimes act independently; (3) local commitment to encouraging increased leisure-time activity but supported via a generalised approach to physical activity promotion; and (4) a peripheral role of the studied exercise referral scheme in the system of local physical activity provision. The process of a participatory approach to actor mapping highlighted both opportunities and challenges for physical activity promotion. Although there were challenges around differing actors' perspectives on sedentary behaviour and a lack of connection between actors, opportunities included an improved shared understanding of the complex issues of physical inactivity, and actor mapping provided a useful step towards adopting a systems approach with a common vision. Mapping Local Support for Physical Activity: Challenges and Opportunities. Getting people to be more active in their free time is a difficult challenge. This is partly because there are many different reasons why people are inactive. In this study, researchers looked at a new exercise referral scheme in a suburban and semi-rural area of England. They wanted to understand who the key people and organisations are in the local system that support physical activity. The researchers used a method called actor mapping, which involved two steps. First, they created a draft map showing all the people and organisations involved in promoting physical activity. These were grouped by how they influence behaviour (e.g. individual, community, or policy level). Then, in workshops with 30 participants from local organisations, the map was discussed and refined. The mapping process revealed several key findings: People had different ideas about why physical inactivity happens, making it hard to agree on a joint strategy. Many local organisations are involved and rely on each other, but they often work separately. There is local support for getting people more active, but efforts tend to be broad rather than targeted. The new exercise referral scheme plays only a small role in the wider local system. In conclusion, using a participatory mapping approach helped show both the difficulties and the potential in promoting physical activity.
Although there are challenges – such as differing views and a lack of co-ordination – this process helped build a shared understanding. It could be a helpful step towards working together as a system with a common goal.
Patients who are referred for genetic counseling and/or genetic testing may conduct web searches to try to gather more information prior to their appointment. However, little is known about the reading level of such resources or the suitability of the information they provide. This study aims to determine the readability and suitability of top-ranked webpages returned by general searches about genetic counseling and whether these metrics differ depending on which type of organization (e.g., government, non-profit) authored the webpage. Twenty webpages were identified using Google. Searches of the questions "What is a genetic counselor?", "What is genetic testing?", "Why do I need genetic testing?", and "What happens at a genetic counseling appointment?" were completed and the top 5 pages were taken from each. These webpages were then analyzed using the Flesch-Kincaid (FK) and Simple Measure of Gobbledygook (SMOG) readability tools. Both FK and SMOG assess readability as a grade level, and the goal of this study was to identify resources written below an eighth-grade level. Additionally, the webpages were analyzed using the Suitability Assessment of Materials (SAM) across 6 categories. To complete the SAM analysis, two reviewers completed the tool for each webpage. Sponsor type was determined based on the primary goal of the group that supported or published the webpage. When comparing between questions, the average FK scores for the webpages were between 8th and 12th grade and the average SMOG scores were between 11th and 14th grade. Most webpages were rated as adequate on the SAM scale. The results of this study highlight the continued need for evaluation of patient resources, especially those on the internet, to ensure they are meeting the needs of the rising number of individuals being referred to genetic services.
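For readers unfamiliar with these metrics, the sketch below applies the standard Flesch-Kincaid grade-level and SMOG formulas to a snippet of text; the sentence splitting and syllable counting are crude heuristics added here for illustration and are not the tools used in the study.

```python
# Hedged sketch of the two readability formulas named in the study (Flesch-Kincaid
# grade level and SMOG). Published tools use more careful sentence and syllable rules.
import re
from math import sqrt

def count_syllables(word):
    """Approximate syllables as runs of vowels (a rough heuristic)."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def readability(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = [count_syllables(w) for w in words]
    polysyllables = sum(1 for s in syllables if s >= 3)

    # Flesch-Kincaid grade level: 0.39*(words/sentence) + 11.8*(syllables/word) - 15.59
    fk_grade = 0.39 * (len(words) / len(sentences)) + 11.8 * (sum(syllables) / len(words)) - 15.59
    # SMOG grade: 1.0430*sqrt(polysyllables * 30/sentences) + 3.1291
    smog_grade = 1.0430 * sqrt(polysyllables * (30 / len(sentences))) + 3.1291
    return fk_grade, smog_grade

if __name__ == "__main__":
    sample = ("Genetic testing looks at your DNA to find changes that may affect your health. "
              "A genetic counselor explains what the results mean for you and your family.")
    fk, smog = readability(sample)
    print(f"Flesch-Kincaid grade ~ {fk:.1f}, SMOG grade ~ {smog:.1f}")
```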
Artificial intelligence algorithms have been developed to support clinicians in diagnosing fractures, with the intention of improving the diagnostic accuracy of clinicians reviewing X-rays. The purpose of this rapid early value assessment was to identify the existing evidence base for the technology and to assess whether there was a prima facie case for the technology to deliver positive outcomes for patients and a value-for-money investment for the National Health Service. This early value assessment assessed the potential value of using artificial intelligence to aid clinician diagnosis of fractures in emergency care settings as compared to clinician diagnosis alone. A rapid evidence review was conducted, followed by 'light touch' early economic modelling to explore whether a plausible case could be made for cost-effectiveness at the prices charged by the companies. Evidence searches were conducted in June and July 2024 to identify clinical, diagnostic and service outcomes associated with the technology. A simple decision model incorporating prevalence, sensitivity, specificity and cost per scan for each of the technologies was developed to evaluate plausible cost-effectiveness for detecting ankle and foot, wrist and hand, and hip fractures, selected based on the availability of evidence and their downstream costs and consequences. Sixteen identified studies evaluated the diagnostic accuracy of the technology. None of the included studies were conducted in the United Kingdom and all were associated with limitations. While the studies were not considered able to provide reliable estimates of diagnostic accuracy, there was a trend for the technology to improve sensitivity for detecting fractures. The technology had no discernible impact on the rate of false-positive diagnoses. Overall, most of the evaluated technologies were associated with a positive incremental net health benefit at willingness-to-pay thresholds of £20,000 and £30,000 per quality-adjusted life-year gained. Due to data limitations, it was not possible to compare technologies against each other. The results were mostly robust to scenario analyses. The evidence base for the technology is currently limited to studies evaluating diagnostic accuracy, and it is unclear whether increases in fracture detection would translate into meaningful benefits for patients and services. While there are some fractures that, if missed, can result in significant harm to patients, it is plausible that the technology would improve diagnosis of more subtle fractures that may not require a change in management. Use of the technology would not eradicate the risk of missed fractures, meaning that health services would need to continue to take precautions to avoid the risk of a missed fracture in clinical practice. A simple decision tree analysis suggested that the technology was plausibly cost-effective at conventional National Institute for Health and Care Excellence thresholds. There are significant limitations in the available evidence, leading to uncertainties about the diagnostic accuracy of the technology within NHS settings. Due to the pragmatic nature of the early value assessment and the available evidence base, the economic analysis included many gross assumptions and was unable to produce a definitive estimate of cost-effectiveness. The appraisal resulted in a number of research recommendations for evaluating the technology further. 
More detailed modelling in a full formal diagnostic assessment review is required to consider the longer-term costs and consequences of false negatives and false positives, and how they are likely to impact the estimates of cost-effectiveness. This study is registered as PROSPERO CRD42024574393. This award was funded by the National Institute for Health and Care Research (NIHR) Evidence Synthesis programme (NIHR award ref: NIHR136024) and is published in full in Health Technology Assessment; Vol. 30, No. 33. See the NIHR Funding and Awards website for further award information. X-rays are the usual method for diagnosing broken bones (fractures) in urgent care settings, including accident and emergency, urgent treatment centres, and minor injuries units. Artificial intelligence technologies have been developed to help identify fractures on X-rays. Peninsula Technology Assessment Group was commissioned to conduct an early value assessment to explore whether licensed artificial intelligence technologies could support fracture detection in urgent care while further evidence is developed. A search was conducted to identify all relevant evidence evaluating artificial intelligence to assist fracture detection using X-rays, including published evidence and confidential data from artificial intelligence companies. Sixteen studies evaluated how accurate artificial intelligence was at assisting diagnosis and five studies reported how artificial intelligence changed the time needed to interpret X-rays. None of the studies were based within the NHS, and most involved staff different from those who would normally read X-rays in the NHS. This meant that it was not possible to reliably estimate how accurate artificial intelligence would be in UK urgent care. There were early indications that artificial intelligence could help reduce missed fractures, but further research is needed to confirm this. No studies evaluated how artificial intelligence affected patient outcomes (such as health and mobility) or its impact on NHS resources (e.g. repeat appointments or overall costs). To assess whether artificial intelligence could be cost-effective for the NHS, we developed an economic model that combined information on artificial intelligence accuracy, costs, and what happens to patients once their fracture is correctly or incorrectly diagnosed (including quality of life and NHS costs). Our findings suggest most artificial intelligence technologies are reasonably priced for the estimated benefits. We explored uncertainties related to the data and assumptions in our analysis and found that our conclusions did not change most of the time.
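To illustrate the kind of calculation such a simple decision model performs, the sketch below combines prevalence, sensitivity, specificity and cost per scan into an incremental net health benefit at a £20,000-per-QALY threshold; every number and function name in it is an invented placeholder, not a value from the NIHR analysis.

```python
# Illustrative decision-tree calculation: expected cost and QALY loss per patient for
# clinician-alone vs AI-assisted fracture detection, combined into an incremental net
# health benefit (INHB). All inputs are made-up placeholders, NOT the NIHR values.
def expected_outcomes(prevalence, sensitivity, specificity, cost_per_scan,
                      cost_missed_fracture, cost_false_positive, qaly_loss_missed):
    """Expected cost (£) and QALY loss per patient for one diagnostic strategy."""
    fn = prevalence * (1 - sensitivity)          # missed fractures
    fp = (1 - prevalence) * (1 - specificity)    # false alarms
    cost = cost_per_scan + fn * cost_missed_fracture + fp * cost_false_positive
    qaly_loss = fn * qaly_loss_missed
    return cost, qaly_loss

def incremental_net_health_benefit(base, ai, threshold=20_000):
    """INHB (QALYs) of AI-assisted reading vs clinician alone at a given £/QALY threshold."""
    cost_base, qloss_base = base
    cost_ai, qloss_ai = ai
    return (qloss_base - qloss_ai) - (cost_ai - cost_base) / threshold

if __name__ == "__main__":
    clinician = expected_outcomes(prevalence=0.3, sensitivity=0.85, specificity=0.95,
                                  cost_per_scan=0.0, cost_missed_fracture=1500.0,
                                  cost_false_positive=300.0, qaly_loss_missed=0.02)
    with_ai = expected_outcomes(prevalence=0.3, sensitivity=0.92, specificity=0.95,
                                cost_per_scan=2.0, cost_missed_fracture=1500.0,
                                cost_false_positive=300.0, qaly_loss_missed=0.02)
    print(f"INHB at £20,000/QALY ~ {incremental_net_health_benefit(clinician, with_ai):.5f} QALYs per patient")
```

A positive INHB under a given set of assumptions indicates the added scan cost is outweighed by the health gained from fewer missed fractures at that threshold.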
Antifungal resistance (AFR), an emerging and significant threat to humanity, receives far less attention than antibacterial resistance. AFR management is plagued by rising incidence in both immunocompromised and immunocompetent populations, zoonotic infections, limited fungal diagnostics, a limited antifungal armamentarium with even fewer agents in the pipeline, cross-resistance between environmental fungicides and clinically used antifungals, a lack of strong antifungal stewardship programs, and limited application of novel strategies to human AFR, to name a few. In this review, we compile available information, current advances and our perspectives on major challenges in the origin and management of AFR. We conduct a systematic review of all available publications in standard databases using search terms related to AFR, covering the literature up to January 2026. We consider all major publications on key questions in AFR. We focus on the following aspects of AFR: environmental drivers, AFR and the "One Health" perspective, climate change and AFR, emerging fungal zoonoses, fungal diagnostic challenges and advances, key resistant fungal pathogens, and antifungal stewardship programs; we also summarize the use of newer antifungals and their clinical trials, and provide an overview of novel strategies for tackling AFR, such as the potential use of mycoviruses and vaccines. We also briefly present our perspective on the future of AFR progression and management. Fungi are tiny germs that can cause serious infections in humans. There are only a few medicines available to cure these infections. Sometimes, these medicines cannot kill the fungi. This is called antifungal resistance (AFR). When AFR happens, the fungus keeps growing in the body, making the person sicker, and the infection does not heal. This resistance can develop when these medicines are used too much or unnecessarily, either in farming (to protect crops) or in people (when a different medicine was needed because the germ was different). It is important to detect fungal infections and resistance early so that the right treatment can be given. In the future, we need to develop new medicines, improve laboratory testing methods for detecting these germs early in patients, use antifungal medicines carefully, and create rules to prevent their unnecessary use, especially in farming. These steps will help keep the current medicines effective and prevent antifungal resistance, so that patients are cured of such infections.