Maintaining stable perception amidst dynamic visual input driven by eye movements is a remarkable feat of the brain. One proposed mechanism underlying this phenomenon is receptive field remapping in the lateral intraparietal area (LIP), in which neurons predictively update their receptive fields to compensate for eye movements. Models of remapping have suggested that the underlying mechanism may involve either a wave of activity or a single jump in activity. To test these competing hypotheses, we investigated the timing of remapping as a function of saccade length. We predicted that if remapping occurs through a jump, the remapped response will align more closely with saccade onset, independent of saccade length. Alternatively, if remapping involves a wave moving over time, we predicted that the remapped response will occur later for longer saccades when aligned to saccade onset. We recorded the activity of single LIP neurons and multi-unit activity in animals performing a saccade task in which a probe appeared in the post-saccadic receptive field before a 7, 14, or 21 deg saccade. We found that the responses of single neurons and of multi-units were all consistent with the wave hypothesis. Surprisingly, we also found that remapping responses starting before the saccade occurred only in conditions in which the probe was presented within the classical receptive field. These findings bring into question whether pre-saccadic remapped responses are a fundamental feature of LIP remapping and support the idea that remapping is driven by a wave of activity rather than a jump.
Recently reported criteria for classification of axial spondyloarthritis (axSpA) are more stringent in the absence of positive imaging. This places increased reliance on the accuracy of sacroiliac joint (SIJ) imaging. This review highlights some of the key challenges to bringing advances in SIJ imaging to clinical practice. The first international consensus on an MRI acquisition protocol for sacroiliitis, published in 2024, defines how the sequences should be orientated and which sequences are required to identify the various lesions that may be seen in inflammatory sacroiliitis and degeneration. However, as anatomical and physiological variation and degeneration are very common in the SIJ, new techniques are not necessarily specific for sacroiliitis and may be more sensitive to changes in the SIJ regardless of cause. Artificial intelligence techniques are currently in use to improve image acquisition, but models intended to enhance diagnostic ascertainment still need development and validation. New techniques, together with adherence to the recommended MRI protocol, are essential for accurate assessment of sacroiliitis. However, the introduction of new techniques into clinical practice must be accompanied by appropriate education to assist less experienced observers in interpreting novel images.
Men show higher mortality than women, especially at a young age (between 15 and 39 years). They are more likely to engage in unhealthy behaviours and tend not to implement preventative efforts or to seek help. While (mental) health promotion programmes aim to foster healthy behaviours, men often do not feel addressed by them and are therefore reluctant to participate. This synthesis aims to draw together barriers to and facilitators of male participation in (mental) health promotion programmes and to identify how best to address men in health communication and programme promotion. This rapid qualitative evidence synthesis includes a sample of 21 studies: 18 qualitative studies and 3 mixed-methods studies with separately reported qualitative findings. These studies captured the perspectives of males aged 12 to 79 years and of professionals working in men's health on the barriers to and facilitators of participation in (mental) health promotion programmes and on preferred health communication. Studies were purposefully selected to maximise variation across interview content, context, and participant characteristics (e.g., age, occupation). The selection was restricted to studies published between 2015 and 2025. Gender norms were one of the main barriers to participation in men's (mental) health promotion programmes. Such programmes should preferably be integrated into settings attractive or familiar to men, such as sports clubs, handicraft workshops, or the workplace. Peers and peer support played a crucial role within men's health promotion and were found to facilitate positive behavioural changes. When reaching out to men, clinical and stigmatising terminology should be avoided in favour of action-oriented language that emphasises control and practical solutions while keeping the messaging simple and focused on tangible benefits.
Health promotion programmes for men require embedding interventions within male-relevant contexts, such as sports, workplaces, and peer networks, that ease participation and reduce stigma. To reach and benefit men, communication strategies should use relatable, non-stigmatising language from credible messengers and should frame self-care as compatible with masculine identities.
Value-based healthcare (VBHC) proposes a framework for managing healthcare systems, connecting health and economic outcomes to determine the value of healthcare. The value equation remains ambiguous, serving more as a theoretical framework than a practical decision-making tool. The key challenge lies in estimating and interpreting the value equation. The purpose of this study is to provide a methodological proof-of-concept to address this gap. A cohort of 330 patients diagnosed with breast cancer with a 12-month follow-up from two healthcare centres was used to illustrate the proposed approach. Patient-reported outcomes and economic-related outcomes (PROs and EROs) were collected. The numerator was defined as the patient-centred outcome-adjusted life years (PACELYs), a novel metric proposed here that combines PROs and survival, whilst the denominator was expressed in euros. Moving towards a marginal perspective, incremental value (IV) and a value curve were proposed as decision-making measures. The mean PACELYs for healthcare centres A and B were 69.85 and 73.24, respectively, and the corresponding costs were 12,129€ and 13,404€. The IV showed that centre B generated an additional PACELY at 376€ compared to A, reflecting differences in organisational efficiency. The value curve showed variation in efficiency across VBHC thresholds, depending on the healthcare context. This is the first proof-of-concept to estimate a value figure as a patient-centred efficiency measure for comparing healthcare providers within VBHC, with two pivotal transformations of the value equation: the use of PACELYs and the adoption of a marginal perspective, thereby positioning it as a decision-making tool in VBHC. The estimated figure will facilitate comprehensive benchmarking across centres and be applicable to other medical conditions. Further research should focus on designing value-based payment systems.
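The reported IV figure can be reproduced as a simple ratio of the between-centre differences in cost and outcome. A sketch of the computation, assuming IV is defined analogously to an incremental cost-effectiveness ratio:

\[
\mathrm{IV} = \frac{C_B - C_A}{\mathrm{PACELY}_B - \mathrm{PACELY}_A}
            = \frac{13404\,\text{€} - 12129\,\text{€}}{73.24 - 69.85}
            = \frac{1275\,\text{€}}{3.39}
            \approx 376\,\text{€ per additional PACELY}
\]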
AI is making a major impact in drug development. As AI tools continue to advance, it is easy to understand why pharmaceutical companies are taking a keen interest in their potential to save time and money. AI is helping drug development become faster, more accurate, and more cost-effective, all while benefiting the bottom line. With machine learning and deep learning in play, the potential benefits of AI in drug development are vast. AI has already proven valuable in areas such as predicting drug properties, identifying and validating new targets, developing small-molecule drugs, and even speeding up clinical trials through drug repurposing, drug development, and drug-outcome prediction. Obstacles remain, however, including the need for better data-sharing practices, better standards for algorithms, and closer integration of biology and computer science in order to bring laboratory work and modeling closer together.
Cellular heterogeneity is an inherent feature of biological systems, and living single-cell metabolomics (SCM) has emerged as a powerful approach to probe this diversity, a dimension often lost in conventional bulk analyses. Currently, mass spectrometry (MS)-based living SCM techniques are driving a revolution toward higher throughput, sensitivity, and coverage, enabling the identification of rare cell subpopulations and expanding applications across various biological fields. Nevertheless, several bottlenecks remain, including limited metabolome coverage, insufficient throughput, batch effects, instrumental constraints, and challenges in processing large-scale datasets. Future efforts should focus on all stages of SCM, prioritizing the development of microfluidics-integrated living-cell analysis platforms, enhanced ionization sources, in situ chemical derivatizations, AI-powered data processing pipelines, and integrated multi-omics analyses at the single-cell level. Despite existing hurdles, continuous progress in technology, data science, and interdisciplinary collaboration is expected to bring transformative breakthroughs in MS-based living SCM, ultimately advancing our understanding of dynamic biological processes and accelerating biomedical discovery.
Electrochemical water treatment is essential for tackling global water scarcity but remains difficult to optimize due to limited expertise and computing resources at many treatment facilities. Here, we introduce an intelligent on-device platform that combines electrochemical process knowledge with large language models deployed directly on edge devices such as the Raspberry Pi. This system integrates theoretical understanding with real-time optimization, eliminating the need for cloud connectivity while ensuring data privacy and accessibility. Tested against 320 published studies, it achieves a 60% reduction in hallucination rates and maintains high predictive accuracy (R² > 0.80) for key variables such as effluent concentration and energy, even with incomplete sensor inputs. Notably, prediction accuracy for challenging parameters, such as the applied current (the driving force for electrochemical water desalination), improves from 0.03 to 0.63. By bringing intelligence to the data rather than sending it to the cloud, this approach makes advanced water-treatment intelligence feasible in resource-limited, data-imperfect, decentralized environments where physics-based models cannot be deployed.
The global energy crisis and climate challenges urgently require a transition to safer, cleaner, and more diverse energy sources, and hybrid renewable energy systems have considerable scope for development as one of the best responses. However, few studies have examined hybrid renewable energy systems applied to farms, and idle farm rooftops remain underutilized. Therefore, this paper innovatively applies a biogas-rooftop photovoltaic hybrid system (BRPHS) to farms and establishes a research framework based on life cycle cost, with consideration of environmental benefits, to explore whether it can bring good economic benefits. Meanwhile, power generation, organic fertilizer production, and greenhouse gas (GHG) emission reduction potentials were measured. Calculations based on a typical farm showed that the farm BRPHS is environmentally friendly and economically superior: it not only solves the problems of environmental pollution and GHG emissions caused by farm manure but also makes full use of idle resources and renewable energy sources, with an annual electricity generation of up to 42,851.92 MWh, an organic fertilizer production potential of 541.66 t per year, and a net present value of 1,376.91 thousand US dollars, with an internal rate of return of 19.46%; over the whole life cycle, the system can reduce GHG emissions by 188,414.89 t CO2e.
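The net present value and internal rate of return quoted above follow from standard discounted-cash-flow arithmetic. A minimal sketch of how those two indicators are computed, using hypothetical placeholder cash flows rather than the paper's BRPHS investment and revenue figures (which are not reproduced here):

```python
# Discounted-cash-flow sketch for a project appraisal like the BRPHS study.
# The cash flows below are hypothetical placeholders, not the paper's data.

def npv(rate, cash_flows):
    """Net present value; cash_flows[t] is the net cash flow in year t."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=-0.99, hi=10.0, tol=1e-8):
    """Internal rate of return (the rate where NPV = 0), found by bisection;
    assumes the NPV changes sign exactly once between lo and hi."""
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if npv(mid, cash_flows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

# Hypothetical: 3,000 upfront investment, then 600/year net revenue for 10 years.
flows = [-3000.0] + [600.0] * 10
print(round(npv(0.08, flows), 2))  # NPV at an 8% discount rate
print(round(irr(flows), 3))        # IRR as a decimal fraction
```

A project is deemed economically viable when NPV is positive at the chosen discount rate, i.e., when the IRR exceeds that rate, which is the logic behind the paper's 19.46% figure.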
Deep neural networks (DNNs) are highly vulnerable to adversarial attacks, which poses a serious threat in safety-critical applications such as autonomous driving and facial recognition. Model integration, which aggregates gradient information from many surrogate models, is commonly regarded as a powerful black-box attack technique. However, existing black-box attack methods that use an integrated model tend simply to average the outputs of several surrogate models. This approach ignores the gradient differences among the models and the transfer characteristics of adversarial examples, limiting the diversity of adversarial examples and thus leading to low attack success rates across models with different architectures. To address these limitations, in this paper we propose a hybrid method of model integration and input transformation, called AIIT. In AIIT, we use image transformations to generate diverse adversarial examples and dynamic gradient adjustment to improve model integration. Moreover, we present a gradient optimization algorithm to alleviate overfitting to the surrogate models. Extensive experiments on various datasets show that our approach improves attack success rates by 13% to 35% over existing methods and achieves an average attack success rate of 90%, demonstrating its effectiveness in improving transferability.
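The baseline that AIIT improves upon, averaging the input gradients of several surrogate models before taking a single signed perturbation step, can be sketched with toy linear surrogates. This is an illustrative reconstruction, not the authors' code: LinearSurrogate and ensemble_fgsm are hypothetical names, and real attacks operate on deep networks and usually iterate the step.

```python
import numpy as np

class LinearSurrogate:
    """Toy surrogate: logistic model with score w.x, standing in for a DNN."""
    def __init__(self, w):
        self.w = np.asarray(w, dtype=float)

    def input_grad(self, x, y):
        # Gradient of the logistic loss log(1 + exp(-y * w.x)) w.r.t. input x.
        s = float(self.w @ x)
        return -y * self.w / (1.0 + np.exp(y * s))

def ensemble_fgsm(x, y, surrogates, eps=0.1):
    """Baseline ensemble attack: average the input gradients over all
    surrogates, then take one signed step of size eps, clipped to [0, 1]."""
    g = np.mean([m.input_grad(x, y) for m in surrogates], axis=0)
    return np.clip(x + eps * np.sign(g), 0.0, 1.0)

rng = np.random.default_rng(0)
surrogates = [LinearSurrogate(rng.normal(size=8)) for _ in range(3)]
x = rng.uniform(0.2, 0.8, size=8)   # toy "image" with 8 pixels
x_adv = ensemble_fgsm(x, y=1, surrogates=surrogates)

# The perturbation respects the eps budget and the valid pixel range.
print(np.max(np.abs(x_adv - x)) <= 0.1 + 1e-12)   # True
print(bool(np.all((x_adv >= 0) & (x_adv <= 1))))  # True
```

The abstract's criticism is of exactly this uniform averaging: it discards per-model gradient differences, which AIIT addresses with input transformations and dynamic gradient adjustment.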
Monitoring of home mechanical ventilation (HMV), including noninvasive ventilation (NIV) and invasive mechanical ventilation (IMV), as well as continuous positive airway pressure (CPAP), is essential to ensure effective ventilation, optimize adherence, and improve patient outcomes in both pediatric and adult patients. Analysis of data derived from the built-in software of HMV and CPAP devices constitutes a key component of routine monitoring. These data provide valuable information on treatment adherence, unintentional leaks, residual respiratory events, and patient-ventilator synchrony. In many clinical situations, these data may be sufficient to guide ventilator adjustments without full in-lab poly(somno)graphy. Although current built-in software offers multiple monitoring features, their availability and implementation vary widely across manufacturers. Limitations include incomplete access to settings and alarms, inconsistent leak and inspiratory time statistics, limited waveform visualization, and insufficient tools for scoring respiratory events and patient-ventilator asynchronies. Additional features, such as customizable adherence thresholds, dual cursors for time measurements, deselection of specific periods or events, and integration of physiologic signals like transcutaneous carbon dioxide pressure (PtcCO2) or thoraco-abdominal movements, could enhance monitoring. Standardizing and expanding built-in software capabilities would improve the precision of HMV and CPAP monitoring, facilitate individualized ventilator adjustments, and bring software analysis closer to poly(somno)graphy-level assessment. These enhancements have the potential to optimize patient-ventilator synchrony and overall treatment quality.
Second-generation tyrosine kinase inhibitors (TKIs) such as gilteritinib, characterized by minimal EGFR and absent VEGFR inhibition, are in theory associated with low dermatologic toxicity. This case report brings to attention that the opposite may occur and emphasizes the need for attentive pharmacovigilance. An elderly woman presented to us with relapsed/refractory (R/R) FLT3-ITD AML following azacytidine treatment and received the single-agent TKI gilteritinib, selected for its greater potency and specificity. Unexpectedly, she developed a severe hand-foot skin lesion requiring treatment interruption. After receiving two cycles of gilteritinib 120 mg orally daily without therapeutic response, the dose was escalated to 200 mg in accordance with RCT guidelines. After one week, the patient developed dry skin and mild erythema of the hands and feet, which progressed to severe hand-foot syndrome the following week. This previously unreported adverse event suggests that the FLT3-specific TKI gilteritinib can induce cutaneous toxicities through dose-dependent inhibition of proangiogenic pathways.
The ClinGen Craniofacial Malformations Gene Curation Expert Panel (Cranio GCEP) was formed in 2020 with an initial target of evaluating genes implicated in craniosynostosis and skull abnormalities. This work summarizes the findings of the Cranio GCEP during its first round of curation, aiming to provide expert guidance for clinical validity of gene-disease relationships in the context of craniofacial malformations. The curation scope of the GCEP was separated into multiple rounds based on frequency of occurrence and uniqueness of associated features. Twelve genes (EFNB1, ERF, FGFR1, FGFR2, FGFR3, MEGF8, MSX2, POR, RAB23, SKI, TCF12, and TWIST1) were selected, based on review of literature, multi-gene sequencing panels from the Genetic Testing Registry (GTR), and expert input. On average, there were two disease relationships per gene, ranging from one to six. In total, the Cranio GCEP curated 23 gene-disease pairs. Of these curations, 17 (74%) classifications reached Definitive, 3 (13%) Moderate, and 3 (13%) Limited. The classification of gene-disease relationships in round one curation of the Cranio GCEP has contributed to systematically evaluating the validity of gene-disease relationships for craniofacial malformations to establish accurate testing panels and improve patient care. By bringing together content experts to focus on gene curation, the Cranio GCEP facilitates education, new collaboration, and encourages publication of clinical cases in previously discovered genes in order to reflect the broadening spectrum of gene-disease relationships in the craniofacial malformation and craniosynostosis literature.
Physiology graduates have a wide range of career options beyond health-science specializations, yet many are unaware of these paths or lack the skills that would allow them to enter these fields directly. To address this gap, the Department of Physiology at the University of Toronto developed a Master of Health Science (MHSc) in Medical Physiology, a one-year, course-based professional degree that trains students to apply existing physiological knowledge and put it into practice in emerging areas related to health. Rather than serving as a pre-medicine degree, the MHSc is designed with a multidisciplinary approach to empower physiology-skilled undergraduates to be job-ready. The program combines courses in advanced physiology, commercialization, big data analysis, and clinical applications with a mentored literature review report, as well as embedded professional development and career exploration. Finally, a practicum placement allows students to explore how physiology is applied in diverse sectors through employment in biotechnology, the pharmaceutical industry, clinical trial coordination, and health-related data science. Practicum supervisors report that the students bring a unique skillset that is highly valued in both academic and non-academic organizations. Graduate outcomes (2021-2024) demonstrate that almost two-thirds of our students enter the workforce directly in physiology-related sectors such as biotechnology, consulting, medical communications, and data analytics, while the others pursue advanced degrees. Herein, the program is detailed, including its curricular development and implementation, as well as the significant outcomes of this unique MHSc in Medical Physiology graduate program.
Across ages and cultures, planning actions adaptively during tool use is a hallmark of human intelligence and a critical factor in human survival and proper function. Previous cross-sectional studies showed that adaptive planning begins in infancy and improves with age and experience. However, little is known about how developmental improvements in adaptive planning occur. Do infants gradually adapt their action planning with one object and then generalize this skill to other objects, or does learning remain tool-specific? Here, we longitudinally tested nine infants in weekly sessions across the age range when tool use is rapidly developing. Infants were presented with a familiar tool (spoon) and three unfamiliar tools (brush, hammer, magnet) with handles pointing to the right or left. For each trial, we scored whether infants used an adaptive radial grip (evidence of action planning) or an inefficient ulnar grip (no evidence of planning). Across several months of testing, every infant gradually learned to use an adaptive radial grip for the spoon, but none showed improvement for the unfamiliar tools. Adaptive planning with the spoon was further limited to self-directed actions (bringing food to their own mouth) rather than other-directed actions (feeding a puppet). Learning was characterized by high variability before stable achievement of an efficient grip. Across all tools, right-pointing handles elicited more radial grips than left-pointing handles. Our findings replicate previous cross-sectional research and provide new insights into the longitudinal progression of adaptive planning during tool use in infancy. Specifically, the development was gradual rather than abrupt, and learning remained highly tool-specific without generalization, emphasizing the critical role of specific and extensive experience with particular tool-action combinations.
LLMs are increasingly supporting decision-making across high-stakes domains, requiring critical reflection on the socio-technical factors that shape how humans and LLMs are assigned roles and interact during human-in-the-loop decision-making. This paper introduces the concept of human-LLM archetypes, defined as recurring socio-technical interaction patterns that structure the roles of humans and LLMs in collaborative decision-making. We describe 17 human-LLM archetypes derived from a scoping literature review and thematic analysis of 113 LLM-supported decision-making papers. Then, we evaluate these diverse archetypes across real-world clinical diagnostic cases to examine the potential effects of adopting distinct human-LLM archetypes on LLM outputs and decision outcomes. Finally, we present relevant tradeoffs and design choices across human-LLM archetypes, including decision control, social hierarchies, cognitive forcing strategies, and information requirements. Through our analysis, we show that the selection of a human-LLM interaction archetype can influence LLM outputs and decisions, bringing important risks and considerations for the designers of human-AI decision-making systems.
Digadoglucitol is an extracellular macrocyclic dinuclear gadolinium-based contrast agent (GBCA) based on the association of two [Gd-(HP-DO3A)] units conjugated through a spacer containing a glucamine moiety. It displays a relaxivity per Gd that is 2 to 3 times higher than the most commonly used GBCAs, allowing the use of reduced doses while ensuring noninferior image contrast. Its high relaxivity is the result of a rational design aimed at exploiting the intramolecular catalysis of the prototropic exchange of the coordinated -OH groups as well as the second-sphere contribution brought about by the presence of the hydroxyl functionalities on glucamine. Digadoglucitol maintains the excellent kinetic and thermodynamic properties of the parent [Gd-(HP-DO3A)], with an SAP/TSAP ratio of 2/3. An HPLC workup yielded three fractions of diastereoisomers, based on the chirality of the 2-hydroxypropyl pendants, with similar relaxometric and stability properties. From pH 5 to 9, the deprotonated glucamine nitrogen acts as a base to catalyze the prototropic exchange of the coordinating -OH group, bringing an enhancement of 1.5-2.0 mM⁻¹ s⁻¹ in the observed relaxivity with respect to the value expected for a q = 1 complex of similar formula weight. The biodistribution and magnetic resonance imaging pharmacokinetics of digadoglucitol were very similar to those found for [Gd-(BT-DO3A)].
Modern gut microbiota research, enabled by high-throughput sequencing, has established gastrointestinal microbial communities as integral components of host biology across animals and humans. Animal studies established their roles in polysaccharide fermentation, nutrient utilization, and host physiology, while human investigations linked microbiota variation to metabolic and inflammatory conditions. The Gut Microbiota Collection (https://www.nature.com/collections/jhdjahhcea) brings together studies across host species, populations, and systems to offer more insight into how diet, geography, and host physiology shape microbiota composition and function.
Background: In healthcare and rehabilitation, artificial intelligence (AI) is being widely used in assistive devices, administration, and diagnostic support. There is, however, little data on how healthcare workers in low-resource environments understand and view the use of AI, especially across professional groups engaged in clinical decision-making and rehabilitation. Objective: The objective of this study was to explore the knowledge, awareness, and perceptions of medical officers (MOs) and physical therapists (PTs) regarding the application of AI in healthcare and rehabilitation. Methods: An exploratory qualitative study was conducted using semi-structured interviews with 40 clinicians (20 MOs and 20 PTs) selected through purposive sampling from major public and private hospitals in Peshawar. Interviews were conducted both in person and online, audio-recorded, transcribed verbatim, and analysed using inductive thematic analysis following Braun and Clarke's methodology. Results: Five interrelated themes were identified: fundamental knowledge of AI, awareness of therapeutic applications, perceived positive outcomes, ethical and pragmatic concerns, and limitations to integration. PTs were more familiar with the application of AI in assistive robotic technology, while MOs prioritised AI for diagnostic and administrative work. Both groups viewed AI as a useful technology for improving clinical decision-making and workplace efficiency, while raising concerns about data privacy, autonomy, lack of formal training, and realistic application in the local context. Conclusion: MOs and PTs in Khyber Pakhtunkhwa (KP) were cautiously optimistic about the application of AI in healthcare and rehabilitation, with significant concerns shaped by ethical, educational, and technical limitations.
The results of this research emphasise that artificial intelligence (AI) has the potential to radically change the recovery process through timely and accurate diagnosis, personalised treatment plans, and better patient monitoring. Still, the deficits in knowledge and confidence among physiotherapists and medical officers revealed by the study suggest that effective implementation calls for the systematic incorporation of AI literacy into rehabilitation education and clinical practice. Healthcare professionals need to be able to interpret AI-generated data responsibly and to see AI as a tool that supports, rather than replaces, their clinical judgement. Additionally, the use of AI in rehabilitative exercise, through robots, motion analysis systems, and outcome prediction models, can bring about significant improvement in the patient's recovery journey, provided clinicians are sufficiently trained in their operation and interpretation. Policymakers and healthcare administrators should create the conditions necessary for smooth interprofessional collaboration, continuous professional development of staff, and the ethical use of AI in clinical rehabilitation settings. Closing the knowledge gap and enhancing the digital readiness of healthcare professionals will not only enable them to use AI as a powerful tool but also allow them to extend the quality, speed, and reach of rehabilitation services to remote and under-resourced areas such as Khyber Pakhtunkhwa.
Advanced prehospital care delivered by air ambulance services in the United Kingdom aims to reduce preventable trauma deaths by bringing hospital-level interventions directly to patients. Despite these efforts, a significant proportion of patients still die in the immediate phase, and current learning frameworks focus predominantly on identifying errors rather than examining all fatalities for system-wide improvements. This paper explores the potential of integrating multidisciplinary approaches, particularly clinicopathological correlation (CPC) meetings, into high-performing air ambulance services to enhance learning from every death. By including autopsy pathologists alongside clinicians, CPC meetings provide a robust platform to correlate prehospital findings with definitive postmortem results, improving diagnostic accuracy, clinical reasoning, and professional development. They also foster interprofessional collaboration, support clinician well-being by providing closure, and strengthen patient safety through contextualized learning beyond fault finding. Barriers include limited data sharing, coronial processes, and inconsistent governance across independent air ambulance services. However, the successful implementation of CPC multidisciplinary team meetings demonstrates significant educational and systemic benefits, driving innovation and quality assurance. We propose that all high-performing air ambulance services should adopt structured, regular CPC meetings with pathologist involvement, thereby embedding learning from every fatality as a cornerstone of governance, resilience, and future improvements in care.
Emergency physicians pursuing critical care training must enter fellowships designed for internal medicine, anesthesiology, or surgery trainees. In this study we aimed to assess how emergency medicine (EM)-trained fellows are perceived by critical care fellowship leadership compared to their peers and to identify specialty-specific strengths and gaps that may inform targeted educational approaches. We conducted a national, cross-sectional survey of program directors and associate/assistant directors of Accreditation Council for Graduate Medical Education-accredited critical care fellowships. Respondents rated the baseline competence of incoming fellows across 11 core critical care domains using a 5-point Likert scale. We compared competency ratings across residency training backgrounds using linear mixed models, accounting for clustering and adjusting for rater specialty where appropriate. Of 429 distributed surveys, 118 (27.5%) were completed. Our respondents represented internal medicine-based fellowships (63, 53%), surgical fellowships (32, 27%), and anesthesia fellowships (23, 20%). On a 5-point Likert scale ranging from 1 = "Not competent" to 5 = "Very competent," EM-trained fellows were rated significantly higher than their internal medicine-trained peers in intubation (3.93 vs 1.86, P < .01); vascular access (3.72 vs 2.52, P < .01); point-of-care ultrasound (3.80 vs 2.52, P < .01); surgical critical care (2.39 vs 1.99, P < .01); and neurologic emergencies (2.59 vs 2.10, P < .01). Fellows trained in internal medicine were rated higher in ventilator management (2.54 vs 2.06, P < .01); palliation (3.05 vs 2.08, P < .01); and renal physiology/acid-base disturbances (3.18 vs 2.40, P < .01). Slightly different patterns emerged when comparing EM to surgery and anesthesiology trainees, where EM-trained fellows were rated similarly or lower in procedural domains but demonstrated more robust competence in organ-specific physiology and ultrasonography.
These patterns remained largely consistent in sensitivity analyses adjusting for rater specialty. Critical care fellows who trained in EM bring distinct strengths in diagnostics and resuscitation to critical care training, but their educational needs may differ from those of peers within specialty-specific fellowships. Tailoring curricula to address these differences can help ensure all trainees achieve proficiency across core domains.