The rapid integration of generative artificial intelligence (GenAI) tools, such as ChatGPT, into educational contexts has raised important questions regarding how adolescents conceptualize and make sense of these technologies. Understanding students' perceptions is essential for developing age-appropriate, ethical, and pedagogically sound approaches to AI use in secondary education. This descriptive qualitative study employed a phenomenological approach and metaphor analysis to explore secondary school students' perceptions of generative artificial intelligence. The study sample consisted of 332 students aged 14-18 years from four secondary schools in Türkiye. Data were collected using an open-ended prompt ("Generative artificial intelligence is like … because …") and analyzed through content analysis. Metaphors were categorized based on shared semantic and conceptual features, and inter-rater reliability was established using Cohen's kappa (κ = 0.92). Analysis revealed ten metaphor categories clustered under five overarching themes: generative artificial intelligence as (1) a source of knowledge, (2) a teaching and guiding entity, (3) a supportive and assisting tool, (4) a reflection of human intelligence, and (5) a dual-purpose (beneficial-risky) technology. Students most frequently conceptualized GenAI as a comprehensive knowledge source (e.g., book, encyclopedia) and as a human-like cognitive entity (e.g., brain, wise person). At the same time, metaphors reflecting ethical awareness and potential risks, such as misuse and overreliance, were also identified. The findings indicate that secondary school students hold multifaceted and nuanced perceptions of generative artificial intelligence, encompassing both educational opportunities and ethical concerns. 
These results highlight the importance of integrating AI literacy into secondary education in ways that promote critical thinking, responsible use, and awareness of GenAI's limitations alongside its potential benefits. Secondary school students were found to perceive generative artificial intelligence ambivalently, as both a useful tool and a source of ethical and emotional concern, underscoring the need for developmentally appropriate artificial intelligence literacy approaches.
• GenAI tools such as ChatGPT are increasingly integrated into educational contexts and have the potential to support personalized learning, information access, and student engagement.
• Existing research has primarily focused on educators' perspectives or higher education settings, while studies examining adolescents' perceptions of GenAI remain limited.
• This study provides empirical evidence on secondary school students' metaphorical perceptions of generative artificial intelligence within a K-12 context.
• Findings reveal that adolescents conceptualize GenAI in multifaceted ways, including as a knowledge source, teaching and guiding entity, supportive tool, reflection of human intelligence, and a dual-purpose (beneficial-risky) technology.
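The inter-rater reliability reported above (Cohen's κ = 0.92) follows a simple definition that can be sketched directly: observed agreement corrected for chance agreement. A minimal sketch in pure Python; the two coders' category labels below are hypothetical, chosen only to illustrate the computation, not drawn from the study:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: (p_o - p_e) / (1 - p_e), where p_o is observed
    agreement and p_e is the agreement expected by chance."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    count_a, count_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(count_a[c] * count_b[c] for c in count_a) / n ** 2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical metaphor-category labels from two coders (illustrative only)
a = ["knowledge", "teacher", "tool", "knowledge", "dual", "tool"]
b = ["knowledge", "teacher", "tool", "teacher", "dual", "tool"]
print(round(cohens_kappa(a, b), 3))
```

Values near 1 indicate near-perfect agreement; the κ = 0.92 reported in the study falls in that range.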
To systematically evaluate the methodological quality and diagnostic performance of artificial intelligence (AI) applications, specifically machine learning (ML) and deep learning (DL), in the diagnosis of endometriosis through imaging and clinical symptomatology. A systematic search was conducted across seven databases for studies published between 2015 and 2025. Inclusion criteria focused on primary research utilizing AI for endometriosis diagnosis via MRI, ultrasound, or patient-reported symptoms. Methodological quality was appraised using the QUADAS-2 tool. Study selection adhered to a double-blinded protocol to minimize selection bias. Clinical and methodological conflicts were addressed by a Professor of Radiography, while technical AI complexities were adjudicated by a Professor of Artificial Intelligence. AI models demonstrated high technical efficacy, with imaging-based algorithms achieving diagnostic accuracies up to 94.32% (MRI) and AUCs of 0.90 (ultrasound). Symptom-based models reported accuracies reaching 95.95%, utilizing classifiers such as Random Forest and XGBoost. However, quality appraisal revealed significant clinical heterogeneity and systemic vulnerabilities. Spectrum bias was prevalent, as most models were trained on advanced-stage cohorts, limiting applicability for early-stage detection. Furthermore, symptom-based models often relied on self-reported data from social media, introducing significant selection and verification bias. While AI demonstrates high potential for automating endometriosis detection, current literature is constrained by retrospective designs and narrow patient selection. To move from experimental prototypes to clinical screening tools, future research must prioritize prospective validation in undifferentiated populations using a combination of diagnostic reference methods.
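Several of the performance figures cited in this review are AUCs. As a reminder of what that metric measures, a minimal sketch of the Mann-Whitney formulation of ROC AUC, the probability that a randomly chosen positive case is scored above a randomly chosen negative one; the labels and scores below are made up for illustration, not data from any reviewed study:

```python
def roc_auc(labels, scores):
    """AUC as the probability that a random positive outranks a random
    negative (ties count half) -- the Mann-Whitney U formulation."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > q) + 0.5 * (p == q) for p in pos for q in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical classifier scores (illustrative only)
y = [1, 1, 1, 0, 0, 0, 0]
s = [0.9, 0.8, 0.4, 0.7, 0.3, 0.2, 0.1]
print(roc_auc(y, s))
```

An AUC of 0.90, as reported for the ultrasound models, means a positive case outranks a negative one 90% of the time; 0.5 is chance level.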
The complexity and rapidly evolving nature of critical patient care in Intensive Care Units underscore the importance of the accuracy and timeliness of nursing decisions, further highlighting the significance of nursing education. This study aims to examine the accuracy of four generative artificial intelligence tools (ChatGPT 5.0 Plus, ChatGPT 5.0, DeepSeek, and Google Gemini) in answering multiple-choice questions related to the intensive care nursing exam, a fundamental area in nursing education. In the study, the ChatGPT 5.0 Plus, ChatGPT 5.0, DeepSeek, and Google Gemini models were evaluated using a test data set consisting of 55 questions. The questions were classified by difficulty level as easy (n = 16), medium (n = 17), and difficult (n = 22). The models' correct response rates and their shared and unique correct/incorrect response distributions were examined. Statistical analyses were performed using chi-square, one-way ANOVA, and post-hoc Tukey tests. The study was reported according to STROBE. According to the results, the success rates of all models were similar for easy and medium-level questions (70-82%), and the difference between them was not statistically significant (p > 0.05). For difficult questions, however, the performance of the models diverged significantly, with Google Gemini achieving the highest success rate at 77.27% and DeepSeek the lowest at 45.45%. The chi-square analysis revealed no statistically significant difference in the correct/incorrect distribution among the models (χ² = 3.69; p = 0.296), but at the observational level, Google Gemini had a higher number of unique correct answers (n = 6) than the other models. ChatGPT 5.0 produced no unique errors.
In conclusion, while AI models generally showed similar levels of success in intensive care nursing exam questions, Google Gemini demonstrated superior performance in difficult questions, and DeepSeek showed the lowest level of success among the models. The study provides an essential comparative framework regarding the usability of AI-based learning and assessment tools in nursing education. It offers guidance for the future development of AI-based educational technologies.
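The χ² comparison above (χ² = 3.69, p = 0.296; df = 3 for four models × correct/incorrect) rests on the standard Pearson statistic. A minimal sketch of that computation; the 4 × 2 table below is hypothetical, not the study's counts:

```python
def chi_square_stat(table):
    """Pearson chi-square statistic for an r x c contingency table;
    degrees of freedom are (r - 1) * (c - 1)."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            expected = row_totals[i] * col_totals[j] / n
            stat += (obs - expected) ** 2 / expected
    return stat

# Hypothetical correct/incorrect counts for four models over 55 questions
# (illustrative only -- not the study's data)
table = [[40, 15], [38, 17], [42, 13], [33, 22]]
print(round(chi_square_stat(table), 2))  # compare against chi2 with df = 3
```

In practice, `scipy.stats.chi2_contingency` performs this computation and also returns the p-value.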
Artificial intelligence (AI) technologies are increasingly integrated into cardiology and intensive care settings to enhance clinical decision-making, patient monitoring, and workflow efficiency. However, limited evidence exists regarding AI utilization among nurses and its association with nursing practice in high-acuity units. To assess the level of AI utilization among nurses working in cardiology and intensive care units (ICUs) and examine its association with nursing practice performance. A descriptive cross-sectional study was conducted among 53 nurses working in cardiology and ICUs at tertiary hospitals. Data were collected using a structured, self-administered questionnaire assessing demographic characteristics, AI utilization (clinical decision support systems, predictive monitoring tools, and electronic documentation), and nursing practice performance. Data were analyzed using descriptive statistics, chi-square tests, and logistic regression. Statistical significance was set at P≤0.05. Nurses with high AI utilization demonstrated significantly better nursing practice performance compared to those with low utilization (69.2% vs. 23.8%, P=0.013). AI training was significantly associated with higher utilization levels (OR=2.45, 95% CI: 1.08-5.54, P=0.031). Additionally, years of experience showed a significant relationship with effective AI use (P=0.044). AI utilization is significantly associated with improved nursing practice in cardiology and ICU settings. Strengthening AI training programs and institutional support may enhance nursing performance and quality of patient care.
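An odds ratio with a Wald confidence interval, like the OR = 2.45 (95% CI: 1.08-5.54) reported above, derives directly from a fitted logistic-regression coefficient and its standard error. A minimal sketch; the coefficient and standard error below are hypothetical, chosen only for illustration and not taken from the study's model output:

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Convert a logistic-regression coefficient and its standard error
    into an odds ratio with a 95% Wald confidence interval:
    OR = exp(beta), CI = exp(beta +/- z * se)."""
    return math.exp(beta), math.exp(beta - z * se), math.exp(beta + z * se)

# Hypothetical coefficient for 'received AI training' (illustrative only)
or_, lo, hi = odds_ratio_ci(beta=0.896, se=0.418)
print(f"OR = {or_:.2f}, 95% CI {lo:.2f}-{hi:.2f}")
```

Because the interval is computed on the log-odds scale and then exponentiated, it is asymmetric around the OR, as in the reported 1.08-5.54 interval.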
Lymphedema is a chronic, progressive condition characterized by impaired lymphatic drainage and fluid accumulation. Conventional diagnostic and monitoring tools remain operator-dependent or insensitive to early disease. Artificial intelligence (AI) offers opportunities to address these limitations through multimodal data integration and automated, reproducible analysis. This systematic review followed the PRISMA guidelines and was registered in PROSPERO (CRD420251133232). PubMed and Google Scholar were searched for clinical studies, published between January 2015 and July 2025, that applied AI to lymphedema diagnosis, risk prediction, monitoring, or surgical planning. Data extraction included study design, population, methodology, predictors, and performance metrics. Risk of bias was assessed using PROBAST and QUADAS-2. Eighteen studies involving 8720 patients were included. Applications covered risk prediction, imaging-based diagnosis, volumetric assessment, and clinical decision support. Reported performance ranged as follows: AUC, 0.80-0.99, and accuracy, 77-98%. Machine learning models integrating demographic and clinical data achieved AUCs up to 0.89, whereas deep learning models applied to ultrasound, CT, MRI, and clinical photographs achieved diagnostic accuracies up to 98%. Volumetric tools using dual-camera or 3D imaging correlated strongly with gold-standard water displacement (R = 0.99). External validation was absent and methodological heterogeneity was substantial. AI in lymphedema shows promise for early detection, risk stratification, and longitudinal monitoring; however, current evidence remains preliminary. Larger, multi-institutional validation studies are essential to confirm generalizability and demonstrate clinical utility.
Artificial intelligence (AI) offers a potential solution to radiologist shortages in breast cancer screening while maintaining diagnostic accuracy. Retrospective studies suggest AI performs comparably to human readers in detecting cancers, but no economic evaluations have yet used prospective trial data. We developed a de novo discrete-event simulation model to estimate the cost-effectiveness of integrating AI into the NHS screening pathway using evidence from a large prospective trial. The AI-only strategy generated a small incremental QALY gain of 0.00009, reduced lifetime costs by £159.55 per woman invited, and had a 100% probability of being the most cost-effective option at the £20,000/QALY threshold. Replacing one human reader with AI also increased QALYs, by 0.00019, and reduced costs by £31.07. Triple reading (two humans plus AI) produced the largest QALY gain (0.00023) but increased costs by £72.79. All AI-based pathways reduced cancer deaths, shifted cancers from advanced (TNM stage 4) to earlier stages at detection, and increased the proportion of cancers detected by screening. Using AI in place of human readers is likely to be cost-effective, marginally improving health outcomes while reducing overall costs, with full replacement of both human readers being the most cost-effective screening strategy.
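The cost-effectiveness ranking above can be checked from the reported incremental values via net monetary benefit at the £20,000/QALY willingness-to-pay threshold. A minimal sketch using the figures given in the abstract (incremental QALYs and incremental costs per woman invited, versus standard double reading):

```python
def net_monetary_benefit(d_qaly, d_cost, threshold=20_000):
    """Incremental net monetary benefit: dQALY * threshold - dCost.
    Higher values indicate better value at the given threshold."""
    return d_qaly * threshold - d_cost

# Incremental values per woman invited, as reported in the abstract
strategies = {
    "AI only":                (0.00009, -159.55),
    "AI replaces one reader": (0.00019, -31.07),
    "Triple reading":         (0.00023, 72.79),
}
for name, (dq, dc) in strategies.items():
    print(name, round(net_monetary_benefit(dq, dc), 2))
```

The AI-only strategy yields the highest net monetary benefit despite the smallest QALY gain, because its cost saving dominates at this threshold, which is consistent with its 100% probability of being most cost-effective.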
The integration of Artificial Intelligence (AI) into medicine has progressed from discriminative models to Generative AI (GenAI), which can synthesize novel content. For orthopaedic surgeons, scientific publication remains a vital marker of academic success but is often constrained by clinical workload. This review proposes a structured, practical framework to help orthopaedists effectively harness AI tools, transitioning from opaque, "black box" generation to grounded, verifiable research assistance through Retrieval-Augmented Generation (RAG). A PubMed search was conducted to explore the application of GenAI in the context of orthopaedic scientific research. An interactive review with experts in GenAI was also conducted, from which the proposed structure was developed. From this synthesis, a three-phase workflow is proposed: (1) Evidence selection using semantic discovery systems to identify and map relevant literature beyond keyword matching; (2) Data extraction and synthesis employing RAG-based systems to anchor AI responses to verified PDF sources, thereby minimizing hallucinations; and (3) Drafting and refining using Large Language Models (LLMs) for structured composition, linguistic clarity, and iterative manuscript improvement. The workflow integrates platform features to enhance efficiency, accuracy, and accessibility in orthopaedic research. When applied within a controlled, evidence-grounded environment, these systems can automate literature synthesis, expedite data extraction, and assist with scientific writing, while preserving authorial intent and accountability. However, challenges remain. Risks include algorithmic bias, "hallucinations", privacy concerns, and ethical issues related to authorship. Despite these limitations, AI represents a paradigm shift in orthopaedic scholarship, functioning as a cognitive exoskeleton that augments rather than replaces human expertise. 
With vigilant human oversight and adherence to journal ethics, orthopaedic surgeons can leverage AI to enhance research productivity, reproducibility, and quality while upholding the highest standards of scientific integrity.
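Phase (2) of the proposed workflow hinges on retrieval: selecting the source passages most relevant to a query before the LLM ever sees them, so that answers stay anchored to verified documents. A minimal bag-of-words sketch of that retrieval step; the passages below are hypothetical stand-ins for text extracted from verified PDFs, and production RAG systems use dense embeddings rather than raw term counts:

```python
import math
import re
from collections import Counter

def tokenize(text):
    return re.findall(r"[a-z]+", text.lower())

def cosine(a, b):
    """Cosine similarity between two term-count vectors (Counters)."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, passages, k=2):
    """Rank source passages by similarity to the query; only the
    top-k would be handed to the LLM as grounding context."""
    q = Counter(tokenize(query))
    ranked = sorted(passages,
                    key=lambda p: cosine(q, Counter(tokenize(p))),
                    reverse=True)
    return ranked[:k]

# Hypothetical passages standing in for extracted PDF text
passages = [
    "Total knee arthroplasty outcomes at five-year follow-up.",
    "Rotator cuff repair with biologic augmentation techniques.",
    "Five-year survivorship of cemented knee arthroplasty implants.",
]
print(retrieve("knee arthroplasty survivorship", passages))
```

Grounding the model's context in retrieved passages, rather than its parametric memory, is what the review credits with minimizing hallucinations.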
Tanzania has adopted artificial intelligence (AI)-assisted chest X-ray screening for tuberculosis (TB), including the use of CAD4TB version 6, which is registered by the Tanzania Medicines and Medical Devices Authority (TMDA). While GeneXpert, the practical reference standard, remains the primary bacteriological confirmatory test in routine practice, there is currently no established national threshold for CAD4TB use in either active case finding (ACF) or passive case finding (PCF) settings. This study evaluates the implementation and operational use of CAD4TB version 6 within mobile TB screening units in Tanzania and highlights challenges affecting its effective use. We conducted a retrospective analysis of screening data from 11,923 individuals collected from mobile clinics equipped with digital X-ray, CAD4TB version 6, and GeneXpert systems. Comparisons were made between manual chest X-ray interpretation, CAD4TB scores, and GeneXpert results within the subset of individuals who underwent confirmatory testing. The findings reveal substantial inconsistencies in screening workflows, including non-uniform use of CAD4TB prior to GeneXpert testing, missing radiological records, and deviations from intended protocols across sites. Descriptive analysis showed that CAD4TB scores generally aligned with GeneXpert-positive cases within the tested subset; however, due to selective application of GeneXpert and incomplete data, these observations cannot be interpreted as measures of diagnostic accuracy. This study should be interpreted as an implementation and operational assessment of AI-assisted TB screening rather than a diagnostic accuracy or threshold-setting study. The findings highlight important gaps in protocol adherence, data completeness, and workflow standardization, underscoring the need for prospective, protocol-driven studies to establish validated national thresholds for CAD4TB use in Tanzania.
Artificial intelligence (AI) platforms are becoming increasingly popular as resources for equine information. However, these platforms generate responses from a wide range of sources and do not always distinguish between fact and opinion. The objective of this study was to assess the accuracy and quality of AI-generated answers to equine-related questions. Researchers hypothesized that AI platforms could answer basic equine questions effectively but would perform poorly on complex topics or questions. Forty questions were written covering general horse care, facilities management, nutrition, genetics, and reproduction. Each question was categorized by difficulty level: beginner, intermediate, advanced, or trending. Three AI platforms were tested: ChatGPT (CGPT), Microsoft Copilot (MicCP), and ExtensionBot (ExtBot). Responses were scored for accuracy, relevance, thoroughness, and source quality (5 points each; total 20). Data were analyzed using PROC GLM in SAS (v. 9.4). Total score was affected by level (P = 0.002). Intermediate questions had the highest total score (15.95 ± 1.99). Accuracy was affected by platform (P < 0.001), level (P < 0.001), and topic (P = 0.015). CGPT (4.18 ± 0.93) and MicCP (4.08 ± 0.83) outperformed ExtBot (3.26 ± 1.21). Relevance was affected by platform (P = 0.042) and level (P < 0.001). Thoroughness was affected by platform (P < 0.001). Source quality differed by platform (P = 0.037). AI platforms could serve as useful resources, but they currently fall short of the knowledge that Equine Extension Specialists can offer. AI platforms had difficulty addressing complex topics and demonstrated inconsistent performance across criteria.
Pancreatic cancer is characterized by prolonged subclinical progression, molecular heterogeneity, and late clinical presentation, resulting in diagnosis predominantly at advanced stages. Current screening approaches lack sufficient sensitivity and scalability, underscoring the need for risk-adapted early detection strategies. Artificial intelligence (AI) offers a shift from reactive diagnosis toward proactive, precision-oriented screening. This review synthesizes recent advances in AI for the early screening and diagnosis of pancreatic cancer. We focus on how AI enables population-level and high-risk prediction, augments diagnostic assessment in patients with suspicious clinical, imaging, or molecular findings, and supports precision stratification through multimodal integration of radiologic imaging, circulating biomarkers, and longitudinal electronic health records (EHRs). Advances span three domains. In imaging, deep learning models, including convolutional neural networks, transformer architectures, and self-configuring segmentation frameworks, improve pancreas segmentation, lesion detection, and classification, with several systems demonstrating radiologist-level performance in retrospective multicenter studies. In biomarker discovery, machine learning approaches such as LASSO, random forest, and XGBoost facilitate high-dimensional feature selection from transcriptomic, metabolomic, and exosomal data, enabling composite diagnostic signatures beyond CA19-9. In longitudinal EHR analysis, temporal deep learning models identify latent disease trajectories and predict pancreatic cancer risk months to years before clinical diagnosis. Despite these advances, most models remain retrospectively validated and face limitations related to data heterogeneity, interpretability, and cross-population generalizability. AI strengthens early detection through multimodal integration, risk-adapted stratification, and data-driven clinical support aligned with precision medicine.
Its near-term value lies in augmenting detection among high-risk populations rather than enabling universal screening or autonomous diagnosis. Prospective multicenter validation and improved model transparency are critical for translation into routine practice.
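Of the feature-selection methods named above (LASSO, random forest, XGBoost), LASSO's effect is the easiest to see in closed form: under an orthonormal design its solution reduces to soft-thresholding the least-squares coefficients, which is the mechanism that zeroes out uninformative features and yields a sparse composite signature. A sketch with hypothetical coefficients, purely for illustration:

```python
def soft_threshold(betas, lam):
    """LASSO solution under an orthonormal design: shrink each
    least-squares coefficient toward zero by lam and drop those
    whose magnitude falls below it (the source of sparsity)."""
    return [max(abs(b) - lam, 0.0) * (1 if b > 0 else -1) for b in betas]

# Hypothetical coefficients for candidate biomarkers (illustrative only)
ols = [2.3, -0.4, 0.05, 1.1, -0.02]
selected = soft_threshold(ols, lam=0.5)
print(selected)  # small coefficients are zeroed -> sparse signature
```

In the general (non-orthonormal) case the LASSO fit is computed iteratively, e.g. by coordinate descent, but the same shrink-and-drop behaviour drives the feature selection.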
Policymakers are increasingly adopting artificial intelligence (AI) tools to support legislative decision-making, yet there is limited empirical understanding of how these technologies are used and the implications for evidence-based policymaking. General-purpose AI tools, such as large language models (LLMs), present both opportunities for improved efficiency and risks related to misinformation and lack of transparency. This study examines state legislators' use of AI in policymaking and introduces the AIRE Protocol (AI for Informed and Responsible Evidence-use), a structured framework for developing specialized AI tools grounded in validated evidence. We demonstrate the application of the AIRE Protocol through the development of the Results First AI Assistant, designed to enhance policymakers' access to the Results First Clearinghouse. A mixed-methods approach was used. Forty-five US state legislators participated in live interviews to assess AI adoption patterns, perceived benefits, and concerns. The AIRE Protocol guided the rapid prototyping and iterative development of the AI assistant, with input from policymakers, national policy organizations, and technical experts, resulting in tailored, evidence-based recommendations. While policymakers expressed interest in AI tools for improving access to information under time constraints, they also raised concerns regarding transparency, reliability, and appropriate use. Our findings suggest that AI tools tailored to policymakers' needs, developed using frameworks like AIRE, will facilitate the integration of validated evidence into legislative decision-making while addressing ethical and practical concerns associated with generalized AI solutions.
A number of articles have heralded the use of artificial intelligence (AI) agents to serve as a replacement for human psychotherapists. Despite the rapid advancements in the use of both rule-based and generative AI programs in the recent past, an overall review shows only small impacts on certain mental health symptoms, particularly depression, and then only in the short-term. Significant strides forward, both in terms of technology and the development of answers to ethical questions regarding AI's use in psychotherapy, must be seen before the use of such systems becomes widespread or regularly recommended to replace human mental health clinicians.
Cadavers play an irreplaceable role in anatomy education, offering unique opportunities for hands-on learning and the internalization of ethical values. While large language models (LLMs) are increasingly utilized in medical education, their perspectives on the moral status of cadavers remain underexplored. This study examined the responses of four LLMs (ChatGPT, Gemini, DeepSeek, and Copilot) regarding the concept, significance, and ethical responsibilities toward cadavers. A thematic analysis was conducted based on the AI-generated responses. Three main themes emerged: (1) The Meaning of the Cadaver, where all LLMs preferred the term "donor," reflecting respect for the body's human origin and voluntary contribution to science; (2) The Importance of the Cadaver, emphasizing its educational superiority over models and simulations due to realism, anatomical variation, and ethical learning; and (3) Attitudes and Responsibilities, where LLMs expressed moral, ethical, legal, and academic responsibilities, highlighting respect, non-maleficence, and professional conduct. LLMs also acknowledged that donor-related terminology and background knowledge influence learners' attitudes. Large language models attribute moral value to cadavers based on their human origin and educational role. While not granting full personhood, they support respectful and ethically guided engagement. These findings suggest that LLMs, when integrated into medical education, may reinforce ethical awareness and serve as potential tools for promoting professional identity formation.
Insurance fraud detection remains difficult in practice because claims data are often imbalanced across classes, and the information describing claims is multidimensional and heterogeneous. The present research used a unified evaluation framework to assess the predictive and interpretive capabilities of three distinct model families: CatBoost (tree-based ensemble learning), Bi-GRU with Attention (sequence-oriented learning), and TabTransformer (contextual modelling of categorical features). The model families were tested under a standardised experimental protocol. The study is novel in its cross-model interpretability framework, which unites Shapley Additive Explanations (SHAP)-based feature attribution with attention-based contextual analysis to enable a clear comparison of model reasoning across the evaluated frameworks. The experiments used 4,000 life insurance claims characterised by 83 attributes. Common preprocessing steps, such as handling missing values, scaling numerical variables, and filtering highly correlated variables, were applied before training the models. Experimentally, CatBoost achieved the highest precision on legitimate claims, Bi-GRU the highest recall on fraudulent claims, and TabTransformer the best trade-off between accuracy, interpretability, and computational efficiency. Practically relevant features such as claim amount, policy tenure, and diagnosis were repeatedly emphasised in both the SHAP and attention analyses. Taken together, the study provides a consistent and explainable benchmark for conducting fraud detection research reliably and for helping practitioners choose models that are both accurate and understandable.
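SHAP attributions of the kind used in this study approximate exact Shapley values: a feature's average marginal contribution to the model output over all possible coalitions of the other features. A brute-force sketch over a toy three-feature claim-scoring function; the features and scoring rule are hypothetical, not the paper's model:

```python
from itertools import combinations
from math import factorial

def shapley_values(players, value):
    """Exact Shapley values: each player's marginal contribution,
    averaged over coalitions with the standard combinatorial weights."""
    n = len(players)
    phi = {}
    for p in players:
        others = [q for q in players if q != p]
        total = 0.0
        for r in range(n):
            for s in combinations(others, r):
                weight = factorial(len(s)) * factorial(n - len(s) - 1) / factorial(n)
                total += weight * (value(set(s) | {p}) - value(set(s)))
        phi[p] = total
    return phi

# Toy claim-scoring function over hypothetical features (illustrative only)
def score(features):
    v = 0.0
    if "claim_amount" in features:
        v += 3.0
    if "policy_tenure" in features:
        v -= 1.0
    if {"claim_amount", "diagnosis"} <= features:
        v += 2.0  # interaction term, split between the two features
    return v

print(shapley_values(["claim_amount", "policy_tenure", "diagnosis"], score))
```

The attributions sum to the full-model score (efficiency), and the interaction credit is shared equally between the interacting features; practical SHAP libraries estimate these values without enumerating every coalition.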
Wetlands are critical to maintaining ecological balance and supporting biodiversity. However, wetland degradation has become increasingly severe, making it crucial to understand the ecological and biological conditions within wetlands in order to address these threats. Recent advances in artificial intelligence provide new tools for wetland monitoring. This study uses Landsat imagery because its resolution and continuous data record make it well-suited for long-term, regional-scale analysis. A novel framework, the Lightweight Attention-Conditional Convolution (LACC) network, is introduced to efficiently process extended time series. By combining lightweight attention with conditional convolution, LACC is designed to better capture temporal variability and complex class patterns while remaining computationally efficient for large-area applications. Using the new model, we generated a 20-year wetland land-cover dataset for the whole state of Louisiana, providing valuable insights into wetland conditions. The time series could be used to link class transitions and vegetation conditions to changes in biomass and soil carbon stocks, as well as methane-emission proxies, thereby strengthening regional GHG inventories and connecting wetland monitoring to carbon accounting and urban resilience planning across coastal Louisiana.
Since the emergence of synthetic biology, biofoundries have developed as enabling infrastructures that scale engineering biology globally. Landmark initiatives, such as Genome Project-Write, JCVI-syn3.0, Sc2.0, SynMoss and the Synthetic Human Genome Project, have significantly advanced the feasibility of constructing chromosome-sized DNA and revealed key principles of genome function and design. Nevertheless, the intrinsic complexity of cellular systems and the resource-intensive nature of experimental design-build-test-learn cycles continue to constrain innovation. Recent advances in artificial intelligence (AI), whole-cell modelling and digital twinning are now creating opportunities for self-improving, AI-driven biofoundries that seamlessly integrate in silico design and validation with miniaturised and automated in vitro testing. This review surveys the technologies shaping AI-driven synthetic biology, highlighting their convergence with automation, digitisation and miniaturisation to enable fully autonomous biofoundries that unify computational design, automated fabrication and data-driven learning within a single adaptive framework.
Next-generation sequencing (NGS) has revolutionized the field of genomics by providing rapid, high-throughput, and cost-effective platforms for analyzing genomes, transcriptomes, and epigenomes. Its application spans cancer genomics, infectious disease research, rare disease diagnostics, and precision medicine, enabling comprehensive detection of genetic variants and their functional implications. The advent of advanced methods such as single-cell sequencing, long-read technologies, and multi-omics integration has further expanded the scope of NGS, allowing unprecedented insights into cellular heterogeneity, structural variations, and systems-level interactions. These innovations have facilitated the identification of actionable mutations, supported biomarker discovery, and enhanced our understanding of complex biological processes in both research and clinical contexts. Despite these advancements, several challenges remain. The vast volume of sequencing data necessitates robust computational infrastructures for storage, processing, and interpretation. Sequencing error rates, though improving, continue to impact variant detection and clinical reliability. Ethical concerns regarding privacy, data sharing, and equitable access are also critical barriers that must be addressed, particularly in resource-limited settings. Moreover, translating genomic findings into clinically actionable outcomes requires standardized frameworks and interdisciplinary collaboration among clinicians, geneticists, and bioinformaticians. Looking ahead, the integration of artificial intelligence, machine learning, and automation into NGS data pipelines promises to significantly enhance accuracy, scalability, and clinical utility. 
These emerging innovations, coupled with global efforts to ensure accessibility and ethical implementation, position NGS as a cornerstone of precision medicine, paving the way for individualized treatment strategies and transformative improvements in healthcare delivery.
The gut microbiome supports digestion, immunity, and metabolism; its imbalance (dysbiosis) drives inflammation and metabolic dysfunction, contributing to chronic diseases such as diabetes, cardiovascular disease, inflammatory bowel disease, and autoimmune disorders. Medicinal plants provide a wide range of phytochemicals (such as polyphenols, flavonoids, alkaloids, and saponins), which reach the colon and undergo bidirectional interactions with gut microbes, acting as potential microbiome modulators and as substrates for biotransformation into bioactive metabolites. This structured narrative review synthesises evidence from peer-reviewed studies indexed in PubMed, Scopus, and Web of Science over the last 10 years on the role of medicinal plants in microbiome-mediated chronic disease modulation. This literature is organised into three mechanistic axes: (i) perturbations, defined here as measurable shifts in microbial diversity or taxonomic composition relative to a baseline or healthy reference state, together with beneficial taxa enrichment; (ii) alterations in microbial metabolite output, especially short-chain fatty acids (SCFAs) and other immunometabolic mediators; and (iii) downstream host metabolic and immune signalling. Rather than offering broad descriptive summaries, the literature is organised using an axis-based mechanistic framework, highlighting key translational constraints, such as botanical heterogeneity, dose/formulation variability, and inconsistent microbiome endpoint standardisation, that must be addressed to strengthen human evidence and clinical relevance. Illustrative microbiome-mediated processes involve botanicals such as turmeric (curcumin), ginseng (ginsenosides), and green tea (catechins), though evidence strength varies by study design.
Future progress requires standardised phytochemical characterisation, microbiome-stratified trials, and integration of multi-omics with artificial intelligence analytics to enhance mechanistic insight, identify responders, and enable personalised plant-based microbiome therapies.
Respiratory syncytial virus (RSV) remains a major cause of severe acute respiratory infections across the life course, particularly in infants, older adults, and immunocompromised individuals. For decades, clinical management relied almost exclusively on supportive care, while ribavirin, the only licensed antiviral, offered limited therapeutic benefit. The recent introduction of prefusion F (pre-F)-based vaccines and long-acting monoclonal antibodies has reshaped RSV prevention and represents the most significant advance since the discovery of the virus. Nevertheless, effective pharmacological treatment of established infection continues to be an unmet need, and the burden of RSV-associated hospitalizations and mortality persists worldwide. This review critically synthesizes current and emerging RSV therapeutic strategies from a pharmacological and translational perspective, integrating approved interventions with emerging antiviral pipelines. Licensed vaccines and monoclonal antibodies have demonstrated high efficacy in preventing lower respiratory tract disease; however, their impact is constrained by limited access and uptake, as well as the absence of complementary direct-acting antivirals (DAAs). Investigational agents targeting the fusion protein and the N/L replication complex have shown potent antiviral activity, but clinical trials have highlighted challenges related to the timing of administration, host immunity, and resistance selection. Advances in structural biology, air-liquid interface models, high-throughput screening, and artificial intelligence are accelerating the identification of new molecular targets and host-directed strategies. Overall, RSV control will require an integrated therapeutic framework in which vaccines and monoclonal antibodies prevent severe disease, while early-administered DAAs and resistance-aware combination strategies treat established infection and reduce breakthrough disease in high-risk populations.