Community Paramedicine (CP) is an evolving model of care designed to improve access to healthcare, particularly in rural communities where disparities are pronounced. However, the perspectives of Kansas Emergency Medical Services (EMS) professionals regarding CP have not been previously studied. Authors of this study aimed to inform CP implementation in Kansas. A cross-sectional survey was administered to EMS professionals in rural Kansas communities with clinical sites affiliated with the Summer Training Option in Rural Medicine (STORM), including EMS and fire departments across 19 communities. The survey included Likert-scale and open-ended questions assessing EMS professionals' understanding and perceptions of CP. The study yielded 59 completed surveys and 9 incomplete surveys. Across all certification levels, respondents reported the highest confidence in assessment skills and the lowest confidence in referral activities. Paramedics and Advanced EMTs reported at least moderate confidence across more skill areas than EMTs. Reported concerns that may discourage CP certification included training and certification requirements, limited interest, funding and staffing constraints, perceived mismatch with the EMS mission, workload and age, liability, compensation, patient education, and organizational culture and morale. Most respondents (86%) believed that implementing CP would at least moderately improve healthcare access in their communities. EMS professionals reported confidence in many skills relevant to CP, suggesting that a CP scope of practice may align with existing EMS competencies, potentially minimizing the need for substantial additional training. Targeted education in prevention, public health, and referral skills may strengthen CP programs. Ensuring fair compensation, accessible training opportunities, and financial support for CP program development will be critical to successful implementation.
Evaluating resident physicians is essential for resident development and patient safety. Fear of retaliation from residents may be a barrier to faculty completing resident physician evaluations. This study examined family medicine program directors' perceptions of fear of retaliation from resident physicians as a barrier to faculty completing honest, high-quality evaluations. The study was conducted as part of the 2024 Council of Academic Family Medicine Educational Research Alliance study of family medicine residency program directors. The 10-item survey assessed program directors' perceptions of faculty fear of retaliation, the impact of this fear, and rates of retaliation occurring in their programs in the last 3 years. The response rate was 45.39% (320/705). More than half (56.4%, 172/305) perceived that faculty in their programs are reluctant to give critical feedback on evaluations; nearly half (48.9%, 150/305) believed that fear of retaliation is a barrier. Fear of a reciprocal negative evaluation (34.5%, 106/305) and fear of formal complaints (38.9%, 119/305) were prevalent. Inadequate documentation was cited as contributing to a failure to remediate a resident in 19.8% (61/307) of programs and to a failure to dismiss a resident in 11.7% (36/306). Formal complaints against an evaluator or program occurred in 18.6% (57/307) of programs, and civil lawsuits were filed in 5.2% (16/306) in the preceding 3 years. Family medicine program directors perceive fear of retaliation from residents as a barrier to faculty completing honest, high-quality evaluations. Formal complaints and even civil lawsuits against evaluators or programs are not uncommon.
Uncontrolled hypertension remains a leading cause of preventable cardiovascular morbidity and mortality. Despite frequent clinic visits, timely follow-up for elevated blood pressure (BP) is often inconsistent, delaying medication adjustments. At the University of Kansas School of Medicine Center for Health Care (CHC) Internal Medicine Clinic, BP rechecks were often delayed for months after medication changes, contributing to suboptimal control. To address this gap, we implemented a standardized two-week BP recheck workflow supported by provider education, nursing engagement, and home BP monitoring. The aim was to improve timely BP rechecks and achieve a clinic-wide BP control rate at or above the 75th national percentile. This quality improvement initiative was implemented from August 2023 to March 2024, with outcomes tracked longitudinally. A standardized hypertension follow-up and medication titration workflow, modeled after Kaiser Northern California guidelines, was introduced. Clinicians documented BP targets and follow-up plans. Patients with repeated BP >130/80 mmHg were offered self-measured BP (SMBP) monitoring, using shared decision-making to determine home or nursing follow-up. Participants received education, BP logs, and instructions, and BP readings were documented in REDCap® or the patient portal. Educational sessions reinforced workflow consistency and care coordination. Same-visit and two-week BP rechecks showed modest improvement (14% and 4%, respectively), but hypertension control improved substantially, increasing from 66% in 2023 to 76% by October 2025, exceeding the national average of 62%. Improvements correlated with ongoing staff education and reinforcement of the standardized workflow. A standardized, team-based workflow incorporating home BP monitoring and a structured treatment algorithm improved BP control, demonstrating a scalable model for chronic disease management.
The use of large language models and natural language processing (NLP) in medical education has expanded rapidly in recent years. Because of the documented risks of bias and errors, these artificial intelligence (AI) tools must be validated before being used for research or education. Traditional and novel conceptual frameworks can be used. This study aimed to validate the application of an NLP method, a bidirectional encoder representations from transformers (BERT) model, to identify the presence and patterns of sentiment in end-of-course evaluations from M3 (medical school year 3) core clerkships at multiple institutions. We used the Patino framework, designed for the use of artificial intelligence in health professions education, as a guide for validating the NLP. Written comments from de-identified course evaluations at four schools were coded by teams of two human coders, and human-human interrater reliability statistics were calculated. Humans identified key terms to train the BERT model. The trained BERT model predicted the sentiments of a set of comments, and human-NLP interrater reliability statistics were calculated. A total of 364 discrete comments were evaluated in the human phase. The proportions of positive (30.6%-61.0%), negative (4.9%-39.5%), neutral (9.8%-19.0%), and mixed (1.7%-27.5%) sentiments varied by school. Human-human and human-AI interrater reliability also varied by school and were comparable to each other. Several conceptual frameworks offer models for validation of AI tools in health professions education. A BERT model, with training, can detect sentiment in medical student course evaluations with an interrater reliability similar to human coders.
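The interrater reliability statistic described above is commonly Cohen's kappa, which corrects raw agreement for agreement expected by chance. A minimal sketch of the computation follows; the sentiment labels are hypothetical illustrations, not study data.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    # Expected agreement if the two raters labeled items independently
    expected = sum(freq_a[label] * freq_b[label]
                   for label in set(rater_a) | set(rater_b)) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical sentiment labels for 10 comments (illustrative only)
human = ["pos", "pos", "neg", "neutral", "pos", "mixed", "neg", "pos", "neutral", "neg"]
model = ["pos", "pos", "neg", "neutral", "neg", "mixed", "neg", "pos", "pos", "neg"]
kappa = cohens_kappa(human, model)  # observed agreement 0.80, expected 0.31
```

The same function applies to human-human and human-model pairs, which is what makes the two reliabilities directly comparable.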
Inpatient learning environments with frequent transitions can decrease learners' valuation of relationship-building and can contribute to a lack of ownership in patient care. Faculty involved with longitudinal clerkships have stronger relationships with students and improved job satisfaction. This project aimed to assess preceptors' attitudes before and after reformatting the structure of the clinical component of the Obstetrics & Gynecology rotation to provide continuity of students in private clinics. This project utilized cross-sectional surveys administered to a convenience sample of volunteer faculty affiliated with The University of Kansas School of Medicine-Wichita, Wichita, Kansas, Department of Obstetrics and Gynecology. Survey questions addressed demographics and attitudes regarding student integration into clinical settings and continuity of educational interactions. The survey was administered before standardizing the clerkship (pre-survey) and 6 months after implementation (post-survey). Fifteen pre-surveys and six post-surveys were completed. Compared to 40% (n = 6) in the pre-survey, 50% of post-survey respondents (n = 3) agreed with the statement that having the same student in clinic would decrease their clinic efficiency. More respondents disagreed that having the same student in clinic required a large time commitment, 83.3% (n = 5) post-survey compared to 46.7% (n = 7) pre-survey. All respondents agreed that having the same student during the entirety of a clerkship allowed the faculty member to better integrate them into the care team. Continuity of medical students in clinic may subjectively reduce clinic workflow efficiency; however, students were better integrated into the care team and clinical activities, and preceptors reported that it did not require a large time commitment.
Rural Kansas accounts for one-quarter of the state's population and births. Living in rural areas is associated with delayed prenatal care, adverse birth outcomes, and higher infant mortality. Limited access to obstetric services, particularly in maternity care deserts (MCDs), defined as counties without hospitals or clinicians offering obstetric care, contributes to these disparities. Authors of this study assessed obstetric care availability in rural Kansas and compared findings with 2016 data. In this cross-sectional study, rural hospitals were defined as those located in counties with <39.9 people per square mile using Kansas Department of Health and Environment (KDHE) data. Ninety-three hospitals were contacted by phone between August and October 2023. A survey, replicated from a 2016 study, assessed obstetric clinicians' availability, attrition, and anticipated retirements. Data were analyzed using descriptive statistics and Wilcoxon signed-rank tests. Of 93 hospitals, 66 (71.0%) responded. Among these, 38 (57.6%) reported no obstetric services and 28 (42.4%) offered services. Nine hospitals (32.1%) anticipated losing clinicians within five years, including one (3.5%) expecting complete loss. Among 28 hospitals with data from both years, 50.0% lost ≥1 provider and 21.4% lost all. Median provider counts declined from 5 (IQR 3-6.5) to 4 (IQR 2-5; p = 0.038). MCDs have continued to expand. Obstetric care access in rural Kansas has declined, with expanding MCDs. These trends threaten maternal and neonatal outcomes and underscore the need for targeted strategies to sustain rural obstetric services.
Opioid poisoning remains a major public health challenge in the United States. Intranasal naloxone has expanded community capacity to respond to opioid poisoning events; however, outcomes from distribution programs remain incompletely understood. Authors of this study examined naloxone's impact on poisoning events in Kansas, informed by recipients' perceptions and experiences. A cross-sectional, electronic survey was conducted in partnership with DCCCA, a nonprofit organization. Eligible participants included individuals and organizational representatives who (1) obtained a free naloxone kit from DCCCA Inc. between November 2021 and April 2025 and provided an email address, or (2) accessed a naloxone vending machine in Wichita, Kansas, between May and June 2025. The survey assessed experiences with opioid poisoning reversals, post-reversal care, confidence, training, barriers to carrying naloxone, and harm reduction perceptions. Of 767 respondents, 56.8% were individuals, 23.8% were organizational representatives, and 13.4% were both. Overall, 32.1% reported witnessing an opioid poisoning, and 14.8% reported administering naloxone at least once. Nearly all reported administrations resulted in survival. After reversal, 73.3% of recipients sought medical care; perceived lack of necessity (70.0%) was the most common reason for declining care. Emergency preparedness (58.9%) was the most common reason for obtaining naloxone. Forgetfulness (27.0%) was the most frequently reported barrier to carrying naloxone. Among organizational representatives, 63.5% reported offering naloxone training, while 28.5% distributed naloxone kits. Naloxone distribution programs may support opioid overdose harm reduction by equipping laypersons to respond to poisoning events. However, barriers remain, particularly related to post-reversal medical care and consistent naloxone carriage.
The optimal timing for basal insulin initiation in diabetic ketoacidosis (DKA) remains unclear. British guidelines endorse early basal insulin (EBI), while the American Diabetes Association emphasizes overlap duration with intravenous insulin, without mention of timing. This meta-analysis evaluates whether EBI administration improves clinical outcomes in adults with DKA. A systematic review and meta-analysis were performed according to the PRISMA guidelines. Databases searched included MEDLINE, Embase, Scopus, Web of Science, and the Cochrane Central Register of Controlled Trials through December 11, 2024. Inclusion criteria were English-language randomized controlled trials (RCTs) or observational studies evaluating EBI in adult DKA patients. Non-human studies, conference abstracts, and case reports were excluded. The primary outcome for this study was hospital length of stay (LOS). Additional outcomes included intensive care unit (ICU) LOS, time to DKA resolution, hypoglycemia, and rebound hyperglycemia between EBI and usual care. Risk of bias was assessed using the Cochrane and Newcastle-Ottawa tools. Grading of Recommendations Assessment, Development, and Evaluation was performed to evaluate the quality of evidence. From 1214 identified studies, eight (4 RCTs, 4 observational) met inclusion criteria, for a total of 247 patients in the EBI group and 552 patients in the control group. No significant difference in hospital LOS was found (mean difference -11.17 h; 95% CI: -29.91 to 7.56). ICU LOS, time to DKA resolution, incidence of hypoglycemia, and rebound hyperglycemia also did not demonstrate any significant differences between groups. Significant heterogeneity existed across studies for most outcomes. All studies had a high risk of bias, and the quality of evidence was very low.
EBI did not result in significant differences in hospital LOS, ICU LOS, or time to DKA resolution; however, there were no increased adverse events with EBI. Current studies of EBI have significant limitations. Future research should focus on developing high-quality RCTs.
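The pooled mean difference reported above rests on inverse-variance weighting of per-study effects. A minimal fixed-effect sketch follows; the per-study values are hypothetical, and given the significant heterogeneity noted above, a random-effects model (which adds a between-study variance term to each weight) would typically be preferred in practice.

```python
import math

def pooled_mean_difference(studies):
    """Fixed-effect inverse-variance pooled mean difference with 95% CI.
    studies: list of (mean_difference, standard_error) tuples."""
    weights = [1 / se ** 2 for _, se in studies]
    pooled = sum(w * md for (md, _), w in zip(studies, weights)) / sum(weights)
    se_pooled = math.sqrt(1 / sum(weights))
    return pooled, (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)

# Hypothetical per-study differences in hospital LOS (hours); illustrative only
studies = [(-14.0, 8.0), (-6.0, 10.0), (4.0, 12.0)]
md, (lo, hi) = pooled_mean_difference(studies)
# A 95% CI that crosses zero indicates no statistically significant difference
```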
Comorbidities, common in patients with advanced breast cancer (ABC), may impact survival outcomes and health-related quality of life (HRQoL). Here, we report comorbidity-based subgroup analyses of patients from POLARIS (NCT03280303), a prospective, observational study of patients with hormone receptor-positive/human epidermal growth factor receptor 2-negative (HR+/HER2-) ABC who were treated with palbociclib plus endocrine therapy in routine clinical practice in North America. Real-world progression-free survival (rwPFS) and overall survival (OS) were evaluated by line of therapy (1 LOT, ≥ 2 LOT), Charlson Comorbidity Index (CCI) score (0, 1-2, and 3+), and common comorbid disorder categories (cardiovascular, psychiatric, metabolic and nutritional, and blood and lymphatic disorders). HRQoL was assessed using the global health status (GHS)/QoL subscale of the European Organization for Research and Treatment of Cancer Quality-of-Life Questionnaire Core 30. From January 2017 to October 2019, 1250 patients (median age, 64 years) initiated palbociclib-based therapy. Median rwPFS (95% CI) for patients with CCI scores of 0, 1-2, and 3+ were 20.3 (17.1-24.8), 24.2 (19.4-29.5), and 16.8 (11.2-20.8) months in 1 LOT and 13.7 (6.2-19.7), 13.2 (9.4-17.5), and 14.9 (8.0-21.9) months in ≥ 2 LOT, respectively. Median OS durations (95% CI) were 48.8 (37.3-not estimable [NE]), not reached (43.0-NE), and 34.8 (29.1-44.1) months in 1 LOT and 39.0 (30.6-50.5), 37.9 (26.5-42.6), and 31.6 (20.9-45.2) months in ≥ 2 LOT, respectively. Patients with blood and lymphatic disorders had shorter rwPFS and OS than those with other common comorbidities. GHS/QoL was maintained irrespective of CCI score. Patients with a CCI score of 0 had clinically meaningful and statistically significantly higher mean GHS/QoL scores than patients with a CCI score of 3+ at each assessment.
Patients with HR+/HER2- ABC receiving palbociclib with higher comorbidity burden, especially blood and lymphatic system disorders, had poorer clinical outcomes. GHS/QoL was preserved regardless of comorbidity burden. Clinical trial number: NCT03280303.
Use of complementary and alternative medicine (CAM) among cancer patients has increased, yet limited research has examined how patient behaviors, clinician perspectives, and communication patterns intersect within the same clinical environment. This scoping review examines the prevalence and types of CAM used by cancer patients, evaluates oncologists' knowledge and attitudes toward CAM, and identifies communication factors shaping disclosure and clinical decision-making in conventional oncology care. Following PRISMA-ScR guidelines, this review included peer-reviewed studies assessing CAM use among adult cancer patients, clinician perspectives, or patient-physician communication. Eligible studies included surveys, observational studies, systematic reviews, and clinical reports. Included studies demonstrated that approximately 40% of cancer patients reported using CAM, with common modalities including herbal products, nutraceuticals, probiotics, and mind-body therapies. Non-disclosure rates varied considerably, with many patients refraining from discussing CAM use because of concerns about negative provider reactions or limited consultation time. On the clinician side, key barriers included limited formal training in CAM and uncertainty regarding the quality of supporting evidence. Evidence also demonstrated communication gaps and discordance between patient motivations and clinician concerns related to treatment safety and herb-drug interactions. CAM use remains prevalent in oncology and is shaped by patient beliefs, cultural factors, and clinical communication. Variability in study methods and definitions limits cross-study comparisons. Enhancing clinician education, fostering open communication, and conducting institutional assessments may improve the safe and informed integration of CAM into oncology practice.
Myasthenia gravis (MG) is a rare autoimmune neurologic disorder with a heterogeneous disease presentation. The objective of this targeted literature review was to characterize the burden of disease and unmet treatment needs in patients with MG. Scientific articles published in English between May 4, 2013, and November 24, 2025, were identified in the PubMed, Medline, Embase, and Cochrane Library databases using a pre-defined Boolean search strategy. Titles and abstracts were screened for information on clinical presentation, pathology, and diagnostic considerations for MG; burden of disease, including epidemiologic, clinical, humanistic, and economic burden; and treatments, including treatment guidelines. The analysis included 318 records. Population-based estimates of MG incidence published from 2007 onwards ranged from 0.3 to 6.1 per 100,000 person-years, and prevalence estimates ranged from 2.2 to 58.6 per 100,000 persons. The clinical and humanistic burden of MG remains high, with patients reporting inadequate symptom control, fatigue, poor quality of life, and dissatisfaction with their current treatment. Greater disease severity was associated with reduced quality of life and poorer mental health. Medical costs varied by region, and key drivers of direct medical costs include hospitalization and treatment of exacerbation or myasthenic crisis. The burden of MG remains high, despite the availability of novel treatments. Studies are needed to better characterize the current burden of MG in the context of newer treatment options and to explore how the disease burden may be reduced for patients.
To validate a custom smartphone application for at-home visual acuity (VA) measurement in children. A total of 452 children aged 3-17.5 years participated. Certified examiners measured in-office test-retest VA (logMAR) using gold-standard Amblyopia Treatment Study HOTV (3-to-6-year-olds, younger cohort) or electronic Early Treatment of Diabetic Retinopathy Study (7-to-17.5-year-olds, older cohort) protocols at 3-4.5 m and app-based VA at 1.5 m. Caregivers measured at-home app-based VA at 1.5 m. Comparing at-home app-based with gold-standard VA, in eyes 20/40 or better, 95% (143/151) and 93% (91/98) of the younger and older cohorts were within 2 lines, respectively (mean differences: younger = -0.03, older = -0.04; 95% limits-of-agreement half-width (LOA): younger = ±0.26, older = ±0.22). In eyes 20/50 or worse, 66% (42/64) and 75% (76/101) of the younger and older cohorts were within 2 lines, respectively (mean differences: younger = 0.11, older = 0.13, LOA: younger = ±0.50, older = ±0.51). Comparing in-office app-based VA with gold-standard VA, in eyes 20/40 or better, 98% (160/164) and 94% (99/105) of the younger and older cohorts were within 2 lines, respectively (mean differences: younger = -0.03, older = -0.03; LOA: younger = ±0.22; older = ±0.24). In eyes 20/50 or worse, 85% (60/71) and 91% (101/111) of the younger and older cohorts were within 2 lines, respectively (mean differences: younger = 0.04; older = 0.04; LOA: younger = ±0.39; older = ±0.24). For gold-standard test-retest, in eyes 20/40 or better, 99% (163/164) and 99% (104/105) of the younger and older cohorts had retest within 2 lines, respectively (mean differences: younger = 0.00; older = 0.01; LOA: younger = ±0.17; older = ±0.11). For 20/50 or worse, 92% (66/72) and 100% (111/111) in the younger and older cohorts were within 2 lines, respectively (mean differences: younger = 0.01; older = 0.02; LOA: younger = ±0.35; older = ±0.15). 
Our app demonstrated good concordance with the gold standard at home and in the office for eyes with VA of 20/40 or better. However, concordance decreased considerably for eyes with VA 20/50 or worse, particularly at home.
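The mean differences and 95% limits-of-agreement half-widths reported above follow the Bland-Altman approach for comparing two measurement methods. A minimal sketch follows; the paired logMAR values are hypothetical illustrations, not study data.

```python
import math

def bland_altman(measure_a, measure_b):
    """Mean difference and 95% limits-of-agreement half-width
    (1.96 x sample SD of the paired differences)."""
    diffs = [a - b for a, b in zip(measure_a, measure_b)]
    n = len(diffs)
    mean_diff = sum(diffs) / n
    sd = math.sqrt(sum((d - mean_diff) ** 2 for d in diffs) / (n - 1))
    return mean_diff, 1.96 * sd

# Hypothetical paired logMAR acuities: app-based vs gold-standard, 6 eyes
app = [0.10, 0.00, 0.30, 0.20, 0.40, 0.10]
gold = [0.10, 0.10, 0.20, 0.20, 0.30, 0.20]
mean_diff, loa_half_width = bland_altman(app, gold)
```

A wider half-width, as seen in the 20/50-or-worse eyes above, means individual at-home readings can diverge further from the gold standard even when the mean difference is small.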
Transthyretin amyloid cardiomyopathy (ATTR-CM) is a progressive disease. With the availability of multiple disease-modifying therapies, improved monitoring strategies are needed to optimize management. Integrating clinical parameters, biomarkers, functional assessments, and health-related quality of life (HR-QOL) via patient-reported outcomes provides a more accurate characterization of disease progression in ATTR-CM. This is a prospective observational study of patients with ATTR-CM who were followed at Columbia University Irving Medical Center. Disease progression was defined across 4 domains: (1) clinical parameters (worsening New York Heart Association [NYHA] class or outpatient worsening heart failure, characterized by increased oral loop diuretics); (2) biomarkers (NT-proBNP increase >700 pg/mL and >30%, or estimated glomerular filtration rate decline >20%); (3) functionality (6-minute walk test [6MWT] decline >35 m or >5%, or Short Physical Performance Battery [SPPB] decline ≥1 point); and (4) HR-QOL (Kansas City Cardiomyopathy Questionnaire-overall score [KCCQ-OS] decrease >5 points or SF-36v2 [a short-form health survey] decline >0.5 × standard deviation [SD]). The combined endpoint included all-cause mortality and cardiovascular hospitalization. Of the 158 patients (91% male, median age 78.5 years) who were included, most were in early stages (68% National Amyloidosis Centre stage 1), and all were taking disease-modifying therapy. At follow-up, 23% had clinical worsening (7% NYHA, 11% outpatient worsening heart failure, and 5% both). Biomarker worsening occurred in 21.5% (9% NT-proBNP rise, 9% estimated glomerular filtration rate decline, 4.5% both). Functional decline was observed in 54.9% (22.9% 6MWT, 21% SPPB, 11.1% both). HR-QOL worsening was reported in 44% (15% KCCQ-OS, 16% SF-36v2, 13% both). In total, 69% of patients showed disease progression in at least 1 domain.
A progression score (0-4 points) was created by assigning 1 point per worsening domain; the score was associated with increased risk of the combined endpoint (HR = 2.54 per point; 95% CI: 1.68-3.85; P < 0.001). Disease progression is common in patients with ATTR-CM who are taking disease-modifying therapy. Integrating a multimodal assessment provides a comprehensive framework for monitoring progression across domains, and a higher progression score was associated with an increased risk of adverse events.
The β-D-glucan (BDG) serum assay is a screening tool used in the diagnosis and management of invasive fungal infections (IFI). False-positive results have been reported, including in patients who have recently received intravenous albumin prior to testing. Authors of this study examined the association between timing of albumin administration and BDG assay results. We conducted a retrospective cross-sectional study of 4,599 electronic health records at The University of Kansas Health System (TUKHS). Patients were eligible if they were ≥18 years of age and had a BDG serum assay performed between 2010 and 2020. Demographic data, comorbidities, albumin administration, and IFI status were extracted and recorded in REDCap, a HIPAA-compliant database. The final analytic cohort included 2,061 patients. Logistic regression was used to assess the association between time from albumin administration to BDG testing and false-positive results. Statistical analyses were performed using R version 4.5.2. A total of 255 patients received albumin within two weeks prior to BDG testing, of whom 109 (42.7%) had a positive BDG result. Among these positive results, 83 were classified as false positives (false-positive rate: 76.1%). Logistic regression demonstrated the highest odds of a false-positive result when albumin was administered 6-8 days prior to testing (OR 1.22; 95% CI 0.51-2.91). Albumin administration within days preceding BDG testing may be associated with an increased risk of false-positive results, potentially leading to unnecessary diagnostic evaluation and treatment.
The present retrospective diagnostic accuracy study aimed to evaluate the performance of an AI-based system for automated detection of teeth, caries, implants, restorations, and fixed prostheses on bitewing radiographs. A total of 407 bitewing radiographs from 315 adult patients were analyzed using an AI system developed by VELMENI Inc. and compared with reference annotations at the individual tooth level provided by two oral and maxillofacial radiologists. Every tooth was encoded for the absence (0) or presence (1) of radiographic findings: caries, restorations, fixed prosthesis, and implants. Cohen's kappa (κ) with 95% bootstrap confidence intervals was used to assess inter-rater reliability. The AI system's diagnostic accuracy was evaluated for sensitivity and specificity using human consensus reference standards. The annotated dataset consisted of 2,829 tooth-level observations. The two radiologists showed the highest agreement for prosthesis (κ = 0.925) and restoration (κ = 0.872) detection, lower agreement for caries detection (κ = 0.804), and the lowest for implant detection (κ = 0.726). The AI system showed substantial agreement with the human observers for restoration (κ = 0.812-0.871) and prosthesis detection (κ = 0.882-0.940), moderate agreement for caries (κ = 0.454-0.508), and the highest agreement for implant detection (κ = 0.763-0.974). After filtering against the human consensus reference standard, the AI system showed high sensitivity for implant (1.000), prosthesis (0.984), restoration (0.974), and caries (0.972) detection. The AI system showed high specificity for implant (1.000), prosthesis (0.984), and restoration (0.936) detection, but slightly lower specificity for caries detection (0.842). The AI system demonstrated diagnostic performance comparable to that of oral and maxillofacial radiologists for detecting multiple dental findings on bitewing radiographs, including restorations, prostheses, and implants, with slightly lower performance for caries detection. These findings support the potential role of the AI system as a clinical adjunct to improve efficiency and consistency in routine dental imaging interpretation.
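The sensitivity and specificity figures above reduce to ratios over a tooth-level 2 x 2 confusion table against the consensus reference. A minimal sketch follows; the counts are hypothetical, chosen only to illustrate the arithmetic, and are not taken from the study dataset.

```python
def sensitivity_specificity(tp, fp, fn, tn):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP),
    counting each tooth against the consensus reference standard."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical tooth-level caries counts (illustrative only)
sens, spec = sensitivity_specificity(tp=175, fp=98, fn=5, tn=522)
```

Because healthy teeth greatly outnumber carious ones, even a modest false-positive count pulls specificity down noticeably, which is the pattern reported for caries detection above.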
The healthcare industry is a major contributor to global carbon emissions. In the United States, a substantial portion is linked to solid waste, with a single hospital bed generating 29 pounds of waste per day, approximately 30% of which is attributable to the operating room. Much of this waste results from improper disposal of items requiring specific processing. Authors of this study assessed obstetrician-gynecologists' knowledge, practices, and perceptions regarding proper waste management. Authors conducted a cross-sectional survey among practicing obstetrician-gynecologists listed with the Kansas Board of Healing Arts or affiliated with the University of Kansas School of Medicine. Clinically inactive physicians were excluded. Survey questions addressed demographics, knowledge of appropriate surgical waste disposal, waste management practices, and perceptions regarding waste management. IRB approval was obtained. Categorical variables were reported as frequencies and percentages. Of 46 respondents, most agreed they understood the environmental impact of medical waste (81.1%, 30/37), and 86.5% (32/37) expressed concern about their personal contributions to the climate crisis. Proper waste disposal was considered important by physicians (96.9%, 31/32) and, in respondents' view, by their patients (89.2%, 33/37). Regarding their primary surgical facility, 25.6% (11/43) reported being unaware of the facility's waste management plan, and 34.9% (15/43) reported their facility did not recycle. All respondents incorrectly identified items that should be placed in red biohazard waste bags for chemotherapy patients. While most physicians are concerned about the environmental impact of their medical practice, education and institutional resources related to waste management do not appear to match that concern. These findings highlight a significant opportunity to improve waste management education and practices within healthcare facilities.
Kansas faces a critical pediatric mental health (MH) workforce shortage, leaving many children without timely access to specialty care. As a result, pediatric primary care physicians (PCPs) are increasingly responsible for MH care despite limited training, time, and system support. Understanding PCP perspectives is essential to developing interventions that expand pediatric MH capacity. A qualitative focus group was conducted at the Kansas Chapter of the American Academy of Pediatrics' 2025 Progress in Pediatrics Conference in Wichita, Kansas. Practicing physicians participated in guided discussions exploring experiences with pediatric MH, confidence, barriers, and recommendations. Sessions were recorded, transcribed, and analyzed using thematic analysis informed by grounded theory, with independent coding and consensus-based theme development. Nine resident physicians participated. They described growing awareness and acceptance of pediatric MH needs. Helpful supports included co-located MH professionals, electronic health record prompts, required training, and programs such as KAAP and KSKidsMAP. Persistent barriers included limited time, referral challenges, poor continuity, and limited knowledge of community resources. Parents were viewed as essential partners, though they sometimes hindered care because of stigma or communication difficulties. Residents felt confident managing anxiety and depression but preferred referral for more complex conditions. Participants emphasized the need for streamlined referrals, expanded training, and stronger collaborative care models. Findings highlight ongoing system- and knowledge-related barriers and reinforce the need for programs that support PCPs in addressing pediatric MH gaps in Kansas.
Respiratory syncytial virus (RSV) is a leading cause of hospitalization among infants worldwide. Passive immunization remains central to prevention. Palivizumab has been used for over two decades, whereas nirsevimab, approved in 2023, provides extended protection with a single dose. However, comparative post-marketing safety data between these agents remain limited. Authors compared post-marketing adverse event profiles of nirsevimab and palivizumab using the U.S. Food and Drug Administration Adverse Event Reporting System (FAERS). FAERS reports from July 2023 to March 2025 were analyzed. Reports were included if patients were ≤2 years of age and either nirsevimab or palivizumab was listed as the primary suspect drug. Duplicate and incomplete records were excluded. Adverse events were classified using Medical Dictionary for Regulatory Activities (MedDRA) System Organ Class (SOC) categories. Group comparisons were performed using chi-square or Fisher's exact tests as appropriate. Logistic regression was used to identify SOCs disproportionately associated with each drug. A total of 2,021 reports met inclusion criteria (nirsevimab = 641; palivizumab = 1,380). Hospitalization was reported in 50% of nirsevimab cases and 70% of palivizumab cases, while deaths were reported in 5.9% and 9.8% of cases, respectively. Logistic regression identified three SOCs reported more frequently with nirsevimab: injury, poisoning, and procedural complications (OR 11.9, p < 0.0001); nervous system disorders (OR 2.9, p = 0.029); and skin and subcutaneous tissue disorders (OR 4.5, p = 0.002). No SOCs were reported more frequently with palivizumab. Nirsevimab and palivizumab demonstrated broadly comparable post-marketing safety profiles. Higher reporting of administration-related adverse events for nirsevimab likely reflects its recent introduction and reporting patterns rather than inherent differences in safety.
Adolescent and young adult (AYA) kidney transplant recipients (KTR) are susceptible to risk-taking behaviors that peak during transfer from pediatric to adult healthcare, resulting in poorer outcomes. Professional guidelines support healthcare transition (HCT) practices to mitigate this risk; however, HCT perspectives and practices among providers, especially adult providers caring for KTR in the US, have not been widely studied. An online survey of American Society of Transplantation (AST) members was conducted targeting both adult providers (AP) and pediatric providers (PP). The survey was completed by 135 respondents (predominantly US), including 61 PP and 74 AP. US respondents who disclosed their center ID represented 30% of all programs. Providers agreed that HCT was important (PP 97%, AP 94%) and a shared responsibility (PP 91% and AP 84%). Respondents to our survey frequently worked within transition programs (PP 70% vs. AP 48%), but only a minority reported a formal HCT policy to guide them (PP 43%, AP 21%). Multidisciplinary transition clinics were rare (PP 27%, AP 23%). Only 64% of PP included targeted HCT education either as part of a routine visit or a HCT session. Perceived barriers to providing HCT interventions were similar in both groups, including a lack of provider resources, outcomes tracking, provider communication, and patient/caregiver feedback. Providers recognize the importance of HCT and the joint responsibility of PP and AP but have difficulty following best practices. A multipronged approach will be necessary to address barriers. Our survey reminds us of the gap that persists in providing AYAs with a seamless transfer from pediatric to adult care.
Advanced age may influence the benefits derived from transcatheter edge-to-edge repair for primary mitral regurgitation (MR). We explored the impact of age on clinical, echocardiographic, functional, and patient-reported outcomes after contemporary mitral transcatheter edge-to-edge repair. Patients with primary MR were identified from a pooled, patient-level analysis of the EXPAND and EXPAND G4 studies. Key outcomes included 30-day and 1-year major adverse events, MR severity, New York Heart Association class, Kansas City Cardiomyopathy Questionnaire - Overall Summary score, and mortality. A total of 854 patients were included. The oldest quartile ranged in age from 86 to 99 years, while the younger quartiles ranged from 21 to 85 years. The 30-day major adverse events rate was not different in the oldest compared with the younger quartiles (5.7% versus 4.3%, P=0.46). At 1 year, MR ≤1+ was present in 81% of the oldest versus 88% of the younger patients (P=0.08). LV end-diastolic volumes improved to a similar extent (oldest: -12.0±28.8 mL, P=0.003; younger: -14.1±31.5 mL, P<0.0001; P=0.34 between groups). Functional status and Kansas City Cardiomyopathy Questionnaire - Overall Summary score improved significantly across age groups, without differences according to age. Adjusted 1-year mortality was higher in the oldest compared with the younger quartiles (adjusted hazard ratio 1.74 [95% CI, 1.02-2.98], P=0.04). Although 1-year mortality was higher among the very elderly, mitral transcatheter edge-to-edge repair using next-generation devices was associated with low complication rates and substantial improvements in ventricular remodeling, functional status, and quality of life across age groups. These findings support the use of mitral transcatheter edge-to-edge repair in appropriately selected elderly patients and suggest that age alone should not be a barrier to treatment.