Breast cancer incidence has historically been higher in Western countries; however, rates have risen substantially in East Asia, with variation by age and generation. Direct comparisons between native Asians and their U.S. counterparts remain limited. Therefore, the age-period-cohort (APC) effects of breast cancer incidence were examined among native Japanese women, with comparisons to Japanese Americans, native Korean women, Korean Americans, and U.S. White and Black women. We analyzed population-based cancer registry data from Japan (1985-2019), Korea (1993-2017), Hawaii (1988-2017), Los Angeles County (1988-2017), and SEER 8 (1985-2019). Women aged 25-84 years were grouped into 5-year age groups and calendar periods, and APC analyses were conducted using the NCI APC web-based tool. Incidence increased with age in all populations; however, the patterns differed. Native Japanese and Koreans had the lowest overall rates with an early-midlife peak, whereas Japanese Americans resembled U.S. Whites, with incidence rising into older ages. Period and cohort effects were strongest among native Japanese and Koreans, with sustained increases among women born after 1950. Annual incidence increases were highest among native Koreans (5.3%) and native Japanese (3.8%), followed by Korean Americans (2.8%) and Japanese Americans (1.6%), whereas rates among U.S. Whites were stable or declining and U.S. Blacks showed modest increases. These findings indicate an ongoing epidemiologic transition in East Asia and demonstrate how migration and generational change shape breast cancer risk across populations. Direct comparison of native and U.S. Asian populations reveals migration-related divergence in breast cancer risk.
The Oswestry Disability Index (ODI) is one of the most widely used instruments for assessing disability related to low back pain (LBP). However, population-based normative values are scarce, and the structural validity of the Japanese version of the ODI (version 2.1a) has not been sufficiently evaluated. To psychometrically validate the Japanese version of the ODI (version 2.1a) and to establish population-based normative values stratified by age and LBP duration. Population-based cross-sectional study using a nationwide survey in Japan. A nationally representative sample of community-dwelling adults in Japan selected through a two-stage stratified random sampling design. A total of 5,000 individuals were invited to participate. Self-report measures: Disability related to low back pain was assessed using the Japanese version of the ODI (version 2.1a). Japan was stratified into 65 strata, from which 250 sampling points were randomly selected, and 20 residents were randomly sampled per point. Psychometric properties of the ODI version 2.1a were evaluated among respondents using factor analyses. Response-adjusted, survey-weighted normative values were estimated for the general population and for individuals with LBP, stratified by age group and pain duration subtype (acute, subacute, and chronic). Of the 5,000 individuals invited, 2,188 responded (response rate 43.8%). Among respondents, 1,270 had complete ODI data and were included in the psychometric analysis, including 173 individuals with LBP. The estimated population prevalence of LBP was 14.7% (95% confidence interval [CI] 13.1-16.4%), comprising acute 2.5% (95% CI 1.9-3.3%), subacute 1.1% (95% CI 0.7-1.7%), and chronic 11.1% (95% CI 9.8-12.5%). Factor analyses demonstrated a unidimensional structure of the ODI version 2.1a. Among individuals with LBP, the survey-weighted mean ODI score was 20.23 (95% CI 18.32-22.13).
Normative values varied by pain duration (acute: 12.54 [95% CI 8.87-16.22]; subacute: 13.54 [95% CI 8.78-18.31]; chronic: 22.74 [95% CI 20.56-24.92]) and increased with advancing age. This study provides psychometric validation of the Japanese ODI version 2.1a and establishes age- and pain duration-specific normative values. These findings support the use of ODI version 2.1a as a single total score for profiling disability in everyday life and for evaluating the management of LBP in both clinical practice and research.
This study estimated annual consultation rates for menopausal complaints among insured women in Japan and examined trends in comorbidities and medication use. This descriptive study analyzed administrative claims data for women insured by the Japan Health Insurance Association, Japan's largest health insurer. Between the fiscal years 2016 and 2022, 5,833,765 insured women aged 45-57 years were identified. Menopausal disorders and comorbidities were defined using the 10th revision of the International Classification of Diseases, and women with at least one diagnostic code for menopausal disorders during each fiscal year were identified. The annual consultation rate per 100,000 person-years increased from 6848 in 2016 to 9532 in 2022, and was consistently higher among women aged 50-57 years than those aged 45-49 years. Among those diagnosed, 58% had at least one comorbidity. Sleep disorders were the most common (approximately 26%), consistent across age groups, followed by hypertension (approximately 21%), which was more frequent in the older group. Among patients with menopausal disorders, 75% received at least one prescription. Kampo medications, which are traditional Japanese herbal formulations, were the most frequently used (40%), especially by the younger group. Hormone therapy was the second most common (30%), prescribed more frequently in the older group. For hormone therapy, conjugated estrogens and estradiol were most prescribed, and their usage remained stable. Conversely, prescriptions for progestins and combined estrogen-progestin therapies increased annually. Consultation rates for menopausal disorders among Japanese working women increased over time, with frequent comorbidities and distinctive Kampo use.
To establish effective infection, viral pathogens employ diverse strategies through encoded proteins to interfere with host antiviral responses. While previous studies have predominantly focused on elucidating the mechanisms by which individual viral proteins regulate type I interferon (IFN) responses, this study presents the first demonstration that Japanese encephalitis virus (JEV)-encoded NS1 and NS4B proteins cooperatively target the TLR3 receptor signaling pathway to suppress IFN production. Here, we first discovered that the JEV-encoded multifunctional glycoprotein NS1 inhibits TLR3-mediated IFN-β production by targeting TLR3 and TRIF. Mechanistically, NS1 interacts with these host factors and may induce their degradation via the autophagy pathway. Building on these findings, we further demonstrated that NS1 and NS4B act synergistically to suppress type I IFN production by enhancing TLR3 and TRIF degradation, thereby facilitating JEV replication. Structural simulation of the NS1-NS4B-TLR3 complex unveiled a dynamic mechanism whereby NS4B binding induces conformational changes in dimerized NS1, leading to a tighter binding posture between NS1 and TLR3. Notably, NS4B binding expands the interface between NS1 and TLR3 by 1.5-fold, resulting in enhanced TLR3 degradation and downstream IFN-β suppression. Functional analysis confirmed that the C291/K293/R314 triple mutation in NS1 synergistically impairs TLR3 degradation. Collectively, using JEV as a model, this study reveals a novel mechanism by which two viral components can act synergistically to evade the host antiviral response. Given the common coexistence and functional interplay of viral-encoded proteins in naturally infected cells, this study establishes a framework for investigating cooperative interactions among multiple viral proteins. Viruses evade host immunity through encoded viral proteins, while previous research has predominantly focused on single protein mechanisms.
This study reveals a novel cooperative immune evasion strategy, demonstrating for the first time that Japanese encephalitis virus NS1 and NS4B proteins act synergistically to degrade the host's TLR3 and TRIF adaptor, thereby suppressing type I interferon production. Structural simulations show that NS4B induces conformational changes in NS1, enhancing its binding to TLR3 and accelerating its degradation. This work establishes a new paradigm for how multiple viral components can function cooperatively to subvert antiviral defenses. Given that viral proteins naturally coexist and interact, this study provides a crucial framework for investigating complex viral protein interplay, which is fundamental to understanding viral pathogenesis.
Streptococcus dysgalactiae subsp. equisimilis (SDSE), historically considered less virulent than Streptococcus pyogenes, has emerged as an important invasive pathogen, particularly in aging societies. Data on nationwide clinical and molecular epidemiology in super-aged populations remain limited. We aimed to identify admission abnormalities associated with the 28-day mortality for invasive SDSE (iSDSE) infections in Japanese adults and evaluate the molecular epidemiology of this emerging pathogen through a large-scale surveillance. A prospective nationwide surveillance of iSDSE was conducted between March 2024 and March 2025 in 132 hospitals across Japan. Clinical, epidemiological, and laboratory data, including emm type distribution, were collected and analyzed. Overall, 278 patients with iSDSE infections were identified (median age, 83 years); one-quarter of the patients were residents of long-term care facilities. Nearly all patients had comorbidities. The 28-day mortality was 12.2% overall, which increased to 16.1% among patients aged ≥ 80 years. Fatal outcomes were strongly associated with bacteremic pneumonia and streptococcal toxic shock syndrome, whereas cellulitis showed a lower mortality burden. Intensive care unit admission, mechanical ventilation, and vasopressor use were more common in patients with fatal outcomes, reflecting disease severity. Variables associated with poor outcome included leukopenia, thrombocytopenia, and elevated levels of C-reactive protein, creatinine, and creatine kinase. Emm typing revealed stG6792 as the predominant lineage; however, a marked increase of stG840 was observed. No significant outcome differences were noted across emm types. These findings emphasize the combined impact of advanced age, comorbid conditions, and critical illness on iSDSE outcomes, highlight the clinical burden of iSDSE in older adults, and support continued monitoring in Japan.
The optimal fixation method in total hip arthroplasty (THA) remains under debate. While cemented fixation has been associated with a lower risk of periprosthetic fracture, uncemented fixation predominates in Japan. This study aimed to compare early postoperative complications between cemented and uncemented fixation in elective THA using a nationwide inpatient database. We identified 198,102 patients aged ≥ 65 years who underwent primary THA for osteoarthritis, osteonecrosis, or rheumatoid arthritis between December 2011 and March 2023 from the Japanese Diagnosis Procedure Combination (DPC) database. After 1:1 propensity score matching for age, sex, body mass index (BMI), and Charlson Comorbidity Index, 36,859 patients were included in each fixation cohort. Surgical and medical complications, and in-hospital mortality were compared using multivariate logistic regression. Cemented fixation was associated with a significantly lower risk of periprosthetic fracture (odds ratio [OR], 0.40; 95% confidence interval [CI], 0.30-0.53; p < 0.001), blood transfusion (OR, 0.76; 95% CI, 0.74-0.78; p < 0.001), and deep vein thrombosis (OR, 0.79; 95% CI, 0.74-0.84; p < 0.001). There were no statistically significant differences based on the predefined threshold (p < 0.001) in dislocation, infection, pulmonary embolism, cardiac or cerebrovascular events, or in-hospital mortality between fixation types, although a trend toward higher in-hospital mortality in the cemented group was observed. Cemented THA was associated with reduced rates of periprosthetic fracture, transfusion, and deep vein thrombosis without increasing other perioperative or medical complications. These findings suggest that cemented fixation may be associated with favorable short-term outcomes in selected patients.
The Japanese blue-lined octopus, Hapalochlaena cf. fasciata, is known to harbor the potent neurotoxin tetrodotoxin (TTX) throughout its body. In the present study, we detected TTX and its analog 5,6,11-trideoxyTTX (TDT) in the maternal tissues and eggs of H. cf. fasciata, the latter showing individual variation in toxin profiles. Immunohistochemical analysis of H. cf. fasciata eggs using an anti-TTX polyclonal antibody revealed that TTX is primarily localized in the chorion, the outermost egg layer, implying its role in chemical defense against predators. In egg predation experiments, the goby Bathygobius fuscus took the eggs into its mouth but immediately spat them out, whereas Favonigobius gymnauchen frequently ingested the eggs. These results suggest that the predation deterrence by TTX in H. cf. fasciata eggs is predator-dependent. In addition, voltage-gated sodium channel (VGSC; Nav1) sequences revealed that H. cf. fasciata shares three amino acid substitutions at putative TTX-binding sites with other Hapalochlaena species, consistent with TTX resistance.
Emergency resuscitative thoracotomy (ERT) is a crucial intervention employed in prehospital settings to address life-threatening conditions, such as cardiac tamponade, hemorrhage, and air embolism. Despite its critical nature, the efficacy of prehospital ERT in enhancing survival rates compared with in-hospital procedures remains controversial. This retrospective analysis was conducted using data from the Japanese Society for Aeromedical Services Registry between January 2020 and December 2022. After excluding nontraumatic cases, non-ERT cases, and records with missing data, 143 prehospital ERT cases were identified. The cohort was categorized into survivors (n = 3) and nonsurvivors (n = 140) based on patient outcomes. Comparative analyses were conducted on variables such as age, injury severity, time intervals, and transportation modalities using the Wilcoxon rank-sum test and Pearson's chi-square test, with the statistical significance set at P < .05. The overall survival rate after prehospital ERT was 2.1% (3 of 143). Only a few variables, such as hospital length of stay, showed statistically significant differences between the groups; most patient characteristics and prehospital time intervals did not. The patients who experienced cardiac arrest at the time of emergency medical service (EMS) contact had a survival rate of 0%, whereas those who arrived at the hospital with vital signs had the highest survival rate (11.1%). The presence of vital signs upon hospital arrival and the rapid initiation of intervention were identified as key factors influencing survival. These findings suggest that prehospital ERT provides limited survival benefits, with a 0% survival rate in cases of cardiac arrest at EMS contact. Therefore, further research is essential to refine the patient selection criteria and optimize ERT deployment to improve prehospital patient outcomes.
Risk stratification systems for non-muscle-invasive bladder cancer (NMIBC) differ among the National Comprehensive Cancer Network (NCCN)/American Urological Association (AUA), European Association of Urology (EAU), and Japanese Urological Association (JUA) guidelines, and their comparative ability to predict clinical outcomes in real-world practice remains unclear. We retrospectively analyzed 591 patients who underwent transurethral resection of bladder tumor and classified them into risk categories according to the NCCN, EAU, and JUA guidelines, comparing recurrence and progression outcomes among classifications. During a median follow-up of 64.1 months, intravesical recurrence (32.1%), T stage progression (4.5%), grade progression (2.5%), radical nephroureterectomy (2.7%), and radical cystectomy (5.3%) were observed. Grade progression from low-grade to high-grade disease was significantly associated with increasing EAU risk (P = .026) but not with NCCN or JUA classifications. Among pTa tumors, progression to ≥ pT1 occurred more frequently in higher risk groups across all classifications (all, P < .05), whereas progression from pT1 to pT2 was not significantly stratified by any system. Early intravesical recurrence increased with higher risk in all classifications (all, P < .05), with the strongest discrimination observed for the NCCN system. In multivariable Cox proportional hazards analyses, the NCCN classification demonstrated the highest discriminative performance for predicting intravesical recurrence. The NCCN, EAU, and JUA risk classifications are all clinically relevant for NMIBC, but their predictive performance varies by endpoint, indicating the importance of selecting the classification according to the specific outcome of interest.
Parechovirus A (PeV-A) is a significant cause of sepsis-like illnesses in neonates and infants. Although PeV-A3 has a known association with severe diseases, severe cases of PeV-A5 infection have recently been reported. We present herein a case of PeV-A5 infection in a 23-day-old neonate who presented with sepsis-like symptoms, including fever, irritability, abdominal distension, and a reticular rash. An erythematous rash developed and spread from the trunk to the extremities after the patient's admission. Blood cultures were negative, and the patient's symptoms rapidly improved with supportive therapy. Polymerase chain reaction testing of pharyngeal and rectal swabs confirmed PeV-A5 infection, as well as the presence of PeV-A5 in the serum during the acute phase. PeV-A5 RNA was undetectable in serum at follow-up testing, whereas stool specimens showed higher detection sensitivity, indicating that the time from symptom onset and specimen selection influence diagnostic accuracy. Whole-genome analysis suggested a recombinant PeV-A5 strain with structural protein regions similar to those of PeV-A5 and nonstructural regions similar to those of PeV-A3. This genomic feature may explain the patient's clinical presentation resembling that of a PeV-A3 infection. PeV-A5 may cause sepsis-like symptoms in early infancy, the pathogenicity of which may be affected by genetic recombination. Further case studies are required to elucidate the clinical features and pathogenicity of PeV-A5 infection.
The Nursery School Absenteeism Surveillance System (NSASSy) has operated since 2010, but its effectiveness at controlling outbreaks has been difficult to assess. This study was conducted to evaluate the effect of NSASSy on reducing infectious disease incidence at participating nursery schools. In two cities, we acquired NSASSy information from nursery schools with more than 80 children. The study period extended from April 1, 2016 through March 2019 in one city and from January 1, 2024 through May 20, 2025 in the other. We examined the incidence of influenza and COVID-19, and of symptoms, by age class at each facility in relation to the facility's rate of data entry to NSASSy, community outbreaks recorded in national official surveillance, and their interaction terms. Estimation used an individual-effects model with a random effect for facility. The relation of influenza to community outbreaks was significant and positive. Interaction terms between data entry rates and community outbreaks were significant and negative in both cities. For COVID-19, no association was found. Regarding symptoms, the relations of all symptoms and fever to data entry rates were significant and negative in both cities; for respiratory symptoms and diarrhea, they were significant and negative in one city. These results show that NSASSy mitigated the effect of community influenza outbreaks on outbreaks at nursery schools, both before and after the COVID-19 pandemic. Moreover, NSASSy reduced the incidence rates of all symptoms.
Biogeographic ancestry inference provides valuable insights into forensic science for identifying unknown remains and analyzing trace evidence found at crime scenes, particularly when other useful information is lacking. In this study, we investigated the impact of two feature encoding strategies (raw allele values and one-hot encoding) and three machine learning algorithms (Random Forest [RF], XGBoost [XGB], and Support Vector Machine [SVM]) on population classification using simulated autosomal STR profiles. Simulations were conducted using 20 autosomal STR loci commonly included in commercial kits, focusing on East Asian (EA) and Southeast Asian (SEA) populations. XGB demonstrated strong performance with both raw allele values and one-hot encoding. In contrast, while SVM achieved comparable accuracy with one-hot encoding, its performance markedly declined when raw allele values were used. RF consistently yielded lower accuracy than the other two algorithms. We also investigated the limits of intra-continental population classification and found that accuracy generally plateaued at approximately 80-85% for EA vs. SEA, Japanese vs. Chinese, and Japanese vs. Vietnamese classifications, with similar performance observed in a preliminary evaluation using real Japanese profiles. However, in the classification of the genetically close Japanese and Korean populations, a marked discrepancy was observed between the simulation results and those derived from the real profiles. These findings clarify how feature encoding interacts with machine learning in STR-based population classification and delineate practical limits for closely related Asian populations.
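The contrast between the two encodings above can be made concrete. The sketch below is illustrative only (not the authors' pipeline): the loci, allele ranges, and genotype are hypothetical, and the one-hot variant shown is a per-locus allele-count vector, one common way to one-hot-encode diploid STR genotypes.

```python
# Illustrative sketch of two STR feature encodings; loci and alleles are
# hypothetical examples, not data from the study.

def raw_encoding(genotype):
    """Raw allele values: each locus contributes its two sorted repeat numbers,
    so the model sees an ordinal scale (15 vs 16 look 'close')."""
    feats = []
    for locus in sorted(genotype):
        feats.extend(sorted(genotype[locus]))
    return feats

def one_hot_encoding(genotype, allele_range):
    """One-hot (count) encoding: per locus, a 0/1/2 vector over the possible
    alleles, so the model sees allele identity rather than repeat number."""
    feats = []
    for locus in sorted(genotype):
        counts = [0] * len(allele_range[locus])
        for allele in genotype[locus]:
            counts[allele_range[locus].index(allele)] += 1
        feats.extend(counts)
    return feats

# Hypothetical two-locus profile (locus names borrowed from common kits)
profile = {"D3S1358": (15, 16), "TH01": (9, 9)}
ranges = {"D3S1358": [14, 15, 16, 17], "TH01": [6, 7, 8, 9, 9.3]}

print(raw_encoding(profile))               # [15, 16, 9, 9]
print(one_hot_encoding(profile, ranges))   # [0, 1, 1, 0, 0, 0, 0, 2, 0]
```

The one-hot vector removes the artificial ordering of repeat numbers, which is one plausible reason a kernel method such as SVM degrades on raw allele values while a tree ensemble such as XGB tolerates both.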
While scholarly activities are considered essential for medical advancement and improved patient outcomes, their educational value and potential impact on postgraduate medical trainees' well-being have not been well studied. We examined whether clinical research experience and scholarly activity requirements during early postgraduate medical training affect trainees' competency and well-being. We conducted a cross-sectional study of Japanese early postgraduate medical trainees (postgraduate year [PGY]1-2) recruited from hospitals participating in the General Medicine In-Training Examination (GM-ITE) in January 2024 and from teaching hospitals with scholarly activity requirements that did not participate in GM-ITE. Participants were classified into three groups based on scholarly activity experience: no scholarly activity, scholarly activity without clinical research, and scholarly activity with clinical research. The primary outcome was evidence-based medicine (EBM) competency assessed using the Japanese version of the Assessing Competency in EBM (ACE) tool. Secondary outcomes included GM-ITE scores, future scholarly activity intentions, depression symptoms, and well-being measures. Associations between scholarly activity experience and outcomes were examined using multiple regression analysis for continuous outcomes and logistic regression analysis for binary outcomes, adjusting for potential confounders; missing data were addressed using multiple imputation. Among 1,152 participants (1,150 from GM-ITE-participating hospitals and 2 from non-participating hospitals), 656 (57.0%) reported engaging in scholarly activities during early postgraduate medical training, with 60 (9.1%) conducting clinical research.
After adjusting for potential confounders, compared to trainees without scholarly activity, the difference in ACE tool scores was 0.22 points (95% confidence interval [CI]: 0.02 to 0.41) for those who engaged in scholarly activities without clinical research and 0.40 points (95% CI: -0.02 to 0.83) for those who conducted clinical research. Stratification by program requirements revealed differences of 0.34 points (95% CI: 0.09 to 0.58) for voluntary scholarly activities and 0.09 points (95% CI: -0.19 to 0.37) for required activities. Trainees who conducted clinical research or engaged in required scholarly activities showed higher odds of future academic interest (adjusted odds ratio [OR]: 2.68 [95% CI: 1.33-5.37] and 1.52 [1.02-2.25], respectively), while those who engaged in scholarly activities without clinical research or without requirements showed similar odds to the reference group (adjusted OR: 1.27 [0.97-1.68] and 1.24 [0.88-1.74], respectively). Well-being measures showed minimal differences across groups. Scholarly activities during early postgraduate medical training, whether required by programs or not, had little impact on trainees' competency and well-being. However, trainees who engaged in clinical research showed increased interest in future academic activities, suggesting the importance of establishing supportive environments for interested trainees.
Differences in chest HRCT findings between myeloperoxidase (MPO)-antineutrophil cytoplasmic antibody (ANCA) and proteinase 3 (PR3)-ANCA in Japanese patients with ANCA-associated vasculitis (AAV) remain unclear. We reviewed chest HRCT findings at diagnosis in 195 patients with AAV enrolled in the Remission Induction Therapy in Japanese Patients with ANCA-associated Vasculitis and Rapidly Progressive Glomerulonephritis (RemIT-JAV-RPGN) observational cohort study (2011-2013). Findings were classified by ANCA subtype and compared. Abnormal chest HRCT findings were observed in 172 of 195 patients. Main findings included ground-glass opacity (n = 92, 47%), reticulation (n = 79, 41%), traction bronchiectasis (n = 67, 34%), and honeycombing (n = 49, 25%). Honeycombing (n = 46, 29%) and reticulation (n = 70, 43%) predominated in patients with positive MPO-ANCA, whereas nodules (n = 4, 44%) and cavities (n = 3, 33%) were more frequent in those with positive PR3-ANCA. Interstitial pneumonia (IP) was diagnosed in 89 patients, with HRCT patterns of definite usual interstitial pneumonia (UIP) in 31 (35%), possible UIP in 22 (25%), and inconsistent with UIP in 36 (40%). IP was more frequent in patients with positive MPO-ANCA than in those with positive PR3-ANCA (47% vs 11%, respectively; p = 0.003). Definite UIP was more common in MPO-ANCA positivity than in other subtypes (41% vs 0%, respectively; p = 0.023). In Japanese patients with AAV, IP with a definite UIP pattern was more frequent in patients with MPO-ANCA positivity, while nodules and cavities were more frequent in patients with positive PR3-ANCA.
The prevalence and characteristics of diabetes distress (DD) in Japanese adults with type 1 diabetes mellitus (T1D) remain unclear. Therefore, this study aimed to investigate the prevalence and associated features of DD in this population. A cross-sectional study was conducted using the Type 1 Diabetes Distress Scale (T1-DDS), the Problem Areas in Diabetes (PAID) scale, and the Hypoglycemia Fear Survey (HFS). Data from 117 adults with T1D were analyzed. Group comparisons were made using t-tests and ANOVA, and associations were assessed using Spearman's rank correlation. Confirmatory factor analysis (CFA) was used to examine the structure of the T1-DDS. Of the 144 screened participants, 117 (mean age: 50.6 ± 12.8 years; 44.4% male; mean diabetes duration: 18.1 ± 10.7 years; mean HbA1c: 7.6% ± 0.9%) were included in the analysis. Moderate and high DD were observed in 30.8% and 16.2%, respectively. Participants with high DD had significantly higher PAID and HFS-Worry/Behavior scores, despite no significant differences in age, sex, diabetes duration, or HbA1c. Among T1-DDS subscales, Powerlessness and Eating Distress had the highest mean scores. All subscale scores were correlated with total distress (rho = 0.636-0.916) and showed good internal consistency (Cronbach's alpha = 0.784-0.875). CFA supported the seven-factor structure with marginal-to-acceptable model fit. DD is prevalent in Japanese adults with T1D. The T1-DDS can be a useful screening tool for identifying individuals experiencing diabetes-related emotional burden and guiding personalized interventions for these individuals.
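The internal-consistency statistic reported above (Cronbach's alpha = 0.784-0.875 for the T1-DDS subscales) can be computed from per-item variances and the variance of total scores. The sketch below is a minimal illustration using made-up toy responses, not study data.

```python
# Minimal sketch of Cronbach's alpha: alpha = k/(k-1) * (1 - sum(item
# variances) / variance(total scores)). Toy data only, not from the study.

def cronbach_alpha(items):
    """items: one inner list of responses per item (all same length)."""
    k = len(items)                 # number of items in the (sub)scale
    n = len(items[0])              # number of respondents

    def var(xs):                   # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

# Three toy items answered by four respondents
toy = [[1, 2, 3, 4], [2, 2, 4, 4], [1, 3, 3, 5]]
print(round(cronbach_alpha(toy), 3))  # 0.933
```

Values around 0.78-0.88, as reported for the T1-DDS subscales, are conventionally read as acceptable-to-good internal consistency.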
Population-based organized prostate-specific antigen (PSA) screening is implemented in 80% of Japanese municipalities; however, Shiga Prefecture remains a unique exception without such a systematic program. This study characterized the longitudinal clinical features and treatment patterns in this opportunistic testing environment using data from 1716 patients diagnosed via prostate biopsy in 2012, 2017, and 2022. While median PSA levels remained stable (10.40-11.43 ng/mL), median age at diagnosis increased from 72 to 74 years. Over the decade, the incidence of International Society of Urological Pathology Grade Group 1 and cT1c stages decreased significantly (p < 0.001), with nearly 90% of cases being cT2 or higher in 2022. Risk classification showed a decrease in low-risk cases and a rise in high-risk cases. Regarding treatment, radical prostatectomy rates remained stable at approximately 25%, whereas the overall use of active surveillance (AS) increased from 1% to 9%. Notably, among low-risk patients, AS adoption rose markedly from 2.3% in 2012 to 68% in 2022. While clinical practices have evolved to successfully minimize unnecessary invasive intervention, these findings suggest that clinical progress alone cannot fully compensate for the lack of organized efforts to improve early detection.
The minimal clinically important difference (MCID) is widely used to interpret patient-reported outcome measures (PROMs) in cervical spondylotic myelopathy (CSM). However, consensus on its definition is lacking, and its long-term consistency remains unknown. The objective of this study was to determine if MCID thresholds for PROMs at 5 years after surgery for CSM remained consistent when compared to previously established 2-year values. The Spine CORe™ study group performed a post hoc analysis of the prospective Quality Outcomes Database. Eight established anchor- and distribution-based methods were applied to define MCID thresholds for the following PROMs: Neck Disability Index (NDI), neck pain numeric rating scale (NP-NRS), arm pain numeric rating scale (AP-NRS), 5-dimension EuroQol health utility questionnaire (EQ-5D) for quality-adjusted life years, and modified Japanese Orthopaedic Association (mJOA) scores. Predictive validity was evaluated using area under the curve (AUC) analysis with North American Spine Society satisfaction as the anchor, and results were compared with calculated 2-year values from the same cohort using DeLong's test. A total of 1085 patients were originally enrolled, with ≥ 80% follow-up for all PROMs except the mJOA score (79%). At 5 years, optimal percentage change and ≥ 30% improvement methods were consistently highest performing for the NDI (AUC 0.71 and 0.68, respectively), NP-NRS (AUC 0.65 for both), and AP-NRS (AUC 0.73 and 0.72, respectively) scores. For the EQ-5D score, both the optimal numeric cutoff and one-half standard deviation methods performed best, yielding a consistent MCID threshold of 0.11 (AUC 0.64 for both). For the mJOA score, the severity-adjusted method provided the strongest discrimination, with an AUC of 0.74 at 5 years. MCID thresholds were consistent between 2 and 5 years, except for the severity-adjusted MCID for the mJOA score (0.74 at 5 years vs 0.65 at 2 years, p = 0.026). 
The 30% improvement threshold corresponded to absolute changes of 11.3 points for the NDI score, 1.5 points for the NP-NRS score, and 1.4 points for the AP-NRS score based on mean baseline scores. To the authors' knowledge, this represents the largest cohort of patients with CSM in the United States with validated, long-term 5-year MCID thresholds. This study establishes practical MCID definitions for NDI (≥ 30% improvement threshold of 11.3 points), NP-NRS and AP-NRS (≥ 30% improvement thresholds of 1.5 and 1.4 points, respectively), EQ-5D (optimal numeric cutoff of 0.11), and mJOA (severity-adjusted: ≥ 3 points for severe, ≥ 2 for moderate, ≥ 1 for mild) scores that can serve as benchmarks for evaluating improvement after CSM surgery in both research and routine clinical practice.
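The conversion of a percentage-improvement MCID into the absolute-change thresholds quoted above is simple arithmetic: the absolute threshold is 30% of the mean baseline score. The sketch below illustrates this; the baseline means are back-calculated from the reported thresholds for illustration and are approximations, not values taken from the study.

```python
# Illustrative arithmetic (not the study's code): a ">= 30% improvement" MCID
# expressed as an absolute change is 0.30 x the mean baseline score.
# Baseline means below are back-calculated approximations.

def mcid_absolute(mean_baseline, pct=0.30):
    """Absolute-change threshold implied by a percentage-improvement MCID."""
    return round(mean_baseline * pct, 1)

# NDI: 11.3-point threshold implies a mean baseline near 11.3 / 0.30 ~ 37.7
print(mcid_absolute(37.7))  # 11.3
# NP-NRS: 1.5-point threshold implies a mean baseline near 5.0
print(mcid_absolute(5.0))   # 1.5
# AP-NRS: 1.4-point threshold implies a mean baseline near 4.7
print(mcid_absolute(4.7))   # 1.4
```

Because the threshold scales with the baseline mean, percentage-based MCIDs adapt to the severity of the cohort, which is one reason the ≥ 30% methods performed consistently well for the pain and disability scores here.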
Although the World Health Organization (WHO) emphasizes patient safety as a core competency, formal educational opportunities in clinical settings remain limited. This study aimed to explore the perceptions of patient safety specialists regarding the informal learning opportunities that expose medical students to patient safety competencies during clinical clerkships. A descriptive qualitative study was conducted with 34 of 36 patient safety specialists at Japanese national university hospitals. Data were collected via an online open-ended questionnaire regarding safety-related behaviors students engaged in during clinical practice. The responses were analyzed using conventional content analysis. The analysis identified 14 learning categories situated within four clinical contexts: receiving instructions, invasive procedures, interprofessional communication, and incident response. These competencies are closely aligned with the WHO Patient Safety Curriculum Guide. However, learning related to quality improvement was notably limited. Participants proposed specific support measures, such as utilizing incident reports for education and facilitating student participation in safety conferences, to further enhance these informal learning opportunities. Clinical environments provide latent opportunities for students to engage with essential patient safety competencies through informal participation in clinical practice, complementing formal curricula. Clinical educators should recognize the value of these situated learning opportunities and intentionally facilitate informal learning environments to complement formal education.