Never-smoker non-small cell lung cancer (NSCLC) represents a distinct molecular subtype enriched for actionable driver mutations, including ERBB2 (HER2) exon 20 insertions, which occur in approximately 2-4% of cases and are more common in women and individuals of Asian ancestry. We report a 75-year-old never-smoking, asymptomatic Asian-American woman with incidentally discovered stage IV (cT1cN0M1a) lung adenocarcinoma. Imaging demonstrated innumerable bilateral solid, part-solid, ground-glass, and cavitary pulmonary nodules, with the largest measuring 29 × 25 mm, without nodal or distant metastases. Lung biopsy confirmed TTF-1 and Napsin A-positive adenocarcinoma, and genomic profiling identified an ERBB2 (HER2) exon 20 insertion mutation (p.A775_G776insYVMA; VAF 3.9%). She enrolled in a clinical trial of the selective HER2 tyrosine kinase inhibitor zongertinib (60 mg twice daily) in February 2025 and tolerated therapy well, with mild diarrhea, lactose intolerance, brittle nails, and intermittent muscle cramps. Serial imaging demonstrated an early partial response by RECIST 1.1 at six weeks, followed by sustained radiographic stability with cavitary changes consistent with treatment effect. As of January 2026, she has maintained lung-confined disease control for over 11 months without extrapulmonary progression and remains fully functional without symptoms. This case highlights the clinical benefit and tolerability of selective HER2-targeted therapy in HER2-mutant NSCLC and adds to emerging evidence supporting consideration of risk-based lung cancer screening strategies in high-risk never-smoking populations.
Whether secondhand smoke (SHS) exposure confers an increased risk of liver cancer remains elusive, and no epidemiological studies have examined the potential impact of thirdhand smoke (THS) exposure on the risk of this cancer. Therefore, we aimed to examine the potential associations of SHS and THS exposure with the risk of liver cancer. A population-based cohort of 315,833 Chinese never smokers was identified from the China Kadoorie Biobank. SHS and THS exposure were assessed by retrospective self-report. Multivariable Cox regression was used to compute hazard ratios (HRs) for liver cancer incidence. Restricted cubic spline regression was used to test for nonlinearity. Over an average follow-up of 11.89 years, 876 liver cancer cases were documented. Daily or almost daily SHS exposure [HR 1.36, 95% confidence interval (CI) 1.16-1.60] and THS exposure at enrollment (HR 1.23, 95% CI 1.02-1.47) were associated with an elevated risk of liver cancer; these associations were more pronounced in participants aged <60 years than in those aged ≥60 years (both P for interaction <0.05). Moreover, a nonlinear positive dose-response association with the risk of liver cancer was found for the usual duration of SHS exposure (P for nonlinearity=0.001). No significant joint association of SHS and THS exposure with the risk of liver cancer was found (P for additive interaction=0.343). Our findings suggest that reducing SHS or THS exposure may be beneficial in the primary prevention of liver cancer. Nevertheless, these findings should be interpreted with caution, given that retrospective self-reports cannot clearly distinguish SHS from THS exposure.
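As a quick arithmetic check on associations reported this way: the 95% CI of a hazard ratio is symmetric on the log scale, so the HR should be close to the geometric mean of its CI bounds, and the implied standard error of the log-HR can be recovered from the bounds. A minimal Python sketch, using the SHS estimate above (the helper function is ours, not part of the study):

```python
import math

def hr_ci_from_log(beta, se, z=1.96):
    """Hazard ratio and 95% CI from a Cox log-hazard coefficient and its SE."""
    return (math.exp(beta),
            math.exp(beta - z * se),
            math.exp(beta + z * se))

# Reported association for daily/almost-daily SHS exposure:
# HR 1.36 (95% CI 1.16-1.60).
hr, lo, hi = 1.36, 1.16, 1.60

# On the log scale the CI is symmetric, so the HR is approximately
# the geometric mean of its bounds.
geo_mean = math.sqrt(lo * hi)

# Implied standard error of the log-HR, recovered from the CI width.
se_hat = (math.log(hi) - math.log(lo)) / (2 * 1.96)

print(round(geo_mean, 2), round(se_hat, 3))
```

This consistency check (geometric mean ≈ reported HR) is a handy way to spot transcription errors in published HR/CI triplets.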
Monitoring trends in transitions in the use of electronic nicotine delivery systems (ENDS) and cigarettes among youth is important for understanding the potential public health impacts of these products. Using a weighted Markov multistate transition model accounting for complex survey design, we estimated transition rates and one-year transition probabilities between never, non-current, ENDS-only, and cigarette use (with or without dual use of ENDS) among 26,744 youth aged 12-17 years who participated in at least two consecutive waves from Waves 2-7.5 (approximately 2015-2023) of the nationally representative Population Assessment of Tobacco and Health (PATH) Study. We also estimated transitions stratified by ages 12-14 and 15-17 years. The one-year probability of ENDS-only initiation from never use among youth peaked in 2017-19 (Waves 4-5) at 4.0% (95%CI: 3.6-4.3%) and was higher for 15-17-year-olds at 5.8% (95%CI: 5.2-6.4%) than 12-14-year-olds at 2.2% (95%CI: 1.8-2.6%). In the following years, ENDS-only initiation rates declined and plateaued, with 2.6% (95%CI: 2.3-3.0%) initiation in 2022-23. Cigarette initiation from never use decreased over 2015-23 from 0.8% (95%CI: 0.6-1.0%) in 2015-16 to 0.1% (95%CI: 0.0-0.2%) in 2022-23. There was an increase in the fraction of youth who transitioned from non-current product use to ENDS-only use from 13.7% (95%CI: 7.5-20.0%) in 2015-16 to 35.1% (95%CI: 25.4-44.8%) in 2022-23, paired with a decrease in non-current use to cigarette use from 20.9% (95%CI: 11.8-30.0%) to 6.3% (95%CI: 1.7-10.8%). Transitions from ENDS-only or cigarette use to non-current use remained relatively constant over time at around 25% and 15% per year, respectively. ENDS-only use initiation has changed over time, peaking around 2019 and subsequently decreasing and plateauing, but cessation rates for both ENDS and cigarettes have remained relatively stable.
Thus, interruption of tobacco product initiation may be the most effective approach to reducing tobacco product use among youth.

What is already known on this topic: Transitions in cigarette and ENDS use have changed over time, with youth more likely to adopt ENDS and less likely to adopt cigarettes than older age groups.

What this study adds: We found that ENDS initiation among youth peaked around 2019 and was higher for those 15-17 years than 12-14 years. There were few significant differences between the two age groups for other transitions. Cigarette initiation among youth declined over this period. Cessation rates for both ENDS and cigarettes have remained relatively stable.

How this study might affect research, practice or policy: Tobacco control efforts should prioritize preventing all tobacco and nicotine product initiation among youth.
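In a continuous-time Markov multistate model of this kind, one-year transition probabilities are obtained by exponentiating the estimated transition rate matrix: P(1) = exp(Q). A hedged numpy sketch over the study's four states; the rates below are purely illustrative placeholders, not the PATH estimates, and the small hand-rolled series expansion stands in for scipy.linalg.expm:

```python
import numpy as np

def expm(Q, terms=40):
    """Matrix exponential via truncated Taylor series; adequate for a small,
    well-scaled rate matrix (scipy.linalg.expm would be used in practice)."""
    P = np.eye(Q.shape[0])
    term = np.eye(Q.shape[0])
    for k in range(1, terms):
        term = term @ Q / k
        P = P + term
    return P

# States as in the study: never, non-current, ENDS-only, cigarette use.
# Annual transition *rates*; each row sums to zero. Illustrative values only.
states = ["never", "non-current", "ENDS-only", "cigarette"]
Q = np.array([
    [-0.045,  0.000,  0.040,  0.005],   # never: initiation only, no return
    [ 0.000, -0.400,  0.350,  0.050],   # non-current: re-uptake
    [ 0.000,  0.250, -0.300,  0.050],   # ENDS-only: quit or switch
    [ 0.000,  0.150,  0.050, -0.200],   # cigarette: quit or switch
])

P = expm(Q)                  # one-year transition probability matrix
p_init_ends = P[0, 2]        # P(never -> ENDS-only within one year)
print(np.round(P, 3))
```

Because "never use" is left permanently once any product is tried, the first column of P is zero below the diagonal, mirroring the one-way initiation transitions in the abstract.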
The hypersomnolence disorder diagnostic criteria of the DSM-5-TR rely on the report of a 'hypersomnolence' syndrome, without defining the term or proposing any measuring tool to assess it. Among the vast diversity of tools to measure hypersomnolence, the Hypersomnia Severity Index (HSI) has been specifically designed to account for the multidimensionality of hypersomnolence. Despite showing good psychometric validity in two previous validation studies, the HSI has never been validated against the gold-standard Multiple Sleep Latency Test (MSLT). This preliminary study aims to validate the French version of the HSI and to provide a first exploration of the link between the HSI and the MSLT in a group of French subjects clinically diagnosed with hypersomnolence disorders. In addition, we propose an original visualization of the dimensions of the HSI. This study is a secondary analysis of a cohort of 34 patients diagnosed with hypersomnolence disorders without comorbidity. They underwent an MSLT and filled out the French version of the HSI as well as classical sleep medicine questionnaires. The HSI underwent a rigorous translation and psychometric validation, which we compare with two previous psychometric validation studies. We obtained the same level of psychometric validity as previous studies, validating our French version in a population of patients with hypersomnolence disorder. However, we did not find any correlation between the HSI and the MSLT. Nevertheless, the dimensional approach offered by the HSI, combined with our data visualization, provides a valuable tool for sleep medicine clinical practice and research.
Global fertility rates are nearing replacement levels, yet Sub-Saharan Africa continues to experience high fertility with slow declines. Modern contraceptive use is critical to fertility reduction, but comparative multi-country analyses remain limited, particularly for women not in union (never or formerly married). This study compares contraceptive use between women in union and those not in union across the region. Pooled Demographic and Health Survey (DHS) data from 35 sub-Saharan African countries were analyzed to examine the determinants of modern contraceptive use. We incorporated sampling weights to ensure representativeness, and the final weighted analytic sample comprised 472,811 women. A multilevel random-intercept logistic regression model was employed to account for the hierarchical data structure, with women nested within communities and countries. Both individual- and community-level covariates were included, and analyses were stratified by marital status (in union, never married, and formerly married). Contextual variations in contraceptive use were assessed by estimating intraclass correlation coefficients (ICCs) at the community and country levels. To assess changes in community-level heterogeneity over time, multiple DHS survey waves were analyzed separately for each country, and ICCs were synthesized using a meta-analytic approach. The study found that among women in union, older women are less likely to use contraceptive methods, whereas among women not in union, older women are more likely to use contraceptives. Higher educational attainment, access to media, exposure to family planning information, better household socioeconomic status, and women's employment are positively associated with contraceptive use, regardless of marital status.
Beyond individual and household factors, community-level variables, such as poverty rates, literacy levels, and health coverage, are also significantly associated with modern contraceptive use. Additionally, the study revealed substantial variation in contraceptive use across countries and communities. Countries with low contraceptive prevalence tended to show greater disparities in usage, whereas those with higher prevalence exhibited more consistent and uniform contraceptive use across communities. While various individual and community-level factors are associated with contraceptive use in sub-Saharan Africa, there are significant contextual variations both across and within countries. Nations with low levels of modern contraceptive use, particularly in Western and Central Africa, often show inconsistent coverage across communities. In contrast, countries with higher contraceptive prevalence, particularly in Southern Africa and, to some extent, Eastern Africa, tend to exhibit more consistent and equitable coverage across their communities.
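For a random-intercept logistic model like the one used here, the ICCs at each level are conventionally computed with the latent-variable method, which fixes the level-1 residual variance at π²/3. A small sketch under that convention; the variance components are hypothetical, not the study's estimates:

```python
import math

def latent_icc(var_country, var_community):
    """ICCs for a three-level random-intercept logistic model via the
    latent-variable method: level-1 residual variance = pi^2 / 3 (~3.29).
    Returns (country-level ICC, combined country+community ICC)."""
    resid = math.pi ** 2 / 3
    total = var_country + var_community + resid
    icc_country = var_country / total
    icc_community = (var_country + var_community) / total
    return icc_country, icc_community

# Illustrative variance components (placeholders, not the paper's values):
icc_cty, icc_com = latent_icc(var_country=0.8, var_community=0.5)
print(round(icc_cty, 3), round(icc_com, 3))
```

Larger random-intercept variances relative to π²/3 translate into higher ICCs, i.e., more of the variation in contraceptive use attributable to country or community context rather than individual factors.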
Home environments shape children's dietary habits, but which factors are most influential is unclear. The study purpose was to identify factors in the home environment associated with child intake of fruit and vegetables (FV) and sugar-sweetened beverages (SSBs) using a national U.S. dataset. Data from 5,138 school-aged children (4-15 years old) in 130 U.S. communities were collected in 2013-2015. Parents and/or children completed a dietary screener and additional survey questions to assess household socioeconomic status (SES), grocery shopping sources, home food availability, social support for healthy eating, eating out frequency, and other home eating and related behaviors. Other child characteristics included breastfeeding history, intake of school foods, and participation in other nutrition programs. Community variables included predominant race/ethnicity and SES. Classification and regression trees (CART) identified key predictors of intake. The FV and SSB CART trees had 14 and 12 terminal groups, respectively. Children with the highest FV intake (0.54 SD from mean cups/day; 13% of sample) had fruit more often available at home, dark green vegetables more often available at home, ate dinner with family more often, had SSBs less often available at home, and were breastfed longer. Conversely, children in the two groups with the lowest FV intake either had fruit less often available at home, and family never complimented their eating (-0.86; 2%), or they had family that rarely or sometimes complimented their eating, and perceived school lunches as unhealthy (-0.87; 1%). For SSB intake, the lowest consumers (-0.63 SD from mean tsp/day sugar; 17%) never or rarely had SSBs available at home, and lived in higher SES communities.
Children in the two groups with the highest SSB intakes had SSBs available at home more often, and lived in a SNAP-participating household and either ate out less often, used a phone/computer for social networking, and had SSBs available at home very often (1.3; 1%), or they ate out more often, and were breastfed for a shorter duration (1.1; 5%). Home availability of FV and SSBs were the most salient predictors of intake of both FV and SSBs, while other predictors differed between FV and SSB intake. Study findings highlight several actionable home-environment strategies to test in future studies to improve school-aged children's diets.
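CART grows each tree by repeatedly choosing the predictor and threshold that most reduce within-node variation in the outcome; the terminal groups above are the leaves of that recursive process. A self-contained sketch of a single split step on invented toy data (the variable names and values are illustrative, not the study's):

```python
def best_split(x, y):
    """One CART step for a continuous outcome: pick the threshold on x that
    minimizes the weighted within-node sum of squared errors of y."""
    def sse(vals):
        if not vals:
            return 0.0
        m = sum(vals) / len(vals)
        return sum((v - m) ** 2 for v in vals)

    order = sorted(set(x))
    best_thr, best_cost = None, float("inf")
    # Candidate thresholds: midpoints between adjacent observed values.
    for lo_v, hi_v in zip(order, order[1:]):
        thr = (lo_v + hi_v) / 2
        left = [yi for xi, yi in zip(x, y) if xi <= thr]
        right = [yi for xi, yi in zip(x, y) if xi > thr]
        cost = sse(left) + sse(right)
        if cost < best_cost:
            best_thr, best_cost = thr, cost
    return best_thr

# Toy data: home fruit availability (0=never ... 3=always) vs. FV intake.
avail = [0, 0, 1, 1, 2, 2, 3, 3]
intake = [0.4, 0.5, 0.6, 0.7, 1.4, 1.5, 1.6, 1.8]
print(best_split(avail, intake))  # splits between availability 1 and 2
```

A full CART implementation (e.g., scikit-learn's DecisionTreeRegressor) applies this same criterion recursively to each child node until a stopping rule is met.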
Early institutional rearing is associated with adverse biological and health outcomes in later life, including accelerated cellular aging as measured by telomere length. However, the extent to which foster care intervention can mitigate these risks, and whether telomere dynamics predict cardiometabolic health in young adulthood, remain unclear. The present study aimed to estimate the association between early institutional care, randomization to foster care (intent-to-treat), and longitudinal changes in telomere length from ages 12-22 years among participants of the Bucharest Early Intervention Project (BEIP), and to determine whether the rate of telomere shortening predicts cardiometabolic health in early adulthood. The study included 156 BEIP participants who had been randomly assigned to either foster care or care-as-usual, with an additional comparison group of never-institutionalized peers. Buccal DNA was collected, and telomere length (T/S ratio) was measured at two to five timepoints between ages 12 and 22. Cardiometabolic health at age 22 was assessed using metabolic z-scores and criteria for metabolic syndrome. Participants assigned to foster care exhibited a significantly slower decline in telomere length over the 10-year period compared to those in care-as-usual. Ever-institutionalized and never-institutionalized groups had similar overall patterns of telomere decline. Sex-specific analyses indicated that among the foster care group, males had shorter telomere length at age 12 than females, but rates of telomere shortening were similar between sexes over time. The rate of telomere attrition between ages 12 and 22 was not associated with cardiometabolic outcomes at age 22. Foster care intervention during early childhood may protect against telomere shortening among previously institutionalized children, highlighting its role in buffering the long-term impact of early adversity on cellular aging.
However, variation in telomere shortening during adolescence and young adulthood did not predict cardiometabolic risk at age 22.
The objective of this study was to determine the time frame of clinical improvement in patient-reported outcomes (PROs) following surgical decompression for cervical spondylotic myelopathy (CSM). Based on previously published 12-month data from this group, the authors hypothesized that the average time to minimal clinically important difference (MCID) improvement would primarily occur by 3 months postoperatively regardless of preoperative myelopathy severity. They also hypothesized that there would be minimal additional improvement between 3 months and 5 years after surgery. This was a post hoc analysis of prospectively collected data from the 14-site Spine CORe™ study group of the Quality Outcomes Database (QOD). Patients were stratified according to myelopathy severity using the modified Japanese Orthopaedic Association (mJOA) myelopathy scale into mild (mJOA score 15-17), moderate (mJOA score 12-14), or severe (mJOA score < 12). PRO measures included the Neck Disability Index (NDI), numeric rating scale (NRS) for neck and arm pain, and EQ-5D for quality-adjusted life years. PROs were recorded at baseline and at the 3-month, 12-month, 2-year, and 5-year time points. MCID thresholds were calculated using previously validated methods in this cohort. Time to meet the MCID cutoff and the proportion of patients achieving MCID at each time point were determined. A total of 1085 patients (with ≥ 80% follow-up at 60 months for all PRO measures [PROMs]) were enrolled. Patients with more severe myelopathy had worse baseline comorbidities (e.g., BMI, American Society of Anesthesiologists class, ambulation dependence) and lower PRO scores. Average PROs met the MCID threshold in each category at 3 months postoperatively, regardless of baseline myelopathy severity. Of the patients with complete 5-year follow-up data, the majority achieved the MCID cutoff threshold for PROMs at 3 months (50%-73%, depending on the PROM).
A minority of patients went on to meet the MCID for PROMs at 12 months (12%-21%), 2 years (4%-8%), and 5 years (1%-6%). Between 4% and 25% of patients never achieved MCID cutoffs at any time point. On average, patients achieved clinically meaningful improvement in PROs at 3 months postoperatively, regardless of preoperative severity. While the majority (50%-73%, depending on the PROM) reached MCID within 3 months, an additional 12%-21% improved by 12 months, 4%-8% by 2 years, and only 1%-6% by 5 years; 4%-25% never reached the MCID. This 5-year follow-up study clarifies the timeline of clinical improvement after surgery for CSM and provides a useful tool for both surgeon planning and patient counseling.
While depressive symptoms are a known risk factor for cardiovascular disease (CVD), their influence over time in individuals without standard modifiable risk factors ('SMuRF-less' adults) is unclear. This study examined the association between depressive symptoms and incident CVD in SMuRF-less middle-aged and older adults. This prospective cohort study analyzed data from the China Health and Retirement Longitudinal Study (CHARLS; n = 2095) and the English Longitudinal Study of Ageing (ELSA; n = 798). Depressive symptoms were assessed at baseline and at a roughly 2-year follow-up using the CES-D scale. Participants were categorized into four patterns: 'Never', 'Onset', 'Remission', and 'Persistence'. The primary outcome was the first occurrence of self-reported, physician-diagnosed CVD (heart disease or stroke). Cox proportional hazards models were used to calculate hazard ratios (HRs) and 95% confidence intervals (CIs). Over follow-up, compared with the Never group, participants with Onset (CHARLS: HR 1.55, 95% CI 1.16-2.08; ELSA: HR 1.28, 95% CI 1.01-1.62) and Persistence (CHARLS: HR 2.11, 95% CI 1.68-2.65; ELSA: HR 1.77, 95% CI 1.12-2.79) of depressive symptoms had a significantly higher risk of incident CVD. The Remission group did not show a significantly elevated risk (CHARLS: HR 1.15, 95% CI 0.87-1.52; ELSA: HR 1.14, 95% CI 0.71-1.82). A dose-response relationship was observed, with higher cumulative depressive symptom scores associated with linearly increasing CVD risk. Among SMuRF-less middle-aged and older adults, the onset and persistence of depressive symptoms were associated with an increased risk of incident CVD.
Poor dental health is linked to poor physical and mental health. This study aimed to examine the characteristics of U.S. adults that are associated with having seen a dentist in the past year. A cross-section of adults aged 18-64 years (N=19,975) from the 2023 National Health Interview Survey was examined. Bivariate analyses examined the associations of sociodemographic and financial variables with dental visits in the last 12 months. Multinomial modeling was used to assess these variables as predictors of three outcomes for time since the last dental visit: within the last 12 months; over a year but <10 years; and over 10 years or never, which was the reference category. Among young and middle-aged adults, 4.8% of Americans, representing over 9 million people, had either never seen a dentist or not seen a dentist in 10 years or more. The likelihood of a dental visit in the last 12 months increased with education level (no high-school degree versus a graduate or professional degree [AOR=0.21, 95% CI=0.09, 0.50]) and income (income below the federal poverty line versus income in the highest quartile [AOR=0.20, 95% CI=0.11, 0.35]). Having dental coverage in a private plan or Medicaid, compared with having no coverage, predicted having a dental visit within the last 12 months in both multinomial and bivariate analyses. Access to dental care in young and middle-aged adults is determined by financial ability. Access to dental care could be increased by reducing its financial barriers, including raising the age at which a young adult can be covered by a parent's plan and making dental coverage comparable with physical health coverage. Given the current data about the links between dental, mental, and physical health, parity for all care is warranted.
Informed consent is a process by which a patient can understand all aspects of treatment, including the procedure's benefits, potential risks, advantages, disadvantages, and purpose, as well as alternative treatments. Informed consent assists the patient in making an informed decision whether to accept or refuse treatment. This was a cross-sectional, non-probability online survey study targeting adult dental patients with previous dental experience in the Eastern Province of Saudi Arabia. An online survey built in SurveyMonkey was distributed through social media platforms including Twitter, Instagram, Facebook, and WhatsApp. Seven hundred and thirteen (713) individuals responded to the survey, a 25% response rate. Of respondents, 585 (82%) had previous dental treatment and were included in analyses. Around 50% of respondents reported that they were always informed of the risks, understood the risks, were informed of alternative treatment options, and understood these alternatives. Most respondents reported rarely or never being informed of specific complications or risks prior to root canal treatment (possibility of instrument breakage, 85%; perforation, 77%) or tooth extraction (possibility of tooth fracture, 92%; temporary nerve damage, 91%; dry socket, 71%). Patients treated in private clinics had 3.6-fold higher odds of not signing consent compared with those treated in government clinics (OR = 3.58, 95% CI: 2.55-5.04). Older age (>28 years) was also a significant predictor, with 1.4-fold higher odds of rarely/never signing (OR = 1.40, 95% CI: 1.02-1.92). The findings suggest that the quality and use of informed consent practices in dental settings in Saudi Arabia vary considerably, pointing to gaps that warrant further evaluation and procedure-specific consent protocols aligned with national and international standards.
Reusable surgical instruments require repeated sterilization between procedures, yet many instruments prepared for surgery are never used. The national scale of the reusable surgical instrument fleet and its associated reprocessing burden remain poorly quantified. Aggregated operational data from 251 U.S. healthcare facilities encompassing 2,618 operating rooms were analyzed to quantify instrument inventories, sterilization volume, and instrument loss. National estimates were derived by scaling operating room-level metrics to reported U.S. operating room capacity. Published sterilization cost estimates and instrument utilization rates were applied to estimate national sterilization expenditures and idle reprocessing costs. Replacement costs for missing instruments were modeled using log-normal price distributions and Monte Carlo simulation. The dataset included 336,784 trays and 6.7 million instruments, with 236 million instruments reprocessed annually. National extrapolation suggests a reusable instrument fleet of approximately 112 million instruments and nearly 4 billion annual reprocessing events. Estimated annual sterilization costs ranged from $1.3 to $11.8 billion. This study provides an initial estimate of the national cost of surgical instrument reprocessing in the U.S. Further work is required to determine a national estimate of the percentage of instruments that go unused. Nevertheless, the scale of expenditures on this process indicates significant opportunities to improve perioperative efficiency and reduce waste through better inventory management, yielding substantial cost savings and decreasing the environmental footprint of surgical care.
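The replacement-cost modeling described above, log-normal price distributions combined with Monte Carlo simulation, can be sketched in a few lines of numpy. All parameters here (median price, log-scale spread, number of missing instruments, number of simulations) are illustrative assumptions, not the study's inputs:

```python
import numpy as np

rng = np.random.default_rng(42)

# Assumed instrument price distribution: log-normal with median $80 and
# log-scale sigma 0.9 (placeholder values for illustration only).
mu, sigma = np.log(80.0), 0.9
n_missing = 1_000        # hypothetical count of missing instruments
n_sims = 2_000           # number of simulated replacement scenarios

# Monte Carlo: draw a price for each missing instrument in each scenario,
# then total the replacement cost per scenario.
draws = rng.lognormal(mean=mu, sigma=sigma, size=(n_sims, n_missing))
totals = draws.sum(axis=1)

# Analytic mean of a log-normal, as a sanity check on the simulation.
mean_price = np.exp(mu + sigma**2 / 2)
print(round(mean_price, 2), np.percentile(totals, [2.5, 97.5]).round(0))
```

The percentile interval on `totals` is what turns a per-instrument price model into an uncertainty range for fleet-level replacement expenditure.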
Benign breast disease (BBD) is common and confers heterogeneous increases in breast cancer risk; however, risk prediction relies mainly on histopathology and clinical factors. Sclerosing adenosis (SA) is a proliferative BBD lesion associated with an approximately two-fold increase in risk, yet most women with SA never develop breast cancer. We hypothesize that the immune-stromal microenvironment of SA and its surrounding lobular field relates to subsequent invasive breast cancer. In a nested case-control study within a BBD cohort, we profiled 24 sclerosing adenosis (SA) biopsies (9 developing invasive breast cancer within 15 years, cases; 15 cancer-free at ≥ 15 years, controls). We integrated whole-tissue NanoString gene-expression profiling with multiplex immunofluorescence (MxIF) imaging of SA lesions and surrounding morphologically normal lobules. We measured immune and stromal biomarkers in SA lesions and adjacent lobules, with image analysis masked to case-control status, integrated these data with whole-tissue gene expression, and summarized both microenvironment patterns and the proximity of immune cells to proliferating epithelium. SA biopsies from women who later developed cancer showed a low-immune, high-stromal gene-expression program, whereas controls were enriched for immune signatures. Stromal densities of CD8⁺, CD68⁺ and RUNX3⁺ cells in both lobular stroma and SA lesions mirrored this axis and were markedly lower in cases than controls. Unsupervised clustering identified immune-cold and immune-hot lobule types and four SA lesion field archetypes; immune-hot lobules and immune-hot/epithelium-proliferative lesion fields were enriched in controls. 
Spatial analyses further showed that immune-hot lobules have stromal immune cells positioned closer to proliferating epithelium and enriched CD27-CD8 microclusters, whereas SA lesions from cases exhibit greater immune-to-Ki67 distances, fewer boundary-proximal CD8⁺ sentinels, and depletion of CD27-RUNX3 and RUNX3-CD8 microclusters. These findings support an association of an immune-cold SA lesion embedded within an immune-cold lobular field phenotype with subsequent invasive breast cancer risk in women with SA, and suggest that spatially organized, RUNX3-rich immune microenvironments may contribute to epithelial surveillance. Validation in larger cohorts will be needed to confirm generalizability and clarify lesion-specific versus field-wide contributions.
Accurate, same-cycle assessment of endometrial readiness remains a major unmet need in assisted reproduction. Animal models have shown that spiral artery angiogenesis drives a midluteal rise in uterine oxygen tension, but direct, real-time measurements in humans have never been reported. We hypothesized that intrauterine dissolved O₂ profiling during the luteal phase could represent a potential functional, noninvasive biomarker of endometrial status. In this prospective, observational pilot feasibility study, eight healthy women aged 18-35 years with regular menstrual cycles and BMI < 30 underwent serial intrauterine pO₂ measurements across the luteal phase during a single natural cycle. Measurements were scheduled approximately every 48 h from the day of the LH surge (LH + 0) up to LH + 13/14, with minor variations due to scheduling constraints. Dissolved oxygen was recorded using a 1 mm fiber optic microsensor positioned 1 cm from the uterine fundus under ultrasound guidance. The primary outcome was intrauterine pO₂ (Torr) across the luteal phase. Two distinct intrauterine oxygenation profiles were identified. Four participants exhibited a "peak" pattern characterized by early luteal low pO₂ (< 15 Torr), followed by a sharp mid-luteal rise in pO₂ (40-45 Torr at LH + 4 to LH + 6, p < 0.0001), a short plateau, and a decline by LH + 8. One participant showed an earlier and abbreviated peak. The remaining four participants maintained pO₂ values < 35 Torr throughout the luteal phase ("no-peak" pattern). Post-hoc review of baseline screening data and follow-up participant interviews identified plausible physiological, pharmacological, or lifestyle-related factors that may influence endometrial vascular maturation in the no-peak subgroup. This study provides the first in vivo characterization of real-time intrauterine oxygen dynamics across the luteal phase in women. 
Intrauterine pO₂ profiling identified distinct temporal oxygenation patterns across the luteal phase and may reflect physiologically relevant changes in endometrial function. These preliminary findings support further evaluation of intrauterine oxygen profiling as a potential non-invasive, same-cycle functional biomarker of embryo-endometrium synchrony. Larger studies are required to validate its predictive value for implantation and live birth outcomes. ISRCTN85528745 (retrospectively registered on 30/01/2026).
Our group previously reported that lung cancer (LC) screening history results and subsequent timing of diagnosis are associated with significant differences in survival outcomes. As a follow-up study, we sought to develop novel personalized risk models that considered screening history for incident LCs, interval LCs, and prevalence LCs. Using data from the CT arm of the National Lung Screening Trial (NLST), four independent case-control analyses were conducted to develop parsimonious risk models. Controls (n=26,038) were those never diagnosed with LC. The four LC case groups were 270 prevalence LCs, 44 interval LCs, 206 screen-detected LCs (SDLCs) that had a baseline positive screen, and 164 SDLCs that had a baseline negative screen. For each case-control analysis, univariable analyses identified statistically significant covariates from 48 variables, and these covariates were then entered into a stepwise backward selection approach to identify a model with the most informative covariates. For prevalence LCs, the model (AUC=0.711) included age, pack-years smoked, BMI, smoking status, smoking onset age, personal history of cancer, family history of LC, alcohol consumption, and milling occupation. For interval LCs, the model (AUC=0.734) included age, smoking status, smoking onset age, cigar smoking, marital status, and asbestos occupation. For baseline positive SDLCs, the model (AUC=0.685) included age, pack-years smoked, BMI, emphysema, chemicals/plastics exposure, and milling occupation. For baseline negative SDLCs, the model (AUC=0.701) included age, pack-years smoked, BMI, smoking status, emphysema, sarcoidosis, and sandblasting occupation. Besides smoking and age, which are inclusion criteria for screening, these models identified other important risk factors which could be used to provide personalized LC risk assessment and screening management.
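The AUC values quoted for each model have a useful probabilistic reading: the AUC equals the Mann-Whitney probability that a randomly chosen case receives a higher predicted risk score than a randomly chosen control. A small sketch on invented scores (not data from the NLST analyses):

```python
def auc(scores_cases, scores_controls):
    """AUC as the Mann-Whitney probability that a random case outscores a
    random control, with ties counted as one-half."""
    wins = 0.0
    for s in scores_cases:
        for t in scores_controls:
            if s > t:
                wins += 1.0
            elif s == t:
                wins += 0.5
    return wins / (len(scores_cases) * len(scores_controls))

# Toy risk scores from a hypothetical model.
cases = [0.9, 0.8, 0.6, 0.55]
controls = [0.7, 0.5, 0.4, 0.3]
print(auc(cases, controls))  # 0.875
```

Under this reading, an AUC of 0.711 means the prevalence-LC model ranks a random case above a random control about 71% of the time.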
Occupational radiation protection depends on both competency (knowledge) and consistent compliance with monitoring and protective practice. This study quantified radiation protection knowledge and self-reported compliance-related safety practices among occupationally exposed radiology-related professionals in Türkiye and examined how these outcomes vary across workforce groups and institution types. We conducted a national cross-sectional, web-based survey (25 items) among staff working in radiology, nuclear medicine, radiotherapy, and interventional settings. Knowledge was summarized as a 0-100 index based on eight objective items; descriptive statistics and non-parametric group comparisons were used. A total of 153 respondents participated (56.9% women; 97.4% technologists/technicians; 69.7% Ministry of Health facilities). Mean knowledge score was 74.7 ± 19.2 (0-100). Correct response rates were highest for the purpose of personal dosimetry (151/153, 98.7%) and the dose-distance relationship (141/153, 92.2%), while significant deficiencies were found in understanding occupational dose limits (75/153, 49.0%) and the license renewal interval for radiation devices (72/153, 47.1%). Most participants reported using personal dosimeters (133/153, 86.9%) and reviewing dosimetry reports (123/149, 82.6%), but consistent use of personal protective equipment was lower (87/153, 56.9%; 40/153, 26.1% sometimes; 26/153, 17.0% never). Knowledge differed by education (H = 27.5, p < 0.001), work experience (H = 39.4, p < 0.001), and institution type (H = 14.7, p < 0.001), but not by gender (p = 0.33). While core radiation protection knowledge is generally adequate, important gaps persist in regulatory knowledge and consistent protective equipment use. Focused, practice-oriented reinforcement targeting early-career staff and lower-performing institutions may strengthen ALARA-aligned behavior and improve occupational radiation safety practices nationwide.