BACKGROUND: Antioxidants may protect the aging brain against oxidative damage associated with pathological changes of Alzheimer disease (AD). OBJECTIVE: To examine the relationship between antioxidant supplement use and risk of AD. DESIGN: Cross-sectional and prospective study of dementia. Elderly (65 years or older) county residents were assessed in 1995 to 1997 for prevalent dementia and AD, and again in 1998 to 2000 for incident illness. Supplement use was ascertained at the first contact. SETTING: Cache County, Utah. PARTICIPANTS: Among 4740 respondents (93%) with data sufficient to determine cognitive status at the initial assessment, we identified 200 prevalent cases of AD. Among 3227 survivors at risk, we identified 104 incident AD cases at follow-up. MAIN OUTCOME MEASURE: Diagnosis of AD by means of multistage assessment procedures. RESULTS: Analyses of prevalent and incident AD yielded similar results. Use of vitamin E and C (ascorbic acid) supplements in combination was associated with reduced AD prevalence (adjusted odds ratio, 0.22; 95% confidence interval, 0.05-0.60) and incidence (adjusted hazard ratio, 0.36; 95% confidence interval, 0.09-0.99). A trend toward lower AD risk was also evident in users of vitamin E and multivitamins containing vitamin C, but we saw no evidence of a protective effect with use of vitamin E or vitamin C supplements alone, with multivitamins alone, or with vitamin B-complex supplements. CONCLUSIONS: Use of vitamin E and vitamin C supplements in combination is associated with reduced prevalence and incidence of AD. Antioxidant supplements merit further study as agents for the primary prevention of AD.
BACKGROUND: Counties are the smallest unit for which mortality data are routinely available, allowing consistent and comparable long-term analysis of trends in health disparities. Average life expectancy has steadily increased in the United States, but there is limited information on long-term mortality trends in US counties. This study aimed to investigate trends in county mortality and cross-county mortality disparities, including the contributions of specific diseases to county-level mortality trends. METHODS AND FINDINGS: We used mortality statistics (from the National Center for Health Statistics [NCHS]) and population data (from the US Census) to estimate sex-specific life expectancy for US counties for every year between 1961 and 1999. Data for analyses in subsequent years were not provided to us by the NCHS. We calculated different metrics of cross-county mortality disparity, and also grouped counties on the basis of whether their mortality changed favorably or unfavorably relative to the national average. We estimated the probability of death from specific diseases for counties with above- or below-average mortality performance. We simulated the effect of cross-county migration on each county's life expectancy using a time-based simulation model. Between 1961 and 1999, the standard deviation (SD) of life expectancy across US counties was at its lowest in 1983, at 1.9 and 1.4 y for men and women, respectively. Cross-county life expectancy SD increased to 2.3 and 1.7 y in 1999. Between 1961 and 1983 no counties had a statistically significant increase in mortality; the major cause of mortality decline for both sexes was reduction in cardiovascular mortality. From 1983 to 1999, life expectancy declined significantly in 11 counties for men (by 1.3 y) and in 180 counties for women (by 1.3 y); another 48 (men) and 783 (women) counties had nonsignificant life expectancy decline.
Life expectancy decline in both sexes was caused by increased mortality from lung cancer, chronic obstructive pulmonary disease (COPD), diabetes, and a range of other noncommunicable diseases, which were no longer compensated for by the decline in cardiovascular mortality. Higher HIV/AIDS and homicide deaths also contributed substantially to life expectancy decline for men, but not for women. Alternative specifications of the effects of migration showed that the rise in cross-county life expectancy SD was unlikely to be caused by migration. CONCLUSIONS: There was a steady increase in mortality inequality across the US counties between 1983 and 1999, resulting from stagnation or increase in mortality among the worst-off segment of the population. Female mortality increased in a large number of counties, primarily because of chronic diseases related to smoking, overweight and obesity, and high blood pressure.
Abstract Background Addressing COVID-19 is a pressing health and social concern. To date, many epidemic projections and policies addressing COVID-19 have been designed without seroprevalence data to inform epidemic parameters. We measured the seroprevalence of antibodies to SARS-CoV-2 in a community sample drawn from Santa Clara County. Methods On April 3-4, 2020, we tested county residents for antibodies to SARS-CoV-2 using a lateral flow immunoassay. Participants were recruited using Facebook ads targeting a sample of individuals living within the county by demographic and geographic characteristics. We estimate weights to adjust our sample to match the zip code, sex, and race/ethnicity distribution within the county. We report both the weighted and unweighted prevalence of antibodies to SARS-CoV-2. We also adjust for test performance characteristics by combining data from 16 independent samples obtained from manufacturer’s data, regulatory submissions, and independent evaluations: 13 samples for specificity (3,324 specimens) and 3 samples for sensitivity (157 specimens). Results The raw prevalence of antibodies to SARS-CoV-2 in our sample was 1.5% (exact binomial 95CI 1.1-2.0%). Test performance specificity in our data was 99.5% (95CI 99.2-99.7%) and sensitivity was 82.8% (95CI 76.0-88.4%). The unweighted prevalence adjusted for test performance characteristics was 1.2% (95CI 0.7-1.8%). After weighting for population demographics of Santa Clara County, the prevalence was 2.8% (95CI 1.3-4.7%), using bootstrap to estimate confidence bounds. These prevalence point estimates imply that 54,000 (95CI 25,000 to 91,000 using weighted prevalence; 23,000 with 95CI 14,000-35,000 using unweighted prevalence) people were infected in Santa Clara County by early April, many more than the approximately 1,000 confirmed cases at the time of the survey. 
Conclusions The estimated population prevalence of SARS-CoV-2 antibodies in Santa Clara County implies that the infection may be much more widespread than indicated by the number of confirmed cases. More studies are needed to improve precision of prevalence estimates. Locally-derived population prevalence estimates should be used to calibrate epidemic and mortality projections.
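The unweighted adjustment for test performance described in this abstract behaves like the standard Rogan-Gladen correction for imperfect sensitivity and specificity. The sketch below is our formula choice, not necessarily the authors' exact estimator, and the study's full procedure also propagates sampling uncertainty (e.g., via the bootstrap); still, it reproduces the reported 1.2% point estimate from the abstract's figures.

```python
# Sketch of the standard correction of apparent prevalence for test
# sensitivity and specificity (Rogan-Gladen); uncertainty propagation
# from the abstract's bootstrap procedure is omitted here.
def rogan_gladen(raw_prev, sensitivity, specificity):
    """True-prevalence estimate from apparent prevalence and test performance."""
    return (raw_prev + specificity - 1.0) / (sensitivity + specificity - 1.0)

# Figures from the abstract: raw 1.5%, sensitivity 82.8%, specificity 99.5%
adj = rogan_gladen(0.015, 0.828, 0.995)
print(f"{adj:.1%}")  # 1.2%, matching the reported unweighted adjusted prevalence
```

Note that when specificity is close to 1 minus the raw prevalence, as here, the adjusted estimate is highly sensitive to the specificity value, which is why the confidence bounds widen after adjustment.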
OBJECTIVE: This study examined shortages of mental health professionals at the county level across the United States. A goal was to motivate discussion of the data improvements and practice standards required to develop an adequate mental health professional workforce. METHODS: Shortage of mental health professionals was conceptualized as the percentage of need for mental health visits that is unmet within a county. County-level need was measured by estimating the prevalence of serious mental illness, then combining separate estimates of provider time needed by individuals with and without serious mental illness derived from National Comorbidity Survey Replication, U.S. Census, and Medical Expenditure Panel Survey data. County-level supply data were compiled from professional associations, state licensure boards, and national certification boards. Shortage was measured for prescribers, nonprescribers, and a combination of both groups in the nation's 3,140 counties. Ordinary least-squares regression identified county characteristics associated with shortage. RESULTS: Nearly one in five counties (18%) in the nation had unmet need for nonprescribers. Nearly every county (96%) had unmet need for prescribers and therefore some level of unmet need overall. Rural counties and those with low per capita income had higher levels of unmet need. CONCLUSIONS: These findings identified widespread prescriber shortage and poor distribution of nonprescribers. A caveat is that these estimates of need were extrapolated from current provider treatment patterns rather than from a normative standard of how much care should be provided and by whom. Better data would improve these estimates, but future work needs to move beyond simply describing shortages to resolving them.
BACKGROUND: We previously reported that the prevalence of Crohn's disease (CD) and ulcerative colitis (UC) in Olmsted County, Minnesota, had risen significantly between 1940 and 1993. We sought to update the incidence and prevalence of these conditions in our region through 2000. METHODS: The Rochester Epidemiology Project allows population-based studies of disease in county residents. CD and UC were defined by previously used criteria. County residents newly diagnosed between 1990 and 2000 were identified as incidence cases, and persons with these conditions alive and residing in the county on January 1, 2001, were identified as prevalence cases. All rates were adjusted to 2000 US Census figures for whites. RESULTS: In 1990-2000 the adjusted annual incidence rates for UC and CD were 8.8 cases per 100,000 (95% confidence interval [CI], 7.2-10.5) and 7.9 per 100,000 (95% CI, 6.3-9.5), respectively, not significantly different from rates observed in 1970-1979. On January 1, 2001, there were 220 residents with CD, for an adjusted prevalence of 174 per 100,000 (95% CI, 151-197), and 269 residents with UC, for an adjusted prevalence of 214 per 100,000 (95% CI, 188-240). CONCLUSION: Although incidence rates of CD and UC increased after 1940, they have remained stable over the past 30 years. Since 1991 the prevalence of UC decreased by 7%, and the prevalence of CD increased about 31%. Extrapolating these figures to US Census data, there were approximately 1.1 million people with inflammatory bowel disease in the US in 2000.
BACKGROUND: Long-term care facilities are high-risk settings for severe outcomes from outbreaks of Covid-19, owing to both the advanced age and frequent chronic underlying health conditions of the residents and the movement of health care personnel among facilities in a region. METHODS: After identification on February 28, 2020, of a confirmed case of Covid-19 in a skilled nursing facility in King County, Washington, Public Health-Seattle and King County, aided by the Centers for Disease Control and Prevention, launched a case investigation, contact tracing, quarantine of exposed persons, isolation of confirmed and suspected cases, and on-site enhancement of infection prevention and control. RESULTS: As of March 18, a total of 167 confirmed cases of Covid-19 affecting 101 residents, 50 health care personnel, and 16 visitors were found to be epidemiologically linked to the facility. Most cases among residents included respiratory illness consistent with Covid-19; however, in 7 residents no symptoms were documented. Hospitalization rates for facility residents, visitors, and staff were 54.5%, 50.0%, and 6.0%, respectively. The case fatality rate for residents was 33.7% (34 of 101). As of March 18, a total of 30 long-term care facilities with at least one confirmed case of Covid-19 had been identified in King County. CONCLUSIONS: In the context of rapidly escalating Covid-19 outbreaks, proactive steps by long-term care facilities to identify and exclude potentially infected staff and visitors, actively monitor for potentially infected patients, and implement appropriate infection prevention and control measures are needed to prevent the introduction of Covid-19.
Abstract Knowledge of the area under different crops is important to the U.S. Department of Agriculture. Sample surveys have been designed to estimate crop areas for large regions, such as crop-reporting districts, individual states, and the United States as a whole. Predicting crop areas for small areas such as counties has generally not been attempted, due to a lack of available data from farm surveys for these areas. The use of satellite data in association with farm-level survey observations has been the subject of considerable research in recent years. This article considers (a) data for 12 Iowa counties, obtained from the 1978 June Enumerative Survey of the U.S. Department of Agriculture and (b) data obtained from land observatory satellites (LANDSAT) during the 1978 growing season. Emphasis is given to predicting the area under corn and soybeans in these counties. A linear regression model is specified for the relationship between the reported hectares of corn and soybeans within sample segments in the June Enumerative Survey and the corresponding satellite determination for areas under corn and soybeans. A nested-error model defines a correlation structure among reported crop hectares within the counties. Given this model, the mean hectares of the crop per segment in a county is defined as the conditional mean of reported hectares, given the satellite determinations and the realized (random) county effect. The mean hectares of the crop per segment is the sum of a fixed component, involving unknown parameters to be estimated and a random component to be predicted. Variance-component estimators in the nested-error model are defined, and the generalized least-squares estimators of the parameters of the linear model are obtained. Predictors of the mean crop hectares per segment are defined in terms of these estimators. 
An estimator of the variance of the error in the predictor is constructed, including terms arising from the estimation of the parameters of the model. Predictions of mean hectares of corn and soybeans per segment for the 12 Iowa counties are presented. Standard errors of the predictions are compared with those of competing predictors. The suggested predictor for the county mean crop area per segment has a standard error that is considerably less than that of the traditional survey regression predictor. Key Words: Small-area estimation; LANDSAT; June Enumerative Survey; Components of variance; Nested-error model.
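The county-mean predictor described above combines a regression-synthetic component with a shrunken county-level residual. The sketch below is a minimal illustration of that structure, with variance components and regression coefficients taken as known (the paper estimates both, and that estimation contributes extra terms to the prediction variance); all numbers are hypothetical, not from the article.

```python
import numpy as np

# Minimal sketch of the county-mean predictor in a nested-error model:
# y_ij = x_ij' beta + v_i + e_ij, with county effect variance sigma2_v and
# segment-level error variance sigma2_e. Variance components are assumed
# known here for illustration; the paper estimates them from the data.
def county_mean_predictor(ybar_i, xbar_sample_i, Xbar_pop_i, beta, n_i,
                          sigma2_v, sigma2_e):
    """Predict a county's mean crop hectares per segment."""
    gamma_i = sigma2_v / (sigma2_v + sigma2_e / n_i)   # shrinkage weight
    synthetic = Xbar_pop_i @ beta                      # regression part
    residual = ybar_i - xbar_sample_i @ beta           # county-level deviation
    return synthetic + gamma_i * residual

# Hypothetical county with 3 sampled segments (numbers are made up):
pred = county_mean_predictor(
    ybar_i=130.0,                          # mean reported hectares per segment
    xbar_sample_i=np.array([1.0, 280.0]),  # intercept + satellite mean (sample)
    Xbar_pop_i=np.array([1.0, 300.0]),     # intercept + satellite mean (county)
    beta=np.array([10.0, 0.4]),            # assumed GLS coefficient estimates
    n_i=3, sigma2_v=140.0, sigma2_e=250.0)
print(round(pred, 2))  # 135.01
```

The shrinkage weight moves toward 1 as a county contributes more sampled segments, so well-sampled counties lean on their own survey data while sparsely sampled counties lean on the satellite regression; this is the mechanism behind the reduced standard errors reported above.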
Florida is one of several states that have sought to protect newborns by requiring that mothers known to have used alcohol or illicit drugs during pregnancy be reported to health authorities. To estimate the prevalence of substance abuse by pregnant women, we collected urine samples from all pregnant women who enrolled for prenatal care at any of the five public health clinics in Pinellas County, Florida (n = 380), or at any of 12 private obstetrical offices in the county (n = 335); each center was studied for a one-month period during the first half of 1989. Toxicologic screening for alcohol, opiates, cocaine and its metabolites, and cannabinoids was performed blindly with the use of an enzyme-multiplied immunoassay technique; all positive results were confirmed. Among the 715 pregnant women we screened, the overall prevalence of a positive result on the toxicologic tests of urine was 14.8 percent; there was little difference in prevalence between the women seen at the public clinics (16.3 percent) and those seen at the private offices (13.1 percent). The frequency of a positive result was also similar among white women (15.4 percent) and black women (14.1 percent). Black women more frequently had evidence of cocaine use (7.5 percent vs. 1.8 percent for white women), whereas white women more frequently had evidence of the use of cannabinoids (14.4 percent vs. 6.0 percent for black women). During the six-month period in which we collected the urine samples, 133 women in Pinellas County were reported to health authorities after delivery for substance abuse during pregnancy. Despite the similar rates of substance abuse among black and white women in our study, black women were reported at approximately 10 times the rate for white women (P less than 0.0001), and poor women were more likely than others to be reported. We conclude that the use of illicit drugs is common among pregnant women regardless of race and socio-economic status. 
If legally mandated reporting is to be free of racial or economic bias, it must be based on objective medical criteria.
We estimate the causal effect of each county in the United States on children's incomes in adulthood. We first estimate a fixed effects model that is identified by analyzing families who move across counties with children of different ages. We then use these fixed effect estimates to (i) quantify how much places matter for intergenerational mobility, (ii) construct forecasts of the causal effect of growing up in each county that can be used to guide families seeking to move to opportunity, and (iii) characterize which types of areas produce better outcomes. For children growing up in low-income families, each year of childhood exposure to a one standard deviation (std. dev.) better county increases income in adulthood by 0.5%. There is substantial variation in counties' causal effects even within metro areas. Counties with less concentrated poverty, less income inequality, better schools, a larger share of two-parent families, and lower crime rates tend to produce better outcomes for children in poor families. Boys' outcomes vary more across areas than girls' outcomes, and boys have especially negative outcomes in highly segregated areas. Areas that generate better outcomes have higher house prices on average, but our approach uncovers many "opportunity bargains"-places that generate good outcomes but are not very expensive.
On March 17, 2020, a member of a Skagit County, Washington, choir informed Skagit County Public Health (SCPH) that several members of the 122-member choir had become ill. Three persons, two from Skagit County and one from another area, had test results positive for SARS-CoV-2, the virus that causes coronavirus disease 2019 (COVID-19). Another 25 persons had compatible symptoms. SCPH obtained the choir's member list and began an investigation on March 18. Among 61 persons who attended a March 10 choir practice at which one person was known to be symptomatic, 53 cases were identified, including 33 confirmed and 20 probable cases (secondary attack rates of 53.3% among confirmed cases and 86.7% among all cases). Three of the 53 persons who became ill were hospitalized (5.7%), and two died (3.7%). The 2.5-hour singing practice provided several opportunities for droplet and fomite transmission, including members sitting close to one another, sharing snacks, and stacking chairs at the end of the practice. The act of singing, itself, might have contributed to transmission through emission of aerosols, which is affected by loudness of vocalization (1). Certain persons, known as superemitters, who release more aerosol particles during speech than do their peers, might have contributed to this and previously reported COVID-19 superspreading events (2-5). These data demonstrate the high transmissibility of SARS-CoV-2 and the possibility of superemitters contributing to broad transmission in certain unique activities and circumstances. It is recommended that persons avoid face-to-face contact with others, not gather in groups, avoid crowded places, maintain physical distancing of at least 6 feet to reduce transmission, and wear cloth face coverings in public settings where other social distancing measures are difficult to maintain.
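The reported secondary attack rates are consistent with excluding the symptomatic index case from both the numerator and the denominator; the arithmetic below is our reading of the abstract's figures, not the investigators' stated protocol.

```python
# Sketch of the secondary-attack-rate arithmetic implied by the abstract:
# the one attendee known to be symptomatic (the index case) is excluded
# from numerator and denominator.
attendees, index_cases = 61, 1
confirmed, probable = 33, 20     # confirmed count includes the index case

at_risk = attendees - index_cases                      # 60 susceptible attendees
sar_confirmed = (confirmed - index_cases) / at_risk    # 32 / 60
sar_all = (confirmed - index_cases + probable) / at_risk  # 52 / 60
print(f"{sar_confirmed:.1%}, {sar_all:.1%}")  # 53.3%, 86.7%
```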
In an attempt to replicate Berkman and Syme's study of social networks and mortality in Alameda County, California, the authors investigated the relationship between a social network index and survivorship from 1967 to 1980 in the Evans County, Georgia, cohort. They constructed an index modeled after the Berkman Social Network Index and tested it in race- and sex-specific proportional hazards models for 2,059 subjects who were examined in 1967-1969 during the Evans County Cardiovascular Epidemiologic Study. The present study emphasized a priori specification of the social network index and statistical hypothesis testing. Descriptive analyses were consistent with a modest social networks effect (e.g., hazard ratio (95 per cent confidence interval) of 1.6 (1.2-2.2)). Among white males, the age-adjusted hazard ratio comparing the lowest to the highest value of our six-level index was 2.0 (1.2-3.4), but control for potential confounders (principally cardiovascular disease risk factors) reduced this value to 1.5 (0.8-2.6). The social networks effect among white females, black males, and black females was weaker and clearly nonsignificant. Exploratory analyses suggested that marital status, church activities, and an alternate social network index predicted survivorship, but not in a dose-response fashion. Reduced survivorship among older subjects with few social ties was the most important feature of the data.
BACKGROUND: There is significant geographic variation in the reported incidence of ulcerative colitis. AIMS: To update the incidence and prevalence of ulcerative colitis in Olmsted County, Minnesota, examine temporal trends, and determine overall survival. PATIENTS: All Olmsted County residents diagnosed with ulcerative colitis between 1940 and 1993 (incidence cases), and all residents with ulcerative colitis alive on 1 January 1991 (prevalence cases). METHODS: Incidence and prevalence rates were adjusted using 1990 US census figures for whites. The effects of age, sex, and calendar year on incidence rates were evaluated using Poisson regression. Survival from diagnosis was compared with that expected for US north-central whites. RESULTS: Between 1940 and 1993, 278 incidence cases were identified, for an adjusted incidence rate of 7.6 cases per 100 000 person years (95% confidence interval (CI), 6.7 to 8.5). On 1 January 1991, there were 218 residents with definite or probable ulcerative colitis, for an adjusted prevalence rate of 229 cases per 100 000 (95% CI, 198 to 260). Increased incidence rates were associated with later calendar years (p<0.002), younger age (p<0.0001), urban residence (p<0.0001), and male sex (p<0.003). Overall survival was similar to that expected (p>0.2). CONCLUSIONS: The overall incidence rate of ulcerative colitis in Olmsted County increased until the 1970s, and remained stable thereafter. Incidence rates among men and urban residents were significantly higher. The prevalence rate in Rochester in 1991 was 19% higher than that in 1980. Overall survival was similar to that of the general population.
BACKGROUND: Limited data exist on trends in incidence of atrial fibrillation (AF). We assessed the community-based trends in AF incidence for 1980 to 2000 and provided prevalence projections to 2050. METHODS AND RESULTS: The adult residents of Olmsted County, Minnesota, who had ECG-confirmed first AF in the period 1980 to 2000 (n=4618) were identified. Trends in age-adjusted incidence were determined and used to construct model-based prevalence estimates. The age- and sex-adjusted incidence of AF per 1000 person-years was 3.04 (95% CI, 2.78 to 3.31) in 1980 and 3.68 (95% CI, 3.42 to 3.95) in 2000. According to Poisson regression with adjustment for age and sex, incidence of AF increased significantly (P=0.014), with a relative increase of 12.6% (95% CI, 2.1 to 23.1) over 21 years. The increase in age-adjusted AF incidence did not differ between men and women (P=0.84). According to the US population projections by the US Census Bureau, the number of persons with AF is projected to be 12.1 million by 2050, assuming no further increase in age-adjusted incidence of AF, but 15.9 million if the increase in incidence continues. CONCLUSIONS: The age-adjusted incidence of AF increased significantly in Olmsted County during 1980 to 2000. Whether or not this rate of increase continues, the projected number of persons with AF for the United States will exceed 10 million by 2050, underscoring the urgent need for primary prevention strategies against AF development.
BACKGROUND: Annually since 2010, the University of Wisconsin Population Health Institute and the Robert Wood Johnson Foundation have produced the County Health Rankings, a "population health checkup" for the nation's over 3,000 counties. The purpose of this paper is to review the background and rationale for the Rankings, explain in detail the methods we use to create the health rankings in each state, and discuss the strengths and limitations associated with ranking the health of communities. METHODS: We base the Rankings on a conceptual model of population health that includes both health outcomes (mortality and morbidity) and health factors (health behaviors, clinical care, social and economic factors, and the physical environment). Data for over 30 measures available at the county level are assembled from a number of national sources. Z-scores are calculated for each measure, multiplied by their assigned weights, and summed to create composite measure scores. Composite scores are then ordered and counties are ranked from best to worst health within each state. RESULTS: Health outcomes and related health factors vary significantly within states, with over two-fold differences between the least healthy and the healthiest counties for measures such as premature mortality, teen birth rates, and percent of children living in poverty. Ranking within each state depicts disparities that are not apparent when counties are ranked across the entire nation. DISCUSSION: The County Health Rankings can be used to clearly demonstrate differences in health by place, raise awareness of the many factors that influence health, and stimulate community health improvement efforts. The Rankings draw upon the human instinct to compete by facilitating comparisons between neighboring or peer counties within states.
Since no population health model, or rankings based on such models, will ever perfectly describe the health of a community, we encourage users to look to local sources of data to understand more about the health of their community.
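The composite-scoring step described in the methods (z-score each measure, weight, sum, rank within state) can be sketched as below. The measures, weights, and sign convention here are invented for illustration; the actual Rankings use their own measure-specific directions and published weights.

```python
import numpy as np

# Illustrative sketch of the Rankings' composite scoring: z-score each
# measure across a state's counties, apply assigned weights, sum, and rank.
# Toy sign convention: higher measure values are worse, so the lowest
# composite score gets rank 1 (healthiest).
def rank_counties(measures, weights):
    """measures: (n_counties, n_measures) array; weights: (n_measures,)."""
    z = (measures - measures.mean(axis=0)) / measures.std(axis=0)
    composite = z @ weights
    return composite.argsort().argsort() + 1  # 1 = best

# Made-up data: premature mortality and teen birth rate for 3 counties
measures = np.array([[300.0, 20.0],
                     [450.0, 35.0],
                     [380.0, 28.0]])
weights = np.array([0.5, 0.5])
print(rank_counties(measures, weights))  # [1 3 2]
```

The double `argsort` converts composite scores to within-state ranks; because z-scores are computed within each state, a county's rank says nothing about how it would compare nationally, which is the limitation the abstract notes.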
Losses from environmental hazards have escalated in the past decade, prompting a reorientation of emergency management systems away from simple postevent response. There is a noticeable change in policy, with more emphasis on loss reduction through mitigation, preparedness, and recovery programs. Effective mitigation of losses from hazards requires hazard identification, an assessment of all the hazards likely to affect a given place, and risk-reduction measures that are compatible across a multitude of hazards. The degree to which populations are vulnerable to hazards, however, is not solely dependent upon proximity to the source of the threat or the physical nature of the hazard; social factors also play a significant role in determining vulnerability. This paper presents a method for assessing vulnerability in spatial terms using both biophysical and social indicators. A geographic information system was utilized to establish areas of vulnerability based upon twelve environmental threats and eight social characteristics for our study area, Georgetown County, South Carolina. Our results suggest that the most biophysically vulnerable places do not always spatially intersect with the most vulnerable populations. This is an important finding because it reflects the likely 'social costs' of hazards on the region. While economic losses might be large in areas of high biophysical risk, the resident population also may have greater safety nets (insurance, additional financial resources) to absorb and recover from the loss quickly. Conversely, it would take only a moderate hazard event to disrupt the well-being of the majority of county residents (who are more socially vulnerable, but perhaps do not reside in the highest areas of biophysical risks) and retard their longer-term recovery from disasters. This paper advances our theoretical and conceptual understanding of the spatial dimensions of vulnerability.
It further highlights how conceptualizations of human-environment relationships can be merged with geographical techniques to understand contemporary public policy issues.
Older adults are susceptible to severe coronavirus disease 2019 (COVID-19) outcomes as a consequence of their age and, in some cases, underlying health conditions (1). A COVID-19 outbreak in a long-term care skilled nursing facility (SNF) in King County, Washington, that was first identified on February 28, 2020, highlighted the potential for rapid spread among residents of these types of facilities (2). On March 1, a health care provider at a second long-term care skilled nursing facility (facility A) in King County, Washington, had a positive test result for SARS-CoV-2, the novel coronavirus that causes COVID-19, after working while symptomatic on February 26 and 28. By March 6, seven residents of this second facility were symptomatic and had positive test results for SARS-CoV-2. On March 13, CDC performed symptom assessments and SARS-CoV-2 testing for 76 (93%) of the 82 facility A residents to evaluate the utility of symptom screening for identification of COVID-19 in SNF residents. Residents were categorized as asymptomatic or symptomatic at the time of testing, based on the absence or presence of fever, cough, shortness of breath, or other symptoms on the day of testing or during the preceding 14 days. Among 23 (30%) residents with positive test results, 10 (43%) had symptoms on the date of testing, and 13 (57%) were asymptomatic. Seven days after testing, 10 of these 13 previously asymptomatic residents had developed symptoms and were recategorized as presymptomatic at the time of testing. The reverse transcription-polymerase chain reaction (RT-PCR) testing cycle threshold (Ct) values indicated large quantities of viral RNA in asymptomatic, presymptomatic, and symptomatic residents, suggesting the potential for transmission regardless of symptoms. Symptom-based screening in SNFs could fail to identify approximately half of residents with COVID-19. Long-term care facilities should take proactive steps to prevent introduction of SARS-CoV-2 (3).
Once a confirmed case is identified in an SNF, all residents should be placed on isolation precautions if possible (3), with considerations for extended use or reuse of personal protective equipment (PPE) as needed (4).
The relationship between social and community ties and mortality was assessed using the 1965 Human Population Laboratory survey of a random sample of 6928 adults in Alameda County, California and a subsequent nine-year mortality follow-up. The findings show that people who lacked social and community ties were more likely to die in the follow-up period than those with more extensive contacts. The age-adjusted relative risks for those most isolated when compared to those with the most social contacts were 2.3 for men and 2.8 for women. The association between social ties and mortality was found to be independent of self-reported physical health status at the time of the 1965 survey, year of death, socioeconomic status, and health practices such as smoking, alcoholic beverage consumption, obesity, physical activity, and utilization of preventive health services as well as a cumulative index of health practices.
Using the records linkage system of the Mayo Clinic and of the Rochester Epidemiology Project, which accesses diagnostic data on the entire population of Olmsted County, Minnesota, we identified 45 new cases of idiopathic dilated cardiomyopathy (DCM) and 19 new cases of hypertrophic cardiomyopathy (HCM) among county residents for the years 1975-1984. Overall age- and sex-adjusted incidence rates were 6.0/100,000 and 2.5/100,000 person-years, respectively. The incidence of DCM doubled from 3.9/100,000 in the first 5 years to 7.9/100,000 person-years in the last 5 years of study. The corresponding change for HCM was from 1.4 to 3.6/100,000 person-years. Age- and sex-adjusted prevalence rates as of January 1, 1985, for DCM and HCM were 36.5/100,000 and 19.7/100,000 population, respectively. The prevalence of DCM in persons less than 55 years old was 17.9/100,000, over a third of whom were New York Heart Association functional Class III or IV at diagnosis. These estimates may be of value in determining the potential use of health care resources, particularly cardiac transplantation.
In this paper, we (i) outline why σ‐convergence may not accompany β‐convergence, (ii) discuss evidence of β‐convergence in the United States, and (iii) use U.S. county‐level data containing over 3,000 cross‐sectional observations to demonstrate that σ‐convergence cannot be detected at the county level across the United States, or within the large majority of the individual U.S. states considered separately. Indeed, in many cases statistically significant σ‐divergence is found.
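The distinction the abstract draws rests on a standard point: β-convergence (poorer units growing faster, i.e., regression toward the mean) does not imply σ-convergence (falling cross-sectional dispersion), because idiosyncratic shocks keep replenishing the variance. The toy simulation below illustrates this mechanism only; it is not the paper's empirical procedure, and all parameter values are invented.

```python
import random

# Toy simulation: each of n "counties" mean-reverts with coefficient
# beta < 1 (beta-convergence) while receiving i.i.d. shocks each period.
# Cross-sectional dispersion converges to shock_sd / sqrt(1 - beta^2)
# (about 1.147 here), so starting from SD 1.0 dispersion actually rises
# despite mean reversion: beta-convergence without sigma-convergence.
def simulate(beta=0.9, shock_sd=0.5, n=3000, periods=40, seed=0):
    rng = random.Random(seed)
    y = [rng.gauss(0.0, 1.0) for _ in range(n)]          # initial SD = 1.0
    for _ in range(periods):
        y = [beta * yi + rng.gauss(0.0, shock_sd) for yi in y]
    mean = sum(y) / n
    return (sum((yi - mean) ** 2 for yi in y) / n) ** 0.5  # final cross-sectional SD

print(simulate())  # close to the steady-state SD of ~1.147, i.e. above 1.0
```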
OBJECTIVES: This report details development of the 2013 National Center for Health Statistics' (NCHS) Urban-Rural Classification Scheme for Counties (update of the 2006 NCHS scheme) and applies it to health measures to demonstrate urban-rural health differences. METHODS: The methodology used to construct the 2013 NCHS scheme was the same as that used for the 2006 NCHS scheme, but 2010 census-based data were used rather than 2000 census-based data. All U.S. counties and county-equivalent entities are assigned to one of six levels (four metropolitan and two nonmetropolitan) based on: 1) their February 2013 Office of Management and Budget designation as metropolitan, micropolitan, or noncore; 2) for metropolitan counties, the population size of the metropolitan statistical area (MSA) to which they belong; and 3) for counties in MSAs of 1 million or more, the location of principal city populations within the MSA. The 2013 and 2006 NCHS schemes were applied to data from the National Vital Statistics System (NVSS) and National Health Interview Survey (NHIS) to illustrate differences in selected health measures by urbanization level and to assess the magnitude of differences between estimates from the two schemes. RESULTS AND CONCLUSIONS: County urban-rural assignments under the 2013 NCHS scheme are very similar to those under the 2006 NCHS scheme. Application of the updated scheme to NVSS and NHIS data demonstrated the continued usefulness of the six categories for assessing and monitoring health differences among communities across the full urbanization spectrum. Residents of large central and large fringe metro counties differed substantially on many health measures, illustrating the importance of continuing to separate these counties. Residents of large fringe metro counties generally fared better than residents of less urban counties. Estimates obtained from the 2013 and 2006 schemes were similar.