OBJECTIVES: This report provides detailed information on how the 2000 Centers for Disease Control and Prevention (CDC) growth charts for the United States were developed, expanding upon the report that accompanied the initial release of the charts in 2000. METHODS: The growth charts were developed with data from five national health examination surveys and limited supplemental data. Smoothed percentile curves were developed in two stages. In the first stage, selected empirical percentiles were smoothed with a variety of parametric and nonparametric procedures. In the second stage, parameters were created to obtain the final curves, additional percentiles and z-scores. The revised charts were evaluated using statistical and graphical measures. RESULTS: The 1977 National Center for Health Statistics (NCHS) growth charts were revised for infants (birth to 36 months) and older children (2 to 20 years). New body mass index-for-age (BMI-for-age) charts were created. Use of national data improved the transition from the infant charts to those for older children. The evaluation of the charts found no large or systematic differences between the smoothed percentiles and the empirical data. CONCLUSION: The 2000 CDC growth charts were developed with improved data and statistical procedures. Health care providers now have an instrument for growth screening that better represents the racial-ethnic diversity and combination of breast- and formula-feeding in the United States. It is recommended that these charts replace the 1977 NCHS charts when assessing the size and growth patterns of infants, children, and adolescents.
The authors describe changes proposed for the census scheduled for the year 2000 in Poland. These include changes in coverage, definitions, methods of tabulation, and concepts. Some of these changes concern data on households and families, fertility, and economic activities. (SUMMARY IN ENG AND RUS)
Describing the distribution of disease between different populations and over time has been a highly successful way of devising hypotheses about causation and of quantifying the potential for preventive activities.1 Statistical data are also essential components of disease surveillance programs. These play a critical role in the development and implementation of health policy, through identification of health problems, decisions on priorities for preventive and curative programs and evaluation of the outcomes of programs of prevention, early detection/screening and treatment in relation to resource inputs. Over the last 12 years, a series of estimates of the global burden of cancer have been published in the International Journal of Cancer.2-6 The methods have evolved and been refined, but basically they rely upon the best available data on cancer incidence and/or mortality at country level to build up the global picture. The results are more or less accurate for different countries, depending on the extent and accuracy of locally available data. This “data-based” approach is rather different from the modeling method used in other estimates.7-10 Essentially, these use sets of regression models, which predict cause-specific mortality rates of different populations from the corresponding all-cause mortality.11 The constants of the regression equations derive from datasets with different overall mortality rates (often including historic data from western countries). Cancer deaths are then subdivided into the different cancer types, according to the best available information on relative frequencies. GLOBOCAN 2000 updates the previously published data-based global estimates of incidence, mortality and prevalence to the year 2000.12 The data sources that have been used to build up the global estimates are as follows.
Incidence, the number of new cases occurring, can be expressed as the annual number of cases (the volume of new patients presenting for treatment) or as a rate per 100,000 persons per year. Incidence data are produced by population-based cancer registries.13 Registries may cover national populations or, more often, certain regions. In developing countries in particular, coverage is often confined to the capital city and its environs. It was estimated that, in 1990, about 18% of the world population was covered by registries (64% in developed countries and 5% in developing countries), although the situation is improving each year. The most recent volume of “Cancer Incidence in Five Continents” (CI5) contains comparable incidence information from 150 registries in 50 countries, primarily over the period 1988–1992.14 Survival statistics are also produced by cancer registries through the follow-up of registered cancer cases. Population-based figures are published by registries in many developed countries, for example, the SEER program covering 10% of the U.S. population15 and the EUROCARE II project, including 17 countries of Europe.16 Survival data from populations of China, the Philippines, Thailand, India and Cuba have been published by Sankaranarayanan et al.17 Mortality is the number of deaths occurring, and the mortality rate is the number of deaths per 100,000 persons per year. It is the product of the incidence and the fatality (the complement of survival) of a given cancer. Mortality rates measure the average risk to the population of dying from a specific cancer, while fatality (1-survival) represents the probability that an individual with cancer will die from it. Mortality data are derived from vital registration systems, where the fact and “underlying” cause of death are certified, usually by a medical practitioner. Their great advantage is comprehensive coverage and availability.
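The identity stated above, mortality rate as the product of incidence and fatality with fatality = 1 - survival, can be sketched in a few lines. The incidence and survival figures below are hypothetical, chosen only for illustration:

```python
def mortality_rate(incidence_per_100k: float, survival: float) -> float:
    """Mortality rate implied by an incidence rate and a survival proportion,
    using mortality = incidence * fatality, fatality = 1 - survival."""
    fatality = 1.0 - survival
    return incidence_per_100k * fatality

# Hypothetical cancer: incidence 50 per 100,000 with 40% survival
print(mortality_rate(50.0, 0.40))  # 30.0 deaths per 100,000 per year
```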
By 1990, about 42% of the world population was covered by vital registration systems producing mortality statistics on cancer. These are not, however, of the same quality in all countries. National-level statistics are collated and made available by the World Health Organization (http://www-dep.iarc.fr/dataava/globocan/who.htm), although for some countries coverage of the population is manifestly incomplete (so that the so-called mortality rates produced are implausibly low) and in others the quality of cause-of-death information is poor. Frequency data, e.g., case series from hospitals and pathology laboratories, provide an indication of the relative importance of different cancers in a country or region in the absence of a population-based registry and mortality statistics. There are problems in extrapolating the results to the general population, since such series are subject to various forms of selection bias. Such data are generally published locally or in journal articles, although a few compendia are available.18, 19 Prevalence is the proportion of a population that has the disease at a given point in time.20 For many diseases (e.g., hypertension, diabetes), prevalence usefully describes the number of individuals requiring care. For cancer, however, many persons diagnosed in the past have been “cured”—they no longer have an excess risk of death (although some residual disability may be present, for example, following a resective operation). A straightforward comparison of need for cancer services can be made using partial prevalence, cases diagnosed within 1, 3 and 5 years, to indicate the numbers of persons undergoing initial treatment (within 1 year of diagnosis), clinical follow-up (within 3 years) or not yet considered “cured” (within 5 years). Patients alive 5 years after diagnosis are usually considered cured since, for most cancers, the death rates of such patients are similar to those in the general population.
The methods used to produce the estimates are summarised in several recent articles.5, 6, 21, 22 The “Help” option of GLOBOCAN 2000 lists the sources of data and methods used for each country. For incidence, the following types of source were used:
- National incidence data from good-quality cancer registries.
- National mortality data, with estimation of incidence using sets of regression models specific for site, sex and age, derived from local cancer registry data (incidence plus mortality).
- Local (regional) incidence data from 1 or more regional cancer registries within a country. When there are several cancer registries in the country, their incidence rates are combined into a common set of values by a weighted average.
- Local mortality data from a sample survey of deaths, converted to incidence using specific models.
- Frequency data. For several developing countries, only data on the relative frequency of different cancers (by age and sex) are available. These are applied to an estimated “all sites” incidence rate, derived from existing cancer registry results, in 7 world regions (Eastern Africa, Middle Africa, Northern Africa, Southern Africa, Western Africa, Middle East and Other Oceania).
- No data. The country-specific rates are those of the corresponding world area (calculated from the other countries for which estimates could be made). Few large countries fall into this category; those with a population greater than 10 million were Morocco, Afghanistan, Nepal, Sri Lanka, Mozambique, Madagascar and Yemen.
For mortality, the sources were:
- National mortality rates, with, for some countries, a correction factor applied to account for known and quantified underreporting of deaths. Rates for missing sites were computed using proportions from mortality files provided by cancer registries.
- When no national mortality data are available, local (regional) mortality rates derived from the data of 1 or more cancer registries covering part of a country (state, province, etc.).
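One of the simpler operations above, combining the rates of several regional registries into a single national set of values by a weighted average, can be sketched as follows; the registry rates and populations are hypothetical:

```python
def combined_rate(registries):
    """Population-weighted average rate; registries is a list of
    (rate_per_100k, population_covered) pairs."""
    total_pop = sum(pop for _, pop in registries)
    return sum(rate * pop for rate, pop in registries) / total_pop

# Two hypothetical regional registries covering 2 and 1 million people
national = combined_rate([(80.0, 2_000_000), (120.0, 1_000_000)])
print(round(national, 1))  # 93.3 per 100,000
```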
When mortality data were unavailable or known to be of poor quality, mortality was estimated from incidence, using country/region-specific survival (see prevalence data). In the absence of any data, country-specific rates are calculated from the average of those of neighbouring countries in the same region. Estimates of partial prevalence in each country were derived by combining the annual number of new cases and the corresponding probability of survival by time. For example, 1-year prevalence at a fixed point in mid-2000 was estimated from the number of new cases in 2000 multiplied by the probability of surviving at least 6 months, and 3-year prevalence sums the numbers alive at 0.5, 1.5 and 2.5 years. Relative survival data were obtained from the sources cited above and converted to observed survival using “normal” mortality probabilities (derived from the corresponding life tables). The shape of the survival curve from 0 to 5 years postdiagnosis was assumed to follow a Weibull distribution.22 GLOBOCAN 2000 presents incidence, mortality and prevalence data for 5 broad age groups (0–14, 15–44, 45–54, 55–64 and 65 and over) and by sex for all countries of the world for 24 different types of cancer. Since cancer data are collected and compiled some time after the events to which they relate, the most recent statistics available are from periods 3–10 years earlier. The actual numbers of cancer cases, deaths and prevalent cases are calculated by applying these rates to the estimated world population for 2000, obtained from the most recent projections prepared by the United Nations Population Division.23 On the CD-ROM are computer programs to analyse and present the cancer database. The database itself may be downloaded from the Internet (http://www-dep.iarc.fr/globocan/globocan.htm). This site contains the most recently available estimates of the incidence and mortality rates in different countries worldwide.
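The partial-prevalence arithmetic described above can be sketched directly; the annual case count and the Weibull survival parameters below are hypothetical, chosen only to illustrate the method:

```python
import math

def weibull_survival(t: float, shape: float, scale: float) -> float:
    """Probability of surviving to time t (years), assuming a Weibull
    survival curve S(t) = exp(-(t/scale)**shape), as in the text."""
    return math.exp(-((t / scale) ** shape))

def partial_prevalence(new_cases_per_year: float, years: int,
                       shape: float, scale: float) -> float:
    """n-year prevalence at a fixed mid-year point: cases diagnosed
    0.5, 1.5, ..., n - 0.5 years earlier that are still alive."""
    return sum(new_cases_per_year * weibull_survival(k + 0.5, shape, scale)
               for k in range(years))

# Hypothetical cohort: 1,000 new cases/year, Weibull shape 0.9, scale 8 years
one_year = partial_prevalence(1000, 1, 0.9, 8.0)
three_year = partial_prevalence(1000, 3, 0.9, 8.0)
```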
GLOBOCAN 2000 can present the statistics described at any level of geographical aggregation and in tabular or graphical format (maps, bar charts, age-specific curves and pie charts). Some examples of these graphical presentations are shown on the cover of this issue. Tabulations of numbers and rates may also be displayed and printed. Incorporation of population projections for 5-year intervals, from 2005 to 2050,23 allows GLOBOCAN 2000 to be used to prepare projections of future burden, assuming current rates of incidence and mortality, or incorporating age/sex-specific rates of change in the rates. Table I shows the most basic summary data of all—the global numbers of cases, deaths and prevalent cancers (within 5 years of diagnosis) by cancer site in males, females and both sexes. There are an estimated 10.1 million new cases, 6.2 million deaths and 22.4 million persons living with cancer in the year 2000. No attempt has been made to estimate incidence or mortality of nonmelanoma skin cancer because of the difficulties of measurement and consequent lack of data. The total “All Cancer” therefore excludes such tumours. The 2000 estimate represents an increase of around 22% in incidence and mortality since our most recent comprehensive estimates (for 1990). Lung cancer is the main cancer in the world today, whether considered in terms of numbers of cases (1.2 million) or deaths (1.1 million), because of the high case fatality (ratio of mortality:incidence = 0.9). However, breast cancer, although it is the second most common cancer overall (1.05 million new cases) ranks much less highly (5th) as a cause of death because of the relatively favourable prognosis (ratio of mortality:incidence = 0.4). Colon plus rectum is third in importance in terms of new cases (945,000 cases, 492,000 deaths), and stomach cancer (876,000 cases, 647,000 deaths) fourth. 
In terms of prevalence, the most common cancers are breast (3.9 million cases), colorectal (2.4 million) and prostate (1.6 million). The ratio between prevalence and incidence is an indicator of prognosis. This explains why breast cancer appears as the most prevalent cancer in the world, despite there being fewer new cases than for lung cancer, for which the outlook is considerably poorer. Table II shows incidence rates for all cancers (excluding skin) by world area and sex. Two indices are used: the age-standardized rate per 100,000 (standardized to the world standard population) and the cumulative rate (percent) from birth to age 65. Both of these indicators allow comparisons between populations that are not influenced by differences in their age structures. Age-standardized rates in developed countries are about twice those in developing countries; the differential is less for the cumulative rate, which ignores disease rates in the 65 and over age groups. On average, worldwide, there is about a 10% chance of developing a cancer before age 65. Incidence (and mortality) rates are highest in North America, Australia/New Zealand and Western Europe, and lowest in parts of Africa. This overall risk is, of course, dependent upon the contributions of different types of cancer. For example, in West Africa, the incidence of almost all cancers is low (except for cervix cancer in women and liver cancer in men). This contrasts with Southern Africa, which has, in addition, high rates of lung and oesophagus cancer, and with East Africa, with high rates of AIDS-related tumours, notably Kaposi's sarcoma. The statistics used to assess the importance (burden) of cancer and of different types of cancer in the population either quantify the disease itself (the “need” for services) or the demand that it places upon them.24 Incidence rates provide a measure of the risk of developing specific cancers in different populations.
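The two indices can be illustrated with a toy calculation. The age bands, rates and standard-population weights below are simplified and hypothetical; the actual world standard population uses 5-year bands:

```python
# Illustrative (not official) standard-population weights, summing to 1
WORLD_STD_WEIGHTS = {"0-14": 0.31, "15-44": 0.46, "45-64": 0.15, "65+": 0.08}

def age_standardised_rate(rates_per_100k: dict) -> float:
    """Weighted average of age-specific rates using standard weights."""
    return sum(rates_per_100k[band] * w for band, w in WORLD_STD_WEIGHTS.items())

def cumulative_rate_percent(rates_per_100k: dict, band_widths: dict) -> float:
    """Cumulative rate to age 65: sum of rate x band width, as a percentage.
    Bands aged 65 and over are simply excluded from band_widths."""
    total = sum(rates_per_100k[band] * width for band, width in band_widths.items())
    return total / 100_000 * 100

rates = {"0-14": 10.0, "15-44": 50.0, "45-64": 400.0, "65+": 1200.0}
widths = {"0-14": 15, "15-44": 30, "45-64": 20}       # only bands below age 65
print(round(age_standardised_rate(rates), 1))          # 182.1 per 100,000
print(round(cumulative_rate_percent(rates, widths), 2))  # 9.65 (about a 10% risk)
```

Note that the cumulative rate, as in the text, ignores the 65-and-over band entirely, which is why the developed/developing differential is smaller on this index.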
Changes in incidence are the appropriate indicator of the impact of primary prevention strategies. Mortality rates are sometimes used as a convenient proxy measure of the risk of acquiring the disease (incidence) when comparing different groups, since they may be more generally available. However, this use assumes equal survival in the populations being compared, and this assumption may well be incorrect; there are, for example, well-documented differences in survival between countries. Mortality does provide an unambiguous measure of the outcome or impact of cancer and, used in conjunction with data on incidence, is the index of choice for the evaluation of the effects of early diagnosis or treatment. Prevalence, as the number of persons ever diagnosed with cancer (lifetime prevalence), does not have much apparent utility. The data can be derived from cancer registries that have very long-term registration of cases and complete follow-up for vital status over many years.25, 26 Population surveys are another approach, although they underestimate true prevalence.27 In the absence of complete data, an estimate can be prepared using models that incorporate long time series of incidence and survival.28, 29 Other workers have attempted to define the proportion and timing of “cure” for different cancers, so that only patients not cured are considered prevalent.30 The data needed for such calculations are rarely available, however, and, for international comparisons, a simpler approach is needed. Partial prevalence, as estimated in GLOBOCAN, as well as approximating the numbers of patients under treatment or follow-up, does not require long time series of incidence or survival data (or a further set of assumptions required to estimate them). Compound indicators, which use information on the duration or severity of disease, have a genuine utility in setting priorities within health-care systems.
They include person-years of life lost (how many years of normal life span are lost due to deaths from cancer)31 and disability- or quality-adjusted life-years lost.32, 33 The latter measures require that a numerical score be given to the years lived with a reduced quality of life between diagnosis and death (where quality = 0) or cure (quality = 1). The problem with such indicators, however, is that there is simply insufficient quantitative information on quality or disability following a cancer diagnosis in different cultures (or countries) worldwide to permit calculation of valid comparative statistics. The GLOBOCAN estimates of incidence, mortality and (5-year) prevalence help to define priorities for cancer control programs (prevention and treatment, aided by early detection, if appropriate). For countries with well-established sources of data, changes in the estimates over time indicate progress against cancer. Incidence trends can monitor the success of prevention, and mortality trends the success of treatment (resulting from earlier diagnosis or more effective therapies). In addition, the geographic patterns of cancer internationally serve one of the classic roles of descriptive epidemiology: observing whether the distribution of specific cancers follows the patterns expected from the distribution of known risk factors between populations or whether there are apparent anomalies that merit further investigation. GLOBOCAN 2000 incorporates the best currently available national statistics, but as information systems extend to all countries of the world and improve their coverage and accuracy, we expect that our knowledge of the world cancer burden will improve and so too will our ability to mount effective strategies against it.
In order to improve the bitrates of lossless JPEG 2000, we propose to modify the discrete wavelet transform (DWT) by skipping selected steps of its computation. We employ a heuristic to construct the skipped steps DWT (SS-DWT) in an image-adaptive way and define fixed SS-DWT variants. For a large and diverse set of images, we find that SS-DWT significantly improves the bitrates of non-photographic images. From a practical standpoint, the most interesting results are obtained by applying entropy estimation of coding effects to select among the fixed SS-DWT variants. This way we obtain a compression scheme that, as opposed to the general SS-DWT case, is compliant with the JPEG 2000 part 2 standard. It provides an average bitrate improvement of roughly 5% for the entire test set, whereas the overall compression time becomes only 3% greater than that of the unmodified JPEG 2000. Bitrates of photographic and non-photographic images are improved by roughly 0.5% and 14%, respectively. The improvements can be increased further, at a significantly increased cost, by exploiting a heuristic that selects the steps to be skipped based on the actual bitrate instead of an estimated one, and by applying reversible denoising and lifting steps to SS-DWT.
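The abstract does not spell out the construction, but the general idea of skipping individual lifting steps can be illustrated on the reversible CDF 5/3 transform used by lossless JPEG 2000. The sketch below is a toy one-level, one-dimensional version with hypothetical skip flags, not the paper's actual SS-DWT algorithm or selection heuristic:

```python
def lifting_53(x, skip_predict=False, skip_update=False):
    """One level of the reversible 5/3 lifting wavelet on an even-length
    list of integers; either lifting step may be skipped."""
    s = x[0::2]  # even samples: low-pass candidates
    d = x[1::2]  # odd samples: high-pass candidates
    if not skip_predict:  # predict: d[i] -= floor((s[i] + s[i+1]) / 2)
        for i in range(len(d)):
            right = s[i + 1] if i + 1 < len(s) else s[i]  # symmetric edge
            d[i] -= (s[i] + right) // 2
    if not skip_update:   # update: s[i] += floor((d[i-1] + d[i] + 2) / 4)
        for i in range(len(s)):
            left = d[i - 1] if i > 0 else d[0]            # symmetric edge
            s[i] += (left + d[i] + 2) // 4
    return s, d

low, high = lifting_53([10, 12, 11, 13, 40, 42, 41, 43])
identity = lifting_53([10, 12, 11, 13], skip_predict=True, skip_update=True)
```

Skipping both steps degenerates to a plain even/odd split, which hints at why skipping can help for non-photographic content: the smoothing assumptions behind the predict and update steps fit natural images much better than, say, screen content.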
SIMPLE 2000 (Superheated Instrument for Massive ParticLE searches) will consist of an array of eight to sixteen large active mass ($\sim15$ g) Superheated Droplet Detectors (SDDs) to be installed in the new underground laboratory of Rustrel-Pays d'Apt. Several factors make SDDs an attractive approach for the detection of Weakly Interacting Massive Particles (WIMPs), namely their intrinsic insensitivity to minimum ionizing particles, high fluorine content, low cost, and operation near ambient pressure and temperature. We comment here on the fabrication, calibration and already-competitive first limits from SIMPLE prototype SDDs, as well as on the expected immediate increase in the sensitivity of the program, which aims at an exposure of $>$25 kg-day during the year 2000. The ability of modest-mass fluorine-rich detectors to explore regions of neutralino parameter space beyond the reach of the most ambitious cryogenic projects is pointed out.
The Advanced Accelerator Concepts 2000 (AAC2K) Workshop was held in Santa Fe in June, 2000, and included a wide array of conceptual and theoretical advances at the frontier of accelerator physics. This paper reviews the highlights of the workshop, with subjects ranging from acceleration using lasers, plasmas and microstructures, to the beam physics of muon colliders. Particular emphasis is given to the topics which are relevant to research at existing linear accelerator facilities, and the effect of this research on the capabilities of such facilities.
We present low- and medium-resolution spectra of the recurrent nova CI Aquilae taken at 14 epochs in May and June, 2000. The overall appearance is similar to that of other U Sco-type recurrent novae (U Sco, V394 CrA). Medium-resolution (R=7000-10000) hydrogen and iron profiles suggest an early expansion velocity of 2000-2500 km/s. The Hα evolution is followed from Δt = -0.6 d to +53 d, starting from a nearly Gaussian shape and evolving through strong P-Cyg profiles to a double-peaked profile. The interstellar component of the sodium D line and two diffuse interstellar bands put constraints on the interstellar reddening, which is estimated to be E(B-V)=0.85\pm0.3. The available visual and CCD-V observations are used to determine t0, t2 and t3. The resulting parameters are: t0=2451669.5\pm0.1, t2=30\pm1 d, t3=36\pm1 d. The recent light curve is found to be generally similar to that observed in 1917, with departures as large as 1-2 mag in certain phases. This behaviour is also typical for the U Sco subclass.
The Nasdaq Composite fell another $\approx 10\%$ on Friday the 14th of April 2000, signaling the end of a remarkable speculative high-tech bubble that started in spring 1997. The closing of the Nasdaq Composite at 3321 corresponds to a total loss of over 35% since its all-time high of 5133 on the 10th of March 2000. Similarities to the speculative bubble preceding the infamous crash of October 1929 are quite striking: the belief in what was coined a ``New Economy'', both in 1929 and presently, made share prices of companies with three-digit price-earnings ratios soar. Furthermore, we show that the largest drawdowns of the Nasdaq are outliers with a confidence level better than 99% and that these two speculative bubbles, as well as others, nicely fit into the quantitative framework proposed by the authors in a series of recent papers.
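For illustration, the loss quoted above can be reproduced with a minimal drawdown computation. Here drawdown means the largest decline from a running peak, which is one common definition; the outlier analysis in the text uses a more specific notion of drawdown as a persistent decrease over consecutive trading days:

```python
def max_drawdown(prices):
    """Largest fractional drop from a running maximum of the series."""
    peak = prices[0]
    worst = 0.0
    for p in prices:
        peak = max(peak, p)
        worst = max(worst, (peak - p) / peak)
    return worst

# Hypothetical index path through the all-time high of 5133 down to 3321
print(round(max_drawdown([4000, 5133, 4500, 3321]), 3))  # 0.353, i.e. over 35%
```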
We give here a compilation of papers presented at Lattice 2000 (XVIII Intl. Symposium on Lattice Field Theory, Bangalore, India, 17-22 August 2000). The table of contents provides links to papers on the e-print arXiv.
Over the past decades automated debugging has seen major achievements. However, as debugging is by necessity attached to particular programming paradigms, the results are scattered. The aims of the workshop are to gather common themes and solutions across programming communities, and to cross-fertilize ideas. AADEBUG 2000 in Munich follows AADEBUG'93 in Linkoeping, Sweden; AADEBUG'95 in Saint Malo, France; AADEBUG'97 in Linkoeping, Sweden.
In April 2000 the single bunch energy spread, bunch length, horizontal emittance, and vertical emittance were measured as functions of current in KEK's ATF damping ring. In this report the measurement results are analyzed in light of intrabeam scattering theory. The measurements are found to be broadly consistent with theory, although the measured effects appear to be stronger than theory predicts. In addition, the factor of 3 growth in vertical emittance at a current of 3 mA does not seem to be supported by theory.
The workshop `Astrophysical Dynamics 1999/2000' followed a homonymous advanced research course, and both activities were organized by me. In this opening paper of the proceedings book, I describe them and document their strong impact on the academic life of the local institutions. The advanced research course was open to graduate students, senior researchers, and motivated under-graduate students with good background in physics and mathematics. The course covered several multi-disciplinary issues of modern research on astrophysical dynamics, and thus also of interest to physicists, mathematicians and engineers. The major topic was gas dynamics, viewed in context with stellar dynamics and plasma physics. The course was complemented by parallel seminars on hot topics given by experts in such fields, and open to a wide scientific audience. In particular, I gave a friendly introduction to wavelets, which are becoming an increasingly powerful tool not only for processing signals and images but also for analysing fractals and turbulence, and which promise to have important applications to dynamical modelling of disc galaxies. The workshop was open to a wide scientific audience. The works
This report presents bunch length and energy spread measurements performed in April 2000 at the ATF Damping Ring, at KEK. Measurements were performed with the beam on and then off the linear (difference) coupling resonance. Due to strong intra-beam scattering in the ATF ring, the results depended strongly on the coupling.
In the paper "Excitation Mechanism of Near-Infrared [Fe II] Emission in Seyfert and Starburst Galaxies" by Hideaki Mouri, Kimiaki Kawara, and Yoshiaki Taniguchi (ApJ, 528, 186 [2000]), the two panels of Figure 6 were printed at different sizes as the result of an error in the printing process. The correct version of Figure 6 appears here.
We summarize the experimental and theoretical results presented in the "Physics at the Highest Q^2 and p^2_t" working group at the DIS 2000 Workshop. High Q^2 and p^2_t processes measured at current and future colliders allow us to improve our knowledge of Standard Model (SM) physics by providing precise measurements of the SM parameters and, consequently, consistency checks of the SM. Moreover, they give information on key quantities for the calculation of the SM expectations in a yet unexplored domain, such as the parton densities of the proton or the photon. In addition to these experimental inputs, higher-order calculations are also needed to obtain precise expectations for SM processes, which are a key ingredient in the searches for new phenomena in high Q^2 and p^2_t processes at current and future experiments. The experimental and theoretical status of SM physics at high Q^2 and p^2_t is reviewed in the first part of this summary, with the remainder dedicated to physics beyond the Standard Model.
We developed a new technology for the global detection of atmospheric disturbances, based on phase measurements of the total electron content (TEC) using an international GPS network. Temporal dependencies of TEC are obtained for a set of spaced receivers of the GPS network simultaneously for the entire set of visible satellites. These series are filtered in the selected range of oscillation periods using known algorithms for the spatio-temporal analysis of signals. An "instantaneous" ionospheric response to the sudden commencement (SC) of the strong magnetic storm of April 6, 2000 was detected. On the dayside of the Earth the largest net response amplitude was found to be of order 0.8*10^16 m^-2 (1--2% of the background TEC value), and the delay with respect to the SC at mid-latitudes was about 200 s. At higher latitudes the delay is as long as 15 min. On the nightside these values are 0.2*10^16 m^-2 and 30 min, respectively. The velocity of the traveling disturbance from middle to high latitudes on the dayside, as well as from the dayside to the nightside, was about 10-20 km/s.
This summary of the working group 2 of DIS 2000 encompasses experimental and theoretical results of jet physics, open and bound state heavy flavour production, prompt photon production, next-to-leading order QCD calculations and beyond, instantons, fragmentation, event shapes, and power corrections, primarily from deep-inelastic scattering and photoproduction at HERA, but also from the LEP and Tevatron colliders.
The 3x+1 problem concerns iteration of the map T(n) = (3n+1)/2 if n is odd, and T(n) = n/2 if n is even. The 3x+1 Conjecture asserts that for every positive integer n>1 the forward orbit of n includes the integer 1. This paper is an annotated bibliography of work on the 3x+1 problem published from 2000 through 2009, plus some later papers that existed as preprints by 2009. It is a sequel to an annotated bibliography on the 3x+1 problem covering 1963-1999. At present the 3x+1 Conjecture remains unsolved.
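The map and the conjecture above translate directly into code; the iteration bound below is an arbitrary safety cutoff, and the check of course verifies the conjecture only for the numbers tried:

```python
def T(n: int) -> int:
    """The 3x+1 map: (3n+1)/2 for odd n, n/2 for even n."""
    return (3 * n + 1) // 2 if n % 2 else n // 2

def reaches_one(n: int, max_steps: int = 100_000) -> bool:
    """Does the forward orbit of n under T reach 1 within max_steps?"""
    for _ in range(max_steps):
        if n == 1:
            return True
        n = T(n)
    return False

# Spot-check the conjecture for small n (not a proof!)
assert all(reaches_one(n) for n in range(2, 1000))
print(T(7), T(8))  # 11 4
```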
Observational data on the bursting activity of all five known Soft Gamma Repeaters are presented. This information was obtained with Konus gamma-ray burst experiments on board Venera 11-14, Wind, and Kosmos-2326 spacecraft in the period from 1978 to 2000. These data on appearance rates, time histories, and energy spectra of repeated soft bursts obtained with similar instruments and collected together in a comparable form should be useful for further studies of SGRs. (available at http://www.ioffe.rssi.ru/LEA/SGR/Catalog/).
This is a survey, written in 2000, of upper chromatic numbers.