A method is described for constructing long error-correcting codes from one or more shorter error-correcting codes, referred to as subcodes, and a bipartite graph. A graph is shown which specifies carefully chosen subsets of the digits of the new codes that must be codewords in one of the shorter subcodes. Lower bounds to the rate and the minimum distance of the new code are derived in terms of the parameters of the graph and the subcodes. Both the encoders and decoders proposed are shown to take advantage of the code's explicit decomposition into subcodes to decompose and simplify the associated computational processes. Bounds on the performance of two specific decoding algorithms are established, and the asymptotic growth of the complexity of decoding for two types of codes and decoders is analyzed. The proposed decoders are able to make effective use of probabilistic information supplied by the channel receiver, e.g., reliability information, without greatly increasing the number of computations required. It is shown that choosing a transmission order for the digits that is appropriate for the graph and the subcodes can give the code excellent burst-error correction abilities.
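As a worked instance of the kind of rate bound derived there, here is a minimal sketch for the regular case, assuming each digit participates in c subcode constraints and every constraint is an (n_0, k_0) subcode of rate r_0; these symbols are illustrative stand-ins, not the paper's notation.

```latex
% Each constraint removes at most n_0 - k_0 = n_0 (1 - r_0) degrees of
% freedom, and a graph with n digits of degree c has n c / n_0 constraints,
% so the rate of the composite code satisfies
R \;\ge\; 1 - c\,(1 - r_0), \qquad r_0 = k_0 / n_0 .
% Example: single-parity-check subcodes with c = 3 and n_0 = 6 give
% R >= 1 - 3 (1/6) = 1/2, the familiar (3,6)-regular low-density case.
```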
We introduce space-time block coding, a new paradigm for communication over Rayleigh fading channels using multiple transmit antennas. Data is encoded using a space-time block code, and the encoded data is split into n streams which are simultaneously transmitted using n transmit antennas. The received signal at each receive antenna is a linear superposition of the n transmitted signals perturbed by noise. Maximum-likelihood decoding is achieved in a simple way through decoupling of the signals transmitted from different antennas rather than joint detection. This uses the orthogonal structure of the space-time block code and gives a maximum-likelihood decoding algorithm based only on linear processing at the receiver. Space-time block codes are designed to achieve the maximum diversity order for a given number of transmit and receive antennas subject to the constraint of having a simple decoding algorithm. The classical mathematical framework of orthogonal designs is applied to construct space-time block codes. It is shown that space-time block codes constructed in this way exist only for a few sporadic values of n. Subsequently, a generalization of orthogonal designs is shown to provide space-time block codes for both real and complex constellations for any number of transmit antennas. These codes achieve the maximum possible transmission rate for any number of transmit antennas using any arbitrary real constellation such as PAM. For an arbitrary complex constellation such as PSK and QAM, space-time block codes are designed that achieve 1/2 of the maximum possible transmission rate for any number of transmit antennas. For the specific cases of two, three, and four transmit antennas, space-time block codes are designed that achieve, respectively, all, 3/4, and 3/4 of the maximum possible transmission rate using arbitrary complex constellations. The best tradeoff between the decoding delay and the number of transmit antennas is also computed, and it is shown that many of the codes presented here are optimal in this sense as well.
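For the two-antenna, rate-one case mentioned above, a minimal sketch of the underlying 2x2 complex orthogonal design (the Alamouti scheme) and the orthogonality property that decouples maximum-likelihood detection:

```latex
% Rows index symbol periods, columns index transmit antennas.
G_2 =
\begin{pmatrix}
 x_1 & x_2 \\
 -x_2^{*} & x_1^{*}
\end{pmatrix},
\qquad
G_2^{H} G_2 \;=\; \bigl( |x_1|^2 + |x_2|^2 \bigr) I_2 .
% Because the columns are orthogonal for every (x_1, x_2), the ML metric
% separates into independent terms for x_1 and x_2, so each symbol is
% detected by linear processing rather than a joint search.
```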
We consider the design of channel codes for improving the data rate and/or the reliability of communications over fading channels using multiple transmit antennas. Data is encoded by a channel code, and the encoded data is split into n streams that are simultaneously transmitted using n transmit antennas. The received signal at each receive antenna is a linear superposition of the n transmitted signals perturbed by noise. We derive performance criteria for designing such codes under the assumption that the fading is slow and frequency nonselective. Performance is shown to be determined by matrices constructed from pairs of distinct code sequences. The minimum rank among these matrices quantifies the diversity gain, while the minimum determinant of these matrices quantifies the coding gain. The results are then extended to fast fading channels. The design criteria are used to design trellis codes for high-data-rate wireless communication. The encoding/decoding complexity of these codes is comparable to trellis codes employed in practice over Gaussian channels. The codes constructed here provide the best tradeoff between data rate, diversity advantage, and trellis complexity. Simulation results are provided for 4 and 8 PSK signal sets with data rates of 2 and 3 bits/symbol, demonstrating excellent performance that is within 2-3 dB of the outage capacity for these channels using only 64-state encoders.
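A sketch of the two design criteria in formula form, under the slow, flat Rayleigh fading setting described above; here r denotes the minimum rank over codeword pairs and m the number of receive antennas (notation assumed for illustration):

```latex
% For distinct code sequences c and e, form the difference matrix B(c,e)
% and A(c,e) = B B^H. The pairwise error probability is bounded as
P(c \to e) \;\le\; \Bigl( \prod_{i=1}^{r} \lambda_i \Bigr)^{-m}
                   \Bigl( \frac{E_s}{4 N_0} \Bigr)^{-r m},
% where lambda_1, ..., lambda_r are the nonzero eigenvalues of A(c,e).
% Diversity gain: r m  -> maximize the minimum rank r over all pairs.
% Coding gain: (lambda_1 ... lambda_r)^{1/r} -> maximize the minimum determinant.
```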
A new class of convolutional codes called turbo-codes, whose performance in terms of bit error rate (BER) is close to the Shannon limit, is discussed. The turbo-code encoder is built using a parallel concatenation of two recursive systematic convolutional codes, and the associated decoder, using a feedback decoding rule, is implemented as P pipelined identical elementary decoders.
I. S. Reed and G. Solomon, "Polynomial Codes Over Certain Finite Fields," Journal of the Society for Industrial and Applied Mathematics, vol. 8, no. 2, pp. 300-304, 1960. https://doi.org/10.1137/0108018
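The record above is citation-only; as a minimal sketch of the paper's central construction (a message polynomial evaluated at distinct field elements), the following uses the prime field GF(7) and parameters n = 7, k = 3 as illustrative choices; the paper itself works over fields of characteristic 2.

```python
# Minimal Reed-Solomon-style evaluation code over the prime field GF(7).
# A k-symbol message defines a polynomial of degree < k; the codeword is
# its value at every field element. Two distinct such polynomials agree on
# at most k-1 points, so distinct codewords differ in >= n-k+1 positions.
from itertools import product

P, K = 7, 3                      # field size (= code length n) and dimension

def encode(msg):
    """Evaluate the message polynomial at 0, 1, ..., P-1 (mod P)."""
    return [sum(c * pow(x, i, P) for i, c in enumerate(msg)) % P
            for x in range(P)]

# Exhaustive check: the minimum Hamming weight over all nonzero messages
# equals n - k + 1 = 5, i.e. the code is maximum-distance separable.
weights = [sum(s != 0 for s in encode(m))
           for m in product(range(P), repeat=K) if any(m)]
print(min(weights))              # -> 5
```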
A low-density parity-check code is a code specified by a parity-check matrix with the following properties: each column contains a small fixed number $j \geq 3$ of 1's and each row contains a small fixed number $k > j$ of 1's. The typical minimum distance of these codes increases linearly with block length for a fixed rate and fixed $j$. When used with maximum-likelihood decoding on a sufficiently quiet binary-input symmetric channel, the typical probability of decoding error decreases exponentially with block length for a fixed rate and fixed $j$. A simple but nonoptimum decoding scheme operating directly from the channel a posteriori probabilities is described. Both the equipment complexity and the data-handling capacity in bits per second of this decoder increase approximately linearly with block length. For $j > 3$ and a sufficiently low rate, the probability of error using this decoder on a binary symmetric channel is shown to decrease at least exponentially with a root of the block length. Some experimental results show that the actual probability of decoding error is much smaller than this theoretical bound.
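A minimal sketch of the hard-decision flavor of such a decoder (the bit-flipping idea, run on a toy parity-check matrix chosen purely for illustration; the paper's scheme works from channel a posteriori probabilities, which this sketch simplifies away):

```python
import numpy as np

# Toy parity-check matrix (rows = checks, columns = bits).
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 0, 0, 1, 1],
              [0, 0, 1, 1, 0, 1]])

def bit_flip_decode(H, y, max_iters=20):
    """Flip the bits involved in the most unsatisfied checks until the
    syndrome is zero or the iteration budget runs out."""
    y = y.copy()
    for _ in range(max_iters):
        syndrome = H @ y % 2
        if not syndrome.any():
            return y                        # all checks satisfied
        unsat = syndrome @ H                # per-bit count of failed checks
        y[unsat == unsat.max()] ^= 1        # flip the worst offenders
    return y

codeword = np.zeros(6, dtype=int)           # the all-zero word is a codeword
received = codeword.copy(); received[2] = 1 # introduce one bit error
print(bit_flip_decode(H, received))         # -> [0 0 0 0 0 0]
```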
This paper presents a new family of convolutional codes, nicknamed turbo-codes, built from a particular concatenation of two recursive systematic codes, linked together by nonuniform interleaving. Decoding calls on iterative processing in which each component decoder takes advantage of the work of the other at the previous step, with the aid of the original concept of extrinsic information. For sufficiently large interleaving sizes, the correcting performance of turbo-codes, investigated by simulation, appears to be close to the theoretical limit predicted by Shannon.
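A minimal sketch of the parallel concatenated encoder described here: two identical recursive systematic convolutional encoders joined by an interleaver. The memory-2 generator pair (1+D+D^2, 1+D^2) and the fixed random permutation are illustrative choices, not the paper's exact components.

```python
import random

def rsc_encode(bits):
    """Recursive systematic convolutional encoder with memory 2:
    feedback polynomial 1+D+D^2, feedforward 1+D^2 (illustrative choice).
    Returns the parity stream; the systematic stream is the input itself."""
    s1 = s2 = 0
    parity = []
    for u in bits:
        a = u ^ s1 ^ s2           # recursion (feedback) bit
        parity.append(a ^ s2)     # feedforward taps 1 and D^2
        s1, s2 = a, s1
    return parity

def turbo_encode(bits, perm):
    """Rate-1/3 parallel concatenation: systematic bits, parity of the
    natural-order bits, parity of the interleaved bits."""
    return bits, rsc_encode(bits), rsc_encode([bits[i] for i in perm])

random.seed(0)
u = [1, 0, 1, 1, 0, 0, 1, 0]
perm = random.sample(range(len(u)), len(u))  # stand-in for the interleaver
print(turbo_encode(u, perm))
```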
The probability of error in decoding an optimal convolutional code transmitted over a memoryless channel is bounded from above and below as a function of the constraint length of the code. For all but pathological channels the bounds are asymptotically (exponentially) tight for rates above $R_0$, the computational cutoff rate of sequential decoding. As a function of constraint length the performance of optimal convolutional codes is shown to be superior to that of block codes of the same length, the relative improvement increasing with rate. The upper bound is obtained for a specific probabilistic nonsequential decoding algorithm which is shown to be asymptotically optimum for rates above $R_0$ and whose performance bears certain similarities to that of sequential decoding algorithms.
We discuss the cosmological simulation code GADGET-2, a new massively parallel TreeSPH code, capable of following a collisionless fluid with the N-body method, and an ideal gas by means of smoothed particle hydrodynamics (SPH). Our implementation of SPH manifestly conserves energy and entropy in regions free of dissipation, while allowing for fully adaptive smoothing lengths. Gravitational forces are computed with a hierarchical multipole expansion, which can optionally be applied in the form of a TreePM algorithm, where only short-range forces are computed with the 'tree' method while long-range forces are determined with Fourier techniques. Time integration is based on a quasi-symplectic scheme where long-range and short-range forces can be integrated with different time-steps. Individual and adaptive short-range time-steps may also be employed. The domain decomposition used in the parallelization algorithm is based on a space-filling curve, resulting in high flexibility and tree force errors that do not depend on the way the domains are cut. The code is efficient in terms of memory consumption and required communication bandwidth. It has been used to compute the first cosmological N-body simulation with more than $10^{10}$ dark matter particles, reaching a homogeneous spatial dynamic range of $10^5$ per dimension in a three-dimensional box. It has also been used to carry out very large cosmological SPH simulations that account for radiative cooling and star formation, reaching total particle numbers of more than 250 million. We present the algorithms used by the code and discuss their accuracy and performance using a number of test problems. GADGET-2 is publicly released to the research community.
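A minimal numerical sketch of the force-split idea behind such a TreePM scheme. The error-function split shown here is the standard one for tree-PM methods and is assumed for illustration, not quoted from the paper; r_s is the hand-off scale between the two force components.

```python
import numpy as np
from scipy.special import erf, erfc

# Split the 1/r gravitational potential into a short-range piece handled
# by the tree walk and a smooth long-range piece handled in Fourier space.
G, m, r_s = 1.0, 1.0, 0.5
r = np.linspace(0.05, 5.0, 200)

phi_short = -G * m / r * erfc(r / (2 * r_s))  # decays fast: tree with cutoff
phi_long  = -G * m / r * erf(r / (2 * r_s))   # smooth: well resolved on a mesh

# The two pieces reconstruct the exact Newtonian potential everywhere.
assert np.allclose(phi_short + phi_long, -G * m / r)
```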
From the Publisher: Should cyberspace be regulated? How can it be done? It's a cherished belief of techies and net denizens everywhere that cyberspace is fundamentally impossible to regulate. Harvard Professor Lawrence Lessig warns that, if we're not careful, we'll wake up one day to discover that the character of cyberspace has changed from under us. Cyberspace will no longer be a world of relative freedom; instead it will be a world of perfect control where our identities, actions, and desires are monitored, tracked, and analyzed for the latest market research report. Commercial forces will dictate the change, and architecture, the very structure of cyberspace itself, will dictate the form our interactions can and cannot take. Code and Other Laws of Cyberspace is an exciting examination of how the core values of cyberspace as we know it (intellectual property, free speech, and privacy) are being threatened and what we can do to protect them. Lessig shows how code, the architecture and law of cyberspace, can make a domain, site, or network free or restrictive; how technological architectures influence people's behavior and the values they adopt; and how changes in code can have damaging consequences for individual freedoms. Code is not just for lawyers and policymakers; it is a must-read for everyone concerned with the survival of democratic values in the Information Age.
1. The Search for the Codable Moment: A Way of Seeing. 2. Developing Themes and Codes. 3. Deciding on Units of Analysis and Units of Coding as Issues of Sampling. 4. Developing Themes and a Code Using the Inductive Method: An Example Using Life Stories. 5. Developing Themes Using the Theory-Driven and Prior-Research-Driven Method and Then Applying the Code: An Example Using a Critical Incident Interview. 6. Scoring, Scaling, and Clustering Themes. 7. Reliability Is Consistency of Judgment: Don't Go Breaking My Heart. 8. Challenges in Using Thematic Analysis.
We describe a technique for image encoding in which local operators of many scales but identical shape serve as the basis functions. The representation differs from established techniques in that the code elements are localized in spatial frequency as well as in space. Pixel-to-pixel correlations are first removed by subtracting a low-pass filtered copy of the image from the image itself. The result is a net data compression since the difference, or error, image has low variance and entropy, and the low-pass filtered image may be represented at reduced sample density. Further data compression is achieved by quantizing the difference image. These steps are then repeated to compress the low-pass image. Iteration of the process at appropriately expanded scales generates a pyramid data structure. The encoding process is equivalent to sampling the image with Laplacian operators of many scales. Thus, the code tends to enhance salient image features. A further advantage of the present code is that it is well suited for many image analysis tasks as well as for image compression. Fast algorithms are described for coding and decoding.
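A minimal sketch of the encode/decode loop just described. Gaussian blurring and linear-interpolation expansion from scipy stand in for the paper's equivalent weighting functions, and the quantization of the difference images is omitted.

```python
import numpy as np
import scipy.ndimage as ndi

def build_pyramid(img, levels=3):
    """Repeatedly subtract an expanded low-pass copy from the image; keep
    the low-variance difference images plus the final low-pass image."""
    cur, pyr = img.astype(float), []
    for _ in range(levels):
        low = ndi.gaussian_filter(cur, sigma=1.0)[::2, ::2]  # blur + subsample
        pyr.append(cur - ndi.zoom(low, 2, order=1))          # difference image
        cur = low
    return pyr + [cur]

def reconstruct(pyr):
    """Invert the encoding: expand and add back each difference image."""
    cur = pyr[-1]
    for diff in reversed(pyr[:-1]):
        cur = ndi.zoom(cur, 2, order=1) + diff
    return cur

img = np.random.rand(64, 64)
pyr = build_pyramid(img)
assert np.allclose(reconstruct(pyr), img)   # exact without quantization
```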
This manual is a practical guide for the use of our general-purpose Monte Carlo code MCNP. The first chapter is a primer for the novice user. The second chapter describes the mathematics, data, physics, and Monte Carlo simulation found in MCNP. This discussion is not meant to be exhaustive; details of the particular techniques and of the Monte Carlo method itself will have to be found elsewhere. The third chapter shows the user how to prepare input for the code. The fourth chapter contains several examples, and the fifth chapter explains the output. The appendices show how to use MCNP on various computer systems and also give details about some of the code internals.
Saturation is a core guiding principle to determine sample sizes in qualitative research, yet little methodological research exists on parameters that influence saturation. Our study compared two approaches to assessing saturation: code saturation and meaning saturation. We examined sample sizes needed to reach saturation in each approach, what saturation meant, and how to assess saturation. Examining 25 in-depth interviews, we found that code saturation was reached at nine interviews, whereby the range of thematic issues was identified. However, 16 to 24 interviews were needed to reach meaning saturation where we developed a richly textured understanding of issues. Thus, code saturation may indicate when researchers have "heard it all," but meaning saturation is needed to "understand it all." We used our results to develop parameters that influence saturation, which may be used to estimate sample sizes for qualitative research proposals or to document in publications the grounds on which saturation was achieved.
A unified approach to the coder control of video coding standards such as MPEG-2, H.263, MPEG-4, and the draft video coding standard H.264/AVC (advanced video coding) is presented. The performance of the various standards is compared by means of PSNR and subjective testing results. The results indicate that H.264/AVC compliant encoders typically achieve essentially the same reproduction quality as encoders that are compliant with the previous standards while typically requiring 60% or less of the bit rate.
Chromatin, the physiological template of all eukaryotic genetic information, is subject to a diverse array of posttranslational modifications that largely impinge on histone amino termini, thereby regulating access to the underlying DNA. Distinct histone amino-terminal modifications can generate synergistic or antagonistic interaction affinities for chromatin-associated proteins, which in turn dictate dynamic transitions between transcriptionally active or transcriptionally silent chromatin states. The combinatorial nature of histone amino-terminal modifications thus reveals a "histone code" that considerably extends the information potential of the genetic code. We propose that this epigenetic marking system represents a fundamental regulatory mechanism that has an impact on most, if not all, chromatin-templated processes, with far-reaching consequences for cell fate decisions and both normal and pathological development.
1. Coding for Reliable Digital Transmission and Storage. 2. Introduction to Algebra. 3. Linear Block Codes. 4. Important Linear Block Codes. 5. Cyclic Codes. 6. Binary BCH Codes. 7. Nonbinary BCH Codes, Reed-Solomon Codes, and Decoding Algorithms. 8. Majority-Logic Decodable Codes. 9. Trellises for Linear Block Codes. 10. Reliability-Based Soft-Decision Decoding Algorithms for Linear Block Codes. 11. Convolutional Codes. 12. Trellis-Based Decoding Algorithms for Convolutional Codes. 13. Sequential and Threshold Decoding of Convolutional Codes. 14. Trellis-Based Soft-Decision Algorithms for Linear Block Codes. 15. Concatenated Coding, Code Decomposition and Multistage Decoding. 16. Turbo Coding. 17. Low-Density Parity-Check Codes. 18. Trellis Coded Modulation. 19. Block Coded Modulation. 20. Burst-Error-Correcting Codes. 21. Automatic-Repeat-Request Strategies.
While supervised learning has enabled great progress in many applications, unsupervised learning has not seen such widespread adoption, and remains an important and challenging endeavor for artificial intelligence. In this work, we propose a universal unsupervised learning approach to extract useful representations from high-dimensional data, which we call Contrastive Predictive Coding. The key insight of our model is to learn such representations by predicting the future in latent space by using powerful autoregressive models. We use a probabilistic contrastive loss which induces the latent space to capture information that is maximally useful to predict future samples. It also makes the model tractable by using negative sampling. While most prior work has focused on evaluating representations for a particular modality, we demonstrate that our approach is able to learn useful representations achieving strong performance on four distinct domains: speech, images, text and reinforcement learning in 3D environments.
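A minimal sketch of the probabilistic contrastive (InfoNCE-style) loss at the heart of the method, in plain NumPy. The dimensions, the bilinear score z^T W c, and the single-positive/N-negatives layout are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

rng = np.random.default_rng(0)
d_z, d_c, n_neg = 16, 32, 7

W = rng.normal(size=(d_z, d_c))        # learned bilinear map (fixed here)
c = rng.normal(size=d_c)               # context summary at time t
z_pos = rng.normal(size=d_z)           # true future latent z_{t+k}
z_neg = rng.normal(size=(n_neg, d_z))  # negatives drawn from other samples

def info_nce(c, z_pos, z_neg, W):
    """-log softmax score of the positive against positive + negatives;
    minimizing this drives the latents to carry predictive information."""
    scores = np.concatenate(([z_pos @ W @ c], z_neg @ W @ c))
    scores -= scores.max()             # numerical stability
    return -np.log(np.exp(scores[0]) / np.exp(scores).sum())

print(info_nce(c, z_pos, z_neg, W))
```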
Includes bibliographical references and an index.
A coding technique is described which improves the error performance of synchronous data links without sacrificing data rate or requiring more bandwidth. This is achieved by channel coding with expanded sets of multilevel/phase signals in a manner which increases free Euclidean distance. Soft maximum-likelihood (ML) decoding using the Viterbi algorithm is assumed. Following a discussion of channel capacity, simple hand-designed trellis codes are presented for 8 phase-shift keying (PSK) and 16 quadrature amplitude-shift keying (QASK) modulation. These simple codes achieve coding gains on the order of 3-4 dB. It is then shown that the codes can be interpreted as binary convolutional codes with a mapping of coded bits into channel signals, which we call "mapping by set partitioning." Based on a new distance measure between binary code sequences which efficiently lower-bounds the Euclidean distance between the corresponding channel signal sequences, a search procedure for more powerful codes is developed. Codes with coding gains up to 6 dB are obtained for a variety of multilevel/phase modulation schemes. Simulation results are presented and an example of carrier-phase tracking is discussed.
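A minimal sketch of the set-partitioning idea for 8-PSK, computing the minimum intra-subset Euclidean distance at each partition level; the unit-energy constellation is the usual normalization and is assumed here.

```python
import numpy as np

pts = np.exp(2j * np.pi * np.arange(8) / 8)   # unit-energy 8-PSK

def min_dist(subset):
    """Smallest Euclidean distance between distinct points of a subset."""
    return min(abs(a - b) for i, a in enumerate(subset)
                          for b in subset[i + 1:])

# Each partition step splits by one more address bit, doubling the
# angular spacing and growing the intra-subset distance.
level0 = [list(pts)]                           # full 8-PSK
level1 = [list(pts[0::2]), list(pts[1::2])]    # two QPSK subsets
level2 = [list(pts[i::4]) for i in range(4)]   # four antipodal pairs

for lvl, subs in enumerate([level0, level1, level2]):
    print(lvl, round(min(min_dist(s) for s in subs), 3))
# -> 0 0.765   1 1.414   2 2.0
```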