3R27. Theory of Composites. Cambridge Monographs on Applied and Computational Mathematics. - GW Milton (Dept of Math, Univ of Utah, Salt Lake City UT). Cambridge UP, Cambridge, UK. 2002. 719 pp. ISBN 0-521-78125-6. $80.00. AT Sawicki (Inst of Hydro-Eng, Koscierska 7, Gdansk-Oliwa, 80-953, Poland). This is a book about the mathematical world of composites, in which the electrical, thermal, magnetic, thermoelectric, mechanical, piezoelectric, poroelastic, and electromagnetic properties of these materials are described in detail. It is unusual to cover such a broad spectrum of difficult problems in a single volume, since most other books on composite materials are restricted to particular aspects of their behavior (for example, mechanical properties) or even to particular geometries (for example, fiber-reinforced composites). For applied mathematicians, the theory of composites is the study of partial differential equations with rapidly oscillating coefficients. These equations have a similar mathematical structure for different physical phenomena, such as those mentioned above, which enables a unified treatment of various problems. Such an approach obviously restricts the applicability of the theories presented to certain classes of materials and properties, but the author is aware of this. For example, important problems dealing with the plasticity and strength of composites are not analyzed, but references to other sources of information are provided. In this book, the author presents the classical approach, in which the effective (or homogenized) equations describe the properties of composites at the macroscopic level. These properties are related to the microstructure of the composites and the respective properties of the constituents. The book consists of 31 chapters, each containing several sections (from 3 to 14), and an extensive list of references. The first two chapters are of an introductory character. Chapters 3–9 deal with exact results for effective moduli, and Chapter 10 discusses some approximations for estimating these moduli. In Chapter 11, some wave propagation problems are considered. Chapters 12–18 cover the general theory of effective tensors, including important variational principles. Chapters 19 and 20 provide some information on the so-called theory of Y-tensors, which parallels that of effective tensors. Chapters 21–26 are devoted to variational methods for bounding effective tensors. Chapters 27–29 deal with the analytical properties of effective tensors. The last two chapters discuss the set of effective tensors and the bounding of effective moduli as a quasiconvexification problem. In this reviewer's opinion, the book is written mainly for applied mathematicians working in the field of composites. Engineers and other specialists may find it difficult to follow; they would expect more practically oriented theoretical guidelines and experimental validation of the sophisticated theories. But we cannot expect such a definitive treatment from a single volume. The book shows the deep erudition of the author: it is well organized, written with precision, and nicely edited. Particular parts have been discussed with many professionals, including well-known barons of the science of composites such as Zvi Hashin and John Willis, which, together with the author's outstanding achievements in the mathematical analysis of composites, guarantees a high standard.
Theory of Composites deserves a place in every library of applied mathematics and mechanics of materials, as it is an excellent addition to the existing literature on composites.
[This paper is a (self-contained) chapter in a new book, Mathematics and Computation, whose draft is available on my homepage at https://www.math.ias.edu/avi/book]. We survey some concrete interaction areas between computational complexity theory and different fields of mathematics. We hope to demonstrate here that hardly any area of modern mathematics is untouched by the computational connection (which in some cases is completely natural and in others may seem quite surprising). In my view, the breadth, depth, beauty, and novelty of these connections are inspiring, and speak to a great potential for future interactions (which, indeed, are quickly expanding). We aim for variety. We give short, simple descriptions (without proofs or much technical detail) of ideas, motivations, results, and connections; this will hopefully entice the reader to dig deeper. Each vignette focuses only on a single topic within a large mathematical field. We cover the following: $\bullet$ Number Theory: Primality testing $\bullet$ Combinatorial Geometry: Point-line incidences $\bullet$ Operator Theory: The Kadison-Singer problem $\bullet$ Metric Geometry: Distortion of embeddings $\bullet$ Group Theory:
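As a small concrete illustration of the first vignette listed above (Number Theory: primality testing), here is a Python sketch of the standard Miller-Rabin randomized primality test. It is not taken from the survey; the function name, the number of rounds, and the test value are illustrative choices.

```python
import random

def is_probable_prime(n: int, rounds: int = 20) -> bool:
    """Miller-Rabin probabilistic primality test (illustrative helper, not from the survey)."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29):
        if n % p == 0:
            return n == p
    # Write n - 1 = d * 2^s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a witnesses that n is composite
    return True

print(is_probable_prime(2**61 - 1))  # True: a Mersenne prime
```

Randomized tests of this kind run in time polynomial in the number of digits of n, which is the sense in which primality testing turns out to be computationally easy.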
Abstract Computational complexity theory is a subfield of computer science originating in computability theory and the study of algorithms for solving practical mathematical problems. Amongst its aims is classifying problems by their degree of difficulty — i.e., how hard they are to solve computationally. This paper highlights the significance of complexity theory relative to questions traditionally asked by philosophers of mathematics while also attempting to isolate some new ones — e.g., about the notion of feasibility in mathematics, the $\mathbf{P} \neq \mathbf{NP}$ problem and why it has proven hard to resolve, and the role of non-classical modes of computation and proof.
With examples of all 450 functions in action plus tutorial text on the mathematics, this book is the definitive guide to experimenting with Combinatorica, a widely used software package for teaching and research in discrete mathematics. Three interesting classes of exercises are provided (theorem/proof, programming exercises, and experimental explorations), ensuring great flexibility in teaching and learning the material. The Combinatorica user community ranges from students and engineers to researchers in mathematics, computer science, physics, economics, and the humanities. Recipient of the EDUCOM Higher Education Software Award, Combinatorica is included with every copy of the popular computer algebra system Mathematica.
An introduction to computational complexity theory, its connections and interactions with mathematics, and its central role in the natural and social sciences, technology, and philosophy. Mathematics and Computation provides a broad, conceptual overview of computational complexity theory, the mathematical study of efficient computation. With important practical applications to computer science and industry, computational complexity theory has evolved into a highly interdisciplinary field, with strong links to most mathematical areas and to a growing number of scientific endeavors. Avi Wigderson takes a sweeping survey of complexity theory, emphasizing the field's insights and challenges. He explains the ideas and motivations leading to key models, notions, and results. In particular, he looks at algorithms and complexity, computations and proofs, randomness and interaction, quantum and arithmetic computation, and cryptography and learning, all as parts of a cohesive whole with numerous cross-influences. Wigderson illustrates the immense breadth of the field, its beauty and richness, and its diverse and growing interactions with other areas of mathematics. He ends with a comprehensive look at the theory of computation, its methodology and aspirations, and the unique and fundamental ways in which it has shaped and will further shape science, technology, and society. For further reading, an extensive bibliography is provided for all topics covered. Useful for undergraduates in mathematics and computer science as well as researchers and teachers in the field, Mathematics and Computation brings conceptual clarity to a central and dynamic scientific discipline. · Comprehensive coverage of computational complexity theory · High-level, intuitive exposition · Historical accounts of the evolution and motivations of central concepts and models · A resourceful look at the theory's influence on science, technology, and society · Extensive bibliography
Assembly theory (AT) quantifies selection using the assembly equation and identifies complex objects that occur in abundance based on two measurements, assembly index and copy number, where the assembly index is the minimum number of joining operations necessary to construct an object from basic parts, and the copy number is how many instances of the given object(s) are observed. Together these define a quantity, called Assembly, which captures the amount of causation required to produce objects in abundance in an observed sample. This contrasts with the random generation of objects. Herein we describe how AT's focus on selection as the mechanism for generating complexity offers a distinct approach from, and answers different questions than, computational complexity theory, with its focus on minimum descriptions via compressibility. To explore formal differences between the two approaches, we show several simple and explicit mathematical examples demonstrating that the assembly index, itself only one piece of the theoretical framework of AT, is formally not equivalent to other commonly used complexity measures from computer science and information theory, including Shannon entropy and Huffman coding.
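Because the assembly index has such a concrete definition (the minimum number of joining operations needed to build an object from basic parts, reusing anything already built), a tiny brute-force sketch for strings under concatenation may help fix ideas. This is not code from the paper; restricting intermediates to substrings of the target and the example word are illustrative choices, and the search is only practical for very short strings.

```python
from itertools import product

def assembly_index(target: str) -> int:
    """Minimum number of joins needed to build `target` from its characters,
    where any previously built string may be reused (brute force, tiny inputs only)."""
    basics = set(target)                      # single characters are the basic parts
    if target in basics:
        return 0
    # Any intermediate used in a concatenation tree for `target` is a substring of it.
    substrings = {target[i:j] for i in range(len(target)) for j in range(i + 1, len(target) + 1)}

    def reachable(pool: frozenset, depth: int, limit: int) -> bool:
        if target in pool:
            return True
        if depth == limit:
            return False
        for a, b in product(pool | basics, repeat=2):
            joined = a + b
            if joined in substrings and joined not in pool:
                if reachable(pool | {joined}, depth + 1, limit):
                    return True
        return False

    limit = 1                                  # iterative deepening on the join count
    while not reachable(frozenset(), 0, limit):
        limit += 1
    return limit

# "BANANA" needs only 4 joins (e.g. NA, NANA, BA, then BA+NANA), versus 5 naive joins.
print(assembly_index("BANANA"))  # 4
```

Even this toy version shows the contrast the paper draws: what the index counts is the reuse of previously built blocks, not the compressibility of a description.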
The only book devoted exclusively to matrix functions, this research monograph gives a thorough treatment of the theory of matrix functions and numerical methods for computing them. The author's elegant presentation focuses on the equivalent definitions of f(A) via the Jordan canonical form, polynomial interpolation, and the Cauchy integral formula, and features an emphasis on results of practical interest and an extensive collection of problems and solutions. Functions of Matrices: Theory and Computation is more than just a monograph on matrix functions; its wide-ranging content, including an overview of applications, historical references, and miscellaneous results, tricks, and techniques with an f(A) connection, makes it useful as a general reference in numerical linear algebra. Other key features of the book include development of the theory of conditioning and properties of the Fréchet derivative; an emphasis on the Schur decomposition, the block Parlett recurrence, and judicious use of Padé approximants; the inclusion of new, unpublished research results and improved algorithms; a chapter devoted to the f(A)b problem; and a MATLAB toolbox providing implementations of the key algorithms. Audience: This book is for specialists in numerical analysis and applied linear algebra as well as anyone wishing to learn about the theory of matrix functions and state-of-the-art methods for computing them. It can be used for a graduate-level course on functions of matrices and is a suitable reference for an advanced course on applied or numerical linear algebra. It is also particularly well suited for self-study. Contents: List of Figures; List of Tables; Preface; Chapter 1: Theory of Matrix Functions; Chapter 2: Applications; Chapter 3: Conditioning; Chapter 4: Techniques for General Functions; Chapter 5: Matrix Sign Function; Chapter 6: Matrix Square Root; Chapter 7: Matrix pth Root; Chapter 8: The Polar Decomposition; Chapter 9: Schur-Parlett Algorithm; Chapter 10: Matrix Exponential; Chapter 11: Matrix Logarithm; Chapter 12: Matrix Cosine and Sine; Chapter 13: Function of Matrix Times Vector: f(A)b; Chapter 14: Miscellany; Appendix A: Notation; Appendix B: Background: Definitions and Useful Facts; Appendix C: Operation Counts; Appendix D: Matrix Function Toolbox; Appendix E: Solutions to Problems; Bibliography; Index.
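To make the equivalent definitions of f(A) concrete, here is a small NumPy/SciPy sketch (not the book's MATLAB toolbox) comparing the eigendecomposition formula f(A) = V f(D) V^{-1} for a diagonalizable matrix with SciPy's Padé-based expm and its general-purpose funm routine; the 2x2 test matrix is made up.

```python
import numpy as np
from scipy.linalg import expm, funm

# A small diagonalizable test matrix (illustrative data only).
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# Definition via eigendecomposition: f(A) = V diag(f(lambda)) V^{-1}.
w, V = np.linalg.eig(A)
expA_eig = V @ np.diag(np.exp(w)) @ np.linalg.inv(V)

# Library routines: scaling-and-squaring/Pade-based expm and the general funm.
expA_pade = expm(A)
expA_funm = funm(A, np.exp)

print(np.allclose(expA_eig, expA_pade), np.allclose(expA_pade, expA_funm))
```

For defective or nearly defective matrices the eigendecomposition route becomes unreliable, which is one reason the book's emphasis on the Schur decomposition and the Parlett recurrence matters in practice.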
We survey results on the formalization and independence of mathematical statements related to major open problems in computational complexity theory. Our primary focus is on recent findings concerning the (un)provability of complexity bounds within theories of bounded arithmetic. This includes the techniques employed and related open problems, such as the (non)existence of a feasible proof that P = NP.
From the Publisher: What is the most accurate way to sum floating point numbers? What are the advantages of IEEE arithmetic? How accurate is Gaussian elimination and what were the key breakthroughs in the development of error analysis for the method? The answers to these and many related questions are included here. This book gives a thorough, up-to-date treatment of the behavior of numerical algorithms in finite precision arithmetic. It combines algorithmic derivations, perturbation theory, and rounding error analysis. Software practicalities are emphasized throughout, with particular reference to LAPACK and MATLAB. The best available error bounds, some of them new, are presented in a unified format with a minimum of jargon. Because of its central role in revealing problem sensitivity and providing error bounds, perturbation theory is treated in detail. Historical perspective and insight are given, with particular reference to the fundamental work of Wilkinson and Turing, and the many quotations provide further information in an accessible format. The book is unique in that algorithmic developments and motivations are given succinctly and implementation details minimized, so that attention can be concentrated on accuracy and stability results. Here, in one place and in a unified notation, is error analysis for most of the standard algorithms in matrix computations. Not since Wilkinson's Rounding Errors in Algebraic Processes (1963) and The Algebraic Eigenvalue Problem (1965) has any volume treated this subject in such depth. A number of topics are treated that are not usually covered in numerical analysis textbooks, including floating point summation, block LU factorization, condition number estimation, the Sylvester equation, powers of matrices, finite precision behavior of stationary iterative methods, Vandermonde systems, and fast matrix multiplication. Although not designed specifically as a textbook, this volume is a suitable reference for an advanced course, and could be used by instructors at all levels as a supplementary text from which to draw examples, historical perspective, statements of results, and exercises (many of which have never before appeared in textbooks). The book is designed to be a comprehensive reference and its bibliography contains more than 1100 references from the research literature. Audience: Specialists in numerical analysis as well as computational scientists and engineers concerned about the accuracy of their results will benefit from this book. Much of the book can be understood with only a basic grounding in numerical analysis and linear algebra. About the Author: Nicholas J. Higham is a Professor of Applied Mathematics at the University of Manchester, England. He is the author of more than 40 publications and is a member of the editorial boards of the SIAM Journal on Matrix Analysis and Applications and the IMA Journal of Numerical Analysis. His book Handbook of Writing for the Mathematical Sciences was published by SIAM in 1993.
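As a taste of the floating point summation question posed in the opening sentence, the Python sketch below contrasts naive accumulation with Kahan compensated summation; the random test data are invented, and math.fsum is used only as a correctly rounded reference value.

```python
import math
import random

def kahan_sum(xs):
    """Compensated (Kahan) summation: a running correction term recovers
    the low-order bits lost in each addition."""
    total, c = 0.0, 0.0
    for x in xs:
        y = x - c
        t = total + y
        c = (t - total) - y
        total = t
    return total

random.seed(0)
xs = [random.uniform(-1.0, 1.0) for _ in range(10**6)]

naive = 0.0
for x in xs:
    naive += x

reference = math.fsum(xs)  # correctly rounded sum of the same data
print("naive error:", abs(naive - reference))
print("kahan error:", abs(kahan_sum(xs) - reference))
```

Compensated summation carries an error bound that is essentially independent of the number of terms, whereas the bound for naive accumulation grows with the length of the sum.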
In recent years, an abundance of new molecular structures have been elucidated using cryo-electron microscopy (cryo-EM), largely due to advances in hardware technology and data processing techniques. Owing to these exciting developments, cryo-EM was selected by Nature Methods as Method of the Year 2015, and the Nobel Prize in Chemistry 2017 was awarded to three pioneers in the field. The main goal of this article is to introduce the challenging and exciting computational tasks involved in reconstructing 3-D molecular structures by cryo-EM. Determining molecular structures requires a wide range of computational tools in a variety of fields, including signal processing, estimation and detection theory, high-dimensional statistics, convex and non-convex optimization, spectral algorithms, dimensionality reduction, and machine learning. The tools from these fields must be adapted to work under exceptionally challenging conditions, including extreme noise levels, the presence of missing data, and massive datasets as large as several terabytes. In addition, we present two statistical models, multi-reference alignment and multi-target detection, which abstract away much of the intricacy of cryo-EM while retaining some of its essential features. Based on these abstractions, we discuss some recent intriguing results in the mathematical theory of cryo-EM, and delineate relations with group theory, invariant theory, and information theory.
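The multi-reference alignment model mentioned above can be stated in a few lines: each observation is a random circular shift of one unknown signal plus noise. The NumPy sketch below generates such data and applies the naive align-to-a-template-then-average estimator; the signal, noise level, and sample size are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
L, n, sigma = 32, 2000, 0.5            # signal length, sample size, noise level (made up)

x = np.sin(2 * np.pi * np.arange(L) / L) + (np.arange(L) == 5)   # hypothetical signal
obs = np.stack([np.roll(x, rng.integers(L)) + sigma * rng.standard_normal(L)
                for _ in range(n)])    # y_i = shift(x, s_i) + noise

# Naive estimator: align every observation to the first one by maximizing
# circular cross-correlation, undo the shift, then average.
template = obs[0]
aligned = [np.roll(y, -int(np.argmax([np.dot(np.roll(y, -s), template) for s in range(L)])))
           for y in obs]
estimate = np.mean(aligned, axis=0)

# Recovery is only possible up to a global circular shift of the signal.
error = min(np.linalg.norm(estimate - np.roll(x, s)) for s in range(L))
print(f"relative error: {error / np.linalg.norm(x):.3f}")
```

At high noise levels this template alignment breaks down, which is precisely the regime where the invariant-based and spectral methods surveyed in the article become necessary.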
Since ancient times, mathematics has proven unreasonably effective in its description of physical phenomena. As humankind enters a period of advancement where the completion of the much coveted theory of quantum gravity is at hand, there is mounting evidence this ultimate theory of physics will also be a unified theory of mathematics.
The learning of mathematics starts early but remains far from any theoretical considerations: pupils' mathematical knowledge is first rooted in pragmatic evidence or conforms to procedures taught. However, learners develop a knowledge which they can apply in significant problem situations, and which is amenable to falsification and argumentation. They can validate what they claim to be true, though using means that generally do not conform to mathematical standards. Here, I analyze how this situation underlies the epistemological and didactical complexities of teaching mathematical proof. I show that the evolution of the learners' understanding of what counts as proof in mathematics implies an evolution of their knowing of mathematical concepts. The key didactical point is not to persuade learners to accept a new formalism but to have them understand how mathematical proof and statements are tightly related within a common framework; that is, a mathematical theory. I address this aim by modeling the learners' way of knowing in terms of a dynamic, homeostatic system. I discuss the roles of different semiotic systems, of the types of actions the learners perform, and of the controls they implement.
The hope that mathematical methods employed in the investigation of formal logic would lead to purely computational methods for obtaining mathematical theorems goes back to Leibniz and has been revived by Peano around the turn of the century and by Hilbert's school in the 1920s. Hilbert, noting that all of classical mathematics could be formalized within quantification theory, declared that the problem of finding an algorithm for determining whether or not a given formula of quantification theory is valid was the central problem of mathematical logic. And indeed, at one time it seemed as if investigations of this “decision” problem were on the verge of success. However, it was shown by Church and by Turing that such an algorithm cannot exist. This result led to considerable pessimism regarding the possibility of using modern digital computers in deciding significant mathematical questions. However, recently there has been a revival of interest in the whole question. Specifically, it has been realized that while no decision procedure exists for quantification theory there are many proof procedures available—that is, uniform procedures which will ultimately locate a proof for any formula of quantification theory which is valid but which will usually involve seeking “forever” in the case of a formula which is not valid—and that some of these proof procedures could well turn out to be feasible for use with modern computing machinery. Hao Wang [9] and P. C. Gilmore [3] have each produced working programs which employ proof procedures in quantification theory. Gilmore's program employs a form of a basic theorem of mathematical logic due to Herbrand, and Wang's makes use of a formulation of quantification theory related to those studied by Gentzen. However, both programs encounter decisive difficulties with any but the simplest formulas of quantification theory, in connection with methods of doing propositional calculus. Wang's program, because of its use of Gentzen-like methods, involves exponentiation on the total number of truth-functional connectives, whereas Gilmore's program, using normal forms, involves exponentiation on the number of clauses present. Both methods are superior in many cases to truth table methods which involve exponentiation on the total number of variables present, and represent important initial contributions, but both run into difficulty with some fairly simple examples. In the present paper, a uniform proof procedure for quantification theory is given which is feasible for use with some rather complicated formulas and which does not ordinarily lead to exponentiation. The superiority of the present procedure over those previously available is indicated in part by the fact that a formula on which Gilmore's routine for the IBM 704 causes the machine to compute for 21 minutes without obtaining a result was worked successfully by hand computation using the present method in 30 minutes. Cf. §6, below. It should be mentioned that, before it can be hoped to employ proof procedures for quantification theory in obtaining proofs of theorems belonging to “genuine” mathematics, finite axiomatizations, which are “short,” must be obtained for various branches of mathematics. This last question will not be pursued further here; cf., however, Davis and Putnam [2], where one solution to this problem is given for elementary number theory.
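The propositional bottleneck described above is easiest to see in code. The sketch below is not the procedure of the paper (which addresses quantification theory via Herbrand-style instantiation); it is a minimal splitting and unit-propagation satisfiability check in the later DPLL style, with clauses encoded as sets of signed integers, an encoding chosen here purely for illustration.

```python
def sat(clauses, assignment=frozenset()):
    """Return a satisfying set of literals for the CNF `clauses`, or None.
    A clause is a set of nonzero ints; -v denotes the negation of variable v."""
    simplified = []
    for clause in clauses:
        if clause & assignment:
            continue                                  # clause already satisfied
        clause = {lit for lit in clause if -lit not in assignment}
        if not clause:
            return None                               # empty clause: dead end
        simplified.append(clause)
    if not simplified:
        return assignment                             # every clause satisfied
    for clause in simplified:                         # unit propagation
        if len(clause) == 1:
            return sat(simplified, assignment | clause)
    lit = next(iter(simplified[0]))                   # split on a literal
    result = sat(simplified, assignment | {lit})
    if result is not None:
        return result
    return sat(simplified, assignment | {-lit})

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
print(sat([{1, 2}, {-1, 3}, {-2, -3}]))               # e.g. frozenset({1, 3, -2})
```

Splitting is still exponential in the worst case, but on many formulas it avoids the systematic blow-up in the number of variables, connectives, or clauses that truth-table, Gentzen-style, and normal-form methods respectively incur.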
Mathematics can help analyze the arts and inspire new artwork. Mathematics can also help make transformations from one artistic medium to another, considering exceptions and choices, as well as artists' individual and unique contributions. We propose a method based on diagrammatic thinking and quantum formalism. We exploit decompositions of complex forms into a set of simple shapes, discretization of complex images, and Dirac notation, imagining a world of "prototypes" that can be connected to obtain a fine- or coarse-grained approximation of a given visual image. Visual prototypes are exchanged with auditory ones, and the information (position, size) characterizing visual prototypes is connected with the information (onset, duration, loudness, pitch range) characterizing auditory prototypes. The topic is contextualized within a philosophical debate (discreteness and comparison of apparently unrelated objects); it develops through mathematical formalism; and it leads to programming, sparking interdisciplinary thinking and igniting creativity within STEAM.
Constructivists (and intuitionists in general) asked what kind of mental construction is needed to convince ourselves (and others) that some mathematical statement is true. This question has a much more practical (and even cynical) counterpart: a student in a mathematics class wants to know what the teacher will accept as a correct solution of a homework problem. Here the logical structure of the claim is also very important, and we discuss several types of problems and their use in teaching mathematics.
These notes were originally developed as lecture notes for a category theory course. They should be well-suited to anyone who wants to learn category theory from scratch and has a scientific mind. There is no need to know advanced mathematics, nor any of the disciplines where category theory is traditionally applied, such as algebraic geometry or theoretical computer science. The only knowledge that is assumed from the reader is linear algebra. All concepts are explained by giving concrete examples from different, non-specialized areas of mathematics (such as basic group theory, graph theory, and probability). Not every example is helpful for every reader, but hopefully every reader can find at least one helpful example per concept. The reader is encouraged to read all the examples; this way, they may even learn something new about a different field. Particular emphasis is given to the Yoneda lemma and its significance, with intuitive explanations, detailed proofs, and specific examples. Another common theme in these notes is the relationship between categories and directed multigraphs, which is treated in detail. From the applied point of view, this shows why categorical thinking can be useful.
This is a survey paper on applications of mathematics of semirings to numerical analysis and computing. Concepts of universal algorithm and generic program are discussed. Relations between these concepts and mathematics of semirings are examined. A very brief introduction to mathematics of semirings (including idempotent and tropical mathematics) is presented. Concrete applications to optimization problems, idempotent linear algebra and interval analysis are indicated. It is known that some nonlinear problems (and especially optimization problems) become linear over appropriate semirings with idempotent addition (the so-called idempotent superposition principle). This linearity over semirings is convenient for parallel computations.
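A standard instance of this linearity is the (min, +) tropical semiring, over which shortest-path distances arise as ordinary matrix powers. The Python sketch below is illustrative only; the example graph and its weights are made up.

```python
INF = float("inf")

def tropical_matmul(A, B):
    """'Matrix product' over the (min, +) semiring: addition becomes min,
    multiplication becomes +. Addition is idempotent: min(a, a) == a."""
    n = len(A)
    return [[min(A[i][k] + B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Weight matrix of a small directed graph: W[i][j] is the length of the edge
# i -> j, INF if there is no edge, and 0 on the diagonal.
W = [[0,   3,   INF, 7],
     [8,   0,   2,   INF],
     [5,   INF, 0,   1],
     [2,   INF, INF, 0]]

# Tropical matrix powers accumulate path lengths: after squaring W twice,
# entry (i, j) holds the shortest-path distance from i to j.
D = W
for _ in range(2):
    D = tropical_matmul(D, D)
print(D)
```

The Bellman recursion for shortest paths is literally linear algebra over this idempotent semiring, which is the superposition principle mentioned above at work.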
The Monte Carlo Computational Summit was held on the campus of the University of Notre Dame in South Bend, Indiana, USA on 25-26 October 2023. The goals of the summit were to discuss algorithmic and software alterations required for successfully porting respective code bases to exascale-class computing hardware, compare software engineering techniques used by various code teams, and consider the adoption of industry-standard benchmark problems to better facilitate code-to-code performance comparisons. A large portion of the meeting included candid discussions of direct experiences with approaches that have and have not worked. Participants reported that identifying and implementing suitable Monte Carlo algorithms for GPUs continues to be a sticking point. They also reported significant difficulty porting existing algorithms between GPU APIs (specifically from Nvidia CUDA to AMD ROCm). To better compare code-to-code performance, participants decided to design a C5G7-like benchmark problem with a defined figure of merit, with the expectation of adding more benchmarks in the future. Problem specifications and results will eventually be hosted in a public repository and will be open to submissions.
We give a widely self-contained introduction to the mathematical theory of the Anderson model. After defining the Anderson model and determining its almost sure spectrum, we prove localization properties of the model. Here we discuss spectral as well as dynamical localization and provide proofs based on the fractional moments (or Aizenman-Molchanov) method. We also discuss, in less self-contained form, the extension of the fractional moment method to the continuum Anderson model. Finally, we mention major open problems. These notes are based on several lecture series which the author gave at the Kochi School on Random Schrödinger Operators, November 26-28, 2009, the Arizona School of Analysis and Applications, March 15-19, 2010 and the Summer School on Mathematical Physics, Sogang University, July 20-23, 2010.
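As a numerical companion (not contained in the notes), the following NumPy sketch builds the finite-volume one-dimensional Anderson Hamiltonian, a tridiagonal matrix with nearest-neighbour hopping and an i.i.d. random potential, and measures the localization of its eigenvectors through the inverse participation ratio; the lattice size and disorder strength are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
N, W = 400, 3.0                      # lattice size and disorder strength (arbitrary)

# H = nearest-neighbour hopping part + diagonal i.i.d. random potential.
H = np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)
H += np.diag(rng.uniform(-W / 2, W / 2, size=N))

eigvals, eigvecs = np.linalg.eigh(H)

# Inverse participation ratio: about 1/N for spread-out states,
# order 1 for exponentially localized ones.
ipr = np.sum(np.abs(eigvecs) ** 4, axis=0)
print(f"median IPR = {np.median(ipr):.3f}   (1/N = {1 / N:.4f})")
```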
This article was motivated by the discovery of a potential new foundation for mainstream mathematics. The goals are to clarify the relationships between primitives, foundations, and deductive practice; to understand how to determine what is, or isn't, a foundation; and to get clues as to how a foundation can be optimized for effective human use. For this we turn to the history and professional practice of the subject. We have no aspirations to Philosophy. The first section gives a short abstract discussion, focusing on the significance of consistency. The next briefly describes foundations, explicit and implicit, at a few key periods in mathematical history. We see, for example, that at the primitive level human intuitions are essential, but can be problematic. We also see that traditional axiomatic set theories, Zermelo-Fraenkel with Choice (ZFC) in particular, are not quite consistent with mainstream practice. The final section sketches the proposed new foundation and gives the basic argument that it is uniquely qualified to be considered the foundation of mainstream deductive mathematics. The "coherent limit axiom" characterizes the new theory among ZFC-like theories. This axiom plays