Scalp electric potentials (electroencephalograms) and extracranial magnetic fields (magnetoencephalograms) are due to the primary (impressed) current density distribution that arises from neuronal postsynaptic processes. A solution to the inverse problem--the computation of images of electric neuronal activity based on extracranial measurements--would provide important information on the time-course and localization of brain function. In general, there is no unique solution to this problem. In particular, an instantaneous, distributed, discrete, linear solution capable of exact localization of point sources is of great interest, since the principles of linearity and superposition would guarantee its trustworthiness as a functional imaging method, given that brain activity occurs in the form of a finite number of distributed hot spots. Despite all previous efforts, linear solutions, at best, produced images with systematic nonzero localization errors. A solution reported here yields images of standardized current density with zero localization error. The purpose of this paper is to present the technical details of the method, allowing researchers to test, check, reproduce and validate the new method.
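To make the linear-inverse machinery concrete, below is a minimal NumPy sketch of a standardized minimum-norm estimate for fixed-orientation (scalar) sources. It is a simplification under stated assumptions, not the paper's exact formulation: the regularizer is taken as the identity rather than the average-reference operator, three-component dipole moments are not handled, and all names are illustrative.

```python
import numpy as np

def standardized_current_density(L, phi, alpha=1e-2):
    """Standardize a Tikhonov-regularized minimum-norm estimate by the
    diagonal of its resolution matrix (scalar-source simplification).

    L     : (n_sensors, n_sources) lead-field matrix
    phi   : (n_sensors,) measured scalp potentials
    alpha : regularization parameter (identity regularizer assumed)
    """
    n_sensors = L.shape[0]
    K = L @ L.T + alpha * np.eye(n_sensors)  # regularized sensor-space Gram matrix
    T = L.T @ np.linalg.inv(K)               # minimum-norm inverse operator
    j = T @ phi                              # raw current-density estimate
    R = T @ L                                # resolution matrix
    return j / np.sqrt(np.diag(R))           # standardized estimate per source
```

The standardization step is what distinguishes this family of estimators from the plain minimum-norm solution: dividing each source estimate by a measure of its own estimation variance is what yields the zero-localization-error property claimed in the abstract.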
In 4 experiments, students who read expository passages with seductive details (i.e., interesting but irrelevant adjuncts) recalled significantly fewer main ideas and generated significantly fewer problem-solving transfer solutions than those who read passages without seductive details. In Experiments 1, 2, and 3, revising the passage to include either highlighting of the main ideas, a statement of learning objectives, or signaling, respectively, did not reduce the seductive details effect. In Experiment 4, presenting the seductive details at the beginning of the passage exacerbated the seductive details effect, whereas presenting the seductive details at the end of the passage reduced the seductive details effect. The results suggest that seductive details interfere with learning by priming inappropriate schemas around which readers organize the material, rather than by distracting the reader or by disrupting the coherence of the passage.
Twenty adults were asked to read a three-paragraph expository text on differences among insects. Information in the text had been rated for importance and interestingness. Half of the adults read the text with "seductive details" (propositions presenting interesting, but unimportant, information), half without. After reading, the adults recalled the important information (a macroprocessing task), rated the text for overall interestingness, reported the single most interesting piece of information read, and matched pictures of animals on the basis of differences mentioned in text (a microprocessing task). The adults presented with seductive details in text were significantly less adept than their peers at including three main ideas in their recall protocols. Microprocessing performance and interestingness ratings were unaffected by text condition. In a second study, with 36 seventh graders, macroprocessing performance in general was weak. Students presented with seductive details in text were significantly less adept at macroprocessing than students given no such irrelevant information and given redundant signaling of the main ideas. Microprocessing success of seventh graders was also affected by the presence of seductive details. Results are examined in the context of current theories of expository text processing.
A large number of novel encodings for bag of visual words models have been proposed in the past two years to improve on the standard histogram of quantized local features. Examples include locality-constrained linear encoding [23], improved Fisher encoding [17], super vector encoding [27], and kernel codebook encoding [20]. While several authors have reported very good results on the challenging PASCAL VOC classification data by means of these new techniques, differences in the feature computation and learning algorithms, missing details in the description of the methods, and different tuning of the various components make it impossible to compare these methods directly and hard to reproduce the reported results. This paper addresses these shortcomings by carrying out a rigorous evaluation of these new techniques by: (1) fixing the other elements of the pipeline (features, learning, tuning); (2) disclosing all the implementation details; and (3) identifying both those aspects of each method which are particularly important to achieve good performance, and those aspects which are less critical. This allows a consistent comparative analysis of these encoding methods. Several conclusions drawn from our analysis cannot be inferred from the original publications.
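For orientation, all of these encodings improve on the same baseline: the hard-assignment histogram of quantized local features. A minimal sketch of that baseline (illustrative names; real pipelines add spatial pooling and kernel maps) might look like this:

```python
import numpy as np

def bovw_histogram(descriptors, codebook):
    """Hard-assignment bag-of-visual-words encoding.

    descriptors : (n, d) local features from one image (e.g. dense SIFT)
    codebook    : (k, d) visual words, typically learned with k-means
    """
    # squared Euclidean distance from every descriptor to every visual word
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    words = d2.argmin(axis=1)                         # nearest word per descriptor
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / max(hist.sum(), 1.0)                # L1-normalized histogram
```

Each of the newer encodings replaces the hard argmin assignment with a softer or higher-order statistic (locality-constrained codes, Fisher gradients, super vectors, kernel-weighted assignments) while keeping the rest of the pipeline intact.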
The latest generation of Convolutional Neural Networks (CNN) have achieved impressive results in challenging benchmarks on image recognition and object detection, significantly raising the interest of the community in these methods. Nevertheless, it is still unclear how different CNN methods compare with each other and with previous state-of-the-art shallow representations such as the Bag-of-Visual-Words and the Improved Fisher Vector. This paper conducts a rigorous evaluation of these new techniques, exploring different deep architectures and comparing them on a common ground, identifying and disclosing important implementation details. We identify several useful properties of CNN-based representations, including the fact that the dimensionality of the CNN output layer can be reduced significantly without having an adverse effect on performance. We also identify aspects of deep and shallow methods that can be successfully shared. In particular, we show that the data augmentation techniques commonly applied to CNN-based methods can also be applied to shallow methods, and result in an analogous performance boost. Source code and models to reproduce the experiments in the paper are made publicly available.
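As a concrete illustration of transferring augmentation from deep to shallow pipelines, here is a hedged sketch of the familiar crop-and-flip scheme with descriptor averaging; the paper's exact crop geometry and pooling rule may differ, and all names are illustrative.

```python
import numpy as np

def crop_flip_views(image, frac=0.9):
    """Return four corner crops plus a center crop, and their mirror
    images (10 views total). Works for (H, W) or (H, W, C) arrays."""
    h, w = image.shape[:2]
    ch, cw = int(h * frac), int(w * frac)
    crops = [image[:ch, :cw], image[:ch, -cw:],
             image[-ch:, :cw], image[-ch:, -cw:],
             image[(h - ch) // 2:(h + ch) // 2, (w - cw) // 2:(w + cw) // 2]]
    return crops + [np.fliplr(c) for c in crops]

def augmented_descriptor(image, encode):
    """Average a single image descriptor over all augmented views."""
    return np.mean([encode(v) for v in crop_flip_views(image)], axis=0)
```

Here `encode` can be either a CNN feature extractor or a shallow encoder (e.g. a Fisher-vector pipeline); the abstract's point is that averaging over views yields an analogous boost for both.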
One of the major lessons of memory research has been that human memory is fallible, imprecise, and subject to interference. Thus, although observers can remember thousands of images, it is widely assumed that these memories lack detail. Contrary to this assumption, here we show that long-term memory is capable of storing a massive number of objects with details from the image. Participants viewed pictures of 2,500 objects over the course of 5.5 h. Afterward, they were shown pairs of images and indicated which of the two they had seen. The previously viewed item could be paired with either an object from a novel category, an object of the same basic-level category, or the same object in a different state or pose. Performance in each of these conditions was remarkably high (92%, 88%, and 87%, respectively), suggesting that participants successfully maintained detailed representations of thousands of images. These results have implications for cognitive models, in which capacity limitations impose a primary computational constraint (e.g., models of object recognition), and pose a challenge to neural models of memory storage and retrieval, which must be able to account for such a large and detailed storage capacity.
This paper summarizes the development of a convergent weighted-averaging interpolation scheme which can be used to obtain any desired amount of detail in the analysis of a set of randomly spaced data. The scheme is based on the supposition that the two-dimensional distribution of an atmospheric variable can be represented by the summation of an infinite number of independent waves, i.e., a Fourier integral representation. The practical limitations of the scheme are that the data distribution be reasonably uniform and that the data be accurate. However, the effect of inaccuracies can be controlled by stopping the convergence scheme before the data errors are greatly amplified. The scheme has been tested in the analysis of 500-mb height data over the United States, producing a result with details comparable to those obtainable by careful manual analysis. A test analysis of sea level pressure based on the data obtained at only the upper air network stations produced results with essentially the same features as the analysis produced at the National Meteorological Center. Further tests based on a regional sampling of stations reporting airways data demonstrate the applicability of the scheme to mesoscale wavelengths.
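A minimal sketch of the weighted-averaging idea, with successive-correction passes, is given below; the exact weight function, its spectral response, and the stopping criterion in the paper are omitted, and all names are illustrative.

```python
import numpy as np

def weighted_average(xo, yo, values, xt, yt, kappa):
    """Gaussian-weighted average of station values at target points.
    kappa sets the squared length scale of the weighting."""
    r2 = (xt[:, None] - xo[None, :]) ** 2 + (yt[:, None] - yo[None, :]) ** 2
    w = np.exp(-r2 / kappa)
    return (w * values[None, :]).sum(axis=1) / w.sum(axis=1)

def convergent_analysis(xo, yo, zo, xg, yg, kappa, n_passes=3):
    """Successive corrections: each pass adds a weighted average of the
    observation residuals, converging the analysis toward the data."""
    fg = weighted_average(xo, yo, zo, xg, yg, kappa)   # field on the grid
    fo = weighted_average(xo, yo, zo, xo, yo, kappa)   # field at the stations
    for _ in range(n_passes - 1):
        resid = zo - fo                                # unresolved detail
        fg += weighted_average(xo, yo, resid, xg, yg, kappa)
        fo += weighted_average(xo, yo, resid, xo, yo, kappa)
    return fg
```

Stopping after a small number of passes is the control mentioned in the abstract: each pass recovers more small-scale detail, but it also amplifies observational errors.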
A general all-atom force field for atomistic simulation of common organic molecules, inorganic small molecules, and polymers was developed using state-of-the-art ab initio and empirical parametrization techniques. The valence parameters and atomic partial charges were derived by fitting to ab initio data, and the van der Waals (vdW) parameters were derived by conducting MD simulations of molecular liquids and fitting the simulated cohesive energies and equilibrium densities to experimental data. The combined parametrization procedure significantly improves the quality of a general force field. Validation studies based on a large number of isolated molecules, molecular liquids, and molecular crystals, representing 28 molecular classes, show that the present force field enables accurate and simultaneous prediction of structural, conformational, vibrational, and thermophysical properties for a broad range of molecules in isolation and in condensed phases. Detailed results of the parametrization and validation for alkane and benzene compounds are presented.
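As orientation only, a generic energy expression of the kind used by such general all-atom force fields is sketched below; the paper's exact functional form, cross-coupling terms, and parameters should be taken from the original, and the 9-6 van der Waals form shown is an assumption of this sketch.

$$
E_{\text{total}} \;=\; E_{\text{valence}} \;+\; E_{\text{cross}} \;+\; \sum_{i<j}\frac{q_i q_j}{r_{ij}} \;+\; \sum_{i<j}\epsilon_{ij}\left[\,2\left(\frac{r_{ij}^{0}}{r_{ij}}\right)^{9} - 3\left(\frac{r_{ij}^{0}}{r_{ij}}\right)^{6}\right]
$$

The division of labor described in the abstract maps onto this expression: the valence and cross terms plus the partial charges $q_i$ are fit to ab initio data, while the vdW parameters $(\epsilon_{ij}, r_{ij}^{0})$ are refined against experimental liquid densities and cohesive energies.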
Image fusion techniques are widely used to integrate a lower spatial resolution multispectral image with a higher spatial resolution panchromatic image, such as Thematic Mapper (TM) multispectral bands and SPOT Panchromatic images. However, the existing techniques either cannot avoid distorting the image spectral properties or involve complicated and time-consuming frequency decomposition and reconstruction processing. A simple spectral-preserve fusion technique, the Smoothing Filter-based Intensity Modulation (SFIM), has thus been developed based on a simplified solar radiation and land surface reflection model. By using a ratio between a higher resolution image and its low-pass filtered (with a smoothing filter) image, spatial details can be modulated to a co-registered lower resolution multispectral image without altering its spectral properties and contrast. The technique can be applied to improve spatial resolution for either colour composites or individual bands. The fidelity to spectral property and the spatial textural quality of SFIM are convincingly demonstrated by an image fusion experiment using TM and SPOT Panchromatic images of south-east Spain. The visual evaluation and statistical analysis compared with HSI and Brovey transform techniques confirmed that SFIM is a superior fusion technique for improving spatial detail of multispectral images with their spectral properties reliably preserved.
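Because the core of SFIM is a single ratio operation, a sketch is short. Assumed below: the multispectral band is already co-registered and resampled to the panchromatic grid, and the smoothing-kernel size roughly matches the resolution ratio of the two sensors; names are illustrative.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def sfim_fuse(ms_band, pan, kernel=7, eps=1e-6):
    """Smoothing Filter-based Intensity Modulation:
    fused = MS * PAN / smooth(PAN).

    The PAN/smooth(PAN) ratio carries only high-frequency spatial detail,
    so the spectral properties of the multispectral band are preserved.
    """
    pan = pan.astype(float)
    pan_low = uniform_filter(pan, size=kernel)  # low-pass (smoothing-filter) image
    return ms_band.astype(float) * pan / (pan_low + eps)
```

Where the panchromatic image equals its smoothed version (no local detail), the ratio is 1 and the multispectral pixel passes through unchanged, which is why contrast and spectral properties survive the fusion.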
More than 50 cytokines signal via the JAK/STAT pathway to orchestrate hematopoiesis, induce inflammation and control the immune response. Cytokines are secreted glycoproteins that act as intercellular messengers, inducing proliferation, differentiation, growth, or apoptosis of their target cells. They act by binding to specific receptors on the surface of target cells and switching on a phosphotyrosine-based intracellular signaling cascade initiated by kinases, then propagated and effected by SH2 domain-containing transcription factors. As cytokine signaling is proliferative and often inflammatory, it is tightly regulated in terms of both amplitude and duration. Here we review molecular details of the cytokine-induced signaling cascade and describe the architectures of the proteins involved, including the receptors, kinases, and transcription factors that initiate and propagate signaling and the regulatory proteins that control it.
The initial route of metastases in most patients with melanoma is via the lymphatics to the regional nodes. However, routine lymphadenectomy for patients with clinical stage I melanoma remains controversial because most of these patients do not have nodal metastases, are unlikely to benefit from the operation, and may suffer troublesome postoperative edema of the limbs. A new procedure was developed using vital dyes that permits intraoperative identification of the sentinel lymph node, the lymph node nearest the site of the primary melanoma, on the direct drainage pathway. The most likely site of early metastases, the sentinel node can be removed for immediate intraoperative study to identify clinically occult melanoma cells. We successfully identified the sentinel node(s) in 194 of 237 lymphatic basins and detected metastases in 40 specimens (21%) on examination of routine hematoxylin-eosin-stained slides (12%) or exclusively in immunohistochemically stained preparations (9%). Metastases were present in 47 (18%) of 259 sentinel nodes, while nonsentinel nodes were the sole site of metastasis in only two of 3079 nodes from 194 lymphadenectomy specimens that had an identifiable sentinel node, a false-negative rate of less than 1%. Thus, this technique identifies, with a high degree of accuracy, patients with early stage melanoma who have nodal metastases and are likely to benefit from radical lymphadenectomy.
Self-regulated learning (SRL) has become a pivotal construct in contemporary accounts of effective academic learning. I examine several areas of theory and empirical research, which are not prominently cited in educational psychology's research into SRL, that reveal new details of what SRL is and how students develop productive SRL. I interpret findings from these investigations to suggest that nondeliberative, knowledge-based elements are inherent in the processes of SRL, and in learning more generally. Several topics for future research are sketched based on an assumption that learning effectively by oneself will remain a goal of education and can be an especially revealing context in which to research SRL.
Does democracy promote economic development? This paper reviews recent attempts to address this question that exploited within-country variation. It shows that the answer is largely positive, but also depends on the details of democratic reforms. First, the sequence of economic vs political reforms matters: countries liberalizing their economy before extending political rights do better. Second, different forms of democratic government lead to different economic policies, and this might explain why presidential democracy leads to faster growth than parliamentary democracy. Third, it is important to distinguish between expected and actual political reforms. Taking expectations of regime change into account helps identify a stronger growth effect of democracy.
This chapter shows the potential of new literary techniques in cultural history for enriching more traditional social history topics. It also argues that humanitarianism depended in part on the development of a constellation of narrative forms—the realistic novel, the enquiry, and the medical case history—which created a sense of veracity and sympathy through narrative detail. It then asks how details about the suffering bodies of others engender compassion and how that compassion comes to be understood as a moral imperative to undertake ameliorative action. Case histories and autopsies constitute humanitarian narratives. The systematic investigation of a particular patient's demise is paradigmatic of the sorts of narrative structures that make "humanitarianism" possible, even though these narratives are written in the icy language of science. Humanitarian narrative dialectically created its antithesis.
There is significant interest in understanding inflammatory responses within the brain and spinal cord. Inflammatory responses that are centralized within the brain and spinal cord are generally referred to as 'neuroinflammatory'. Aspects of neuroinflammation vary within the context of disease, injury, infection, or stress. The context, course, and duration of these inflammatory responses are all critical aspects in the understanding of these processes and their corresponding physiological, biochemical, and behavioral consequences. Microglia, innate immune cells of the CNS, play key roles in mediating these neuroinflammatory responses. Because the connotation of neuroinflammation is inherently negative and maladaptive, the majority of research focus is on the pathological aspects of neuroinflammation. There are, however, several degrees of neuroinflammatory responses, some of which are positive. In many circumstances including CNS injury, there is a balance of inflammatory and intrinsic repair processes that influences functional recovery. In addition, there are several other examples where communication between the brain and immune system involves neuroinflammatory processes that are beneficial and adaptive. The purpose of this review is to distinguish different variations of neuroinflammation in a context-specific manner and detail both positive and negative aspects of neuroinflammatory processes. In this review, we will use brain and spinal cord injury, stress, aging, and other inflammatory events to illustrate the potential harm and benefits inherent to neuroinflammation. Context, course, and duration of the inflammation are highly important to the interpretation of these events, and we aim to provide insight into this by detailing several commonly studied insults. This article is part of the 60th anniversary supplemental issue.
Graphical displays can reveal problems in a statistical model that might not be apparent from purely numerical summaries. Such visualizations can also be helpful for the reader to evaluate the validity of a model if it is reported in a scholarly publication or report. But, given the onerous costs involved, researchers often avoid preparing information-rich graphics and exploring several statistical approaches or tests available. The ggstatsplot package in the R programming language (R Core Team, 2021) provides a one-line syntax to enrich ggplot2-based visualizations with the results from statistical analysis embedded in the visualization itself. In doing so, the package helps researchers adopt a rigorous, reliable, and robust data exploration and reporting workflow.
We introduce extreme summarization, a new single-document summarization task which does not favor extractive strategies and calls for an abstractive modeling approach. The idea is to create a short, one-sentence news summary answering the question "What is the article about?". We collect a real-world, large-scale dataset for this task by harvesting online articles from the British Broadcasting Corporation (BBC). We propose a novel abstractive model which is conditioned on the article's topics and based entirely on convolutional neural networks. We demonstrate experimentally that this architecture captures long-range dependencies in a document and recognizes pertinent content, outperforming an oracle extractive system and state-of-the-art abstractive approaches when evaluated automatically and by humans.
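For readers who want to inspect the data, a minimal loading sketch follows. It assumes the Hugging Face hub copy of the corpus; the identifier and required flags can vary with library version, so treat this as a starting point rather than the paper's own release artifact.

```python
from datasets import load_dataset

# "EdinburghNLP/xsum" is a commonly used hub identifier for this corpus;
# depending on your datasets version you may need trust_remote_code=True.
xsum = load_dataset("EdinburghNLP/xsum")

example = xsum["train"][0]
print(example["document"][:300])  # full BBC article body
print(example["summary"])         # the one-sentence summary
```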
Contents: Introduction; Principle of the method; Brief background of purine metabolism in ruminants; Limitation of the method; Sample collection; Determination of purine derivatives; Dilution of urine samples; List of published methods; Determination of allantoin by a colorimetric method; Determination of xanthine plus hypoxanthine by enzymatic method; Determination of uric acid by uricase method; Calculations; Daily excretion of purine derivatives; Calculation of microbial N supply; Presentation of results; Use of spot samples; Related Literature.
Though emotion conveys memory benefits, it does not enhance memory equally for all aspects of an experience nor for all types of emotional events. In this review, I outline the behavioral evidence for arousal's focal enhancements of memory and describe the neural processes that may support those focal enhancements. I also present behavioral evidence to suggest that these focal enhancements occur more often for negative experiences than for positive ones. This effect of valence appears to arise because of valence-dependent effects on the neural processes recruited during episodic encoding and retrieval, with negative affect associated with increased engagement of sensory processes and positive affect leading to enhanced recruitment of conceptual processes.