In this paper, we provide an introduction to the structure and function of the brain. The brain is an astonishing living organ inside our heads, weighing about 1.5 kg and consisting of billions of tiny cells. It enables us to sense the world around us (to touch, smell, see and hear), to think, and to respond to the world. The main obstacle that prevents us from creating a machine that can behave like real-world creatures is our limited knowledge of the brain, in both its structure and its function. In this paper, we focus on introducing the brain's anatomical structure and biological function, as well as its surrounding sensory systems. Many of the materials used in this paper are from Wikipedia and several other introductory neuroscience articles, which are properly cited in this article. This is the first of three tutorial articles about the brain (the other two are [26] and [27]). In the two follow-up articles, we will further introduce the low-level building blocks (e.g., neurons, synapses and action potentials) and the high-level cognitive functions (e.g., consciousness, attention, learning and memory) of the brain, respectively.
The brain is a complex organ characterized by heterogeneous patterns of structural connections supporting unparalleled feats of cognition and a wide range of behaviors. New noninvasive imaging techniques now allow these patterns to be carefully and comprehensively mapped in individual humans and animals. Yet, it remains a fundamental challenge to understand how the brain's structural wiring supports cognitive processes, with major implications for the personalized treatment of mental health disorders. Here, we review recent efforts to meet this challenge that draw on intuitions, models, and theories from physics, spanning the domains of statistical mechanics, information theory, and dynamical systems and control. We begin by considering the organizing principles of brain network architecture instantiated in structural wiring under constraints of symmetry, spatial embedding, and energy minimization. We next consider models of brain network function that stipulate how neural activity propagates along these structural connections, producing the long-range interactions and collective dynamics that support a rich repertoire of system functions. Finally, we consider perturbative experiments.
The anatomically layered structure of the human brain gives rise to functions organized in corresponding levels. At all of these functional levels, comparison, feedback and imitation are universal and crucial mechanisms. Language, symbols and tools play key roles in the development of the human brain and of civilization as a whole.
The brain is immensely complex, with diverse components and dynamic interactions building upon one another to orchestrate a wide range of functions and behaviors. Understanding patterns of these complex interactions and how they are coordinated to support collective neural activity and function is critical for parsing human and animal behavior, treating mental illness, and developing artificial intelligence. Rapid experimental advances in imaging, recording, and perturbing neural systems across various species now provide opportunities and challenges to distill underlying principles of brain organization and function. Here, we take stock of recent progress and review methods used in the statistical analysis of brain networks, drawing on the fields of statistical physics, network theory and information theory. Our discussion is organized by scale, starting with models of individual neurons and extending to large-scale networks mapped across brain regions. We then examine the organizing principles and constraints that shape the biological structure and function of neural circuits. Finally, we describe current opportunities aimed at improving these models in light of recent developments.
Brain tumor segmentation is crucial for diagnosis and treatment planning, yet challenges such as class imbalance and limited model generalization continue to hinder progress. This work presents a reproducible evaluation of U-Net segmentation performance on brain tumor MRI using focal loss and basic data augmentation strategies. Experiments were conducted on a publicly available MRI dataset, focusing on focal loss parameter tuning and assessing the impact of three data augmentation techniques: horizontal flip, rotation, and scaling. The U-Net with focal loss achieved a precision of 90%, comparable to state-of-the-art results. By making all code and results publicly available, this study establishes a transparent, reproducible baseline to guide future research on augmentation strategies and loss function design in brain tumor segmentation.
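The focal loss tuned in this study down-weights easy, well-classified voxels so the abundant background class does not swamp the rare tumor class. A minimal binary sketch (the `alpha` and `gamma` defaults below are the commonly cited values from Lin et al.'s original formulation, not the values tuned in this work):

```python
import math

def focal_loss(p, y, alpha=0.25, gamma=2.0):
    """Binary focal loss for a single prediction.

    p     : predicted probability of the positive (tumor) class
    y     : ground-truth label, 0 or 1
    alpha : class-balance weight (illustrative default)
    gamma : focusing parameter; higher values suppress easy examples
    """
    p_t = p if y == 1 else 1.0 - p          # probability of the true class
    a_t = alpha if y == 1 else 1.0 - alpha  # class weight for the true class
    return -a_t * (1.0 - p_t) ** gamma * math.log(p_t)

# An easy, correctly classified voxel contributes almost nothing,
# while a hard one dominates the loss.
easy = focal_loss(0.95, 1)
hard = focal_loss(0.30, 1)
```

With `gamma = 0` and `alpha = 1` this reduces to ordinary cross-entropy; raising `gamma` shrinks the contribution of confident predictions, which is the behavior the parameter tuning in the study explores.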
Background: Information processing in the brain requires large amounts of metabolic energy, the spatial distribution of which is highly heterogeneous, reflecting complex activity patterns in the mammalian brain. Results: Here, it is found, based on empirical data, that despite this heterogeneity, the volume-specific cerebral glucose metabolic rate of many different brain structures scales with brain volume with almost the same exponent, around -0.15. The exception is white matter, the metabolism of which seems to scale with the standard specific exponent -1/4. The scaling exponents for the total oxygen and glucose consumption in the brain in relation to its volume are identical and equal to $0.86\pm 0.03$, which is significantly larger than the exponents 3/4 and 2/3 suggested for whole-body basal metabolism as a function of body mass. Conclusions: These findings show explicitly that in mammals (i) the volume-specific scaling exponents of cerebral energy expenditure in different brain parts are approximately constant (except for brain stem structures), and (ii) the total cerebral metabolic exponent against brain volume is greater than the much-cited Kleiber's 3/4 exponent.
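The scaling exponents above are slopes of log-log regressions of metabolic rate against volume, i.e. the exponent $b$ in $\text{rate} \propto V^{b}$. A minimal sketch of such a fit on synthetic data (not the paper's measurements):

```python
import math

def fit_power_law(volumes, rates):
    """Least-squares slope of log(rate) on log(volume):
    the scaling exponent b in rate ~ volume**b.
    """
    xs = [math.log(v) for v in volumes]
    ys = [math.log(r) for r in rates]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    # Ordinary least-squares slope in log-log coordinates.
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Synthetic data generated with exponent 0.86, the total cerebral
# metabolic exponent reported in the text (illustration only).
vols = [1.0, 10.0, 100.0, 1000.0]
rates = [v ** 0.86 for v in vols]
```

On noiseless synthetic data the fit recovers the generating exponent exactly; on empirical data the slope carries an uncertainty such as the $\pm 0.03$ quoted above.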
We present BrainPainter, a software tool that automatically generates images of highlighted brain structures given a list of numbers corresponding to the output colours of each region. Compared to existing visualisation software (i.e. Freesurfer, SPM, 3D Slicer), BrainPainter has three key advantages: (1) it does not require the input data to be in a specialised format, allowing BrainPainter to be used in combination with any neuroimaging analysis tool; (2) it can visualise both cortical and subcortical structures; and (3) it can be used to generate movies showing dynamic processes, e.g. the propagation of pathology through the brain. We highlight three use cases where BrainPainter was used in existing neuroimaging studies: (1) visualisation of the degree of atrophy through interpolation along a user-defined gradient of colours, (2) visualisation of the progression of pathology in Alzheimer's disease, and (3) visualisation of pathology in subcortical regions in Huntington's disease. Moreover, through the design of BrainPainter we demonstrate the possibility of using a powerful 3D computer graphics engine such as Blender to generate brain visualisations for the neuroscience community.
Interpretability methods for large language models (LLMs) typically derive directions from textual supervision, which can lack external grounding. We propose using human brain activity not as a training signal but as a coordinate system for reading and steering LLM states. Using the SMN4Lang MEG dataset, we construct a word-level brain atlas of phase-locking value (PLV) patterns and extract latent axes via ICA. We validate the axes with independent lexica and NER-based labels (POS/log-frequency used as sanity checks), then train lightweight adapters that map LLM hidden states to these brain axes without fine-tuning the LLM. Steering along the resulting brain-derived directions yields a robust lexical (frequency-linked) axis in a mid TinyLlama layer, surviving perplexity-matched controls, and a brain-vs-text probe comparison shows larger log-frequency shifts (relative to the text probe) with lower perplexity for the brain axis. A function/content axis (axis 13) shows consistent steering in TinyLlama, Qwen2-0.5B, and GPT-2, with PPL-matched text-level corroboration. Layer-4 effects in TinyLlama are large but inconsistent, so we treat them as secondary (Appendix). Axis structure is stable.
Exploring the developing brain is a major issue in understanding what enables children to acquire amazing abilities, and how early disruptions can lead to a wide range of neurodevelopmental disorders. MRI plays a key role here by providing a non-invasive way to link brain and behavioral changes. Several modalities are used in newborns and infants to characterize the properties of the developing brain, from growth, morphology to microstructure and functional specialization. Recent multi-modal studies have sought to couple complementary MRI markers to provide a more integrated view of brain development. In this chapter, we describe successively how these approaches have made it possible to assess the early maturation of brain tissues, to link different aspects of structural development, and to compare structural and functional brain development.
The idea that complex systems have a hierarchical modular organization originates in the early 1960s and has recently attracted fresh support from quantitative studies of large-scale, real-life networks. Here we investigate the hierarchical modular (or "modules-within-modules") decomposition of human brain functional networks, measured using functional magnetic resonance imaging (fMRI) in 18 healthy volunteers under no-task or resting conditions. We used a customized template to extract networks with more than 1800 regional nodes, and we applied a fast algorithm to identify nested modular structure at several hierarchical levels. We used mutual information, 0 < I < 1, to estimate the similarity of the community structure of networks in different subjects, and to identify the individual network that is most representative of the group. The results show that human brain functional networks have a hierarchical modular organization with a fair degree of similarity between subjects, I = 0.63. The five largest modules at the highest level of the hierarchy were the medial occipital, lateral occipital, central, parieto-frontal and fronto-temporal systems; the occipital modules demonstrated less sub-modular organization.
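The between-subject similarity measure 0 < I < 1 can be illustrated with a normalized mutual information between two module partitions. This is a toy stand-in: the exact normalization used in the study may differ from the symmetric form sketched here.

```python
import math
from collections import Counter

def nmi(part_a, part_b):
    """Normalized mutual information between two partitions,
    given as lists assigning each node to a module label.
    Returns a value in [0, 1]: 1 for identical partitions,
    0 for statistically independent ones.
    """
    n = len(part_a)
    pa = Counter(part_a)
    pb = Counter(part_b)
    pab = Counter(zip(part_a, part_b))
    # Empirical mutual information (in nats).
    mi = sum((c / n) * math.log((c * n) / (pa[a] * pb[b]))
             for (a, b), c in pab.items())
    ha = -sum((c / n) * math.log(c / n) for c in pa.values())
    hb = -sum((c / n) * math.log(c / n) for c in pb.values())
    if ha == 0.0 and hb == 0.0:
        return 1.0  # both partitions trivial and identical
    # Symmetric normalization by the mean of the two entropies.
    return 2.0 * mi / (ha + hb)
```

Applied to the module assignments of two subjects' networks, a value near 1 indicates closely matching community structure, which is how a group-representative individual network can be selected.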
Universal embodied intelligence demands robust generalization across heterogeneous embodiments, such as autonomous driving, robotics, and unmanned aerial vehicles (UAVs). However, existing attempts to train a unified embodied-brain model over diverse embodiments frequently run into long-tail data, gradient interference, and catastrophic forgetting, making it notoriously difficult to balance universal generalization with domain-specific proficiency. In this report, we introduce ACE-Brain-0, a generalist foundation brain that unifies spatial reasoning, autonomous driving, and embodied manipulation within a single multimodal large language model (MLLM). Our key insight is that spatial intelligence serves as a universal scaffold across diverse physical embodiments: although vehicles, robots, and UAVs differ drastically in morphology, they share a common need to model 3D mental space, making spatial cognition a natural, domain-agnostic foundation for cross-embodiment transfer. Building on this insight, we propose the Scaffold-Specialize-Reconcile (SSR) paradigm, which first establishes a shared spatial foundation, then cultivates domain-specialized experts, and finally harmonizes them.
The development of foundation models for functional magnetic resonance imaging (fMRI) time series holds significant promise for predicting phenotypes related to disease and cognition. Current models, however, are often trained using a mask-and-reconstruct objective on small brain regions. This focus on low-level information leads to representations that are sensitive to noise and temporal fluctuations, necessitating extensive fine-tuning for downstream tasks. We introduce Brain-Semantoks, a self-supervised framework designed specifically to learn abstract representations of brain dynamics. Its architecture is built on two core innovations: a semantic tokenizer that aggregates noisy regional signals into robust tokens representing functional networks, and a self-distillation objective that enforces representational stability across time. We show that this objective is stabilized through a novel training curriculum, ensuring the model robustly learns meaningful features from low signal-to-noise time series. We demonstrate that the learned representations enable strong performance on a variety of downstream tasks even when using only a linear probe.
We present a multi-scale differentiable brain modeling workflow utilizing BrainPy, a unique differentiable brain simulator that combines accurate brain simulation with powerful gradient-based optimization. We leverage this capability of BrainPy across different brain scales. At the single-neuron level, we implement differentiable neuron models and employ gradient methods to optimize their fit to electrophysiological data. At the network level, we incorporate connectomic data to construct biologically constrained network models. Finally, to replicate animal behavior, we train these models on cognitive tasks using gradient-based learning rules. Experiments demonstrate that our approach achieves superior performance and speed in fitting generalized leaky integrate-and-fire and Hodgkin-Huxley single-neuron models. Additionally, training a biologically informed network of excitatory and inhibitory spiking neurons on working memory tasks successfully replicates observed neural activity and synaptic weight distributions. Overall, our differentiable multi-scale simulation approach offers a promising tool to bridge neuroscience data across electrophysiological, anatomical, and behavioral scales.
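The leaky integrate-and-fire model fitted above can be illustrated with a plain Euler-step simulation. This sketch is not BrainPy's API and is not differentiable; the parameter values are illustrative defaults, not fitted ones:

```python
def lif_trace(input_current, dt=0.1, tau=10.0, v_rest=-65.0,
              v_reset=-65.0, v_th=-50.0, r_m=10.0):
    """Simulate a leaky integrate-and-fire neuron with Euler steps.

    input_current : external current at each time step (illustrative units)
    dt, tau       : step size and membrane time constant (ms)
    v_rest/reset  : resting and post-spike reset potentials (mV)
    v_th          : spike threshold (mV)
    r_m           : membrane resistance

    Returns the membrane-potential trace and the spike-time indices.
    """
    v = v_rest
    vs, spikes = [], []
    for t, i_ext in enumerate(input_current):
        # Leak toward rest plus driven input: tau * dv/dt = -(v - v_rest) + R*I
        v += (-(v - v_rest) + r_m * i_ext) * dt / tau
        if v >= v_th:
            spikes.append(t)  # threshold crossing: emit spike, reset
            v = v_reset
        vs.append(v)
    return vs, spikes

# A sustained suprathreshold current produces regular spiking.
vs, spikes = lif_trace([3.0] * 500)
```

In a differentiable simulator the hard threshold is replaced by a surrogate gradient so parameters like `tau` and `r_m` can be fitted to electrophysiological recordings by gradient descent, which is the capability the workflow above exploits.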
Understanding the human brain remains the Holy Grail of biomedical science, and arguably of all the sciences. Our brains represent the most complex systems in the world (and, some contend, the universe), comprising nearly one hundred billion neurons with septillions of possible connections between them. The structure of these connections engenders an efficient hierarchical system capable of consciousness, as well as of complex thoughts, feelings, and behaviors. Brain connectivity and network analyses have exploded over the last decade owing to their potential to help us understand both normal and abnormal brain function. Functional connectivity (FC) analysis examines functional associations between pairs of time series in specified brain voxels or regions. Brain network analysis is a distinct subfield of connectivity analysis in which associations are quantified for all time series pairs to create an interconnected representation of the brain (a brain network), which allows its systemic properties to be studied. While connectivity analyses underlie network analyses, the subtle distinction between the two research areas has generally been overlooked in the literature.
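Functional connectivity in this sense is commonly estimated as a matrix of pairwise Pearson correlations between regional time series. A minimal sketch (other estimators, such as partial correlation, are also widely used):

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def fc_matrix(region_ts):
    """Functional-connectivity matrix: correlation between every pair
    of regional time series. Entry [i][j] is the FC of regions i and j."""
    k = len(region_ts)
    return [[pearson(region_ts[i], region_ts[j]) for j in range(k)]
            for i in range(k)]

# Toy example: three "regions" with four time points each.
fc = fc_matrix([[1, 2, 3, 4], [2, 4, 6, 8], [4, 3, 2, 1]])
```

A network analysis then treats this full matrix as a weighted graph (regions as nodes, correlations as edge weights), whereas a connectivity analysis may examine only selected region pairs, which is the distinction the text draws.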
Cognitive science and neuroscience have long faced the challenge of disentangling representations of language from representations of conceptual meaning. As the same problem arises in today's language models (LMs), we investigate the relationship between LM-brain alignment and two neural metrics: (1) the level of brain activation during processing of sentences, targeting linguistic processing, and (2) a novel measure of meaning consistency across input modalities, which quantifies how consistently a brain region responds to the same concept across paradigms (sentence, word cloud, image) using an fMRI dataset (Pereira et al., 2018). Our experiments show that both language-only and language-vision models predict the signal better in more meaning-consistent areas of the brain, even when these areas are not strongly sensitive to language processing, suggesting that LMs might internally represent cross-modal conceptual meaning.
Deep learning-based segmentation techniques have shown remarkable performance in brain segmentation, yet their success hinges on the availability of extensive labeled training data. Acquiring such vast datasets, however, poses a significant challenge in many clinical applications. To address this issue, in this work we propose a novel 3D brain segmentation approach using complementary 2D diffusion models. The core idea behind our approach is to first mine 2D features with semantic information from the 2D diffusion models by taking orthogonal views as input, and then fuse them into a 3D contextual feature representation. We use these aggregated features to train multi-layer perceptrons to classify the segmentation labels. Our goal is to achieve reliable segmentation quality without requiring complete labels for each individual subject. Our experiments on brain subcortical structure segmentation, trained with a dataset from only one subject, demonstrate that our approach outperforms state-of-the-art self-supervised learning methods. Further experiments on the minimum annotation requirement using sparse labeling yield promising results even with only nine slices.
Pairwise metrics are often employed to estimate statistical dependencies between brain regions; however, they do not capture higher-order information interactions. It is critical to explore higher-order interactions that go beyond paired brain areas in order to better understand information processing in the human brain. To address this problem, we applied multivariate mutual information, specifically Total Correlation and Dual Total Correlation, to reveal higher-order information in the brain. In this paper, we estimate these metrics using matrix-based Rényi entropy, which offers a direct and easily interpretable approach that is not limited by assumptions about the probability distribution functions of the multivariate time series. We applied these metrics to resting-state fMRI data in order to examine higher-order interactions in the brain. Our results showed that the captured higher-order information interactions increase gradually as the interaction order increases. Furthermore, we observed a gradual increase in the correlation between Total Correlation and Dual Total Correlation as the interaction order increased.
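For n variables, Total Correlation and Dual Total Correlation are TC(X) = Σᵢ H(Xᵢ) − H(X₁,…,Xₙ) and DTC(X) = H(X₁,…,Xₙ) − Σᵢ H(Xᵢ | X₋ᵢ). The sketch below uses a plug-in empirical Shannon estimator on discrete samples to make the definitions concrete; the paper instead uses a matrix-based Rényi estimator, which avoids estimating these distributions directly:

```python
import math
from collections import Counter

def entropy(samples):
    """Shannon entropy (nats) of the empirical distribution of samples."""
    n = len(samples)
    return -sum((c / n) * math.log(c / n)
                for c in Counter(samples).values())

def total_correlation(columns):
    """TC = sum of marginal entropies minus the joint entropy.
    columns: list of variables, each a list of discrete observations."""
    joint = list(zip(*columns))
    return sum(entropy(col) for col in columns) - entropy(joint)

def dual_total_correlation(columns):
    """DTC = joint entropy minus the sum of conditional entropies
    H(X_i | X_{-i}) = H(X) - H(X_{-i})."""
    joint = list(zip(*columns))
    h_joint = entropy(joint)
    cond = sum(h_joint - entropy(list(zip(*(columns[:i] + columns[i + 1:]))))
               for i in range(len(columns)))
    return h_joint - cond
```

Both quantities vanish for independent variables and grow with shared information, which is why their behavior as a function of interaction order is informative about higher-order structure in fMRI signals.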
We developed a tool for visualizing and analyzing large pre-trained vision models by mapping them onto the brain, thus exposing what is hidden inside them. Our innovation arises from a surprising use of brain encoding: predicting brain fMRI measurements in response to images. We report two findings. First, explicit mapping between brain and deep-network features across the dimensions of space, layers, scales, and channels is crucial. This mapping method, FactorTopy, is plug-and-play for any deep network; with it, one can paint a picture of the network onto the brain (literally!). Second, our visualization shows how different training methods matter: they lead to remarkable differences in hierarchical organization and scaling behavior, growing with more data or network capacity. It also provides insight into fine-tuning: how pre-trained models change when adapting to small datasets. We find that brain-like, hierarchically organized networks suffer less from catastrophic forgetting after fine-tuning.
A summary of the experimental and theoretical presentations in the Structure Function Working Group on the proton and photon unpolarized structure functions is given.
Accurate segmentation of brain tumors plays a key role in the diagnosis and treatment of brain tumor diseases, serving as a critical technology for quantifying tumors and extracting their features. With the increasing application of deep learning methods, the computational burden has become progressively heavier. To achieve a lightweight model with good segmentation performance, this study proposes the MBDRes-U-Net model, built on the three-dimensional (3D) U-Net encoder-decoder framework, which integrates multi-branch residual blocks and fused attention into the model. The computational burden of the model is reduced by the branch strategy, which effectively uses the rich local features in multimodal images and enhances the segmentation performance for subtumor regions. Additionally, during encoding, an adaptive weighted expansion convolution layer is introduced into the multi-branch residual block, which enriches the feature expression and improves the segmentation accuracy of the model. Experiments on the Brain Tumor Segmentation (BraTS) Challenge 2018 and 2019 datasets show that the architecture maintains high brain tumor segmentation precision while considerably reducing the computational burden.