Writing by hand involves a sequential component, characterized by the succession of strokes to form letters, and a motor adaptation component for controlling pen movements under spatial constraints. The topography of the brain network supporting handwriting is well established, but the functional properties of its components in the motor control of handwriting as a procedural skill remain poorly understood. To address this question, we recorded the brain activity of adult participants as they wrote in the MRI scanner. We manipulated the sequential component of handwriting, i.e., the succession of strokes, and the motor adaptation component, i.e., the visuo-spatial control of pen movements to manage spatial constraints. Analysis of the brain data revealed the recruitment of two distinct networks, depending on the component being manipulated. The motor adaptation component relies on the cortico-cerebellar loop. The sequential component of handwriting instead appears to be computed in both the cortico-striatal and cortico-cerebellar loops. Finally, our study specifies the functional contributions of several regions of the cortical motor system as a function of the sequential and spatial adaptation requirements of the writing movement.
There is evidence that the sensorimotor system builds fine-grained spatial maps of the limbs based on somatosensory signals. Can a hand-held tool be mapped in space with a comparable spatial resolution? Do spatial maps change following tool use? In order to address these questions, we used a spatial mapping task on healthy participants to measure the accuracy and precision of spatial estimates pertaining to several locations on their arm and on a hand-held tool. To study spatial accuracy, we first fitted linear regressions with real location as the predictor and estimated location as the dependent variable. The slopes, representing estimation accuracy, were compared between arm and tool, and before and after tool use. We further investigated changes induced by tool use in terms of the variable error associated with spatial estimates, representing their precision. We found that the spatial maps for the arm and tool were comparably accurate, suggesting that holding the tool provides enough information to the sensorimotor system to map it in space. While we did not observe changes in the accuracy of spatial maps following tool use, we did observe changes in their spatial precision. Although these effects were absent in a control experiment without tool use, the direct comparison between the two conditions did not yield significant differences, suggesting that the observed precision changes may be driven by non-specific factors. In all, our results suggest that tool users can build up a map of tool space that is comparable to body space.
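The slope-and-variable-error logic described in this abstract can be sketched in a few lines. The following is a minimal illustration on simulated data, not the authors' actual pipeline; the landmark positions, noise level, and trial counts are all hypothetical:

```python
import random
import statistics

random.seed(0)

# Hypothetical landmark positions along the arm (cm from the elbow)
real_locations = [5.0, 15.0, 25.0, 35.0, 45.0]
n_trials = 20

# Simulated participant estimates: unbiased but noisy (an assumption for illustration)
estimates = {loc: [loc + random.gauss(0, 2.0) for _ in range(n_trials)]
             for loc in real_locations}

# Accuracy: slope of the estimated-vs-real regression (1.0 = veridical mapping)
xs = [loc for loc in real_locations for _ in range(n_trials)]
ys = [e for loc in real_locations for e in estimates[loc]]
mx, my = statistics.fmean(xs), statistics.fmean(ys)
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))

# Precision: variable error = SD of estimates around each landmark's mean
variable_error = statistics.fmean(
    statistics.stdev(estimates[loc]) for loc in real_locations)

print(f"slope={slope:.2f}  variable_error={variable_error:.2f} cm")
```

With unbiased simulated estimates the slope lands near 1.0; in the study, slopes below 1 would indicate compression of the spatial map, and a change in variable error without a change in slope corresponds to the precision-without-accuracy effect the authors report.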
The frontal eye field (FEF) and the inferior frontal junction (IFJ) are prefrontal regions that mediate top-down functions, with mounting neuroimaging evidence suggesting that they specialize in controlling spatial versus non-spatial processing, respectively. We hypothesized that their unique patterns of structural connectivity underlie these specialized roles. To infer the localization of FEF and IFJ in standard space, we performed an activation likelihood estimation meta-analysis of functional MRI paradigms that targeted these regions. Using surface-based probabilistic tractography methods at the individual subject level, we tracked streamlines ipsilaterally from the inferred FEF and IFJ activation peaks to the dorsal and ventral visual streams mapped on the native white matter surface of 56 subjects parcellated using the multimodal atlas by Glasser et al. (2016). By contrasting FEF and IFJ connectivity likelihoods, we found predominant structural connectivity from the FEF to regions of the dorsal visual stream compared to the IFJ (particularly in the left hemisphere), and conversely, predominant structural connectivity from the IFJ to regions of the ventral visual stream compared to the FEF bilaterally. Additionally, we analyzed the cortical terminations of the superior longitudinal fasciculus to the FEF and IFJ, implicating its first and third branches as segregated pathways mediating their communication to the posterior parietal cortex. The structural connectivity fingerprints of the FEF and IFJ support the view that the two visual stream architectures extend to the posterior lateral prefrontal cortex and provide converging anatomical evidence of their specialization in spatial versus non-spatial control.
Distractor-response binding (DRB) has been widely studied to understand the interplay between perception and motor processes, with DRB effects referring to performance costs or benefits that arise when previously co-occurring distractors and responses are retrieved together. We hypothesize that musical training and musical perception skills modulate flexibility in reconfiguring auditory perception-action associations; this has not yet been investigated in the context of DRB. Here, we use an auditory DRB paradigm with concomitant EEG recordings to investigate how auditory-motor bindings are established, retrieved, and how they might differ between harmonic versus inharmonic sounds. Using a healthy sample of participants (N = 42) with a wide range of musical training, we also investigated whether these processes are modulated by musical perception skills, assessed using the well-established Micro-PROMS (Profile of Music Perception Skills). Behavioral and EEG results indicated significant DRB effects for both harmonic and inharmonic distractor sound combinations. These effects were modulated by harmonicity: stronger behavioral DRB effects and weaker DRB effects in theta band activity were found when inharmonic as compared to harmonic distractor stimuli were presented. Beamformer analysis localized the theta band effect to the right superior temporal cortex, highlighting the role of this brain area in auditory-motor integration. Further, this study provides evidence that participants with better musical perception skills and higher cumulative practice time show increased flexibility in handling perception-action associations. Together, these findings enhance the understanding of how auditory stimuli interact with motor actions, particularly in relation to individual differences in musical perception skills.
The Default Mode Network (DMN) is a collection of interconnected transmodal cortical areas that was originally found to be engaged when people do not focus on external sensory stimuli but instead attend to their inner thoughts. More recent experiments have shown that it is also recruited during externally oriented tasks that require the processing of concepts, including the increasingly complex meanings of words, sentences, and stories. We contend, however, that current views about the involvement of the DMN in semantic cognition are still too limited because they neglect the fact that the roughly 6,500 languages in the world differ greatly regarding the concepts they encode in lexical items and grammatical constructions. We propose that what may be "default" about the DMN is its tendency to structure thoughts in terms of language-specific concepts that are not only deeply entrenched due to their habitual use, but also culturally quite diverse. In particular, we argue that (1) every language enhances semantic cognition in an idiosyncratic way by fostering a unique inventory of concepts, and the DMN represents them as transmodally integrated meanings; (2) because these language-specific concepts are the most plentiful units of cultural common ground, they constitute the foundation of the shared mental worlds that the DMN hosts during interpersonal verbal communication; (3) these concepts are also activated during inner speech, which often accompanies the private thoughts that the DMN was originally found to subserve; and (4) these concepts may even be enlisted as "default" representations during some putatively nonlinguistic cognitive processes.
Brain activity continuously fluctuates alongside the ongoing mental events that comprise our spontaneous thought. Understanding the functional relevance of this intrinsic activity requires investigation of the covariation between ongoing brain dynamics and spontaneous thought. The large-scale electrophysiological events known as electroencephalographic (EEG) microstates provide an important window into the activity of neuronal networks at the millisecond time scale, and sequences of microstates are thought to reflect cognitively relevant mental operations. Yet, attempts to link momentary thoughts to the dynamics of microstates through more temporally precise experience sampling methods have been limited. We address this gap by asking participants to report on the content and quality of their spontaneous thought across nine experiential dimensions by answering questions adapted from the Amsterdam Resting-State Questionnaire (ARSQ) after eight separate EEG recording periods where participants engaged in eyes-closed rest. We found that individuals' retrospective reports of the flow of their thought content varied substantially from one moment to the next and were coupled with the dynamics of microstates. In particular, microstates C and E demonstrated associations with several prominent features of spontaneous thought, providing links between these electrophysiological events and large-scale functional brain networks thought to be involved in internal cognition and self-generated mental processes. Together, these findings elucidate the functional relevance of microstates by linking their dynamics to distinct dimensions of spontaneous thought and demonstrate the utility of more temporally precise experience sampling approaches to capture thoughts in individuals at rest.
Olfaction is an archaic sense, but its central mechanisms are less well understood than those of other senses. Here we address a possible link between olfactory stimuli and mental spatial representations. Although olfactory percepts are not commonly related to space, perfumiers tend to describe scents in terms of top/head or base notes and arrange them vertically on olfactory pyramids, with the most volatile on top. We tested whether odors evoke in naïve participants a mental vertical representation dependent on odor quality, in the absence of explicit references to elevation. In a speeded choice classification task, 110 participants pressed one of two vertically aligned buttons in response to fruity or gourmand odors. A spatial stimulus-response compatibility (SRC) effect was expected to emerge from compatible versus incompatible mappings of stimuli to responses, due to the hypothesised dimensional overlap. However, the preregistered contrast on means of median correct responses neither confirmed the presence of a vertical SRC effect at the group level, nor provided conclusive evidence for its absence. An analogous exploratory test on means of restricted means supported the presence of the predicted effect, but its Bayesian counterpart found the outcome inconclusive. Exploratory analyses revealed three distinct clusters of participants with regard to the vertical SRC effect for odors, with two (N = 61 and N = 19) showing a significant effect in the expected direction and one (N = 30) showing a significant effect in the opposite direction. These results call for replications that factor in potential sources of individual differences.
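The per-participant contrast on medians of correct responses can be illustrated as follows. This is simulated data, not the study's; the reaction-time distributions, trial counts, and the built-in 25 ms compatibility advantage are hypothetical, chosen only to show how an SRC effect is computed:

```python
import random
import statistics

random.seed(1)

# Simulated correct-trial reaction times (ms) for one participant; a small
# compatibility advantage is built in purely for illustration
compatible_rts   = [random.gauss(520, 60) for _ in range(80)]
incompatible_rts = [random.gauss(545, 60) for _ in range(80)]

# Per-participant SRC effect: median incompatible RT minus median compatible RT;
# a positive value indicates the predicted compatibility advantage
src_effect = (statistics.median(incompatible_rts)
              - statistics.median(compatible_rts))
print(f"SRC effect = {src_effect:.1f} ms")
```

At the group level, these per-participant effects would then be entered into the contrast (or its Bayesian counterpart) described in the abstract; the clustering analysis instead groups participants by the sign and size of this individual effect.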
Body ownership refers to the feeling that the body belongs to oneself. This sense can be manipulated using the mirror box illusion (MBI), which modulates somatosensory processing. However, whether alterations in body ownership influence motor processing remains unclear. Therefore, we aimed to investigate the hemisphere-specific effects of altered body ownership on corticomotor excitability using transcranial magnetic stimulation (TMS). Twenty healthy participants were enrolled. A mirror box was used to modulate the sense of body ownership. Participants observed the mirror reflection of either their right or left hand and tapped both index fingers either synchronously or asynchronously across a 15.2-cm gap for 60 sec. Their actual finger on the reflected side was obscured with cardboard and cloth. Motor-evoked potentials (MEPs) were elicited from the first dorsal interosseous muscle of the hidden hand using TMS over the right or left primary motor cortex (M1). Changes in body ownership were assessed using the vividness of illusion (VI) and proprioceptive drift. MEP amplitudes increased after synchronous tapping compared with those after asynchronous tapping in the right hand (left M1 stimulation), with no such effect in the left hand. The VI score and proprioceptive drift were greater after synchronous tapping but did not differ between hands. Participants perceived their hands as closer to the mirror than their actual position, regardless of side. Mirror box-induced disownership produced a hemisphere-specific modulation of corticomotor excitability, suggesting that altered body ownership influences motor output in a lateralized manner.
Visual information plays a key role in guiding food-related decisions. While previous studies have shown that features such as calories and naturalness are encoded by the brain upon simply seeing the stimuli, it remains unclear how this encoding is shaped by the observer's current state. In this study, we explore the effect of 1) hunger state, 2) task relevance, and 3) current individual preference on the processing of visual food information. Participants (N = 23) underwent two EEG sessions: one after fasting overnight and another after eating normally. During each session, participants performed two separate tasks: one in which the stimuli were task-relevant and one in which attention was directed away. We used multivariate analysis methods to assess the impact of hunger on the representation of food-related features, and to determine the time-course of information related to food flavour, personal appeal, and arousal, across both tasks. Results showed that information about edibility (food vs non-food object), food identity (e.g., hamburger vs pizza), flavour profile, or personal appeal and arousal was not influenced by the hunger manipulation. Flavour was represented regardless of attentive state, whereas personal appeal and arousal information emerged later and were only observed when the food was task-relevant. We found that food appeal and arousal encoding were more closely aligned with behavioural ratings within rather than between sessions, suggesting the nature of the encoding was driven by current state. The study provides insights into how personal preferences and physiological states influence the representation of food information in the brain.
Disfluencies in speech frequently occur before the production of longer and more complex speech content. Listeners are thought to use the distribution of disfluencies in the comprehension of speech to inform their predictions. Here, we investigated whether the presence of disfluencies in speech also affects word processing in naturalistic listening conditions. Participants (n = 36) listened to the spoken recall of the events of a television series while undergoing fMRI. We modelled word processing effort using parametric modulations for word length, frequency, entropy, as well as surprisal and presence/absence of a disfluency. To investigate the effects of disfluencies on word processing, we tested the interaction between disfluency and frequency, and disfluency and surprisal. Words preceded by a disfluency were associated with increased activity in the left and right superior temporal gyrus (STG). Lower word frequency was associated with an increase in activity in the left mid STG. Increased word surprisal elicited a similar distribution of activity, with bilateral superior temporal activation. The effect of surprisal was reduced after a disfluency in a cluster in the left posterior temporal lobe, while the effect of frequency increased following disfluencies in the left superior temporal gyrus and the left inferior frontal cortex. Therefore, the presence of a disfluency affects the response to upcoming input, suggesting that it prepares the listener for higher complexity in the upcoming speech, potentially by allocating increased attentional resources that facilitate integration in context.
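Word surprisal, one of the parametric modulators mentioned above, is simply the negative log probability of a word. The study would derive probabilities from a language model conditioned on context; the toy unigram version below (with made-up counts) only illustrates the quantity itself:

```python
import math

# Toy unigram counts (hypothetical); a real analysis would use a
# context-sensitive language model rather than raw word frequencies
counts = {"the": 500, "cat": 20, "sat": 15, "zither": 1}
total = sum(counts.values())

def surprisal_bits(word):
    """Surprisal in bits: -log2 of the word's probability."""
    return -math.log2(counts[word] / total)

for w in ("the", "cat", "zither"):
    print(f"{w}: {surprisal_bits(w):.2f} bits")
```

Rare (high-surprisal) words carry more information and demand more processing effort, which is why surprisal and frequency regressors pattern similarly in the STG results, and why their interaction with disfluency is the contrast of interest here.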
This case report investigates the cortical and subcortical representations of verbal and sign language in a bilingual patient who uses both spoken and signed modalities, assessed intraoperatively during awake surgery. Although spoken language is regularly mapped during awake craniotomies, other language modalities are rarely reported. We performed direct electrical stimulation (DES) during both spoken and sign language tasks in peritumoral regions of the left temporoparietal lobe. The language abilities of the patient were intraoperatively assessed using verbal object naming and sign recognition. Our findings demonstrate that cortical regions, such as the supramarginal gyrus, play a crucial role for both verbal and sign language. However, the specific sites within this region that elicit DES-positive responses differ between the two language modalities. Similarly, subcortical disconnections highlight the overlap between sign and verbal language, particularly in major language pathways, while also emphasizing the specialized role of motor pathways in sign language processing. Clinically, our results emphasize the importance of tailoring DES protocols for intraoperative mapping to individual patient needs, and theoretically, they enhance our understanding of the roles of the supramarginal gyrus and the corticospinal tract in language comprehension.
Self-monitoring, the ability to monitor and adapt one's behavior to align with social situations, is a fundamental aspect of human social functioning. Recent research has revealed the involvement of the left post-central gyrus and parietal cortex in social cognitive behavior, highlighting impairments in self-monitoring following traumatic brain injury, along with related executive and cognitive functions. To explore self-monitoring, we analyzed self-reported self-monitoring scores from 99 individuals with penetrating brain injuries and their relatives. We then used a voxel-based lesion-symptom mapping technique to identify brain regions associated with differences between self- and other-evaluations. Our findings revealed that veterans' ability to assess their own self-monitoring, when compared with their caregivers' perspectives, is related to cognitive functions and dependent on damage to the left postcentral gyrus, the parietal lobe, and associated subcortical pathways that enable the reliable regulation of actions and goal-directed behaviors in response to the external environment. This study provides causal evidence that self-awareness and cognitive functions are both critical skills for monitoring social behavior.
Interest in how non-cortical regions such as the thalamus contribute to cognitive function has been increasing, as their role has profound implications for understanding brain function in both health and disease. Of particular interest is the mediodorsal thalamus (MD), as it has unique connectivity patterns that suggest it can support frontal cortical operations. A subset of MD cells projects diffusely to broad regions of the frontal cortex. These diffusely projecting cells are thought to modulate the way downstream cortical regions respond to signaling from other cortical areas. It is theorized that through this type of modulation the MD supports cognition, but there is little empirical evidence in humans for this theory. Damage to the MD sometimes leads to pronounced impairments in recognition memory, recall memory, and executive function, but these findings are inconsistent. The present study analyzed 22 chronic thalamic stroke patients to relate their MD damage, their cognitive impairment, and network connectivity differences as measured by resting state functional magnetic resonance imaging. While recognition, recall, working memory, language, and executive function were impaired in the thalamic stroke group, there was no detectable link between cognitive impairment and MD damage in particular. Interestingly, the results showed no indication that MD damage disrupts cortico-cortical communication patterns. This result suggests that if the MD plays a prominent role in cortico-cortical modulation, it does so only during specific tasks or at smaller scales. This study marks a critical first step towards understanding the limitations of MD involvement in the modulation of cortico-cortical communication.
Mental imagery and visual perception can both give rise to vivid visual experiences, yet the extent to which they can functionally influence each other remains an open question. Previous research has shown that imagining a stimulus before viewing a rivalrous display can bias perception towards the imagined content. However, this effect has been demonstrated primarily with simple, low-level stimuli such as oriented gratings. Here, we investigated whether imagery of more complex representations, namely people and buildings, can influence perception, using the binocular rivalry paradigm. Participants in our study imagined either a personally familiar person or a personally familiar building before viewing a rivalrous face-house stimulus. We measured their perceptual dominance and imagery vividness on each trial. Their overall imagery ability was assessed using the Vividness of Visual Imagery Questionnaire (VVIQ). We found that participants were significantly more likely to perceive the imagined stimulus; however, this priming effect was driven by person imagery. Greater vividness of person imagery on each trial significantly increased dominance of the face stimulus, but this effect did not extend to building imagery and the house stimulus. Furthermore, the VVIQ did not predict individual differences in priming magnitude. These results extend previous work by showing that mental imagery can influence perception beyond simple stimuli, but that this functional link is shaped by stimulus-specific features. Our findings highlight the need for future research to examine the conditions under which imagining more complex representations affects seeing.
Letters and Arabic digits are the building blocks of words and numbers. In the visual cortex, these culturally acquired characters are characterized by a differential involvement of the left and right hemispheres. Letters, as language-related symbols, predominantly involve left-hemispheric structures in the occipito-temporal cortex, while digits, as quantity-related symbols, elicit right-hemispheric or bilateral visual recognition processes. However, it remains unclear whether the human brain processes single elements and strings of characters differently depending on their category. This question is important because in the Latin alphabet, letters are usually combined in strings to form words and do not stand alone, while digits have meaning in both cases. Using Fast Periodic Visual Stimulation (frequency-tagging) during EEG recordings, we investigated how adults (N = 18) discriminate letters and digits from each other, as a function of their string length (i.e. 1 vs 5 characters). One category of stimuli (e.g., single letters) was periodically inserted (1/5) in a stream of stimuli of the other category (e.g., single digits) displayed at 10 Hz. Results showed clear discrimination responses at 2 Hz (i.e., 10 Hz/5) with occipito-temporal topography, stronger for strings than for single elements. Digits gave rise to right-lateralized responses regardless of string length. Letters displayed a left-lateralized topography only when strings were presented, while single letters were right-lateralized. A second experiment (N = 20) replicated these novel and unexpected findings. The results are discussed as potentially indicating that expert readers perceive single letters as visual objects of expertise, whereas letter strings engage linguistic (orthographic, phonological) processes that rely on the left hemisphere.
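The frequency-tagging readout, a response at the oddball frequency of 10 Hz/5 = 2 Hz, can be sketched with a single-bin DFT. The sampling rate, signal amplitudes, and noise level below are hypothetical, chosen only to show how a discrimination response is isolated at the tagged frequency:

```python
import math
import random

random.seed(2)

fs = 500.0                 # sampling rate (Hz); hypothetical
base_f, odd_f = 10.0, 2.0  # base stimulation and oddball (10 Hz / 5) frequencies
duration = 20.0            # recording length (s); integer cycles of both frequencies
n = int(fs * duration)

# Simulated EEG: responses at the base and oddball frequencies plus noise
signal = [0.5 * math.sin(2 * math.pi * base_f * t / fs)
          + 0.3 * math.sin(2 * math.pi * odd_f * t / fs)
          + random.gauss(0, 1.0)
          for t in range(n)]

def amplitude_at(freq, x, fs):
    """Single-bin DFT amplitude at `freq` (assumes an integer number of cycles)."""
    re = sum(v * math.cos(2 * math.pi * freq * i / fs) for i, v in enumerate(x))
    im = sum(v * math.sin(2 * math.pi * freq * i / fs) for i, v in enumerate(x))
    return 2 * math.hypot(re, im) / len(x)

print(f"10 Hz (base):    {amplitude_at(base_f, signal, fs):.2f}")
print(f" 2 Hz (oddball): {amplitude_at(odd_f, signal, fs):.2f}")
```

The base response indexes general visual stimulation, while a reliable response at the oddball frequency (and its harmonics) indicates that the brain discriminates the periodically inserted category; comparing this oddball amplitude across electrodes yields the lateralized topographies the study reports.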
Numerous factors interfere with the successful transmission of messages during verbal conversations, leading speakers to use different speech modes, i.e., specific prototypes of speech with unique phonatory and articulatory characteristics. Despite their omnipresence in verbal exchanges, no theoretical model in the speech production literature has provided a mechanistic account of the encoding processes underpinning speech in different modes. The present study thus aims to investigate how speech modes are planned/programmed relative to standard speech, using the high temporal resolution provided by electroencephalography (EEG). Twenty participants uttered pseudowords in three different conditions (standard speech, speaking louder than usual, and faking an English accent in French) during a delayed production task. Event-related potentials (ERPs) for standard speech were contrasted separately with those for the two other speech modes. Results indicate that speaking in modes varying in articulatory and phonatory properties entails increased neural activity in the brain networks that are already involved in standard speech production. In particular, the electrophysiological signatures of loud speech and faking an accent were both associated with differences in ERP responses relative to standard speech in a time period covering the last 200 msec preceding the vocal onset. This observation was consistent across waveform analysis (more extended differences in time and space for faking an accent), topographical dissimilarity analysis, and microstates analysis (more extended for loud speech). The present findings highlight that different speech modes are encoded in the last 200 msec preceding their vocal production, possibly in a mode-specific way, which will need further investigation.
Language neuroscience has historically relied on highly controlled experimental paradigms that differ markedly from the conditions of real-world communication. Although such approaches have yielded important insights, they often fail to capture the integrative processes required for discourse and connected language. Here, we treat discourse as language extending beyond a single simple clause and used for a specific purpose. Recent advances in computational modeling, natural language processing, and neurophysiological measurement now make it possible to study language in more naturalistic, temporally extended, and ecologically valid contexts. In this closing editorial for a special issue of Cortex, we synthesize contributions that collectively argue for a discourse-centered neuroscience: the view that the neural basis of language becomes most fully visible when language is studied in its connected, purposeful form. We organize the issue around four broad themes-cortical topography and continuous integration, structural connectivity, large-scale network dynamics, and clinical mapping of language, thought, and interaction-and show how each reveals aspects of language organization that remain difficult to detect in isolated word- and sentence-level paradigms. We conclude by considering the implications of this work for basic and clinical science and by outlining future directions for the neurocognitive study of discourse.
The absence of visual mental imagery, called aphantasia, occurs congenitally in up to 3% of the general population, but the brain regions responsible for aphantasia remain uncertain. Rare cases of acquired aphantasia caused by brain lesions may lend insight into the neuroanatomy responsible for this condition, and the neural substrate of visual mental imagery itself. We performed a systematic literature review to identify cases of lesion-induced aphantasia and traced the lesion locations onto a common brain atlas. These locations were compared to control lesions causing other neuropsychiatric symptoms (n = 887). First, we tested for intersection between lesion locations and an a priori region of interest termed the fusiform imagery node, active during visual mental imagery tasks. Second, we tested for connectivity between lesion locations and this region of interest, leveraging resting-state functional connectivity from a large cohort of healthy subjects (n = 1000). Finally, we performed a data-driven analysis assessing whether whole-brain lesion connectivity was sensitive and specific for aphantasia. We identified 12 cases of lesion-induced aphantasia, only 5 of which intersected the fusiform imagery node. However, 100% of these lesion locations were functionally connected to the fusiform imagery node. Connectivity to this region was both sensitive (100% overlap) and specific (family-wise error p < .05) for aphantasia in the data-driven whole-brain analysis. Lesions causing acquired aphantasia occur in multiple different brain regions but are all functionally connected to the left fusiform imagery node. This study provides causal support for the importance of this brain region in visual mental imagery.
This review explores cognitive-communication disorders (CCD) in speakers of Berber languages residing in Morocco and abroad. It emphasizes the unique interplay between neuropsychological, linguistic, and cultural factors relevant to Berber languages, which belong to the Afroasiatic language family and include three main varieties in Morocco: Tachelhit, Tarifit, and Central Atlas Tamazight. A narrative review was carried out of studies from MEDLINE, Web of Science, and Scopus, summarizing the demographic and clinical characteristics of the participants. The analysis of the six remaining studies, comprising a combined sample size of over 923 participants, identified the diversity of Berber-speaking populations in Morocco and the Netherlands. These studies used cross-sectional designs and validation protocols for assessment tools. However, challenges included the significant scarcity of published articles, the suitability of standard tools for low-literacy or culturally diverse populations, limited sample sizes, and socio-cultural barriers. These studies provide a foundation for evidence-based practice using validated neuro-cognitive instruments in accordance with international standards. Therefore, clinicians should prioritize Amazigh cultural and linguistic awareness in diglossic contexts, as the growing prominence of these understudied languages underscores the need for fair, standardized cognitive-linguistic assessment tools.
Autotopagnosia is a rare neuropsychological syndrome characterized by disordered localization of body parts. Reports remain scarce and the cognitive mechanisms underlying this syndrome are not fully understood. We describe the case of a 65-year-old Japanese man whose clinical and neuropsychological profiles and neuroimaging findings supported a diagnosis of probable Alzheimer's disease. Notably, he had episodic memory deficits and visuospatial impairment without aphasia, along with Gerstmann syndrome and marked autotopagnosia. Neuropsychological examinations assessed body-part knowledge and processing, combining pictorial representations, spoken instructions, and tactile input with pointing and naming responses; single-case comparisons were benchmarked against age-matched, cognitively healthy adults (n = 20). The patient could name body parts and retain general knowledge and visual recognition of them, yet consistently failed to localize those parts on whole-body stimuli: his own body, another person's body, and a full-body diagram. Language-only probes of inter-part spatial relations were impaired, whereas non-spatial definitions were preserved. This cross-modal profile is not attributable to a single sensory modality or language/motor factor, nor is it readily explained by accounts invoking failures to map visual/somatosensory inputs to body-part location or the primary disruption of an egocentric, multisensory body schema. Instead, it is most consistent with a modality-independent disruption of body-part localization knowledge, i.e., a high-level body-centered spatial map linking body-part labels to their canonical positions on the human body. Our findings suggest that autotopagnosia can be parsimoniously unified through this mechanism. These observations underscore the need for standardized, modality-diverse examinations to clarify the mechanisms underlying this rare clinical phenomenon.