Author Topic: The Brain Dictionary and Inner Speech defined by voxel-based lesion–symptom mapping

Offline Chip (OP)


So I wonder who DOESN'T talk to themselves ~ anybody?

source 1:

source 2:

Where Words are Stored: The Brain’s Meaning Map

This word map shows which parts of the brain responded as a single subject listened to a storyteller. The words tend to cluster by semantic category, as shown by color (for example, pink words are “social”). For a deeper exploration of the map, watch the video at

Listening to speech is so easy for most of us that it is difficult to grasp the neural complexity involved. Previous studies have revealed several brain regions, collectively called the semantic system, that process meaning. Yet such studies have typically focused on specific distinctions, such as abstract versus concrete words, or found discrete areas responsive to groups of related words, such as tools or food. Now a team of neuroscientists in Jack Gallant's laboratory at the University of California, Berkeley, led by Alexander Huth, has generated a comprehensive “atlas” of where different meanings are represented in the human brain.

The researchers played two hours of stories from the Moth Radio Hour, a public broadcast show, to seven participants while recording their brain activity in a functional MRI scanner. They then analyzed the activity in the roughly 50,000 voxels (three-dimensional pixels) that make up the entire brain, creating detailed maps of where different meanings are represented in each individual. This approach contrasts with standard studies, where activity is averaged across many participants to look at similarities across a group while ignoring variations among individuals.
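The per-voxel analysis described above amounts to fitting an encoding model: each voxel's time course is modelled as a weighted combination of semantic features of the words being heard. The following is a toy sketch of that idea using closed-form ridge regression on simulated data; the dimensions, feature design, and regularization here are all invented for illustration and are not the authors' actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: the real study used ~2 hours of fMRI and ~50,000 voxels.
n_timepoints, n_features, n_voxels = 300, 10, 100

# X: semantic features of the words heard at each timepoint
# (e.g. one column per category such as "social" or "numeric").
X = rng.standard_normal((n_timepoints, n_features))

# Simulated voxel responses: each voxel is tuned to a mix of features.
true_weights = rng.standard_normal((n_features, n_voxels))
Y = X @ true_weights + 0.1 * rng.standard_normal((n_timepoints, n_voxels))

# Ridge regression fitted for all voxels at once (closed form):
# W = (X^T X + alpha * I)^-1 X^T Y
alpha = 1.0
W = np.linalg.solve(X.T @ X + alpha * np.eye(n_features), X.T @ Y)

# A voxel's "preferred meaning" is the feature with the largest weight,
# which is what lets each cortical location be labelled on a semantic map.
preferred = W.argmax(axis=0)
print(W.shape, preferred.shape)
```

Fitting every voxel independently, rather than averaging across subjects, is what produces an individual map per participant, as the article describes.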

The maps cover much of the cortex, the outermost brain regions controlling higher cognitive functions, extending beyond areas traditionally thought of as language centers. Every meaning appears in multiple locations, and every location contains a cluster of related meanings. Some areas selectively respond to words related to people, for instance, whereas others respond to places or numbers. “This is way more information, and probably way more generalizable to natural narrative comprehension, than any previous study,” Gallant says.

The maps were remarkably similar from one participant to the next, though not identical. The researchers developed a statistical tool that enabled them to produce a general semantic “atlas,” by finding functional areas common to all participants. This technique, improved and extended to other cognitive functions, could ultimately be useful for mapping brain function so as to minimize the impact of surgery or other invasive treatments.

source 3:

The neural correlates of inner speech defined by voxel-based lesion–symptom mapping (see the Wikipedia article on voxel-based morphometry)

I love this shit as I talk to myself (e.g. "Chip, ya gotta get some more METH now!" ::) )



The neural correlates of inner speech have been investigated previously using functional imaging. However, methodological and other limitations have so far precluded a clear description of the neural anatomy of inner speech and its relation to overt speech. Specifically, studies that examine only inner speech often fail to control for subjects' behaviour in the scanner and therefore cannot determine the relation between inner and overt speech. Functional imaging studies comparing inner and overt speech have not produced replicable results, and some suffer from the same methodological caveats as studies looking only at inner speech. Lesion analysis can avoid the methodological pitfalls associated with using inner and overt speech in functional imaging studies, while at the same time providing important data about the neural correlates essential for the specific function. Despite its advantages, a study of the neural correlates of inner speech using lesion analysis has not been carried out before. In this study, 17 patients with chronic post-stroke aphasia performed inner speech tasks (rhyme and homophone judgements) and overt speech tasks (reading aloud). The relationship between brain structure and language ability was studied using voxel-based lesion–symptom mapping. This showed that inner speech abilities were affected by lesions to the left pars opercularis in the inferior frontal gyrus and to the white matter adjacent to the left supramarginal gyrus, over and above overt speech production and working memory. These results suggest that inner speech cannot be assumed to be simply overt speech without a motor component. They also suggest that using overt speech to understand inner speech, and vice versa, might lead to misleading conclusions, both in imaging studies and in clinical practice.
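The core logic of voxel-based lesion–symptom mapping is simple: at each voxel, compare the behavioural scores of patients whose lesions include that voxel with those of patients whose lesions spare it. A minimal sketch on simulated data follows; the lesion maps, scores, and the "critical" voxel are fabricated for illustration, and the study's actual analysis also covaried out overt speech and working memory, which this sketch omits.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# 17 patients, as in the study; the voxel count here is a toy value.
n_patients, n_voxels = 17, 1000

# Binary lesion maps: True = voxel is damaged in that patient.
lesions = rng.random((n_patients, n_voxels)) < 0.2

# Make one "critical" voxel lesioned in the first six patients, and let
# damage to it lower the simulated inner-speech score.
critical = 123
lesions[:6, critical] = True
scores = rng.normal(70.0, 5.0, n_patients) - 30.0 * lesions[:, critical]

# Per-voxel t-test: do patients whose lesion includes the voxel score
# worse than patients whose lesion spares it?
t_values = np.full(n_voxels, np.nan)
for v in range(n_voxels):
    hit, spared = scores[lesions[:, v]], scores[~lesions[:, v]]
    if len(hit) >= 2 and len(spared) >= 2:
        t_values[v] = stats.ttest_ind(spared, hit, equal_var=False).statistic

# The voxel with the largest t-value is the best candidate for being
# behaviourally critical (a real analysis corrects for multiple comparisons).
print(int(np.nanargmax(t_values)))
```

Because the statistic is computed voxel by voxel across the whole lesion mask, the method localizes function without requiring the patient to perform the task in a scanner, which is the advantage the abstract emphasizes.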


Inner speech, or the ability to speak silently in one's head, has been suggested to play an important role in memory (Baddeley and Hitch, 1974), reading (Corcoran, 1966), language acquisition (Vygotsky, 1962), language comprehension (Blonskii, 1964), thinking (Sokolov, 1972) and even in consciousness and self-reflective activities (Morin and Michaud, 2007).

Currently, two main levels of inner speech may be differentiated from the available literature: The first level is abstract inner speech or ‘the language of the mind’. The first to investigate it using the methodology of experimental psychology were Egger (1881) and Ballet (1886). By using introspection, they tried to understand the relation between inner speech and thought and by doing so they also brought about an outburst of experimental work on inner speech (reviewed in Sokolov, 1972). Later, Vygotsky (1962) argued that young children have no inner speech and therefore they can only think out loud. With the acquisition of language, speech becomes increasingly internalized. Mature inner speech, he argued, is different from overt speech in that it lacks the complete syntactic structures available in overt speech, and its semantics is personal and contextual rather than objective.

The second level is concrete inner speech. It is flexible and can therefore be either phonological or phonetic (Oppenheim and Dell, 2010; see Vigliocco and Hartsuiker, 2002 for a related distinction). Phonological inner speech displays the ‘lexical bias effect’ (the tendency for errors in speech production to produce other words rather than non-words) but not the ‘phonemic similarity effect’ (the tendency to mix similar phonemes in speech production), suggesting that it is phonetically impoverished in comparison to overt speech (Oppenheim and Dell, 2008). Phonetic inner speech, on the other hand, displays both types of biases (Oppenheim and Dell, 2010). Ozdemir et al. (2007) examined the influence of a word's ‘uniqueness point’ on monitoring for the presence of specific phonemes in that word. A word's uniqueness point is the place in its sequence of phonemes at which it deviates from all other words in the language; hence, it makes the word ‘unique’. They reported that the uniqueness point influenced inner speech, suggesting that its phonetic components are similar to those of overt speech. In a study that looked at inner speech monitoring, participants were asked to produce ‘tongue twisters’ and report the number of self-corrections (Postma and Noordanus, 1996). Participants repeated the task in different conditions: inner speech, mouthing, overt speech in the presence of white noise, and overt speech without noise. Interestingly, there was no difference in the number of errors detected by participants in the first three conditions. Together, these two studies also provide evidence for the existence of a phonetically rich inner speech. In this study, we investigated concrete inner speech. Inner speech was defined as the ability to create an internal representation of the auditory word form, and to apply computations or manipulations to this representation.
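The uniqueness point described above is easy to make concrete: it is the first position at which a word's prefix no longer matches the prefix of any other word in the lexicon. A small sketch follows, using letters in place of phonemes; the four-word lexicon is invented for illustration, whereas the actual studies work over phoneme transcriptions of a full lexicon.

```python
def uniqueness_point(word, lexicon):
    """Return the 1-based position at which `word`'s prefix matches no
    other lexicon entry's prefix, or None if it never becomes unique."""
    others = [w for w in lexicon if w != word]
    for i in range(1, len(word) + 1):
        prefix = word[:i]
        # The word is unique once no other entry shares this prefix.
        if not any(o.startswith(prefix) for o in others):
            return i
    return None

# Toy lexicon using letters in place of phonemes.
lexicon = ["candle", "candy", "cane", "dog"]
print(uniqueness_point("candle", lexicon))  # 5: "candl" rules out "candy" and "cane"
print(uniqueness_point("dog", lexicon))     # 1: no other word starts with "d"
```

A listener (or an inner-speech monitor) can in principle identify the word as soon as this point is reached, which is why it is a useful probe of how phonetically detailed inner speech is.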

Another source of information regarding the differences between inner and overt speech comes from brain imaging studies of language in normal subjects. Many of these studies use a covert response (inner speech) as the preferable response mode, apparently assuming that overt and inner speech differ only in the articulatory motor component present in overt speech. However, other studies run contrary to this assumption (Huang et al., 2002; Gracco et al., 2005; Shuster and Lemieux, 2005). Direct comparisons between conditions of overt and inner speech indicate that although they yield overlapping brain activation, the two conditions also produce separate activations in other regions of the brain, reflecting distinct non-motor cognitive processes (Ryding et al., 1996; Barch et al., 1999; Palmer et al., 2001; Huang et al., 2002; Indefrey and Levelt, 2004; Shuster and Lemieux, 2005; Basho et al., 2007).

When studying inner speech using functional imaging, participants are asked to covertly perform tasks such as semantic or phonological fluency, verb generation or stem completion, among others. In these cases, the experimenter cannot reliably determine whether participants perform the task using the desired cognitive processes, or whether they perform the task at all. If the task is performed, it might in some cases be that ‘lower’ levels of inner speech are used, such as the abstract or phonetically impoverished ones, and the researchers cannot distinguish between, or control for, these cases. Additionally, informative and important data regarding performance (type of response, errors and reaction time) cannot be obtained (Barch et al., 1999; Peck et al., 2004). Lastly, some studies do not ensure that participants refrain from producing overt speech when asked to generate only inner speech (reviewed in Indefrey and Levelt, 2004).

In conclusion, studies of inner speech alone produce replicable data about inner speech, but they do not explore its relation to overt speech. Other studies reviewed here made direct comparisons between inner and overt speech but used tasks that do not monitor participants' performance. The purpose of the current study was to further our understanding of the neural mechanisms underlying inner speech and its relation to overt speech, while controlling for participants' performance.

this continues at the source link ...
« Last Edit: June 12, 2019, 01:44:50 PM by Chip »
Over 90% of all computer problems can be traced back to the interface between the keyboard and the chair !



