There is an ongoing transition in cognitive science from internalist to externalist, or so-called 4E, models of human cognition  . We argue that the resulting extension of the very notion of “cognition” requires a reassessment of what is meant by “cognitive deficit”. In particular, we focus here on 4E (embodied, embedded, enactive and extended) models of cognition couched in terms of the predictive processing framework  , to show how we should now construe the notion of cognitive deficit, and illustrate the resulting notion by showing how the “deficits” traditionally associated with autism should properly be understood. One reason for this suggestion is epistemic coherence: psychiatry’s conception of cognitive deficits should stay in touch with theoretical developments in cognitive science. But more importantly, internalist (non-4E) models have mostly failed to give us an understanding of the nature of cognitive psychiatric conditions generally; the current state of psychiatry’s Diagnostic and Statistical Manual (DSM)  is a case in point. Moreover, internalist models, focusing on internal causes of psychiatric conditions (genes, neurotransmitters, brain regions, cognitive modules), have failed to truly help people. 4E models would allow a re-conceptualization of the question that may shed new light on old conceptual and practical problems plaguing psychiatry. Our intention is not to oppose “entirely external deficits” to “entirely internal deficits”, but rather to advocate for a transition from a strictly internalist conception to an externalist conception of deficits, which construes deficits not as properties of organisms but as properties of extended systems that include the organism’s brain and body, but also its environment.
In this paper, we thus question the dominant conception of cognitive deficits and propose an alternative view, the latter being the main development we offer here; we then use the case of autism to illustrate this alternative and discuss some of the issues that emerge from the proposed shift. If cognition is indeed external, or more external than the dominant view once supposed, 4E models of “cognitive deficits” should allow better-focused interventions than the old, dominant conception, that is, interventions focused on the actual source of the deficit. Moreover, by replacing an essentialist view of deficits, which understands them as intrinsic features of individuals, with a non-essentialist view that understands deficits as deficits of a larger system than the individual, 4E models of cognitive deficits should allow for more just practices and interventions. Recent developments in autism research now offer alternative explanations    to the mainstream options    . With these models, and our externalist concept of deficits, we believe it can now be argued that the so-called “social deficits” associated with autism have been mischaracterized   or, at least, grossly oversimplified. We accomplish this in three main steps. First, we present the new conception of cognition, which we refer to as 4E predictive models. Second, based on this new conception of cognition, we propose a redefinition of the notion of “cognitive deficits”. To do so, we discuss the traditional conception of cognitive deficit, that is, the internalist view of deficits, and then sketch the nature of cognitive performance in a 4E predictive framework. This will allow us to properly characterize the nature of the externalist view of deficits. Finally, we review the cognitive deficits associated with autism.
For this, we adopt a recently developed family of predictive explanations of autism, and then analyze the so-called social deficit as a mismatch between autistic cognitive processing, as understood by such models, and the autistic individual’s environment, built by and for neurotypicals. We end this analysis by showing how the resulting testimonial and hermeneutic injustices  feed back into research on autism as well as clinical practice with autistic individuals to entrench the externally construed deficit into the autistic individual’s cognitive system.
2. A New Conception of Cognition: 4E Predictive Models
We maintain that cognition is predictive and extended (i.e., 4E: embodied, embedded, enactive and extended  ; see below), and this entails that cognitive deficits are deficits of extended predictive systems. “Deficit” is a normative notion, meaning that something that should be working in a certain way is not. Viewed from a 4E predictive perspective, that “something” can be the brain but also, and equally, it can be the body, the environment or the complex set of relations that link them. The extension of the concept of deficit allowed by predictive models   within a 4E conception of cognition opens the opportunity (and responsibility) to view our assumptions on questions of cognitive deficits in psychiatry through new lenses. We look especially at the case of autism, which is the object of a very active and constantly developing scientific literature.
2.1. Predictive Processing
Predictive processing (henceforth PP) is a view of the underlying neurological and computational mechanisms responsible for cognitive capacities. PP inverts the traditional bottom-up conception of cognition: instead of viewing the mind as a mostly passive system set up to receive and process information, PP takes it to be an active inferential system that constantly seeks to predict its incoming sensory input    . It does this by adjusting its current model of the environment, developed on the basis of past encounters with the environment, or by acting on its environment. Action (or output), which is traditionally conceived as following perception, is now viewed as simultaneous and intertwined with perception.
By comparing the actual input from the environment with the internally generated predicted input, which may be wildly off the mark in unknown environments, predictive systems are thought to compute an error signal that, on a short time-scale, drives model selection and local action in the environment, as well as, on a longer time-scale, learning (viewed as model adjustment). Mathematically, such systems can be understood in Bayesian terms. Their models are then the prior probability distributions (or densities) and likelihoods (together understood as a generative model) used to inferentially generate a prediction on the basis of the computed error signal, which serves as Bayesian evidence for inference.
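The Bayesian reading of model selection can be made concrete with a toy numerical sketch (our illustration only; the numbers and function names are invented): two candidate generative models assign likelihoods to the observed sensory input, and Bayes’ rule redistributes belief towards the model that better predicts it.

```python
# Minimal sketch of Bayesian model selection in a toy predictive system.
# Two candidate generative models predict a sensory input; the observed
# input serves as evidence for updating beliefs over the models.
# All numbers are invented for illustration.

def posterior(priors, likelihoods):
    """Bayes' rule over a discrete set of generative models."""
    joint = [p * l for p, l in zip(priors, likelihoods)]
    total = sum(joint)
    return [j / total for j in joint]

# Prior beliefs over two models of the environment ("cat in the room" vs. not).
priors = [0.5, 0.5]
# Likelihood of the observed sensory input under each model.
likelihoods = [0.8, 0.2]

post = posterior(priors, likelihoods)
print(post)  # the better-predicting model gains posterior probability
```

On this sketch, the “error signal” of the main text corresponds to the mismatch that makes one likelihood higher than the other; repeated over time, such updates implement model selection and, at slower scales, learning.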
On the most popular accounts of PP, predictive systems are taken to be highly hierarchical, that is, many-layered or deep. Thus viewed, each layer seeks to predict the activity of the layer just below itself, and so on, until the sensory or motor periphery is reached, in which case predictions are about incoming sensory signals or serve to drive muscles and other effectors (e.g., glands). In hierarchically layered predictive systems, lower level priors are concerned with the most proximal regularities in the organism’s environment, that is, those that are at the smallest spatiotemporal scales (e.g., quick local changes in the sensory periphery). Proximal regularities that are perfectly predicted by lower-level priors do not generate any error signal to go up the hierarchy: the error is said to have been explained away. As the signal moves up the hierarchy, higher levels are concerned with more distal regularities, that is, larger and larger spatiotemporal scales (objects moving across the field of vision, day changing into night, vision becoming much noisier as myopia progressively sets in, and so on). Although we presented the levels from the bottom up, recall that the primary flow of information in predictive systems is thought to be top-down: higher levels can thus be viewed as providing context for the lower level predictions. It is as if the visual system were saying: on the basis of the predicted current level of my myopia, the predicted current ambient daylight, and the predicted cat moving across the room, my lower level predicts that this current patch of dark grey will be replaced by a patch of lighter gray. Recall also that such systems should never be viewed as exclusively perceptual, for the predictions simultaneously reach the motor periphery, which may be taken to predict how such and such movement will result in such and such sensory input (active inference).
Predictive systems are driven by prediction error. Not all prediction errors are created equal, however: some reflect inadequate models but some reflect environmental randomness (noise) or volatility (frequent changes in regularities). While prediction errors caused by poor models and environmental volatility are reducible by changing models (model selection) and learning (model adjustment), prediction errors caused by environmental randomness (noise) are not. No amount of model selection and correction can reduce noise. Active inference, however, can reduce all sources of error by making the environment less noisy (although this might sometimes be a practical impossibility). For instance, the fact that visual models cannot properly predict the pattern of light as the sun reflects on the ocean, and thus generate a lot of error, is not a sign that the visual model is incorrect but that light patterns on the ocean are mostly visually random. Similarly, poor prediction performance in environments that are perfectly regular but that change often (as in strobe-lit rooms or mood swings) is not a reflection on the quality of the generative model but on the environment’s volatility. To properly drive model selection, action and learning, it is thus thought that the error signal should be modulated by another prediction, generated by hyper-priors (priors about priors), called the prediction’s precision. Mathematically, a model’s precision is the inverse of its variance (in statistics, variance is the standard deviation squared). If the mean is a model of class performance on a given exam, then variance measures how precisely a student’s grade can be predicted from the mean: the larger the variance, the less precise the prediction.
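Precision weighting can be given a minimal numerical sketch (our illustration; the function names and numbers are invented): the same raw prediction error counts for less when it comes from a source the system expects to be noisy, that is, one with a high predicted variance.

```python
# Minimal sketch of precision-weighted prediction error (illustrative).
# Precision is the inverse of variance; scaling raw error by precision
# discounts errors coming from sources expected to be noisy.

def precision(variance):
    """Precision is defined as the inverse of variance."""
    return 1.0 / variance

def weighted_error(observed, predicted, variance):
    """Raw prediction error scaled by the precision assigned to the signal."""
    return (observed - predicted) * precision(variance)

# Same raw error (2.0), different expected noise levels:
reliable = weighted_error(observed=12.0, predicted=10.0, variance=1.0)
noisy = weighted_error(observed=12.0, predicted=10.0, variance=4.0)  # e.g., sunlight on water
print(reliable, noisy)  # the noisy channel's error is down-weighted
```

The design point is that only the precision-weighted error should drive model revision: a large error from a channel predicted to be noisy (high variance) is, correctly, mostly ignored.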
Finally, it is useful to understand the activity of predictive systems as the use of prediction error to build hierarchical models of the causal structure of the world, and the use of these models to minimize variational free energy, a quantity that upper-bounds surprisal, that is, the improbability of the states the system finds itself in. Organisms whose predictive systems minimize their free energy will, all else being equal, survive longer by maintaining themselves at acceptable values in a range of life-promoting parameters (right temperature, right blood-sugar levels, right distance from walls, right number of predators, and so on).
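The link between probability and surprise can itself be sketched numerically (our illustration): surprisal is the negative log-probability of a state, so states the system’s model deems probable are, by definition, less surprising.

```python
# Toy sketch (our illustration): surprisal as negative log-probability.
# A system that keeps itself in high-probability (expected) states has
# low average surprisal; variational free energy upper-bounds surprisal.
import math

def surprisal(p):
    """Surprisal of a state with model probability p (0 < p <= 1)."""
    return -math.log(p)

print(surprisal(0.9) < surprisal(0.1))  # expected states are less surprising
```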
2.2. 4E Cognition
Although PP is an exciting development both in neuroscience and in cognitive science, many see its most interesting feature as its promise to offer a neurological foundation for 4E cognition, a loose collection of views of cognition that emphasize the deep interaction between the brain, body and environment. Specifically, 4E cognition refers to embodied, embedded, extended and enactive cognition. Proponents of the embodied approach traditionally claim that cognition is not bound within the confines of the skull (i.e., the brain)   . The bodily organization of the whole organism is thought to be implicated in cognition. For instance, some embodiment theorists claim that the very shape and organization of the human body shape some of our most basic concepts and cognitive processes. Theories of embedded cognition claim that cognition depends on a complex interaction with and exploitation of the cognizing organism’s environment (i.e., some set of external resources)  . Enactive cognition, for its part, claims that cognition is not a state of the organism or a product of the brain’s activity: it is something performed by the cognizing organism. Finally, theories of extended cognition claim that cognition, mostly—perhaps not exclusively   —human cognition, spreads beyond the confines of the body  (the “skin/skull boundary”). On this view, external resources are not simply important for, but constitutive of, cognition.
One way to understand 4E cognition is to view brain, body and environment as forming a coupled dynamical system. When two or more systems interact in such a way that they each cause instantaneous changes in the other (changes describable by coupled differential equations), it makes sense to say that they strictly speaking form a single higher-level system, any division between them (e.g., “the skin”) being somewhat arbitrary and pragmatic (e.g., relative to the pragmatics of explanation). Studying 4E cognition means studying those cognitive capacities and situations where brain, body and environment form such a system. Claiming that “cognition is 4E” amounts to the very strong claim that all cognitive capacities do so; conversely, claiming that cognition is not 4E (a view often labeled “internalism”) amounts to the equally strong (but more in line with orthodox cognitive science) claim that they never do. Our only claim here is that some cognitive capacities are 4E, and that many “disabilities” are disruptions of one or more components of the coupled system that sustains a given cognitive capacity. By opting for an externalist conception as opposed to an internalist conception, we are expanding cognition outside the brain, but we are not denying its internal aspects: the goal is in fact to blur the usual inside/outside frontier, if there even is such a frontier, not to deny that some things are internal.
Although we steer clear of the debate’s stronger claims, even the weak claim has deep consequences for cognitive science. Since systems that bear 4E cognitive capacities, and whose dysfunctions are the source of 4E disabilities, are brain-body-environment coupled dynamical systems, models of their system-level properties cannot be formulated in the traditional taxonomies of cognitive science so long as those are typologies of capacities that belong to individual organisms. Perception, on the 4E view, to take but one example, is not a property of organisms (subjects) but of coupled brain-body-environment dynamical systems. At the system level, the main cognitive concept may be the relational brain-body-environment concept of affordances, which are meaningful structures or situations of the environment, whose meaning depends on the organism’s body structures, its set of motor skills and its brain’s proper attunement to them, and which are organized into fields of affordances1 through series of movements (active inference) designed to shape the signal inputs. At lower levels of explanation, models of the brain processes that attune organisms to certain affordances in the field, and models of the physical structures that make situations parts of organisms’ sets of affordances, will of course be neurological and physicochemical or biological, respectively. For instance, explanations of why complex sugars afford sustenance to a given bacterium will refer to chemical properties of the sugars and of the bacterium’s metabolism.
2.3. Predictive 4E Cognition
1Affordances are organism-environment relational properties of environmental situations—i.e., the presence of a door affords exiting, the presence of certain plants afford sustenance, etc. A field of affordances is an ecologically structured set of affordances  .
One reason why many view PP as an exciting foundation for 4E cognition is that it offers a way to see how brain and environment can be dynamically coupled, by supplying models of brain processes that neatly mesh into models of coupled brain-body-environment cognitive capacities. Brains are dynamically coupled to the body that houses them and to the environment in which this body is embedded by having rich non-linear feedback interactions with their partners (indeed, the sheer amount of such interactions is a measure of coupling   ). PP increases the amount of non-linear feedback interaction between coupled systems, over and above simple feedback relations, in two ways. First, PP systems are predictive: brains, according to PP, form probabilistic generative models of their input so that, in a well-trained PP system, most of the feedback is expected (the rest flows up the system as prediction error). Such systems interact smoothly with their partners, and as such are said to skillfully cope with them   . Second, PP generative models are hierarchical: each higher-level layer in the hierarchy implements models of processes at ever larger temporal and spatial scales, and can thus provide predictions that are context-sensitive at many levels of interaction. As such, PP might be the neural architecture that 4E cognition has long been waiting for (see Clark  for an attempt at uniting PP and 4E cognition).
According to 4E cognition, organisms tend to modify their physical and biological environments so as to alter the affordances they offer, either by introducing new affordances in the environment or by altering those that are already there. An instance of the first case is the ubiquitous example of beaver dams, which afford beavers protection against predators and easier access to food sources during the winter months. An instance of the second case is food caching by birds, which affords them food storage. Once the niche is constructed, beavers perceive the pond as affording protection and birds perceive cache locations as affording food sources. Humans are of course champion niche builders (to such an extent that they are currently responsible for the mass extinction of most other species). Like beavers and birds, humans build structures that afford protection against predators and easy access to food sources, and much more, but they also build what are called “cultural niches”, that is, niches made up of information. Just as beavers are born in a built niche that affords them protection and access to food, humans are born in built niches that let them know how to build canoes  , when is the best time locally to plant seeds, how people greet each other in a certain location, and so on.
As Constant et al.  put it, cultural niche construction in the context of PP corresponds to outsourcing socially relevant information in the environment. This information can be defined in terms of cultural affordances. Cultural niche construction is thus a form of disambiguation rather than confirmatory predictive activity  , that is, it allows organisms to access information that guides the choice between hypotheses. Accordingly, the process of communal active inference that is cultural niche construction can be defined as a form of meta-learning: “[the niche] can function as a meta-learning mechanism, by which socially relevant cues in the environment come to guide the agent’s acquisition of adaptive cultural knowledge and skills” (  , p 6). This suggests niche construction as a prime candidate for an extended (even distributed) cognitive complement for a cognitive system that tolerates less environmental ambiguity (as is the case with autism) by providing insight into the environmental elements (i.e., sensory cues) that should trigger learning.
3. A New Conception of “Cognitive Deficits”
3.1. “Cognitive Deficit”
Traditional accounts view cognition as computationally mediated information processing, where innate or learned (or acquired) representations and computational mechanisms are instantiated in the brain. On this account, cognition is very much internalist. On an internalist cognitive science account of language, for instance, syntax is said to be built by an innate computational mechanism (the Language Acquisition Device or “LAD”), on the basis of an innate representation (the Universal Grammar), resulting in acquired mechanisms (local syntax; e.g., English)  . Similarly, semantics is thought to be built by an innate hypothesis formation and confirmation mechanism, which links innate primitive concepts to local arrangements of phonemes (spoken words) or graphemes (written words) (e.g., English words)  . This view puts constraints on how “cognitive deficits” can be construed: they can derive from false or absent representations, or from ineffective or absent computational mechanisms, whose origin can be cognitive or neurological. To take language again, traditional accounts of cognition will regard the absence of proper input at a specific age (the so-called “critical period”) as preventing the construction of the local syntax and lexicon: the LAD will miss the syntactic inputs it needs to set the local parameters on the UG, and the hypothesis formation and confirmation learning mechanisms will miss the phonemic inputs they need to formulate hypotheses regarding the local phonemic form and internal primitive-conceptual structure. Similarly, again to take language, traditional accounts will attribute dyslexia to differences in brain structure (e.g., decreased white matter integrity in bundles that extend from the back of the brain to the parts responsible for speech articulation  ).
Autism spectrum disorder does not escape this tradition. Traditional models situate the deficit in the individual’s internal cognitive system, something that is apparent in, for instance, the search for the genetic causes of autism. This approach, in addition to being localizationist, seeks to reduce the various behavioral phenomena (e.g., social and communication deficits, and restricted and repetitive behaviors) associated with autism to a single biological cause. As a prominent example, consider Baron-Cohen’s  work: he originally  laid the foundations of an approach that considers autism primarily a social deficit. Like linguistic deficits, autism is thus explained as a deficit in a so-called Theory of Mind Module (ToMM), which is postulated to be deficient because, like the UG, ToMM does not receive its proper inputs, in this case from an upstream module called the Shared-Attention Mechanism (SAM). More recently, he added yet another system responsible for autism, The Empathy System (TESS), which is thought to be deficient because it does not receive proper input from The Emotion Detector (TED), thus again following the pattern already established by linguistics in its understanding of deficits  . Without a properly functioning TESS, he believes, many autistic people overly develop their systemizing abilities, and are thus left with an extreme form of the male brain (i.e., a brain characterized by high systemizing and low empathy); autism is thereby framed as a deficit in theory of mind (lack of empathy, absence of mentalizing, etc.)2. Autism, like dyslexia, is therefore understood within an internalist conception of human cognition: autistic people lack a certain component of what constitutes the essence of human cognition (empathy) and have too much of another (systemizing). Another example of an internalist conception of autism is the theory of social motivation  , which postulates a deficit of social motivation in autistic individuals.
These accounts take an internalist stance on deficits (deficits as internal impairments). In the following section, we will propose an externalist stance on cognitive deficits (deficits that can depend on something going wrong outside the individual organism). 4E predictive models of cognition force us to reassess what “cognitive deficit” means by integrating the environment not only in its usual (evo-developmental) sense, but by understanding all cognitive performances as embedded in environments, or fields of affordances, that shape and sustain them. In the case of autism, we will see in later sections that the 4E predictive conception of cognition steers us towards explanations based on differences in perceptual processing rather than on social deficits     , explanations that view social difficulties as consequences of complex relations between autistic individuals and their environment.
3.2. 4E Cognitive Performance
2Many of Baron-Cohen’s proposals have since been severely criticized on methodological grounds. Fine  , for instance, has shown the methodological flaws in the experiments supporting the gendered models of cognition his proposals presuppose (i.e., studies on gender differences in newborns that did not take precautions to hide the babies’ sex (  , p 95)).
By adopting a 4E predictive perspective on cognition, we need to shift our perspective on cognitive performance. Di Paolo and De Jaegher  , while arguing for their interactive brain hypothesis (an enactive theory of cognition), introduce an interesting typology of contributions to cognitive performance. A given process, within the organism or the environment, can 1) be necessary for, 2) have contextual influence on or 3) bear no relation to a given cognitive performance. Although Di Paolo and De Jaegher present the three possibilities as somewhat distinct, either-or affairs, nothing precludes understanding them as regions on a continuum. Moreover, Di Paolo and De Jaegher’s typology of contributions to cognitive performance is limited to neutral or positive contributions: a process can have no impact (no relation) on the performance or it can bring something to it (e.g., contextual information, enabling conditions, etc.). In order to have a more complete view of the relevant possibilities, we propose expanding the continuum in the opposite direction, that is, towards negative contributions. Contributions to cognitive performance would thus sit somewhere on a continuum going from disabling all the way to enabling. The whole continuum of possible contributions to cognitive performance would thus be: 1) disabling, 2) disruptive, 3) no relation, 4) contextual and 5) enabling.
We mentioned above that the organism forms a coupled dynamical system with external processes and resources. On this view, cognitive performance depends on the harmonization of the coupled system that sustains a cognitive capacity through a given period of time. At a broad level of explanation, the coupled system can be said to recruit resources or enter into dynamical relations with processes that tend to have a neutral or positive contribution to its current cognitive task. At a lower, more mechanistic or subpersonal, level of explanation, the coupled system can be said to minimize prediction error through cognitive processes (e.g., the correction of its models guided by prediction error) or through action (e.g., performing active inference in order to conform sensory data to prior models). By reducing prediction error, and thus surprisal, the organism harmonizes itself with its environment in the sense that the organism seldom gets into states that it did not anticipate. Since higher level priors are about the statistical nature of the environment at ever higher spatial and temporal scales, an organism that would have completely minimized prediction error (which is strictly speaking impossible) would flow smoothly in its environment: its cognitive performance would be optimal.
A harmonized coupled system might be described as one in which the various internal and external contributions to cognitive performance are globally positive. Consider the seemingly simple task of multiplying two three-digit numbers. Human memory alone typically does not allow such a mathematical operation (or, to put it in terms of affordances: sets of three-digit numbers do not afford products based on memory skills alone). It requires an additional material substrate (e.g., pen and paper) to serve as external memory, a cultural niche in which culturally designed numerals and long-multiplication algorithms (at least one) are transmitted, and skills in manipulating the material substrate (e.g., arranging the numerals in the specific form required by the algorithm, focusing attention on relevant parts of the paper, inscribing relevant numerals at specific locations, and so on). When each skill is applied at the right time, brain, body and environment form a coupled system able to perform the algorithm that allows multi-digit multiplications: written outputs on the paper bring inputs for the next skill, whose output brings input for the next one, until the written numeral on the paper corresponds to the product of the multiplied multi-digit numbers. This can only occur when there is a certain degree of match between the organism’s skills and the environment in which they are deployed. If the numerals disappear from the substrate a bit too quickly (writing on sand in windy conditions) or the individual has difficulty writing legible numerals, then the ability to multiply multi-digit numbers slowly dissolves: sets of multi-digit numbers less and less afford products.
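The division of labor just described can be sketched in code (our illustration; the names are hypothetical): the “paper” is an explicit external store holding the partial products that no single step of the procedure needs to hold in memory, mirroring how the material substrate carries part of the computation.

```python
# Illustrative sketch of the long-multiplication algorithm with an explicit
# "external memory" (the paper) holding partial products, mirroring the
# brain-body-environment coupling described above. Names are our invention.

def long_multiply(a, b):
    paper = []  # external substrate: rows of partial products written down
    for place, digit in enumerate(reversed(str(b))):
        # One skill application: multiply a by a single digit of b,
        # shift by the digit's place value, and "write it on the paper".
        paper.append(a * int(digit) * 10 ** place)
    # Final skill: sum the written rows to obtain the product.
    return sum(paper)

print(long_multiply(123, 456))  # 56088
```

Removing the `paper` list (the substrate) forces each step to hold every partial product internally, which is exactly the memory burden the main text says unaided human cognition typically cannot carry.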
In terms of the typology introduced earlier, the pen and paper, as well as the muscles that allow pens to form symbols on paper, and the bones and articulations that provide the necessary flexible rigidity, can be said to provide anywhere between a contextual and an enabling contribution to the cognitive performance undertaken by the organism. In a predictive 4E framework, the qualitative value of cognitive performance fluctuates as a function of the efficacy of the coupling between organism and environment: if the environment is not adequate, or muscle paralysis prevents writing numerals, or a cerebrovascular accident prevents remembering the proper form of numerals, then cognitive performance is diminished, and we would then speak of a cognitive deficit: an inability to skillfully multiply multi-digit numbers.
3.3. Redefining “Cognitive Deficits”
According to the PP4E conception of cognitive performance laid out in the previous section, cognitive deficits are no longer dependent on properties (or lack thereof) of the organism alone. We saw that cognitive performance in any given cognitive task is the result of a dynamical process accomplished by a harmonized coupled system encompassing brain, body, world and even cultural resources. A cognitive deficit with respect to a given task is anything in the brain, the body or the environment that disrupts the proper accomplishment of the dynamical process underlying the performance of the task. This is thus not a rejection of the idea that deficits can have a cerebral cause, but recognition that, like cognition itself, deficits are not restricted to the brain, which is all that is needed to move from an internalist to an externalist conception of cognitive deficits.
The cognitive deficit literature is replete with examples of potential neural sources of deficits, and so we will concentrate our efforts here on cognitive deficits where the potential source is the body or the environment. Our strategy will be twofold. To explain and illustrate our externalist conception of cognitive deficits, we first start (in this section) with admittedly simple and obvious examples, where everyone would agree that the source of the cognitive deficit is external and does not specifically depend on the brain. Then, to motivate it and make it relevant to cognitive science, we turn our attention (in the next section) to the case of autism where, we will argue, many of the same external sources are potentially at play, although the source of the condition is viewed by most, and especially by cognitive and neurological models, as internal.
Let us thus start with an admittedly simple example that brings out the normative nature of the deficit/variation distinction. Human capacities vary as a function of age: babies cannot walk or talk whereas children and adults can. This is normal human variability. Similarly, sensitivity to sound frequencies varies as a function of age: younger humans can hear frequencies older humans cannot (a phenomenon known as “presbycusis”). For instance, sounds at about 16 kHz tend to be heard by individuals under thirty years old but not by those over thirty. Let us suppose that society comes to be structured in such a way that virtually all auditory cues (announcements, warnings, alarms, etc.) have a 16 kHz frequency. With regard to public transport announcements or (for a more impactful example) fire alarms, humans over thirty would become disabled because they could no longer function like younger ones: they would have to use alternative cues for public transport (e.g., visual cues), wear specially designed hearing aids, and they might even miss fire alarms entirely. In predictive terms, older humans in such circumstances lose a set of inputs their brains could previously rely on to guide their predictions about upcoming events. Announcements and alarms were regularities that their predictive system had come to integrate as indicative of, say, arrival at a desired destination or imminent danger from fires. In both cases, the ability of learned models (prior beliefs, which we could roughly translate as conditional statements like “If I hear this sound, I am under threat from a fire” and “If I am under threat from a fire, I should get to safety”) to guide perception and action based on given sets of inputs is impaired. Let us reiterate that this example presupposes no impairment in the auditory systems of people over thirty.
Presbycusis is normal human variability, but because it does not significantly impact everyday functioning in current societies, it is not considered a disability. It is instructive to reflect upon why this is so. As they are currently organized, societies are built mostly by people aged around thirty or older: such people would not think of using sounds they cannot hear as signals or alarms, and hence few if any are disabled by presbycusis. As we will see presently, this is in sharp contrast with other conditions of human variability, where one subset of the human population has built structures, set signals and designed cultural affordances that create cognitive disabilities and impairments in another. In terms of cognitive performance, note first that we have just described a case where an enabling or contextual contributor was subtracted, showing that cognitive deficits can appear even without disruptive or disabling contributions.
The sound frequency example can be extended further if we imagine a situation where there is a constant ambient noise at 16 kHz. Let us say that every technological device (devices being ubiquitous in today’s, especially Western, societies) emits such a sound as a byproduct. We can imagine that such a sound would be so invasive that it would interfere with the daily cognitive tasks undertaken by individuals under thirty years old, thus negatively impacting their cognitive performance in many regards. In the present case, the cognitive mechanisms and processes involved in a particular cognitive task, say the computation of long multiplications, are quite functional: memories of multiplication tables are present and the material substrate is suitable to sustain the computation. But here some of the cognitive resources necessary to accomplish the task, such as attention, are recruited to cope with the constant disrupting noise. In this case, an individual’s predictive system is constantly trying to reduce prediction error generated by environmental noise (in both the acoustic and informational senses of the word), thus leaving far fewer resources for the cognitive task at hand (if we suppose that attention is a finite resource in predictive systems). This specific example is a case of disruptive contribution to cognitive performance.
Another possible external source of a cognitive deficit becomes clear through the lens of the predictive processing framework: cases where the type of prediction error minimization known as active inference is somehow prevented by external factors. As we saw, active inference is a process of prediction error minimization through action that falls under two (non-exclusive) functional categories: confirmatory and disambiguatory  . Confirmatory active inference is the confirmation of the organism’s beliefs (priors, and thus predictions) about the world through action. If, as I grasp a cup of coffee, my brain predicts that, given my hand’s current movement, it should be getting sensations from the hand that it is not in fact getting, prediction errors will be generated by the mismatch between prediction and proprioceptive and visual signals. One way to reduce that error is to move my hand to match my prediction, thus confirming it. Disambiguatory active inference, on the other hand (no pun intended), serves the purpose of gathering evidence in support of a hypothesis about the world (i.e., of augmenting the precision of one set of predictions). This can raise confidence in one hypothesis or, if there are competing hypotheses, allow the brain to choose between them. To go back to the long multiplication example, one could be prevented from writing the numerals by an external contraption of some sort (much like the case of muscle paralysis preventing the writing of numerals mentioned in Section 3.2). In such a situation, the individual trying to accomplish the task would not be able to modify the environment (the paper) to match their predictions (e.g., “there should be a 1 over the 6”). We can also imagine that, for some other reason, the individual is physically prevented from looking at the written numerals, which would prevent them from disambiguating their memory of the numeral they had previously written.
In this example, an external cognitive deficit is generated simply by preventing simple forms of active inference (i.e., physical action). The example may seem a bit far-fetched, but it has been argued  that the famous rubber hand illusion is caused by the experimental set-up preventing disambiguatory active inference, and thus that the illusion is much less robust, if not simply absent, when subjects are allowed to move.
Although these are simple examples, far removed from the systemic and oppressive nature of most disabilities, they nonetheless bring out the contextual and potentially external (or at least non-essentialist) character of cognitive deficits. To clarify our externalist proposal, it will be useful to compare it to a common conception of disabilities, the “social model” of disabilities    , according to which disabilities are social phenomena suffered by those with (bodily) impairments: impaired individuals are excluded through social-level phenomena such as oppression and prejudice. This conception follows a paradigm change brought about by the field of disability studies, which aims to analyze disabilities in relation to various social, cultural and political factors. Disability studies also aims to enact a shift from approaches that put responsibility on individuals (rehabilitation approaches) to approaches that put a duty on society to adapt its practices to the needs of various individuals  .
Although the social model is mostly concerned with physical “impairments”, we here include neurocognitive variations (i.e., neurodiversity) in how we understand “impairments” in the social model. The reason we bring attention to this model of disability is to point out its conception of impairment and how our proposed redefinition of cognitive deficits relates to and differs from it. The social model makes an externalist move by situating disabilities in the interaction between individuals and their environment, but its internalist roots are very much persistent, as is evidenced by its conception of individual impairment: the social model of disability does not fully consider human (cognitive) variety as a natural phenomenon, for it still takes some differences as impairments of the “normal” (i.e., normative) human phenotype. As much as we commend the externalist move, we believe it should be extended beyond the social model’s internalist roots: impairments, or in the present case cognitive deficits, can be found outside the individual organism.
4. Autism’s “Cognitive Deficits” and Their Source
Our externalist conception of cognitive deficits was explained and illustrated with simple examples above. We now intend to motivate it and make it relevant to cognitive science, by showing how it affords a better understanding of some of the key traits of autism, whose source, we argue, is similarly external. We will show that most of the cognitive deficits characteristic of autism result from a mismatch (or set of mismatches) between the specific form of neurological functioning typical of autistic people and the environment in which they find themselves. This mismatch is brought about by the fact that the current cultural niches that set up the relevant fields of affordances are structured by and for neurotypicals   . As we will further argue, this mismatch leads to epistemic injustices  that feed back into research on autism and into clinical practices  , thereby making the deficits appear to be based on individual shortcomings.
4.1. Autistic Traits and Environmental Mismatch
As we stated in Section 3.3, traditional explanations of autistic traits have been largely internalist in nature (i.e., resting on strictly internal deficits). Not only does this seem at odds with the increasing tendency in cognitive science towards 4E approaches, but traditional theories of autism, which often emphasize its social aspects, also face various problems: for instance, they fail to address traits specific to autism, or traits found in all autistic individuals  . Moreover, these theories face contradictory testimony from autistic individuals: notably, the social motivation theory of autism, according to which autistic individuals have a deficit in social motivation, is challenged by autistic testimonies claiming that social motivation is present, though its behavioral manifestations can differ from neurotypical behaviors  . Alternatively, we can turn to the predictive processing framework for a theory that specifically aims to avoid its predecessors’ flaws. In fact, a number of predictive models of autism spectrum disorder have been formulated recently   , which mostly differ with respect to the phenotypic properties of autism they emphasize  . We chose to adopt the HIPPEA model, developed by Van de Cruys et al.     , according to which autism is explained by high, inflexible precision of prediction errors  . Although we believe that HIPPEA and similar theories are on the right track when it comes to the type of neurological functioning characteristic of autistic people, their conception of the cognitive deficits typical of autism remains internalist. In this section, HIPPEA is not used to characterize autistic cognition as a disorder but to describe a specific human neurodiversity, that is, the difference between autistic and neurotypical cognition, which can be viewed as a normally distributed neurocognitive variability in the human population.
In fact, to stay true to our externalist deficit model, we should not even speak at this point of autistic vs neurotypical cognition, since we are only describing variation on the brain side of an extended or 4E system. We then use our description of these two types of cognitive functioning to characterize the kind of niche each would prefer (and therefore select or build, presumably). We finally show that autism, as a disorder, results from the fact that people with one type of cognitive functioning must live in an environment built by people with the other.
4.1.1. The HIPPEA Model
We noted that some prediction errors reflect inadequate models but, more importantly here, some instead reflect environmental randomness (noise) or volatility (frequent changes in regularities). We also saw that to properly learn and function, predictive systems must be able to distinguish reducible uncertainty from irreducible uncertainty, i.e., signal from noise. To achieve this, the system must learn to identify situations in which its error signals indicate hidden unpredicted environmental causes and those in which they do not  . It must, so to speak, (meta-)learn when to learn. According to HIPPEA, neurotypical cognitive systems (NCS) and autistic cognitive systems (ACS) differ in the way they distinguish between signal and noise, each in a way that leads to a specific manner of dealing with prediction error. NCS are characterized by high differentiation between signal and noise, which leads, in extreme cases, to overemphasizing the context-dependent nature of some prediction errors. ACS, for their part, are characterized by low differentiation between signal and noise, which leads, in extreme cases, to underemphasizing the context-dependent nature of some prediction errors. Van de Cruys et al.  and Lawson et al.  describe this difference as an underestimation (NCS) vs overestimation (ACS) of environmental volatility. Palmer, Lawson and Hohwy  describe it rather as a persistently low (NCS) vs high (ACS) learning rate. In other words, ACS form relatively stronger hyper-priors concerning the expected precision of prediction error, thus invariably according higher gain to the prediction errors of the lower (mostly sensory) levels of the predictive hierarchy, whereas NCS form relatively weaker hyper-priors of this kind, thus according lower gain to the prediction errors of the lower levels of the hierarchy.
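The contrast can be stated compactly. In a minimal predictive-coding sketch (ours, for illustration only; HIPPEA is not committed to this exact formulation), a belief $\mu$ about a hidden environmental cause is updated by precision-weighted prediction error:

```latex
\mu_{t+1} = \mu_t + \kappa \, \pi \, (x_t - \mu_t)
```

where $x_t$ is the current sensory input, $x_t - \mu_t$ the prediction error, $\pi$ the expected precision (inverse variance) assigned to that error, and $\kappa$ a constant. The product $\kappa\pi$ behaves as an effective learning rate: on HIPPEA, ACS hold $\pi$ persistently and inflexibly high, so every error, whether signal or noise, drives a large update, whereas NCS can lower $\pi$ in contexts judged noisy, letting errors be absorbed without new learning.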
The expected high precision of prediction error in the lower levels of the hierarchy has effects on the system’s overall functioning, especially on learning. If HIPPEA is correct then, given the differential weighting of prediction error in their respective cognitive systems, the lower levels of the ACS neural hierarchy are much more frequently in a state of learning than those of the NCS neural hierarchy. In the ACS case, learning will try to integrate a high proportion of the new sensory input into its low-level models. These lower-level models are therefore closely adjusted to previous input signals, so that every small difference between one input and the next will generate a prediction error signal which, in turn, will lead to new learning, and so on. When the environment is relatively regular and uncertainty is low, this will allow the low-level models to snugly fit the data: incoming data will be finely modeled (categorized, predicted), allowing for high perceptual precision. HIPPEA thus explains various traits associated with ACS, including increased perceptual performance such as superior pitch processing or superior contrast sensitivity  . The constant accentuation of prediction error precision in the lower levels of the neural hierarchy has the effect of accentuating each detected perceptual deviation from the predicted input. This increased sensitivity to small differences, due to the close fitting of predictions, may offer a perceptual advantage, as in the examples just mentioned. On the other hand, when the environment is particularly noisy or volatile, this close fit amounts to overfitting predictions to sensory data: a given set of data is so closely and precisely modeled that predictions concerning it cannot be generalized to new incoming data from the noisy or volatile environment. Overfitting also explains the negative aspect of this type of increased attention to detail: hypersensitivity and sensory overload.
Since the system is constantly working to reduce prediction error being continually generated at the lower levels of the hierarchy, cognitive resources are constantly solicited and the system as a whole can become overloaded, especially in unstable environments or when facing unpredicted stimuli  .
In the second (NCS) case, a lower proportion of the new sensory input will be integrated into low-level models, which may lead to underfitting signals from the environment, making regular environments be perceived less precisely (e.g., details missed or lumped together) and seem uninteresting, but at the same time making generalization more efficient in noisy and volatile environments. If the neurological characteristics attributed by HIPPEA to ACS and NCS are normally distributed, most individuals reside somewhere around a mean value of noise/signal differentiation: the mean is, by definition, the NCS mode of learning. As we move away from the mean towards lower values of noise/signal differentiation, the cognitive traits typical of autism emerge, and get more pronounced the further we are from the mean. As we move away on the other side, other cognitive traits could be expected, though these do not appear to have been assigned psychiatric categories (it would be interesting to see whether some do fit the profile of psychiatric categories).
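The overfitting/underfitting contrast described in the last two paragraphs can be illustrated with a toy simulation (ours, not part of HIPPEA; the learning rates and signal statistics are arbitrary illustrative choices). A tracker that gives high, inflexible weight to every prediction error ends up chasing irreducible sensory noise, while a tracker with a lower weighting smooths over it:

```python
import random

def track(inputs, lr, mu=0.0):
    """Update an estimate mu by a fixed fraction (lr) of each prediction error."""
    estimates = []
    for x in inputs:
        mu = mu + lr * (x - mu)  # precision-weighted prediction-error update
        estimates.append(mu)
    return estimates

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

random.seed(0)
# A regular environment: a constant hidden cause (5.0) plus irreducible noise.
signal = [5.0 + random.gauss(0, 1.0) for _ in range(500)]

acs_like = track(signal, lr=0.9)  # high, inflexible error weighting
ncs_like = track(signal, lr=0.1)  # low error weighting

# The high-weighting tracker reproduces the noise in its estimates (overfitting);
# the low-weighting tracker generalizes over it towards the hidden constant.
print("ACS-like estimate variance:", round(variance(acs_like[100:]), 3))
print("NCS-like estimate variance:", round(variance(ncs_like[100:]), 3))
```

In a volatile environment (a hidden cause that jumps frequently), the ranking would reverse: the high weighting re-learns each new regularity quickly. Neither setting is a deficit in itself; only the pairing of a setting with an environment makes it one, which is the point of the argument above.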
In view of the differences between the ACS and NCS modes of learning and cognitive processing, due to their different weighting of prediction error, we can conjecture the type of physical and cultural niche each would prefer, and thus select or build. Because ACS will integrate a higher proportion of signals into their lower-level models, which will increase their perception of details in regular environments and, because of overfitting, make noisy or volatile environments difficult to model and thus predict, it seems plausible to surmise that such cognitive systems will be particularly efficient in regular environments and in tasks where perception of details is relevant, but less capable in volatile environments, which will appear puzzling, since change will seem random, uncaused or unmotivated, and even less capable in noisy environments, because ACS will continually attempt to reduce the irreducible. We also saw that NCS will integrate a lower proportion of the new sensory input into low-level models, leading to underfitting, and thus to less precision in regular environments but more efficient processing (generalization) in noisy and volatile environments. It is thus reasonable to conjecture that, since stability, regularity and anticipated stimuli are barely noticed by NCS, people with this type of neurocognitive variation will tend to avoid stable and regular environments (fleeing mathematics and science) in favor of more volatile environments, or environments where the regularities appear at higher levels of processing, such as in politics and gossip. Note that, by definition (it is “neurotypical”), this is the type of cognitive system that predominates in the current human population (had the other type been predominant, it would have been identified as the neurotypical form), and we can thus expect its preferred form of environment to be common.
We will now explain how autism, as a disorder, comes about because of the mismatch between ACS and the NCS preferred environment.
According to HIPPEA, autistic traits (i.e., the behavioral patterns associated with autism) become apparent when there is a mismatch between the organism and the environment, especially when environmental regularities are unstable or volatile. The high and inflexible precision of prediction error alone is thus not sufficient to generate all autistic traits. Recall that the diagnosis of autism spectrum disorder, as understood by the DSM-5  , is established with regard to two axes of observable behaviors: 1) persistent deficits in communication and social interactions in various contexts and 2) restricted and repetitive behaviors, interests and activities. The first axis, which identifies social atypicalities, is explained in the predictive framework by the higher instability of social interactions. The second axis, which identifies behavioral atypicalities, is explained by a preference and tendency for generating environmental stability (i.e., predictability).
4.1.2. Perceptual and Behavioral Axis
If instability, ambiguity, and unanticipated stimuli require significant investment from ACS, they can be expected to avoid unstable and ambiguous environments. The search for stable environments, due to the high precision of prediction error, plays an important explanatory role in HIPPEA. It accounts for an important behavioral aspect in autism: intolerance to change, inflexible adherence to routines and ritualized behaviors. The formation of very precise priors over-adjusted to previous data, as well as the constant upscaling of prediction error, give autistic individuals accurate and rigid models of their environment and of their interactions with it. Changes in this environment or in their routines therefore generate significant prediction error and potentially a form of cognitive overload. According to HIPPEA, the search for stability and predictability in the environment is partly responsible for the so-called “limited interests” of autists. Overfitting and the tendency of predictive systems to search for, or even construct, stability enable autistic individuals to develop increased skills and interests in one or more specific areas.
Another trait associated with autism is the stereotyped or repetitive character of movements such as self-stimulation. Self-stimulation behaviors, which are often defined as “repetitive movements that do not serve any apparent purpose in the environment”  (our translation), in fact seem to serve specific purposes for autistic individuals: “either to provide sensory stimulation or to reduce overstimulation” (our translation)  . HIPPEA’s explanation of these behaviors rests on active inference: self-stimulation generates stability and reduces prediction error through the confirmation of predictions by self-generated, hence already predicted, sensory signals   .
4.1.3. Social Axis as an Environmental Mismatch
According to HIPPEA, social cognition is not fundamentally different from perceptual cognition: both are accomplished by the same predictive mechanisms    . The main difference between social and non-social situations lies in their complexity and volatility: as Van de Cruys et al.  put it, “social can just be a synonym of complex here”. Consequently, the explanation of social difficulties offered by HIPPEA is simple: social situations, which are mostly dominated by neurotypicals, are extremely complex, permeated by contextual and volatile information, which makes them difficult to manage for a system whose priors and predictions are precisely attuned to specific situations.
This description brings us to reflect on the appearance of these atypical traits in stable environments: if the specific cognitive characteristics of autistic individuals are not sufficient for said traits to emerge, what remains of them in stable environments? We are not speaking of, say, eliminating social interactions in favor of greater stability. We are rather imagining the possibility of a social environment where interactions are perhaps not governed by implicit norms and not dominated by implied meaning. The appearance of autistic traits or difficulties is intimately related to the nature and structure of the particular cultural niche people find themselves in. For example, in a community whose social standards have been offloaded into the environment in an explicit form by (and for) autistic people, social difficulties do not correspond to those that appear under the first axis of the autism spectrum disorder diagnosis. In other words, such a cultural niche would contain information about the precision that can be accorded to the behavior of others: one could take others’ words at face value, without the need for much contextual interpretation or disambiguation. Social comprehension and reciprocity difficulties thus seem reducible to a gap between the cultural niches built by neurotypicals and those built by autists. The information contained in neurotypical cultural niches is volatile and ambiguous, while that contained in autistic cultural niches is more rigid and clear: “as people with ASC (Autism Spectrum Conditions) are forced to fall back on actions to reduce uncertainty and input variability, they implicitly construct rigid and inflexible attentional loci, on the basis of which they are able to assess the reliability of sensory fluctuations”  .
4.2. Testimonial and Hermeneutic Injustices
One might see in the foregoing a looming threat of some kind of cognitive incommensurability between neurotypicals and neurologically diverse people (notably, autistic individuals), as some studies seem to indicate  . We believe however that such a gap between neurologically different groups is the result of existing inequalities in the epistemic contribution to cultural niches and could be bridged were the inequalities corrected (or at least attenuated). Adapting Dotson’s concept of epistemic incommensurability, which refers to a phenomenon where epistemic divergences are so important they generate fundamentally incompatible points of view on the world  , we speak of cognitive incommensurability to point out the lack of hermeneutical (i.e., interpretative) resources to describe the experience of neurodiverse people, which results in their experience being misunderstood or even ignored by neurotypicals   .
To better understand this phenomenon and its consequences on research and on the creation of cognitive deficits, it will be useful to consider an important conceptual contribution, increasingly present in feminist research, made by Miranda Fricker. Fricker  identifies two types of epistemic injustices: hermeneutical injustices, which concern gaps in the collective resources needed to make sense of and express one’s experience, and testimonial injustices, which concern prejudicial deficits in the credibility granted to testimony. These two types of injustices are epistemic insofar as they affect the epistemic agency of individuals. Various aspects of the two types of injustices have been analyzed    , but we will focus on some of their more practical impacts. A classic case of hermeneutical injustice mentioned by Fricker is the absence of the concept of “sexual harassment”: before the concept’s formulation and integration into the collective hermeneutical resources, it was difficult, maybe even impossible, to express (or to testify to) such an experience. Fricker illustrates testimonial injustices through an example from literature: in To Kill a Mockingbird, the full value of a racialized man’s testimony is not considered in court, and he falls victim to a credibility deficit because of his social identity.
These two key examples in the epistemic injustice literature illustrate well-known phenomena emanating from sexism and racism, but can we speak of epistemic injustices regarding neurocognitive variations? In a 2016 article, Leblanc and Kinsella  reflect on the relations between the phenomena of epistemic injustice, on the one hand, and the marginalization of the experiences and knowledge of people self-identifying as “mad”, on the other. They shed light on the historical entrenchment of the mad/sane dichotomy using Rimke’s  concept of psychocentrism: an internalist perspective on cognitive variations (or divergences) that identifies deviations from the cognitive norm as pathological and as internal to the individual rather than as the result of a complex social structure. Birnbaum  goes so far as to speak of “sanism” on the same level as racism, sexism, etc. Although we are not attempting an analysis of psychocentrism in traditional accounts of human cognition, we should point out that they do not escape Rimke’s critique, according to which psychocentrism is rooted in a biomedical paradigm that hides power relations under the concept of deficit  .
Accordingly, we believe it is justified to speak of epistemic injustices towards cognitively varied people. For instance, Jaswal and Akhtar  provide a clear example of testimonial and hermeneutic injustices when they show that autists are thought to have social motivation impairments due to a lack of variety in the dominant, neurotypical, interpretation of social interactions. Under our revised conception of cognitive deficits, such injustices both generate disabilities in daily life and steer autism research into a loop of empirically inadequate, self-fulfilling prophecy. In terms of the daily life of autistic people, the fact that the collective hermeneutic resources (which are an integral part of any cultural niche) are dominated by neurotypicals excludes the autistic experience, specifically their social experience. The difficulties of social understanding and reciprocity seem reducible, from the predictive 4E perspective, to a mismatch between the cultural niches built by neurotypicals and those that would be built (and that are increasingly being built) by autists. For instance, given the practices established within the (self-constructed) cultural niche, neurotypicals interpret certain behaviors as typically social and others as typically antisocial or asocial: notably, eye contact, physical contact, small talk, spontaneous interactions, etc., behaviors that, if HIPPEA is on the right track, are challenging for autistic individuals because of their inherent volatility. Although autistic social behaviors take different forms, they are still eminently social (as evidenced by any gathering of autistic people).
This specific hermeneutic injustice results from an erroneous interpretation of autistic social behaviors on the part of neurotypicals, owing to a lack of cultural hermeneutic resources to express the aspects and qualities of such experience. We can easily conceive that such misinterpretations cause friction (e.g., a neurotypical person interpreting an autistic person’s behaviors as signaling low or nonexistent social interest, and an autistic person interpreting a neurotypical person’s behaviors as rude or careless) and lead to real social difficulties. Over and above the sources of autism described above, some of an autistic person’s social disability thus depends on the absence of an enabling environmental contribution (i.e., inadequate hermeneutic resources). This is a prime example of a deficit in the externalist sense described in Section 3.
Research on autism is also influenced by epistemic injustices insofar as testimonial injustices towards autistic people deprive researchers of an important source of empirical data on autism: autistic people’s first-person experiences in the form of testimony. We suspect there might be multiple issues that suffer from this exclusion of data, but we will focus on the specific case recently explored by Jaswal and Akhtar  : the misinterpretation of autistic behavior (which is a hermeneutic injustice) is also committed by researchers because testimonies that contradict their research are not being considered. Hens et al.  go even further and show how researchers’ theoretical assumptions generate a testimonial injustice. Referring to Frith and Happé’s work  on the theory of mind in autism, they identify what might be the core source of the rejection of autistic people’s testimonies: Extending the theory of mind hypothesis, Frith and Happé argue that people with autism exhibit deficits in attributing mental states not only to others but also to themselves. On this view, although autistic people undoubtedly have mental states, their ability to reflect on them is impaired. Hence, self-reports by people with autism should not be assumed to be veridical  .
Theories such as Frith and Happé’s (as well as other social theories of autism) assume a deficit in the very capacity that would allow autistic people to provide evidence that might contradict the theory itself, thus generating a self-fulfilling loop. From the outset, social theories of autism reject an entire set of valuable empirical data in the form of first-person testimonies. Studies even suggest that this testimonial injustice deprives research of essential partners and experts and, in doing so, makes it less able to produce ethical and empirically valid contributions. For instance, Jivraj et al. state that: “proponents of [participatory research] have argued that gathering input from community partners is critical to producing valid and ethical scientific information that is inclusive of the partner’s perspectives”  . More recently, Gillespie-Lynch et al.  found similar results when trying to assess the expertise of autistic people on their own condition:
Autistic people who have developed heightened understanding of autism may be particularly well suited to teach other people about autism, as they tend to endorse less stigmatizing conceptions of autism, have reduced interest in making autistic people appear more normal, and may often have heightened empathy for the challenges others face  .
The mind-body problem has been the subject of many philosophical debates attempting to settle the opposition between monism and dualism. Although varied, the arguments in this literature have mostly (if not all) been internalist with regard to the nature of cognition: the question concerned the ontological identity or difference of body and mind, not the reach of the mind. In recent years, a somewhat similar debate has been taking place in cognitive science, but concerning the mind’s frontier: with proposals from 4E approaches, the problem is now to determine where the mind ends. With growing evidence that the workings of cognition cannot be fully explained without bodily and environmental processes, some in cognitive science are open to novel analyses. In this paper, we took on board lessons from 4E cognition and predictive processing to question one of the received conceptions: the nature of cognitive deficits. We argued for a move from an internalist to an externalist conception of cognitive deficits. This opened the possibility that a given deficit might result from the introduction of disruptive contributions to cognitive performance, or from the elimination of contextual or enabling contributions to cognitive performance, in an individual’s environment.
We used the case of autism to show that certain socially constructed environments (i.e., cultural niches) can generate various cognitive deficits because of inequities in their construction. Because of the adoption of more traditional views on cognition, these deficits are considered to be individual and internal impairments, or even pathologies, leading to further inequities, namely epistemic injustices. These injustices, hermeneutic and testimonial, have stigmatizing consequences in neurodiverse people’s daily lives, but they also have unfortunate effects on research. The example of autism shows that its traditional explanations tend to exclude autistic people either from research itself or from providing testimonial evidence or counter-evidence for said explanations. This leads to a misguided conception of autism, which, in turn, furthers epistemic injustices that feed back into research, and so on. What predictive 4E conceptions of cognition suggest about autism’s so-called deficits is rather that they mostly result from autistic people’s specific neurocognitive variations (e.g., a high learning rate, leading to ease in building low-level (modal) priors and hyperpriors in regular environments but difficulties in building high-level priors and hyperpriors in noisy and volatile environments) not being accommodated by neurotypical environments. Taking these explanations seriously should lead to better research (i.e., more ethical and empirically adequate) and better clinical approaches (i.e., better adapted to the actual needs of those concerned). In this context, autism interventions (for example) should partly focus on the development of social policies and practices aimed at modifying those aspects of neurotypical cultural niches that make environments unsuitable for the full development of all individuals.
Take the case of flapping (i.e., the act of flapping one’s hands or arms that usually accompanies relatively strong emotions): the predictive conception of autism developed above interprets these repetitive movements as a form of self-stimulation that reduces prediction error, since the predictive system itself produces predictable and actually predicted stimuli. A clinical practice put forward since the 1980s via the work of Lovaas  consists in modifying behavior by conditioning (Applied Behavior Analysis, ABA). The goal is to encourage, in an intensive and early manner (Early Intensive Behavioral Intervention, EIBI), desired behaviors and to reduce unwanted behaviors. The strategies applied consist in eliminating stereotyped behaviors, most often in children, and promoting social behaviors. For example, flapping will be discouraged by punishment and eye contact encouraged by rewards:
ABA was strictly empirical: the child was rewarded with M & Ms, sips of apple juice, and phrases like “Good job!” for doing things like making eye contact and sitting at a table, and punished with a loud “NO!” for hand flapping and stimming  .
Now, if self-stimulation allows prediction error reduction in predictive systems, a cultural niche that discourages or prohibits such behavior (e.g., a cultural niche that would prohibit flapping) would force these systems into a suboptimal state. The creation of inclusive social policies could begin with the de-stigmatization of non-harmful self-stimulation, that is, self-stimulation that does not cause physical harm to oneself or others (e.g., flapping or fidgeting). Besides, is it not common to see non-autistic individuals jumping up and down out of intense joy? If it would seem extreme to punish non-autistic children who jump up and down to express intense joy, it seems just as odd to punish autistic children who flap their hands. Similarly, a less pronounced emphasis on the importance of eye contact in social exchanges could reduce the apparent lack of social motivation postulated by some theories  , since the disruptive effects of forced eye contact on cognitive performance would thus be reduced. Interestingly, the prosocial nature of eye contact is far from universal: some cultures even deem it offensive  . Such policies or shifts in values could be the first steps towards more inclusive environments (i.e., environments that generate fewer and fewer deficits in neurodiverse populations). Just as few today would consider organizing a social event at a strip bar, perhaps one day few would consider organizing mostly unpredictable social gatherings.