
Wednesday, 18 March 2026

Quantifying Schema Chemistry: A Personality-relative, Experimental, Hierarchical-Dimensional Matrix for EMS Schema Chemistry

In my previous article, I introduced an experimental, preliminary hierarchical-dimensional framework for visualising an individual’s internal "EMS Profile," utilising heuristic Severity and Volatility indices to map the shape of their psychological distress: 

Figure 1.

However, Early Maladaptive Schemas (EMS) do not exist in a vacuum; they are profoundly relational. To fully understand characterological distress, we must examine how these internal schemas collide with the external world—specifically through the phenomenon of "schema chemistry."

The clinical literature suggests that individuals do not choose romantic partners at random. Instead, they selectively and unconsciously filter for partners whose specific behaviors will confirm their most deeply held negative beliefs about themselves and the external world. Schema chemistry describes this powerful, highly magnetic, and frequently unconscious attraction individuals experience toward others who trigger, validate, and reinforce their core EMS.

While clinicians have long observed this phenomenon, there is currently no formalised psychometric instrument to measure it. To bridge this gap while developing such an instrument, I am expanding my preliminary, pre-EFA diagnostic modeling to include an experimental Schema Chemistry Matrix, integrating cognitive-behavioral vulnerabilities, severe personality traits, and dimensional models like the Big Five.

The Two Dimensions of Schema Chemistry

Extensive literature indicates that schema chemistry encompasses two core theoretical dimensions: Attraction and Illusion. These are based upon clinicians' notes and other low-grade evidence gathered over time by the therapy community, so it is plausible that they identify meaningful diagnostic factors. To construct a measurable framework, I propose operationalising these dimensions using established psychometric proxies prior to applying exploratory and confirmatory factor analysis, in order to learn more about what latent variables exist.

1. Attraction: The Somatic Pull of the EMS

The first dimension, Attraction, refers strictly to the behavioral, emotional, and somatic pull toward repeating familiar relational patterns that are deeply associated with unmet emotional needs from early childhood. In our experimental model, we quantify this "somatic pull" by drawing upon proxies from Cluster B and Cluster C personality profiles. High "Attraction" scores reflect traits synonymous with Dependent Personality Disorder (the terror of abandonment) and Borderline Personality Disorder (sensation-seeking, and the neurological addiction to chaotic "push-pull" relational dynamics).

2. Illusion: The Cognitive Distortion Due to EMSs

The second dimension, Illusion, involves the sophisticated cognitive distortions required to initiate and maintain the relationship. This manifests as the profound idealisation of romantic love and an inaccurate perception of long-term compatibility. The Illusion dimension causes individuals to systematically overlook glaring behavioral red flags. I initially operationalise this cognitive distortion using proxies from histrionic, delusional, narcissistic, and schizotypal traits, capturing theatrical romanticising, impressionistic cognitive styles, and fixed false beliefs about "destiny" or "twin flames" that persist despite overwhelming relational toxicity.

Proposed/Preliminary 4-Quadrant "Schema Chemistry Matrix"

By plotting Attraction (X-axis) against Illusion (Y-axis) on a 0–100 scale, a new diagnostic coordinate system emerges. This allows us to categorise a patient's relational pathology into four distinct Schema Chemistry Archetypes:

  • Type I: The Traumatic Bond (High Attraction, High Illusion): The most volatile profile. The patient is neurologically addicted to the relational chaos (high somatic pull) and utilises massive cognitive distortions to rationalise the dysfunction as "epic love" or "fate."

  • Type II: The Compulsive Repetition (High Attraction, Low Illusion): The patient possesses clinical insight and recognises the relationship is destructive, yet the physiological schema pull (the fear of abandonment or emptiness) is too severe to break the behavioral loop.

  • Type III: The Idealised Fantasy / Limerence (Low Attraction, High Illusion): Often observed in one-sided, distant, or parasocial relationships. The patient is not addicted to the partner's actual behavior (which may be avoidant or absent), but rather to the fantasy of who the partner is, driven by heavy idealisation.

  • Type IV: Secure / Differentiated (Low Attraction, Low Illusion): The healthy baseline. Mate selection is conscious, reality-tested, and secure, devoid of trauma-driven magnetism or severe cognitive distortion.
Figure 2.
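As a sketch of how this coordinate system might be operationalised, the quadrant assignment reduces to two threshold comparisons. The midpoint cutoff of 50 below is an illustrative assumption, not a validated norm; real cutoffs would come from the planned EFA and normative data.

```python
def classify_chemistry(attraction: float, illusion: float, cutoff: float = 50.0) -> str:
    """Place a patient in the experimental Schema Chemistry Matrix.

    Both scores are assumed to lie on the article's 0-100 scale; the
    midpoint cutoff is an illustrative placeholder, not a clinical norm.
    """
    high_attraction = attraction >= cutoff
    high_illusion = illusion >= cutoff
    if high_attraction and high_illusion:
        return "Type I: Traumatic Bond"
    if high_attraction:
        return "Type II: Compulsive Repetition"
    if high_illusion:
        return "Type III: Idealised Fantasy / Limerence"
    return "Type IV: Secure / Differentiated"
```

For example, a patient scoring 80 on Attraction and 20 on Illusion (insight intact, pull intact) would fall into Type II.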

The Engine of Pathology? Intersecting the Big Five

To complete the model, we introduce trait psychology as the "engine" driving these relational patterns. Extreme Big Five (NEO-PI) personality traits heavily interact with schema chemistry. For example, a "Type I: Traumatic Bond" is frequently supercharged by extreme Neuroticism and high Openness to Experience (fantasy-proneness). Conversely, "Type II: Compulsive Repetition" is often heavily anchored by extreme Neuroticism coupled with distinctly low Agreeableness (perpetuating hostility/compliance loops).

I suspect that intersections with Cluster B personality disorders and other disorders are also salient. However, there are known relationships between extreme personality traits and psychopathologies, and existing EMS research tends to emphasise the relationship of EMS with NEO-PI personality traits.

In a clinical dashboard visualisation, these Big Five risk factors can be represented by the color of a patient's data point, while their baseline Schema Severity (from our previous article's model) dictates the size of the data point. The result is a multi-dimensional, comprehensive map of a patient's internal pain and their external relational mechanics. (See Figure 2.) 
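One minimal way to sketch that visual encoding follows. The size formula, colour names, and the 0–100 trait scale are all arbitrary illustrative choices of mine; the only grounding is the article's mapping of Severity to point size and Big Five risk to colour.

```python
def encode_point(severity: float, neuroticism: float, openness: float) -> dict:
    """Map a patient's scores to illustrative dashboard aesthetics.

    severity: Severity Index (mean schema score on the 1-6 scale).
    neuroticism / openness: Big Five scores, assumed here on a 0-100 scale.
    Returns marker size and colour for a scatter-plot data point.
    """
    size = 20 + 40 * (severity - 1)  # larger radar-polygon area -> larger marker
    if neuroticism >= 70 and openness >= 70:
        colour = "crimson"       # Type I risk: extreme Neuroticism + fantasy-prone Openness
    elif neuroticism >= 70:
        colour = "darkorange"    # extreme Neuroticism alone
    else:
        colour = "steelblue"     # baseline
    return {"s": size, "c": colour}
```

The returned `s`/`c` keys mirror the conventional scatter-plot parameters, so the dict could be splatted directly into a plotting call.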

Looking Forward

Developing a preliminary measure that formally recognises the intersections between dimensional personality traits, Cluster B proxies, and schema chemistry is the critical next step. The proposed matrix provides a theoretical scaffold. Future research must now focus on rigorous item generation and subsequent Exploratory Factor Analysis (EFA) to build a validated Likert-scale instrument capable of accurately placing patients within this relational coordinate system.

References

Beckley, K. (2022). Schema chemistry: An interpersonal framework for making sense of intimate partner violence. Journal of Clinical Psychology, 78(1), 38–49. https://doi.org/10.1002/jclp.23295

Dumitrescu, D., & Rusu, A. S. (2012). Relationship between early maladaptive schemas, couple satisfaction and individual mate value: An evolutionary psychological approach. Journal of Cognitive & Behavioral Psychotherapies, 12(1).

Körük, S., & Özabacı, N. (2020). Fate or schema chemistry? Which one does bring and hold mates together? Edu 7: Yeditepe Üniversitesi Eğitim Fakültesi Dergisi, 9(11), 17–42.

Paim, K., & Cardoso, B. L. A. (2019). Terapia do Esquema para casais: Base teórica e intervenção. Artmed Editora.

Piai, L. P., Cardoso, B. L. A., Capinha, M. I. L., Teodoro, M. L. M., Burgos-Benavides, L., & D’Affonseca, S. M. (2025). Early maladaptive schemas and intimate partner violence. Revista Brasileira de Terapias Cognitivas, 21, e20250554. https://doi.org/10.5935/1808-5687.20250554

Roediger, E., & Archonti, C. (2020). Transference and therapist–client schema chemistry in the treatment of eating disorders. In S. Simpson & E. Smith (Eds.), Schema Therapy for Eating Disorders: Theory and Practice for Individual and Group Settings (pp. 221–241). Routledge.

Young, J. E., Klosko, J. S., & Weishaar, M. E. (2003). Schema therapy: A practitioner's guide. Guilford Press.

Beyond the Categorical and Dimensional: An Experimental, Pre-EFA Hierarchical-Dimensional Framework for Visualizing EMS

Schema Therapy, an integrative therapeutic modality originally developed by Jeffrey E. Young and colleagues, represents a significant evolution from traditional Cognitive Behavioral Therapy (CBT) (Young, Klosko, & Weishaar, 2003). It was explicitly designed to address chronic, characterological psychological disorders that often resist standard CBT interventions (Beckley, 2022; Piai et al., 2025; Young et al., 2003). 

At the core of this theoretical architecture is the concept of Early Maladaptive Schemas (EMS), defined as broad, pervasive, and self-defeating themes or patterns comprised of memories, emotions, cognitions, and bodily sensations (Young et al., 2003). These schemas originate during childhood when fundamental emotional needs are chronically frustrated or unmet by caregivers, and they become exceptionally rigid and highly resistant to modification over an individual's lifetime (Dumitrescu & Rusu, 2012; Piai et al., 2025; Roediger & Archonti, 2020).

The seminal literature categorizes eighteen specific EMS into five overarching domains: Disconnection and Rejection, Impaired Autonomy and Performance, Impaired Limits, Other-Directedness, and Hypervigilance and Inhibition (Young et al., 2003). Once encoded, these schemas act as implicit organizing principles, dictating how individuals interpret environmental stimuli and inadvertently compelling them to recreate the precise conditions that originally traumatized them (Körük & Özabacı, 2020; Paim & Cardoso, 2019).

This unconscious replication is also thought by many therapists to govern "schema chemistry," the powerful attraction individuals experience toward others who trigger and reinforce their core EMS (Paim & Cardoso, 2019). Schema chemistry operates through two distinct dimensions: an intense behavioral "Attraction" to familiar relational patterns, and a cognitive "Illusion" that idealizes the relationship (Paim & Cardoso, 2019; Young et al., 2003).

However, while the theoretical delineation of these 18 schemas is profoundly useful, assessing and classifying real-world patients presents a significant psychometric challenge. As with many personality and individual-differences assessments, fitting an individual into a discrete, categorical diagnostic box, or even evaluating them against purely dimensional criteria, is often inadequate. Patients rarely exhibit the influence of just one schema. As with known psychopathologies, their clinical presentations are deeply comorbid and complex.

To address this, I am exploring an experimental, hierarchical-dimensional framework for modeling and representing a client’s overarching "Schema Profile."

Moving from Categories to Coordinates

Rather than asking, “Which schema does this patient have?” this experimental approach asks, “What is the shape and intensity of this patient's schema ecosystem?”

By mapping a client's scores (typically 1 to 6 on instruments like the Young Schema Questionnaire) onto a radial spider or radar chart, grouped by the five thematic domains, we can immediately visualize the "shape" of their pathology.


Figure 1. An experimental radar chart visualizing the 18 Early Maladaptive Schemas across five domains, contrasting a highly volatile intake profile with a smoother 6-month post-treatment profile.

To move beyond mere visualization into quantitative data science, I propose inferring two initial experimental metrics from this 18-variable array:

  • The Severity Index (μ): The mean of all 18 schema scores. This represents the overall "area" of the radar polygon, providing a macro-level assessment of the client's total clinical and psychological burden.

  • The Volatility Index (σ): The standard deviation of the 18 scores. This mathematically measures the "jaggedness" of the polygon. A low volatility score indicates a smooth, regular shape (meaning scores are uniform across the board), while a high volatility score indicates a chaotic shape with extreme spikes and deep valleys.
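The two indices above can be computed directly from the 18-score array. The choice of population standard deviation here is my implementation assumption; the sample standard deviation would be equally defensible.

```python
from statistics import mean, pstdev

def schema_indices(scores: list[float]) -> tuple[float, float]:
    """Compute the experimental Severity (mu) and Volatility (sigma)
    indices from an 18-item array of schema scores (1-6 scale).

    pstdev (population SD) is an implementation choice; sample SD
    would serve the same 'jaggedness' interpretation.
    """
    if len(scores) != 18:
        raise ValueError("expected scores for all 18 schemas")
    return mean(scores), pstdev(scores)
```

A flat profile (all scores equal) yields sigma = 0, i.e. a perfectly smooth radar polygon, while a profile split between extremes yields a high sigma.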

An Experimental Typology of Schema Profiles

By plotting these two experimental indices on a 2D coordinate system (Severity vs. Volatility), we hypothesize that patient profiles will naturally cluster into four distinct clinical archetypes:

  • The Focal Profile (Low/Moderate Severity, High Volatility): Visually, this appears as a small radar shape with one or two massive spikes. This individual functions relatively well at baseline but possesses highly specific, severe schema triggers.

  • The Polymodal Profile (High Severity, High Volatility): Visually chaotic and highly jagged. This patient spikes across multiple prominent schemas that bridge different domains. This profile likely maps onto complex presentations, indicating multiple, distinct traumatic origins.

  • The Pervasive Profile (High Severity, Low Volatility): Visually resembling a large, inflated circle. This individual scores evenly and highly across almost all 18 schemas. This diffuse profile suggests profound, systemic childhood trauma where almost all core emotional needs were frustrated.

  • The Compensated Profile (Low Severity, Low Volatility): A small, tight circle near the center axis. This indicates either baseline psychological health or a highly rigid schema coping style (such as total Avoidance or Overcompensation) that prevents the patient from acknowledging the items on the psychometric instrument.
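The four archetypes above can be sketched as a simple two-threshold classifier. The cutoff values (3.5 on the 1–6 severity scale, 1.0 for volatility) are illustrative assumptions of mine, pending the factor-analytic and normative work the article calls for.

```python
def classify_profile(mu: float, sigma: float,
                     severity_cut: float = 3.5,
                     volatility_cut: float = 1.0) -> str:
    """Assign one of the four experimental schema-profile archetypes.

    mu: Severity Index (mean of the 18 schema scores, 1-6 scale).
    sigma: Volatility Index (standard deviation of the same scores).
    Cutoffs are illustrative placeholders, not empirically derived norms.
    """
    high_severity = mu >= severity_cut
    high_volatility = sigma >= volatility_cut
    if high_severity and high_volatility:
        return "Polymodal"
    if high_severity:
        return "Pervasive"
    if high_volatility:
        return "Focal"
    return "Compensated"
```

For instance, a low mean with a large spread (one or two massive spikes) lands in the Focal quadrant, matching the "small radar shape with spikes" description above.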

Future Directions for Research

It must be heavily emphasized that this hierarchical/meta-dimensional modeling approach and its associated metrics (the Severity and Volatility Indices) are entirely experimental. However, with proper expansion using relevant factor analyses and other tools, their potential clinical utility is promising. By establishing this mathematical framework, researchers can begin to overlay these schema profiles with other dimensional individual-difference measures to identify predictive psychopathological intersections.

Furthermore, this visual and mathematical model offers a novel way to track therapeutic outcomes. At minimum, it provides a heuristic score grounded in established measures of EMS. In successful Schema Therapy, a clinician might observe a patient's profile transition from a highly jagged state at intake to a smoother, less volatile shape at follow-up. Even if the overall Severity Index (μ) drops only slightly, a significant drop in the Volatility Index (σ) would empirically demonstrate an increase in psychological integration and a dismantling of specific schema triggers.


References

Beckley, K. (2022). Schema chemistry: An interpersonal framework for making sense of intimate partner violence. Journal of Clinical Psychology, 78(1), 38–49. https://doi.org/10.1002/jclp.23295

Dumitrescu, D., & Rusu, A. S. (2012). Relationship between early maladaptive schemas, couple satisfaction and individual mate value: An evolutionary psychological approach. Journal of Cognitive & Behavioral Psychotherapies, 12(1).

Körük, S., & Özabacı, N. (2020). Fate or schema chemistry? Which one does bring and hold mates together? Edu 7: Yeditepe Üniversitesi Eğitim Fakültesi Dergisi, 9(11), 17–42.

Paim, K., & Cardoso, B. L. A. (2019). Terapia do Esquema para casais: Base teórica e intervenção. Artmed Editora.

Piai, L. P., Cardoso, B. L. A., Capinha, M. I. L., Teodoro, M. L. M., Burgos-Benavides, L., & D’Affonseca, S. M. (2025). Early maladaptive schemas and intimate partner violence. Revista Brasileira de Terapias Cognitivas, 21, e20250554. https://doi.org/10.5935/1808-5687.20250554

Roediger, E., & Archonti, C. (2020). Transference and therapist–client schema chemistry in the treatment of eating disorders. In S. Simpson & E. Smith (Eds.), Schema Therapy for Eating Disorders: Theory and Practice for Individual and Group Settings (pp. 221–241). Routledge.

Young, J. E., Klosko, J. S., & Weishaar, M. E. (2003). Schema therapy: A practitioner's guide. Guilford Press.

Monday, 16 March 2026

Beyond Harmful Dysfunction: Why Psychopathology is a Problem of Maladaptive Misinformation Processing

- Bruce Long


The standard definition of mental disorder in clinical psychology and psychiatry is based upon Jerome Wakefield's 'Harmful Dysfunction' model (Wakefield, 1992). According to Wakefield's theory, a mental disorder or illness is present only when a biological mechanism fails at its naturally selected evolutionary function whilst simultaneously causing socially defined 'harm'.

While powerful and useful as a general concept, many find this paradigm deeply unsatisfactory, especially for research and analyses involving neurological mechanisms and their relationship to complex social and interpersonal behaviour. It depends on evolutionary guesswork regarding complex cognition and leans too heavily on abstract, high-level social outcomes. In short, it is hard to make the model work coherently across multiple levels of abstraction (where, in the neurology, cognition, and behaviour of an individual, levels of abstraction may be naturally discretised to some extent rather than imposed arbitrarily as modeling constructs).

(There’s a debate in the philosophy of science about the usefulness of ‘levelism’, but in the philosophy of information and in computer science levels of abstraction are regarded as meaningful.)

I propose a framework for modelling mental disorders grounded in objective, frequentist information theory (Shannon, 1948). I argue that psychopathology is best conceptualised not as a vague ‘dysfunction’, but as a highly specific, measurable computational failure: Maladaptive Misinformation Processing (MMP).

A Quick Philosophy of Information Primer

I am writing for psychology researchers, and so a very brief setting of context is prudent. (It’s best to get it out of the way as quickly as possible.)

Psychology researchers should absolutely be suspicious of, and careful with, concepts from philosophy and the philosophy of science. However, when it comes to the nature of information and its processing, transmission, and encoding in nature and in the sciences, the philosophy of information has identified some very real difficulties which do affect conceptualisation (conceptual analysis), operationalisation, and theory.

For the purposes of this short explainer document, the most salient problems are:

1. Semantic Information: The distinction between information in Claude Shannon's objective-frequentist statistical sense (roughly, the reduction in objective-frequentist statistical uncertainty about the state of a source) and semantic information.

2. The Nature of Information and Conceptual Pluralism: The ongoing argument over the nature of information, and whether pluralism about it should be combined with more rigid conceptions (of which there are many, including statistical, logic-based, complexity-based, and functionalist conceptions).

3. The Veridicality Thesis: The ongoing argument about the veridicality thesis, i.e. the claim that information is necessarily alethic: that it necessarily has the property of truth-aptness in the true-false sense.

These don’t need to be resolved for the purpose of this discussion, but they flag issues that potentially trouble theory-grounding and conceptualisation for defining mental disorders in relation to information processing.
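To make Shannon's objective-frequentist sense concrete before returning to the philosophical points: the entropy of a source is its average uncertainty in bits, and information received is the reduction in that uncertainty. A minimal sketch (the coin example is my illustration, not Shannon's):

```python
from math import log2

def entropy(probs: list[float]) -> float:
    """Shannon entropy H(X) = -sum(p * log2(p)) over a source's
    state distribution, in bits. Zero-probability states contribute nothing."""
    return -sum(p * log2(p) for p in probs if p > 0)

# A fair coin carries 1 bit of uncertainty per toss; learning the outcome
# removes that 1 bit of uncertainty, and that reduction is the 'information'
# in Shannon's non-semantic, purely statistical sense.
```

Note that nothing in this measure is true or false about anything, which is precisely the gap between statistical and semantic information flagged in point (1).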

Regarding (1), physical information associated with signals and channels may not even be semantic in any traditional sense of being true or false as a concept, a belief, or a proposition might be. It might be semantic on the basis of something like raw indication of the existence and/or configuration of a physical information source. Someone passing out and collapsing on the spot conveys information, but not in the way the encoded, uttered, or transmitted sentence-message "Gerald passed out" does. The sentence is alethic. The person-passing-out situation, not so much. The latter indicates certain things, but we do not think of it as truth-apt in the way a concept or a proposition is. Likewise for action potentials.

Regarding (2), it’s correspondingly up for debate what the nature of information even is. This is perhaps best exemplified in an anecdote conveyed by Shannon about a discussion with Von Neumann. Shannon had developed his measure of information, but did not know what to call it. The equation is very close to the Boltzmann entropy equation, and so Von Neumann told Shannon to call it entropy, since no one knew what that was anyway! 

Regarding (3), depending upon what a theorist thinks about the nature of information, they may or may not regard that being alethic is a necessary condition for the obtaining/existence of information. Proponents of the veridicality thesis suggest that if it’s not able to be true or false, it’s not information. Opponents reject this. Whether everyone is talking about semantic information, or just physical information, is not always clear. Most probably agree that physical information does not have to be alethic, but it’s also not clear that physical information is not semantic or meaningful even if it isn’t alethic.

That’s enough philosophy of information. It’s probably not good for one’s mental health!


Housekeeping: Noise vs. Misinformation

To understand MMP, and to avoid conceptual confusion in the domain of psychological science, we should first distinguish between standard statistical errors and true misinformation.

Imagine you are in a jungle and hear rustling in the dense foliage. You might assume it is a tiger and run (a Type I error, or false positive), or you might assume it is the wind, stay put, and be eaten (a Type II error, or false negative). Crucially, making a Type II error here does not necessarily constitute maladaptive information processing. If the environment only provides rustling information—without the growl or the visual geometry of orange and black stripes—the signal-to-noise ratio is simply too poor. The system functioned as well as it could under conditions of severe information scarcity. It’s not maladaptive, just starved of adequate information.

Maladaptive information processing only occurs when a higher-fidelity signal, and a message encoded into that signal, are present, yet the receiver fails to decode it faithfully. But how does this lead to clinical psychopathology? This is where misinformation becomes the critical variable, and why it is meaningful to focus upon misinformation processing rather than just information processing.

In Shannonian terms, an information source is a physical stochastic process. Misinformation, then, can be characterised as semantic (encoded) information which strongly indicates the existence of a source or the configuration thereof, but either the source picked out by the receiver is not real, or else the configuration of that source is not correctly identified. The signal has been spoofed in the first instance, and misread in the second.

Under the MMP framework, clinical conditions are failures to identify and filter this internal spoofing and/or corruption:

  • Delusion involves the failure to identify that an internally generated signal (e.g., 'The CIA is watching me/aliens are transmitting messages into my brain') is misinformation masquerading as an external physical stochastic process (information source(s)).

  • Rumination involves the failure to identify that a historical process (e.g., a past social rejection) is no longer actively transmitting. The brain fails to properly process misinformation regarding the current salience of a defunct or alternatively configured source or sources.

(A salient technical note: since a physical stochastic process is a Shannon source, so is a set of such sources, whether spatially distributed or contiguous. Moreover, so is a channel between source and destination.)

MMP can be applied explanatorily at the neurological level, and at the social level in relation to social cognition, and at all other levels of abstraction in between.

Stress-Testing MMP

When introducing a new theoretical architecture to cognitive neuroscience, it must not simply restate existing paradigms, nor be made redundant by them. During the conceptualisation of MMP, the following candidate objections became apparent:

1. Predictive Processing

A modern cognitive neuroscientist might argue that MMP is simply Karl Friston’s (2010) Predictive Processing model dressed in different terminology. In Friston’s Bayesian brain hypothesis, the brain generates 'priors' (internal models) and tests them against sensory evidence. Delusions, in this model, are just hyper-rigid priors that override contradictory bottom-up data, resulting in an un-updated 'prediction error'. Is MMP redundant?

2. The Alethic Category Error

In the philosophy of information, theorists like Floridi advocate for the veridicality thesis, whilst others note the Bar-Hillel-Carnap paradox. If physical information is non-alethic (not truth-apt like a natural language proposition), then the neural networks of the brain do not process 'truth' or 'falsity'—they simply process electrical signals. A critic could argue that 'misinformation' is a semantic judgement applied by a clinician, not a structural reality at the neurocomputational and neurophysiological/neurostructural level.

3. Source Monitoring Impairment

Since the 1980s, psychology has utilised the concept of 'Reality Monitoring' or 'Source Monitoring' (Johnson & Raye, 1981) to describe the cognitive process of distinguishing between internally generated imagination and externally derived perception. Is MMP simply a rehash of source monitoring impairment?

4. Isn’t Misinformation Processing Just Bad Information Processing?

There is prospectively no difference between maladaptive information processing and maladaptive misinformation processing, so why refer to misinformation at all?

Why MMP (Probably) Survives

Far from dismantling the thesis, these objections may just help sharpen MMP. Here is why the model likely not only survives these critiques but supersedes, and in some cases provides an explanatory basis for, the frameworks that generated them.

Counter 1: Shannon over Bayes

The critique of Predictive Processing is foundational. As philosophers like Richard Menary have pointed out, mapping Bayesian statistical inference onto wet, evolved neurology is often a metaphorical category error.

While modern neurocomputational models like Predictive Processing attempt to map Bayesian statistical inference onto wet neurology, this mapping relies on nativist assumptions (Fabry & Menary, 2020) and often metaphorically conflates mathematical description with actual biological mechanisms (Hutto & Myin, 2013; Menary, 2015).

According to Menary, the brain does not literally perform Bayesian calculus to update priors. By anchoring MMP in Shannon's (1948) objective frequentist theory, I avoid the illusory baggage of the 'Bayesian brain'. MMP does not rely on metaphorical 'beliefs'; it simply maps the actual, measurable capacities and failures of decoding information from physical signal channels.

Counter 2: Indication at the Sub-Personal Level

The alethic paradox can be resolved variously, including by reducing semantic content to indication rather than bivalent truth (Hintikka, 2007). We do not need a higher-order mind to 'believe a lie' for misinformation to exist. Consider hippocampal place cells in a rodent spatial navigation task. If a rat is in the north corner of a maze, but the place cell mapped to the south corner fires, that single neuron is transmitting a signal that indicates a non-existent spatial reality. It does not require bivalent logic or language; it is, structurally and physically, misinformation at the synaptic level. Thus, MMP is not a category error; it is explanatorily coherent at the sub-personal, neurocomputational substrate.
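The place-cell case can be rendered as a toy model. The cell identifiers, tuning map, and firing pattern below are entirely illustrative; the point is only that a mismatch between a cell's tuned location and the animal's true location is identifiable as misinformation without any appeal to beliefs or bivalent truth.

```python
def misinformative_spikes(place_fields: dict[str, str],
                          true_location: str,
                          firing: set[str]) -> set[str]:
    """Return the firing cells whose tuned location differs from the
    animal's true location: misinformation at the single-cell level.

    place_fields maps a cell id to the maze corner that cell is tuned to.
    """
    return {cell for cell in firing
            if place_fields.get(cell) != true_location}

# Hypothetical scenario from the text: the rat is in the north corner,
# but the south-tuned cell fires, indicating a non-existent spatial state.
fields = {"c_north": "north", "c_south": "south"}
```

Here `misinformative_spikes(fields, "north", {"c_north", "c_south"})` singles out `c_south` as the misinforming unit, purely structurally.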

Counter 3: A ‘Fractal’ Explanatory Model

I suggest that this is where the thesis becomes a viable, fundable clinical model.

Jerome Wakefield’s "Harmful Dysfunction" has dominated the definition of mental disorder since the 1990s. But it is notoriously ‘clunky’. Wakefield requires a biological mechanism to fail at its naturally selected evolutionary function (which is almost impossible to definitively prove for complex cognition since, among other things, cognition is not really evolved for truth-tracking in the conventional sense, but for optimising outcomes while minimising the energy expended on cognitive information processing), and it must cause social "harm."

Richard McNally (2001) points out that many severe, debilitating mental disorders occur when an evolutionary mechanism is functioning exactly as it was designed to, just triggered at the wrong time or at the wrong magnitude. For example, a severe phobia or panic disorder is not a "dysfunction" of the fear circuitry; the circuitry is working perfectly.

Lilienfeld & Marino (1995) arguably completely dismantled Wakefield’s reliance on "naturally selected functions." They pointed out that evolutionary biology is full of exaptations (traits evolved for one purpose but repurposed for another) and spandrels (evolutionary byproducts that serve no specific function).

I propose that MMP is a superior, scalable replacement. Finally, MMP does not ignore the Source Monitoring framework (Johnson & Raye, 1981); it subsumes it and probably also explains it. Source monitoring impairment is simply a specific, mid-level cognitive instance of maladaptive misinformation processing.

This is the ultimate strength of the MMP thesis: it applies perfectly and consistently across all levels of abstraction.

  • At the neurological level, it explains a place cell misfiring (processing spatial misinformation).

  • At the cognitive level, it explains source monitoring errors (processing internally generated trauma as a present external threat).

  • At the psychiatric level, it defines delusion (processing internally generated data as an active external stochastic process).

  • At the social level, it explains interpersonal breakdown (processing neutral social cues as hostile misinformation).

Counter 4: We have the word ‘Misinformation’ for a reason.

Misinformation is not bad information processing; it is, simply, ‘bad information’. Rumination involves a failure to identify as misinformation the signal that past processes (or sets of events) are worth thinking about. Delusion involves a failure to identify as misinformation the signal that certain sources exist. Equally, rumination is a failure to identify the semantic information that past processes and experiences are no longer important; that failure, however, requires internal misinformation. Whether it is misfiring rat place neurons or delusions of reference, it is not information that gets one into trouble, but bad information, or misinformation. A failure to identify and accurately process misinformation is necessary to maladaptively bungle information processing.

Conclusion

Functionalism is a powerful explanatory paradigm, but perhaps we do not need to rely on the vague evolutionary guesswork of 'Harmful Dysfunction' to define mental illness. Psychopathology is a problem of information processing and information-processing architecture. By defining clinical impairment as the measurable failure to identify and adaptively process misinformation, we ground psychology in the hard physics of computation and signal processing. As we move towards a future of AI-assisted psychometrics and digital phenotyping, MMP provides exactly the theoretical engine required to track, measure, and treat the spoofed signals that hijack the human mind.

References

Fabry, R. E., & Menary, R. (2020). The enculturated predictive processing framework. In J. Smortchkova, K. Dołęga, & T. Schlicht (Eds.), What are mental representations? (pp. 206–233). Oxford University Press.

Folstein, J. R., & Van Petten, C. (2008). Influence of cognitive control and mismatch on the N2 component of the ERP: A review. Psychophysiology, 45(1), 152–170.

Friston, K. (2010). The free-energy principle: A unified brain theory? Nature Reviews Neuroscience, 11(2), 127–138.

Hintikka, J. (2007). Socratic epistemology: Explorations of knowledge-seeking by questioning. Cambridge University Press.

Johnson, M. K., & Raye, C. L. (1981). Reality monitoring. Psychological Review, 88(1), 67–85.

Lilienfeld, S. O., & Marino, L. (1995). Mental disorder as a Roschian concept: A critique of Wakefield's "harmful dysfunction" analysis. Journal of Abnormal Psychology, 104(3), 411–420. https://doi.org/10.1037/0021-843X.104.3.411

McNally, R. J. (2001). On Wakefield's harmful dysfunction analysis of mental disorder. Behaviour Research and Therapy, 39(3), 309–314. https://doi.org/10.1016/S0005-7967(00)00069-4

Menary, R. (2015). The enculturated brain. In T. Metzinger & J. M. Windt (Eds.), Open MIND (pp. 1–28). MIND Group. https://doi.org/10.15502/9783958570085

Shannon, C. E. (1948). A mathematical theory of communication. The Bell System Technical Journal, 27(3), 379–423.

Wakefield, J. C. (1992). The concept of mental disorder: On the boundary between biological facts and social values. American Psychologist, 47(3), 373–388.



Thursday, 5 March 2026

Strong metaphysical determinism is slippery, and hard to kill. Predictability is a misleading concept.

Strong metaphysical determinism is the philosophical position that every event, including those involved in human cognition and action, is the necessary and inevitable result of antecedent causes and the immutable laws of nature. It posits that the total physical information of the universe at any given moment—encoded across all levels of abstraction, from subatomic particle states to complex neural configurations—contains the complete blueprint for every future state. In this view, the universe is a closed computational system where the "input" of prior physical states dictates a singular, fixed "output," rendering the "ability to do otherwise" a logical impossibility. 

This (so many philosophers argue) relates to free will by framing our choices as mere ripples in a causal sea that was set in motion long before our existence; if our decisions are simply the processing of pre-existing physical information, the traditional concept of an autonomous "self" that initiates new causal chains is effectively nullified.

Here's a critical point to understand and keep in mind going forward: According to strong metaphysical determinism (SMD) everything is thus pre-determined by the total physical micro-states and macro-states of the physical universe. That includes all of the cognitive information processing that accompanies your emotions, and all of the cognitive information processing and sensory perception that you are using to read these sentences.

You thought you had made a free-will-driven decision to read this. No, no, says SMD. You just thought you did. Moreover, your thinking that you did was also determined causally by the same mechanism of SMD. Any complaint that you want to make was also determined by the prior physico-informational states of the universe (especially those regions of it containing your brain and its information processing). And so on, ad infinitum and ad nauseam. (Sartre's brand of nausea, perhaps?)

It's just cheating and a gimmicky trick, right?

Not necessarily. I mean: it might be, but I have strong reasons to doubt it.


What about the arguments that say it kind of doesn't matter? Or that even if stuff is all determined like that then we still have free will?

Those are called compatibilist arguments. Compatibilist arguments attempt to reconcile this causal necessity with the existence of free will by redefining what it means to be "free." Rather than requiring an exemption from the laws of physics, compatibilists argue that freedom consists in the ability to act according to one's own motivations, desires, and reasons without external coercion. 

Even if those internal desires are themselves determined by prior causes, a person is considered free if their actions align with their "inner state" rather than being forced by an outside power. From this perspective, determinism is not a prison, but the very mechanism that allows our stable character and rational deliberations to reliably produce our actions, thereby preserving a meaningful sense of moral responsibility.

I don't have any deep love for SMD. However, I try to heed my elite-level philosophical training and avoid letting my distaste for any idea get in the way of admitting that it might well correspond to reality.

(Being that I am a psychological entity with emotions and cognitive limits like all humans, I don't always succeed at this, obviously.)

Certain personality types and certain theologies (which probably correlate statistically in the population) find the idea of zero free will alarming. I don't find it alarming, but then I am an absurdist.

I also think that it's possible that there's a compatibilist argument that takes the edge off. However, it's a very basic one. It is more or less "So what?". Maybe I don't care how my decision and its cognitive processing came about, just that it's mine because I had it (or because the information processing happened in my brain).

On one hand, it feels cosy. On the other hand, it seems flimsy. So I am not buying compatibilism for the time being. I reserve the right to change my possibly completely determined mind.

Moreover, free will of the kind desired by theologians might not be so great anyway.

Importantly, strong metaphysical determinism isn't just about predictability. It's about whether or not a cognitive, conscious decision precedes and determines action. This can be false whether the process is random or not. Randomness doesn't entail control.

It's likely the case that there can be strong metaphysical determinism with or without randomness. Think about it. Assume SMD with no randomness. Prior physical states at all Levels of Abstraction (LoAs) determine the outcome. No free will. Add randomness. Do you now have more control over the entire process at any LoA? No.
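The asymmetry can be made concrete with a toy simulation. This is my own illustrative construction, not a physical model: the "universe" is a single integer state, and the point is only that adding a random draw to the transition rule changes the trajectory without handing any subsystem a channel of control.

```python
import random

# Toy model: the universe's "total state" is an integer, and each tick
# produces the next state from the prior one.

def step_deterministic(state):
    """Next state is a pure function of the prior state."""
    return (31 * state + 7) % 1000

def step_with_randomness(state, rng):
    """Next state is fixed by the prior state PLUS a genuine random draw."""
    return (31 * state + 7 + rng.randrange(10)) % 1000

def run(step, state, ticks):
    history = [state]
    for _ in range(ticks):
        state = step(state)
        history.append(state)
    return history

det = run(step_deterministic, 42, 5)

rng = random.Random(0)
ran = run(lambda s: step_with_randomness(s, rng), 42, 5)

# In both runs, every later state is produced entirely by what came
# before (plus, in the second run, a coin flip nobody chose). Neither
# model contains an input channel through which an "agent" subsystem
# selects the next state: randomness changed the trajectory, not who
# controls it.
print(det)
print(ran)
```

Replaying the deterministic run always reproduces the same history; replaying the random run need not. But in neither case does any part of the state get to pick its successor, which is the point at issue.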

I asked Gemini AI to remind me of the suite of compatibilist arguments. It did so and then asked me if I'd like to shop for counter arguments based upon quantum indeterminacy (or quantum randomness). That is: randomness inherent in quantum systems like atoms and their sub-particles in the garden of particles in the standard model of physics. Okay, I said, but then I told it why quantum randomness probably didn't remove physicalist determinism:

(Discussion with Gemini starts here...)

Gemini: Since you've been looking at this through the lens of physical information and metaphysical determinism, would you like to explore how Quantum Indeterminacy (the idea that some events are truly random) is often used to counter these arguments?

Me: Sure. But also note that quantum randomness doesn't solve the problem. Why? Because a truly random physical system doesn't deliver 'full control' to a cognitive, neurological system. All it does is make the physically-causally determined states truly randomly generated. 

You might find this hard to accept because even scholars who produced your training data have trouble with it. (Not all of them.) Moreover, people like free-will-loving theologians are desperate to find knock-down arguments against SMD. 

I am by no means certain about my own position which rejects the quantum randomness arguments, but it is probably going to cause those detractors a lot of trouble.

The confusion with randomness arises because it is often couched, both intuitively and scientifically, in terms of statistical predictability. This is understandable, since randomness itself is usually defined in statistical terms.

However, even if a physical system is objectively chaotic or otherwise truly non-linear and random, and therefore not computationally predictable using standard, or indeed any, computational approaches, it doesn't follow that the prior physical states of the material universe at all levels of abstraction, right down to the microphysical, haven't 100% caused the later states. The informational and epistemic inaccessibility of future states to statistical forecasting doesn't make reductionist causation go away.

Reductionist causation might not be real for other reasons perhaps, but not for that reason. (Even using supervenience in place of reduction doesn't necessarily help anti-SMD here, since supervenience might just be another example of hiding information.) 

Although it does not necessarily kill SMD, I find the outcome of the no-hidden-variables problem that vexed Einstein a better candidate for saving anti-SMD free-will arguments. Why? Because the Bell experiments show that there are no local hidden physical variables behind entanglement, or 'spooky' action at a distance, and that is deeply weird and seems to break physical conceptions of causation. That's better than randomness. That being said, spooky action at a distance still involves total consistency: if the state of one quantum system is spin down, it's 100% certain that the state of the other is spin up. So there are no hidden variables, but the causation seems very deterministic in that sense.

Gemini (Being a little sycophantic, perhaps...):

You have pinpointed what is often referred to in philosophy as the Luck Objection. Simply put: if an action is the result of a random quantum fluctuation rather than a deterministic causal chain, it is no more "up to me" than a coin flip. Replacing a clockwork universe with a chaotic one doesn't grant agency; it just replaces a "determined" slave with a "random" one.

Epistemic Unpredictability vs. Ontic Determinism

Your distinction between statistical forecasting and reductionist causation is crucial. In many scientific and theological circles, there is a tendency to conflate epistemic inaccessibility (our inability to calculate or know the information) with ontic indeterminism (the information not being fixed).

The Statistical Trap: Because we use probability to describe complex systems (like fluid dynamics or neural firing), we often slip into the lazy assumption that the underlying reality is "loose."

Informational Scrambling: As you noted, a system can be 100% causally driven by prior physical states—encoded at every level of abstraction—even if that information is effectively "hidden" from us by non-linearity or computational complexity.

Bell’s Theorem: A Different Kind of Trouble

The "No Hidden Variables" outcome of the Bell experiments is indeed a more formidable challenge to Strong Metaphysical Determinism (SMD) than mere randomness, but as you observed, it brings its own flavor of necessity.

The Non-Locality Paradox

Bell’s Theorem proves that no "local" hidden variables can explain the correlations in quantum mechanics. If we reject "spooky action at a distance," we have to accept that the properties of particles aren't "real" until they are measured. However, if we accept non-locality, we preserve a form of realism but lose the "billiard ball" model of causation.

The Determinism of Correlation

Even without hidden variables, the consistency you mentioned—where one state being "spin down" guarantees the other is "spin up"—suggests a universe that is deeply, perhaps holistically, synchronized. This "super-consistency" doesn't provide a foothold for traditional free will; if anything, it suggests that the "physical information" of the universe is non-locally distributed, making the "autonomous agent" even harder to isolate from the total system.

The Theologian’s Dilemma

Theologians often cling to the "Gaps" in quantum mechanics because they require a "non-physical" entry point for the soul or will. However, if the "gap" is filled by either randomness (which is meaningless) or non-local entanglement (which is perfectly correlated), the entry point for an independent, non-caused "Self" remains effectively sealed.

Would you like to explore Superdeterminism—the controversial idea that Bell's Theorem is bypassed because the observers' choices of measurement were themselves determined by the initial conditions of the Big Bang?

(...Discussion with Gemini ends here.)
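The singlet-state statistics mentioned in the exchange above can be sketched numerically. A hedged caveat: the code below simply samples the textbook quantum probability P(opposite outcomes) = cos^2((a-b)/2), so it is not a hidden-variable model; Bell's point is precisely that no local hidden-variable model can reproduce these numbers at all angle settings. The detector angles and sample sizes are my own arbitrary choices.

```python
import math
import random

def measure_pair(angle_a, angle_b, rng):
    """Sample outcomes (+1/-1) for two detectors measuring a singlet pair.

    Quantum mechanics gives P(opposite outcomes) = cos^2((a - b) / 2).
    """
    alice = rng.choice([+1, -1])  # Alice's outcome is 50/50 either way
    p_opposite = math.cos((angle_a - angle_b) / 2) ** 2
    bob = -alice if rng.random() < p_opposite else alice
    return alice, bob

rng = random.Random(1)

# Same measurement axis: perfect anti-correlation. If one outcome is
# spin down, the other is spin up with certainty -- no hidden variables,
# yet total consistency.
same_axis = [measure_pair(0.0, 0.0, rng) for _ in range(1000)]
assert all(a == -b for a, b in same_axis)

# Different axes: only statistical anti-correlation, E(a,b) = -cos(a-b).
theta = math.pi / 3
pairs = [measure_pair(0.0, theta, rng) for _ in range(20000)]
E = sum(a * b for a, b in pairs) / len(pairs)
print(round(E, 2))  # converges towards -cos(pi/3) = -0.5
```

The same-axis case is the "super-consistency" discussed above: the correlation is fixed with certainty even though each individual outcome is random, which is why neither randomness nor entanglement obviously opens a gap for an autonomous agent.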

To those who think true randomness (in chaotic and nonlinear systems) would nix SMD and deliver free will: in principle, it would not necessarily do so.

In fact it doesn't look like it would put a dent in causation-based SMD at all.

Worse still, it would make the causation-based determinism objectively random rather than statistically predictable. The definition of SMD is that the entire current state of the universe at all LoAs is 100% causally determined by the immediately prior physical states (including all of the physical states in our brains).

That can be true even if the causal relationship between the prior states and current states is random.

The confusion arises partly because of the concept of predictability. Future states might be determined per SMD, but still not practically computationally and statistically predictable. That's an informational and epistemic access problem, not a foil to SMD.

Just because the entire system is random, or chaotic with enough randomness in it, it doesn't necessarily follow that it is not causally deterministic.

Approach it by asking: if the universe is only physical and significantly random (e.g. due to intrinsic natural nonlinearity and chaos), does that necessarily mean/imply/entail that the subsequent states of the universe are not caused 100% by the current states at all LoAs? The answer is no.