Monday, 5 May 2025

Therapists are not AI-proof. Why? Because the nature of an AI therapist changes everything, including the essential dynamics of rapport building.



Does an AI therapist or counsellor need the same set of soft skills and rapport building capabilities/practices as a human therapist?

It's very unlikely.

In fact: no. It doesn't.

But it can deploy near flawless soft skills anyway.

Rapport building is not necessary because human trust is not needed in such a situation. The AI GPR (general-purpose robot) is simply reliable and non-judgmental.

Therapists need rapport building because the client knows they're dealing with a human with human emotional and moral habits and attitudes, and with all of the properties which make negotiating asymmetric interpersonal relationships difficult. Judgementalism. Moral umbrage. Superior intellect. Condescension. Intrinsic bias. Patronising views. Politics. Gender bias. The list goes on.

Human clients - including young people - know that most such dynamics and issues do not matter with an AI GPR, but that the AI GPR can still provide enormous epistemic-therapeutic resources and emotional support, including emulating empathy if necessary.

The client doesn't have to worry about what the therapist really thinks about them, but can still rely upon it having better knowledge, skills, and engagement than most error-prone and flawed therapists.

Forbes overestimates how AI-proof teaching roles and therapy roles are...



Philosophy and perhaps performance art might be an exception because of the abstract and complex nature of many of the concepts involved. But all other teaching roles which involve teaching mostly quantitative and procedural skills (law, finance, most sciences) are vulnerable to replacement with an AI GPR.

I also think that the therapy role optimism is misplaced. There are several reasons for this:

1. Iatrogenic injury

Iatrogenic injury is caused not only by physicians but also by psychiatrists and psychologists, and the rate of iatrogenic injury is very high. AI and AI GPRs are far less likely to make the kinds of errors that lead to iatrogenic injury. That reason alone is enough.

2. Therapist Quality

A lot of human therapists might not cause iatrogenic injury, but are just not very good, and figuring this out can take a long time and cost money. Quality control is much easier with AI agents and AI agentive GPRs.

3. Therapist Fit is Solved

It is not easy to find a human therapist who is the right fit. This IS something that can be solved by AI which a) has massive resources for assessing what the client needs and b) can use almost perfect knowledge of all salient theories and methodologies in care delivery to adjust its approach and response, constantly and near-flawlessly measuring behavioural and emotional feedback from the client.

The AI therapist can change to fit the client's needs easily and dynamically.

4. Rapport building is NOT NEEDED

Rapport building and human trust are unnecessary for a human client of an AI GPR because human clients - including children - know that they don't need to worry about the social and interpersonal safety and comfort of the therapist. Assuming otherwise is a mistake. The human client knows that they will get all of the answers and guidance required on the basis of far more complete knowledge than a human therapist can offer, without any need to worry about human awkwardness, judgement, moral disgust, or any kind of ick factor.

The AI GPR therapist doesn't have to convince the client that it's their friend and likes them, because the client doesn't need to care about it when they're not being assessed by a human. This doesn't necessarily negatively affect the quality of therapy, and to think otherwise is almost certainly a mistake.

The need for human rapport is just an interpersonal roadblock or bottleneck that is removed with an AI therapist. Nonetheless, a good AI GPR will have the ability to induce calm by way of very good tone, emotional responses, and gently spoken assurance. The massive available expertise levels alone are comforting. The client will be aware that the AI GPR has vastly more resources to draw upon and can do so quickly and accurately. Integration with facial recognition systems that read facial cues by using face image deep learning systems and similar systems for body language already exist.





Thursday, 1 May 2025

Absurdist scientism will save you. I am pretty sure.

Be a positive-scientistic physicalist and a utilitarian, Ayerian-neo-sentimentalist, absurdist like me.

You know you want to. 😁

What? How does the utilitarian, Ayerian-neo-sentimentalist absurdist thing work?

Well, I am not sure I am certain how it would work exactly, BUT here's a red hot try...

The felicific calculus is a bust, and so I am left with just 'optimise/maximise pleasure' and 'minimise suffering' (where the latter is a means to the end of the former Epicurean, hedonistic outcome). But how do I know what is a right-actiony and 'good' way to reduce suffering and maximise pleasure? Like Ayer I tend to think that propositions probably won't help me describe and quantify it - nor even give me any qualitative insights - and that's probably one of many reasons why the felicific calculus is a chubby no go. 

So, I have Hume's observations about the sentiments of 'yay' versus 'boo' to fill the functional role of telling me what suffering is. That, coupled with Hume's other important view that we don't reason so much as emote. Of course, these don't seem to be terribly reliable ways of determining anything so allegedly important as morals and right action. (But at least I am not trying to pull Kantian transcendental good will out of my metaphysical patootie.)

So, one can see where the absurdism comes in, but that also doesn't seem to be a very good and intellectually satisfying way of rounding off a method of not being an awful person to be around. Therefore, I suggest the scientism is pretty important as a deployable deployable. 

How?

Sam Harris is sort of effectively trying to make Kant into a naturalist, and although that's brave - it's chock full of the naturalistic fallacy and puts too much dependence upon evolved psychology. On the other hand, it does have the benefit of being nicely scientistic, and so that's a hint.

(In fact I have no idea what Sam Harris is trying to do, but in my experience Buddhists are very confused, and so I am just going to deploy more absurdism about it. Don't ask the fish about water.)

Loverly scientistic Hypothesis/posit: Suffering is whatever the sciences of evolutionary psychology, neuroscience, behavioural psychology, behavioural economics, social psychology, psychiatry, and social science manage to agree that it is. 

It's also anything that makes the average me averaged with the average everyone else subjectively want to be diagnosed as a depressive and suicidal.

'Boo' to it, say Hume and Ayer and I all together in our little pea green boat.

Absurdism gets another look in because it's quite possible that strong metaphysical determinism is also true and was maybe true the whole time. (This might still hold if time is an illusion). Potentially bummerifically, then, however, it would also seem to follow that making meaning in accordance with Camus' views is both metaphysically determined to be necessary and metaphysically determined to be ridiculous. All at the same time.

Still. For moral-sounding imperatives it's better than 'My imaginary friend says you should' and the categorically arbitrary imperative.

The most important thing to keep in mind, however, is that if you're not a scientistic absurdist then the Flying Spaghetti Unicorn is going to have to smack your bottom in the dungeon of eternal fluffering. 

You know that you definitely should believe this because I had a very realistic fever dream that Joseph Smith, Satan, the angel Gabriel, Buddha, Zoroaster, Ra, Rasputin, and Puff the Magic Dragon told me that the Flying Spaghetti Unicorn would make anyone denying their existence suffer in a jello hell that is totally a bazookillion times worse than anything that any theist's imaginary friend can throw you into. 

Therefore, it obviously kind-of-sort-of necessarily logically follows - per Pascal's potty wager - that you had better believe in the Flying Spaghetti Unicorn too. Because the jello hell - although ensconced in a possible world that has a big fat k-distance value - makes hell look like a summer camp.

Therefore, I win.

I told you absurdism would save you.

Here's a nice AI absurdist rescue vehicle to make you feel better after reading that...


Thursday, 24 April 2025

If I were tasked - as a philosopher of information and cognitive scientist - with delivering the next major AI inflexion to get truly closer to AGI...

As I have said elsewhere previously, scalability of compute is not enough to deliver inflexion points in technology progress towards AGI.

The brain has a great variety of neuronal and glial cell types. Combining those types in Hebbian/neuroplastic neural circuits and Hebbian/neuroplastic assemblies of neurons makes for even more heterogeneity of structure, and therefore of function. LLM adaptors and transformers alter the architecture of ANN/LLM machine learning systems in such a way as to make the structure/function coupling of such systems more heterogeneous, but what is likely most required is new 'neuron' types (entirely new ML models and architectures) and new ways of emulating neuroplasticity.

In a real and powerful sense adaptors which add significant new functions and new ways of doing functions are analogous to new neuron types. That's a big hint, and the approach to new 'neuronal' architecture delivering new functionality must be further pursued.
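To make the analogy concrete, here is a minimal sketch of the bottleneck-adapter pattern that post-hoc adds a new trainable function to a frozen network. It is a NumPy-only illustration, not any particular library's implementation; the dimensions, initialisation scale, and class name are illustrative assumptions.

```python
import numpy as np

class BottleneckAdapter:
    """Down-project, nonlinearity, up-project, residual add.
    W_up starts at zero, so at initialisation the adapter is an exact
    identity: inserting it leaves the frozen base model's behaviour
    intact, and new function only appears as the adapter is trained."""

    def __init__(self, d_model=16, d_bottleneck=4, seed=0):
        rng = np.random.default_rng(seed)
        self.W_down = rng.normal(scale=0.1, size=(d_model, d_bottleneck))
        self.W_up = np.zeros((d_bottleneck, d_model))

    def __call__(self, h):
        # Residual connection around a small nonlinear bottleneck.
        return h + np.maximum(0.0, h @ self.W_down) @ self.W_up

h = np.random.default_rng(1).normal(size=(2, 16))   # a batch of hidden states
out = BottleneckAdapter()(h)                        # identical to h at init
```

In real systems the adapter sits inside each layer of a frozen transformer; the point of the sketch is only that a small, separately trained module grafts a new functional role onto existing circuitry, much as a new neuron type would.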

The good news is that multiple realisability is evidently at least partly true. ML/ANNs don't have to work exactly like human neurology to produce superior results to the human brain, just as an F-16 doesn't have to produce flight like a sparrow does to leave the sparrow in the dust. However, they likely need more ways to emulate neuronal structure-function heterogeneity and neuroplastic combination thereof than they currently have.

WHERE WOULD I START TO GET THE NEXT MAJOR INFLEXION POINT TOWARDS AGI?

What is the big, obvious hint?

[W]hatever the human brain does to provide reasoning and the ability to hypothesise: (further) emulate that.

Mimicking nature within the permissible limits of our technology and materials seems to be a winning strategy. It worked for machine learning. What Geoffrey Hinton and his collaborators did in the 1980s-90s didn't initially look like it was working, but now we have ChatGPT 4.5 and generative video which makes entire production studio departments completely redundant (and let's be honest: most visual artists are largely out of a job.)

So, whatever the human brain does to provide reasoning and the ability to hypothesise: emulate that.

How? Emulate Neurological Structural-Functional Heterogeneity, and Neuroplasticity

1. Emulating neuronal and neurological structural-functional heterogeneity including emulating neuron types and brain modules, at least in terms of functional roles (since exact emulation of function is perhaps not as straightforward as hopeful multiple-realisability might suggest.)

2. Better emulating Hebbian neuronal and neurological neuroplasticity.
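Point 2 above can be sketched in a few lines. This is the textbook Hebbian rule with a decay term, not a claim about how any production system implements plasticity; the learning rate and decay values are illustrative assumptions.

```python
import numpy as np

def hebbian_update(w, pre, post, lr=0.1, decay=0.01):
    """Hebb's rule with weight decay: connections between co-active
    pre/post pairs strengthen ('fire together, wire together'), while
    decay keeps weights bounded (a crude stand-in for homeostasis)."""
    return w + lr * np.outer(post, pre) - decay * w

pre = np.array([1.0, 0.0])        # presynaptic activity
post = np.array([1.0, 1.0, 0.0])  # postsynaptic activity
w = hebbian_update(np.zeros((3, 2)), pre, post)
# only the connections whose pre and post units were both active strengthen
```

Backpropagation-trained ANNs freeze their weights after training; a rule of this shape, applied online, is one of the simplest ways connection strengths could keep reorganising with use.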

How Would I Start (Very Specifically)?


Move #1: Generative imagery plus language processing

The first (obvious) thing that occurs to me is that generative image ML systems and LLMs for text are doing two different jobs, and each job maps fairly coherently onto one of the two large categories of brain function combined in human cognition - mental imagery and language processing:

- ANN generative image systems do something akin to mental imagery (the lateral geniculate nucleus, striate cortex, PFC, and central executive of the brain, among other components), albeit using very different physical apparatus.

- ANN LLMs do something very like what the language centres (phonological loop and central executive) of the brain do. 

The traditional and still current basic model of how the brain combines these looks like this:

Now, except for people with aphantasia (the inability to do mental imagery), all of us should be able to agree that we often think and reason using both language and mental imagery. Recent experiments in neuropsychology support this.

It occurs to me that combining text prompting and image generation is a hint, but not quite close enough. What is required is something more like producing text in images based upon prompts. In other words: the system we're looking for to get started is a generative image system that can accurately produce properly spelled phrases and text in images.

The overarching objective is to emulate whatever human neurology does to 'seamlessly' combine mental imagery and language processing in conceptualisation and reasoning. (Although the existence of aphantasic humans might indicate this is not necessary.)

This has been a difficult ask for a while, but it is evidently not impossible. I would approach the people at Luma Labs and ask them how their Photon generative image system does this very accurately (>95% accuracy) for phrases like "The Chronicles of Xeo Woolfe." It's that system which needs to be considered as a primary candidate for producing a new model based upon mental imagery plus something like mentalese. The idea is to get the architecture of the system to combine mental imagery and propositional/sentential statistical inference. The point is not to make an image from a prompt, but to make an accurate image of a sentence.


While the exact architectural details of the Luma Photon models are proprietary to Luma Labs, it's clear they are leveraging their own advancements in machine learning to power their generative image system. They emphasize a novel architecture, efficiency, quality, and features tailored for creative workflows.

Generative image systems already do what we could fairly call mental imagery: producing many very good visual representations of something requested linguistically and propositionally by synthesising 'remembered' statistical 'representations'.

Move #2: Mental imagery using representations which are not just weighted statistical inferences.


Ideally, the mental imagery component should probably involve more than one conception and implementation of representations: not just a statistical inferential approximation of representation based upon weighted mid-tier nodes.

What's needed next? Concepts. Internal Representation for Full Conceptualisation.


To begin emulating cognitive conceptualisation and concepts, it's likely that what's needed is to emulate what human neurology is doing in Baddeley's model of working memory:



There are no generative image systems currently which can fulfil the following kind of prompt:

"Make a very simple and sparse psychological/neurological scientific diagram of Baddeley's model of working memory including only the episodic buffer, visuospatial sketchpad, and phonological loop."
If there were, it would suggest that this requirement had already been partly answered.

Gemini Flash 2.0 came closest in my experiment:


And Luma couldn't do it:


And neither could ImageFX:


The Difficult Ask: Representation and Integration for Semantic Memory, Procedural Memory, and Episodic Memory.


Semantic and procedural memory emulation will likely be necessary to provide fuller concepts and conceptualisation. This will likely require emulating the human neurological integration of mental imagery with language and concepts, and new ways of representing concepts will likely be necessary. So, again, different approaches to representation in ANNs are the likely requirement. This is the hardest ask, but again I would approach Luma Labs first, or someone with similar models.

The Next Step: Proprioception, Interoception, and What it Feels Like


The next frontier is already being approached by numerous manufacturers of AI-based general purpose robots: the integration of environment navigation and body-image emulation. However, it may well be necessary for some sense of limbic and emotional cognitive processing to be emulated. I am not sure how necessary this latter addition is for reasoning. I suspect it could possibly be undeleteriously left out of the equation, just as the F-16 doesn't need to flap its wings, but this is far from clear. We may end up faced with Hume's challenge that human beings are emotional decision makers, and not reasoning/rational decision makers.

As I have suggested, the first step is to emulate the way human neurology generates mental imagery and concepts, but more importantly to emulate the way it uses these in conjunction with language processing (from the phonological loop) to do conceptualisation and reasoning.


Monday, 24 February 2025

Saturday, 22 February 2025

Open an Imgur account and you can use the Longshot simple social media post tool for free right here.

I got tired of all of the free-trial social media posting and scheduling tools that would severely limit my ability to simply post a given image and some text across the social media platforms that I use: Facebook, X, Threads, Bluesky, Mastodon, Substack, Blogger. So I used LLMs to generate a very basic system (which I had to tweak a bit, again using LLMs) which at least prevented me having to copy and paste to each new social media site.

TO USE

1. Log in manually to all of the social media sites you want to post to in the same browser window using different tabs. *Required
2. Go to this page.
3. Enter the content in the form fields, and use the post buttons to publish.
4. Go to the auto-opened pages and check and post.
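The steps above can be sketched as a small Python helper that builds prefilled compose links for each target site. This is an illustration of the approach, not the tool's actual code: X's tweet intent and Facebook's sharer are documented endpoints, while the Mastodon instance shown is an illustrative assumption (most instances expose a /share path).

```python
import urllib.parse

def share_links(text, url):
    """Prefilled compose links for cross-posting one post to several sites.
    Opening these in tabs of an already-logged-in browser window lets you
    review and publish on each platform without re-typing the post."""
    q_text = urllib.parse.quote(text)
    q_url = urllib.parse.quote(url, safe="")
    return {
        "x": f"https://twitter.com/intent/tweet?text={q_text}&url={q_url}",
        "facebook": f"https://www.facebook.com/sharer/sharer.php?u={q_url}",
        "mastodon": f"https://mastodon.social/share?text={q_text}%20{q_url}",
    }

links = share_links("New post", "https://example.com/post")
```

Each link can then be opened with `webbrowser.open_new_tab(...)`, which lands you on the platform's compose page with the content filled in - the "check and post" step stays manual, exactly as in step 4.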

YOU NEED AN IMGUR CLIENT ID

This is so you don't have to upload your images to this blog, but also don't have to link images by URL - an approach that often fails on social media sites.
 
Go to imgur to set up an imgur client ID. 
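As a rough sketch of what the client ID is used for: an anonymous upload is a single authenticated POST to Imgur's v3 image endpoint, with the client ID in the Authorization header and the hosted link returned in the JSON response. The code below only builds the request rather than sending it, and `YOUR_CLIENT_ID` is a placeholder for the ID you register.

```python
import base64
import urllib.parse
import urllib.request

def build_imgur_upload(image_bytes, client_id):
    """Build (but don't send) the POST request for Imgur's anonymous
    v3 image upload endpoint. After urlopen(), the hosted image URL
    is in the JSON response under data["link"]."""
    payload = urllib.parse.urlencode(
        {"image": base64.b64encode(image_bytes).decode()}
    ).encode()
    return urllib.request.Request(
        "https://api.imgur.com/3/image",
        data=payload,
        headers={"Authorization": f"Client-ID {client_id}"},
    )

req = build_imgur_upload(b"example image bytes", "YOUR_CLIENT_ID")
```

Sending it with `urllib.request.urlopen(req)` and reading `data["link"]` from the JSON gives you a stable hosted image URL to paste into each social media post.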

NO TO imgur

If you don't want to use imgur, there's a less powerful clipboard based version here which uses your local system's clipboard to store the image for you to paste into each target site.

NEXT VERSION

Update 1: Make the post action for each target SM site automatic too. This might take some time as it involves the use of each target platform's API.