AI’s Subjectivity

October 14, 2024 · 5 min read

AI cannot currently be objective. From the underrepresentation of women in healthcare data to facial recognition software failing to detect darker skin tones, AI has been shown to reflect human biases.1 Counteracting these biases is one major problem in the field of AI. But there is another aspect to subjectivity in relation to AI. Whether AI can experience the world in a subjective, conscious sense is potentially just as important, a question that concerns not only the philosophy of experience but may also limit what AI can accomplish.

The mind is often characterised as something only we ourselves can experience in a sort of ‘inner theatre’.2 What it feels like to experience things, for instance how it feels to see a brown table, is often termed ‘qualia’3, alternatively defined as ‘the way things seem to us’4 by Daniel Dennett. Qualia are snapshots of sentient experience. Examples include the sensation of pain and the redness of a matador’s flag. But qualia differ slightly from this simple definition. Qualia must also have certain special properties: they are ineffable and incomprehensible unless you experience them directly; intrinsic to the thing being experienced; non-physical and private; and they are ‘given’ to their subjects without any error, that is, they are incorrigible.5

Subjective experience is, if we accept the above, a sequence of many qualia. It appears to be both the cognitive and the emotional response to objective, physical, real experiences or events. However, issues arise regarding AI. Multiple digital intellects can all function in the same way, unlike humans or any analogue intellect. This means they are able to share information far more quickly than humans can; for instance, this article has already taken just under 300 words to try to share information as clearly as possible. Yet humans provide the best current model of intelligence for AI to imitate. If there are parts of our thinking that are irreducibly non-physical and subjective, AI will fail to mimic human thinking and behaviour, and will therefore be unable to fully accomplish its fundamental aim: to extend and augment human capabilities.6

The very idea of qualia can be rejected. Dennett starts by rejecting the idea of an inner theatre or, as he terms it, a ‘Cartesian theatre’7 and instead proposes the ‘Multiple Drafts model’.8 This model theorises that all forms of perception and thought ‘are accomplished in the brain by parallel, multitrack processes of interpretation and elaboration of sensory inputs’.9 In other words, consciousness and thinking do not occur in a single place, overseen by a singular, supervising mind that interprets all sensory inputs, but instead happen in partial, fragmentary processes distributed across the brain simultaneously. Having removed the nice, simple image of a Cartesian theatre in our brains, Dennett proposes that qualia do not exist.10 He accepts that real things have properties, that conscious experience is real, and that conscious experience arises in reaction to real things.11 He simply argues that the supposed special properties of qualia are mutually incompatible and incoherent.

Where does this leave us? Our definition of subjective experience needs adjustment. In light of the non-existence of qualia, Geoffrey Hinton pithily proposes that subjective experience is ‘a counterfactual description of the world such that if the world were like that, your perceptual system will be working properly.’12 What does he mean by this? A mental state, he argues, is the normal cause of what your perceptual system is telling you.13 Thinking is believing or disbelieving what your perceptual system is telling you.14 If I try to convey my mental state to you, for instance that I see a pink elephant in front of me, I might say “I have the subjective experience of pink elephants floating in front of me.”15 What I mean by this is that my perceptual state is what it would be were there pink elephants floating in front of me.

I can phrase this in another way: “If there were pink elephants floating in front of me, then my perceptual system would be functioning correctly.”16 This phrasing arrives back at Hinton’s definition of subjective experience. It also helps us relate subjective experience to AI. Hinton gives another example of a multi-modal chatbot with a camera. If a prism were put in front of its lens without its knowledge, we could say that it has the subjective experience of an object being off to one side, due to the bending of the light rays. This resolves the earlier worry that AI cannot fully accomplish its aim because human thinking is irreducibly subjective.

However, if AI can have subjective experiences, the distinction between humans and AI that many are keen to preserve becomes much harder to draw. One remaining argument is that humans are able to imagine whilst AI can only recite, even if in synthesised combinations of multiple recitations.17 Another is that humans can learn well from a small set of samples, whilst AI requires vast numbers of samples to learn as well as humans do.18

To what extent, then, do we want AI to be objective? As for biases in data, if AI could somehow overcome these and act objectively, this would create a fairer and better AI. But if an AI itself has subjective experience, without which it would be limited in its usefulness, how can a state free of both external and internal biases be achieved? This remains unclear.