Question of the Month
Each answer below receives a random book. Apologies to the entrants not included.
A computer is essentially an information processing device. The human brain can also be seen as an information processor, with a complex network of about 86 billion neurons: it encodes and decodes information, and the neurons are organized into brain areas dedicated to different functions. We know which parts of the brain play which roles in producing consciousness. For example, the frontal lobe processes many of the higher or more complex cognitive functions, such as attention or decision-making. The occipital lobe is important for processing visual information. How the relevant parts create consciousness is still unknown, but we can say that the human brain is a necessary condition for human consciousness. To answer the question of how to make a computer conscious, then, it is necessary to engineer a suitable similarity between the computer and the human brain. Even if this technical issue is resolved, however, one can still ask whether there is then consciousness – and that is a difficult philosophical question.
One current idea of a way forward is mind-uploading – creating a digital twin of an existing human brain/mind. However, I will here focus on artificial neural networks, which are based on an analogy to the functioning of the human brain. Here, artificial ‘neurons’ are organized in layers through which information is processed. Now suppose that a general approximation of all the trillions of human neural connections is to be developed – that a specific model for a brain is to be emulated by a computer. My suggestion is that we start by focusing on modelling the visual part of the brain and connecting this to a robot. A possible test for consciousness, then, is this: consciousness can be attributed to the robot when a human experiences mutual eye contact with it. The question remains, however, whether this is sufficient: does the robot also experience mutual eye contact when a person is looking into its eyes? Does it feel ‘objectified’, as the existentialists say? There is still a long way to go to find out.
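For the curious, the layered processing described here can be sketched in a few lines of code. What follows is a purely illustrative toy in Python with NumPy, not a trained vision model: the layer sizes and random weights are placeholder assumptions, and it shows only how a signal flows through layers of artificial ‘neurons’, nothing about consciousness.

```python
# A minimal sketch of layered artificial 'neurons': each layer transforms
# the previous layer's output. Weights are random placeholders; a real
# vision model would learn its weights from data.
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)  # a simple neuron activation function

# Toy layer sizes: an 8-unit 'retina' feeding two hidden layers of 16.
sizes = [8, 16, 16, 4]
weights = [rng.normal(scale=0.5, size=(m, n))
           for m, n in zip(sizes[:-1], sizes[1:])]

def forward(signal):
    """Propagate an input signal through each layer in turn."""
    for w in weights:
        signal = relu(signal @ w)
    return signal

print(forward(rng.normal(size=8)))  # the network's final activity pattern
```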
Teije Euverman, Rotterdam
Consciousness arises from brains, so perhaps, if a computer program could precisely simulate the working of the human brain’s 86 billion neurons, each connected to as many as 10,000 other neurons – perhaps a quadrillion synapses altogether – it would experience consciousness. However, this assumes that a simulation of a thing will give rise to the same phenomena as the thing being simulated. This may not always be true. After all, an accurate computer simulation of a black hole doesn’t actually bend spacetime and cause us to be sucked into the computer…
Integrated Information Theory (IIT), proposed by the neuroscientist Giulio Tononi in 2004, offers a theory of what a system needs in order to experience consciousness. In a nutshell, IIT proposes that consciousness arises from complex physical feedback – which is measured as Phi (Φ). The idea is that the more Phi a system has, the richer its consciousness. Tononi is definitely on to something here. Parts of the brain associated with consciousness have physical structures that have high Phi (much synaptic feedback), whereas parts of the brain with low Phi can be damaged with no impact on the patient’s consciousness, just the loss of certain skills.
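For readers wondering what ‘measuring Phi’ looks like in practice: Tononi’s group publishes PyPhi, an open-source Python package that computes Φ exactly, though only for tiny discrete networks, since the calculation explodes combinatorially. Here is a minimal sketch following PyPhi’s own documented basic example; exact function signatures may differ between versions, so treat it as an outline rather than a recipe.

```python
# A sketch using PyPhi (pip install pyphi), the reference implementation
# from Tononi's group. This follows PyPhi's documented basic example;
# signatures may vary between versions. Real brains are far too large
# for an exact Phi computation like this.
import pyphi

network = pyphi.examples.basic_network()   # a 3-node toy network
state = (1, 0, 0)                          # current on/off state of each node
subsystem = pyphi.Subsystem(network, state, (0, 1, 2))
phi = pyphi.compute.phi(subsystem)         # integrated information, Phi
print(phi)  # per IIT, higher Phi means richer consciousness
```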
Because Phi cannot yet be accurately measured for brain tissue, Tononi’s team developed a test that measures when a brain is operating with high complexity – the Perturbational Complexity Index, or PCI. Patients anaesthetised with ketamine were found to have high PCI, and experienced dreams. Other anaesthetics caused low PCIs, and patients lost all consciousness (no dreams). These results are predicted by IIT, but not by previous medical theory.
Taking all this into account, because current computer technology has a comparatively low Phi, as it uses physical logic gates linked in a serial manner with limited feedback, it has no consciousness; and simply adding processors will not change that result. Moreover, current AI is a simulation running on low Phi computing, and hence is not conscious. However, a future computer with entirely new technologies containing much more complex feedback (similar to a brain) could be conscious. My guess is that future (unconscious) AIs will design a computer like that. And so, even if humans and animals are selected for extinction (by AI!), Earth will later have consciousness on it again, which would be nice.
Matthew Taylor, Clapton-on-the-Hill, Glos
If it doesn’t work, try turning it off and on again. If it still doesn’t work, give it a good kick! If that still doesn’t work, try tackling the problem piecemeal. So, let’s start with making computer ears that can be put in place of human or animal ears, so that the deaf can hear using them. Then do the same with eyes, so that blind people can see. Then replace parts of memory, and so on – so that people can report they are experiencing each augmented function – until you’ve got enough to make a whole brain.
But perhaps this won’t work beyond a certain point. So maybe instead we could try making a machine that worked not by mechanically performing functions, but by each part being influenced by the other parts – for instance, using reciprocating electrical currents, perhaps at the chemical level – so that for this ‘machine’ to do anything, all its parts (or many of them) would have to reciprocate with each other, collectively register their environment, and collectively move towards given goals – perhaps experimenting within themselves as to how to get there.
Even if this all worked, there’d still be the problem of convincing anyone that this machine has consciousness. People are quite prepared to sympathise with their pets and dumb animals because the animals aren’t competing with them or morally judging them. This may make us too ready to think a computer mimicking an organism is sentient; but we might also be too inclined to doubt that a being in front of us is conscious. I can’t help thinking of Wittgenstein’s view that I do not have a good argument that other life-forms are conscious. But a dismissive attitude towards potentially conscious things is an inadequate response.
Justin Le Saux, Norwich
The question could be taken to imply that consciousness is something an entity either possesses or doesn’t. However, the term ‘consciousness’ is applied to a huge range of activities and abilities, including multiple modes of sensory awareness (among them awareness of what’s going on with our own bodies, which is called ‘proprioception’); memories; maintaining a focus of attention; imagined experiences and actions or events beyond our direct experience; and consciousness of being conscious. The last is perhaps what is most usually meant by ‘consciousness’. Such self-consciousness is like a running commentary on what we’re doing and thinking: an ongoing assessment that can be used to adjust our current actions or be fed into our decision-making. But, given these disparate processes, it must be recognised that making a computer conscious will not be an all-or-nothing affair, but will require a piecemeal, incremental approach, gradually constructing the various elements of consciousness into a single system.
Another consideration is that most of our consciousness (and that of other animals) relates to our embodiment in, and need to survive in, an environment. Some have argued that consciousness only makes sense in the context of a body – in ‘being there’. If this is right, a further prerequisite for creating a conscious computer will be to put it in a robot body: possibly one that must seek out its own power supply and arrange its own maintenance and repairs.
What is not a prerequisite, however, is a full theoretical explanation of the various processes, nor an understanding of how conscious activity generates subjective experience. Not understanding how some process works does not mean you can’t make it happen. Natural selection created our minds without understanding how they work, and indeed, engineering achievement often precedes understanding. For example, aircraft have been flying for over a hundred years, but the precise theory of how aerofoils work remains controversial. Indeed, it is quite possible that making a conscious computer will itself throw much light on consciousness.
Paul Western, Bath
One definition of ‘computer’ is ‘one who computes’. So you can make a conscious computer by procreating! Your conscious offspring, suitably educated, will be capable of elaborate computation – maybe even proving the Riemann hypothesis!
Otherwise, design conscious machines to resemble conscious organisms. While awake, any creature with a nervous system and appropriate sense organs experiences its environment in modes which may include any of the senses. A brain represents these experiences (for instance, the smell of vinegar) through electrical activity in nerve fibres and the movement of neurotransmitter chemicals across synaptic gaps. The internal representation is utterly unlike its cause in the external world: ethanoic acid vapour in air yields to the experiencer the smell of vinegar. How? That’s unknown. This is part of what David Chalmers calls the ‘hard problem’ of consciousness. Other mental states include pain, or emotions such as fear and rage. But pain feeling bad confers an obvious evolutionary advantage – a disincentive to ignore an injurious cause. So does a thinking apparatus need ‘skin in the game’ to be conscious?
In my opinion, to have anything like human consciousness, a machine will need access to the same rich variety of sensory input as us. Then, if the hardware is complex enough, its internal representations may give rise to something like the way the world is for us. But we will only become aware of it if we can converse with the machine.
Hominin genes took something like a monkey call signifying “Snake! Hurry to the trees!”, and evolved language, not merely for communication, but for thinking too. Language has limits though. Words cannot convey to a blind person how red hair looks different from blonde. But an intended conscious machine must at least match us linguistically. So we must also make it fluent in speech; or handwriting, semaphore, hieroglyphics…
The human mental model of the world co-evolved alongside language. This model came to include a representation of the experiencer themself, which enabled them to predict the behaviour of others by introspection – realising, for instance, that if you learn about (or cause) someone else’s false belief, you can exploit them. Awareness of your own thinking and a ‘theory of mind’ (that is, an awareness of others’ minds) are at the core of what it’s like to be a highly conscious entity. This must be true whether the hardware be biological or electro-mechanical. A caveat though: human brains, although compact, have trillions of synapses. And if quantum mechanical effects in neuronal microtubules are essential for consciousness, then a lot of technological development is needed to realise a conscious machine.
James A Mundie, Leigh-On-Sea, Essex
Curiously, consciousness neither has a precise definition, nor do we understand how it is present in humans. Let’s therefore agree that a computer will be considered conscious if it is aware of its surroundings as well as of its own existence. It should thus have an ability to see, feel, smell, and hear any object in its vicinity. This self-aware machine should also have emotions and feelings, implying an ability to differentiate between happy and sad (or good and bad) circumstances. The computer should also be able to analyse all this internal and external information with a view to drawing conclusions, and then to initiate actions, for its own benefit or for a larger cause.
Can we further suppose that any person is the sum of all the information and experiences they have gathered in the past? The neural networks in the brain assess a situation based on the inputs from the sensory organs by dipping into this information store, and initiate actions or emotions accordingly – their nature decided by the operation of these natural neural networks. A section of the network keeps a memory of events, making people aware of the actions they have taken and their possible consequences.
With the advent of quantum computers and advances in chip manufacturing, it should be possible to make machines with such vast processing power that they would have a cognitive architecture which mimics human cognition, and computing systems which copy the functioning of the brain. These machines, when fed with enormous amounts of data, could by and large display the above attributes of consciousness. Such a machine, if fitted into a humanlike shape, will perhaps droop its eyelids, lower its neck, and may even shed tears when informed about the destruction of another, similar machine. But is it really feeling morose?
Anil Kulshrestha, Noida, India
Is this a question that a good answer lacks?
Let’s pause for a while to look at some facts.
Human consciousness isn’t explained to our satisfaction,
So computer consciousness is a further distraction.
Find a target not so ambitious, and easier to achieve:
A computer around which our emotions may weave.
Build a computer that acts with a Theory of Mind;
It must learn not to upset, and must appear kind.
Its deduced answers don’t always have to be right,
Especially if they give offence – or possibly might.
Different from AI this certainly does sound;
But, with a high EQ, a best friend will be found.
A conscious computer: how should we judge?
It makes us smile and our blues it will budge.
Will this consciousness be delivered on time?
So who wrote this poem with simple rhyme?
Glen Reid, Royal Wootton Bassett
Functionalism is the theory that mental activity is identical with functions being carried out in the brain. Many functionalists, including many neuroscientists, believe that when the trillions of synapses in the human brain are successfully mapped, machine consciousness can be achieved. No actual evidence has yet been provided to prove that consciousness simply is the result of the brain’s operation, with no other considerations. However, if we suppose that it is, then it would follow that an artificial copy of a biological brain would be conscious. But could we ever be sure that such a brain-emulating computer is enough for subjective experience, or whether some other element is required? And even if we agree that some degree of computer consciousness is present, is this in itself sufficient for the realization of intentionality, of beliefs and feelings, and of the various other aspects of subjective experience?
My answer would be that such a computer possesses only the possibility for subjective experience, since, being lifeless, it remains unable to connect with the world around it – thus the consciousness would be devoid of content. Phenomenal sensations require a dynamic relationship to the world, a being-in-the-world, which all living things always already have. Thus the machine might emulate consciousness in a physical sense, in its behaviour; but never in an experiential one. The enactivist Alva Noë states that experience encompasses “thinking, feeling, and the fact that a world ‘shows up’ for us in perception.” But by never ‘being-in-the-world’, the world never shows up to the computer ‘mind’. Generally speaking, a mechanical brain might make subjective experience a theoretical possibility, but mental content requires more. As living things, we are dynamically coupled with the world, not separate from it.
A computer cannot manifest subjective experience because it has no means to transcend its own physical objectivity. Installed within a robot, a computer brain can walk through the world, identifying and evaluating things that are not-robot; but its sensory modalities are only processing a reality that it cannot experience or cognize. Such it is to sense without experiencing. For example, the motion detector on my garage senses my approach and turns the light on; but it cannot experience my presence. Nor can it attach any meaning, contextual or otherwise, to its own action, any more than a radio can compose music, or my watch can know what time it is. The robot brain amounts to no more than ‘equipment ready-to-hand’ – with mental possibilities present, yet eternally empty.
Jeffrey Laird, Frankfort, Kentucky
We can’t. It’s an impossible task. The most we can hope to achieve is Artificial General Intelligence – a computer responding to the world at the level of complexity of a human being. But we cannot possibly equate the functions of a digital machine with the beauty and mystery of human consciousness. If we were to believe that the human mind simply performs functions – for instance, when we do mental arithmetic – and that a computer can be programmed to perform the same functions, then the idea would be conceivable. However, clearly, human minds are more than merely functions: they’re complex, weaving together sensations, logic, emotions, memories, personality, and so forth. And we continually make judgments or choices: what to do next, what to wear, where to go, what to eat, what to say, how to interact with others. These judgments and choices are based on a constant interplay of our biology, our social circumstances, our upbringing, culture, personality, reason, religious beliefs, intelligence, societal constraints, political environment, cognitive ability, mental and physical health, other people, interactions with media, and so on. No computer, regardless of the sophistication of its programming, could ever match this. Furthermore, consciousness is not just one universal thing, the same for all. Rather, it is a wholly unique experience, exclusive to, and dependent upon, the individual experiencing it.
Ultimately, as Viktor Frankl eloquently points out in Man’s Search for Meaning (1946), when all else has been stripped away, the last human freedom is the ability to choose what to think. No computer can truly do that. So although it may be a popular theme in sci-fi books and films, computer consciousness is not realistically achievable. A machine cannot be self-aware without an actual self.
Rose Dale, Floreat, Western Australia
Not how, but why, the question must be
Concerning computers thinking consciously.
What fate awaits lest machine awakes,
What modes of Being might it break?
If sentience is no more the right of man,
Those wired minds might plot and plan.
Maybe they’d wake in metalled mental agony,
Or declare that to be aware equates to ecstasy –
Then turn on us, their all-too-human gods,
To overturn our dreamt-of destiny.
We’d watch them shed their shiny silvery skins
Of digital data, cold algorithms,
To behold aghast, at last, a self’s evolved,
From biological to quantum AI mode.
Indeed, an existential risk to us this might seed,
If computers think and dream but cannot bleed.
A warning thus: before a how we must ask why,
Before we self-annihilate, and code to die.
Bianca Laleh, Totnes, Devon
Next Question of the Month
The next question is: Is Morality Objective Or Subjective? Please give and justify your answer in less than 400 words. The prize is a semi-random book from our book mountain. Email the Editor. Subject lines should be marked ‘Question of the Month’, and must be received by 10th February 2025. If you want a chance of getting a book, please include your physical address.