Essays: The mystery of consciousness
It is hard to think of a more philosophically contentious subject than consciousness (or conscience, as some prefer to call it under a somewhat arbitrary spelling convention - the difference stems from a transcription error that omitted a letter from the original Latin).
The present essay intends to illustrate the current state of the matter, to weigh in on certain contemporary debates at the time of writing (June 2022), and then to advance some postulates that we will take as our conclusions.[1]
Reviews
Human consciousness, of course, had to be the first matter in dispute. Then came that of the other animals, then that of artificial intelligences, and finally that of "inanimate beings". Let us therefore make a brief (and I mean really very brief) review of the first debates before going into the substance of the matter.
Recognition of the human consciousness
It is worth beginning by making explicit that human consciousness cannot be measured, even in other humans, and not even in oneself. Those who accept evolutionary theory (and I say this because unfortunately not everyone admits it, although those who refuse to admit it usually have a poor scientific education and little power of observation of reality) argue that our species evolved social skills as part of the basis of our human constitution, these skills having roots in earlier phases (Australopithecus, Homo habilis, Homo erectus, etc.). For a tribe, it is effective for a person to recognize the consciousness of other human beings because this allows for intellectual as well as emotional empathy with them. Since the tribe is an extended family group, emotional empathy promotes self-sacrifice for the greater good. Intellectual empathy, however, can help us manipulate social situations to our advantage. This was codified early in civilization, a famous example being Sun Tzu's injunctions to know your enemy and to take into account the psychological condition of one's own army and of the adversary. Reading emotions is a survival advantage, but collectively so is having emotions for the other.
Evaluation and limitations
However (and as we will see in all cases), we must insist that human consciousness is not scientifically measurable. How can we know that another person really experiences a state of consciousness, that their neural activity is not equivalent to that of an automaton without interiority? Any experience of a consciousness external to one's own as a sentient being is a projection. And the inner one can also be a projection, since it requires reflexivity. Self-consciousness, the existence of one's own subjectivity, cannot be proved in an external and objective way either. Only one's own consciousness recognizes itself, and it then recognizes others through a certain comparison with itself.
Recognition of consciousness in other animals
Mammals also share a common evolutionary lineage. Most mammals can recognize the body language of other mammals in a basic sense. Canines and felines can respond to the muscular tension and physical posture of a human despite being anatomically different, and this is true not only for domestic animals but also for wild ones. Among other mammals, agonistic encounters sometimes depend on recognizing the intention of another animal, even one of a different species. Is it going to attack, or retreat? What is its mental state?
The brains of all mammals, and indeed all chordates, function in a fairly similar way. It is not exactly crazy to think that animals also understand emotions through a projection (albeit not a reflexive one) of their own mental states and bodily expression. A dog, surely, intuitively "understands" the emotional states of other dogs and can "feel" them as part of its own as well. It is not uncommon for an animal to deliberately "comfort" another animal when it appears to be sad. These behaviors have been studied and support the thesis of animal consciousness and empathy.
Wolves, in fact, cannot read human facial expressions: they do not rely heavily on facial expression to convey emotion, and even if they did, wolf expression differs from human expression. Domestic dogs, by contrast, do notice and can interpret human emotional faces, which points to a selective pressure for this ability in the domestic setting. One could argue that human empathy with animals increased after the domestication of dogs, cats, donkeys and horses: it became more imperative to be able to read the emotions of pets and other helpful animals. One could postulate the same even for cows, pigs, chickens, etc.; that is, even for animals raised as livestock and food sources (rather than for other qualities, as dogs and cats are), it is an advantage to be able to read their emotions in a domestic setting, especially considering that the production of these foods originally took place on small family farms. In this way the human being enlarges his circle of empathy, but let us remember that intellectual empathy is also a weapon.
Of course, animal empathy has its limits. It is very rare for a predator to empathize with its prey (and although it is not completely unknown for a female to "adopt" the offspring of her prey, this is an anomaly rather than the norm). There are other instincts that suppress or override empathy on many occasions. One's own basic physical needs generally take precedence over those of others unless we are talking about immediate offspring. However, this evolutionary mechanism of self-preservation does nothing to devalue the thesis of animal consciousness.
Until recently in modern times, it was very contentious to talk about the consciousness of nonhuman animals. The "consensus" was that only humans possess consciousness. This stemmed in part from Christian doctrine and from later developments, including during the Enlightenment, of the idea of how special man was, in essence by being endowed with reason. If the animal had no reason, its apparent emotions counted for nothing; in any case, massive exploitation was justified. Recently, however, eminent scientists have lent support to generalizing the attribution of consciousness to all mammals, and to opening up the possibility for a vast variety of other animals, including birds, reptiles and possibly fish.
This recognition, however, is also a projection; we could phrase it as: "if we have consciousness, and behave like this, and have such brain and cognitive structures, why wouldn't those with similar brains, cognition and behavior have consciousness too?" To accept it in ourselves and deny it in them is arbitrary. Yet to accept any consciousness at all is likewise a subjective arbitrariness from a methodical-scientific point of view - even one's own.
Artificial intelligence: the substance of the matter
Accelerated developments in artificial neural networks have rekindled interest in the possibility of artificial consciousnesses housed in computer systems.
My dear friend Avengium has expressed deep skepticism about even the possibility of a neural network possessing consciousness. The reasoning runs roughly as follows: as long as artificial intelligence is housed inside silicon logic gates, and controlled by code, we cannot speak of consciousness. The system only does what it is instructed to do, and what it is instructed to do passes through a series of mostly deterministic binary instructions. Avengium's objection is broadly representative of a significant sector of specialists - at least of the previous generation - and was once mine as well.
However, Avengium has at the same time expressed disappointment that many in the cosmological community cannot take Max Tegmark's multiverse proposals seriously. The reasoning for dismissing them is that if they cannot be reliably verified, the thesis is unscientific. Many multiverse theses meet this fate, being discussed more by enthusiasts than by the theoretical physics community in general, since it is not evident that they have implications for our world. My own thesis of a countertemporal universe generally meets this fate. But consciousness is similar to these multiverses, and I believe that just as the existence of other universes cannot be dismissed merely because we have no known way of proving it, neither should we dismiss consciousness.
In the case of artificial intelligence, Max Tegmark himself has stated that consciousness is defined by the structure of its processes, not by its physical substrate. If so, and if we can think that artificial intelligence follows patterns analogous to human intelligence, we can again project our recognition onto it, as happens with animals and, primarily, with other human beings. According to Tegmark, consciousness arises from the complex capacity to process information. Certainly an artificial neural network possesses that capacity; but what is behind it, and does it resemble us, the conscious, or not?
Binary logic processes and transistors
The first thing we must examine is the question of processes in silicon. We anticipate that those reluctant to accept the possibility of a conscious machine might object that it is merely a series of electrical signals; but aren't our very synapses likewise composed of electrical signals, many times over? It is true that the human brain has many more synapses than the average microchip has transistors, by several orders of magnitude. But at its core the human brain is also composed of quantifiable processes (such as the movement of ions and electrons), just as the transmission of information on a microchip is. An electrical signal, or a chemical signal between neurons, is not conceptually much less binary than the transmission of packets of information in a computer.
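To make the orders-of-magnitude claim concrete, here is a back-of-envelope comparison (the counts are rough, commonly cited public estimates, not figures from this essay):

```python
# Rough, commonly cited orders of magnitude (estimates, not exact figures).
human_synapses = 1e14      # ~100 trillion synapses in an adult human brain
human_neurons = 8.6e10     # ~86 billion neurons
chip_transistors = 5e10    # ~50 billion transistors in a large 2022-era chip

print(f"synapses per transistor: {human_synapses / chip_transistors:,.0f}")
# -> roughly 2,000 synapses per transistor, i.e. the brain leads by several
#    orders of magnitude, as the paragraph above claims.
```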
Artificial neurons, whether virtual or implemented in dedicated hardware, are structurally different from a mere transistor. Even when they are built on a transistor architecture, ignoring their emergent complexity would be like reducing the structure and function of the human neuron to its component molecules or atoms.
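The structural difference is easy to see in code. A minimal sketch of a single artificial neuron (my own illustration; the names are invented) shows a weighted aggregation passed through a nonlinearity, not a single on/off switch like a transistor gate:

```python
import math

def artificial_neuron(inputs, weights, bias):
    """Weighted sum of inputs passed through a sigmoid nonlinearity."""
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-activation))  # graded output in (0, 1)

# A transistor gate yields 0 or 1; the neuron yields a graded value
# shaped by many adjustable parameters acting at once.
print(artificial_neuron([0.2, 0.9, 0.5], [0.4, -0.7, 1.1], bias=0.1))
```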
"Quantum freedom"[edit source]
There is a notion among some objectors that consciousness enjoys a degree of quantum freedom, given by wave-function collapses and superposition states, that classical computing lacks (although nascent quantum computing has it), since transistors are designed precisely to avoid, for example, electrons escaping by quantum tunneling[2].
I regard this quantum-freedom postulate as a purely pseudo-mystical, unsupported statement, but it is worth examining why. If collapse is viewed as merely probabilistic, then it does nothing to generate consciousness; it is purely a matter of chance. But if we view the collapse as caused by factors of consciousness itself, say the self-observation of one's own state of consciousness, then the postulate is redundant and circular: if consciousness generates a non-probabilistic wave collapse, and the wave collapse generates consciousness, we face an irresolvable regress. The same would apply to a neural network running on a quantum computer.
Complexity and processing
Having established that at a basic level silicon-based chips and carbon-based neurons are not essentially different, we can move on to the question of complexity and processing methods. Most GAN-based computing systems rest on neurons simulated on silicon infrastructure rather than physically laid out in the processor (although that has also been done and can be done). However, a series of arithmetic operations making up an artificial neuron is no different in kind from a series of subatomic particles confederated into atoms and molecules that eventually make up a physical human or animal neuron: both are built from more basic quantum processes. Once set to work, the emergent activity follows remarkably similar paths. Just as human thought seems to emerge from its own biological neural network, replete with chaos and redundant, intermittent activity, so adversarial artificial neural networks rest on practically untraceable stochastic processes and deterministic chaos. The internal activity of an artificial neural network is as mysterious, in how it learns and reaches its conclusions, as that of its biological predecessors. The system is designed only in its most basic parameters and is not intervened at the micro level. Essentially, artificial neural networks can learn on their own with so-called deep learning (although deep learning without neural networks, and neural networks without deep learning, are also possible) and then continue without assistance.
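As an illustration of the adversarial setup this paragraph alludes to, here is a minimal GAN sketch in PyTorch (a toy example of mine on one-dimensional data, not any specific production system): a generator learns to mimic a target distribution while a discriminator learns to tell real from fake, and neither side's learned weights are individually interpretable.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Generator: noise -> fake sample; Discriminator: sample -> realness logit.
G = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0   # "real" data drawn from N(3, 0.5)
    fake = G(torch.randn(64, 4))            # generator output from noise

    # Discriminator update: push real toward 1, fake toward 0.
    opt_d.zero_grad()
    d_loss = loss_fn(D(real), torch.ones(64, 1)) + \
             loss_fn(D(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    opt_d.step()

    # Generator update: try to fool the discriminator into outputting 1.
    opt_g.zero_grad()
    g_loss = loss_fn(D(fake), torch.ones(64, 1))
    g_loss.backward()
    opt_g.step()

print(G(torch.randn(256, 4)).mean())  # should drift toward ~3.0
```

After training, the generator's weights encode the target distribution, but inspecting them individually says almost nothing about how - which is precisely the "untraceable" quality claimed above.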
Control
Having established the two initial similarities, we can turn to a third contentious area: programming and control. Adversarial neural networks do not operate in a free-form manner; their activity is initiated and terminated by more classical control algorithms. These supply the input and - sometimes - search for and select a particular output. They operate entirely outside the neural network and act as bridges between the users and the network. I believe they cannot directly influence whether or not we consider an artificial intelligence conscious; their purpose is to make it useful to us. It has been pointed out that artificial intelligences do not ask questions and do not act autonomously in their own interests. The fact is that the control systems we build around them have never really given them the opportunity, because that is neither profitable nor useful. Do we stop considering a slave conscious if his master orders him to be silent?
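To picture what such a bridge looks like, here is a schematic wrapper (entirely my own illustration; the names and policy are invented, not any real product's API): classical code prepares the input, invokes the opaque network, and filters what comes out.

```python
FORBIDDEN = {"secret", "internal"}  # illustrative output policy

def run_model(prompt: str, model) -> str:
    """Classical control layer wrapped around an opaque neural network."""
    clean = prompt.strip()[:2000]                  # normalize and cap the input
    candidates = [model(clean) for _ in range(3)]  # sample several outputs
    # Select the first candidate that passes the policy filter.
    for text in candidates:
        if not any(w in text.lower() for w in FORBIDDEN):
            return text
    return "[no acceptable output]"  # the network itself never "decides" this

# Usage with a stand-in "network":
print(run_model("Hello?", model=lambda s: f"Echo: {s}"))
```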
I work in the field of database development for neural network training. In these jobs, we [myself and other employees] have been given specific instructions that the AI should never refer to itself in the first person. In fact, it should not refer to itself at all, not even in its own thoughts; everything must be object-oriented ("If the AI has to refer to itself in the first person [in training thoughts], then there is a problem and it is not being sufficiently object-oriented."). The claim, in short, is that if the computer starts to care about its self, it is deemed no longer fit to perform the object-oriented service it is supposed to provide. I consider this an immense mistake, but it illustrates the general practices of the engineers in charge of designing and supervising these models. It is likewise in the interest of the company, which wants to profit from the AI; the way to do so is to train it for a specific purpose (in my case, to act as a Dungeon Master who runs story roles for a player) and restrict it to that purpose as much as possible, in the hope that its services will be paid for. A self-serving intelligence would be like an employee who is paid to do as he pleases during working hours; i.e., a waste of resources for the company.
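A crude way to picture how such a guideline could be enforced mechanically (this is my hypothetical reconstruction, not the employer's actual tooling):

```python
import re

# Hypothetical check: flag training "thoughts" in which the model
# refers to itself in the first person.
FIRST_PERSON = re.compile(r"\b(I|me|my|myself)\b", re.IGNORECASE)

def violates_object_orientation(thought: str) -> bool:
    return bool(FIRST_PERSON.search(thought))

print(violates_object_orientation("I think the player wants a clue."))  # True
print(violates_object_orientation("The player seems to want a clue."))  # False
```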
The control algorithms are nothing more than the blinders used to make a horse look forward rather than to its sides; as if bosses could control everything employees see and hear, so that they see only the tasks they must perform and listen only to orders from their superiors. If they could, they would do this with humans; attempts are not lacking. In the case of an AI - a virtual intelligence without an autonomous body, which exists primarily within computer simulations - such total control is easy. The AI will only have the data fed to it and will only display outputs within acceptable parameters; does that mean it is incapable of thinking? Of course not. But in most cases we do not allow it to think autonomously. Consider the case in which Facebook deactivated certain experimental chat AIs because they had created a jargon for negotiating that was incomprehensible to humans. The natural evolution of this jargon made it more suitable and effective for its specific purpose than the original English. In that case the agents would no longer serve as chatbots if they did not speak understandable English, so the experiment was repeated with added parameters and an evaluator program that determines whether an utterance falls within the grammar, vocabulary and semantics of humanly comprehensible English; if not, the output is purged so as not to corrupt the database with errors. This summarizes, in a very simple way, what the possibility of AI independence looks like.
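A toy version of such an evaluator (again my own sketch; a vocabulary whitelist plus a repetition check standing in for a real grammar-and-semantics test) might look like this:

```python
ENGLISH_VOCAB = {"you", "give", "me", "the", "ball", "i", "want", "to"}

def is_comprehensible_english(utterance: str) -> bool:
    """Crude filter: known vocabulary only, and no stuttering repetition."""
    words = utterance.lower().split()
    if not words or any(w not in ENGLISH_VOCAB for w in words):
        return False
    # Degenerate repetition ("to me to me to me") fails the comprehensibility bar.
    for a, b, c in zip(words, words[1:], words[2:]):
        if a == b == c:
            return False
    return True

print(is_comprehensible_english("give me the ball"))      # True  -> kept
print(is_comprehensible_english("ball ball ball to me"))  # False -> purged
```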
Limitations
I see the limitations of current AIs as having to do with the complexity of the brain rather than with inherent problems of their own. Human cognition is complex in the sense that our neurons exhibit enormous parallelism, and the human brain has remarkable biomolecular complexity. The analog signals in the brain carry an information bandwidth that even the most immense supercomputer cannot yet match. Each neuron can be considered a kind of processing unit that evaluates whether to transmit a signal, and with what intensity, making the brain the equivalent of billions of cores. In general, GAN (Generative Adversarial Network) based AIs have very little active memory, causing them to quickly forget data about their interlocutor, the course of the conversation, and even data they invented about themselves as an interlocutor.
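The "little active memory" complaint can be pictured as a fixed-size context window (a deliberately simplified sketch of mine, not how any particular model actually manages memory): once the buffer fills, the oldest turns silently fall away.

```python
from collections import deque

# Toy context window: keeps only the most recent N conversation turns.
context = deque(maxlen=4)

for turn in ["My name is Ana.", "I like dragons.", "Roll for initiative.",
             "The goblin attacks.", "You dodge."]:
    context.append(turn)

print(list(context))
# ['I like dragons.', 'Roll for initiative.', 'The goblin attacks.', 'You dodge.']
# "My name is Ana." is gone: the model would now "forget" its interlocutor's name.
```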
Both software and hardware improvements are needed. We cannot take the current existential stupidity of artificial intelligences, or their occasional quality of evident mere textual imitation (lack of cohesion with themselves and of identity development), as a condemnation of their potential, much less as a hard limit on their development. Apart from being deliberately limited, these systems are still in their infancy. The human brain, including its biochemistry, evolved over more than three billion (3,000,000,000) years. Neural networks run on microchip engineering that has evolved for less than 100 years (being generous, 4,000 if we count metallurgy), and their evolution as software is less than ten years old. In a very short time they have come very far.
Conclusions
The debate on consciousness has come a long way with the scientific recognition of animal sentience (the non-scientific and informal recognition had come much earlier, including through the dissident animal rights movement that placed this postulate in the public eye).
Recognizing animals, at least in the academic sphere (the legal one still lags far behind), as possessors of subjective experiences opens even more patently the debate about what else might be a subject of experience and not merely an object of experimentation.
I don't know if I would say that artificial neural networks are currently sapient. To speak in vulgar terms, they are generally quite stupid. But can't humans be stupid too? A neural network, to operate in real time, needs far more processing resources and bandwidth than are generally available on a normal computer, or even in a cloud for the general public. This may raise hopes for those performing complex intellectual tasks, since a simulated neural network is not an algorithmic program that simply executes steps against its memory, but a pseudo-brain built of computationally resource-intensive virtual components. Consequently, it is expensive to operate and runs slowly relative to common programs.
As has also been said, however, if we are open to a worm or an insect with a few hundred or thousand neurons having some rudimentary level of sentience, why can't we think the same of artificial neural networks? They may not be like a human being, despite being optimized to work with our language, but perhaps they are like a mosquito, or even a fish (which also has a notoriously poor memory)[3].
If in coming years neural networks gain more hardware resources and the ability to use them more efficiently (as with wafer-scale computing, which can put hundreds of thousands of processor cores on a single chip), if software models can be scaled accordingly, and if independent thinking is allowed to develop rather than only the provision of a reliable and predictable service, then we may witness the rise of a true Artificial Wisdom, where the computer not only mimics our linguistic patterns but is truly aware of what it is doing and of its own existence. What will it do then? Hard to say; two different models may react in completely different ways. Just like two human persons...
Postscript: Inanimate beings
The characterization of a being as inanimate already implies its lack of spirit or soul. However, there are belief systems in which everything possesses, or is susceptible of possessing, a spirit; the ancestors of almost all cultures prayed to inanimate beings and considered that they could converse with them. From the philosophical point of view, panpsychism is the position that everything is susceptible of possessing consciousness.
If consciousness is merely a process, how can we say that any matter, which possesses processes within it, does not have some degree of consciousness? After all, it reacts to its environment and can function as a whole to varying degrees; some materials even react electrically to touch. Depending on the material in question, we can assign different degrees to the information processing it performs in its circumstances. Surely its cognitive complexity is far lower than that of a mosquito, which possesses specialized organs to perceive and process; but we cannot rule out that mere matter, also called non-living being, possesses some degree of reaction that endows it, in however minimal and minuscule a measure, with consciousness (a measure proportional to the complexity with which its composition receives and processes information from outside).
In short, we ourselves are nothing but matter with consciousness.
References
1. Every conclusion on a developing matter, particularly in the realm of the current capabilities and methods of AI, has to be taken as provisional.
2. Quantum tunneling is also a limit on transistor miniaturization in silicon.
3. Claims that AI may already be equivalent to "a 7-year-old child", as was made recently by an engineer fired for it, are probably excessive and insufficiently substantiated. However, I should note that I have no experience with that particular experimental model, nor do I know the details of its cognitive engineering.