Focus on September 2024

"Artificial Intelligence: foundations and perspectives" by Mario Cimino

Precisely in order to preserve human society from a technocentric drift, subservient to professions that increasingly take human beings to the limit of their biological capabilities, artificial intelligence will enable the development of more human-centered professions and technologies.
a detail of Superimposed Forms (1938), Jessica Dismorr (English, 1885 – 1939), artvee.com

In order to introduce the fundamentals of artificial intelligence, a few premises should be made. Human beings tend to have an anthropocentric view of intelligence, which is an obstacle to a deep understanding of it. Consequently, the first concern is often to reiterate what a machine will never be able to do, with the aim of drawing a clear line between what can be equivalent to a human and what cannot. In recent times, artificial intelligence has offered a new paradigm for the design of machines which, from being systems capable of performing specific and repetitive tasks, are becoming systems capable of performing different activities in a coordinated manner, of adapting, and even of evolving. After all, human intelligence itself, initially guided exclusively by natural evolution, has gradually adapted over the centuries to meet the needs and aspirations of civilisations, becoming increasingly ‘artificial’ as it serves the principles and values of societies and their organisation into professions, each based on specialised knowledge, skills and technologies. Therefore, even the distinction between natural and artificial is ambiguous and can be an obstacle to a profound understanding of intelligence.

There is also a major problem with the naming of technologies, a process that would require expert linguists but which, in the case of artificial intelligence, is carried out with arbitrary criteria, becoming another misleading factor. Terms such as neuron, learning and attention are often used metaphorically to describe computational models that, although inspired by biology, function very differently from their biological counterparts. In the most widespread artificial neural networks, the artificial neuron corresponds to a computational procedure that receives numerical input values, aggregates them with simple arithmetical operations, and produces a numerical output value. Although the name derives from biological neurons, the mathematical operations of the artificial neuron are designed to be performed in parallel by electronic boards and integrated circuits. Similarly, the concept of machine learning refers to the optimisation algorithm that enables an artificial neural network to improve its correspondence with a set of input and output data, for tasks such as the recognition or generation of text, images, sounds, smells, mechanical actions, and so on. This algorithm, too, was designed to be executed by electronic computers. These are therefore mathematical procedures of a different nature from human learning, which on the contrary involves more articulated and complex cognitive processes. The last example, the attention mechanism in neural networks, is a method of giving different emphasis to different parts of the inputs, improving the network's correspondence with a set of input and output data. This method does not correspond to biological attention, which is a much more complex and dynamic focusing process. In general, artificial intelligence systems developed for products and services are engineered to behave effectively, but they do not employ the internal structures and processes of the biological mind, which are far from being entirely clear. Nevertheless, it is remarkable that artificial mechanisms so simplified compared to biological ones can manifest intelligent behaviour when connected in large numbers. Last but not least, human beings tend to focus too much on the brain when studying intelligent behaviour, whereas intelligence is a paradigm that can also explain the behaviour of apparatuses such as the digestive system, of systems such as the immune system, and of cellular organisms and micro-organisms; it has made possible, for instance, the development of new intelligent algorithms inspired by them, as well as of intelligent artificial tissues and materials.
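To make the contrast with biology concrete, here is a minimal sketch in Python (purely illustrative, with hand-picked numbers and no real training): an artificial neuron reduced to a weighted sum followed by a squashing function, and ‘attention’ reduced to normalised emphasis weights over parts of an input.

```python
# Purely illustrative sketch (not any library's real API): the artificial
# neuron described above is just a weighted sum of numerical inputs passed
# through a simple squashing function, and the "attention" weights are just
# numbers that emphasise some parts of the input over others.
import math

def artificial_neuron(inputs, weights, bias):
    # Aggregate the inputs with simple arithmetic: a weighted sum plus a bias.
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    # Squash the result into a bounded numerical output (logistic function).
    return 1.0 / (1.0 + math.exp(-z))

def attention_weights(scores):
    # Turn arbitrary relevance scores into positive weights that sum to 1,
    # so that "more relevant" parts of the input receive more emphasis.
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

inputs = [0.2, 0.9, 0.4]          # numerical input values
weights = [1.5, -0.7, 0.3]        # parameters, here fixed by hand rather than learned
print(artificial_neuron(inputs, weights, bias=0.1))

scores = [0.1, 2.0, 0.5]          # hypothetical relevance of each input part
print(attention_weights(scores))  # the second part receives the largest weight
```

Machine ‘learning’, in this picture, is nothing more than an optimisation procedure that adjusts the weights and bias so that the outputs match a given set of input and output examples.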

Thus, there are different intelligences developing and evolving, in a hybrid ecosystem inspired by biology and human activities. In particular, one branch, humanoid artificial intelligence, develops artificial intelligence systems designed to mimic the appearance, behaviour and cognitive abilities of humans. Humanoid robots, endowed with human features, are able to interact with the environment and people in a natural and intuitive way, which is particularly useful in areas such as healthcare, education and entertainment. In this context, it is increasingly important to develop new disciplines and professions to harmonise the coexistence of these different types of intelligence, preserving health, freedom, ethics and many other values that are inalienable for human beings.

Intelligence is thus a compound capability, which can express itself through different physical means and with different combinations of characteristics. An intelligent system does not necessarily need consciousness in order to manifest the ability to behave effectively in new situations. Today's intelligent systems can use data and machine learning algorithms to develop their own logic, without relying on rules defined by human experts from the same data.
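As a hypothetical illustration of that last point, the toy fragment below contrasts a rule fixed in advance by a human expert with a trivial learner that derives an equivalent threshold directly from labelled data; the data and names are invented for the example, and no real machine-learning library is involved.

```python
# Toy sketch (hypothetical data): a rule written by a human expert versus
# an equivalent rule derived directly from labelled examples.

data = [(1.0, 0), (2.5, 0), (4.0, 0), (6.0, 1), (7.5, 1), (9.0, 1)]

def hand_written_rule(x):
    # Logic fixed in advance by an expert.
    return 1 if x > 5.0 else 0

def learn_threshold(samples):
    # Try each observed value as a candidate threshold and keep the one
    # that separates the labelled examples best.
    best_t, best_correct = None, -1
    for t in sorted(x for x, _ in samples):
        correct = sum(1 for x, y in samples if (1 if x > t else 0) == y)
        if correct > best_correct:
            best_t, best_correct = t, correct
    return best_t

threshold = learn_threshold(data)
print(threshold)  # the "logic" of the system is now derived from the data
print(hand_written_rule(6.0) == (1 if 6.0 > threshold else 0))  # both rules agree here
```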

One of the most interesting qualities of the mind, since it is commonly regarded as the prerogative of human beings, is consciousness, i.e. the ability to understand oneself and the world, and to have subjective experiences. From an anatomical and functional point of view, it is well known that the biological brain is composed of subsystems, and from a structural point of view it is possible to develop an artificial subsystem that observes the activities of the artificial system containing it, in order to support it in the short, medium and long term. However, from a functional point of view, it is by no means easy to develop models of this kind, as they depend on internal representations of the mind that remain unknown.

In the field of models of the mind, the development of artificial intelligence systems consisting of different artificial neural networks acting as specialised agents is a relatively new and promising field, and poses several scientific and engineering challenges. Extending this design paradigm, of great interest is the development of intelligent multi-agent systems capable of manifesting emergent behaviour. Emergent behaviour is manifested when a large number of agents characterised by simple and adaptive properties, operating in a collaborative and/or competitive ecosystem, give rise to behaviours that represent a further level of system evolution, where learning is the set of mutual adaptations of the agents' properties, made possible by their progressive interactions. In mathematical terms, the properties of agents can be made adaptive through an appropriate calibration of parameters. In mathematics, a parameter can take on different numerical values in order to modify the agent's behaviour. For example, the focusing of a photographic lens corresponds to the calibration of a parameter, and can be performed manually or by an intelligent system. Focusing can be more or less accurate, and evolve with experience. Fundamental to artificial emergent systems is the mechanism of parametric evolution, which can be driven by evolutionary algorithms, mathematical models inspired by the laws of natural selection. The emergent paradigm can in turn be used to develop a connectionist model of the mind. The connectionist approach holds that, in order for a system to behave intelligently, it is necessary to reproduce the functioning of the brain at the cellular level. To do this, the connectionist approach postulates the use of neural networks, although, as already mentioned, artificial neurons are in reality quite different from biological ones, whose functioning is far from clear. The connectionist paradigm has become predominant in recent years due to advances in the learning of deep networks. The symbolic paradigm, instead, aims to build intelligent systems using symbols to represent concepts and relations, and logical rules to represent and process knowledge. The symbolic paradigm works very well in the organised and technologised activities of industry and services. Today, attempts are being made to combine the two approaches in order to achieve greater interoperability between humans and artificial intelligences, through explainable artificial intelligence, i.e. artificial intelligences capable of describing their decisions through concepts and relations.
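To ground the idea of parametric evolution mentioned above, here is a deliberately small sketch under invented assumptions (a single ‘focus’ parameter and a made-up sharpness measure): an evolutionary loop of mutation and selection calibrates the parameter without anyone programming its value explicitly.

```python
# Minimal evolutionary-algorithm sketch (illustrative assumptions only):
# a population of candidate values for one parameter -- think of the focus
# setting of a lens -- is repeatedly mutated and selected, so that the
# "sharpest" settings survive and the parameter is calibrated by evolution.
import random

TRUE_FOCUS = 3.7                      # hypothetical ideal setting, unknown to the agents

def sharpness(focus):
    # Fitness: higher when the candidate focus is closer to the ideal one.
    return -abs(focus - TRUE_FOCUS)

random.seed(0)
population = [random.uniform(0.0, 10.0) for _ in range(20)]

for generation in range(50):
    # Selection: keep the better half of the population.
    population.sort(key=sharpness, reverse=True)
    survivors = population[:10]
    # Mutation: each survivor produces a slightly perturbed offspring.
    offspring = [f + random.gauss(0.0, 0.3) for f in survivors]
    population = survivors + offspring

print(round(max(population, key=sharpness), 2))   # settles near 3.7
```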

In the field of consciousness there are several models, among which one that has become popular among scholars of the mind from different disciplines is the emergent model of consciousness, which hypothesises that consciousness is not the result of a single process or a single part of the brain, but rather emerges from complex interactions between multiple brain components. This approach is used to some extent in theories in neuroscience, artificial intelligence and the philosophy of mind. In particular, the focus is on the self-organising properties of agents, which require less external guidance. In this regard, a technique known as reinforcement learning allows an agent to improve the actions it performs in its environment so as to maximise a cumulative reward. In reinforcement learning the agent learns through trial and error. It is a paradigm inspired by the processes by which the human brain learns from past experiences. Through reinforcement learning, complex strategies emerge from the interaction between the agent and the environment. Well known are the applications of reinforcement learning to complex board games, which demonstrate how reinforcement learning can lead to emergent complex behaviour and exceed human abilities. Theories such as that of Integrated Information have been developed that attempt to explain how consciousness can emerge from complex neural networks and their capacity for unitary integration. A highly integrated system cannot be reduced to independent parts without losing essential properties regarding its function.
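A minimal sketch of reinforcement learning, on an invented toy world rather than a board game, may help fix the idea: the agent starts with no strategy, tries actions, receives rewards, and gradually settles on the behaviour that maximises its cumulative reward (tabular Q-learning is used here only as one simple example of the paradigm).

```python
# Minimal reinforcement-learning sketch: tabular Q-learning on an invented
# corridor world with positions 0..4, where only reaching position 4 gives
# a reward. The agent learns by trial and error which action to prefer in
# each position so as to maximise its cumulative reward.
import random

N_STATES = 5                     # positions 0..4; position 4 is the goal
ACTIONS = [-1, +1]               # step left or step right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

alpha, gamma, epsilon = 0.5, 0.9, 0.1
random.seed(0)

for episode in range(200):
    state = 0
    while state != N_STATES - 1:
        values = [q[(state, a)] for a in ACTIONS]
        # Trial and error: explore occasionally (or when indifferent),
        # otherwise exploit the action currently believed to be best.
        if random.random() < epsilon or values[0] == values[1]:
            action = random.choice(ACTIONS)
        else:
            action = ACTIONS[values.index(max(values))]
        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        # Move the value estimate of (state, action) towards the observed
        # reward plus the discounted value of the best follow-up action.
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = next_state

# After learning, the preferred action in every non-goal position is +1 (move right).
print([max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)])
```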

With regard to the potential of artificial intelligence, important philosophical questions arise. Weak artificial intelligence is the assumption that artificial intelligence will only be able to reproduce the behaviour of the mind, i.e. an externally convincing simulation of the human mind. For example, when we interact with modern language models, we have the impression that they understand what we say and answer our questions accurately, even though this is not in fact the case. In contrast, strong artificial intelligence is the assumption that artificial intelligence will sooner or later develop not just simulations, but real minds: machines endowed with consciousness. Significantly, the distinction between the two assumptions is based on a specific function of the mind, consciousness, considered in this sense to be a defining characteristic of humankind. Weak artificial intelligence has as its prerogative the improvement of the operational efficiency of human activities. In contrast, strong artificial intelligence has significant ethical, philosophical, legal and other implications, because with it the machine becomes a subject. But the human being also becomes an object, since the biological mind is distinguished from the artificial mind by the mere fact that it functions with biological instead of synthetic tissue. The biological brain thus becomes an object of improvement, insofar as it is equivalent to an artificial one. Strong artificial intelligence therefore demolishes the principle of responsibility as a human prerogative. Or, seen from another perspective, it strengthens it. Because even strong artificial intelligence becomes an object of improvement, capable of being enhanced beyond the limits of biology, to provide cognitive support able to solve problems not within the reach of the human mind, but still according to the principles of human nature. Human beings thus become capable of understanding and adopting accurate solutions to a greater number of problems, making more informed decisions and thus becoming more aware and responsible. For example, in the area of environmental impact, many decisions are currently not responsible because their consequences are not fully predictable and understandable. In many societies, primarily in Europe, the principle has been established that artificial intelligence must be able to explain how it achieves its results. However, at the moment, artificial intelligence systems lack contextual understanding and common sense, and they do not equal human beings in creativity and intuition, emotionality, sociality, ethics and morality, all fundamental aspects of conscious human decision-making.

The chances of a machine becoming conscious, i.e. assuming a state of awareness of itself and its environment, depend on the models accepted by modern neuroscience, a relatively recent multidisciplinary field. Mammals other than humans also manifest some kind of awareness. It therefore appears that a cognitive architecture capable of providing awareness has some evolutionary advantage, in terms of autonomy, robustness, resilience, and so on. Some developments in artificial consciousness are based on a working hypothesis called computational functionalism: the thesis that the implementation of appropriate computations is necessary and sufficient for consciousness. Notable among models of consciousness is the ‘global workspace’, which was proposed in the late 1980s and is still being developed today. This workspace, although limited, is available to many specialised processes of the brain, such as vision, language or memory, some of which are unconscious. Attention acts as a spotlight, bringing some of these unconscious activities to conscious awareness in the global workspace. The global workspace functions as a centre for the transmission and integration of information, which can be shared and processed by different specialised modules. To realise this, observable structural properties are needed, e.g. in terms of connections. Such properties may constitute indicators of consciousness. This brings us to the problem of testing cognitive capabilities. Since Alan Turing's well-known test of the 1950s, which only measures the ability to imitate human behaviour, many other tests have been developed that assess self-awareness, contextual understanding and the ability to feel emotions. Recently, scholars from different disciplines have attempted to bring together the various theories of the mind, determining for each theory a collection of observable structural properties, translated into indicators thanks in part to modern technologies for analysing cellular signals, in order to assess whether a given system can be conscious. These test batteries are targeted at different subjects, from people with disorders of consciousness to infants, fetuses, animals, artificial intelligence (AI) systems, neural organoids and organisms created from biological tissue. The tests are of various types, and involve recording neural activity and comparing it with the activities of reference subjects. In particular, the subject can be instructed to imagine situations, to enjoy content, to smell pleasant or unpleasant odours, to receive an impulse of magnetic stimulation, to listen to sequences of sounds that progressively diverge from one another, or to learn the relationships between compound and novel stimuli presented separately in time, in order to assess from different perspectives whether he or she is conscious.
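Purely as an illustration of the architecture described above (with invented module names and salience values, and no claim of neuroscientific fidelity), the sketch below shows specialised modules proposing content in parallel, attention selecting the most salient proposal, and the winner being broadcast to every module through the shared workspace.

```python
# Illustrative sketch of the global-workspace idea, with invented module
# names and salience values: specialised processes propose content in
# parallel, attention admits only the most salient proposal into the
# workspace, and that content is then broadcast to every module.

class Module:
    def __init__(self, name):
        self.name = name
        self.broadcasts = []          # content received from the shared workspace

    def receive(self, content):
        self.broadcasts.append(content)

def workspace_cycle(modules, proposals):
    # proposals: {module name: (salience, content)} produced in parallel,
    # most of which remain unconscious in this picture.
    salience, content = max(proposals.values(), key=lambda p: p[0])
    for m in modules:                 # broadcast: integration and sharing
        m.receive(content)
    return content

modules = [Module("vision"), Module("language"), Module("memory")]
proposals = {
    "vision":   (0.9, "a red light ahead"),    # highly salient percept
    "language": (0.3, "an overheard word"),
    "memory":   (0.5, "a related past event"),
}
print(workspace_cycle(modules, proposals))     # only the salient percept "wins"
print(modules[2].broadcasts)                   # the memory module also received it
```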

In conclusion, ongoing research in the field of artificial intelligence may lead to the creation of increasingly sophisticated models of the mind, capable of simulating aspects of human consciousness, and to the redefinition of many of the fundamental concepts of our civilisation, influencing work, social relations and economic structures. Precisely in order to preserve human society from a technocentric drift, subservient to professions that increasingly take human beings to the limit of their biological capabilities, artificial intelligence will enable the development of more human-centered professions and technologies. In this sense, pursuing strong artificial intelligence is desirable even before it is possible, since it would be an artificial intelligence fully comprehensible to human beings, endowed with consciousness and thus capable of improving their responsibility. We need only think of the contributions that artificial intelligence could make to synthetic biology, a discipline straddling engineering and biology, in developing biological components and systems that do not yet exist in nature and in redesigning and producing biological systems that already exist, to imagine a future characterised by many types of intelligence, where the line between biological and synthetic, human and non-human, will be difficult to draw.
