
"Embracing freedom in the age of Artificial Intelligence (AI)" by Marco Schorlemmer

Maybe we should orient our research in AI in a direction that promotes the creative freedom of human beings, of all other beings, of our environment, of reality itself.
A detail of Rainbow (1912) by Jalmari Ruokokoski (Finnish, 1886-1936)

Thanks to Science & Wisdom LIVE for the kind contribution by Marco Schorlemmer. Listen to the podcast by Marco Schorlemmer.

In the last decade, artificial intelligence has gained a lot of attention from industry, from the media and from government agencies, and this has largely been due to the impressive achievements of AI-based systems when tackling very complex problems that until recently had resisted solution by computer programs. These successes of AI have been accomplished using a particular kind of algorithmic technique known as deep learning. This technique is based on so-called neural networks, the name given to a kind of computational model that is loosely inspired by the neuronal structure of our brain and the spreading of electrochemical waves by means of synapses.
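To make this concrete (the sketch is not part of the original talk), such a network is, computationally, just a collection of numerical parameters and some arithmetic. A minimal illustration in Python with NumPy, where every size and value is an arbitrary assumption:

```python
import numpy as np

# A toy two-layer "neural network": computationally it is nothing but arrays
# of parameters and elementwise arithmetic. All sizes are arbitrary choices.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(16, 4)), np.zeros(16)   # hidden layer: 4 inputs -> 16 units
W2, b2 = rng.normal(size=(1, 16)), np.zeros(1)    # output layer: 16 units -> 1 score

def forward(x):
    """Forward pass for a single input vector x of shape (4,)."""
    h = np.maximum(0.0, W1 @ x + b1)             # ReLU: a crude analogue of a neuron "firing"
    return 1.0 / (1.0 + np.exp(-(W2 @ h + b2)))  # sigmoid squashes the result into (0, 1)

print(forward(np.array([0.2, -1.0, 0.5, 0.1])))  # a probability-like output
```

Deep learning then consists in adjusting parameters such as W1 and W2 until outputs like this one match the desired labels.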

Neural networks are not new to the field of computer science; their theoretical foundations go all the way back to the beginnings of artificial intelligence research itself in the mid-20th century. What has changed in the last decade, and what has led to the recent successes of this technique, is the high speed and power of today's computing machines and the enormous amount of data available to train these algorithms for solving a particular problem or task. The basic idea is that a computer engineer sets up this computational model and then uses the deep learning technique to adjust the parameters of the model by exposing the computational system to, for instance, a set of images that need to be classified, let's say some brain scans, together with the appropriate category for each image: whether the scan displays a particular kind of brain tumor or not. Once the computational model reaches an appropriate success rate in distinguishing images with tumors from those without them, it can be deployed, for instance, in an automatic diagnostic system that processes new, unclassified brain scans. In the scientific literature many applications of this sort have been reported with an accuracy that equals or even outperforms that of human experts, leading to the expectation that in the near future many tasks that are now reserved to humans could eventually be taken over by AI-based computing systems with better performance results.
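The training-and-deployment cycle described here can be caricatured in a few lines of Python; the data, labels and scikit-learn model below are purely hypothetical stand-ins, not the systems reported in the literature:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Stand-in data: in a real study these would be features extracted from brain
# scans plus expert labels (tumor / no tumor); here they are synthetic numbers.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 64))             # 1000 "scans", 64 features each
y = (X[:, 0] + X[:, 1] > 0).astype(int)     # synthetic labels with learnable structure

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# "Training" = repeatedly adjusting the network's parameters on labelled examples.
model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
model.fit(X_train, y_train)

# Only if the success rate on held-out data were judged adequate would such a
# model be deployed on new, unclassified scans (via model.predict).
print("held-out accuracy:", model.score(X_test, y_test))
```

Real diagnostic systems of course involve far larger networks, curated medical data and clinical validation; the sketch only mirrors the train, evaluate, deploy logic described above.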

Now this obviously raises many ethical concerns about how AI-based systems should be developed and deployed in our society, and the urgency of addressing these concerns is becoming more pressing as these systems are applied in more and more domains, such as medical diagnosis, approval of probation requests, selection of personnel, policing and even automated warfare.
In recent years we have seen many attempts, by way of manifestos, declarations or guidelines of principles, to reflect on how the development and deployment of these AI-based computing systems may foster the common good and not become detrimental to society. My own research institute in Barcelona has put forward the Barcelona Declaration for the Proper Development and Usage of Artificial Intelligence in Europe, which proposes a general code of conduct for AI practitioners. Its first point, for example, calls for prudence in the actual deployment of AI-based systems. Very often the success stories of deep learning reported in the scientific literature are limited to very constrained scenarios, with very carefully designed neural networks that are in general inappropriate for deployment in real-world situations. A premature application of AI technology can be not only disappointing but even very harmful. Other issues addressed in the Barcelona Declaration are the reliability, the accountability and the constrained autonomy that AI systems should display, and also what the human role should be in highly sensitive scenarios. These guidelines and principles are an important step towards beneficial and trustworthy applications of AI technology; however, the attention is primarily put on the outcomes and effects of this technology. Rarely are the actual assumptions, motivations and research directions that drive the professional practice of artificial intelligence practitioners made explicit and brought to the foreground. So let me reflect here on some of these core assumptions and motivations of the artificial intelligence research program.

Western thought since the Ancient Greeks has placed a lot of value on possessing intelligence. Although the Greeks called it primarily reason or rationality, they saw it as a core part of the intellect, as that which sets us humans apart from the rest of creation, and it also justified that rational beings could rule over irrational ones. I think that today's quest for artificial intelligence should be understood with respect to this importance we have given to intelligence in Western thought. As a consequence, on the one hand we dream of programming computing machines that might become as intelligent as us, or even surpass human-level intelligence, so as to help us tackle the big problems of humanity such as war, disease or poverty; and on the other hand we also fear that such superintelligent machines would end up ruling us, would end up dominating and exploiting us. The reason for these dreams and fears, in my view, is to be found in our understanding of this phenomenon that we call intelligence. Today many scholars advocate for a much broader understanding of this phenomenon, one which goes beyond the narrow view of intelligence based on reason and rationality alone, and we have also started to go beyond the human and consider animal and plant intelligence. Still, I think there are certain aspects of our current understanding of intelligence that go largely unquestioned: that intelligence is correlated with our capacity to learn, to adapt and to succeed in our interactions with other organisms and the environment; that intelligence is observable, so that we can see whether someone is acting intelligently or not; that it is also sensible to say that we have more intelligence if we better learn, adapt and succeed, so that intelligence is somehow measurable, that we can determine to a certain extent whether someone is more intelligent than someone else; and that intelligence can be attributed to individual entities, to persons, to animals, to plants. Scientific research on intelligence is based on this understanding that sees intelligence as an objective, observable and measurable property of individual entities.

AI research adds to this an additional assumption, namely that intelligence is based on the information-processing capabilities of organisms and that, consequently, information-processing machines such as computers could eventually also exhibit some kind of intelligent behavior. This assumption, however, rests in my view on a confusion and a misuse of the notions of information and of intelligence. Computers are essentially data-processing machines, with data represented and processed in a digitized form. Information, though, is an abstract form of knowledge based on the regularities we observe in the world, which we then use for predicting and controlling phenomena. As such, information is the form of knowledge that is characteristic of the technosciences, but information is actually a human concept. Information carries meaning for us humans, and it is us humans who see the data processing of computers as information processing. Computers do only number crunching, and they do it without ever knowing what a number is or even what crunching is; so actually they do it without knowing anything at all.

Intelligence as manifested in humans is not based on information processing only; such an understanding of intelligence touches, in my view, only one aspect of this phenomenon, what we might call the functional dimension of intelligence. This is the dimension that focuses on what intelligence is for: its function for adaptation, for problem solving, for attaining goals, for success, for survival. But in addition to this functional dimension, I think it is fair to say that there is also an important evaluative dimension to intelligence, one which provides meaning and value to what we do and connects us to the aesthetic dimension of life. Artists, for instance, exploit this dimension of intelligence very much. Knowledge, then, is an essential aspect of intelligence when taking into account both of its dimensions, the functional and the evaluative. Knowledge is not only descriptive, which would be its informational content; it is fully charged with sense, with emotions and with values. I may have the information about what a certain disease such as cancer does to a human body and how best to tackle it by means of drugs and therapies, but it is an entirely different issue to know how cancer is affecting my body or that of a loved one, and to know what it means to suffer this disease. Still, this evaluative dimension, like the functional dimension, is relative to our needs as human beings; functional and evaluative intelligence are conditioned by our biology.

Ultimately, our intelligence depends very much on our bodily interactions with other organisms and our environment; we function and experience differently from other living beings, and what is meaningful and of value for us humans might not be so for other creatures of this planet. Now this insight, namely that our intelligence is relative to our contingent needs as human beings, frees us from getting caught within this particular reality we conceptualize and experience, this particular reality we sense and make sense of in this particular place and time in history; and this is a liberating insight. Reality is not only what we can know: each knowing is entangled with an unknowing. This unknowing dimension transcends the relativity of our human needs and points to an absoluteness of reality; consequently, it is also the source of our capacity for changing reality itself, or at least this relative understanding of it, to respond creatively to it, to be co-creators of reality, to participate in its creativity, so to say. The technoscientific progress that we make would not be possible without this free and creative response of intelligence; so let us call it the liberating dimension of intelligence. This unknowing should not be confused with ignorance: ignorance is not knowing something that can be known. Ignorance can be detrimental to our survival as living beings.

We are a society that is very much focused on knowledge, and in particular on information, which is this abstract form of knowledge that is characteristic of the technosciences. Scientific progress is primarily focused on generating this informational knowledge. For instance, in the current COVID-19 pandemic we have often heard about the importance of listening to science, because ignorance causes deaths; and AI can be a wonderful tool in this informational dimension of knowledge. It can be a wonderful tool for understanding the workings of the SARS-CoV-2 virus, for understanding our immune system, for understanding the effectiveness of certain drugs in tackling the COVID-19 disease, for predicting the spread of the virus, and also for fostering the communication and cooperation of us humans who are coping with this pandemic. So AI can be a wonderful tool, but not on its own. Remember: computers do only number crunching, without knowing what a number is or what crunching is. AI's role is always to be situated in this broader picture of intelligence, with its functional, evaluative and liberating dimensions, and also within a socially embodied intelligence that grasps the meaning of disease, of pandemic, of suffering, of death; that knows what all this is, with all its evaluative content.
When I say socially embodied intelligence, I mean that this intelligence transcends our individuality. We participate in it as human beings, but it is not a property that we possess as individuals. We cannot be isolated intelligent entities, because it is through our sense-making of the interactions with our environment and our participation in society that we are constituted as intelligent human beings. And when I say we need to situate AI in a human society that knows what disease, pandemic, suffering and death are, I mean that knowing runs deeper than having information: it touches life in its concrete manifestation, and it also has this unknowing dimension to it, which we fail to appreciate when we focus only on the abstract informational content of knowledge.

So when addressing the ethical issues surrounding AI research, the development of AI-based systems and their deployment in society, I think we need to go beyond the current focus on intelligence as an objective, observable, measurable property of individual entities, and beyond this view of intelligence as grounded mainly in the processing of informational knowledge, because by staying at this data- and information-processing level we do not reach the value-laden level of our experiential knowledge that is directly relevant to ethical and moral issues. Most importantly, we need to acknowledge the liberating dimension of intelligence that comes from being aware that knowing is always relative to our needs as living organisms, and that knowing is entangled with this unknowing dimension which lies at the heart of the creative freedom of reality itself. Trying to address the ethical issues of AI by means of computational, information-theoretic models, although done out of good intentions, is in my view not only insufficient; it also carries the risk of degrading these issues, as if they could be addressed effectively at an information-theoretic level. But data-processing systems cannot know what it means to act ethically in a human society, much less be aware of this unknowing dimension of knowledge so as to be able to respond and act creatively in a given situation.

So as a society I think we need to grow out of a predominantly technoscientific view of progress and problem solving. Obviously, accumulating informational knowledge is necessary to address many of the challenges that our societies face today, but we also need to grow in awareness of the creative freedom at the core of our reality. And this is why I think that cultivating contemplative, ego-decentering practices such as silence, attention and meditation is of utmost importance for today's societies and should be a core aspect of education, also in higher education, because these practices exercise our detachment from the content of our knowledge and they help us grow in the awareness of the unknown.
For scientific research in general, these practices allow us to get back in touch with the contemplative core of scientific inquiry itself: the genuine awe and wonder towards reality and the love for knowledge which drives research; the silence and attention that lead to a deeper understanding of the phenomena under study; the sharing, trust and respect that create a transnational cooperative community of peers; the egoless, or ego-decentered, service towards society; and also the humility to be careful about what one can state and what not, and to always be open to being corrected and to changing one's understanding of reality.
For AI research in particular, growing in awareness of the unknown through these practices might make us reassess to what extent the original aim of endowing machines with more and more autonomy and intelligence is actually meaningful or valuable as a scientific objective; we might be wasting too much effort in this direction. Clearly, the technology we develop under the label of AI is transforming our informational reality; it creates new realities, and insofar as it changes the interaction between us and our environment, it also transforms this socially embodied intelligence that is manifested in our interactions. It is in these new realities, dynamic and constantly changing, with the new ways of living and feeling that arise in them, where we must learn to live our intelligence fully, along all its dimensions. Maybe we should orient our research in AI in a direction that promotes the creative freedom of human beings, of all other beings, of our environment, of reality itself.
As of today, unfortunately, much of AI research seems to have fallen into the hands of a few big technological companies that follow mainly an economic rationale and so strengthen current power distributions and inequalities that limit our intelligence and threaten our creative freedom. Consequently, I think AI should put its focus on the joint collaboration of human and non-human entities and their interaction with the environment, let us call it an ecological intelligence, the shared intelligence that arises within this collaboration, so as to help us accomplish our human fullness as beings who live, on the one hand, immersed in the informative and value-laden reality we create by knowing, but who, on the other hand, also live in the absolutely free and creative reality we can only grasp by unknowing.
