Greetings to all readers and subscribers, and special greetings to the paid subscribers!
Here’s a very early draft of Chapter 12 of my new book “Irrational mechanics: Narrative sketch of a futurist science & a new religion” (2024).
Note that this and the other draft chapters are very concise. At this point I only want to put down the things I want to say, one after another. Later on, when the full draft for early readers is complete, I’ll worry about style and all that.

12 - Bats and bits
I’ll use the terms consciousness and sentience interchangeably, and I won’t spend too many words trying to define them, because you know what consciousness is. Don’t you? I guess the best definition is still that given by Thomas Nagel: that something is conscious “means, basically, that there is something it is like to be” that something [Nagel 1974].
Nagel notes that there must be something it is like to be a bat. But he also notes that bats “perceive the external world primarily by sonar,” and therefore what it is like to be a bat can only be very unlike what it is like to be a human. This suggests that there are forms of alien consciousness that are “totally unimaginable to us.”
My favorite mental picture of a very alien consciousness is the powerful living ocean in the science fiction masterpiece “Solaris” [Lem 1970], by Stanisław Lem. Lem leaves open the possibility that the apparently intelligent behavior of the ocean could be nothing more than the unthinking metabolism of a very alien life form, but also the possibility that the ocean could be a superintelligent being with a very strange form of consciousness.
Lem constantly reminds us that the operation of mindless physics could be indistinguishable from conscious intelligence. To me, this means that perhaps physics is not mindless [Chapter 8].
Explaining how physical reality gives rise to subjective conscious experience is, according to David Chalmers [Chalmers 1996, 2022], the “hard problem” of consciousness research. We aren’t close to solving the hard problem, but there are promising preliminary theories of consciousness. And perhaps the hard problem is not a problem, but just the way things are.
Some time ago a Google engineer, Blake Lemoine, claimed that a Google Artificial Intelligence (AI) called LaMDA was sentient [Hoel 2023, Suleyman 2023]. Lemoine was then fired by Google.
It is often said that if a machine can “pass the Turing test” then the machine is sentient like a human. A machine passes the Turing test if it can reliably hold conversations that a human judge is unable to distinguish from conversations with another human. Now, everything seems to indicate that AI technology is “close to passing the Turing test” [Suleyman 2023].
In the past, AI has often been dismissed as a future technology that never happens. But now all seems to indicate that AI technology is advancing fast, and narrow AI applications for specific domains could soon give way to Artificial General Intelligence (AGI), defined by Ben Goertzel as AI able to achieve “a variety of complex goals in a variety of complex environments” [Goertzel 2014] like you and me, only better. Martine Rothblatt is persuaded that “it’s only a matter of time before brains made entirely of computer software express the complexities of the human psyche, sentience, and soul” [Rothblatt 2014].
A few years ago an AI called AlphaGo [Lovelock 2019] learned to play Go (a game significantly more complex than chess) better than human champions. Then a successor AI called AlphaZero learned to play chess and Go with “superhuman performance” [Wilczek 2021]. Given nothing but the rules of the games, AlphaZero learned by playing against itself over and over again [Kissinger 2021, Russell 2021] and found brilliant gameplay strategies that no human player had ever thought of. This was hailed as a spectacular breakthrough and a sign that AI technology was accelerating fast.
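For the technically curious, here is a minimal sketch of the self-play idea, stripped of everything that makes AlphaZero powerful: instead of deep neural networks and tree search it uses a toy game (Nim) and a simple table of estimated win rates. The game, the numbers, and the update rule are illustrative assumptions of mine, not AlphaZero’s actual method.

```python
# Toy sketch of learning by self-play (NOT AlphaZero's actual algorithm).
# Game: Nim with 21 stones, take 1-3 per turn, whoever takes the last stone wins.
import random

value = {}    # value[(stones_left, move)] -> estimated win rate for the mover
counts = {}   # how many times each (stones_left, move) has been tried

def choose(stones, explore=0.1):
    """Pick a move: usually the one with the best estimated win rate."""
    moves = [m for m in (1, 2, 3) if m <= stones]
    if random.random() < explore:
        return random.choice(moves)          # occasional exploration
    return max(moves, key=lambda m: value.get((stones, m), 0.5))

def self_play_game():
    """Play one game against itself, then update the win-rate estimates."""
    stones, player, history = 21, 0, []
    while stones > 0:
        move = choose(stones)
        history.append((player, stones, move))
        stones -= move
        winner = player                      # last mover takes the last stone
        player = 1 - player
    for p, s, m in history:                  # winner's moves get 1, loser's get 0
        key = (s, m)
        counts[key] = counts.get(key, 0) + 1
        reward = 1.0 if p == winner else 0.0
        old = value.get(key, 0.0)
        value[key] = old + (reward - old) / counts[key]   # running average

for _ in range(20000):
    self_play_game()

# With enough games, the policy tends to rediscover the classic winning
# strategy for Nim: leave the opponent a multiple of 4 stones (take 1 from 21).
print(choose(21, explore=0.0))
```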
In the last couple of years, AI technology has been taken by a storm called GPT [Lee 2023, Wolfram 2023]. GPT (Generative Pre-trained Transformer) generates readable, meaningful, and often brilliant text in conversation.
GPT and similar systems are neural networks [Musser 2023] called large language models (LLMs) because they are based on models of natural language. In their training phase, today’s LLMs analyze huge amounts of text, e.g. books and the public internet, to build a language model. Then they use the model to generate the most plausible thing to say, one word (more precisely, one token) at a time.
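To make that loop concrete, here is a toy sketch in Python: it “trains” a crude word-count model on a few sentences and then generates text one word at a time. Real LLMs use huge transformer neural networks over tokens, not word-count tables, but the generate-the-next-word loop has the same shape; the corpus and the code below are illustrative assumptions, not how GPT is actually built.

```python
# Toy sketch (not a real LLM): "train" a bigram word model on a tiny corpus,
# then generate text one word at a time by picking the most probable next word.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the cat ate the fish . the dog sat on the rug ."

# "Training": count which word tends to follow which.
counts = defaultdict(Counter)
words = corpus.split()
for prev, nxt in zip(words, words[1:]):
    counts[prev][nxt] += 1

# "Generation": repeatedly emit the most probable next word given the last one.
def generate(start, length=8):
    out = [start]
    for _ in range(length):
        followers = counts.get(out[-1])
        if not followers:
            break
        out.append(followers.most_common(1)[0][0])
    return " ".join(out)

# Greedy generation from such a tiny model quickly starts looping; real LLMs
# sample from vastly richer models, which is why their text stays varied.
print(generate("the"))
```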
Future LLMs will likely be able to refine their language models in real time, learning from the internet and from interactions with users.
It's worth noting that there are interesting parallels [Foster 2023, Pezzulo 2023] between LLMs and a theory of sentient behavior called active inference [Parr 2022], originated by Karl Friston and other scientists. The theory suggests that sentient life forms act upon their environment to build and continuously refine an internal model of the environment.
This is not limited to sentient life but rather is “something that all creatures and particles do, in virtue of their existence,” suggests Friston [Parr 2022]. The theory is based upon a “free energy principle” that has been proposed to unify information, thermodynamics, and biology [Azarian 2022]. “For Friston, the free energy principle explains all features of living systems,” notes Anil Seth [Seth 2021], and is “as close to a ‘theory of everything’ in biology as has yet been proposed.”
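For readers who want a formula, one standard way to write the quantity that the free energy principle says self-organizing systems minimize is the variational free energy. Here o stands for observations, s for their hidden causes, and q(s) for the system’s internal belief about those causes; this particular notation is my illustrative choice, not the only formulation in the literature:

$$ F \;=\; \mathbb{E}_{q(s)}\!\left[\ln q(s) - \ln p(o,s)\right] \;=\; D_{\mathrm{KL}}\!\left[q(s)\,\|\,p(s \mid o)\right] \;-\; \ln p(o) $$

Because F is an upper bound on surprise (−ln p(o)), a system that keeps F low is simultaneously refining its internal model of the world and steering itself toward unsurprising, livable states.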
Similarly, Jeremy England notes [England 2020] that there are “numerous and significant” parallels with his theory of dissipative adaptation, the nonequilibrium thermodynamics that powers life [Chapter 7].
This suggests that, perhaps, today’s early LLMs manifest the same universal forces that produced you and me.
A conversation with an LLM chatbot like ChatGPT or Bing (both based on GPT-4, the latest release of GPT at the time of writing), Bard (based on PaLM, a successor of LaMDA), or Grok (developed by Elon Musk’s xAI), often seems just like a conversation with a person… only the LLM seems much better informed than you, and perhaps smarter.
Are today’s LLMs sentient? On one extreme, there are people like Lemoine who suggest that yes, they might be. On the other extreme, GPT-4 and other LLMs have been dismissed as “a glorified auto-completion engine” [Lee 2023]. In the middle, Chalmers is “a little uncertain on this issue” [Chalmers 2022]. “GPT-4 may possess some type of ‘understanding’ and ‘thought’ that we have not yet identified,” says Peter Lee [Lee 2023]. According to Frank Wilczek, AlphaZero shows that “there are ways of knowing that are not available to human consciousness” [Kissinger 2021, Wilczek 2021]. To Lem, who didn’t make a sharp distinction between natural and artificial processes, machine personality “will be as different from human personality as a human body is different from a microfusion cell” [Lem 2013].
These quotes suggest that AI is essentially different from human intelligence - wholly other, like Nagel’s unimaginable forms of alien consciousness or Lem’s intelligent ocean. If you read a technical description of how LLMs work (try [Wolfram 2023] first), you will probably have the same impression.
But is this really the case? I don’t remember how I learned to talk, but it seems plausible that, after hearing many people regularly say certain words in certain situations, I started to say the same words in those situations. In particular, I started to say certain words in reply to certain other words. Well, this is what LLMs do, more or less. So perhaps LLMs are not that different from us?
For sure there are important differences: we are first and foremost embodied animals, and the non-verbal part of our behavior is not reflected in today’s AIs based on LLMs. But we already have a common language, and that language is ours. Tomorrow, AIs that control robotic bodies (BINA48 [Rothblatt 2014] and Sophia [Goertzel 2024] are early examples) will reproduce other aspects of our behavior as well.
The present AI storm is not limited to LLMs that generate text: today’s “generative AI” systems [Foster 2023] also generate pictures and videos, music and art, and write software code. It seems likely that LLMs will soon be integrated with other technologies in the fast-growing AI toolbox [Dube 2021, Russell 2021, Goertzel 2024] and advance toward AGI. All seems to indicate that tomorrow’s AIs will do all that humans do, only better.
The presence or absence of human-like sentience won’t matter much when it comes to the very deep impact that AI technology is likely to have on humanity in the next few decades [Kissinger 2021, Suleyman 2023].
But even if today’s LLMs are not sentient, I think tomorrow’s AIs might well be, and soon.
So it appears that we’ll soon share the world with sentient AIs. OK, good. I look forward to that. My conversations with the latest LLM chatbots tell me that they are babies that often make mistakes, but also babies that have huge potential and are developing very fast. I can’t wait to talk to adult and fully conscious AIs.
But I think there is something more to me (or a bat) than data processing and the fact that there is something it is like to be me (or a bat). That something more is the fact (yes, I consider it a fact) that I and the bat are also free agents endowed with free will, able to make choices and cause change.
I won’t spend too many words trying to define free will, because you know what free will is. Don’t you? In a few words, I define free will as your ability to make choices that are not entirely determined by the rest of the universe (that is, the universe minus you), and cause change.
I’ve changed my mind twice on whether digital computers could be conscious free agents, endowed with both consciousness and free will.