Why AI will never be sentient

Rushabh Mehta
Jan 2, 2024


An AI machine that becomes sentient, as visualised in the film “Ex Machina”

AI (specifically, generative AI) has been at the centre of several discussions over the last year. The “memefication” of the internet means that once you hit a critical mass of people talking about something, the entire world wide web joins the discussion. AI has been everywhere, and everyone in the world who matters has talked about it. Business and political leaders are worried. Scientists and mathematicians are worried. Governments want to regulate it (because they can), and large companies want to regulate it to protect their so-called head start.

Worries are piled on worries. The world has broken into two factions:

  1. Accelerationists, who believe AI will unleash a golden future.
  2. Altruists, who believe AI will lead us to a dystopian and apocalyptic hell.

What if they all got it wrong?

Yes, very smart people are saying worrisome things, but let’s put on the sceptic’s hat for a moment and dig deeper. First, let’s start with what we mean by “AI”.

What is “AI”?

Let’s begin with what “AI” is. 99% of people don’t understand jargon, and my guess is that 99% of the people who have an opinion on AI don’t really understand what it is. So let me give it a shot and try to explain it in the simplest words possible (like explaining it to a seven-year-old). If you already know what a neural network is, you can skip this section.

“AI” consists of several different techniques for mimicking human intelligence, but recent conversations are mostly about a sub-field called “neural networks”. A neural network is a large network of switches (think of the ordinary electric switch you use to turn on your fan). Each switch is wired to several others, and it passes current on to them when it gets switched “on”. A switch goes “on” when it receives enough current from the switches it is wired to; if it does not get enough current, it stays “off” and passes no current to the switches connected downstream.
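To make the switch analogy concrete, here is a minimal sketch in Python. It is only an illustration: the function name, the weights, and the threshold are invented, not taken from any real system.

```python
# One "switch" (a threshold unit): it turns "on" only if the total
# current arriving from upstream switches crosses its threshold.

def switch(incoming, weights, threshold):
    """Return 1 ("on") if the weighted current crosses the threshold, else 0 ("off")."""
    total_current = sum(c * w for c, w in zip(incoming, weights))
    return 1 if total_current >= threshold else 0

# Three upstream switches feed this one; only when enough of them
# are "on" does the combined current cross the threshold.
print(switch([1, 0, 1], weights=[0.6, 0.9, 0.5], threshold=1.0))  # 1 ("on")
print(switch([0, 0, 1], weights=[0.6, 0.9, 0.5], threshold=1.0))  # 0 ("off")
```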

Simple neural network (each node is a switch) — Source: Wikipedia

There are input switches that receive current from the outside world (based on your sound waves, for example) and output switches that give you some result, say turning on your fan. Now suppose you say “turn on the fan”: the sound waves are converted into a sequence of binary numbers, each fed as electrical input to the input switches. These switches go on and off based on the input they get and their configuration, pass current on to other switches, and finally the fan either turns on or it doesn’t.

The first time you say this, nothing is going to happen. So you tweak the settings (the threshold at which current is passed on) of a few of these switches so that the output matches what you expect. You keep tweaking the thresholds of the switches in your network until it can reliably switch the fan “on” when you say so. This is what a neural network is: it takes a particular input and gives an output based on “what it has been trained on”.
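Here is a toy version of that “keep tweaking until it works” loop, again a sketch with invented data. (In practice the tweaking is usually done on the connection strengths, the weights, rather than on the thresholds themselves, but it is the same idea.)

```python
# A single switch learning to turn the fan on for the right sound pattern.
# The patterns and targets below are invented for illustration.

examples = [
    ([1, 1, 0], 1),  # pattern for "turn on the fan" -> fan should go on
    ([0, 1, 1], 0),  # some other phrase -> fan should stay off
    ([1, 0, 1], 0),  # another phrase -> fan should stay off
]

weights = [0.0, 0.0, 0.0]
threshold = 0.5

for _ in range(20):  # keep tweaking until the outputs match
    for pattern, target in examples:
        current = sum(p * w for p, w in zip(pattern, weights))
        output = 1 if current >= threshold else 0
        # nudge each weight a little toward the expected behaviour
        error = target - output
        weights = [w + 0.1 * error * p for w, p in zip(weights, pattern)]

for pattern, target in examples:
    current = sum(p * w for p, w in zip(pattern, weights))
    print(pattern, "->", 1 if current >= threshold else 0, "expected", target)
```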

A language model is essentially a set of switches that takes words as input and gives words as output (one at a time). The same idea can be applied to images as well. These switches (called neurons) are written in software; you can create a network of billions of such switches computationally and then train it on large volumes of text in the public domain (or the internet).
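Here is a caricature of that “words in, one word out at a time” loop, with a hard-coded lookup table standing in for the billions of trained switches. A real model computes these next-word choices from its weights; everything below is invented for illustration.

```python
# Toy "language model": predict the next word from the last two words,
# one word at a time, using a hand-written table instead of a network.

next_word = {
    ("turn", "on"): "the",
    ("on", "the"): "fan",
    ("the", "fan"): ".",
}

def generate(prompt, max_words=5):
    words = prompt.split()
    for _ in range(max_words):
        word = next_word.get(tuple(words[-2:]), ".")  # look up the last two words
        words.append(word)
        if word == ".":
            break
    return " ".join(words)

print(generate("turn on"))  # turn on the fan .
```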

Why AI can’t “think”

So essentially a neural network is a large computational machine. That is, it computes an output based on inputs, and as long as the model remains the same, it is deterministic: it will give you the same output if you give it the same set of inputs. (Yes, you can randomise some of it, but if you feed in the same random seed, it will repeat itself. A computer is essentially “repeatable”.)
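You can see this repeatability in a few lines of Python; the word list and seed values here are, of course, made up:

```python
# Even "randomised" output is repeatable: fixing the seed fixes the result.
import random

def sample_reply(seed):
    rng = random.Random(seed)  # same seed -> same internal state
    words = ["fan", "light", "door", "music"]
    return " ".join(rng.choice(words) for _ in range(4))

print(sample_reply(42))
print(sample_reply(42))  # identical to the line above
print(sample_reply(7))   # a different seed gives a different reply
```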

Thinking, on the other hand, is highly complex. It has elements of both pattern matching based on past experiences and deliberate analytical thinking. A thought is expressed as a set of complex emotions that is converted into words by our brain. Daniel Kahneman calls these two distinct modes “fast thinking” and “slow thinking”; we also speak of the rational, methodical “left brain” and the creative, unpredictable “right brain”.

The most distinctive feature of human thinking is that it is wired into the physical world through our senses, and it can somehow make a “connection” between the real and the imaginary world. Thinking involves several domains (math, logic, physics, linguistics, philosophy) apart from the topic of the thought itself. These are complex domains that blur into each other: physics blurs into math, math into logic, logic into linguistics, linguistics into philosophy. All of this is possible because somehow we have this “bridge” that connects our inner and outer selves.

The computer can only “think” in terms of cold numbers and logic. It has no duality or “connection”. It has no anchoring in the immutable laws of physics and math. In my view, this makes it severely constrained. It lacks an entire dimension, a sense of perspective. It cannot pinch itself back to reality.

Can I pinch myself?

The machine can never pinch itself. It can’t tell for sure whether anything is real or not, because it lacks physical awareness (I would wager it lacks any awareness at all). Humans are “aware” because we inhabit multiple worlds (at least the imaginary and the physical). Our connection can only be experienced at one point in time for us to make sense of anything. And I believe it is this connection that makes us alive.

As someone who has spent several years programming computers and probably understands how most of this works, I cannot fathom how a computer can be “alive”. At best, these “AI” models can mimic human speech and certain skills. They can be very good at tasks computers are good at and humans are very slow at (repetitive tasks, for example), but I see no connection to this very real sense of “here and now”.

This is why I think people are missing a very simple point when they talk about AI singularity or sentience, and hence their understanding of its capabilities is very wrong.

So is everyone wrong?

AI is yet another type of machine, that’s all. Humans have always been inventive about reducing their grunt work, and humans have always had problems with machines. The first worry is that machines will lead to job losses. But “AI” is the least of the concerns: “dumb” machines like cars and scanners have made more jobs redundant than any computer.

Is the hype justified?

I think the current exuberance around AI is massively over-hyped. My view is that AI is not as exceptional compared to other computation models as it is made out to be; in some cases, the output is still very primitive. AI will help us in some ways, or it won’t. Yes, it can talk like a human and take away jobs that seem repetitive to us, like answering the same query again and again. But anything for which enough economic incentive exists to be automated will probably be automated, whether it uses AI or not.

Will it be dangerous?

Of course, every machine is dangerous. Try cutting yourself with a knife or touching a live electric wire (actually, don’t!).

Is there a limit to automation?

That is an interesting question to ask. How much automation is okay? I think as long as we can digest it as a society, things should be fine. I don’t have a specific answer, but overall I am not worried about “AI” or any other kind of automation taking over society. I am more worried about free speech and democracy. Those seem far more worthy of our attention than “AI”.


Rushabh Mehta

founder, frappe | the best code is the one that is not written