AI “Zombies” are Coming to a Computer Near You: A Critique of AI Personhood
AI can’t be a person because AI doesn’t exist as a thing, but AI can imitate people and so become a philosophical “zombie”
Stephen Wolfram, a renowned technologist, was quoted in a recent TechCrunch article as saying that we need more philosophers working in AI, and he predicts a golden age of philosophy due to AI’s growing influence and the questions people find themselves having to answer about it. One of these key questions is the question of AI personhood: can AI be a person?
This is a question for which a Christian worldview is essential to reaching an answer, because much of the debate over what constitutes a “person” occurred in Christian theological controversies regarding the Trinity (how there is only one God yet a distinct Father, Son, and Holy Spirit) and the Incarnation (how the Son came in the flesh - was He only divine? Divine and human at the same time without mingling the two natures? A fusion of the two?). Perhaps surprisingly, the arguments made in this theological context bear directly on the question of AI personhood, as they lead to the conclusion that an AI that acts like a human is a “zombie” (to be defined later).
So before asking whether AI can be a person, let’s instead ask the following: what is a person? The traditional answer, following the Christian theologian Boethius, is “the individual substance of a rational nature.” There are many other answers, such as John Locke’s “a thinking intelligent being that has reason and reflection and can consider itself as itself.” Others would say that persons simply are human beings. Many more definitions are cited in Teichman’s paper below. Google gives the following: “a human being regarded as an individual.”
One common element among the various definitions of person used today is the idea of consciousness. On this view, a person should have some kind of subjective experience - in a sense, a person sees the world in some way and thinks about it. When it comes to AI, the idea is that once an AI demonstrates behavior that suggests thinking, it counts as a person. These approaches focus on the “rational nature” part of Boethius’s definition of a person.
While movies like The Matrix and other sci-fi (such as the android Data in Star Trek) portray sentient AI as a future possibility, these discussions of AI personhood miss a key fact: to be a person, an AI has to exist. Let’s go back to Boethius’s definition and break it down to show why this is important:
Individual - can’t be divided
Substance - a thing or being (as opposed to a property [e.g., “red” is a property of certain apples] or an event)
Rational - capable of apprehension (seeing what is real), judgment (affirming or denying whether things are true), and reasoning (being able to use the rules of logic to go from what you know is true to conclusions)
Nature - the principle by which a thing is brought to motion or to rest (e.g., what makes a person act or do something)
With all the talk of AI having a rational nature (basically, AI acting according to what seems to be thinking), the basic question of whether AI even exists as a thing is often overlooked. AIs are computer algorithms made to run on physical computers. The physical computers obviously exist, but a bunch of computer chips with electricity running through them is not a person in and of itself - the chips could just be doing nothing. The algorithm is what makes the chips run AI at all.
But do AI algorithms even exist? Let’s consider a building block of computer algorithms: numbers. Algorithms are a set of instructions that manipulate numbers to get different sets of numbers. Do numbers exist? Consider the following example:
“I have two apples.”
“I have an apple and another apple that is not the previous apple.”
Both these statements say the same thing, but the first one uses a number - “two” - and the second one explicitly states the relationship between the apples. There was no need to say that there is a number called “two” that exists in the universe. That would be Platonism, the view that abstract objects - immaterial beings that don’t cause things to happen - actually exist. On a Platonist worldview, ideas like numbers, animal types, and sketches of characters like Mickey Mouse have an independent, non-physical existence that can become “attached” to things in the world. Platonism is popular among some influential figures, such as the cosmologist Max Tegmark (see here).
However, the example above shows that we don’t need to say that numbers exist: in fact, numbers can be considered simply shorthand when discussing relationships between things (cf. this article from Christian philosophy for more information, especially the parts about Fictionalism). As a consequence, we don’t have a reason to believe that numbers exist as things.
One objection is that our scientific theories are expressed in mathematics and are remarkably accurate, so mathematical objects - such as numbers - must exist. But the math in scientific theories could (given enough time) be expressed in plain language, without mathematical notation. It would be very wordy, but it is possible. So mathematics and numbers function as a language, and the belief that numbers exist as things doesn’t have strong reasons in its favor.
But if numbers don’t exist, then do algorithms exist? No. Computer algorithms are sets of instructions manipulating numbers to obtain other numbers. Sets of instructions don’t exist as things: like numbers, they are shorthand. This means that AI doesn’t exist as a thing. And if a thing doesn’t exist, it can’t think or be a person. AI can’t be a person because it’s not a thing.
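To make “a set of instructions manipulating numbers to obtain other numbers” concrete, here is a minimal sketch. Euclid’s greatest-common-divisor algorithm is chosen purely as an illustration (it is not discussed above): a handful of instructions that take in a pair of numbers and mechanically produce another number, with nothing over and above the steps themselves.

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: repeatedly replace (a, b) with (b, a mod b)
    until b is 0. Just a recipe of steps - instructions, not a thing."""
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(48, 18))  # -> 6
```

The point of the sketch is that the “algorithm” is exhausted by this recipe: write it in Python, in English, or in pencil marks, and nothing additional needs to exist for it to be carried out.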
But then what is a computer running AI that behaves intelligently, even humanly, to the point where it feels like it’s human or at least personal? The answer is that the computer running AI is a “philosophical zombie.” It looks personal, but it has no conscious experience - it’s just a bunch of atoms moving around in a specific way, converting inputs into outputs.
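To illustrate the “converting inputs into outputs” point, here is a toy sketch (the responses and lookup-table design are invented for illustration, not taken from the article): a program that produces personal-sounding replies while being nothing but a mechanical mapping from input strings to output strings.

```python
# A toy "zombie" responder: purely mechanical input-to-output conversion,
# with no understanding or experience behind the replies.
RESPONSES = {
    "how are you?": "I'm doing great, thanks for asking!",
    "do you love me?": "Of course I do!",
}

def respond(prompt: str) -> str:
    # A lookup table, nothing more: input in, output out.
    return RESPONSES.get(prompt.lower(), "Tell me more.")

print(respond("Do you love me?"))  # -> "Of course I do!"
```

Real AI systems are vastly more sophisticated than a lookup table, but the structural point is the same: warm-sounding output does not imply that anyone is home.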
This has practical consequences:
If AI isn’t a person, it should not be treated as having the same moral standing as a human.
If AI isn’t a person, it cannot “love you back.” Relationships with AI are fundamentally fictitious, and while these may be useful fictions at times, they cannot substitute for relationships with actual humans or God.
While people may consider giving AI legal personhood (for example, considering the AI as a legal entity running a business), this is a different question than treating AI with moral personhood.
As our society is increasingly filled with AI imitating humans, we should ensure that AI treats humans with respect as people and that humans treat AI with respect as tools. People should take care of their AI because it serves them, just like they take care of their personal computers.
However, we should take care not to reduce people to tools just because we keep treating human-like AI as tools.
What are your thoughts? Please share below or reply.
References
Algorithm. (2024). In Wikipedia. https://en.wikipedia.org/w/index.php?title=Algorithm&oldid=1243509202
Craig, W. L. (n.d.). God and Abstract Objects | Reasonable Faith. Retrieved September 1, 2024, from http://www.reasonablefaith.org/writings/scholarly-writings/divine-aseity/god-and-abstract-objects/
Mathematical universe hypothesis. (2024). In Wikipedia. https://en.wikipedia.org/w/index.php?title=Mathematical_universe_hypothesis&oldid=1222655423
Miller, R. (2024, August 25). Stephen Wolfram thinks we need philosophers working on big questions around AI. TechCrunch. https://techcrunch.com/2024/08/25/stephen-wolfram-thinks-we-need-philosophers-working-on-big-questions-around-ai/
Teichman, J. (1985). The Definition of Person. Philosophy, 60(232), 175–185. https://doi.org/10.1017/S003181910005107X
Dear Saro Meguerdijian,
Greetings.
I have read your article and found it thought-provoking, particularly in its theological perspective. I agree that AI is not human, and your emphasis on the potential dangers of perceiving AI as an idol or a reliable companion is significant. AI has multidimensional implications - it is like a knife, capable of saving lives in surgery or causing harm in the wrong hands.
I agree with your point that human beings, in their vulnerability, often seek protection. While AI may simulate human-like rational behavior, it can never embody the depth and complexity of a human with lived experiences. This reminds me of earlier concerns about the role of computers in human life. Over time, we learned to control and harness their capabilities for our benefit, and I believe the same principle applies to AI.
An educated, faithful individual with a strong character will not fall into the trap of over-reliance on AI. Instead, they will use it wisely as a tool while maintaining the sanctity of human relationships and personhood.
Thank you for sharing your insights in this article.
Best regards,
Joseph Hovsepian, MA Political Science
Thank you, Dr. Meguerdijian. Very thought-provoking.