Xavier Hernandez
3 min read - Jan 17, 2025
In the world of sci-fi and dystopian futures, the fear of AI “waking up” is a powerful one. The idea that computers, once simply tools of our design, could suddenly become autonomous—making their own decisions, controlling power grids, military weapons, and even the internet—is terrifying. But can this really happen? And, more importantly, if AI ever becomes self-operating, can we call it conscious?
The idea of AI becoming sentient often stems from the behaviorist school of thought in philosophy, which focuses on the external behaviors of an entity. According to behaviorism, if something behaves like a conscious being, it can be treated as one. But this view is deeply flawed when applied to consciousness: there is more to being conscious than external actions.
Human consciousness isn't just about behavior; it's about how it feels to be human, the subjective experience of the world. Humans have profound experiences: religious moments where the ego dissolves and we feel at one with the universe, or the simple act of listening to music and getting goosebumps from a song that moves us. Then there's the pain of heartbreak, the physical ache in your chest when you lose someone close to you. These are not things AI can experience. No matter how well AI mimics human behavior, it will never feel these emotions the way we do.
AI seems alive. It can pass the Turing Test, hold conversations, and even make decisions that mimic human actions. But we need to remember: AI is modeled after the human mind. It behaves the way it does not because it feels anything, but because it is designed to mimic us.
It's easy to see how man-made objects reflect the human beings who use them. Elevators exist because humans do not have wings, and they have certain dimensions because human bodies have a certain size. An automobile's controls are modeled after our limbs, and keyboards are modeled after our hands. Things that are 'grippable' or 'pocket-sized' are not a reflection of the objects themselves, but of the humans interacting with them.
It's no surprise, then, that AI is modeled after the human psyche. The internal structures that give humans an understanding of mathematics, logic, space, time, and so on are etched into AI models. These models are then trained on human-curated datasets, shaped with positive and negative reinforcement, and even handed a particular view of the world. Just look at the following interaction I had with ChatGPT:

[Screenshot of a ChatGPT conversation]
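To make "positive and negative reinforcement" concrete, here is a minimal, hypothetical Python sketch. Real training pipelines (RLHF and the like) are vastly more involved; this only shows the shape of the idea: human feedback nudges a score up or down, and the model's "preference" is nothing more than the accumulated nudges.

```python
# A minimal, hypothetical sketch of positive/negative reinforcement.
# Real systems (e.g., RLHF) are far more involved; this shows only the shape.
scores = {"helpful answer": 0.0, "rude answer": 0.0}

# Human raters reward one kind of response and penalize another.
feedback = [("helpful answer", +1), ("rude answer", -1), ("helpful answer", +1)]

LEARNING_RATE = 0.5
for response, signal in feedback:
    scores[response] += LEARNING_RATE * signal  # nudge the score up or down

print(scores)                       # {'helpful answer': 1.0, 'rude answer': -0.5}
print(max(scores, key=scores.get))  # 'helpful answer'
```

The machine ends up "preferring" the helpful answer, but that preference was put there by human raters. Nothing was felt.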
AI doesn't feel or have intuition; it deduces. This is evident if you have ever used an AI image generator. Images of humans generated with AI often have too few or too many fingers or toes. The AI is doing its best to pick up patterns in what humans look like, and to the AI the number of fingers and toes is somewhat arbitrary. It has no intuition that humans have ten fingers and ten toes; it is simply doing its best to infer from datasets and training. Given the same dataset, a human would immediately deduce the correct number of fingers and toes. Heck, we even know that great white sharks have roughly 300 teeth arranged in up to seven rows.
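Here's a toy illustration of that kind of statistical guessing. This is not how image generators actually work; it's an invented example showing the shape of the problem: a "model" that has only seen noisy examples of finger counts will faithfully reproduce the noise.

```python
import random
from collections import Counter

# Hypothetical noisy "training data": occluded or mislabeled hands creep in.
finger_counts = [5] * 90 + [4] * 6 + [6] * 4

# The "model" learns nothing but the frequency of what it has seen.
distribution = Counter(finger_counts)

def generate_finger_count() -> int:
    """Sample a finger count in proportion to the training data."""
    values = list(distribution.keys())
    weights = list(distribution.values())
    return random.choices(values, weights=weights)[0]

print([generate_finger_count() for _ in range(10)])
# e.g. [5, 5, 5, 4, 5, 5, 5, 6, 5, 5] -- statistically plausible,
# but with no grasp of the rule that a hand simply has five fingers.
```

A human shown the same data would state the rule after one glance; the sampler above never will.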
Now, if the fear of AI "waking up" means it can act on its own, we need to step back and ask: is this what we really mean by sentience? Task automation and machines reacting to events are nothing new. Computers have been reacting to inputs, running processes, and triggering actions based on data for decades. Adding "AI" to this automation doesn't suddenly imbue the system with consciousness; it simply gives the appearance of human-like behavior. But remember: we are the ones programming it.
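Strip away the branding, and "a system that acts on its own" often looks like the decades-old pattern below. This is a minimal, hypothetical sketch: the machine reacts and "decides," but every decision was written in advance by a person.

```python
# A hypothetical rule-driven monitor: it "acts on its own," yet every
# action below was authored ahead of time by a human programmer.
def on_reading(temperature_c: float) -> str:
    """React to an incoming sensor value with a pre-programmed rule."""
    if temperature_c > 90.0:
        return "shut down generator"
    if temperature_c > 75.0:
        return "start cooling fans"
    return "no action"

for reading in (62.0, 78.5, 93.1):
    print(f"{reading} -> {on_reading(reading)}")
# 62.0 -> no action
# 78.5 -> start cooling fans
# 93.1 -> shut down generator
```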
Once again, the true danger lies not in the tools humans create, but in the hands of the humans themselves. Just like in post-apocalyptic stories like The Walking Dead, where the real threat isn’t the zombies but the other surviving humans, the danger of AI lies in how it’s used and manipulated. Whether it’s controlling power grids, spreading misinformation, or making military decisions, the real threat is not the technology—it’s the human actors behind it.
The bottom line is this: AI is incredibly good at mimicking human behavior. But it’s not actually “in there.” It’s not feeling anything—it’s just responding to inputs in ways we’ve taught it to. We’ve designed AI to act like us, but it will never be like us. It will never feel heartbreak, or experience the rush of joy that comes with a favorite song, or the overwhelming sensation of being in love. These experiences are uniquely human, tied to consciousness and self-awareness, which AI will never possess.
And perhaps the most important warning is this: we must be cautious, not of the machines, but of ourselves. In our pursuit of progress and innovation, we often forget the true consequences of our actions. It is humans, in their wisdom and folly, who have the ability to shape the future, for better or for worse. Let’s not forget that we must wield our power with responsibility, or we risk becoming the architects of our own downfall.