“Captain, I Feel That Would Be Illogical”: Star Trek, Valence, and the Human Superpower AI Can’t Replicate
How Emotional Valence Shapes Human Decisions—and Gives Us an Unbeatable Edge in the Age of Artificial Intelligence
“Logic is the beginning of wisdom… not the end.”
— Spock, Star Trek VI: The Undiscovered Country
Looking back at the original Star Trek series that many of us grew up with, it’s striking how prophetic and insightful the show’s writers were—not just in envisioning technological advances, but in their grasp of human nature. Few dynamics were as compelling—or as psychologically revealing—as the one between Captain Kirk and Mr. Spock. Kirk was a highly instinctive and emotionally attuned leader, often charging ahead guided by both his mind and his gut. Spock, the half-Vulcan science officer, filtered everything through the sieve of logic. Their tension embodied a question that philosophers and social scientists have grappled with for centuries: Should we trust our feelings—or suppress them in favor of cold reason?
Yet what Star Trek suggested—long before modern science could prove it—is that humanity’s greatest strength may lie not in choosing between emotion and reason, but in integrating both. A key to understanding that integration lies in a concept shared across motivation theory and emotion science: valence.
The Myth of the Rational Mind
For centuries, Western thinkers imagined the ideal human as a rational calculator—capable of weighing facts and acting accordingly. This view was upended by neurologist Antonio Damasio, whose groundbreaking work with brain-injured patients revealed something astonishing: when people lose access to their emotions, their ability to make decisions collapses (Damasio, 1994).
These individuals could still solve logic problems or discuss ethics—but when asked whether to schedule an appointment at 10:00 or 10:30, they became paralyzed. Why? Because decision-making isn’t just about logic. It’s about what we value, and value is felt, not computed. Recall who made the final decisions on Star Trek.
Valence: The Emotional Currency of Choice
At the heart of emotional decision-making is valence—a psychological construct that refers to the intrinsic positivity or negativity of a mental state, sensation, or thought (Russell, 2003). Simply put, valence answers the question: How does this make me feel?
Every choice we face carries emotional weight:
A job offer may bring excitement (positive valence) but also dread (negative valence).
A relationship decision may elicit hope, but also fear.
Even mundane choices—chicken or fish?—trigger subtle valence signals.
These feelings are not noise in the system; they are the system. Valence provides a fast, embodied appraisal of what matters, what aligns with our goals, and what threatens our well-being.
Modern affective neuroscience views valence as one of the core dimensions of emotion, along with arousal. Together, they inform cognitive appraisal—our brain’s way of assigning meaning to events and making sense of the world (Barrett, 2017).
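To make the idea concrete, here is a deliberately toy sketch in Python. Nothing in it comes from the cited research; the names (CoreAffect, felt_weight, choose) and the numbers are invented purely for illustration. It treats each option as a bundle of feelings, each carrying a valence and an arousal, and lets the felt sum drive the choice:

```python
# Toy sketch of valence-weighted choice (illustrative only, not a
# scientific model). Each option carries one or more felt appraisals:
# a valence in [-1, 1] (negative = aversive, positive = appealing)
# and an arousal in [0, 1] (how strongly the feeling registers).

from dataclasses import dataclass

@dataclass
class CoreAffect:
    valence: float  # -1.0 (very negative) .. +1.0 (very positive)
    arousal: float  #  0.0 (calm) .. 1.0 (intensely activated)

def felt_weight(affect: CoreAffect) -> float:
    """Arousal amplifies how much a valence signal moves the decision."""
    return affect.valence * (1.0 + affect.arousal)

def choose(options: dict[str, list[CoreAffect]]) -> str:
    """Pick the option whose mixed feelings sum to the highest felt value."""
    return max(options, key=lambda name: sum(felt_weight(a) for a in options[name]))

# A job offer that excites (positive, high arousal) but also frightens:
options = {
    "accept offer": [CoreAffect(+0.8, 0.9), CoreAffect(-0.5, 0.7)],
    "stay put":     [CoreAffect(+0.2, 0.1)],
}
print(choose(options))  # -> "accept offer": excitement outweighs dread
```

The point is not the arithmetic but the architecture: the “decision” falls straight out of what is felt, which is precisely the signal Damasio’s patients had lost.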
Feelings as Navigational Tools
Rather than cloud our thinking, feelings, in the right balance, animate it. They:
Highlight value: Emotions flag what is meaningful.
Facilitate rapid decision-making: Intuition—shaped by emotional valence—enables fast, adaptive responses under uncertainty, often before conscious reasoning can engage. (For a deeper exploration of this dual-process model, see Kahneman, Thinking, Fast and Slow, 2011.)
Enable moral reasoning: Research by Jonathan Haidt (2001) suggests we often arrive at moral judgments emotionally first, then rationalize them after the fact.
Foster social cohesion: Feelings like empathy and guilt serve as social glue, regulating cooperation and trust.
Kirk and Spock eventually come to respect each other’s strengths. Spock learns that pure logic often falls short, and Kirk realizes his instincts benefit from rational constraint. The series thus proposes a model of integrated intelligence: reason guided by emotion.
Implications in the Age of AI
As artificial intelligence grows in capability, it's tempting to once again idolize the logic-driven mind. But AI lacks valence—it cannot feel what is good or bad, meaningful or empty. As such, it cannot assign value. That remains a human responsibility.
The future will not belong to those who suppress emotion in favor of cold reason, nor to those who abandon reason for raw feeling. It will belong to those who can synthesize the two—who can navigate a world of infinite information using the compass of felt meaning.
But Can’t AI Learn to Feel?
Some AI researchers are indeed working on systems that simulate emotional expression—teaching machines to recognize human feelings, respond empathetically, and even mimic emotional tones. This field, known as affective computing, aims to improve human-computer interaction, not to create genuine feeling (Picard, 1997).
But here’s the catch: simulating emotion isn’t the same as experiencing it. Machines can generate phrases like “I’m sorry” or modulate their voice to sound empathetic, but they have no valence—no internal sense of what is good or bad, joyful or tragic. They don’t suffer. They don’t yearn. They don’t care.
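To see how thin such a simulation can be, consider this deliberately crude sketch (the names and rules are hypothetical, standing in for far more sophisticated real systems):

```python
# Minimal sketch of simulated empathy (hypothetical code, not any real
# affective-computing system). The machine maps a detected sentiment
# label to a canned response; at no point is anything felt.

NEGATIVE_WORDS = {"sad", "lost", "afraid", "grieving", "alone"}

def detect_sentiment(text: str) -> str:
    """Crude stand-in for a sentiment classifier: spot negative words."""
    words = set(text.lower().split())
    return "negative" if words & NEGATIVE_WORDS else "neutral"

def empathetic_reply(text: str) -> str:
    """Return a sympathetic-sounding phrase chosen by rule, not by feeling."""
    if detect_sentiment(text) == "negative":
        return "I'm so sorry you're going through this."
    return "Tell me more."

print(empathetic_reply("I feel so alone since my father died"))
# -> "I'm so sorry you're going through this."
# The output sounds caring, but the system has no valence: the reply
# is just a branch on a string label.
```

Production systems swap the word list for large statistical models, but the structural point survives: the empathy is selected, not experienced.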
This limitation becomes especially stark when we consider the idea of machine consciousness. While some theorists speculate that future AI might attain awareness, we currently have no scientific framework for how subjective experience (what philosophers call qualia) could arise from code. This is known as the “hard problem” of consciousness (Chalmers, 1995)—not just because it’s unsolved, but because it may be unsolvable in purely computational terms.
So while AI can mimic emotional responses, it lacks the biological, evolutionary, and phenomenological grounding that gives human emotions their depth—and their moral weight. This is not a minor bug. It’s a defining boundary. And for now, it remains a uniquely human superpower.
Closing Thought
Perhaps Star Trek had it right all along. Spock’s evolution—from emotionless logician to someone who appreciates the depth of feeling—mirrors our own scientific journey. What we’re discovering now is that we are human not in spite of our emotions, but because of them.
References
Barrett, L. F. (2017). How emotions are made: The secret life of the brain. Houghton Mifflin Harcourt.
Chalmers, D. J. (1995). Facing up to the problem of consciousness. Journal of Consciousness Studies, 2(3), 200–219.
Damasio, A. R. (1994). Descartes’ error: Emotion, reason, and the human brain. G. P. Putnam’s Sons.
Haidt, J. (2001). The emotional dog and its rational tail: A social intuitionist approach to moral judgment. Psychological Review, 108(4), 814–834. https://doi.org/10.1037/0033-295X.108.4.814
Kahneman, D. (2011). Thinking, fast and slow. Farrar, Straus and Giroux.
Picard, R. W. (1997). Affective computing. MIT Press.
Russell, J. A. (2003). Core affect and the psychological construction of emotion. Psychological Review, 110(1), 145–172. https://doi.org/10.1037/0033-295X.110.1.145