What If Emotion Is the Edge? A New Study Aims to Map the Boundaries Between Human and Artificial Cognition
Emotionally-driven reason vs. AI pattern recognition
By David Ragland, DBA
Managing Director, FuturePoint Digital
What if the real boundary between human and artificial intelligence isn’t logic, memory, or speed—but emotionally-driven cognitive capabilities?
That’s the question driving a newly proposed empirical study I’ve submitted to a peer-reviewed academic journal. While the manuscript is currently under review, I want to take this opportunity to outline the scope and significance of the research—and invite collaboration from researchers, technologists, and leadership practitioners alike.
Note: This article outlines a proposed empirical study currently under academic review. No data has been collected or analyzed. All content reflects research intentions rather than results.
Rethinking Intelligence: Emotion as Cognitive Infrastructure
For centuries, emotion was seen as the enemy of reason, something to suppress in the pursuit of rational thought. From Plato to Spock, our cultural imagination treated emotion as separate from thinking, even as an impediment to it.
But neuroscience now tells a different story.
Antonio Damasio’s somatic marker hypothesis holds that emotion is not a disruption to cognition but a precondition for effective decision-making. Patients with damage to the brain’s emotional-processing centers could solve logic problems yet failed catastrophically in daily life. Deprived of emotional signals, they couldn’t prioritize, assess risk, or make sound decisions in real-world contexts.
In this view, emotion in the right balance isn’t irrational; it’s a critical part of our cognitive infrastructure. It allows us to weigh social meaning, infer intention, and navigate ambiguity. These emotionally-driven cognitive capabilities are fundamental to what we consider human intelligence.
So where does that leave artificial intelligence?
The LSAT: Where AI Hits a Cognitive Ceiling
AI systems like GPT-4 have shown stunning performance on standardized exams. According to OpenAI’s own benchmarks, GPT-4 scored in the 99th percentile on GRE Verbal, a test that relies heavily on vocabulary and semantic pattern recognition. GPT-4 and other large models have posted similarly strong results on the SAT, the GMAT, and the bar exam.
But the LSAT is different. It requires no prior subject-matter knowledge, only raw reasoning, critical reading, and analytical thinking.
And on the LSAT, GPT-4’s performance dropped to the 88th percentile, notably lower than its scores on other assessments.
Why? Possibly because the LSAT doesn’t just test logic—it tests how people interpret arguments, identify implicit assumptions, and navigate ambiguous or ethically nuanced scenarios. Many of these items are infused with rhetorical complexity, social context, or subtle emotional cues.
These are domains where emotionally-driven cognitive capabilities, the ones that allow us to detect such subtleties, play a decisive role, and where AI models, which simulate emotional language but lack emotional grounding, tend to struggle.
The Study: Isolating Emotion in Human vs. AI Reasoning
The proposed empirical study is designed to test this distinction directly.
In brief, the study compares the performance of human participants and large language models (LLMs) on a set of LSAT-style questions stratified into three types:
Purely logical: deductive reasoning, conditional logic
Emotionally nuanced: arguments involving social context or emotional subtext
Ethically ambiguous: scenarios involving moral dilemmas or conflicting values
Human participants will also complete two established psychological assessments:
The MSCEIT (Mayer-Salovey-Caruso Emotional Intelligence Test) to measure emotional intelligence
The DIT-2 (Defining Issues Test) to measure moral reasoning
The hypothesis is straightforward: LLMs will perform best on purely logical items, while humans will show a performance advantage on emotionally and ethically complex questions, with that advantage expected to track participants’ emotional intelligence and moral reasoning scores.
This approach aims to empirically isolate emotionally-driven cognitive capabilities as a domain where human intelligence remains distinct—and potentially irreplaceable.
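To make the planned comparison concrete, here is a minimal illustrative sketch in Python of the kind of analysis this design implies: group-level accuracy comparisons by item type, followed by within-human correlations between MSCEIT/DIT-2 scores and accuracy on the emotionally and ethically complex items. The input file, column names, and the choice of Welch t-tests and Pearson correlations are my own assumptions for the sake of illustration, not the study’s registered analysis plan.

```python
# Illustrative analysis sketch only: "responses.csv", the column names, and
# the statistical tests below are assumptions, not the study's actual plan.
import pandas as pd
from scipy import stats

ITEM_TYPES = ["logical", "emotional", "ethical"]

# One row per respondent (human participant or LLM run): a "group" column
# ("human" or "llm"), per-type accuracy columns, and, for humans only,
# MSCEIT and DIT-2 scores.
df = pd.read_csv("responses.csv")

# 1. Mean accuracy by group and item type.
accuracy_cols = [f"acc_{t}" for t in ITEM_TYPES]
print(df.groupby("group")[accuracy_cols].mean())

# 2. Human vs. LLM comparison on each item type (Welch's t-test).
for t in ITEM_TYPES:
    humans = df.loc[df["group"] == "human", f"acc_{t}"]
    llms = df.loc[df["group"] == "llm", f"acc_{t}"]
    result = stats.ttest_ind(humans, llms, equal_var=False)
    print(f"{t}: t = {result.statistic:.2f}, p = {result.pvalue:.4f}")

# 3. Within humans: do MSCEIT and DIT-2 scores track accuracy on the
#    emotionally nuanced and ethically ambiguous items?
humans_only = df[df["group"] == "human"]
for measure in ["msceit_total", "dit2_n2"]:
    for t in ["emotional", "ethical"]:
        r, p = stats.pearsonr(humans_only[measure], humans_only[f"acc_{t}"])
        print(f"{measure} vs. acc_{t}: r = {r:.2f}, p = {p:.4f}")
```

If the hypothesis holds, step 2 should show the human advantage concentrated in the emotional and ethical item types, and step 3 should show positive correlations between those item types and the MSCEIT and DIT-2 scores.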
From Research to Real-World Application
Beyond academic interest, this research has immediate practical implications.
To translate the study’s findings into applied tools, I’ve proposed two frameworks:
Emotionally Informed Cognition Framework (EICF): A dynamic, modular model designed to help organizations assess and strengthen human reasoning in emotionally and ethically complex contexts. This could be applied in leadership training, AI-human team design, hiring, and risk management.
Emotionally Informed Reasoning Index (EIRI): A proposed new assessment instrument that evaluates a person’s capacity for emotionally-driven cognitive reasoning—potentially useful in high-stakes professions like law, diplomacy, education, and ethics-driven AI oversight.
These tools could form part of a broader strategy to ensure that human reasoning isn’t devalued in an AI-driven world—but enhanced.
The Importance of This Research
We now live in a world where AI can simulate intelligent behavior. But simulation is not understanding. And when it comes to high-stakes decisions that involve people’s well-being, moral judgment, or social meaning, that difference may matter significantly.
Emotionally-driven cognitive capabilities may be the last mile of the intelligence race—and the most important.
This study seeks to map that territory with empirical precision. And I’m actively looking to partner with universities, research institutes, and companies that want to explore it further.
For collaboration, inquiries, or to discuss research alignment:
David.Ragland@FuturePointDigital.com
Let’s ensure we know where human intelligence still leads—and how best to protect and leverage it.
About the Author: David Ragland is a partner at FuturePoint Digital, a research-based consultancy focused on exploring the intersection of social science and artificial intelligence. He is the author of Classical Wisdom for Modern Leaders: Emotional Intelligence and AI, The Multiplier Effect: AI and Organizational Dynamics, and The Human Renaissance: Why AI Will Make Us More Human, Not Less. His forthcoming book, The Intelligence Divide: Mapping the Boundaries Between Human and Artificial Minds, investigates what fundamentally separates human and machine cognition. A former senior technology executive and adjunct professor, David holds a Doctorate in Business Administration from IE University (Madrid), an M.S. in Information and Telecommunications Systems from Johns Hopkins University, and a B.S. in Psychology from James Madison University. He also completed a certificate in Artificial Intelligence and Business Strategy at MIT.
References
Barrett, L. F. (2017). How emotions are made: The secret life of the brain. Houghton Mifflin Harcourt.
Bechara, A., Damasio, A. R., Damasio, H., & Anderson, S. W. (1994). Insensitivity to future consequences following damage to human prefrontal cortex. Cognition, 50(1–3), 7–15.
Bostrom, N. (2014). Superintelligence: Paths, dangers, strategies. Oxford University Press.
Chalmers, D. J. (1996). The conscious mind: In search of a fundamental theory. Oxford University Press.
Damasio, A. R. (1994). Descartes’ error: Emotion, reason, and the human brain. Avon Books.
Liu, L., Liu, Y., Zhang, Y., & Tang, J. (2024). Evaluating the performance of GPT-4 on the GMAT Focus exam questions. ResearchGate. https://www.researchgate.net/publication/378502169
Malik, H., Agarwal, A., & Kapania, R. (2024). Logical reasoning and large language models: A case study on LSAT performance. arXiv preprint. https://arxiv.org/abs/2401.02985
Mayer, J. D., Salovey, P., & Caruso, D. R. (2002). Mayer-Salovey-Caruso Emotional Intelligence Test (MSCEIT) user's manual. Multi-Health Systems.
OpenAI. (2023). GPT-4 technical report. https://doi.org/10.48550/arXiv.2303.08774
Rest, J. R., Narvaez, D., Bebeau, M. J., & Thoma, S. J. (1999). Postconventional moral thinking: A neo-Kohlbergian approach. Lawrence Erlbaum Associates.
Ziems, C., Wu, P., Fu, Y., Wang, D., & Yang, D. (2023). Can large language models reason about the world? arXiv preprint. https://arxiv.org/abs/2305.13937