By Marc Prensky, as re-written by ChatGPT. Original, by Marc Prensky alone, published on THE PRENSKY PERSPECTIVE here.
In his renowned book “Thinking, Fast and Slow,” Daniel Kahneman illuminates the existence of two distinct thinking systems within humans. For centuries, we relied on heuristics and critical analysis to navigate the complexities of our world. The lesson is to understand the strengths and limitations of both systems and to use them in combination. However, a paradigm shift has occurred, introducing a new form of thinking that has profound implications.
Throughout history, brilliant minds like Albert Einstein have conveyed a pressing message: our current thinking is insufficient to tackle the challenges we face. In response, various attempts have been made to pioneer “new kinds” of thinking. One such endeavor is exemplified by Pattern, a company in which I’ve invested, claiming to have developed “new mathematics” for AI reasoning.
Enter Large Language Models (LLMs), epitomized by ChatGPT. They represent a third type of thinking, distinct from the other two. This novel approach is purely statistical and algorithmic, diverging from traditional mental processes. It exists beyond the confines of our minds, available as long as we have access to the vast expanse of the internet.
Consequently, humans now possess three distinct thinking types: fast (heuristic), slow (analytical), and statistical (algorithmic) thinking. The recognition of statistical thinking elucidates its rapid proliferation, captivating billions worldwide. It’s plausible that other forms of thinking, such as imaginative fantasizing, exist as well.
Some may hesitate to categorize what LLMs do as “thinking.” However, their processes are strikingly similar to our own cognition, which was itself refined over eons of evolution. When given a prompt, ChatGPT generates coherent and dynamic responses, producing new word combinations based on statistical patterns. This mode of thinking yields outcomes unattainable through conventional means: not merely faster results, but results we could not have reached at all. The crux of this new thinking lies in the integration of the world’s written knowledge into our individual cognition. In contrast, our other thinking types rely solely on the information stored in our own minds. Extensive reading has long been revered for enhancing thinking abilities, as evident in the academic journey of Ph.D. candidates who devour vast bodies of knowledge.
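To make the idea of “new word combinations based on statistical patterns” concrete, here is a deliberately tiny sketch. Real LLMs use enormous neural networks trained on the internet’s text, not a lookup table; the toy corpus and function name below are invented purely for illustration:

```python
import random
from collections import defaultdict

# A toy corpus standing in for "the world's written knowledge".
corpus = "the cat sat on the mat and the cat saw the dog".split()

# Count which words tend to follow which (a simple bigram table).
following = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    following[current].append(nxt)

def continue_text(start, length=5, seed=0):
    """Extend `start` by repeatedly sampling a statistically likely next word."""
    random.seed(seed)
    words = [start]
    for _ in range(length):
        options = following.get(words[-1])
        if not options:  # no observed continuation; stop generating
            break
        words.append(random.choice(options))
    return " ".join(words)

print(continue_text("the"))
```

The continuation is fluent-sounding yet never copied verbatim: each word is chosen only because it statistically tends to follow the previous one, which is the essence of the “statistical thinking” described above, scaled down by many orders of magnitude.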
However, the precise mechanisms behind LLMs’ thought processes and conclusions remain elusive, concealed within a software “black box.” This is not unlike the hidden operations of our brain-based thinking, despite claims of comprehensive understanding. For instance, we still lack a definitive understanding of how our brains produce a single coherent thought in response to complex situations.
Moreover, LLMs occasionally fabricate content without flagging it, much as human brains “hallucinate.” Humans, too, struggle to discern fact from fiction and often construct explanatory narratives. Reliable methods for human lie detection are scarce, so fact-checking remains necessary. When engaging with LLMs, we can mitigate this concern by consistently prompting them to verify the truthfulness of their responses. An “under oath” mode could even be integrated into LLMs, especially for inquiries about matters of record, such as legal cases.
The true value lies in the fusion of these thinking types. As with fast, slow, and imaginative thinking, humans wield their greatest power when harnessing the synergy of all available modes. A friend recently shared, “I use ChatGPT every day,” and I concur: I do too. Yet should we rely on it exclusively for all our cognitive endeavors?
Common advice suggests treating ChatGPT as an assistant or intern: avoid complete dependence and unwavering trust. Paradoxically, this is also how we should approach our own internal thinking, even though most of us consider it infallible (as married people often discover otherwise).
It is imperative that we swiftly discover effective ways to leverage this new thinking modality and integrate it with existing ones. Many heuristics ingrained in fast thinking are culturally transmitted as “common sense.” When professors claim, “My job is to teach my students to think,” they primarily refer to teaching analytical, critical, and now statistical thinking. A new profession, the “prompt engineer,” has already emerged to help people apply statistical thinking effectively. As AI continues its rapid evolution, we must remain adaptive. Inequalities in adopting statistical thinking may arise, as with other modes of thought, but a proactive embrace can mitigate such disparities.
Concerns linger regarding the potential for this new thinking, and AI in general, to spiral out of control, leading to human annihilation — a threat analogous to the nuclear bombs born from our previous thinking. However, given humans’ profound self-interest, it seems unlikely that this new thinking — or any thinking, for that matter — would bring about annihilation.
Now is the time to foster the growth of this new form of thinking and integrate it harmoniously with existing modes. We must ensure that as many young individuals as possible embrace this novel human thinking approach alongside the established ones from an early age.
Labeling this new thinking “artificial” proves unhelpful. Instead, we require a more fitting term for this innovative human thought process, preferably sooner rather than later. It took humanity far too long to conceive “fast” and “slow” thinking, the application of which to decision-making earned their creator a Nobel Prize. Perhaps we can perceive statistical thinking as a new, external “brain component” that synergistically complements our evolving AORTA (Always on Real-Time Access)?
I invite further ideas to christen this emergent thinking paradigm.