The AI Panic Letter of 2023
What is it about artificial intelligence that has the world in such an uproar? An open letter is circulating (Future of Life Institute, 2023), signed by heads of industry and famous thinkers including Max Tegmark and a number of other respectable minds. The letter, referred to in this piece as the AI Panic Letter, calls for a six-month pause in the development of AI systems more powerful than GPT-4, arguing that people are not ready to be bombarded with poor-quality, baseless content. The rationale given for this atypical course of action is poorly explained and out of touch with reality: IT systems already flood the market with untruth and misinformation; the only difference is that most of it today is more or less written by people. Further, experts from across the cognitive sciences would dispute the claim the letter seems to imply, that ChatGPT is a sentient, conscious “digital mind.” Even for someone who dissents from that description of the chatbot’s powers, as cognitive science all-star Dr. Melanie Mitchell does, the array of famous names on the letter demands attention and perhaps even gives us reason to reflect deeply upon the situation we find ourselves in. This essay lays out the situation around the AI Panic Letter in terms of a newly developed interdisciplinary approach to moral philosophy called Worldview Ethics.
With respect to the general uproar around AI presently making the rounds in the media, Worldview Ethics offers a robust framework that combines moral philosophy and cognitive science into a working base, consistent with contemporary science, from which to understand these issues. For our purposes as observers of a panic over a crisis that, as best anyone can tell, shows no signs of actually happening, Worldview Ethics serves as a grounding rod: by insisting on engagement on scientific grounds wherever possible, it provides a baseline from which to evaluate claims like “the machines have come to life” or “the machines are going to come to life if we don’t change our evil ways and pause the GPU operations.”
As Dr. Mitchell states, “The glib use by researchers of ill-defined hype-y terms such as ‘Artificial General Intelligence’ does not serve the public (or even the research community) in making sense of what these systems can and cannot do” (Mitchell, 2023). Our desire to create quality policy is not served by rash action and unclear thought. At the least, a well-structured account of the present situation could enable a more efficient conversation around the set of issues being pointed to and shouted about in this AI panic event. Without setting our sights too high, we could even hope to produce a definition of consciousness acceptable to participants in the discussion and thereby clarify matters more broadly.
First things first, is ChatGPT self-aware?
The answer is clearly no, at least if we listen to researchers who have spent many productive years studying under masters of cognitive science and complexity theory, and who make coherent, easily intelligible arguments from the evidence: ChatGPT is not conscious, and we have no reason to believe that it, or anything even remotely similar to it in design, could be thought of as conscious. To put it bluntly: yes, the thing in this case uses language in an advanced way; no, the thing is not conscious and does not possess a will. As Mitchell puts it: “...there is a lot we do understand—e.g., enough to know that they are certainly not sentient for any useful meaning of the word—and scientists are making progress all the time in understanding them better” (Mitchell, 2023).
Why We Need Moral Philosophy
There is an academic discipline that is supposed to help with issues like these, but it has spent about a century outside mainstream scholarly interest after being set aside by G.E. Moore in Principia Ethica around the turn of the twentieth century. Normative moral philosophy has developed over the intervening time and is already used in many institutions to structure norms of behavior that help people navigate bureaucracies. However, moral philosophers in general have not kept up with developments in the cognitive sciences, and moral philosophy now seems out of sync with the day’s developments. The modern discipline of university-led normative ethical theory has long been failing to achieve its purpose, at least with respect to contemporary cognitive science. This judgment is easy to make because issues such as consciousness and alignment are being debated fiercely outside the jurisdiction of academic normative ethicists; the discipline best equipped to answer these increasingly important inquiries is nowhere to be found. Applying the language of moral philosophy alongside cognitive science would yield a robust and accessible conversation around even advanced concepts such as the alignment of an AI’s will.
In philosophy, alignment of will is a problem encountered not only at the social level but at the individual level as well. Cognitive science and the therapeutic approaches it has given rise to, such as cognitive behavioral therapy and dialectical behavior therapy, involve using the conscious mind to heal the symptoms experienced by the affected body. It has long been possible for a computer to beat the best human chess player alive, but the first major application capable of interacting with text at a high enough level to captivate humanity at scale has given rise to many hyperbolic accounts of the power, and potential for harm, of systems that process language. Moral philosophers, by working with cognitive science, can field a robust explanatory framework for lively debate around issues such as what constitutes consciousness and what agency entails.
The absence of moral philosophy from the contemporary field of cognitive science, which is giving rise to new modalities of access to language, is evidenced by the difficulty people have discussing moral problems such as whether AIs are sufficiently controllable by their operators. For better or worse, the language of morality and the language of bodily autonomy are core components of a healthy discourse around artificial intelligence. The disorganized state of the contemporary debate is evidence that moral philosophy’s leadership is needed.
Though many conceive of moral philosophy as answering only questions about the rightness and wrongness of actions, the qualities people have or should cultivate in themselves, and individual bodily autonomy, the fact is that moral philosophy has always been the study of what people think of themselves. Establishing a clear literature with a coherent account of what, across the sciences and the vast body of knowledge we have created in our time on this earth, we think we are will provide us with the necessary tools to have an informed discussion about when the machines might be able to come alive.
The AI Doom Thesis
Proponents of the AI Doom Thesis, such as Eliezer Yudkowsky, subscribe to the seemingly outlandish belief that AI will end life on earth as we know it unless it is somehow brought to a halt, because the AI’s will and the will of humankind cannot be aligned (Yudkowsky, 2023). Melanie Mitchell disagrees with essentially every layer of this thesis, providing counterpoints to what critics identify as unproven assumptions in Yudkowsky’s arguments. The thesis Yudkowsky advocates is related to Nick Bostrom’s 2014 book Superintelligence, which has, in retrospect, gotten a lot of things fundamentally wrong about AI. Mitchell’s constructive criticism is that “I think that the only way to resolve the debate is to gain a better scientific understanding of what intelligence is, and what diverse forms it can take,” calling into question the core assumption that a machine or a program can be intelligent in the first place (Bostrom, 2014; Mitchell, 2023). A better scientific understanding of intelligence will come with an understanding of the sorts of things that can be thought of as intelligent. Presently, those sorts of things include people and animals, but not machines.
AI at the present time is still not thought to reach the level Nick Bostrom and others call AGI, or artificial general intelligence, a term Mitchell explains is poorly defined and mainly employed as a fear-inducing buzzword by AI Doom Thesis proponents. In fact, the argument could be made that ChatGPT is fairly similar to Google: instead of being programmed to serve links as intelligently as possible, it is programmed to tell the person what the links would have informed them of, had they gone and clicked through and read them all. Google is not an intelligence but an index of the product of thousands of individual human intelligences, and ChatGPT is fundamentally the same in this regard. That said, ChatGPT has been brilliantly used by developers and hobbyists around the world to do a wide range of things that were not possible before, which is certainly a testament to the power and novelty of the world’s first advanced chatbot. It just isn’t an intelligence, and it isn’t a person, because it has no content until people have first done or written things it can be “trained” on or asked about.
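To make the analogy concrete, here is a toy sketch in Python. It is entirely illustrative: neither Google nor ChatGPT works this simply, and the bigram sampler below is only a very loose stand-in for an LLM’s next-token prediction. The point it demonstrates is the one made above: both the “search engine” and the “chatbot” are built from the same human-written corpus, and neither contains anything people did not first write.

```python
import random
from collections import defaultdict

# A tiny corpus standing in for the human-written web.
documents = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "the cat chased the dog",
]

# Search-engine style: an inverted index mapping each word to the
# human-written documents that contain it.
index = defaultdict(set)
for doc_id, text in enumerate(documents):
    for word in text.split():
        index[word].add(doc_id)

def search(word):
    """Retrieve the human-written documents containing the query word."""
    return [documents[i] for i in sorted(index[word])]

# Chatbot style: a bigram table recording which word follows which in the
# same corpus, then sampling to produce new-looking text.
follows = defaultdict(list)
for text in documents:
    words = text.split()
    for a, b in zip(words, words[1:]):
        follows[a].append(b)

def generate(start, length=5):
    """Sample a continuation; every transition was authored by a person."""
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

print(search("cat"))    # retrieval: verbatim human text
print(generate("the"))  # generation: recombined human text
```

Either way, the output is downstream of human authorship; the generator merely recombines what the index would have retrieved.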
While the release of ChatGPT and its subsequent massive growth have undoubtedly ushered in a tremendous shift, moving advanced capabilities from tiny groups of scientists to millions of people around the world, ChatGPT nonetheless does not have consciousness. The LLM-based model differs from its predecessors only in degree, not in what sort of thing a chatbot fundamentally is. Part of the confusion comes from mixed signals about what AI is. Frankly, Siri and Alexa are AIs, and the piece of code that suggests words when you type into the Google search bar is an AI. But AI is not like a person: it does not have a will, it is not sentient, and it is not conscious. AI is possible because language allows information to be stored and accessed later. There is truly nothing to fear from ChatGPT that we did not already fear from Facebook and Twitter and the titan technology companies that have used this technology for decades to impact the lives of billions of people without asking the question “should we?” advocated relentlessly in the AI Panic Letter.
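For a sense of how mundane this kind of AI can be, here is a minimal sketch of a search-bar-style word suggester. It is a hypothetical illustration, not Google’s actual system, but it shows the principle claimed above: stored human language, accessed later and ranked by how often people have typed it before.

```python
from collections import Counter

# Hypothetical record of things people have previously typed.
typed_history = ["weather", "weather today", "web browser", "weekend plans",
                 "weather forecast", "web design"]

# Count how often each word appears across the stored human input.
word_counts = Counter(w for phrase in typed_history for w in phrase.split())

def suggest(prefix, k=3):
    """Return the k most frequently typed words starting with the prefix."""
    matches = [(w, c) for w, c in word_counts.items() if w.startswith(prefix)]
    return [w for w, _ in sorted(matches, key=lambda wc: -wc[1])[:k]]

print(suggest("we"))  # ['weather', 'web', 'weekend']
```

Nothing here wills, senses, or understands; it is a ledger of past human behavior played back on demand, which is the sense in which such systems are “AI.”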
The reason we can be so confident that AI is unlikely to suddenly take over the world and kill everyone is that it is already here, and lots of people use it all the time. These technologies are more accurately described as ways to augment normal human capabilities than as the sorts of things that can be thought of as conscious.
A Response from the Standpoint of Worldview Ethics
Worldview Ethics expands the discipline of ethical philosophy from its current status as normative theory into a predictive science, applying the theoretical groundwork pieced together over the past two and a half millennia. The power of Worldview Ethics is intrinsic to the philosophy of liberal individualism that undergirds the government of the United States, among many others. Many human beings think about the world in terms of liberal individualism because it is an efficient framework for discussions about rights and laws that avoids doing unnecessary damage to the people these codes are ultimately intended to serve. Worldview Ethics is a scientific extension and clarification of moral philosophy which holds that cognitive investigations of mind and brain can yield important theoretical insight, assisting us as we work to create a body of language that describes things like intelligence and personhood clearly. Yes, this effort will be interdisciplinary, but it will be worth it.
Human beings are more than nodes in a network, and the fabric of society is changing at a fundamental level in response to continual increases in the throughput its cybernetic technology can support. Instead of being relegated to the “possible universes” of armchair philosophy, ethics will now find itself fully competent in a predictive capacity insofar as it interacts with cognitive science. Thanks to the cognitive sciences on one hand and to blockchain and AI technologies on the other, ethics is now more than capable of making empirically verifiable predictions about matters on-chain and in civil society, as well as within individual bodies.
The response from the Worldview Ethics framework to the AI Panic Letter of 2023 is fairly simple. First of all, applause is due to Dr. Mitchell for highlighting important issues with the letter in her recent blog article:
“The ‘unexplainable’ narrative gives rise to fear, and it has been argued that, to a degree, public fear of AI is actually useful for the tech companies selling it, since the flip-side of the fear is the belief that these systems are truly powerful and big companies would be foolish not to adopt them” (Mitchell, 2023).
It is of course important to note that the primary function of the AI Panic Letter is to generate as much attention as possible and direct it all at AI development. Perhaps at a deeper level another game is unfolding, as David Hoffman rightly points out by calling this a “Moloch problem”: the researchers who actually go on vacation for six months only end up hopelessly behind in the same arms race when they return. Deeper still lies the possibility that Elon Musk has some scheme that competes with OpenAI and is trying to clear a path for it, or that Apple executives and founders signed the letter to cast shade on an industry arising without their assistance and to buy their own engineers time to produce alternatives.
In essence, we have people arguing that an unexplainable bogeyman is going to come incomprehensibly out of nowhere and destroy the world. They then recommend that the industry simply stop developing this technology, which seems unlikely because such a stoppage has never happened before; the rule is that technological revolutions are a one-way street. Finally, they baselessly claim that the model has become a “mind” and seemingly assume that it is conscious, which we know is not true because we know what consciousness is, and ChatGPT is not that. As to the intentions of the signatories, it is not our place to speculate, but their methodology gives cause for alarm, because the letter does not succeed in its purpose: making a persuasive case that what it says is true.
Broken logic and flawed assumptions are not persuasive.
Worldview Ethics is a framework applied to the study of moral philosophy that calls for broader integration of external bodies of scientific research. Rather than merely pondering the quandaries at the edge of our understanding of morality and culture’s metaphysical basis, ethical theory can inform our direct investigation of the world through science. By stipulating that our ethical thinking take a worldview as its subject, we equip ourselves to think clearly about things that matter deeply and to solve an extremely meaningful set of problems. The core tenet of Worldview Ethics is that the role of the agent’s mind or brain is to construct a worldview: a perspective from which to understand things. Within this worldview, we maintain an awareness of our primary assumptions and update the model over time as we gain new information. In Formal Dialectics, I treated the concept of a worldview’s foundational assumption set, which we can refer to as its metaphysics (Daniel, 2023).
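As a rough illustration of that tenet (my own sketch, not a formal model from the Worldview Ethics literature), a worldview can be pictured as a foundational assumption set plus a table of credences that gets revised as evidence arrives. The smoothing rule below is a deliberately crude stand-in for a fuller Bayesian update, and every name in it is hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Worldview:
    # Foundational assumptions held fixed: the worldview's "metaphysics."
    metaphysics: set = field(default_factory=set)
    # Revisable beliefs: claim -> credence in [0, 1].
    beliefs: dict = field(default_factory=dict)

    def update(self, claim, evidence_strength):
        """Nudge the credence in a claim toward new evidence (simple
        exponential smoothing standing in for a Bayesian update)."""
        prior = self.beliefs.get(claim, 0.5)  # start agnostic
        self.beliefs[claim] = 0.8 * prior + 0.2 * evidence_strength

agent = Worldview(metaphysics={"other minds exist", "language carries meaning"})
agent.update("ChatGPT is conscious", evidence_strength=0.0)
agent.update("ChatGPT is conscious", evidence_strength=0.0)
print(agent.beliefs)  # credence drifts downward as disconfirming evidence accrues
```

The design choice worth noticing is the separation of the two layers: the metaphysics stays fixed while the beliefs built atop it are continually revised, which is the behavior the framework takes as its scientific subject.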
From Noam Chomsky’s account of innate grammar to Hermann von Helmholtz’s idea of a “prediction machine,” everyone who attempts to solve the puzzle of how the mind works gets it right to some extent and wrong to some (likely much greater) extent (Daniel, 2018). Worldview Ethics is the ethics concerned with the model world that conscious people hold in their minds. The big difference between Worldview Ethics and, say, Virtue Ethics, Deontological Ethics, or Normative Ethics is that Worldview Ethics has a direct scientific subject to investigate.
In Normative Ethics, the subject is a person making a decision, or a group of people and the decisions they make over time; this is rather vague in some ways, and it incorporates a lot of material we do not want to discuss alongside the various good bits we like about it. In Virtue Ethics, at least at the time its founding texts were written, the subject was only a man, so normative theory shows at least one vector of progress. In Kant’s Deontological Ethics, the subject was not a man, or a person more broadly, but rather the person’s will, which could be either good or bad. A good will means the person is engaged, present, focused, and being themselves (legislating their moral will); a bad will means something is in the way, and perhaps the person in question is acting in bad faith by not participating earnestly in a conversation, swearing away their own agency in return for a measure of security or other perceived benefit, or telling a lie.
Many of these ideas are remarkably strong. We have built a most remarkable society atop them; they enable human minds to coordinate at a scale that would scarcely be imaginable to any of the thinkers who dreamt them up. Worldview Ethics will make it possible for moral philosophy to return to its place in the conversation around current technological developments by shifting focus away from the complex norms that arise somewhat unpredictably from social interactions between people at scale and toward the one behavior all minds share: predictive modeling of complex situations. This subtle tweak will allow the study ahead to extend from the highest theoretical heights of complexity all the way back down to the tiniest, simplest observations of the natural world. Consistency across the theory is enabled by scientific rigor and a characteristic willingness to investigate across the boundaries between academic disciplines.
At its heart, Worldview Ethics represents a very minor revision that greatly empowers moral philosophy as a participant discipline in the new paradigm of mind and brain research. And yes, with this empowerment will come the ability to speculate in some detail upon the nature of cognition in human beings as well as other organisms and perhaps even non-organisms, should there prove to be an acceptable answer to the question of what a non-organism consciousness would be conscious of in the first place.
References
1. Bostrom, Nick. (2014). Superintelligence: Paths, Dangers, Strategies. Oxford University Press.
2. Daniel, Thomas Dylan. (2018). Hermann von Helmholtz (1821-1894). https://www.academia.edu/38581885/Hermann_von_Helmholtz_1821_1894_
3. Daniel, Thomas Dylan. (2023). Formal Dialectics. Serious Philosophy. https://opensea.io/assets/optimism/0xe7f967ed990c4db5262b592a3e5b70e29dd585c3/2
4. Future of Life Institute. (2023, March 22). Pause Giant AI Experiments: An Open Letter. Future of Life Institute. https://futureoflife.org/open-letter/pause-giant-ai-experiments/
5. Mitchell, Melanie. (2023, April 3). Thoughts on a Crazy Week in AI News. AI: A Guide for Thinking Humans. https://aiguide.substack.com/p/thoughts-on-a-crazy-week-in-ai-news
6. Yudkowsky, Eliezer. (2023, March 29). Pausing AI Developments Isn't Enough. We Need to Shut It All Down. Time. https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/