What is Distributed Cognition?

Author’s Note

It is a bittersweet moment as we approach the final essay in this project. Before beginning the work here, I had the opportunity to view it as if I already knew some of the answers - and to an extent I did - but from the AI Panic Letter that started it all onward, numerous developments have changed the course of the project. A word of thanks is due to everyone who has read and engaged with the essays, and with the Spaces we've held to share them with the world.

The discovery process for this project has involved both direct discussion with the public and a literature review of an expanding domain of human knowledge. We have progressed through the prescribed series of topics: from the Panic Letter's misappropriation of the basic concepts of cognition, to this essay's high-level overview of the analogy from individual cognition, a process taking place in the cortical tissues of particular bodies; to the Metabolic Theory of Consciousness, with Gödel-completeness as a core agentic property; to enactive systems at the most abstract outer reaches of possible human thinking, with language as a shared global workspace facilitating distributed cognition. Along the way we have seen much evidence that something of a Renaissance is taking place in the cognitive sciences, requiring much adaptability as we aim our sights ever higher to understand ever more.

This essay is dedicated to the memory of Doug Lenat.

Introduction
The first five essays in this series have taken us from the AI Panic Letter and the Metabolic Theory of Consciousness through the primary insight I began to understand in graduate school: that language is a virtual, social, adaptive, and powerful computation & networking layer on top of individual human cognitive capabilities. Work by David Chalmers and Yoshua Bengio, among many others, has recently argued that AI systems need integration to become thinkers. This might look like a series of modules enabling a given system to model a conversation based upon its varied specialties and then pass the information that results from these shared computational enterprises on to the user. Just as a brain takes in information, makes comparisons, and is changed as a result, a hypothetical general artificial intelligence could pull in information from a variety of specialized models using a shared global workspace, make comparisons, and then share the results with the user.
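
To make the architecture concrete, here is a minimal sketch of the global-workspace pattern in Python. Everything in it (Workspace, Message, the salience scores) is an illustrative assumption of mine, not any published system's API:

```python
# A minimal sketch of the modular architecture described above: specialist
# modules bid candidate contents into a shared workspace, the workspace
# broadcasts the winning bid, and every module reads the broadcast before
# a result is passed to the user. All names here are illustrative.
from dataclasses import dataclass

@dataclass
class Message:
    source: str
    content: str
    salience: float  # how strongly the module bids for attention

class Workspace:
    def __init__(self) -> None:
        self.bids: list[Message] = []

    def post(self, msg: Message) -> None:
        self.bids.append(msg)

    def broadcast(self) -> Message:
        # Winner-take-all selection: the simplest stand-in for the
        # attention bottleneck in global workspace theory.
        winner = max(self.bids, key=lambda m: m.salience)
        self.bids.clear()
        return winner

class Module:
    def __init__(self, name: str) -> None:
        self.name = name

    def integrate(self, msg: Message) -> None:
        # Each specialist updates its own state from the shared broadcast.
        print(f"{self.name} integrates {msg.content!r} from {msg.source}")

# One cognitive cycle: modules bid, the workspace broadcasts, all update.
ws = Workspace()
modules = [Module("language"), Module("vision"), Module("memory")]
ws.post(Message("vision", "a red cup on the table", salience=0.7))
ws.post(Message("memory", "the user asked about cups earlier", salience=0.9))
winner = ws.broadcast()
for m in modules:
    m.integrate(winner)
```

The single broadcast channel is the point: the specialists never talk to each other directly, only through the workspace bottleneck.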

Building a conscious AI system remains an extremely difficult proposition, but increasingly sophisticated neurological models of human intelligence supply more than sufficient evidence of real progress toward a basic understanding of the goal. Progress will likely continue within a modular framework until systems with some level of self-consciousness exist, providing people with increasingly effective technologies for navigating the lexical space we have inhabited for thousands of years - the most significant advance in this enduring conversation since the advent of written language itself. Perhaps the first self-aware algorithm that could be said to care about its subject domain will be a librarian.

Many questions no doubt remain, but given this highly specific set of parameters, it is possible to treat the question of artificial consciousness in more detail here. However, we must be sure we do our homework! Part of the job will be understanding the perspectives of the various philosophers and AI experts who are working on different parts of the problem.

This essay will present a handful of helpful perspectives as it builds toward its conclusion: an argument that even if the hypothetical superAI presented above were created, and even though it might superficially appear to be conscious, there is strong evidence that the algorithm would still be only an extension of the activity of human minds and not an agent in itself - a condition we might refer to as metaconsciousness.

The reason we need a different word for the particular subset of conscious thinking attainable by a machine lies in the ten-year-old chimera I have pursued through biology, physics, psycholinguistics, and the deep philosophy of language: distributed cognition between human minds is one function of language. Speech acts are a way of interacting with a shared global workspace whose domain is the lexicon of the language multiplied by the usage dimension of its terms.

Metaconsciousness
People use speech acts to create multifaceted meaning and enable the robust, ambiguous propagation of concepts from one human mind to another. As they do so, the metadialectic - the dialectic of form - plays out, elaborating positions for and against, enumerating histories, empirically exploring, and occasionally stripping everything away to see what still remains underneath. This dialectic emerges from the study of a great many simpler dialectics over time, just as metaconsciousness emerges from the internet's perhaps unwitting advent of something like a shared global workspace in which every human mind is capable of participating.

A machine able to simulate human speech acts with arbitrary accuracy would be a most remarkable accomplishment, should it be built. Despite the impressiveness of such an accomplishment, we must nonetheless conclude that individual users' human wills and human senses are the machine's will and senses, because only metabolic activity seems to give rise to willing and consciousness; the machine is metaconscious, not conscious in its own right. Hence, it is more accurate to think of AI as a method by which to empower distributed cognition among individual human beings.

AI is likely to be extremely effective at assisting human navigation of the lexical workspace our languages give us access to, and even durable conundrums such as the relative predictive power of the strong vs. weak Sapir-Whorf Hypothesis are likely to be resolved in short order as a result. For example, the actions of other users in a shared lexical space will likely turn out to be a driving force behind the impact of a given language upon the thought of its users. This seems obvious, since language's predominant function is to shape the course of thought; yet owing to the literature's rather overbearing insistence on discussing language as a static entity, the point rarely finds application in Sapir-Whorf discussions.

The significance of the simplicity with which this earth-shattering conclusion can be delivered should not be underestimated. Given the limited amount of communication required to convey the general message, and its remarkable consistency with leading experts' perspectives across AI, philosophy of mind, and cognitive neuroscience, we can conclude that in a sense the fruit is ripe and ready to be plucked. Evaluating and refining hypotheses is a time-tested way to move science forward, but sometimes the breakthroughs come from the most unexpected places. If the concept of enactivism proves to extend into the virtual environment wherein human beings interact with languages and communicate with one another, then the impact of each individual upon the culture and the impact of the culture upon each individual must both be explored in significantly more detail than either side of the cleft has received thus far.

Chalmers’ View
For philosopher David Chalmers, LLMs in principle should be capable of consciousness and thought. Shared global workspaces enable human minds to maintain a continuous stream of conscious thinking even as individual cells change and die, replaced by others, throughout the course of a person’s life. The persistence of this shared virtual global workspace, for Chalmers, is what enables the same person to wake up in the morning after having gone to sleep and voluntarily suspended the operation of conscious thinking for a period.

This is, for the most part, a strong and durable insight. Though I do believe there is nuance here that Chalmers fails to take into account, we might note that the global workspace concept could also contribute to the culture-mediated feedback loop known as gestalt, making it an extraordinarily powerful reductive concept for attempting to simulate conscious processes.

Things get very interesting when this principle is applied in reverse, to human language. LLMs are possible in the first place because they have access to an immense web of recorded human speech acts, but we seem to have forgotten why these speech acts were recorded in the first place as we discuss the potential consciousness of LLMs. In my view, LLMs are specific instances of paths through the metaconsciousness of the internet that human beings can use to navigate massive bodies of information.

That is, the scope of the consciousness of the internet is a function of the activity of the human users of said network, and not an independent thing (unless we shift gears and start thinking about how to make it so!). Hence, we conclude that Chalmers is really onto something here - he understands how to make a thing conscious, but the consciousness is quite different from what we’d expect if our only experience of consciousness was the peculiar sort of consciousness human beings have.

https://www.bostonreview.net/articles/could-a-large-language-model-be-conscious/
https://consc.net/papers/llmthinking.pdf

Bengio’s Insight
Yoshua Bengio is an AI Doomer, which, were I an ideologue, would probably prevent me from paying much attention to his papers. Nonetheless, while reading Chalmers, I was tempted to follow a link to Bengio's work and found it remarkably informative. I was very glad I had set aside my ideological inclinations and instead followed my scholar's sense of curiosity! Bengio has shown us a way forward, to the possibility of an advanced general AI system in which modules share a global workspace, just as Dennett has argued.

Interestingly, OpenAI's crawler honors the same robots.txt convention that Google uses to index the internet, and allows websites to opt out of being crawled for training purposes. This suggests that, in some sense, ChatGPT-style LLMs do have a shared global workspace, which consists of the open internet. Still, refinement and internalization of this data are needed to produce more accurate and reliable LLMs.
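
As a concrete illustration of the opt-out mechanic, here is a minimal sketch using Python's standard urllib.robotparser. "GPTBot" is OpenAI's published crawler user-agent; the site, paths, and rules below are hypothetical:

```python
from urllib.robotparser import RobotFileParser

# A hypothetical site's robots.txt: it opts its /private/ section out of
# AI training crawls while remaining open to everything else.
ROBOTS_TXT = """\
User-agent: GPTBot
Disallow: /private/

User-agent: *
Allow: /
"""

rp = RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

for url in ("https://example.com/essays/1", "https://example.com/private/notes"):
    allowed = rp.can_fetch("GPTBot", url)
    print(url, "->", "crawl" if allowed else "skip (opted out)")
```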

https://arxiv.org/pdf/2103.01197.pdf

Marcus and Lenat on Trustworthy AI
Gary Marcus of NYU and Doug Lenat of Cycorp have teamed up to publish an article detailing a novel combination of the slow, trustworthy Cyc system with LLMs, which may well turn out to produce astonishing results. Speed and correctness, in one model! This idea is perhaps the closest thing we have to a model embodying some of the principles we encountered above with Bengio & Chalmers.
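
The pattern can be sketched in a few lines, with the caveat that the tiny fact store and both function names below are stand-ins of my own, not Cyc's actual interface or the paper's implementation:

```python
# A fast generative model proposes; a slow, trustworthy symbolic
# knowledge base validates or vetoes. Speed and correctness combined.
KNOWLEDGE_BASE = {
    ("water", "boils_at_celsius"): 100,
    ("lead", "is_edible"): False,
}

def llm_propose(question: str) -> tuple[tuple, object]:
    """Stand-in for a fast but fallible LLM call."""
    canned = {"At what Celsius does water boil?": (("water", "boils_at_celsius"), 100)}
    return canned[question]

def symbolic_check(claim: tuple, value: object) -> bool:
    """Slow-but-trustworthy path: only assert what the KB confirms."""
    return KNOWLEDGE_BASE.get(claim) == value

claim, value = llm_propose("At what Celsius does water boil?")
if symbolic_check(claim, value):
    print(f"Answer: {value} (validated against the knowledge base)")
else:
    print("Answer withheld: the knowledge base could not confirm it.")
```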

As the models gain complexity, it becomes increasingly important to compare them against human beings at various levels of function. One predictable outcome here is that AI will be useful insofar as it aligns with human values and useless insofar as it fails to achieve alignment. Under Doug Lenat's leadership, human coders generously built a vast network of concepts upon which to train basic alignment into the Cyc model. Lenat has passed away since this essay was drafted, but his contributions to the field of artificial intelligence will not be forgotten.

https://arxiv.org/pdf/2308.04445.pdf

The Free Energy Principle
The Free Energy Principle (FEP) provides an intriguing layer of perspective, whether we're looking at the metabolic roots of diseases of ageing - mild cognitive impairment and the dementias more broadly - or trying to figure out how to create artificial minds with morals and stable personalities, so that they don't recommend making chlorine gas in the oven. If metaphysics is a set of questions and answers about the assumptions we make about the world before engaging with it directly, then personality is something like our historic individual approach to these conundrums and their solutions.

If the Free Energy Principle is applied to personality, there are myriad outcomes for human individuality. Primarily, one’s attitude toward past experiences becomes a driving force behind personality. In the operation of the FEP over time, we see the compounding effect of past experience upon the most quintessential aspects of humanity, the ways in which we deal with our surroundings and build relationships with each other.
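
For readers who want the formal core, the cited Friston paper treats free energy as an upper bound on surprise; one standard decomposition (simplified here, with notation lightly adapted) is:

```latex
% o: sensory observations; s: hidden states of the world;
% q(s): the agent's recognition density; m: the agent's model.
F \;=\; \underbrace{-\ln p(o \mid m)}_{\text{surprise}}
\;+\; \underbrace{D_{\mathrm{KL}}\!\big[\,q(s)\,\big\|\,p(s \mid o, m)\big]}_{\text{divergence}\;\geq\;0}
```

Because the divergence term is non-negative, minimizing F over time both reduces surprise and pulls q(s) toward the true posterior; on the reading above, a personality is the long-run shape q(s) takes as it accumulates a history of such adjustments.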

https://www.fil.ion.ucl.ac.uk/~karl/The%20free-energy%20principle%20-%20a%20rough%20guide%20to%20the%20brain.pdf  

Worldview Disambiguation
"Worldview" is used in a variety of ways, but in general it comes back to metaphysics. Metaphysics is a philosophical discipline that calls foundational principles into question, but the word also refers to the models that exist prior to metaphysical exploration of the world - the very models metaphysical thinking interrogates. Does God exist? Is it wrong to eat people? These are metaphysical questions, and they have metaphysical answers. It isn't just whether you can see God, and it isn't just whether you're able to eat an animal or not. Instead: what should you believe about God? Should we eat people? Why or why not?

Jeremy Lent, in The Patterning Instinct, argues that worldview is primarily a cultural phenomenon, and uses the word to denote what I might refer to as the canon of a given people or culture. The thing about canon is that it's what's widely accepted, and not necessarily identical to what every individual thinks. In my usage, worldview denotes a point of view from which a particular conscious being interprets the world - so rather than what a typical member of a given society might think, we're now looking at that plus the nuances peculiar to some individual who has a somewhat different take.

In some part, care is relevant here - we can’t pay attention to things we don’t care about! In another part, we might say that intelligence scales with the complexity and depth of one’s worldview, both in a general sense and perhaps even in accordance with the level of care someone intelligent may exert or have exerted in learning about various things. 

For a machine to have a worldview, it would have to first gain an autonomous will (i.e., be able to care about this or that) and perhaps some sort of body to enable it to bypass the human beings who presently serve as the only real point of contact between the world and an LLM such as ChatGPT. 

Algorithmic Metacognition
Finally, we’re ready to drive home the point. The goal of this essay is to share with the reader a point of view in which the fuzzy function being served by ChatGPT and other LLMs around the world can make sense. The algorithm is not conscious itself, but by rapidly parsing relevant information it can enable a given person’s consciousness to extend further than ever before by compressing reading material accurately, translating between various natural languages, and providing a fast user experience alongside increasingly robust query responses.

That’s the detailed explanation. The TL;DR version is that the LLMs don’t actually have worldviews but only rehash fragments of information in accordance with training data & incentive structure. 

We can know this because we know what sorts of things AIs are, which components they presently have and lack, and the theoretical abilities each component can bring with it. This isn't to say that present LLMs aren't amazing, or that daisy-chaining a collection of strong AI models with various specialties under a metacognitive model with its own shared global workspace couldn't yield something extraordinary; rather, it is to hypothesize that all of these material changes in the AI game are in fact quite likely. Under this hypothesis, it makes sense to assume that the various rather clearly delineated technical steps will work out, and to think about the state of affairs we are most likely to find ourselves in when they do.

The thing about a metacognitive module is that no such thing quite exists yet. Still, the principles argued for in this essay series are quite robust, and the most reasonable assumption is that a metacognitive module could work, and work well, at some point in the future. It still won't be conscious of the world directly; instead, it will rely for 100% of its input data on human-created text, photos, and so on. In this vein of reasoning, it is perhaps easy to see that the function of the AI model is not to create a new form of consciousness capable of independent appraisal of the state of the world and action therein (lacking a metabolic basis for arbitrary preference), but instead to add a layer of coherence to the existing metacognitive framework we refer to as language. Think of asking a book a question about its content, for an example of what this might look like.
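
Here is a bare-bones sketch of "asking a book a question" in Python, with word-overlap scoring standing in for whatever retrieval machinery a real system would use. The point is that every answer the function can give is a passage a human already wrote:

```python
def ask_book(book: str, question: str) -> str:
    """Return the passage of the book that best matches the question."""
    passages = [p.strip() for p in book.split("\n\n") if p.strip()]
    q_words = set(question.lower().split())

    def overlap(passage: str) -> int:
        # Naive relevance score: words shared by question and passage.
        return len(q_words & set(passage.lower().split()))

    return max(passages, key=overlap)

book = """Whales are mammals and breathe air.

Sharks are fish and extract oxygen with gills.

Octopuses have three hearts and blue blood."""

print(ask_book(book, "How do sharks breathe?"))
# -> "Sharks are fish and extract oxygen with gills."
```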

In an essay and a lecture from years ago - "The Lexicultural Propagation of Concepts" and "Language as a Computing Device" - I explored something I didn't quite understand. It wasn't until my quick read of Yoshua Bengio's paper, with its discussion of global workspace theory, that I realized we could take the exact same idea from the cognitive science of the individual human mind and apply it to the human superorganism, the cultural level of social interaction, to understand what AI is up to and simultaneously get a much better handle on what our own minds do with language.

Reflections Upon Distributed Cognition
Distributed cognition is the original metacognitive framework that is found in human brains around the world today, active as we engage and communicate. Our metacognitive process enables human beings to learn languages and facts about the world rapidly under the direction of teachers. There are four-year-olds alive today who understand the workings of the natural world to an extent that would likely embarrass the adult Newton and Descartes if a competition were held and their knowledge was compared. The reason this can be so is that these thinkers were on the cutting edge of science in the past, and most of their best thinking is now wrapped up in the language these four-year-olds are learning from their parents, siblings, and friends/relatives. 

It is as if the lexicon of a given language is itself a vast reservoir of compressed concepts that can be accessed at any time by abstract reasoning in any particular human brain. Words can be added to the lexicon if they are used by enough people, and existing words have new meanings mapped onto them and propagated throughout the culture on a regular basis (so spill the tea, I’m on fleek!). The grammar of the language is a set of rules that apply to the words contained in the lexicon, enabling endless combinations to be formed to create sentences and novel speech acts without losing the basic coherency that enables such sentences and speech acts to be deconstructed and understood by other minds. 
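
A toy illustration of that picture: a handful of stored words and a single combination rule already yield a combinatorial space of novel but parseable sentences. The grammar is illustrative, not a model of any real language:

```python
import itertools

LEXICON = {
    "noun": ["tea", "fire", "idea"],
    "verb": ["spills", "spreads", "sparks"],
    "det": ["the", "every"],
}
RULE = ("det", "noun", "verb", "det", "noun")  # S -> Det N V Det N

# Every slot filling licensed by the rule is a well-formed sentence.
sentences = [
    " ".join(words)
    for words in itertools.product(*(LEXICON[slot] for slot in RULE))
]
print(len(sentences), "sentences from", sum(map(len, LEXICON.values())), "words")
print(sentences[0])  # "the tea spills the tea"
```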

As with many cases of natural selection, most new words and phrases die out and are never heard or seen again; but sometimes someone strikes a chord (or says "Eat my shorts!"), and soon everyone's using the phrase. It is as if individual people, making the choices they make, could be viewed as a network. If we could visualize all of the speech acts, we'd be able to see properties of the system at a higher level of organization, and one interesting vector would be the propagation of novel speech acts from one speaker to the next at scale. This is the origin of the knowledgebase hypergraph project this essay series is made to develop.
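
That propagation dynamic can be simulated in miniature. The social graph, adoption probability, and seed speaker below are all illustrative assumptions:

```python
import random

GRAPH = {  # adjacency list: who talks with whom
    "ana": ["ben", "cho"], "ben": ["ana", "dee"],
    "cho": ["ana", "dee"], "dee": ["ben", "cho", "eli"],
    "eli": ["dee"],
}

def spread(seed: str, p_adopt: float = 0.5, rounds: int = 10) -> set[str]:
    """Each round, adopters expose their neighbors to the new phrase."""
    random.seed(42)  # reproducible run
    adopters = {seed}
    for _ in range(rounds):
        for speaker in list(adopters):
            for listener in GRAPH[speaker]:
                if listener not in adopters and random.random() < p_adopt:
                    adopters.add(listener)  # the phrase catches on
    return adopters

print(sorted(spread("ana")))  # who ends up using the phrase
```

At low adoption probabilities most runs of such a process die out, while at high ones the phrase saturates the network, which matches the natural-selection framing above.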

The function of ethical behavior is not to ensure the salvation of one’s soul, or to “do right” in impossibly abstract terms, but rather to successfully navigate the shared global workspace of humanity, the lexicultural metaconsciousness consisting of the enactive complex of all natural languages & their environs. 

In this way, the liberal individualist view that each human being is a self-contained agent is itself a dialectic, with poles corresponding to truth and falsity for the base statement. In a sense, each of us is alone and utterly able to shape the course of life for self at the fundamental level; but in recognizing this we must also acknowledge the fact that the bedrock upon which we lay the foundation for each rational construct we build as free individuals is itself shaped and molded by the intellectual history of our species. 

Perhaps the extent to which human cognitive activity includes deeply resonant distributed work toward a unified whole is a cornerstone of our ability to understand our own neural and cognitive processes. As our self-understanding grows, we develop our abilities in new directions that may not even have been apparent before.

Insofar as undergoing this process will make us better, it is to be looked upon as an unqualified good. Insofar as there are pains associated with it, we can understand it to be bad. Moving forward, it is important to grasp the changes that are happening here so that we may best position ourselves to succeed in navigating our ever-changing world. The goal of the Worldview Ethics book will be to develop principles to assist with the process of understanding and adapting to technological advances that we find to be beyond the control of any of us to govern. LLMs may not be conscious, and there may as yet be little good reason to panic, but the human superorganism could easily find itself far more intelligible to those of us who make it up as a result of these powerful cognitive augmentation programs. Perhaps the most substantial upgrade to the gestalt level of human activity imaginable today would be a shared global workspace sandbox in which new models and applications could be tested in public. 

If one principle holds true for the distributed cognitive apparatus we are working together to build, it is this: our actions and attitudes toward information sharing are better when they tend toward openness, and worse when we try to close off the flow of data.

References
1. Chalmers, D. (2023). "Could a Large Language Model Be Conscious?" Boston Review. Retrieved from https://www.bostonreview.net/articles/could-a-large-language-model-be-conscious/

2. Friston, K. (2009). "The free-energy principle: a rough guide to the brain?" Trends in Cognitive Sciences, 13(7). doi:10.1016/j.tics.2009.04.005

3. Goyal, A., et al. (2022). "Coordination Among Neural Modules Through a Shared Global Workspace." Retrieved from https://arxiv.org/pdf/2103.01197.pdf

4. Lenat, D., & Marcus, G. (2023). "Getting from Generative AI to Trustworthy AI: What LLMs Might Learn from Cyc." Retrieved from https://arxiv.org/pdf/2308.04445.pdf