Huxley + Jung + Brooks x Progress: Building products that enhance society’s ability to cope with a rapid cognitive evolution

Author’s note: If this is your first time reading, it might help to have a look at ‘the concept’ for an introduction.


Woody Allen and Diane Keaton in Allen’s dystopian comedic commentary on progress, Sleeper.

Introduction: Is there more to Googling than we think?

What was the last thing you Googled? Consider what the search process for that answer would have been like before automated search. Different, right? If you were searching for information about a problem or product, you might have gone to the store. Maybe you would have called a friend. Looked in an encyclopedia. Consulted a Chamber of Commerce.

Whatever it was, automated search likely changed the way we look for it, and the questions we had to ask of ourselves or others to get it. To some extent, it even introduced us to new things we would have never bothered to search for, like a chubby kid wielding a tennis-ball retriever as a light-sabre, while flailing around for nearly two full minutes. We could argue about the usefulness of Internet distractions like Star Wars Kid—but this surface-level side of the argument is well established and simply a matter of opinion.

Instead, let’s explore a different perspective–what Columbia’s Betsy Sparrow and her colleagues Jenny Liu and Daniel Wegner recently discussed in their study of the ‘cognitive consequences of having information at our fingertips.’ In Google Effects on Memory they conclude the Internet is causing an evolutionary shift in the way our minds work. From the abstract:

The advent of the Internet, with sophisticated algorithmic search engines, has made accessing information as easy as lifting a finger. No longer do we have to make costly efforts to find the things we want. We can “Google” the old classmate, find articles online, or look up the actor who was on the tip of our tongue.

The results of four studies suggest that when faced with difficult questions, people are primed to think about computers and that when people expect to have future access to information, they have lower rates of recall of the information itself and enhanced recall instead for where to access it.

The Internet has become a primary form of external or transactive memory, where information is stored collectively outside ourselves.

Notice the study’s conclusion was not to qualify the cognitive effects as good or bad, healthy or unhealthy—but rather to suggest that our minds are evolving. The level of granularity in our conscious stored knowledge is decreasing, because—as Sparrow, Liu, and Wegner conclude—we can use these new tools as external storage devices. Considering the pace of technology development, it seems this evolution to further ‘dissociate’ our minds from the rich, granular knowledge that has dominated our thinking since the beginning of time is inevitable. The primary questions then, are what are the effects and what is our responsibility as humans to control the cause? More pointedly, what is our responsibility as product developers to control the pace of this evolution? In order to answer these questions of control, let’s first take a deeper dive into the evolutionary process and its impact to see if a way forward emerges.

The dilemma: control

Our minds are evolving—but what’s actually happening here? First, we are offloading specificity. Let’s classify these specific pieces of knowledge we are giving up control over as ‘action-level’ intelligence (i.e. the level of information that we can act on). For example, you can’t know how to drive from Austin to El Paso by knowing where a map is located. You can only get there by using the map to plan each part of the route, then acting accordingly. But with a navigation system, you can act almost immediately. You can start driving west and trust the system to deliver action-level intelligence about your route.

Offloading the duty of such action-level intelligence can be tremendously useful and timesaving. But, we have also seen drastic negative effects in certain situations—such as the cases of people dying in Death Valley because they blindly trusted GPS directions onto a road that no longer existed, even when a visible road lay literally right in front of their eyes. And this senselessness has already seen one evolution: teenagers today would be hard pressed to navigate north and south, because they never learned to reason out which direction they started traveling, or how each turn affected their path, before becoming lost.

Clearly, there is some level of control over reason and judgment at stake here. Assuming the pros such as more free time or easier on-the-go navigation are desirable and useful, is preserving the brain’s ability to process action-level intelligence more or less valuable? Getting outside of the navigation example: Does societal progress mean more efficient humans with more free time? Or, are we giving up control over mental processes worth far more to societal progress than we realize?

The rational processes: losing progressive judgment capability

Let’s first look at the rational processes involved by examining a more specific example of precisely what action-level intelligence our brains are offloading to automation tools: Googling the word ‘intelligence’ rather than looking it up in the dictionary.

Consider the process. Two primary things occur. First, rather than exercising the cataloguing capacity of our brain by forcing it to frame the scenario, Google helps us skip the entire parameter-setting process (i.e. I do not understand the word intelligence. This is a book containing all words from a to z; among these words, those starting with i will be the 9th of 26 sections. Arriving at the i’s, n is the 14th letter and will be somewhere near the second half of the i pages; arriving here, I rapidly process which page I have reached by scanning the upper and lower corners to determine whether to thumb forward or backward a few pages. Rapidly processing more words, I scan downwards through the “ina”s to the “int”s, and so on to locate intelligence).
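The step-by-step narrowing described above has the same shape as a classic binary search: each comparison is a small judgment that halves the remaining search space and determines the next step. Purely as an illustrative sketch—not something from the studies discussed here—that chain of progressive judgments might be modeled like this:

```python
# Toy model of the dictionary lookup: each comparison is a "thumb forward
# or backward" judgment that narrows the remaining pages, exactly as in a
# binary search over an alphabetically sorted word list.

def progressive_lookup(words, target):
    """Find `target` in a sorted list, recording each intermediate judgment."""
    judgments = []          # the trail of minor decisions made along the way
    lo, hi = 0, len(words) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if words[mid] == target:
            judgments.append(f"found '{target}' at position {mid}")
            return mid, judgments
        elif words[mid] < target:
            judgments.append(f"'{words[mid]}' comes before '{target}': thumb forward")
            lo = mid + 1    # discard the first half of the remaining pages
        else:
            judgments.append(f"'{words[mid]}' comes after '{target}': thumb backward")
            hi = mid - 1    # discard the second half of the remaining pages
    return -1, judgments    # word not in this dictionary
```

The point of the sketch is the `judgments` list: the answer is reached only through a chain of small, dependent decisions—the very work a Google search performs on our behalf.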

I would imagine the progressive nature of such a process is as important as the process itself. At each turn, we are making minor judgments. And, the results of these judgments build on each other to determine each next step. In all, this process of source identification to final answer takes maybe 20 seconds, if the dictionary is nearby—but it also engages the mind in a variety of cataloguing tasks and progressive judgments that are absent from a Google search. Second, we must decide how to reconcile the new definition with our existing repository, and our brain shuffles through a series of value questions—how long will I need this definition? When might I need it again? What are the implications of this meaning for other things stored in my mental repository? And this is just one piece of information. According to Timothy Wilson at the University of Virginia, the human mind can process 11 million pieces of information at any given moment.

Now, if Sparrow, Liu, and Wegner’s conclusion is true—that the internet has become a primary form of external or transactive memory—it seems that only the reconciling processes found in the latter would evolve in a useful way. The parameter setting and progressive judgment process, on the other hand, evolves only to the extent of categorizing an increasingly meta-level of knowledge. The gains in scope are lost in specificity. And, my hunch is that setting parameters and progressively judging action-level information will slowly shift away to no longer be second nature. But, perhaps this loss of action-level judgment is a consequence of the cognitive evolution that we should take in stride. If technology is doing the work for us, why preserve mental capacities we don’t need?

Value: a technologist’s perspective

For eminent futurist Ray Kurzweil, letting technology assume an increasing position in our mental routine is not simply good, but a tiny step en route to the future of singularity—where humans are fully integrated with artificial intelligence. From Wikipedia:

… [Kurzweil] reserves the term “Singularity” for a rapid increase in intelligence (as opposed to other technologies), writing for example that “The Singularity will allow us to transcend these limitations of our biological bodies and brains … There will be no distinction, post-Singularity, between human and machine.”

I was introduced to the concept of singularity and Kurzweil’s work through the film Transcendent Man. And, the title says it all—Kurzweil is among those leading the march toward literally meshing mind with machine, and his own ambition is to transcend humanity to become an omniscient, ‘god-like’ being. The inflection point of singularity is just the beginning of an increasingly rapid pace at which artificial intelligence becomes capable of reinventing increasingly smarter versions of itself. Kurzweil’s latest prediction is that this will occur in 2045. In fact, as early as 1998, University of Reading professor Kevin Warwick was completing the first experiments to integrate a sensor network with his body to control an external AI device with his mind (Project Cyborg).

So, in the words of Kurzweil, the singularity is near. But, it seems Kurzweil would even go so far as to say the singularity is natural. And, given our relationship with automation tools like Google and the current course of technology development, I agree. Singularity will simply be the compound effect of millions of seemingly small decisions about what your product does for people. And while he may seem radical now, as 2045 draws nearer, the future will increasingly have already been written. The inflection point of singularity won’t be the time to decide how we should control the future we want. That ship started sailing with the invention of the wheel, picked up steam in the ’60s with the invention of the Internet, docked in 2003 long enough to pick up Star Wars Kid, and will be full speed ahead until Kurzweil reconfigures a Chinese production line from an easy chair in Palo Alto using only ‘his’ mind.

Only reason affected by offloading action-level intelligence?

So, while this loss of judgment may be a natural evolution, as David Brooks suggests in The Social Animal, there is more to what makes us human than pure reason. Brooks claims, “Thus, it is not merely reason that separates us from the other animals, but the advanced nature of our emotions, especially our social and moral emotions,” and, “We are smart because we are capable of fuzzy thinking.” He offers the example of a child’s imagination as proof:

One of the things all [recent neurological] research shows you is how humble you have to be in the face of the complexity of human nature. We’ve got 100 billion neurons in the brain, and it’s just phenomenally complicated. You take a little child who says, “I’m a tiger,” and pretends to be a tiger. Well that act of imagination–conflating this thing “I” with this thing “tiger”—is phenomenally complicated.

No computer could ever do that, but it’s happening below the level of awareness. It seems so easy to us. And so one of the things these people learn is they contain these hidden strengths, but at the same time they have to be consciously aware of how modest they can be in understanding themselves and proceed on that basis.

Up until this point, we have only considered how the cognitive effects of Google might shift our ability to make rational judgments, but Brooks suggests fuzzy thinking and human emotion might be more important to the future we build. In the footsteps of Thomas Jefferson, he goes so far as to claim that man is ‘destined for society’—that our complex emotions are as much a distinguishing characteristic of humans as our ability to reason and judge. With such a view of the importance of societal connection to future progress in mind, perhaps we would do best to further examine the emotional effects of the shift. If Brooks is accurate, the rational growing pains of shifting judgment processes will seem minute compared to the emotional toll of coping with such a brave new world.

The soft side: emotion and the unconscious

So, let’s turn now to the emotional effects. First, it can be said that when we offload the action-level intelligence gathering and progressive judgment process, we are temporarily dissociating our mind from our body. The tool does the work of the mind, and feeds its findings back to the mind as action-level intelligence. The mind is allowed to temporarily remove itself from the equation. In The Doors of Perception, Aldous Huxley describes a similar dissociation effect during his experiment with mescaline:

…The investigator suggested a walk in the garden, I was willing; and though my body seemed to have dissociated itself almost completely from my mind—or, to be more accurate, though my awareness of the transfigured outer world was no longer accompanied by an awareness of my physical organism—I found myself able to get up, open the French window and walk out with only a minimum of hesitation. It was odd, of course, to feel that “I” was not the same as these arms and legs “out there,” as this wholly objective trunk and neck and even head.

It was odd, but one soon got used to it. And anyhow the body seemed perfectly well able to look after itself. In reality, of course, it always does look after itself. All that the conscious ego can do is formulate wishes, which are then carried out by forces which it controls very little and understands not at all. When it does anything more—when it tries too hard, for example, when it worries, when it becomes apprehensive about the future—it lowers the effectiveness of those forces and may even cause the devitalized body to fall ill. In my present state, awareness was not referred to as ego; it was, so to speak, on its own. This meant the physiological intelligence controlling the body was also on its own. For the moment that interfering neurotic who, in waking hours tries to run the show, was blessedly out of the way.

In Huxley, we have a prime example of the emotions of a man in this completely ‘dissociated’ state, and the profound effect it has on his desire and ability to control his actions. Huxley hints that this dissociation can be freeing. It frees the unconscious to let the body act. It might free up time for other thoughts—for example, to be more creative. Perhaps thinking less might lead to more fulfilled lives. Huxley even says, “One soon got used to it,” suggesting this march toward singularity won’t be a painful emotional evolution at all.

But, are there negative effects to turning off the rational, conscious part of the mind? If the body can be controlled very little, and understood not at all, might freedom also equate with a further loss of control when we outsource more of the cognitive tasks and ‘dissociate’ more often? And could it be that Huxley feels so at ease because he is acting out an experiment in isolation? Would it be as easy to cope if the effects were more permanent and the society around him was growing similarly detached?

Facing singularity with a fragile consciousness

As Huxley briefly introduced, I believe this question of humanity’s ability to cope with a rapid technological evolution rests on the control of and interaction between mind and body. But, what is only now becoming clearer is precisely how much impact the unconscious mind has on controlling human action. According to Brooks, the latest research indicates nearly 95% of all actions originate subconsciously. In fact, Brooks argues that humans often act according to their unconscious desires without even recognizing them as the cause—later inventing rational reasons for their behavior.

But, even if Brooks is correct, is there cause for concern? We have evolved as healthy people for thousands of years without ever paying concern to which mode of thought actually controls our actions. Or have we? Through the first 100 years of the industrial revolution, and especially within the last 30 with an even more accelerated pace of change, many still suffer the same unhealthy emotions of depression, anxiety, dementia, and worse. Technology’s promise of happier, healthier, more productive people hasn’t materialized on a mass scale. According to the Surgeon General, 28% of Americans were diagnosed with a mental illness last year. And, in the grit between the statistics, we observe the stunning range of outcomes brought on when emotional expectations are ripped apart by a distorted reality—students bursting into schools and shooting freely; a suburbia littered with mid-life crises and broken marriages.

The world is as much a place of mental angst as ever before. And, I don’t blame this wholly on technology—but, I don’t think it’s a stretch to say technology is a major force shaping how we interact with the world, think, and conduct our lives, which in turn affects our consciousness. And this is not a new sentiment—as described in Wikipedia’s History of Mental Disorders, prehistoric mental disorders were also attributed to evolutionary change:

Evolutionary psychology suggests that some of the underlying genetic dispositions, psychological mechanisms and social demands were present, although some disorders may have developed from a mismatch between ancestral environments and modern conditions. Some related behavioral abnormalities have been found in non-human great apes.

If mental health is as prone to disruption in the midst of our modern technological environment as these sources suggest, imagine the march toward singularity, where the pace of technological change draws closer and closer to doubling every second. What if the emotional effects of dissociation are more serious than Huxley’s feeling of detachment and nihilism as he rises from the chair and walks to the window? And, what if what we view as a stable consciousness is actually less stable than we think? In Man and His Symbols, a prescient Carl Jung explores these ideas of fragmentation and instability of what we view as a sophisticated consciousness, and synthesizes the emotional with the rational to present an encompassing picture of humans’ ability to cope with evolution:

What we call the ‘psyche’ is by no means identical with our consciousness and its contents…Consciousness is a very recent acquisition of nature, and it is still in an experimental state. It is frail, menaced by specific dangers, and easily injured. As anthropologists have noted, one of the most common mental derangements that occur among primitive people is what they call ‘the loss of soul’—which means, as the name indicates, a [dissociation of consciousness]. Man…never perceives anything fully or comprehends anything completely. He can see, hear, touch, and taste; but how far he sees, how well he hears, what his touch tells him, and what he tastes depend on the number and quality of his senses. These limit his perceptions of the world around him.

By using scientific instruments he can partly compensate for the deficiencies of his senses. For example, he can extend the range of his vision by binoculars or of his hearing by electronic amplification. But the most elaborate apparatus cannot do more than bring distant or small objects within the range of his eyes or make faint sounds more audible. No matter what instruments he uses, at some point he reaches the edge of certainty beyond which conscious knowledge cannot pass.

This means the individual’s psyche is far from being safely synthesized; on the contrary, it threatens to fragment only too easily under the onslaught of unchecked emotions. While this situation is familiar to us from the studies of anthropologists, it is not so irrelevant to our own advanced civilization as it might seem. We too can become dissociated and lose our identity. We can become possessed and altered by moods, or become unreasonable and unable to recall important facts about ourselves or others, so that people ask: “What the devil has gotten into you?” We talk about being able to control ourselves, but self-control is a rare and remarkable virtue. We may think we have ourselves under control; yet a friend can easily tell us something about ourselves of which we have no knowledge. Beyond doubt, even in what we call a high level of civilization, human consciousness has not yet achieved a reasonable degree of continuity. It is still vulnerable to fragmentation.

This capacity to isolate part of one’s mind, indeed, is a valuable characteristic. It enables us to concentrate on one thing at a time, excluding everything else that may claim our attention. But there is a world of difference between a conscious decision to split off and temporarily suppress part of one’s psyche, and a condition in which this happens spontaneously, without one’s knowledge, or consent, or even against one’s intention. The former is a civilized achievement, the latter a primitive loss of soul, or even the pathological cause of a neurosis.

Thus, even in our day the unity of consciousness is still a doubtful affair; it can too easily be disrupted. Ability to control one’s emotions that may be very desirable from one point of view would be a questionable accomplishment from another, for it would deprive social intercourse of variety, color, and warmth.

Jung touches on nearly every angle of our dissociation dilemma. First, he mentions a ‘loss of soul’ that sounds rather similar to this idea of mental angst—the disillusionment that occurs when shattered mental expectations shake our conscious emotional reference points with the world, and we are forced to search within the subconscious for new ones. This is a terrifying dive into an unknown realm, and what emerges from this dissociation is often a forfeiture of mental control that results in negative social actions—i.e. the mid-life crises and shootings.

Next, Jung explores the idea that scientific instruments are only as good as the ‘fuzzy’ human filter behind them. And this is the distinction between Kurzweil’s technology-based thinking and Jung’s more emotional-based thinking—enhancing the capacity of the senses rather than augmenting calculative brainpower. As Brooks concludes in The Social Animal: “The brain is not separate from the body—that was Descartes’ [who believed he could construct a world out of pure reason] error. The physical and mental are connected in complex networks of reaction and counter-reaction, and out of their feedback an emotional value emerges.” Thus, the imperfect and sometimes irrational aspects of our emotional complexity breed social value—and Jung acknowledges that in this economy, variety, color, and warmth are the currency.

Lastly, and perhaps most importantly, Jung unmasks what we view as a sophisticated consciousness as frail and fragmented. Even in modern civilization, consciousness, which is already ‘controlled very little and understood not at all,’ will be in further disarray. Becoming less capable of judgment is bad, but recoverable. Losing the level of self-control that separates us as rational and social animals becomes increasingly irreversible as the systems and institutions that guide society become more complex and immovable. And, we are giving this control away one search at a time.

The way forward: building products to enhance social value

This idea that our brains might evolve but our souls are not ultimately dissociated is reinforced by recent breakthroughs in how human knowledge advances. Many of us view the brain’s acquisition of knowledge like a plant sprouting—we feed it more and our mind expands. But, research shows that it is more akin to a football stadium full of spaghetti—with the ‘complex networks of reaction and counter-reaction’ Brooks references. So, this cognitive shift is more complicated than ditching action-level information in favor of recalling the place to find it. And, what emerges from this complex view of humanity is a respect for the social aspects of progress over the technological ones. In fact, Brooks concludes that cultivating a deeper view of humanity and society is more important than technological advances, and also believes these factors have been largely ignored over the last century:

…[policy] failures have been marked by a single feature: reliance on an overly simplistic view of human nature. Many of the policies were based on the shallow social-science model of human behavior. Many of the policies were proposed by wonks who are comfortable only with traits and correlations that can be measured and quantified. They were passed through legislative committees that are as capable of speaking about the deep wellsprings of human action as they are of speaking in ancient Aramaic. They were executed by officials that have only the most superficial grasp of what is immovable and bent about human beings. So of course they failed.

While Brooks was speaking to his field of expertise, policy and politics, his characterization of policy failures as a failure to recognize mental complexity rings true across disciplines. We are a society with an unhealthy respect for technological breakthrough and analytical rigor, and a need to cultivate greater respect for human intricacy and social desire.

How might we accomplish this? First, we need to recognize that integrating mind and machine, temporarily or permanently, breaks the classic rule of comparative advantage. We don’t need to make human minds better at computing. We need to make computers better at computing, and minds better at fuzzy thinking. We do that by innovating in a way that respects the world’s complex social fabric, and encompasses new learnings about how healthy people develop and thrive. We create in a way that respects human dignity, and humanity’s need to connect—social currency. We replace old modes of social variety, color, and warmth with richer variety, more vibrant color, and deeper warmth. That doesn’t mean we ignore all technological opportunities that don’t immediately promise social value. It does mean we prioritize the ones that do, and consider ways we can alter the ones that don’t to build social value back in.

For Google, that might mean creating distinct search processes for different types of queries—processes that find ways to replace the lost cognitive value of the search while still making search easier. If online search were viewed as it actually occurs—as complex hubs of information people are trying to access and potentially explore further, rather than one giant filter for everything on the web—people might not just find what they are looking for each time they search, but weave a perpetually richer shared cognitive fabric as well. Quora is a great example of a technological innovation that scores high on social value, because it creates a rich and accessible form of shared knowledge that re-introduces the social aspect of online search. It’s less automated, but miles ahead in terms of facilitating societal progress by connecting sources of privileged knowledge with anyone interested enough to ask a question.

Lastly, but perhaps most crucially, I believe success first requires a new definition of progress. One where technologists realize—as Samuel Johnson’s Rasselas does after shedding all of life’s comforts in the Happy Valley—that what people truly want is not an easier and happier life, but to feel awake and alive. However, the people most responsible for inventing the way forward often forget that increasing social capital is humanity’s true aim. They judge a new product’s goodness by its cleverness, novelty, utility, or a thousand other dimensions that have little to do with social value. As such, even in our advanced modern civilization, we see increasingly mixed cognitive results from technology use beget increasingly fragmented consciousness and a society ultimately more incapable of progress than ever before.

Have you previously thought about the cognitive shifts brought on by Google and other tools that outsource action-level intelligence? Have you now? Do you believe the singularity is near? Do you believe that the singularity is simply the cumulative impact of millions of seemingly tiny decisions about what products do for people? Do you accept, as Brooks and Jung propose, that consciousness has the clever ability to make us believe we are in control, when we are in fact subject to fragmentation? Can you envision the impact of a road to singularity that produces increasingly dissociated people? In light of Huxley, Brooks, and Jung’s ideas on the unconscious, do you see how what might seem like an advance could actually be deteriorating individual consciousness and societal value? If so, will you stop this deterioration? Will you choose to orient your product strategy to the social value model of progress?

For the sake of finishing this post in under 5000 words, I have saved further exploration of ideas on the social value model of progress for an upcoming post (Brooks x Innovation: The social value model of progress) that will take a deeper look into more specific parameters for creating products in ways that build social capital. In the meantime, I highly recommend taking a look at ‘The Social Animal’ for yourself, as Brooks’ 300 pages are already an ambitious condensation of an immense body of research.

Books Mentioned:

Aldous Huxley, The Doors of Perception

Carl Jung, Man and His Symbols

David Brooks, The Social Animal


