A conversation with Kevin Kelly
An organization's reason for being, like that of any organism, is to help the parts that are in relationship to each other, to be able to deal with change in the environment. That means that they are trying to anticipate the future in some way or another. This is true of the separate pieces of a cell, of the cells in a body, of the bodies in a society, of the societies in an economy. Each system is trying to anticipate change in the environment.
The most interesting thing about change in the environment is that for the most part the environment isn't changing. The most certain thing you can say about the environment tomorrow is that it probably is going to be just like today, for the most part.
That may not sound as profound as it is, but it's actually a very important thing to have in mind when you are trying to structure something to adapt. The way that organizations and organisms anticipate the future is by taking signals from the past, most of the time. Knowing what has just happened in the past, an organism can bring that data forward to modify its future actions.
That is the definition of a feedback loop: a circuit that is taking information from the past and bringing it forward. We have common mechanical examples all around us, such as thermostats, or flush toilets. You have a signal that something has just happened, and you're bringing that signal from the past back into the present. The mechanism uses that signal to attempt to anticipate the future: a change in the temperature of the room, or the rising level of water in the toilet tank.
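The thermostat loop Kelly describes can be made concrete with a toy simulation. In the sketch below (the names and constants are invented for illustration, not drawn from any real control system), the mechanism acts only on its most recent measurement, yet that simple feedback settles the room near its setpoint:

```python
# Minimal thermostat feedback loop: the signal from the past (the last
# measured temperature) drives the present action (heater on or off),
# which in turn shapes the future state. All constants are illustrative.

def simulate_thermostat(setpoint=20.0, start_temp=10.0, steps=200):
    temp = start_temp
    history = []
    for _ in range(steps):
        # Feedback: act on the most recent measurement.
        heater_on = temp < setpoint
        # Simple room dynamics: the heater adds heat, the room leaks it.
        temp += (1.5 if heater_on else 0.0) - 0.05 * (temp - 5.0)
        history.append(temp)
    return history

history = simulate_thermostat()
```

No intelligence is involved, yet the room temperature hovers around the setpoint: the loop anticipates the future purely by reacting to the immediate past.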
These are simple mechanisms that do not entail intelligence, or consciousness. They are very elementary, yet these feedback mechanisms are basic to complex systems.
When a system is in turbulence, the turbulence is not just out there in the environment, but is a part of the organization or organism that you are looking at. The organization and the environment are in concert. You can't easily divorce the organization and say, "Look at all that turbulence out there. How do we respond to it?" The organization is already responding to it, no matter what you do, and is indeed, in some measure, causing the disturbance.
For instance: there is a sense in which evolution is driven by random mutations. The conventional understanding has been that it is impossible to predict how something will evolve, because the environment is turbulent and full of things interacting with each other. But in fact, when you try to model that on a computer you find that because of the very structure of matter and of the chemical bonds that are the basis of every organism, evolution is not random at all. It will tend to follow certain paths. Those paths are what in complex systems theory are called basins of attraction.
When you run a model of a complex system, you cannot determine, from where you start, exactly where you will end up. They call that sensitivity to initial conditions. On the other hand, you can say that it is likely that you will end up in one of, say, five different basins. That's a great advance.
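A toy dynamical system makes the point concrete. In the sketch below (a double-well potential, purely illustrative and not a model of evolution), you cannot predict the exact trajectory from a given start, but every start settles into one of exactly two basins, at x = -1 or x = +1:

```python
# Gradient descent on the double-well potential V(x) = (x^2 - 1)^2,
# whose two minima (attractors) sit at x = -1 and x = +1. Which basin
# a trajectory reaches depends on where it starts, but it always ends
# in one of the two. All constants are illustrative.

def settle(x, steps=1000, rate=0.01):
    for _ in range(steps):
        x -= rate * 4 * x * (x * x - 1)  # step downhill along -V'(x)
    return x

starts = [-1.8, -0.7, 0.3, 1.5]
endpoints = [round(settle(x0), 2) for x0 in starts]
# Four different starting points, but only two possible destinations.
```

Knowing the basins doesn't tell you which one a particular start reaches, only that the set of possible outcomes is small, which is exactly the kind of prediction Kelly says is available.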
In biological evolution, these basins of attraction say to us, "No, you can't predict what kind of organism will evolve from an amoeba. But one of the basins of attraction is a form with bilateral symmetry and tubular guts, mouth at one end, anus at the other, homologous appendages arranged down the sides. If you could run the tape of evolution over and over again, you might never again get humans or giraffes, but you're always going to find organisms evolving tubular guts and bilateral appendages. It's built into the nature of the underlying forces and materials."
Basins of attraction, of self-organization, show up as well in our complex social environment, in human organizations. Here again, while we cannot predict the result of any given input, we can say that it will likely fall within one of several areas. All imaginable futures are not equally possible.
This is actually a very important principle that science is learning about large systems like evolution and that futurists are learning about anticipating human society: just because a future scenario is plausible doesn't mean we can get there from here. The dynamics of that plausible future may be possible, but the pathway from here to there may be too arduous. It may be impossible, it may require certain things that we don't have at hand. It may require a political will or a belief system that we no longer have. It may require a sacrifice of some sort that we are not willing to make.
Organizations do this. You reach a point where you have to sort of de-evolve, you have to let go, you have to cross this valley, this desert. And sometimes there is not enough will, or resources, or sense of direction to be able to cross this desert to get to the other peak which is actually higher.
This happens to individuals within organizations, as well. In medicine, for instance, there are entire careers built on certain specialties that are in the process of being rendered obsolete by technological change, or de-emphasized because of the shift toward primary care, or because of other changes. The doctors in those specialties are not necessarily interested in, or skilled at, doing any other kind of medicine. In order to change, they have to back down on the evolutionary scale, perhaps go down an income level, perhaps go down a level of power in the organization from what they were before -- because their training and experience optimized them for that one particular thing.
Over the past few decades, people have worked very hard to build robots with artificial intelligence. One of the surprising discoveries that came out of that intense experience is that trying to make a central brain run things does not work. If you try to make a robot that walks, and you give it a brain that has some sort of eyes to see with, and give that brain the job of notifying the legs when to move, it will inevitably flop over. Using a centralized brain for the task of trying to anticipate the future and deal with change just doesn't work.
Some researchers found they made more headway when they started from the bottom up, instead of working from the top down. They decided to build an intelligent robot that was only as smart as an ant. They had observed that ants walk really well. The little tiny ant's brain did that job a lot better than any robot. So the researchers wondered how they were doing it. And they discovered something very interesting: when it comes to walking, most of the ant's thinking and decision-making is not in its brain at all. It's distributed. It's in its legs.
What you have is a very complex circuit of small, simple reflexes that are inhibiting, suppressing, or launching other reflexes -- and out of a collection of simple reflexes you get complex behavior.
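This is essentially what robotics calls a subsumption-style architecture. A minimal sketch (the reflexes and sensor names are invented for illustration): each layer is a trivial rule, a higher-priority reflex suppresses the ones below it, and there is no central planner, yet sensible behavior falls out of the stack.

```python
# Toy subsumption-style controller: each reflex is a simple rule, and
# higher-priority reflexes suppress lower ones. No central brain
# decides; the behavior emerges from the stack. All rules illustrative.

def avoid(sensors):
    # Reflex 1: if an obstacle is close, turn away.
    if sensors["obstacle_distance"] < 1.0:
        return "turn"
    return None  # no opinion; defer to lower layers

def seek_food(sensors):
    # Reflex 2: if food is sensed, move toward it.
    if sensors["food_seen"]:
        return "approach"
    return None

def wander(sensors):
    # Reflex 3: default behavior when nothing else fires.
    return "forward"

REFLEXES = [avoid, seek_food, wander]  # priority order, highest first

def act(sensors):
    for reflex in REFLEXES:
        action = reflex(sensors)
        if action is not None:  # this reflex suppresses the rest
            return action

print(act({"obstacle_distance": 0.5, "food_seen": True}))  # prints "turn"
```

None of the three reflexes is smart on its own, but the inhibition relationships among them produce behavior that looks purposeful from the outside.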
This is actually how our brains work. A brain is a society of very small, simple modules that cannot be said to be thinking, that are not smart in themselves. But when you have a network of them together, out of that arises a kind of smartness.
Organizations work very much like that. The idea that there is somebody actually directing them, that there can be an "I" at the center, an autonomous person or directorate making everything happen, is an illusion. The "I" of an organization is an emergent phenomenon, greater than the sum of its parts, which arises out of the whole thing.
My favorite example is the bee hive. People think that the queen bee is directing the hive. But she's just following it. Where does the "I" of the bee hive reside? It resides in the hive as a whole. The hive makes decisions without any of the individual bees even being aware of making a decision, or even that a decision is being made. And yet the distinctive personality of the bee hive emerges.
Organizations are a lot like that. Certainly there are people that are in charge, people whose job it is to try to look for and embody the "I" of the organization, people who make decisions and sign the documents. But if you try to isolate who is really thinking for the organization, it's a futile quest. The organization as a whole does the thinking.
An organization's intelligence is distributed to the point of being ubiquitous. It's distributed to every component of the organization, and there is no place that you can put your finger on it and say, "Here's the mind," just as intelligence is ubiquitous and holographic in the brain.
Managers tend to treat organizations as if they are infinitely plastic. They hire and fire, merge, downsize, terminate programs, add capacities. But there are limits to the shifts that organizations can absorb.
In some cases there are quantum levels of change: an organization may be able to be what it is now, or be something else that its managers can envision, but not necessarily be the things in between those two states. Or they may be able to go quickly through those transitional states, but they can't be stable in them.
Species go extinct because there are historical constraints built into a given body or a given design. If organizations were not embodied in physical buildings and real people in real locations, they could become anything they wanted at any time. But when you are embodied in a location, in a physical plant, in a set of people, and in a common history, that constrains your evolution and your ability to evolve in certain directions.
If you are the executive or the board of an organization and you can see, looking at your environment, that you really need to have some capacity that is significantly different from the capacities that you have now, you are more likely to succeed if you start a wholly new enterprise than if you try to shift what you are doing now.
It's psychologically easier to think of it as siring offspring that retain some of the genes, some of the stuff that you have learned, and growing something new. But it's becoming ever more important to be able to let go and to kill things.
When you start asking those questions you find that just doing what you have been doing, but doing it better, is not necessarily the answer. You often have to stop doing what you're doing, and go do something new. This process of letting go, of stopping what we are doing entirely and doing something else in its place, has not really made it into our vocabulary of corporate management.
This kind of change is disruptive. It's heart-rending. It's not easy. We see this often not with entire organizations but with products. Organizations get invested into a particular product. And sometimes the best thing is to stop making that product, even though it's profitable, because it has optimized at a local peak. This is where it becomes hard. It's like saying, "This is a nice peak, but we are going to get stuck here. We've got to let go. We've got to kill it and climb down this hill, so that we can cross the valley and move on to larger peaks."
We don't have the organizational and managerial tools for giving things up very easily.
For instance, as health care moves from just fixing people who are sick toward more prevention and community work, it is finding out something important: the kind of organizational and individual skills that you need to fix people are very different from the kind of skills that you need to get people to change the behaviors that make them sick in the first place. This is a very different business.
Healthcare has been something that happens in hospitals, hidden, removed from view. The growing ability of information databases to have all my medical records together, in a format that I can understand, with backup materials to deepen that understanding, will help me deepen my sense of my own medical history.
Technological advances could allow us to see more clearly into our own lives. They will lend greater transparency, and a longer view, both to the people and to the organizations serving the people, so that we together can see what our environment is.
For instance, two weeks ago our three-year-old daughter fell out this window here, onto the concrete one floor down. She's okay, but it was pretty traumatic and scary. The hospital did a great job, but I still intend to write them to get copies of the CAT scan and X-rays that they did, just for our own education, for my daughter to see herself in a different way. Why shouldn't I be able to put that onto our computer monitor, learn from it and show her what happens? If we can bring people that information, and make it evident to them, medicine will not be such a black art.
That's what personal computers did. The great advance of personal computers was not the computing power per se but the fact that they brought it right to your face, that you had control over it, that you were confronted with it and could steer it. As new technologies enrich our environment, the health care system need not seem as remote from our own lives as it does right now.
Think for a moment about the difference between a free market economy and a Soviet-style command economy. A free market economy allows you to have mini-revolutions on a daily basis, rather than have a big one once every fifty years.
So how do you get a large organization to respond to the environment? You can do it in two ways. One is you can wait and then change the entire organization all at once and hope that it works -- and if it doesn't work you're dead. Or you can allow all the little parts of the organization to adapt on a regular basis, so that you don't have your change happen all at once.
Changing things from the top down works when things are stable. It's a very efficient way to do it. But in a turbulent environment the change is so widespread that it just routes around any kind of central authority. So it is best to manage the bottom-up change rather than try to institute it from the top down.
There is an art to this. There are some processes that don't allow for experimentation, creativity, or error at all. We don't want the people who run nuclear power plants to get creative and make up their own rules. Some things are "mission critical." Those are usually simple, proven processes on which you can build complexity. That's probably not where you are going to find a creative advantage.
Almost all innovation in a system happens at its fringes. So maximize fringes. The nature of an innovation is that it will arise at a fringe where it can afford to become prevalent enough to establish its usefulness without being overwhelmed by the inertia of the orthodox system.
Maximize fringes. Have more skunk works. Emphasize subsidiaries that are some distance from the main office. Give people some slight autonomy. Have a few hidden budgets. The fringes are where innovation and change originate. If you try to do it in the central area -- if you have a "Department of Change" -- it will get overwhelmed by the orthodoxy surrounding it. You have got to give it some room out there where the innovation can blossom and take root for a little while, so that it can prove itself and move back into the center.
In healthcare, for example, suppose you have a new drug. If you are the patient, you don't want to get an experimental drug. You want the healthcare system to optimize a known drug and make it better. The exploration for new drugs involves a certain level of experimental loss -- there is no guarantee that the new drug is as effective and problem-free as the old one. But unless you explore, you'll never have new drugs, you'll only have better old ones.
With the increasing use of outcomes research and outcomes management, healthcare is shifting the emphasis slightly from being a very exploratory environment, to being more exploitive. Much of outcomes research is a systematic attempt to exploit what is known and make it better.
The necessary exploratory component that a system must have to stay adapted to its environment is very small, something in the neighborhood of one percent. In an organization that might mean one percent of your people, or one percent of your revenue -- one percent of your actions are sent exploring in a very open way.
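Kelly's one-percent figure is the same trade-off that reinforcement learning formalizes as epsilon-greedy action selection. A minimal sketch with epsilon = 0.01 (the option values here are invented for illustration):

```python
import random

def choose(known_values, epsilon=0.01, rng=random):
    """Explore with probability epsilon; otherwise exploit the best."""
    if rng.random() < epsilon:
        # Explore: try any option at random, including unproven ones.
        return rng.randrange(len(known_values))
    # Exploit: keep refining the option we already know is best.
    return max(range(len(known_values)), key=known_values.__getitem__)

random.seed(1)
picks = [choose([0.2, 0.9, 0.5]) for _ in range(1000)]
# Roughly 99% of picks are the best-known option (index 1);
# the stray remainder is the system's exploratory one percent.
```

Set epsilon to zero and the system only ever repeats its current best choice, which is exactly the regularized-to-death condition the next paragraph warns about.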
That's a small amount, but it is necessary. If outcomes research were used to so regularize medical practice that there was no variation whatsoever, you could lose that wild, random data. You would have no comparable data with which to continue to improve the system.
That threshold into complexity is very different for different systems. But if you are inside the system you can often tell when it crosses the threshold. Things are suddenly not acting at all like they were before.
There will be a different kind of bigness to deal with, a complexity that is dispersed geographically, temporally, and organizationally.
This calls for an organizational model of loose affiliation rather than tight control, with the hierarchy determined not so much by rank as by time and size: the higher levels are those that are concerned with longer periods of time over greater parts of the organization. This is much more realistic than thinking of a hierarchy as the person at the top giving orders that the people at the bottom must follow.
When we surrender some of our control we get all these cool things like artificial evolution, adaptive materials, autonomous agents, smart buildings, and so forth. It's not that we entirely accept whatever behavior they have, and say that we'll just tolerate whatever happens. It's more along the lines of raising a child: we train the system to a certain range of behaviors that we find most useful. But then we let it go, because we don't want to have to be babysitting it the whole time.
Many of our systems are approaching that level of complexity. That means that there are going to be surprises. That means that we may not understand exactly what is happening.
Everything that we are making, we are making more and more complex. As the complexities of the things that we make increase, they become somewhat biological in nature. In order for us to manage them, we need to import some of the principles that nature uses to manage natural ecosystems -- trees, bushes, birds, meadows, and jungles.