“To imagine a language is to imagine a form of life.”
—Ludwig Wittgenstein, Philosophical Investigations (1953)
Jeremy England is concerned about words—about what they mean, about the universes they contain. He avoids ones like “consciousness” and “information”; too loaded, he says. Too treacherous. When he’s searching for the right thing to say, his voice breaks a little, scattering across an octave or two before resuming a fluid sonority.
His caution is understandable. The 34-year-old assistant professor of physics at the Massachusetts Institute of Technology is the architect of a new theory called “dissipative adaptation,” which has helped to explain how complex, life-like function can self-organize and emerge from simpler things, including inanimate matter. This proposition has earned England a somewhat unwelcome nickname: the next Charles Darwin. But England’s story is just as much about language as it is about biology.
There are some 6,800 unique languages in use today. Not every word translates perfectly, and meaning sometimes falls through the cracks. For instance, there is no English translation for the Japanese wabi-sabi—the idea of finding beauty in imperfection—or for the German waldeinsamkeit, the feeling of being alone in the woods.
Different fields of science, too, are languages unto themselves, and scientific explanations are sometimes just translations. “Red,” for instance, is a translation of the phrase “620-750 nanometer wavelength.” “Temperature” is a translation of “the average speed of a group of particles.” The more complex a translation, the more meaning it imparts. “Gravity” means “the geometry of spacetime.”
What about life? We think we know life when we see it. Darwin’s theory even explains how one form of life evolves into another. But what is the difference between a robin and a rock, when both obey the same physical laws? In other words, how do you say “life” in physics? Some have argued that the word is untranslatable. But maybe it simply needed the right translator.
While other 12-year-old boys were reading Marvel comics, England was reading The Cartoon Guide to Genetics. The cover depicts a scuba diver encountering a life-sized strand of underwater DNA. Inside is a tour of biology basics, from ribosomes to plant sex. England was immediately intrigued.
“I thought it was really marvelous to behold how functions are accomplished by molecules,” he tells me. England speaks ebulliently, and with his hands, and wears a kippah atop his head.
Take DNA polymerase, he says. In biological terms, its job is to create new DNA molecules by assembling nucleotides, which consist of a chemical base, a sugar, and a phosphate group. “When you see that story laid out before you, it all makes sense—it just looks to us like it’s working to accomplish a goal,” England says. “And yet, these things are barely distinguishable from inanimate matter. You break them into slightly smaller pieces, and all they can do is spin and vibrate.”
England saw more of the same as an undergraduate at Harvard University, where he studied protein folding with the biophysicist Eugene Shakhnovich. Protein data banks held files detailing “beautifully delineated ribbons and sheets,” color-coded by property. Unraveled, each protein is made up of the same 20 amino acids. Yet somehow, once they are folded into shape, each carries out a specific and vital process required for life. “Amino acids are not going to write you a sonnet,” England says. “But when you string a few hundred of them together, suddenly you get this machine that looks like it is made for a particular purpose.”
Somehow, from the churning of blind gears, something like purpose emerges. The pieces, individually obeying nothing more than the basic laws of physics, collectively accrue function. Function seems absent from the world of physics: Time and space don’t exist for any express reason, but just are. In biology, systems are fine-tuned to act. To move, catalyze, and construct. The word “function” trapezes between life and not-life. Is it a word we bestow on things that merely seem life-like, or is it something more inherent? As England would tell an audience at Sweden’s Karolinska Institutet in 2014, physics doesn’t make a distinction between life and not-life. But biology does.
After his Ph.D., when he was a research fellow at Princeton, England would sometimes drive up to New York City to visit his oldest friend from childhood, a philosophy major. His friend took England to some of his favorite Lower East Side haunts and engaged him in long conversations about the Austrian-British philosopher Ludwig Wittgenstein.
Wittgenstein lived in solitude for a portion of his life in the forests of Norway—waldeinsamkeit—and wrote of so-called “language games,” or shared sets of conventions about communication. Some philosophers had maintained that a word’s meaning inheres in the physical object out there in the world. Wittgenstein, however, argued that a word’s meaning depends on its context, a context determined by the people who are using it. Playing a language game is sort of like speaking in code—if two people are participating in an activity that’s well understood by both parties, they can use fewer and simpler words to make themselves heard. Different groups of people—musicians, politicians, scientists, and so on—employ language games that suit their separate needs. New language games are constantly bursting into being. Meaning changes shape. Words adapt.
“In making that kind of a point, he’s channeling the same kind of idea that I also locate in, among other places, the opening passages of the Hebrew Bible,” says England.
“In the beginning, God created the heavens and the earth ...” Here, the Hebrew word for “create” is bara, the word for “heavens” is shamayim, and the word for “earth” is aretz; but their true meanings, England says, only come into view through their context in the following verses. For instance, it becomes clear that bara, creation, entails a process of giving names to things; the creation of the world is the creation of a language game. “God said, ‘Let there be light,’ and there was light.” God created light by speaking its name. “We have heard this phrase so many times that by the time we are old enough to ponder it, we easily miss its simplest point,” England says. “The light by which we see the world comes from the way we talk about it.” That might be important, thought England, if you’re trying to use the language of physics to describe biology.
Which he was compelled to do. As a young faculty member at MIT, he wanted neither to stop doing biology nor to stop thinking about theoretical physics. “When you refuse to let go of two things that are divergent in the way they cause you to talk,” he says, “it forces you in the direction of translation.”
In the Jewish tradition, “miracles” don’t necessarily defy the laws of nature. They’re a bit less grandiose than that—instead, a miracle is a phenomenon that was previously considered unimaginable. Witnesses to that miracle are called upon to reframe their assumptions and resolve contradictions. In short, they must start to think about their world in a new light.
To the physicist steeped in statistical mechanics, life can, in this sense, appear miraculous. The second law of thermodynamics demands that for an isolated system—like a gas in a sealed box, or the universe as a whole—disorder can never decrease over time. Snow melts into a puddle, but a puddle does not (on its own) spontaneously take the shape of a snowflake. Were you to see a puddle do this, you’d assume you were watching a movie in reverse, as if time were moving backward. The second law imposes an irreversibility on the behavior of large groups of particles, allowing us to play with words like “past,” “present,” and “future.”
The arrow of time points in the direction of disorder. The arrow of life, however, points the opposite way. From a simple, dull seed grows an intricately structured flower, and from the lifeless Earth, forests and jungles. How is it that the rules governing those atoms we call “life” could be so drastically different from those that govern the rest of the atoms in the universe?
In 1944, physicist Erwin Schrödinger tackled this question in a little book called What Is Life? He recognized that living organisms, unlike a gas in a box, are open systems. That is, they admit the transfer of energy between themselves and a larger environment. Even as life maintains its internal order, its loss of heat to the environment allows the universe to experience an overall increase in entropy (or disorder) in accordance with the second law.
At the same time, Schrödinger pointed to a second mystery. The mechanism that gives rise to the arrow of time, he said, cannot be the same mechanism that gives rise to the arrow of life. Time’s arrow arises from the statistics of large numbers—when you have enough atoms milling about, there are simply so many more disordered configurations than ordered ones that the chance of their stumbling into a more ordered state is nil. But when it comes to life, order and irreversibility must reign even at the microscopic scale, with far fewer atoms in play. At this scale, atoms don’t come in large enough numbers for their statistics to yield regularities like the second law. A nucleotide—the building block of RNA and DNA, the basic components of life—is, for example, made of just 30 atoms. And yet, Schrödinger noted, genetic codes hold up impossibly well, sometimes over millions of generations, “with a durability or permanence that borders upon the miraculous.”
So how does a gene resist decay? How does it not collapse under the weight of its fragility? Something deeper than statistics had to be at play, something that could allow small groups of atoms to irreversibly pull themselves up by their bootstraps and become something “alive.”
A clue came half a century later, when an English chemist named Gavin Crooks mathematically described microscopic irreversibility for the first time. In a single equation, published in 1999, Crooks showed that a small open system driven by an external source of energy could change in an irreversible way, as long as it dissipates its energy as it changes.
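Crooks’ result is now known as the Crooks fluctuation theorem. The article doesn’t reproduce the equation, but in its standard form it reads:

```latex
\frac{P_F(+W)}{P_R(-W)} \;=\; e^{\beta\,(W - \Delta F)} \;=\; e^{\beta\, W_{\mathrm{diss}}}
```

Here \(P_F(+W)\) is the probability that the external drive performs work \(W\) on the system along the forward path, \(P_R(-W)\) is the probability of the corresponding reverse path, \(\beta = 1/k_B T\), and \(\Delta F\) is the free-energy difference between the two endpoints. The exponent is the dissipated work: the more energy a transformation sheds into its surroundings, the exponentially less likely it is to run in reverse.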
Imagine you’re standing in front of a fence. You want to get to the other side, but the fence is too tall to jump. Then a friend hands you a pogo stick, which you can use to hop to the other side. But once you’re there, you can use the same pogo stick to hop the fence again and end up back where you started. The external source of energy (the pogo stick) allows you to make a change, but a reversible one.
Now imagine that instead of a pogo stick, your friend hands you a jet pack. You fire up the jet pack and it launches you over the fence. As you clear the fence, the jet pack dissipates its fuel out into the surrounding air, so that by the time you land, there’s not enough energy left in your pack to get you back over the fence again. You’re stuck on the far side. Your change is irreversible.
Crooks showed that a group of atoms could similarly take a burst of external energy and use it to transform itself into a new configuration—jumping the fence, so to speak. If the atoms dissipate the energy while they transform, the change could be irreversible. They could always use the next burst of energy that comes along to transition back, and often they will. But sometimes they won’t. Sometimes they’ll use that next burst to transition into yet another new state, dissipating their energy once again, transforming themselves step by step. In this way, dissipation doesn’t ensure irreversibility, but irreversibility requires dissipation.
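The fence analogy can be caricatured in a few lines of code. This is an illustrative toy, not a calculation from Crooks’ or England’s papers: a walker receives random kicks forward or backward, and a made-up parameter, `dissipation_fraction`, stands in for how much of each kick’s energy is lost to the environment during a hop, which here simply lowers the chance that a hop back toward the start succeeds.

```python
import random

def hop_simulation(dissipation_fraction, kicks=10_000, seed=1):
    """Toy version of the pogo-stick/jet-pack analogy.

    Each kick tries to push a walker over a 'fence' in a random
    direction. With no dissipation, hops are fully reversible and the
    walker just jitters around its starting point. The more of each
    kick's energy is dissipated during a hop (modeled, crudely, as a
    lower success rate for hops back toward the start), the more the
    walker drifts irreversibly forward.
    """
    rng = random.Random(seed)
    position = 0
    for _ in range(kicks):
        if rng.random() < 0.5:
            position += 1  # forward hop succeeds
        elif rng.random() < 1.0 - dissipation_fraction:
            position -= 1  # reverse hop succeeds
        # otherwise the kick's energy was dissipated; the walker stays put
    return position

reversible = hop_simulation(0.0)    # ordinary random walk: stays near zero
irreversible = hop_simulation(0.9)  # strong dissipation: large forward drift
print(reversible, irreversible)
```

Run with a fixed seed, the undissipated walk ends close to where it began, while the strongly dissipating one drifts far forward: a cartoon of how step-by-step energy dissipation biases a blind, random process in one direction.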
Crooks’ result was very general, applying to any transformation of a system out of equilibrium—including, potentially, life. But, says England, “there was maybe caution about the question of what could be said about a big messy many-body system with huge amounts of dissipation in it. It seemed like these results were true but maybe difficult to operationalize for calculation.” In 2013, while England was in California to give a talk at Caltech, he kept playing with the variables of Crooks’ equation in his hotel room. It was clear from the Crooks equation that in order to achieve the kind of irreversibility that is a hallmark of life, a system would need to be particularly good at absorbing and dissipating heat. But he knew that wasn’t the whole picture.
“It’s like wandering around the vicinity of the same basic point,” he says. “Then you put it down, and you sleep, and you think about different things. When you come back to it, sometimes there’s an opening in the wall. You receive things differently, time passes.”
Finally, something clicked. Given a particular energy source, some arrangements of atoms will be better at absorbing and spending it than others. These arrangements are more likely to undergo an irreversible transformation. What if some systems get better at doing this than others over time? Then the series of irreversible transformations becomes an effect that compounds, pulling itself up by its bootstraps. England put pencil to paper and wrote a generalization of the second law of thermodynamics that takes into account a system’s dissipative history, and which, he says, sheds light on the emergence of the structures and functions of life. In a paper late last year, he put it this way:
While any given change in shape for the system is mostly random, the most durable and irreversible of these shifts in configuration occur when the system happens to be momentarily better at absorbing and dissipating work. With the passage of time, the “memory” of these less erasable changes accumulates preferentially, and the system increasingly adopts shapes that resemble those in its history where dissipation occurred. Looking backward at the likely history of a product of this non-equilibrium process, the structure will appear to us like it has self-organized into a state that is “well adapted” to the environmental conditions. This is the phenomenon of dissipative adaptation.1
Of course, a system of atoms isn’t trying to do anything—it’s just blindly, randomly, shuffling itself around. And yet, through its journey from one shape to another, a constellation of chemical stories, it self-organizes into something that looks to us like it has adapted. “Language is a labyrinth of paths,” said Wittgenstein. To England, the translation felt right. How do you say “life” in physics? He called it “dissipative adaptation.”
It may sound as though dissipative adaptation reduces us to mere cooling towers for the sun. But the theory means much more than that. Darwinian natural selection could be recast as a special case of the more generalized phenomenon of dissipative adaptation, a dialect of a more fundamental language. Whereas dissipative adaptation occurs on the micro-scale, natural selection takes place in the world of macroscopic self-replicators. And self-replication is an excellent way to consume and dissipate energy. In the language of dissipative adaptation, words like “fitness” take on new meaning. “Fitness is defined here not in terms of a set of optimal functionalities, but rather as its ‘give and take’ relationship with available energy from the environment,” says Meni Wanunu, assistant professor of physics at Northeastern University. As systems dissipate energy, they drift in an irreversible direction and by doing so become “exceptional,” as England puts it, not perfect or ideal. “A bird is not a global optimum for flying,” England says. “It’s just much better at flying than rocks or worms.”
The theory challenges us to rethink the remarkable functions that make life special: “We have more flexibility in the places we look for function,” says England. The emergence of complex function from a collection of weakly interacting particles, without any strong coordination, is now a process that can be broken down into many small irreversible transformations driven by an external source of energy. It could be easier for things like proteins and enzymes to emerge than we’d thought. “It might not be an issue of exquisitely selecting the amino acid sequences over eons of self-replication,” says England. “There may be faster time scales on which you can self-organize things. If we can convince ourselves that the very beginning of life looks a bit more like a ramp or stairway with lots of smaller incremental changes that point in the right direction, then that may at least reset our notion of what kinds of scenarios we should be imagining.”
The theory doesn’t just help us peer into the past—it also suggests new design and engineering approaches. “If I want to mimic something that living things do, maybe it doesn’t have to mimic living things as much as I thought it did,” England says. One example may be something called “emergent computation,” which England and members of his lab are currently studying. The goal is to get systems of particles to evolve an ability to predict changes in their environment, without receiving any design instructions on how to do so. Getting good at absorbing and dissipating energy in a fluctuating environment requires some degree of anticipation, after all. “If we succeed in doing this, the argument will be that somehow the particles in the system are interacting in such a way as to effectively implement a calculation about the future based on the statistics of the past,” England says. That could impact technologies that are based on predictive power, from neural networks to bots that tell us when to buy a plane ticket.
This is the surprising power of translation. If it works, it could be the proof in the pudding that dissipative adaptation needs. For now, Wanunu is reserving judgment. “England proposes a new set of ingredients. Just how the pudding turns out will be exciting and interesting to see.” Jeremy Gunawardena, associate professor of systems biology at Harvard University, isn’t entirely sold on the approach either. “Jeremy is hoping that he can avoid thinking about the chemistry and see the abstract essentials of life emerging as a physical necessity,” he says. “I am not convinced. However, I think it is great that he is working on the problem and I am sure we will learn something interesting from it.”
Which is fair enough. After all, in the words of the late Umberto Eco, “translation is the art of failure.” The failures and trade-offs in this brand-new translation remain to be discovered. There may not, at the end of the day, be just one language to express the complexities of life. But England wants us to try a new one. He put it this way in Commentary magazine last year: “There is more than one viable language for describing the world, and God wants man to speak all of them.”
Allison Eck is a science writer and a digital associate producer for NOVA Online. She lives in Boston.
1. England, J.L. Dissipative adaptation in driven self-assembly. Nature Nanotechnology 10, 919-923 (2015).