The Chinese artist Xu Bing has long experimented, to stunning effect, with the limits of the written form. Last year I visited the Centre del Carme in Valencia, Spain, to see a retrospective of his work. One installation, Book from the Sky, featured scrolls of paper looping down from the ceiling and lying along the floor of a large room, printed Chinese characters emerging into view as I moved closer to the reams of paper. But this was no ordinary Chinese text: Xu Bing had taken the form, even the constituent parts, of real characters to create around 4,000 entirely invented versions. The result was a text that looked readable but had no meaning at all. As Xu Bing himself has noted, his made-up characters “seem to upset intellectuals,” in a sly sendup of our respect for the written word.
In another room was Book from the Ground, a slim volume displayed among the sources of its inspiration: symbols and emojis gathered from around the world and from different contexts, from airports to keyboards. Xu Bing scoured the world for universal images, and the result stands in stark contrast to Book from the Sky: This book was designed to be read by anyone. The first page was slightly awkward to read, as I translated each picture into the (in my case, English) word. But as I turned the pages, the meaning emerged more fluently, and I was drawn into its story of a day in the life of an office worker. It was as if Xu Bing was asking me to wonder what was happening in my brain as these tiny pictures on the page transformed into meaning, a narrative. How was the process of reading pictorial symbols different from reading letters based on phonetic symbols?
Xu Bing was illustrating what recent studies in neuroscience have revealed: People everywhere read words made from pictures, such as Chinese characters (known as pictographs), and words made from letters, in a remarkably similar way. It’s an insight that opens a window on how writing developed and how we read—and how we might tap deeper wells of creativity and communication.
Humans in different places and times have felt impelled to overcome the limitations of pictures in communicating. Yet not every society felt the demand to capture spoken language in written form. Until colonization, Aboriginal communities in Australia lived in societies governed by extremely complex laws that passed through the generations entirely by oral means. For tens of thousands of years, rules governing hunting, finding your way, marriage, and ceremony have been embedded in song and performed, learned, and taught in everyday life. There’s beautiful sacred rock art throughout the continent, and symbols used for specific identification, but neither developed into a written system capturing a whole language.
Some of the earliest writing—symbols for meaning rather than pictures alone—comes from Mesopotamia, dated to around 3000 B.C.: clay tablets dug up at the archaeological site of Kunara, near the Zagros Mountains in modern-day Iraqi Kurdistan. These tablets record quantities of goods in a form of bookkeeping—incoming and outgoing amounts of flour and grains. “The thing about human ingenuity is that when there’s a sharp need for something, it tends to crystallize in discovery,” says Irving Finkel, an assistant keeper of ancient Mesopotamian script, languages, and cultures in the British Museum. Necessity being the mother of invention, in other words. “It’s very probable that it was [a] kind of administrative responsibility which produced the first stumbling attempt at writing and then eventually a proper fluent script,” Finkel says.
Egyptologist Günter Dreyer came to similar conclusions during a lifetime of excavating in Ancient Egypt, discovering artefacts crucial to our understanding of the development of writing. “Why is there a need to write something down? I think the reason for that is simple,” Dreyer says. “Those are the requirements of accounting.” Dreyer points out that ruling back then, as today, involved “collecting taxes and redistributing. And in a big area, you somehow needed to note down who delivered what when.” Indigenous Australians, feeding themselves and their communities through a hunter-gatherer lifestyle and bartering goods with other communities, didn’t need to record such trade, either for a third party far away (such as a tax office) or for posterity.
But there was still a long way to go from recording goods and quantities to writing great works of literature. Humans all over the world faced the same problems in expressing themselves beyond the here-and-now that speech covers. It turns out that every ancient writing system solved these problems in exactly the same way. “We like to call it the giant leap for mankind,” Finkel says. The leap is from using a picture as a picture (a logogram) to using it to portray a sound (or phonogram)—the Rebus Principle. Many children play a game using this principle when they discover that a picture of a bee can stand for the sound “be,” and, combined with a drawing of a leaf, these two unrelated objects can suddenly produce a meaning—belief.
But then ambiguity arises: When is a bee a bee, and when is it a sound? Cuneiform, Egyptian and Mayan hieroglyphs, and Chinese all solved the problem in the same way: They added unspoken elements now known as “classifiers” to clear up whether the writer is talking about keeping bees or simply using “be” as a sound. Chinese still uses this system, with picture, phonetic, and classifier elements all crucial to its written form. But in other places a different system took over: the alphabet, invented around 4,000 years ago in the Sinai Peninsula. Stripped of anything but sound, this handful of symbols can be learned quickly, unlike the thousands of Chinese characters that must be mastered for literacy. After a few centuries at the margins, the alphabet from the Sinai swept through Europe and much of Asia and Africa, changing into the dizzying range we have today.
No writing system goes back much further than 5,000 years, a mere blink of an eye in evolutionary terms. “Relative to speech [reading is] very young,” says Tae Twomey of University College London, who has spent her career looking into this new trick of Homo sapiens. “The part of the brain that deals with reading had to evolve somehow from the brain that we used before writing was invented.” And it wasn’t just one part that was recruited. “If you think about it, it’s a complex task. You are extracting visual information in order, ultimately, to get to a meaning.” Once I do start to think about this process—a process I can’t remember not being able to do—it starts to seem extremely alien: Thoughts, ideas, instructions, information are being transferred from one human brain into mine, via my optic nerve. But the visual element is only part of the story.
Twomey’s research uses scans to show the different areas of the brain that are active when we read. “It’s a distributed network,” she explains. Neurologist Thomas Hope, a senior research associate at University College London, offers an analogy. “Like most cognitive behavior, we think reading works like the Nile Delta.” It’s not fed by one stream, he says, “but a bunch of potentially redundant streams.”
For reading, there are two large tributaries, broadly correlated with sound and vision. (The third major area working on the task is Broca’s area, in charge of executive function, which acts as the conductor, orchestrating all the inputs.) Beginning readers sound out each letter to get to the meaning. “Reading is not just to communicate meaning, but also to communicate generally,” Hope says. “And the most common way that we communicate is by speaking. So when you read a word, some part of your brain is sounding out what that word would sound like if you were saying it or if someone was saying it to you.” And that act of speech communication is the same across cultures, whatever the written form of the language, so most readers will be hearing as they read.
But sound isn’t all. “I’ve been watching my children learn to read,” Hope says. “You can’t learn to read just by learning the letters. You have to learn to understand and recognize the words, too.” Readers in an alphabetic system have to learn the equivalent of characters: Learning the shape of a word is basically the same job as extracting the meaning from a pictographic character. But once we get more fluent at reading we tend to use a different tributary more. “Another way, that most skilled readers prefer, is to recognize the whole word as a single entity and connect it directly to meaning,” Hope says.
The so-called “Cambridge letter,” a meme that circulated in 2003, gives a proficient reader a chance to test this latter mode of reading, through shape recognition rather than sounding out the letters:
Aoccdrnig to a rscheearch at Cmabrigde Uinervtisy, it deosn’t mttaer in waht oredr the ltteers in a wrod are, the olny iprmoetnt tihng is taht the frist and lsat ltteer be at the rghit pclae. The rset can be a toatl mses and you can sitll raed it wouthit porbelm. Tihs is bcuseae the huamn mnid deos not raed ervey lteter by istlef, but the wrod as a wlohe.
Most people can extract the meaning from this quotation without too much problem, which seems to prove its point: You can read using a general impression of the word rather than relying on the sound. But as Hope tells us, the history of research is a history of overturning simple explanations to find more interesting, if complex, stories below. In fact, jumbling the letters of a word does matter, for some words more than others, and in some sentences more than others.
Matt Davis, at the University of Cambridge (where this research did not take place: the first mistake of the meme), put together a handy blog post on the faulty thinking behind the letter. First is the fact that words of two or three letters do not change at all: The second sentence leaves “the,” “can,” “be,” “a,” “and,” “you,” “can,” and “it” unchanged, giving our brains a lot of easy information to go on. Another feature of the meme is that no word has been misspelled in such a way that it spells a different word—Davis gives the example of “salt” and “slat” as a problem that’s been avoided—and, further, each jumbling puts letters close to where they originally were: “Cmabrigde” might be recognizable (especially when followed by “Uinervtisy”) but is far harder when written “Cgbaimrde.” Finally, the examples chosen all retain the sounds of the original words they’re scrambling; the “th” in “without” is preserved in “wouthit.” And that’s because sound, as it turns out, does matter.
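The meme’s transformation is easy to make concrete. Below is a toy sketch in Python (not Davis’s code; the function name and the punctuation handling are assumptions of this illustration) that keeps each word’s first and last letters fixed and shuffles only the interior, leaving words of three letters or fewer untouched:

```python
import random

def cambridge_scramble(text, rng=random.Random(0)):
    """Scramble the interior letters of each word, keeping the first
    and last letters in place, as in the 'Cambridge letter' meme.
    Words of three letters or fewer pass through unchanged."""
    out = []
    for word in text.split():
        # Peel off trailing punctuation so it stays at the end.
        core = word.rstrip(".,;!?")
        tail = word[len(core):]
        if len(core) <= 3:
            out.append(word)
            continue
        middle = list(core[1:-1])
        rng.shuffle(middle)
        out.append(core[0] + "".join(middle) + core[-1] + tail)
    return " ".join(out)

print(cambridge_scramble("According to a researcher at Cambridge University, it matters."))
```

Because short words survive intact and every scramble keeps its anchor letters, the output tends to stay readable, which is exactly the property the meme exploits; a variant that also moved first and last letters, or swapped letters across words, would be far harder to decode.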
In a recent experiment, Twomey scanned individuals’ brains as they read. Her experiment was rooted in her own experience of learning to read in Japan. Every Japanese child learns two systems of writing: the kanji system, based on Chinese characters, and the kana system, which is purely phonetic (though the units are syllables rather than the individual sounds of alphabetic systems). This dual approach lasts their entire lives, with books written in a mix of both systems (except for children’s books, for learning purposes). This means that you can test the difference in reading the two scripts without worrying about controlling for reading proficiency or language differences. The working assumption of many scholars was that the brain scans of people reading pictographic scripts would show an emphasis on the visual part of the brain, the meaning extracted by recognizing the character, in contrast to those reading a phonetic script, who would be using the sound of the letters to arrive at meaning. What Twomey’s scans showed was that the same areas were activated when reading both types of script.
That experiment compared reading strategies within one individual who had learned to read in two systems. Twomey conducted other research that compared reading strategies between individuals, scanning the brains of people reading in Chinese and English. The differences between readers in this experiment weren’t straightforward to interpret. “At first we thought the difference we saw in the brains was due to the difference in scripts they were reading,” Twomey says. “But when we looked at dyslexic readers, they were using both areas, regardless of what script they were reading, which suggests that it has nothing to do with script itself.”
Twomey interprets this surprising finding as evidence of a difference in reading strategies that results from how we learn to read. English readers are taught with a phonics system, using rhymes and other sound-based exercises; Chinese is taught through writing, associating the character on the page directly with its meaning. Twomey says that dyslexic readers, in their struggle to learn to read, call on more of the tributaries in the brain to overcome their difficulties with whichever script they are being taught. This showed up in their brain scans: The pathways used to extract meaning were the same for dyslexic readers whether they were reading pictographic Chinese or the phonetic alphabet. For the brain, there were no differences between reading picture-based and sound-based words, just differences in how we’ve been trained to do the job.
Hope, who has read and admired Twomey’s research, offers a summary. “The key point is we’re all of us using both of these pathways all the time,” Hope says. “You and I might differ slightly in our preferences for them, but we’re still using them both.” This 5,000-year-old human technology, which arose at different places around the globe, first used similar systems combining phonetic, pictographic, and classifier elements; a divergence came with the invention of the alphabet, which itself proliferated into such differing forms as Cyrillic, Arabic, Armenian, Tibetan, and Devanagari—to name a few. But when we look deep inside the brain, it turns out that we are all doing this strange activity in similar ways.
What this says about teaching is yet to be fully explored, but Twomey’s research suggests that our teaching systems aren’t penetrating the depths of our reading brains. Of course we learn to extract meaning from squiggles on a page, or you wouldn’t be reading this. But we could be taught to use more of the tributaries involved, as dyslexic readers seem to be doing to compensate for their difficulties. If non-dyslexic readers of phonetic scripts, which are usually taught initially through sound-based learning, were also encouraged to learn the word shapes from the start; if those learning pictographic characters chanted them out loud as well as copying them out to memorize them; who knows what new creativity would be unleashed? As we learn more about the mysterious tributaries activated in reading, perhaps there are more teaching strategies to be discovered, helping those who do not find it a natural activity, or for those around the world who miss out on early education.
As I left the Centre del Carme, I saw Xu Bing standing at the exit and asked him to sign my copy of Book from the Ground. He smiled and asked me to write the letters of my first name on a piece of paper before crafting them anew: not in a line, as an alphabetic system requires, but in a block, producing the effect of a Chinese character. It is another trick he has devised to disrupt our experience of reading, which he calls “square word calligraphy.” He signed his own name in an emoji: round-lensed spectacles. He also included two Chinese characters, though I couldn’t tell whether they were from a Chinese dictionary or from Book from the Sky—a doubt that no doubt would have pleased him.
To me, the experiences of reading Xu Bing’s various scripts feel vastly different. That’s because I learned to read an alphabetic script and continue to read alphabetically all the time. Perhaps one day children brought up on emojis will learn to read a combination of pictures and letters just as fluently, returning us to the age of Egyptian, Cuneiform, or Mayan systems, where sound and pictures mixed to produce meaning together. Xu Bing reminds us that the way we read is not hard-wired into our brains but can be learned and re-learned. The way we write in the future may take on entirely new, now-unimaginable forms. The artist is now echoed by scientists, who offer one more piece of evidence to explain the success of our species: The superpower of our brain lies with its extraordinary ability to adapt to situations and challenges, bestowing advantages far more quickly than anything evolution can offer.
Lydia Wilson is a researcher at the University of Cambridge’s Computer Laboratory, and a visiting scholar at the Ralph Bunche Institute at CUNY’s Graduate Centre. She recently presented the BBC’s series A Secret History of Writing, and edits the Cambridge Literary Review.
Lead image: Rawpixel.com / Shutterstock