
The Guardian of All Things


Memory – popularly symbolized, ironically enough, by the elephant – is what really sets us apart from the rest of the animal kingdom. Not learning by rote or repetition until it hardens into instinct, but memory as a conscious, intellectual point of reference in our decision-making process, one that leads to better long-term planning and advancement for the species.

By Michael S. Malone

All animals, even microscopic one-celled creatures, have memory. But about one million years ago, our hominid ancestors experienced something extraordinary that would play a critical role in turning them into modern human beings: they created a second, artificial form of memory, one that lived outside their skulls and could persist beyond their presence, even their existence.

Now, those two forms of memory are again about to converge and become one – and the implications for humanity may be as profound as those of that initial schism.

This transformation, as profound as any in human history, began with a creature called Homo ergaster. H. ergaster was an impressive character. Literally: the males stood over six feet tall, with the kind of brow we associate with the much later Neanderthals. H. ergaster was also among the most accomplished hominids in our family tree. For example, he discovered fire. He also led mankind out of Africa for the first time. But Ergaster’s greatest achievement was one over which he had little control: he learned to talk, and to do that he first learned how to listen.

These new skills, as profound as any in human history, were the product of three seemingly minor evolutionary changes. The first was a widening of the cervical vertebrae in the neck, which realigned the larynx and tongue and enabled Ergaster to introduce unprecedented complexity into the sounds he could make. The second was a new configuration of the middle and outer ear that allowed Ergaster to hear sounds, especially vowels, that other apes still struggle to hear. Finally, and most important, Ergaster began to develop a thin new layer of nerve cells, the cerebral cortex – wadded up, it fills the folds of the brain; stretched out, it would make a sheet about the size of an Hermès scarf – that allowed for the powerful processing of this new verbal data.

Unfortunately, Homo ergaster’s brain wasn’t big enough to take full advantage of these new tools, so while he likely did invent human speech (fire, exploration and speech – not a bad legacy), it could not have progressed far. Rather, the task of inventing language would fall to his successor, H. heidelbergensis. Heidelberg Man is one of the most impressive hominids of all (ourselves included). Standing as tall as seven feet (the average Heidelberg Woman was well over six feet), Heidelberg Man was the first true toolmaker; he buried his dead; and he appears to have invented both clothing and adornment – all suggesting a creature with a strong sense of self, of life and death, of the cosmos, and, most important for our purposes, of language.

Why is language so important? Four reasons. First, it enables one to convey information over large distances, even beyond the line of sight, a skill that is vitally important in travel and hunting. Second, it acts as a social glue, further tying members of the family or tribe into larger social structures. Third, it creates metaphor, the bundling together of two diverse ideas into something new and thus an addition to knowledge. Fourth and most important, it allows knowledge to be conveyed through time, from one generation to the next, thus accumulating human intellectual capital. The human oral tradition begins here; the acquired wisdom of one generation no longer dies with that generation.

Evolving alongside language was a gestural tradition, much of it related to hunting. Even today, modern hunters like the Bushmen of the Kalahari, so as not to spook prey, will organize themselves with a combination of hand gestures and images drawn with a stick or finger in the dirt. At some point in human history – we used to think with early Homo sapiens (Cro-Magnon Man), but now perhaps with late Neanderthals – those two traditions merged. The picture became the representation of the spoken word. Painted deep in caves, these pictograms became the language of ritual and religion; the keeper of those images, and the leader of the rites surrounding them, was the shaman or priest – the first separate class in human society and the beginning of the division of labor that led to modern society. Just as important, those images, whether painted on a wall or etched into bone or rock, gave mankind its first medium for carrying memory into the indefinite future without depending on the survival of human intermediaries.

It is probably not a coincidence that it was during this same period that what we think of as human consciousness, that great divider between humans and animals, likely first appeared. Certainly we thought before language, but language enabled us to think in ways, and about ideas, that no living thing on Earth had ever thought before.

Writing had begun, and with it the split between ‘human’ and ‘artificial’ memory had begun as well.

From this point – between 50,000 and 20,000 years ago – onward, human and artificial memory take wildly divergent paths, yet remain tethered to each other by their need to operate through living beings. The human brain spends the following millennia exploring its capabilities and limitations. Its facility is made manifest through art and literature, religion, commerce, politics and science – its first efflorescence, about 5,000 years ago during the Bronze Age, is one of mankind’s greatest achievements, one that can be read in the first epics: the Pentateuch of the Bible, the Egyptian Book of the Dead, the Mahabharata, the Iliad, and, earliest of them all, Gilgamesh, itself the story of the creation of human civilization. At one point, each of these epics was part of the oral tradition, their thousands of lines evidence of the power of human memory; and the need to safeguard them outside the heads of a few poets and minstrels was an important force in the creation of national languages.

By the time of the Roman Republic, the power of human memory was being pushed to limits rarely reached before or since. On a daily basis in this bureaucratized state, scribes and priests carried around in their heads massive amounts of records and ritual. But it was the orators, spurred by treatises like the Rhetorica ad Herennium and those of the greatest practitioner of all, Cicero (it was he who called memory ‘the guardian of all things’), who learned to use mnemonic devices and other tricks to memorize speeches that could run for hours, or to hold vast documents in their heads. This tradition, all but alien to us now, survived almost to the twentieth century, both as feats of memory and as the mysterious syncretic tradition of mystics, numerologists, kabbalists, alchemists, the likes of Giordano Bruno, and Renaissance ‘memory theater.’ By comparison, the history of artificial memory is the story of technological innovation as it was applied in daily life.

Whereas spoken language has largely evolved through the collision of different cultures – and, more recently, the need to coin new words for new ideas or phenomena – written language has mostly been changed by its medium and application. In Sumer, where modern writing is generally considered to have been invented, wet clay was the most common medium on which to write, first as counting marks on jars used for commerce, and eventually on clay tablets. As everyone knows, writing in mud is difficult, so the Sumerians used reed styluses to impress straight indentations in the clay – cuneiform. Happily, when those tablets dried in the sun, they became hard and durable; unhappily, they also cracked or melted in rain. On the other hand, if they happened to be in a library that burned down, they were fired rock hard and could last millennia – which is why we have Gilgamesh.

In Egypt, where you could write on stone walls and it almost never rained, you could paint curved, colored and realistic forms – hieroglyphics. But walls weren’t portable, so the Egyptians created another medium from pressed layers of water reeds – papyrus. But papyrus didn’t fold well; thus, scrolls. Millions of these scrolls were collected together in the first great repository of humanity’s memory: the Library of Alexandria.

The Romans adopted Egyptian scrolls, but soon discovered that in wetter, colder Europe papyrus rotted away not in hundreds of years but in just a couple of decades – meaning that records had to be constantly rewritten. So they turned to a new technology: scraped and preserved animal skins – vellum. In South America, the Incas, not having developed a flat writing medium just as they had not invented the wheel, turned to a complex system of knots in thread that researchers have only begun to decipher in recent years. In China, as legend has it, Cai Lun studied wasps’ nests and realized that he could duplicate their substance by combining the traditional bark medium of his time with old rags, hemp fiber and other materials – and invented paper. In the Middle East, scholars, facing the ‘indexing’ problem of finding anything on hundred-foot-long scrolls, began to cut them into sheets and sew one side together, creating a codex, or book.

The fall of the Roman Empire and the Dark Ages that followed were a devastating reminder of what happens when a civilization loses its memory. In a heroic last-ditch effort, St. Isidore, seeing the approaching end, had tried to catalog all of human knowledge. But he was too late. Luckily, the Byzantine Empire survived, and soon developed an obsession with gathering and organizing every bit of knowledge it found. It also captured China’s formula for paper. And, as the Islamic world rose, conquered much of the Eastern Empire and spread its power across North Africa and into the Iberian peninsula, it also created glittering libraries for those Byzantine intellectual treasures. More than five hundred years after it was lost, Europe’s memory was restored, kicking off first the late Middle Ages and then, as more and more of the old texts were translated, taught in the new universities, and printed using Gutenberg’s invention, one of mankind’s greatest bursts of intellectual achievement: the Renaissance.

It wasn’t long before all of these books, increasingly in the hands not just of monarchs but of academics and wealthy citizens, grew so numerous that they themselves needed to be indexed and cataloged. This set off a burst of guidebooks, manuals, histories, dictionaries, textbooks and, most famously, encyclopedias that would continue until the 19th century and, for the first time, bring the world’s memory to the average citizen. That new knowledge, spread to the masses in newspapers, books, broadsides and the first public schools, would prove to be literally revolutionary.
There was another technological force at work as well, and it came from, of all places, the toys of the nobility. Automatons – machines that looked like, and (somewhat) acted like, living things – had been popular for three thousand years when, thanks to the new clockwork machinery, they enjoyed a huge fad in 18th-century France. No one was better at building these devices to be both realistic and whimsical than Jacques de Vaucanson. His two great automatons, the Flute Player and the Digesting Duck (yes, a robot duck that pooped), were not only miracles of design but, more important, performed one action after another in sequence. That last feat required not just an internal clock, but also a set of instructions tied to that clock.

De Vaucanson was so successful with his automatons, and so popular with the aristocracy, that he was soon rewarded with a sinecure overseeing France’s textile industry. There, he used his skills to design a new kind of loom that used a succession of cards to tell the loom where and when to insert each thread. Textile workers, feeling threatened by the loss of jobs this new invention portended, destroyed de Vaucanson’s machine (as they soon would the French aristocracy) and drove him into retirement. But that technology, rediscovered decades later, would result in the Jacquard loom and the birth of the Industrial Revolution in England.

Just as important, that new paradigm of a stored ‘program’ of operating steps – which, when reversed, created a stored ‘memory’ of results – would eventually lead, by way of Herman Hollerith’s census machines in the late 19th century, to the computers of the 20th century. And the hearts of these tabulating, then computing, machines – first mechanical, then electro-mechanical switches, then vacuum tubes and transistors – would lead to the semiconductor integrated circuit and our digital age.

And human memory? The 20th century saw the greatest advances in our understanding of the brain in millennia. Psychoanalysis, behaviorism and psychoactive drug research drove enormous advances in brain science in the first decades of the century. Then, thanks largely to the miniaturization and data acquisition made possible by the new digital devices and the large-volume memory storage they demanded, research into how the brain works took off. That revolution continues to this day. Meanwhile, thanks to Moore’s Law, the pace of innovation in the digital world has raced ahead even faster, at an exponential pace. The 5-million-byte IBM RAMAC that required a specially equipped Boeing 707 to deliver has become a trillion-byte disk drive the size of a dime.

Add to this the rise of the Internet, which promises to put all of the world’s memories at the fingertips of every living person, and suddenly the two tracks of human and artificial memory seem to be converging, after all these centuries, at a shocking – some would say alarming – rate. New digital technologies are enabling researchers to look into the brain, map its neurons and watch as new knowledge is captured and stored. Inventor Ray Kurzweil talks of the impending ‘singularity,’ when we will map our brains into computers and gain immortality. Computer pioneer Gordon Bell is using computers, cameras and sensors to track his every act and thus become the most recorded human being who has ever lived. And other people, many of them having lost limbs or other organs, are quietly experimenting with man-machine interfaces to give themselves the equivalent of superpowers.

It all sounds either thrilling or terrifying, this destiny that seems to have been playing out for twenty thousand years or more. But it shouldn’t distract us from a more immediate responsibility: that of passing our memories from one generation to the next. We can still read Sumerian tablets, hieroglyph-covered papyrus fragments and vellum books that are centuries old. Today, we have accumulated, addressed and stored many times more of mankind’s memories than ever before. We are poised on the brink of a Great Convergence. But as our magnetic tape recordings begin to fade, our CDs and DVDs begin to skip, and our billions of microprocessors and magnetic disks risk erasure from some unpredictable, but likely, Carrington Event of the sun, we have to ask ourselves whether we are so busy looking to the future that we have forgotten the duty we owe to the past: to safely preserve the present.
