Tuesday, January 28, 2014

Seven thousand trillion floppy disks


I thence concluded that I was a substance whose whole essence or nature consists only in thinking, and which, that it may exist, has need of no place, nor is dependent on any material thing.

(René Descartes, Discourse on Method)


Twenty years ago, the freshly privatised British Telecom embarked upon a quest to end death. It might seem an unusual endeavour for a telecommunications company, unless you consider that Microsoft was hatching similar plans at this time, or you cast your mind back to Norbert Wiener’s prediction that it might be possible some day to send a human being through a telegraph wire. ‘Conceptually’ possible is how Wiener put it, so at BT they were now going to explore that concept.

The research, carried out for a trifling £40 million at the company’s Martlesham Heath Laboratories, centred on the hubristically named ‘Soul-Catcher chip’, a device designed to be neurally implanted on the visual cortex of the bearer in order to capture the entirety of his or her visual input. Once perfected and extended to the other senses, the chip could amass and store a record of the entire sensory experience of its host, and function as a perfect memory repository, to be replayed and searched at will. It is clear from their public pronouncements, however, that the scientists working on the project regarded memory as the essence of consciousness and personhood. The stored data, therefore, would be the same as the person. Here’s an item published in Earth Island Journal in the Fall of 1996, reproduced in its entirety:
UK -- The Liverpool Echo reports that British Telecom (BT) is working on the “Soul-Catcher”–a microchip device small enough to be implanted in the optic nerve and capable of capturing “a complete record of every thought and sensation experienced during an individual’s lifetime.” Dr. Chris Winter, head of BT’s artificial life team, declared that the Soul Catcher chip promised “immortality in the truest sense.” Winter suggested that police could use implanted chips “to relive an attack, rape or murder from the victim’s viewpoint to help catch the criminal responsible.” Each individual’s visual record would be transmitted to central computers for storage. Within 30 years, BT predicts, Big Brother could be watching – from inside your own head.

What’s most striking here – even more explicitly than in the case of Gordon Bell’s vision of the dullest life imaginable, lived forever – is the connection between immortality and a surveillance society described with all the tropes of the available dystopian literature. Nowadays this connection is framed a little less crudely or naively, but it survives in the efforts to master immortality by the likes of Google, whose overarching goal remains that of controlling and manipulating the majority of the world’s personal information. Your past is who you are and who you are is what the computer networks seek to acquire and reproduce with absolute fidelity, until it’s more you than the original. Hyper-text. Hyper-real.

Immortality as a corporate goal makes perfect sense in an industry and a culture fuelled by such messianic impulses. ‘The Net will save us,’ said Beppe Grillo some years ago, before articulating that pronoun, ‘us’, into a political force nine million votes strong, but it would be a very ordinary kind of salvation if it were limited to the economy, or the environment – or even to humanity itself. That promise must extend, at least for some, into personal salvation and a hyper-connected afterlife.

The eighties and nineties were great decades for the revival of Cartesian dualism, on which the idea of a digital afterlife rests. This is as much a rhetorical as a techno-scientific project, as is evidenced by the efforts to sell the Soul-Catcher chip to the consumers of reports about the outer edges of scientific research. Consider again Chris Winter’s grandiose line about ‘immortality in the truest sense’, or the pronouncement by Peter Cochrane – then Head of Research at BT – that the human body is a ‘carcass’, a mere ‘transport mechanism’ for the mind. But I am especially fond of the statement to The Ecologist by another scientist involved in the project, Ian Pearson, who explained that the idea of digitising human experience was based on a ‘solid calculation of how much data the brain copes with over a lifetime’, yielding the figure of ‘10 terabytes of data, equivalent to the storage capacity of 7,142,857,142,860,000 floppy disks’.

The idea that an unfathomably large pile of floppy disks should be regarded as a functional equivalent of the human mind must have sounded comical back then, too. Consider also that by this time (1996) the floppy disk was well on its way to becoming an obsolete technology. How secure would you feel if someone told you that the record of your existence were to be transferred onto seven thousand trillion of these bad boys?
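The ‘solid calculation’ itself is worth a second look. Here is a back-of-envelope check in Python – the 1.44 MB capacity of a high-density floppy is my assumption, not the article’s – which suggests that ten terabytes would in fact fit on roughly seven million disks, and that the quoted disk count corresponds to something closer to ten zettabytes:

```python
# Back-of-envelope check of the BT figure. Assumptions are mine: a 3.5"
# high-density floppy holds 1.44 MB, and a terabyte is 10**12 bytes.

FLOPPY_BYTES = 1.44 * 10**6    # 1.44 MB per disk
LIFETIME_BYTES = 10 * 10**12   # the quoted '10 terabytes'

disks_needed = LIFETIME_BYTES / FLOPPY_BYTES
print(f"{disks_needed:,.0f} disks")  # ~6,944,444: seven million, not seven thousand trillion

# Working backwards from the disk count quoted in The Ecologist:
QUOTED_DISKS = 7_142_857_142_860_000
print(f"{QUOTED_DISKS * FLOPPY_BYTES:.2e} bytes")  # ~1.03e+22, i.e. about ten zettabytes
```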


Nothing is ever a figure of speech. Nowadays we talk of the cloud as if it, too, weren’t made of disks stored in large stacks in vast data centres, here on Earth, with power bills and outages of both the scheduled and the unscheduled kind. You wouldn’t be safe. You wouldn’t be immortal. Not even in the cloud.

But perhaps the point of that wonderful image – of a tower of colourful plastic disks climbing into the heavens – was to distract you from the premise: that your mind is data which can be quantified, therefore organised, therefore acquired and stored. That is the essence of the belief in the digital age and in its powers of salvation. If you can buy that, you are sold.

Monday, January 20, 2014

Mega Memory


We moderns who have no memories at all…
(Frances Yates)


Kevin Trudeau is in prison these days. Two separate judges have found him guilty of criminal contempt for refusing to reveal where he has hidden the profits of his commercial empire in order not to pay the $38 million he owes to the US Federal Government for making false claims concerning one of his books (The Weight-Loss Cure ‘They’ Don’t Want You to Know About).


In 2010, Trudeau left a Federal court looking like Al Capone and, like Capone, he was nailed because he couldn’t account for the money he had spent. Cited in a judgment handed down last September were
two recent $180 Vidal Sassoon haircuts; a $900 cigar bill; $1,000 in high-end meats; a $900 liquor spree, and an $800 grocery bill from Whole Foods.
Perhaps he just forgot that he was supposed to be poor.


It wasn’t Mega Memory that got Trudeau into trouble, although the Federal Trade Commission at one point tried to stop him from claiming that it would enable people to develop a photographic memory (later versions of the programme still made the claim, but were careful to spell out that ‘individual results may vary’). It was Mega Memory, however, that made Trudeau famous. The programme made its appearance in the early nineties, as an audiocassette course read by the author and sold under the imprints of Simon & Schuster, William Morrow and HarperCollins. I first saw the infomercials on New Zealand television in the late nineties. For a while, they seemed to be everywhere.

One claim that Trudeau seemed particularly fond of was this: ‘If you have a great memory, people think you’re smart.’ Apparently someone had said this to him to explain why they had taken the trouble of learning his system. People will think you’re smart. In late industrial societies aspiring to the title of post-industrial, it’s a sentiment that taps into a deep well of anxieties. Giving the appearance of intelligence can make the difference between belonging and not belonging in a world that supposedly trades in information and knowledge, as opposed to labour.

Yet Mega Memory is ancient in its design. Building on the work carried out by Michael Van Masters, whom he met in the early eighties when he was a car salesman, Trudeau steers clear of the prescriptions of most modern memory systems (including the father of them all, Bruno Furst’s), with their reliance on patterns of association, and opts instead for an architecture of vivid imaginings which has its distant origins in the practice of Latin rhetoricians.

Unlike the anonymous author of the first century BC textbook Ad Herennium, Trudeau doesn’t quite instruct his students to build vast and elaborate memory palaces in their heads, and fill them with symbols for ideas and symbols for words, but the basic prescriptions are the same. Consider for instance Mega Memory’s section on pegging, which relies on a very rudimentary series of universal memory places based on the human body (toe, knee, thigh muscle, rear end, love handles, shoulders, collarbone, face, top of the head). Faced with the task of memorising a grocery list, as in the example used in the tapes, you will attach each item to a different locus (in the Latin parlance) or peg (in Trudeau’s), taking care to do it in as vivid and memorable a way as possible. You won’t just smear bacon on your buttocks, then, but nail it or staple it to them. You won’t imagine that you are wearing a hat made of bananas – it would be too trite an image – but that a giant banana is balanced on the top of your head, always on the point of tipping over.
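For the programmatically minded, the mechanics of pegging reduce to pairing a list of items, in order, with a fixed sequence of loci. A minimal sketch in Python – the peg list follows the tapes, while the sample items and the phrasing of the prompts are my own invention:

```python
# A toy rendering of Trudeau's body-peg system: each item to be memorised
# is attached, in order, to a fixed locus on the body.

BODY_PEGS = ["toe", "knee", "thigh muscle", "rear end", "love handles",
             "shoulders", "collarbone", "face", "top of the head"]

def peg(items):
    """Pair each item with a body peg, in order; this peg list holds nine items."""
    if len(items) > len(BODY_PEGS):
        raise ValueError("more items than pegs")
    return list(zip(BODY_PEGS, items))

groceries = ["bacon", "bananas", "milk"]  # hypothetical shopping list
for locus, item in peg(groceries):
    print(f"Picture the {item} fixed, as vividly as you can, to your {locus}.")
```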

Such images are in fact a diluted, impoverished version of the imagines agentes that the students of rhetoric were taught to practise in Ad Herennium if they wished to master the art of memory. However, the key difference is that the modern version lacks the broader philosophical context of the original. For the author of Ad Herennium, and more explicitly for Cicero, memory was a faculty inextricably linked to imagination and thought. In the Platonic view (to which Cicero subscribed), it was the means of apprehending the most fundamental human truths, which exist in the mind innately but are ‘forgotten’ as a result of our exposure to the messy world of sensory experience. These ideas survived in the Middle Ages and the Renaissance both in Neo-Platonic philosophy and through the teachings of St Augustine and of the theologians who made memory a part of the cardinal virtue of prudence.

For the Latin orator, declaiming a speech without referring to written notes wasn’t just a point of pride, or an aid to the quality of the performance. It signified that the speech had a greater truth content because it was remembered. The great English scholar of the ancient art of memory, Frances Yates, constantly marvelled at the lengths its practitioners went to; at their single-minded devotion to what could clearly not be described as mere mnemotechnics, but only as an art in the fuller sense of the word. Her book also presents a gallery of some of the most famous practitioners of this art. People who were said to be able to recall lists of thousands of names – like Cyrus, who knew the name of every soldier in his army – or who, like Metrodorus of Scepsis, could (as reported by Pliny) ‘repeat what he heard in the very same words’. People who can only make us smile, thinking of how much lower we set our sights: to learn a memory system so that we can remember a list of groceries, or the names of the people we’re introduced to at a party or in a business meeting; to improve our memory not in order to be closer to the truth, but so that ‘people will think we’re smart’.

By a delicious twist of irony, it took but one night at the Metropolitan Correctional Center in Chicago for Trudeau to suddenly remember the existence of a Swiss bank account in his name. The man who had spent a decade touting his superior powers of memory on television is now reduced to the role of the crook who can’t remember where he stashed the money. It’s a fitting end for Mega Memory, which took an ancient, culture-sustaining set of prescriptions – we may compare them to the tracing of one’s whakapapa – and reduced them to little more than party tricks. As if even the most perfect memory, once emptied of its validation and meaning, couldn’t but revert to amnesia.


Monday, January 13, 2014

Google wants to live forever

Originally published at Overland


It used to be Microsoft that had the inside track on immortality. After Bill Gates set out his vision of a documented life, one of the company’s senior executives – a man by the name of Gordon Bell – was charged with turning that vision into reality. To this end, Bell took to wearing an array of recording devices and uploading the resulting data into a software suite he dubbed MyLifeBits. But recording every moment of one’s conscious life – an idea that has since become commonplace as one of the ways in which people think of social media – was not all that the executive had in mind. In a paper he co-authored in 2000 for Microsoft Research, Bell outlined how that idea fitted into the larger project of ‘digital immortality’.
Current technology can extend corporal life for a few decades. Both one-way and two-way immortality require part of a person to be converted to information (Cyberized), and stored in a more durable media. We believe that two-way immortality where one’s experiences are digitally preserved, and which then take on a life of their own will be possible within this century.

The idea has a long and illustrious pedigree. It may have found its first expression when Norbert Wiener declared, in 1964, that it would be ‘conceptually possible for a human being to be sent over a telegraph line’. From the early 1980s onwards, the idea was taken on and expanded upon in quick succession by Vernor Vinge, Hans Moravec, Marvin Minsky, Bell and Gates, and many others. It became a trope of science fiction, most famously in Gibson’s Neuromancer (remember the Dixie Flatline?) and Vinge’s own novella True Names. But, in fact, it has always oscillated between science fiction, science proper, futurology and the discourse of popular culture, all of which are enmeshed in works like Moravec’s highly influential Mind Children.


There is at least one big name I left out of that list, and it is Ray Kurzweil’s. While the first person to christen the coming technological singularity was Vernor Vinge, it was Kurzweil who turned it into the pervasive meme it is today, notably in his 2005 book The Singularity Is Near: When Humans Transcend Biology. Simply put, the idea is that at some point in the near future technological progress will reach a critical point at which machines will become capable of greater complexity than the human mind and senses. What will happen then is the source of much breathless speculation, but the main element of all of these visions is that it will become possible to transfer human consciousness onto a whole new kind of hardware, thereby extending its (our) lifespan indefinitely.

A highly successful inventor, Kurzweil is the most engaging, witty and human-sounding of the proponents of the singularity (although that’s not necessarily saying much). He is also notable for his desire not just to outrun death himself, but also to bring back to life his father, who died in 1970 of a heart attack. To this end he is not collecting genetic material but rather assembling an archive of documents and testimonies concerning his father, underscoring how in this iteration of the Frankenstein myth the goal is no longer to bring back to life the flesh but rather to simulate, or copy, the mind. However, Kurzweil is also in the business of selling ‘longevity products’ (chiefly in the area of dietary supplements), and hopes that in fifteen years – that is to say, by the time he turns 80 – medical technology will be able to prolong life by one year every year. His desire to live forever, in other words, comes in a mix of forms, both old and new.

As the Microsoft story suggests, American technology companies have been involved in this project in various forms since its inception. Immortality is one hell of a mission statement, and if your current one is ‘Don’t be evil’, then your metaphysics is already primed to accommodate it. So earlier this year Ray Kurzweil was hired by Google. They gave him the old-fashioned title of director of engineering, but an interview with the Wall Street Journal suggests that Kurzweil was hired to work on what has become his full-time obsession, and that what the company had to offer him above all was direct access to the cloud.

A Google data centre, or a vision of Heaven on Earth
Then last July a rumour took shape, in the form of a blog post on ZDNet: that what Kurzweil could represent for Google is a powerful pull to recruit the top software engineers in what is one of the most intensely competitive job markets on the planet. Think about it: what if one company could offer not the best salary or benefits, but the first shot at the indefinite extension of life – the ultimate health plan? It doesn’t matter how likely you think this might be. Silicon Valley has been steeped in these prognostications for decades. To them these things are real. Kurzweil himself has revealed to the Journal that he takes ‘more than 150 pills and supplements a day’. Not to live longer, but to still be alive when the End of Death comes.

It’s a peculiar vision to entertain on a dying planet. And of course, as in all religions, there is the question of who will be chosen. If ZDNet is right, Google may have just taken a step towards becoming a cult whose acolytes get to be first in line for the new salvation. Thence, as with cryonics – if only cryonics worked – immortality would most likely be the preserve of those who can afford to purchase it, pace Kurzweil’s protestations that the technology will be as widely available as cellphones are today. But it’s not just a question of money in the present, but also of inequality over time – of fortunes that are no longer to be inherited, but that will remain in the possession of the eternally rich.

The 1%, forever. Who wouldn’t rise up against that?



Normal programming will resume next week, with a greater focus on memory than has been the case for the past year or so. On the subject of my long-term research concerns, and what used to be the overt topic of this blog, I've had a piece published in the recently relaunched New Humanist (to which you can subscribe) and picked up by The Guardian (with the predictable, but not unwelcome, volume of comments one expects to find therein).