Persephone (Roman Proserpine) is the Greek goddess associated with the seasonal death and rebirth of Nature. Daughter of Demeter (Roman Ceres), she was abducted by Hades (Roman Dis), the god of the Underworld, while gathering flowers, as immortalised in Milton’s beautiful lines:

“…nor that fair field
Of Enna, where Proserpine gathering flowers,
Herself a fairer flower, by gloomy Dis
Was gathered…”
(Paradise Lost, Book IV, v. 268-272)
Demeter wandered about distraught looking for her vanished daughter, with the result that the flowers faded, the plants stopped growing and humanity was in danger of extinction from famine. Eventually, Persephone was traced to the Underworld and a modus vivendi was reached: Persephone was to stay for four months of the year in the Underworld and return to Earth for Spring and Summer — the Greeks do not seem to have bothered with autumn. In some versions of the legend, Persephone is tricked into remaining at least part of the year in the Underworld because she accepts some pomegranate seeds at the hands of Hades and is thus doomed by the decree that everyone who partakes of food in the Underworld must stay there. The pomegranate thus gained its reputation of being a “dangerous fruit” and as such is celebrated in the song (music by John Baird, CD Aquarius) which appears in my play The Pomegranate Seeds (Samuel French, 2000):

“You the fruit of Dis have taken
You have eaten of the dangerous tree,
Of the pomegranate taken, taken,
Your mother calls, your mother weeps, Persephone.

Not for you the blaze of summertime,
Sunlit meadows you will never see,
Nevermore the blaze of summertime, summertime,
All that is gone, all that is gone, Persephone.

Now you have crossed to the land of despair,
To this shadowy realm,
Never to return.

You the fruit of Dis have taken,
In these regions you must always stay,

In the twilight of the Underworld, Underworld,
Those that you love, those that you love are far away.”  

Persephone, or Proserpine, has been a frequent subject in Western poetry and painting, though, somewhat surprisingly, as far as I know no opera has ever been written about her — I could easily imagine Mozart or Gluck writing one.

The figure of Persephone was so central to the poetess Anna de Noailles (see website) that Catherine Perry entitled her recent interpretation of this remarkable Belle Époque French poet Persephone Unbound. The figure of Persephone doubtless resonated with Anna de Noailles because of its ambivalence: on the one hand Persephone’s association with flowers, youth and a carefree existence and, on the other, with violence, sorrow, separation, darkness and death. Persephone belonged to two worlds, to the light and to the dark, and may thus be seen to symbolize the “willingness… to encounter the fateful aspects of existence” (Catherine Perry): this aspect of the myth would have appealed far more to Anna de Noailles, as a follower of Nietzsche, than the more traditional fertility-goddess persona. Like Orpheus, Persephone has a shamanic role as ‘crosser of boundaries’, since, like all humans, she crosses the fateful river of Lethe but, unlike ordinary mortals, she is eventually granted the power to return to Earth at will. There is, interestingly, a late version of the Greek myth in which Persephone, having belatedly fallen in love with her kidnapper, Dis (a strange anticipation of the so-called Stockholm Syndrome), declines to return to Earth and only agrees to do so, most reluctantly, by order of Zeus.

The positive side of the myth of Persephone as personifying the rebirth of Nature and return of Spring has largely been obscured in Romantic and post-Romantic literature by her role as Queen of the Underworld. As such she is a formidable figure since, along with her husband Dis, she is Judge of Souls, deciding impartially whether they deserve reward or punishment. Rimbaud alludes to her in Une Saison en Enfer, “Elle ne finira donc point cette goule reine de millions d’âmes et de corps morts et qui seront jugés!” (‘Will she never come to an end, this ghoul, queen of millions of souls and dead bodies which will be judged!’) which, given the mention of “le port de la misère, la cité énorme” (‘the harbour of misery, the enormous city’) two lines earlier, suggests that the poet identifies Persephone with Queen Victoria presiding over the hideous ‘City of Night’ that is nineteenth-century London in Rimbaud’s eyes!

George Gomori sees her as the inevitable figure of destiny that awaits us at the end of the line:

“There are no saving miracles —
Nor can remorse win you grace.
There’s no one to deflect the track
Of the knives whistling straight for you.
Persephone, standing at your back,
Proffers her hand. Hold yours out too.”

However, for Swinburne, a vastly underrated poet incidentally, she represents not so much Justice or Destiny as the (welcome) negation of everything vital:

“She waits for each and other,
She waits for all men born;
Forgets the earth her mother,
The life of fruits and corn;
And spring and seed and swallow
Take wing for her and follow
Where summer song rings hollow
And flowers are put to scorn.”

Swinburne makes it quite clear what he is promoting : a Buddhistic total negation of the entire life-process in favour of the permanent quiescence of Nirvana.

“From too much love of living,
From hope and fear set free,
We thank with brief thanksgiving
Whatever gods may be
That no life lives for ever;
That dead men rise up never;
That even the weariest river
Winds somewhere safe to sea.

Then star nor sun shall waken,
Nor any change of light:
Nor sound of waters shaken,
Nor any sound or sight;
Nor wintry leaves nor vernal,
Nor days nor things diurnal;
Only the sleep eternal
In an eternal night.”

Vyvian Grey takes a more nuanced position in her charming Proserpine: although she presents the Underworld in a surprisingly attractive light and asks her mother not to ‘weep for me’, she does not entirely exclude the idea of an eventual return.


Here there are no days,
Only a night —
A sort of shimmering half-light
That no sun ever dazzles through;
And the sky — not blue,
No, never blue,
As in the world that once I knew,
But something like a skein of softest grey;
A blossom of white.  

Still are the rivers, are the lakes
Of this quiet land,
Whose silver waters offer up no sound,
Where I have never seen
The summer-coloured sails of mighty ships,
Or their distant gleam;
Only shrouded barges
Gliding soundlessly along through silent mists,
Each like a dream
And by lone oarsmen manned.

 Yet flowers grow here,
For I have plucked such quantities
Of sleepy-headed poppies,
Swarthy hellebores
While this weird countryside I roam.
True, they are not the primroses,
The violets of home,
They are not the roses:
Even so, they have their beauty,
A loveliness all of their own.

 And do not think that music never plays
Where I have come,
For music plays always: its hum
Is heard, not in raucous revelries,
But between the cypress trees
Where souls of those departed sit and chant
Their melodies
In voices like a sigh,
So tenderly that I must dwell to hear them
Whenever I pass by.

 O mother, do not weep for me,
Though in this world I must abide:
Within the year I shall be yours again
And earth will gladden at the sight;
Until then, grieve not for me,
For here I shall remain
With my dark husband by my side,
Although the night is long;
Mother, it is a pleasant night.

Roger Hunt Carroll, for his part, in the Envoi of his Hymns for Persephone, considers that mankind has ‘lost’ not Persephone herself but the ‘vision’ of Persephone:

You are not lost, no, not you;
never could you be robbed from the orchard of earth,
not from flowers in the fields, the high grasses,
no, nor from soft shrubs that sway along the riverside.

It is we who have misplaced you,
dear and most-loved daughter of the world.
We loosened the cords of your summer gown,
and in our vision you fell in the evening light,
your hair shaded by leaves, your face behind a mask, hidden from our careless sight.

Sebastian Hayes



Benjamin Lee Whorf seems to have been the first person to point out how much English, and other European languages, are ‘thing-languages’, ‘object-languages’. By far the most important part of speech is the noun and, though it is now accepted that not all sentences are of the subject-predicate form once regarded as universal, quite a lot are. We have a person or thing, the grammatical subject, and the rest of the sentence tells us something about this thing, for example localizes it (‘The cat was sitting on the mat’), or enumerates some property possessed by the ‘thing’ in question (‘The cover of the book is red’). And if we have an active verb, we normally have an agent doing the acting, a person or thing.
There’s nothing ‘wrong’ with such a linguistic structure, of course, but we are so used to it we tend to assume it’s perfectly  reasonable and irreplaceable by any other basic structure. However, as Whorf points out, it is not just applied to sentences of the type ‘A is such-and-such’, where it is appropriate, but also to sentences where it makes little sense. “We are constantly reading into nature fictional acting entities, simply because our verbs must have substantives. We have to say “It flashed” or “A light flashed”, setting up an actor to perform what we call an action, “to flash”. Yet the flashing and the light are one and the same!” (from Whorf, Language, Thought and Reality p. 242, M.I.T. edition).
The quantum physicist and philosopher David Bohm, seemingly unaware of Whorf’s prior work, makes exactly the same point: “Consider the sentence ‘It is raining.’ Where is the ‘It’ that would, according to the sentence, be ‘the rainer that is doing the raining’? Clearly, it is more accurate to say: ‘Rain is going on’” (from Bohm, Wholeness and the Implicate Order p. 29).

Whorf and Bohm clearly have a point here and the general hostility of the academic world to Whorf’s ‘Theory of Linguistic Relativity’ is doubtless in part due to their irritation at an outsider ─ Whorf trained as a chemical engineer ─ pointing out the obvious. Moreover, one would expect the syntax and vocabulary of languages to tell you something about the general conceptions, day to day concerns and modes of thought of the people whose language it is. After all, people talk about what interests them, and languages typically evolve to make communication about common interests more efficient (Note 1).
Even if this is granted for the sake of argument, one might still object that the subject-predicate structure and the role of nouns in English simply reflect ‘how things are’ ─ and there is only ‘one way for things to be’. Since ‘reality’ consists essentially of ‘things’, and relations between these things, isn’t it inevitable that nouns should have pride of place? Well, maybe, but maybe not. And Whorf, one of the very first ‘Westerners’ to actually speak various languages of the American Indians, was in a good position to question what practically everyone else had so far taken for granted. Amerindian native languages certainly are very different from any European or even Indo-European language. For a start, “Nearly all American Indian languages are either distinctly ‘polysynthetic’ or have a tendency to be so. At the risk of oversimplification, polysynthetic languages can be thought of as consisting of words that in European languages would occupy whole sentences” (from Lord, Comparative Linguistics). Out-and-out literal translations from other European languages into English may sound clunky but are perfectly comprehensible; literal translations from Shawnee or Nitinat sound not just awkward but half crazy. Whorf writes, “We might ape such a compound sentence in English thus: ‘There is one who is a man who is yonder who does running which traverses-it which is a street which elongates’ … the proper translation [being] ‘A man yonder is running down the long street’.” Whorf adds, “Of such a polysynthetic tongue it is sometimes said that all the words are verbs, or again that all the words are nouns with verb-forming elements added. Actually the terms verb and noun in such a language [as Nitinat] are meaningless.”
Secondly, approaching things from the physical/conceptual side, there can be no doubt that native American tribal societies, untouched as they were by Christianity or Newtonian physics, really did have very different conceptions about the world from those of the incoming European settlers, which is one reason why this meeting of the cultures was so catastrophic. Sapir (Whorf’s first teacher) and Whorf believed that this double dissimilarity was not an accident and that the structure of native American languages indeed reflected a very different ‘view of the world’.
So what, in a nutshell, were these linguistic and ‘metaphysical’ differences? According to Whorf, most Amerindian languages are ‘verb-based’ rather than ‘noun-based’ ─ “Most metaphysical words in Hopi are verbs, not nouns as in European languages”. Worse still, “When we come to Nootka, the sentence without subject or predicate is the only type….Nootka has no parts of speech”. Why were they ‘verb-based’, or at any rate not ‘noun-based’? Because, Whorf argues, the Amerindian world-view was not ‘thing-based’ or ‘object-based’ but ‘event-based’. “The SAE (Standard Average European) microcosm has analysed reality largely in terms of what it calls ‘things’ (bodies and quasibodies) plus modes of extensional but formless existence that it calls ’substances’ or ‘matter’. The Hopi microcosm seems to have analysed reality largely in terms of EVENTS” (Whorf, op. cit. p. 147).
Again, there seems little to quarrel with in Whorf’s claim that the SAE world-view, which we can trace right back to Greek atomism for its physics, really was ‘thing-based’ ─ “Nothing exists except atoms and void” as Democritus put it. The subsequent, more sophisticated Newtonian world-view nonetheless reduces to a world consisting of ‘hard, massy’, indestructible atoms colliding with each other and influencing each other from afar through universal attraction. Whether the world of the native American Indians really was ‘event-based’ in the way Whorf imagined it to be, few of us today are qualified to say ─ since hardly anyone speaks Hopi any more and even the most remote Amerindian tribes have long since ceased to be independent cultural entities. In any case, the complex metaphysics/physics of the Hopi as interpreted by Whorf is in itself interesting and original enough to be well worth investigating further.
To return to language. Assuming for the moment there is some truth in the Sapir-Whorf theory that language structure reflects underlying physical and metaphysical preconceptions, what sort of structures would one expect an ‘event-language’ to have? Bohm asked himself this but sensibly concluded that “to invent a whole new language implying a radically different structure of thought is… not practicable”. I asked myself a similar question when, in my unfinished SF novel The Web of Aoullnnia, I tried to rough out the principles underlying ‘Lenwhil Katylin’, a future language invented by the Sarlang, the first of the Parthenogenic types that dominate Sarwhirlia (the future Earth).
For his part, Bohm proposes to introduce, “provisionally and experimentally”, a new mode into English that he calls the rheomode (‘rheo’ comes from the Greek for ‘to flow’). This mode is meant to signal and reflect the “movement of growth, development and evolution of living things” in accordance with Bohm’s ‘holistic’ philosophy. Whorf, meanwhile, finds most of what Bohm is looking for already present in the Hopi language, which typically emphasizes ‘process’ and continuity rather than focusing on specific objects and/or moments of time. Although both these thinkers were looking for a ‘verb-based’ language, they were also firm believers in continuity and the ‘field’ concept in physics (as opposed to the particle concept). My preferences, or prejudices if you like, take me in the opposite direction, towards a physics and a language that reflect and represent a ‘universe’ made up of staccato events that never last long enough to become ‘things’ and never overlap enough with their successor events to become bona fide processes.
Thus, in Lenwhil Katylin, a language deliberately concocted to reflect the Sarlang world-view, the verb (for want of a better term) is the pivot of every communication and refers to an event of some kind. In many cases there is no need for  a grammatical subject at all ─ events simply happen, or rather ‘become occurrent’, like the ‘lightning flash’ mentioned by Whorf ─ in the Sarlang world-view, all events are, at bottom,  ‘lightning flashes’. The rest of a typical LK sentence provides the ‘environment’ or ‘localization’ of the central event, e.g. for a ‘lightning-flash’ the equivalent of our ‘sky’, and also gives the causal origin of the event (if one exists). We have thus a basic structure Event/Localization/Origin ─ although in many cases the ‘localization’ and ‘origin’ might well be what for us is one and the same entity.
As to the central events themselves, the Katylin language applies an  inflection to show whether the event is ‘occurrent’ or, alternatively, ‘non-occurrent’. One might compare the inflection with Bohm’s ‘is going on’ in his formulation “Rain is going on” ― in LK we just get Irhil~ where ~ signifies “is occurrent”. Being ‘occurrent’ means that an event occupies a definite location on the Event Locality and has demonstrable physical consequences, i.e. brings into existence at least one other event. Such an event is what we would perhaps call an ‘objective’ event such as a blow with a hammer, as opposed to a subjective one like a wish to be somewhere else (which does not get you there). But the category ‘non-occurrent’ is much larger than our ‘subjective’ since it covers all ‘general’ entities, indeed everything that is not specific and precisely localized in space and time (as we would put it). On the other hand, the Sarlang consider a mental event that is infused with deep emotion, such as a flash of hatred or empathy, to be ‘occurrent’ even if it is completely private since, they would argue, such events can have observable physical consequences. This is somewhat similar to the Buddhist distinction between ‘karmic’ and ‘non-karmic’ events: the first have consequences (‘karma’ means ‘action’ or ‘activity’) while the second do not.
After the ‘occurrent/non-occurrent’ dichotomy, the most important category in Lenwhil Katylin is discontinuity/continuity. Although the Sarlang believe that, in the last analysis, all events are a succession of point-like ‘ultimate events’ (the dharma(s) of Hinayana Buddhism), they nonetheless distinguish between ‘strike-events’ such as a blow and ‘extend-events’ such as a ‘walk’, a ‘run’ and so on. Suffixes or inflections make it clear, for example, whether the equivalent of the verb ‘to look’ means a single glance or an extended survey. And the suffix –y or –yia turns a ‘strike-event’ into an ‘extend-event’  when both cases are possible. Moreover, ‘spread-out’ verbs themselves fall into two classes, those that are repetitions of a selfsame ‘strike-event’ and those that contain dissimilar ‘strike-events’. The monotonous beating of a drum is, for example, a ‘strike spread-event’ while even a single note played on a violin is classed as a ‘spread strike-event’ because of the overtones that are immediately brought into play.
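Since Lenwhil Katylin exists only in outline, the following is no more than a toy model, in Python, of the structure just described (the Event/Localization/Origin pivot, the ~ ‘occurrent’ inflection and the -yia ‘extend’ suffix), with all representational choices my own:

    # A minimal sketch, not canonical LK grammar: an utterance pivots on an
    # event, optionally localized and given a causal origin; no grammatical
    # subject is required.
    from dataclasses import dataclass
    from enum import Enum
    from typing import Optional

    class Shape(Enum):
        STRIKE = "strike"   # punctual event, e.g. a blow
        EXTEND = "extend"   # spread-out event, e.g. a walk (suffix -yia)

    @dataclass
    class LKSentence:
        event: str                           # the central event, e.g. "irhil"
        occurrent: bool                      # '~' inflection: definitely located
        shape: Shape = Shape.STRIKE
        localization: Optional[str] = None   # the event's 'environment'
        origin: Optional[str] = None         # causal origin, if any

        def render(self) -> str:
            core = self.event
            if self.shape is Shape.EXTEND:
                core += "-yia"
            if self.occurrent:
                core += "~"
            parts = [core]
            if self.localization:
                parts.append("loc:" + self.localization)
            if self.origin:
                parts.append("orig:" + self.origin)
            return " ".join(parts)

    # 'A lightning flash (occurrent) in the sky' -- no subject, only the event.
    print(LKSentence(event="irhil", occurrent=True, localization="sky").render())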
A further linguistic category distinguishes between events which are caused by events of the same type and events brought about by events of an altogether different type. In particular, a physical event brought about by a physical event is sharply distinguished from a physical event brought about by a mental or emotional event: the latter case exhibits ‘cause-effect-dissimilarity’ and is usually, though not invariably, signalled by the suffix -ez. This linguistic distinction has its origin in the division of perceived reality into what is termed ‘the Manifest Occurrent’, very roughly the equivalent of our objective physical universe, and the Manifest Non-Occurrent, which consists of wishes, dreams, desires, myths, legends, archetypes, indeed the whole gamut of mental and internal emotional occurrences. Nonetheless, these two domains are not absolutely independent and the Sarlang themselves claimed to have developed a technique (known as witr-conseil) that transferred whole complexes of events from the Manifest Non-Occurrent into the Manifest Occurrent and, more rarely, in the opposite direction. Whatever the truth of this claim, the technique, supposing it ever existed, was lost for ever when the Sarlang, reaching the end of their term, committed mass extinction.

SH 13/1/18


Note 1 The standard argument against the ‘Linguistic Relativity Theory’ is that, if it were correct, translation would be impossible, which is not the case. This argument carries some weight, but we must remember that almost all books successfully translated into English come from societies which share the same general religious and philosophic background and whose languages employ similar grammatical structures. Few books have been translated from so-called ‘primitive’ societies because such societies had a predominantly oral culture, while Biblical translators ‘going the other way’ have typically found it extremely difficult to get their message across when communicating with animists.
There may be something in Whorf’s claim that the Hopi world-view was closer to the modern ‘energy field’ paradigm than to the ‘force and particle’ paradigm of classical physics. ‘Energy’ (a term never used by Newton) is essentially a ‘potential’ entity since it refers to what an object ‘possesses within itself’, not what it is actually doing at any particular moment. Generally speaking, primitive societies were quite happy with ‘potential’ concepts, with the idea of a ‘latent force’ locked up within an object but not directly accessible to the five senses. It is in fact possible to formulate mechanics strictly in energy terms (via the Hamiltonian) rather than on the basis of Newton’s laws of motion, but no one ever learned mechanics this way, and doubtless never will, because it requires such complicated mathematics. It is hard to imagine a society committed from the start to an ‘energy’ viewpoint on the world ever being able to develop an adequate symbolic system to flesh out such an intuition.

SH

Catherine Pozzi: a modern mystic

Catherine Pozzi is one of those rare individuals who inhabit the strange hinterland between sensual and ‘spiritual’ ecstasy and steadfastly refuse to renounce either territory for the sake of the other. In a letter to Valéry she speaks of “seeing in my mind’s eye a sort of non-human paradise, made of a kind of transcendent material…. absolute solitude; the only possible inhabitants you and I”.
Descartes kick-started modern philosophy with his famous formula, Cogito ergo sum, ‘I think, therefore I am’. There has never really been a philosophical movement that takes first-hand physical sensation as its starting point ― empiricism only concerns itself with ‘sense-data’ and dismisses personal experience as ‘anecdotal’. In her more extravagant moments, Catherine Pozzi ― or Karin as she liked to be called ― viewed herself as a prophet ushering in an era when the life of the senses, science and religion would fuse. This is the theme of Peau d’Âme (‘Skin of the Soul’), a rambling would-be manifesto based on the premiss “JE-SENS-DONC-JE-SUIS” (‘I feel, therefore I am’). She adds the curious comment, “Ce n’est pas la pensée, c’est le sentir qui a besoin de JE” (‘It is not thought but feeling that requires an “I”’).
In her later years, Karin undertook serious biological and physical studies in an attempt to formulate a new theory of sensation in part based on the ideas of Weber. As she herself admits, she failed in this but one nonetheless finds striking anticipations of Dr. Sheldrake’s contemporary theory of ‘morphic resonance’. For, according to Karin, a single sensation, while being unique, somehow recapitulates all previous sensations of the same type and makes possible further repetitions in the future ― exactly Sheldrake’s idea.
What of that sequence of sensations, her life? Catherine Pozzi (1882-1934) was the daughter of a Parisian surgeon, Samuel Pozzi, while her mother maintained a salon frequented by Sarah Bernhardt, Colette and Proust. Karin started writing a Journal at the age of ten and kept it up for most of her life. She was a proto-feminist and adolescent ‘rebel without a cause’ long before this became fashionable: indeed she would have been much more at home in California in the Sixties than in Paris during the Belle Époque.
The big question was how to put into practice her philosophy of mystical sensualism without prostrating herself before a man. One possibility was to seek a ‘kindred spirit’ rather than a lover; her adolescent Journal celebrates her passionate friendship with a young American girl who died a year after their first meeting on the ‘Day of the Holy Spirit’ (Pentecost), a timing that Catherine found significant. Love, kindred spirit, illness, death: these four strands were henceforth to be forever intertwined. Her eventual marriage to a stockbroker with some literary pretensions barely survived the honeymoon, though it did produce a son. Three years later Karin was diagnosed with tuberculosis and spent much of the rest of her life undergoing cures and abusing prescribed mood-changing drugs. During WWI she met a young aviator, André Fernet, who firmly believed that ‘true love’ should be strictly Platonic. His death in action in 1916 (which Catherine claimed to have foreseen in a dream) was as much a fitting consummation to their love as it was an interruption.
In 1920 Karin embarked on a tempestuous affair with Paul Valéry, a married man with a family and a poet much less gifted than herself. She eventually disclosed the relationship to Valéry’s wife, and henceforth the doors of Parisian society were firmly closed to this latter-day Anna Karenina. But she no longer cared.
In her lifetime Karin published only one or two short articles in magazines and (under the name C.K.) Agnès, a fictionalized account of her own adolescent crises. It is thus on the Six Poems that her reputation must rest, and it is to be regretted that the NRF Gallimard edition of her works has them in the wrong order. Karin wrote in her Journal on 6 November 1934, “J’ai écrit VALE, AVE, MAYA, NOVA, SCOPOLAMINE, NYX. Je voudrais qu’on en fasse une plaquette” (‘I have written VALE, AVE, MAYA, NOVA, SCOPOLAMINE, NYX. I would like them made into a slim volume’).

These intense, concise poems remind one of the ‘Stations of the Cross’, marking as they do the stages in an agonising spiritual journey, or perhaps resting points on the pathway that candidates for initiation followed at Eleusis. They could also be viewed as snapshots of a substance undergoing successive changes of state, the substance being, as it happens, a human being — and I fancy that Karin would have approved of this analogy. According to Karin’s ‘chemistry of the soul’, the essential elements of her current personality have already existed in previous reincarnations (‘MAYA’), will somehow persist after death albeit momentarily dissociated from each other (‘AVE’) and will eventually all come together again in a future time (‘NOVA’).
VALE (‘Farewell’) shows the pilgrim looking back at the old life she is now leaving for ever. She starts by lamenting her lover’s betrayal not so much of herself as of their shared life. But she turns the tables on him, as it were, by absorbing the high points of the experience into her body (not mind), so all has not been lost after all:
“Il [cet amour] est mon corps et sera mon partage / Après mourir”
(‘This love is my body and will remain my portion after death’).
In this way, what is worth remembering remains with her for ever duly integrated into her inner self.
AVE (‘Hail’) begins the sequence proper. It is a passionate invocation not to a real person but to a higher being — one can imagine Psyche writing such a poem between visits from the god Eros, while the tone also recalls Saint Theresa of Avila addressing Christ. For the being is at once beloved, guide and controller of her destiny: he will be responsible for her rebirth even though she will first be broken entirely into pieces (‘You will remake my name and image out of the thousand bodies dispersed by time’). The author intimates that, for a while, she will cease to exist as an individual, will be “sans nom et sans visage” (‘nameless and faceless’), but will be given a new ‘name and face’ which is yet the same, since underlying these transformations is a “vive unité” (‘living unity’).
The tone of this poem is rapt, ecstatic, and it ends by invoking “Cœur de l’esprit, O centre du mirage” (‘Heart of my spirit, centre of the mirage’). ‘Mirage’ is the world of the senses which Buddhism teaches is ‘maya’, illusion ― but, for Karin, the ‘centre’ of the mirage is not illusory.
In MAYA, the speaker returns to a previous idyllic life amongst the Mayans, which she views as a recovery of her cosmic childhood ― ‘I retrace my steps into childhood’s abyss’. Indeed, the voyager hesitates, vainly wishing the process could stop here, in this lost paradise refound, “Que s’arrête le temps, que s’affaisse la trame” (‘If only time would stand still and the weft [of destiny] grow slack’).
After MAYA, NOVA comes as something of a shock. Instead of greeting with rapture a being from another realm, this time the spiritual traveller recoils with horror from a being (at once herself and another) that is cannibalistically sucking the speaker’s vitality: its birth is the present speaker’s death. She desperately pleads with it not to be born at all:
‘Undo! Unmake yourself, dissolve, refuse to be
Denounce what was desired but not chosen by me’

After anticipations of the future and a reliving of the past, SCOPOLAMINE and NYX return us to the present. (Scopolamine is incidentally not a hallucinatory drug but a ‘truth drug’, used by the Nazis on prisoners of war; it was prescribed to Karin by her doctor.) This time there is no holding back: on the contrary, the spiritual voyage is imaged as the launching of a spacecraft with (what we would now call) an astronaut aboard. Whatever it is that survives physical decomposition is already detaching itself from its earthly frame:
        ‘My heart has left my life behind,
        The world of Shape and Form I’ve crossed,
        I am saved, I am lost
        Into the unknown am tossed
        A name without a past to find’ 

NYX was written in a single burst on Karin’s deathbed and brings us even closer to the moment of metamorphosis. The tone is a mixture of awe, regret, rapture and incomprehension:
‘O deep desire amazement spread abroad
O splendid journey of the spellstruck mind
O worst mishap O grace descended from above
O open door through which not one has passed

I know not why I sink, expire
Before the eternal place is mine
I know not who made me his prey
Nor who it was made me his love’

Note:  The full text of the Six Poems in both the original French and my translation  can be found on the website  or, if this expires, directly from the author.

SH 29/09/2016



What is number? By ‘number’ I mean whole number, positive integer such as 2, 34, 1457… Conceptually, number is hard to pin down, and the modern mathematical treatment, which makes number depend on formal logical principles, does not help us to understand what number ‘really’ is. “He who sees things in their growth and first origins will obtain the clearest view of them” wrote Aristotle. So maybe we should start by asking why mankind ever bothered with numbers in the first place and what mental/physical capacities are required to use them with confidence.

It would seem that numbers were invented principally to record data. Animals  get along perfectly well without a number system and innumerable ‘primitive’ peoples possessed a very rudimentary one, sometimes just one, two, three. The aborigine who often possessed no more than four tools did not need to count them and the pastoralist typically had a highly developed visual memory of his herd, an ability we have largely lost precisely because we don’t use it in our daily life (Note 1). It was the large, centrally controlled empires of the Middle East like Assyria and Babylon that developed both arithmetic and writing. The reasons are obvious: a hunter, goatherd or subsistence farmer in constant contact with his small store of worldly goods, does not need records, but a state official in charge of a large area does. Numbers were developed for the purpose of trade, stock-taking, assessment of military strength and above all taxation ― even today I would guess that numbers are employed more for bureaucratic purposes than scientific or pure mathematical ones (Note 2).

What about the required mental capacities? There are two, and seemingly only two, essential requirements. Firstly, one must be able to make a hard and fast distinction between what is ‘singular’ and ‘plural’, between a ‘one’ and a ‘more-than-one’. Secondly, one must be capable of ‘pairing off’ objects taken from two different sets, what mathematicians call carrying out a ’one-one correspondence’.

The difference between ‘one’ and ‘more-than-one’ is basic to human culture (Note 3). If we actually lived in a completely unified world, or felt that we did, there would be no need for numbers. Interestingly, in at least one ancient language, the word for ‘one’ is the same as the word for ‘alone’: without an initial sense of estrangement from the rest of the universe, number systems, and everything that is built upon them, would never have been invented.

‘Pairing off’ two sets, apples and pears for example, is the most basic procedure in the whole of mathematics, and it was only relatively recently (the end of the 19th century) that it became clear that the basic arithmetic operations, and numbers themselves, originate in messing about with collections of objects. Given any two sets, the first set A can either be paired off exactly member for member with the second set B, or it cannot be. In the negative case, the reason may be that A does not ‘stretch to’ B, i.e. is ‘less than’ B (<), or alternatively that it ‘goes beyond’ B (>), i.e. has at least one object to spare after a pairing off. Given two clearly defined sets of objects, one, and only one, of these three cases must apply.
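A sketch of the procedure (the function name and example collections are mine): we never count either collection, we merely withdraw one member from each until at least one is exhausted, and exactly one of the three outcomes results.

    # A minimal sketch: compare two collections by one-one correspondence
    # alone, without ever counting either of them.
    def pair_off(a, b):
        a, b = list(a), list(b)
        while a and b:
            a.pop()          # remove one member of A...
            b.pop()          # ...and pair it with one member of B
        if not a and not b:
            return "A matches B exactly"
        return "A goes beyond B (>)" if a else "A does not stretch to B (<)"

    print(pair_off(["apple"] * 5, ["pear"] * 7))   # A does not stretch to B (<)
    print(pair_off("abc", [1, 2, 3]))              # A matches B exactly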

But as Piaget points out, the ability to pair off, say, apples and pears is not sufficient. A child may be able to do this but baulk at pairing off shoes and chairs, or apples and people. To be fully numerate, one needs to be able, at least in principle, to ‘pair off’ any two collections of discrete objects. Not only children but whole societies hesitated at this point, considering that certain collections of things were ‘not to be compared’ because they were too different. One collection might be far away like the stars, the other near at hand; one collection composed of living things, the other of dead things, and so on.

This gives us the cognitive baggage to be ‘numerate’ but a further step is necessary before we have a fully functioning number system. The society, tribe or social group needs to decide on a set of more or less identical objects (later marks) which are to be the standard set against which all other sets are to be compared. So-called ‘primitive’ peoples used shells, beans or sticks as numbers for thousands of years and within living memory the Wedda of Ceylon carried out transactions with bundles of ‘number sticks’. Yoruba state officials of the Benin empire in Nigeria performed quite complicated additions and subtractions using only heaps of cowrie shells. Note that the use of a standard set is an enormous cultural and social advance on simply pairing off specific sets of objects. The cowboy who had “so many notches on his gun” was (presumably) doing the latter, i.e. pairing off one dead man with one notch, and doubtless used other marks or words to refer to other objects. In many societies there were several sets of number words, marks or objects in use simultaneously, the choice depending on context or the objects being counted (Note 4).

So what are the criteria for the choice of a standard set? It is essential that the objects (or marks) chosen should be more or less identical since the whole principle of numbering is that individual differences such as colour, weight, shape and so on are irrelevant numerically speaking. Number is a sort of informational minimum: of all the information available we throw away practically everything since all that matters is how the objects concerned pair off with those of our standard set. Number, which is based on distinction by quantity, required a cultural and social revolution since it had to replace distinction by type which was far more important to the hunter/foodgatherer ― comestible or poisonous, friend or foe, male or female.

Secondly, we want a plentiful supply of object numbers so the chosen ‘one-object’ must be abundant or easy to make, thus the use of shells, sticks and beans. Thirdly, the chosen ‘one-object’ must be portable and thus fairly small and light. Fourthly, it is essential that the number objects do not fuse or adhere to each other when brought into close proximity.

All these requirements make the choice of a basic number object (or object-number) by no means as simple as it might appear and eventually led to the use of marks on a background such as charcoal strokes on plaster, or knots in a cord, rather than objects as such.

Numbering has come a long way since the use of shells or scratches on bones, but the ingenious improvements leading up to our current Arab/Hindu place-value number system have largely obscured the underlying principles of numbering. The choice of a ‘one-object’, or mark, plus the ability to replicate this object or mark more or less indefinitely, is the basis of a number system. The principal improvements subsequent to the replacement of number-objects by ‘number-marks’ have been ‘cipherisation’ and the use of bases.

In the case of cipherisation we allow a single word or mark to represent what is in fact a ‘more than one’, thus contradicting the basic distinction on which numbering depends. If we take 1 as our ‘one-mark’, 11111 ought by rights to represent what we call five and write as 5. Though this step was long in coming, the motivation is obvious: simple repetition is cumbersome and leads to error ― one can only with difficulty distinguish 1111111 from 111111. Verbal number systems seem to have led the way in this: no languages I know of say ‘one-one-one’ for 3 and very few simply repeat a ‘one-word’ and a ‘two-word’ (though there are examples of this).

The use of bases, such as our base 10, depends on the idea of a ‘greater one’, i.e. an object that is at once ‘one’ and ‘more-than-one’, such as a tight bundle of similar sticks. And if we now extend the principle and make an even bigger bundle out of the previous bundles, while keeping the ‘scaling’ the same, we have a fully fledged base number system. The choice of ten for a base is most likely a historical accident, since we have exactly five digits on each hand. The hand was the first computer, and finger counting was widely practiced until quite recent times: the Venerable Bede wrote a treatise on the subject.

The final advance was the use of ‘place value’: by shifting the mark to the left (or right in some cases) you make it ‘bigger’ by the same factor as your chosen base. Although we don’t see it like this, 4567 is a concise way of writing four thousands, five hundreds, six tens and seven ones.
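The bundling principle and place value can be illustrated in a few lines (the function below is my own illustration, not a historical reconstruction): repeatedly tying loose ones into bundles of ten, and bundles into bundles-of-bundles, yields exactly the digits of place-value notation.

    # A minimal sketch: repeated bundling of a heap of 'one-objects' into
    # bundles of size `base` yields the digits of place-value notation.
    def bundle(heap_size, base=10):
        digits = []
        while heap_size:
            digits.append(heap_size % base)   # loose ones left at this level
            heap_size //= base                # each bundle becomes one 'greater one'
        return digits[::-1] or [0]

    print(bundle(4567))   # [4, 5, 6, 7]: four thousands, five hundreds, six tens, seven ones
    assert 4567 == 4*1000 + 5*100 + 6*10 + 7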

Human beings, especially in the modern world, spend a vast amount of time and effort moving objects from one location to another, from one country to another or from supermarket to kitchen. One set of possessions increases in size, another decreases, giving rise to the arithmetic operations of addition and subtraction. To make the vast array of material things manageable we need to divide them up neatly into subsets, and for stock-taking and related purposes we need agreed numerical symbols for the objects and people being shifted about. A tribal society can afford to ignore numbers (but not shape), an empire cannot.

SH




Note 1 A missionary in South America noted with amazement that some tribes like the Abipone had only three number words but during migration could see at a glance from their saddles whether a single one of their dogs was missing out of the ‘immense horde’. From Menninger, Number Words and Number Symbols, p. 10.

Note 2 To judge by the sort of problems they tackled, the Babylonian and Egyptian scribes were obviously interested in numbers for their own sake as well, i.e. were already pure mathematicians, but the primary motivation was undoubtedly socio-economic. Even geometry, which comes from the Greek word for ‘land-measurement’, was originally developed by the Egyptians in order to tax peasants with irregularly shaped plots bordering the Nile.


Note 3  Some historians and ethnologists argue that the tripartite distinction ‘one-two-more than two’, rather than ‘one-many’, is the basic distinction. Thus the cases singular, dual and plural of certain ancient languages such as Greek.


Note 4 The Nootkans, for example, had different terms for counting or speaking of (a) people or salmon; (b) anything round in shape; (c) anything long and narrow. And modern Japanese retains ‘numerical classifiers’.

What is time?

What is time? Time is succession. Succession of what? Of events, occurrences, states. As someone put it, time is Nature’s way of stopping everything happening at once.

In a famous thought experiment, Descartes asked himself what it was not possible to disbelieve in. He imagined himself alone in a quiet room cut off from the bustle of the world and decided he could, momentarily at least, disbelieve in the existence of France, the Earth, even other people. But one thing he absolutely could not disbelieve in was that there was a thinking person, cogito ergo sum (‘I think, therefore I am’).
Those of us who have practiced meditation, and many who have not, know that it is quite possible to momentarily disbelieve in the existence of a thinking/feeling person. But what one absolutely cannot disbelieve in is that thoughts and bodily sensations of some sort are occurring and, not only that, that these sensations (most of them anyway) occur one after the other. One outbreath follows an inbreath, one thought leads on to another and so on and so on until death or nirvana intervenes. Thus the grand conclusion: There are sensations, and there is succession.  Can anyone seriously doubt this? 

Succession and the Block Universe

That we, as humans, have a very vivid, and more often than not acutely painful, sense of the ‘passage of time’ is obvious. A considerable body of the world’s literature is devoted to bewailing the transience of life, while one of the world’s four or five major religions, Buddhism, has been well described as an extended meditation on the subject. Cathedrals, temples, marble statues and so on are attempts to defy the passage of time, ars longa vita brevis.
However, contemporary scientific doctrine, as manifested in the so-called ‘Block Universe’ theory of General Relativity, tells us that everything that occurs happens in an ‘eternal present’, the universe ‘just is’. In his latter years, Einstein took the idea seriously enough to mention it in a letter of consolation to the son of his lifelong friend, Besso, on the occasion of the latter’s death. “In quitting this strange world he [Michel Besso] has once again preceded me by a little. That doesn’t mean anything. For those of us who believe in physics, this separation between past, present and future is an illusion, however tenacious.”
Never mind the mathematics, such a theory does not make sense. For, even supposing that everything that can happen during what is left of my life has in some sense already happened, this is not how I perceive things. I live my life day to day, moment to moment, not ‘all at once’. Just possibly, I am quite mistaken about the real state of affairs but it would seem nonetheless that there is something not covered by the ‘eternal present’ theory, namely my successive perception of, and participation in, these supposedly already existent moments (Note 1). Perhaps, in a universe completely devoid of consciousness,  ‘eternalism’ might be true but not otherwise.
Barbour, the author of The End of Time, argues that we do not ever actually experience ‘time passing’. Maybe not, but this is only because the intervals between different moments, and the duration of the moments themselves, are so brief that we run everything together like movie stills. According to Barbour, there exists just a huge stack of moments, some of which are interconnected, some not, but this stack has no inherent temporal order. But even if it were true that all that can happen is already ‘out there’ in Barbour’s Platonia (his term), picking a pathway through this dense undergrowth of discrete ‘nows’ would still be a successive procedure.
I do not think time can be disposed of so easily. Our impressions of the world, and conclusions drawn by the brain, can be factually incorrect ― we see the sun moving around the Earth, for example ― but to deny either that there are sense impressions or that they appear successively, not simultaneously, strikes me as going one step too far. As I see it, succession is an absolutely essential component of lived reality: either there is succession or there is just an eternal now, and I see no third possibility.
What Einstein’s Special Relativity does demonstrate is that there is seemingly no absolute ‘present moment’ applicable right across the universe (because of the speed-of-light barrier). But in Special Relativity, at least, succession and causality still very much exist within any particular local section, i.e. inside a particular event’s light cone. One can only surmise that the universe as a whole must have a complicated mosaic of successiveness made up of interlocking pieces (tesserae).

In various areas of physics, especially thermo-dynamics, there is much discussion of whether certain sequences of events are reversible or not, i.e. could take place other than in the usual observed order. This is an important issue but is a quite different question from whether time (in the sense of succession) exists. Were it possible for pieces of broken glass to spontaneously reform themselves into a wine glass, this process would still occur successively and that is the point at issue.

Time as duration

‘Duration’ is a measure of how long something lasts. If time “is what the clock says”, as Einstein is reported to have once said, duration is measured by what the clock says at two successive moments (‘times’). The trick is to have, or rather construct, a set of successive events that we take as our standard set and relate all other sets to this one. The events of the standard set need to be punctual and brief, the briefer the better, and the interval between successive events must be both noticeable and regular. The tick-tock of a pendulum clock provided such a standard set for centuries, though today we have the much more regular vibrations of quartz crystals or the hyperfine transitions of caesium atoms.
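By way of illustration only (the function and the figures below are my own invention, not anything from the physics literature): measuring a duration amounts to pairing the interval between two events against the standard set of tick-events and seeing how many ticks it absorbs.

    # A minimal sketch: duration as the number of standard tick-events that
    # fall between two given events. The tick times are invented for the example.
    def duration_in_ticks(start, end, tick_times):
        return sum(1 for t in tick_times if start <= t < end)

    ticks = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]     # the standard set (pendulum beats, say)
    print(duration_in_ticks(1.2, 4.7, ticks))  # 3 ticks separate the two events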

Continuous or discontinuous?

 A pendulum clock records and measures time in a discontinuous fashion: you can actually see, or hear, the minute or second hand flicking from one position to another. And if we have an oscillating mechanism such as a quartz crystal, we take the extreme positions of the cycle which comes to the same thing.
However, this schema is not so evident if we consider ‘natural’ clocks such as sundials, which are based on the apparent continuous movement of the sun. Hence the familiar image of time as a river which never stops flowing. Newton viewed time in this way, which is why he analysed motion in terms of ‘fluxions’, or ‘flowings’. Because of the calculus, which Newton co-invented, it is the continuous approach which has overwhelmingly prevailed in the West. But a perpetually moving object, or one perceived as such, is useless for timekeeping: we always have to home in on specific recurring configurations, such as the longest or shortest shadow cast. We have to freeze time, as it were, if we wish to measure temporal intervals.

Event time

The view of time as something flowing and indivisible is at odds with our intuition that our lives consist of a succession of moments with a unique orientation, past to future, actual to hypothetical. Science disapproves of this common-sense schema but is powerless to erase it from our thoughts and feelings: clearly the past/present/future schema is hard-wired and will not go away.

If we dispense with continuity, we can also get rid of  ‘infinite divisibility’ and so we arrive at the notion, found in certain early Buddhist thinkers, that there is a minimum temporal (and spatial) interval, the ksana. It is only recently that physicists have even considered the possibility that time  is ‘grainy’, that there might be ‘atoms of time’, sometimes called chronons. Now, within a minimal temporal interval, there would be no possible change of state and, on this view, physical reality decomposes into a succession of ‘ultimate events’ occupying  minimal locations in space/time with gaps between these locations. In effect, the world becomes a large (but not infinite) collection of interconnected cinema shows proceeding at different rates.

Joining forces with time

The so-called ‘arrow of time’ is simply the replacement of one localized moment by another, and the procedure is one-way because, once a given event has occurred, there is no way that it can be ‘de-occurred’. Awareness of this gives rise to anxiety ― “The Moving Finger writes; and, having writ, / Moves on: nor all thy Piety nor Wit / Shall lure it back to cancel half a Line…” Most religious, philosophic and even scientific systems attempt to allay this anxiety by proposing a domain that is not subject to succession, is ‘beyond time’. Thus Plato and Christianity, the West’s favoured religion. And even if we leave aside General Relativity, practically all contemporary scientists have a fervent belief in the “laws of physics”, which are changeless and in effect wholly transcendent.
Eastern systems of thought tend to take a different approach. Instead of trying desperately to hold on to things such as this moment, this person, this self, Buddhism invites us to  ‘let go’ and cease to cling to anything. Taoism goes even further, encouraging us to find fulfilment and happiness by identifying completely with the flux of time-bound existence and its inherent aimlessness. The problem with this approach is, however, that it is not clear how to avoid simply becoming a helpless victim of circumstance. The essentially passive approach to life seemingly needs to be combined with close attention and discrimination ― in Taoist terms, Not-Doing must be combined with Doing.

Note 1 And if we start playing with the idea that  not only the events but my perception of them as successive is already ‘out there’, we soon get involved in infinite regress.

Note 2 I have attempted to develop this schema on the website

Even and Odd

Animals and so-called primitive peoples do not bother to make nice distinctions between entities on the basis of number and even today, when deprived of technological aids, we are not at all good at it (Note 1). What people do ‘naturally’ is to make distinctions of type, not number, and the favourite principle of division by type is the two-valued either/or principle. Plato thought that this principle, dichotomy, was so fundamental that all knowledge was based on it, the reason perhaps being that the brain itself works in this way: a nerve synapse is either ‘on’ or ‘off’. Psychologically, human beings have a very strong inclination to proceed by straight two-valued distinctions, light/dark, this/that, on/off, sacred/profane, Greek/Barbarian, Jew/Gentile, good/evil and so on; more complex gradations are only introduced later and usually with great reluctance. Science has eventually recognized the complexity of nature and, apart from gender, there are not many true scientific dichotomies left, though we still have the classification of animals into vertebrates and invertebrates.

Numbers themselves very early on got classified into even and odd, the most fundamental numerical distinction after that between one and many, which is even more basic.

The classification even/odd is radical: it provides what modern mathematicians call a partition of the whole set. That is, the classification principle is exhaustive: with the possible exception of the unit, all numbers fall into one or other of the two categories. Moreover, the two classes are mutually exclusive: no number appearing in the list of evens will appear in the list of odds. This is by no means true of all classification principles for numbers, as one might perhaps at first assume. Numbers can be classified, for example, as triangular or as rectangular according to whether they can be (literally) made into equilateral triangles or rectangles. But ΟΟΟΟΟΟ turns out to be both, since it can be formed either into a triangle or a rectangle:

ΟΟΟ                                         ΟΟΟ
ΟΟΟ                                         ΟΟ
                                            Ο
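Both classifications can be tested mechanically by attempting the arrangements themselves, as in this sketch (function names mine); note that, unlike even/odd, the two classes overlap, so they do not partition the numbers:

    # A minimal sketch: test the two figurate classifications by attempting
    # the arrangements directly.
    def is_triangular(n):
        # Do rows of 1, 2, 3, ... exhaust n exactly?
        row, left = 1, n
        while left > 0:
            left -= row
            row += 1
        return left == 0

    def is_rectangular(n):
        # Can n be laid out with at least two rows AND two columns?
        return any(n % k == 0 and n // k >= 2 for k in range(2, n))

    print(is_triangular(6), is_rectangular(6))   # True True: 6 belongs to both classes
    print(is_triangular(3), is_rectangular(3))   # True False: 3 = 1+2 but only a line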
The Greeks, like practically all cultures in the ancient world, viewed the odd and even numbers as male and female respectively — presumably because a woman has ‘two’ breasts and a male only one penis. And since oddness (though in Greek the term did not have the same associations as in English) was nonetheless defined with respect to evenness, and not the reverse, this made an odd number a sort of female manqué. This must have posed a problem for their strongly patriarchal society, but the Greek philosophers and mathematicians got round it by arguing that ‘one’ (and not ‘two’) was the basis of the number system, ‘one’ being the ‘father of all numbers’.

On the other hand, a matriarchal society, or a species where females were dominant, would almost certainly, and with better reasoning, have made ‘one’ a female number, the primeval egg from which the whole numerical progeny emerged. Those who consider that mathematics is in some sense ‘eternally true’ should reflect on the question of how mathematics would have developed within a hermaphroditic species, or in a world where there were three and not two humanoid genders, as in Iain M. Banks’s science-fiction novel The Player of Games.

Evenness is not easy to define — nor, for that matter, to recognize, as I have just realized: coming across an earlier version of this section, I found I was momentarily incapable of deciding which of the rows of balls pictured at the head of this chapter represented odd or even numbers. We have to appeal to some very basic feeling for ‘symmetry’ — what is on one side of a dividing line is exactly matched by what is on the other side of it. A definition could thus be:

If you can pair off a collection within itself and nothing remains over, then the collection is called even; if you cannot do this, the collection is termed odd.
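This definition translates directly into a procedure, sketched below, that decides evenness by pairing alone, with no counting and no division (the function name is mine):

    # A minimal sketch: decide even/odd exactly as defined above, by pairing
    # members of the collection with each other. We never ask how many items
    # there are, only whether another pair can still be formed.
    def parity(collection):
        items = list(collection)
        while len(items) >= 2:      # can we still form a pair?
            items.pop()
            items.pop()             # pair off two members and set them aside
        return "even" if not items else "odd"   # a leftover means 'odd'

    print(parity(["stone"] * 8))   # even
    print(parity("ΟΟΟΟΟ"))         # odd: one Ο remains unpaired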

This makes oddness anomalous and less basic than evenness, which intuitively one feels to be right — we would not, I think, ever dream of defining oddness and then say “If a collection is not odd, it is even”. And although it is only in English and a few other languages that ‘odd’ also means ‘strange’, the pejorative sense that the word has picked up suggests that we expect and desire things to match up, i.e. we expect, or at least desire, them to be ‘even’ — the figure of Justice holds a pair of evenly balanced scales.

The sense of even as ‘level’ may well be the original one. If we have two collections of objects which, individually, are more or less identical, then a pair of scales remains level if the collections are placed on the two arms of the balance (at the same distance from the pivot). One could define even and odd thus pragmatically:

“If a collection of identical standard objects can be divided up in a way which keeps the arms of a balance level, then the collection is termed even. If this is not possible it is termed odd.”

This definition avoids using the word ‘two’, which is preferable since the sense of things being ‘even’ is much more fundamental than a feeling for ‘twoness’ — for this reason the distinction even/odd, like the even more fundamental ‘one/many’, belongs to the stage of pre-numbering rather than that of numbering.

Early man would not have had a pair of scales, of course, but he would have been familiar with the procedure of ‘equal division’, and the simplest way of dividing up a collection of objects is to separate it into two equal parts. If there was an item left over it could simply be thrown away. Evenness is thus not only the simplest way of dividing up a set of objects but the principle of division which makes the remainder a minimum: any other method of division  runs the risk of having more objects left over.
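
A quick worked check of this last claim (my own sketch, not in the original): division into k equal shares can leave anything from 0 to k − 1 items over, so division into two shares is the method that bounds the waste at a single item.

    n = 17
    for k in (2, 3, 4, 5):
        print(k, "shares of", n // k, "with", n % k, "left over")
    # 2 shares of 8 with 1 left over
    # 3 shares of 5 with 2 left over
    # 4 shares of 4 with 1 left over
    # 5 shares of 3 with 2 left over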

Euclid’s definition is that of equal division. He says “An even number is that which is divisible into two equal parts” (Elements, Book VII, Definition 6) and “An odd number is that which is not divisible into two equal parts, or that which differs by a unit from an even number” (Elements, Book VII, Definition 7). Incidentally, in Euclid ‘number’ not only always has the sense ‘positive integer’ but has a concrete sense — he defines ‘number’ as a “multitude composed of units”.

Note that Euclid defines odd first privatively (by what it is not) and then as something deficient with reference to an even number. The second definition is still with us today: algebraically the formula for the odd numbers is (2n − 1), where n is given the successive values 1, 2, 3… or sometimes (in order to leave 1 out of it) the successive values 2, 3, 4…. In concrete terms, we have the sequence

Ο     ΟΟ     ΟΟΟ  ……

Duplicating them gives us the ‘doubles’ or even numbers

Ο     ΟΟ     ΟΟΟ  ……
Ο     ΟΟ     ΟΟΟ  ……

and  removing a unit each time gives us the ‘deficient’ odd numbers.
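
In modern notation the same construction can be carried out mechanically; a minimal sketch in Python (my illustration only):

    n = range(1, 7)
    doubles   = [2 * k for k in n]      # the 'doubles', i.e. the even numbers 2, 4, 6, 8, 10, 12
    deficient = [2 * k - 1 for k in n]  # a unit removed from each: the odd numbers 1, 3, 5, 7, 9, 11
    print(doubles, deficient)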

The unit itself is something out on its own and was traditionally regarded as neither even nor odd. It is certainly not even according to the ‘equal division’ definition, since it cannot be divided at all (within the context of whole number theory) and it cannot be put on the scales without disturbing equilibrium. In practice it is often convenient to treat the unit as if it were odd, just as it is to consider it a square number, cube number and so forth; otherwise many theorems would have to be stated twice over. Context usually makes it clear whether the term ‘number’ includes the unit or not.

Note that distinguishing between even and odd has nothing to do with counting or even with distinguishing between greater or less – knowing that a number is even tells you nothing about its size. And vice-versa, associating a number word or symbol with a collection of objects will not inform you as to  whether the quantity is even or odd — there are no ‘even’ or ‘odd’ endings to the spoken word like those showing whether something is singular or plural,  masculine or feminine.

It is significant that we do not have words for numbers which, for example, are multiples of four or which leave a remainder of one unit when divided into three. (The Greek mathematicians did, however, speak of ‘even-even’ numbers.) If our species had three genders instead of two, as in the world described in The Player of Games, we would perhaps tend to divide things into threes and classify all numbers according to whether they could be divided into three parts exactly, were one counter short, or had one counter over. This, however, would have made things so much more complicated that such a species would most likely have taken even longer than we did to develop numbering and arithmetic.

The distinction even/odd is the first and simplest case of what is today called a congruence. The integers can be separated out into so-called equivalence classes according to the remainder left when they are divided by a given number termed the modulus. All numbers fall into a single class to the modulus 1, since when they are separated out into ones there is only one possible remainder : nothing at all. In Gauss’s notation the even numbers are the numbers which leave a remainder of zero when divided by 2, i.e. are ‘0 (mod 2)’, where mod is short for modulus; and the odd numbers are all ‘1 (mod 2)’, i.e. leave a unit when separated into twos. What is striking is that although the distinction between even and odd, i.e. between numbers that are 0 or 1 (mod 2), is prehistoric, congruence arithmetic as such was invented by Gauss a mere two centuries ago.
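
In computational terms (a minimal sketch of my own, not Gauss’s notation), the remainder operation sorts the integers into their equivalence classes directly; the second call below shows the threefold classification imagined earlier for a three-gendered species:

    def equivalence_classes(numbers, modulus):
        # Group numbers by the remainder they leave on division by the modulus
        classes = {r: [] for r in range(modulus)}
        for n in numbers:
            classes[n % modulus].append(n)
        return classes

    print(equivalence_classes(range(1, 13), 2))
    # {0: [2, 4, 6, 8, 10, 12], 1: [1, 3, 5, 7, 9, 11]}  (the evens and the odds)
    print(equivalence_classes(range(1, 13), 3))
    # {0: [3, 6, 9, 12], 1: [1, 4, 7, 10], 2: [2, 5, 8, 11]}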

In concrete terms we can set up equivalence classes relative to a given modulus by arranging collections of counters (in fact or in imagination) between parallel lines of set width starting with unit width, then a width which allows two counters only, then three and so on. This image enables us to see at once that the sum of any two or more even numbers is always even.

And since an odd number has an extra Ο, a pair of odd numbers have each an extra unit; so, if we fit them together so that the extra units face each other, we have an even result. Thus “Even plus even equals even” and “Odd plus odd equals even” are not just jingles we have to learn at school but correspond to what actually happens if we try to arrange actual counters or squares so that they match up.

We end up with the following two tables which may well have been the earliest ones ever to have been drawn up by mathematicians.

+        odd     even                 ×        odd     even
odd      even    odd                  odd      odd     even
even     odd     even                 even     even    even
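
The entries are easily checked exhaustively; the following sketch (mine, for illustration) confirms that the parity of a sum or a product depends only on the parities of the two numbers involved:

    for a in range(10):
        for b in range(10):
            # the parity of sum and product is fixed by the parities of a and b alone
            assert (a + b) % 2 == (a % 2 + b % 2) % 2
            assert (a * b) % 2 == (a % 2) * (b % 2)
    print("both tables verified")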


All this may seem so obvious that it is hardly worth stating, but simply by appealing to these tables many results can be deduced that are far from self-evident. For example, we find by experience that certain concrete numbers can be arranged as rectangles and that, amongst these rectangular numbers, there are some that can be separated into two smaller rectangles and others that cannot. However, if I am told that a certain collection can be arranged as a rectangle with one side just a unit greater than the other, then I can immediately deduce that it can be separated into two equal smaller rectangles. Why am I so sure of this? Because, referring to the tables above,

1.) the ‘product’ of an even and an odd number is even;
2.) an even number can by definition always be separated into two equal parts.

I could deduce this even if I were a member of a society which had no written number system and no more than a handful of number words.
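
The deduction can also be checked concretely: in a rectangle with one side just a unit longer than the other, one of the two sides must be even, and cutting that side in half yields the two equal smaller rectangles. A sketch of the check (mine):

    for n in range(2, 8):
        counters = n * (n + 1)        # one side a unit greater than the other
        assert counters % 2 == 0      # odd times even is always even (see the table)
        if n % 2 == 0:
            halves = (n // 2, n + 1)  # cut the even side in two
        else:
            halves = (n, (n + 1) // 2)
        print(counters, "counters =", "two rectangles of", halves)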

This is only the beginning: the banal distinction between even and odd, and reference to the entries in the tables above, crops up in a surprising number of proofs in number theory. The famous proof that the square root of 2 is not a rational number — as we would put it — is based on the fact that no quantity made up of so many equal bits can be at once even and odd.
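
For the reader who wants that argument spelt out, here is a sketch of the classical proof in modern notation (the letters p, q and r are mine):

    Suppose √2 = p/q, where p and q are whole numbers with no common factor.
    Then p² = 2q², so p² is even; and since odd × odd = odd (see the table above),
    p itself must be even, say p = 2r. Substituting, 4r² = 2q², i.e. q² = 2r²,
    so q must be even as well. But then p and q share the factor 2 after all,
    contradicting the assumption that the fraction was in lowest terms: such a
    quantity would have to be at once even and odd, and none exists.

SH 5/03/15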


Note 1  This fact (that human beings are not naturally very good at assessing numerical quantity) is paradoxical, since mankind is the numerical animal par excellence. Mathematics is the classic case of the weakling who makes himself into Arnold Schwarzenegger. It is because we are so bad at quantitative assessment that playing cards are obliged to show the number symbols in the corner of the card, and why the dots on a die are arranged in set patterns to avoid confusion.


Poetry and Contemporary Attitudes towards Death

Note: On the brink of undergoing my first major surgical intervention, I came across the following piece amongst my papers. It was apparently written two years after the death of my father, which event took place at least twelve to fifteen years ago. There are plenty of things I could add but I thought it best to leave the piece unchanged. SH

Increasingly people today are arranging their own funeral services or those of their family and partners, whether the service is a standard cremation or a woodland burial. Instead of, or in association with, passages from the Bible or other sacred books, there is an increasing demand for readings from contemporary or at least relatively modern authors. Unfortunately, leafing through late twentieth-century literature one finds very little indeed on the subject of death, and that little is generally of extremely poor quality. Why is this? Present-day society, whatever its other merits, seems incapable of facing up to death: by and large we sweep it under the carpet and pretend it isn’t really there. For most of my life death was something that happened in the past or to other people, and I saw my first actual dead body (my father’s) only a couple of years ago, when I was in my late fifties. Confronted with death, one tends to be flummoxed, embarrassed, at a loss. Although I suppose one could write a good poem saying exactly this ─ that one doesn’t really know what to feel ─ I don’t think a funeral service would be the place to read it out loud.

What exactly do most people require from readings at a funeral service? I think most people require something solemn. Since the language of the King James Bible and the Prayer Book is solemn in an absolutely magnificent way, a lot of people who don’t believe a word of it are quite happy for extracts to be read at funerals ─ they ‘sound right’ and up to a point that is all that really matters. Death can be treated as a joke, but a funeral service is not, one feels, the place for fooling around. I have been to a funeral where supposedly funny pieces were read : practically everyone present, including myself, found this tasteless and objectionable. (Irish Catholics have their wakes, of course, but they have a full-blown funeral service first.) And as it happens, modern poetry ─ I mean poetry from the nineteen-twenties onwards ─ has very largely been against solemnity, against anything high-sounding, and has become deliberately prosaic and matter-of-fact. That is all very well, but it goes some way to explaining why few people today can write well about death, for the theme of death somehow does require one to pull out all the stops.

Also, people generally desire to have something consoling if possible read out at a funeral service. Once again, the traditional religions score heavily here since they do offer serious consolations, in particular the consolation that, contrary to appearances, death is not the end. Humanism finds it hard to compete here.

What is indubitable when confronted with a corpse is that something has gone, has ended. How can one attempt to console oneself for this? One solution is to argue that the ‘true self’ does not reside in the body and so does not die with the body. It may surprise some people to learn that this was not originally a Christian doctrine ─ Christianity still officially affirms the ‘resurrection of the body’ ─ but a Greek idea which some historians trace even further back, to the ‘out-of-the-body’ experiences of Siberian shamans (see Note). Today, however, science has considerably weakened belief in the reality of this incorporeal entity, the soul; also, we are not too keen today on a system of belief which implicitly or explicitly downgrades the body. The ‘soul’ option is losing ground fast.

This more or less only leaves two broad options: belief in reincarnation and pantheism. Most people today who consider themselves pagans seem to believe in reincarnation or pantheism or both combined: certainly I myself am attracted to both. The difficulty with reincarnation as a ‘solution’ to the problem of death is that, either you believe in an immaterial ‘something’ which keeps on persisting, in which case you are driven back to the ‘soul option’, or, as in traditional Buddhism, you deny that there is anything that persists, in which case the whole system ceases to be so consoling. Hinduism, or certain forms of it, affirms that the ‘individual soul’ (atman) eventually gets merged completely in the Absolute (Brahman) from which it came.

Although Plato and some Greeks and Romans believed in reincarnation, the idea is basically Indian and, if one is looking for passages in the English language affirming reincarnation there is not a lot available.

Pantheism has the great advantage that it is actually in some sense true ! We do end up merged into ‘Nature’, and modern science, in affirming that “energy cannot be destroyed but only changed in form” (First Law of Thermodynamics), has actually reinforced pantheistic belief. The difficulties are of a different order. The Romantics identified ‘Nature’ with everything admirable and good, but since Darwin, and even worse since Dawkins, it seems we have to believe that Nature demonstrates the fascist principle of ‘survival of the fittest’. Also, since Nature obviously cares nothing for the individual, it is debatable to what extent pantheism can provide consolation when confronted with the death of an individual.

Finally, one should perhaps mention a sort of paradoxically ‘consoling’ solution to the problem of death, namely the belief that there is, and can be, no consolation. This option can at least claim to look things squarely in the face ─ or does it?

Sebastian Hayes

Note : E.R. Dodds takes this view in Chapter V of his remarkable book The Greeks and the Irrational.

Case Study in Eventrics : Adolf Hitler

                “There is a tide in the affairs of men
                Which, taken at the flood, leads on to fortune”

                                                Shakespeare, Julius Caesar

In a previous post I suggested that the three most successful non-hereditary ‘power figures’ in Western history were Cromwell, Napoleon and Hitler. Since none of the three had advantages that came by birth, as, for example, Alexander the Great or Louis XIV did, the meteoric rise of these three persons suggests either very unusual abilities or very remarkable ‘luck’.
From the viewpoint of Eventrics, success depends on how well a particular person fits the situation, and there is no inherent conflict between ‘luck’ and ability. Quite the reverse: the most important ‘ability’ that a successful politician, military commander or businessman can have is precisely the capacity to handle events, especially unforeseen ones. In other words, success to a considerable extent depends on how well a person handles his or her ‘good luck’ if and when it occurs, or how well a person can transform ‘bad luck’ into ‘good luck’. Whether everyone gets brilliant opportunities that they fail to seize one doubts; but, certainly, most of us are blind to the opportunities that do arise and, when not blind, lack the self-confidence to seize such an offered ‘chance’ and turn it to our advantage.
The above is hardly controversial though it does rule out the view that everything is determined in advance, or, alternatively, the exact opposite, that ‘more or less anything can happen at any time anywhere’. I take the commonsense view that there are certain tendencies that really exist in a given situation. It is, however, up to the individual to reinforce or make use of such ‘event-currents’ or, alternatively, to ignore them and, as it were, pass by on the other side like the Levite in the Parable of the Good Samaritan. The driving forces of history are not people but events and ‘event dynamics’; however, this does not reduce individuals to the status of puppets, far from it. Either through instinct or correct analysis (or a judicious mixture of the two) the successful person identifies a ‘rising’ event current, gets with it if it suits him or her, and abandons it abruptly when it ceases to be advantageous. This is easy enough to state, but supremely difficult to put into practice. Everyone who speculates on the Stock Exchange knows that the secret of success is no secret at all : it consists in buying  when the price of stock is low but just about to rise and selling when the price is high but just about to fall. For one Soros, there are a hundred thousand or maybe a hundred million ‘ordinary investors’ who either fail entirely or make very modest gains.
But why, one might ask, is it advantageous to identify and go with an ‘event trend’ rather than simply decide what you want to do and pursue your objective off your own bat? Because the trend will do a good deal of the work for you : the momentum of a rising trend is colossal, indeed for a while, seems to be unstoppable. Pit yourself against a rising trend and it will overwhelm you, identify yourself with it and it will take you along with a force equivalent to that of a million individuals. If you can spot coming trends accurately and go with them, you can succeed with only moderate intelligence, knowledge, looks, connections, what have you.

Is charisma essential for success?

It is certainly possible to succeed spectacularly without charisma: Cardinal Richelieu, the most powerful man in the France and Europe of his day, had none, whereas Joan of Arc, who had plenty, had a pitifully short career. Colbert, finance minister of Louis XIV, is another example; indeed, in the case of ministers it is probably better not to stick out too much from the mass, even to the extent of appearing a mediocrity.
Nonetheless, Richelieu and Colbert lived during an era when it was only necessary to obtain the support of one or two big players such as kings or popes, whereas, in a democratic era, it is necessary to inspire and fascinate millions of ‘ordinary people’. No successful modern dictator has lacked charisma : Stalin, Mao Tse-tung and Hitler all had plenty, and this made up for much else. Charisma, however, is not enough, or not enough if one wishes to remain in power : to do this, an intuitive or pragmatic grasp of the behaviour of event patterns is a sine qua non, and this is something quite different from charisma.

Hitler as failure and mediocrity

Many historians, especially British ones, are not just shocked but puzzled by Hitler ─ though less now than they were fifty years ago. For how could such an unprepossessing individual, with neither looks, polish, connections nor higher education, succeed so spectacularly? One British newspaper writer described Hitler, on the occasion of his first big meeting with Mussolini, as looking like “someone who wanted to seduce the cook”.
Although he had participated in World War I and shown himself to be a dedicated and brave ‘common soldier’, Hitler never had any experience as a commander on the battlefield even at the level of a platoon ─ he was a despatch runner who was told what to do (deliver messages) and did it. Yet this was the man who eventually got control of the greatest military machine in history and blithely disregarded the opinions of seasoned military experts, initially with complete success. Hitler also proved to be a vastly successful public speaker, but he never took elocution lessons and, when he started, even lacked the experience of handling an audience that an amateur  actor or stand-up comedian possesses.
Actually, Hitler’s apparent disadvantages proved to be more of a help than a hindrance once he had begun to make his mark, since they gave his adversaries and rivals the erroneous impression that he would be easy to manipulate and outwit. Hitler learned about human psychology, not by reading learned tomes written by Freud and Adler, but by eking out a precarious living in Vienna as a seller of picture postcards and sleeping in workingmen’s hostels. This was learning the hard way which, as long as you last the course (which the majority don’t), is generally the best way.
It is often said that Hitler was successful because he was ruthless. But ruthlessness is, unfortunately, not a particularly rare human trait, at any rate in the lower levels of a not very rich society. Places like Southern Italy or Colombia by all accounts have produced and continue to produce thousands or tens of thousands of exceedingly ruthless individuals, but how many ever get anywhere? At the other end of the spectrum, one could argue that it is impossible to be a successful politician without a certain degree of ruthlessness ─ though admittedly Hitler took it to virtually unheard of extremes. Even ‘good’ successful political figures such as Churchill were ruthless enough to happily envisage dragging neutral Norway into the war (before the Germans invaded), to authorise the deliberate bombing of civilian centres and even to approve in theory the use of chemical weapons. Nor did de Gaulle bother unduly about the bloody repercussions for the rural population that the activities of partisans would inevitably bring  about. Arguably, if people like Churchill and de Gaulle had not had a substantial dose of ‘ruthlessness’ (aka ‘commitment’), we would have lost the war long before the Americans ever got involved  ─ which is not, of course, to put such persons on a level with Hitler and Stalin.
To return to Hitler. Prior to the outbreak of WWI, Hitler, though by all accounts already quite as ruthless and opinionated as he subsequently proved himself to be in a larger arena, was a complete failure. He had a certain, rather conventional, talent for pencil drawing and some vague architectural notions, but that is about it. Whether Hitler would or could have made a successful architect, we shall never know, since he was twice refused entry by the Viennese Academy of Fine Arts. He certainly retained a deep interest in the subject and did succeed in spotting and subsequently promoting an architect of talent, Speer. But there is no reason to think we would have heard of Hitler if he had been accepted as an architectural student and subsequently articled to a Viennese firm of Surveyors and Architects.
As for public speaking, Hitler didn’t do any in his Vienna pre-war days, only discovering his flair in Munich in the early twenties. And although Hitler enlisted voluntarily for service at the outbreak of  WWI, he was for many years actually a draft-dodger wanted for national service by Austria, his country of birth. Hardly a promising start for a future grand military strategist.

Hitler’s Decisive Moment : the Beer Hall Putsch

 Hitler did, according to the few accounts we have by people who knew him at the time, have boyhood dreams of one day becoming a ‘famous artist’ — but what adolescent has not? Certainly, Hitler did not, in  his youth and early manhood, see himself as a future famous political or military figure, far from it. Even when Hitler started his fiery speeches about Germany’s revival and the need for strong government, he did not at first cast himself in the role of ‘Leader’. On the contrary, it would seem that awareness of his own mission as saviour of the German nation came to him gradually and spasmodically. Indeed, one could argue that it was only after the abortive Munich Beer-Hall putsch that Hitler decisively took on this role : it was in a sense thrust on him.
The total failure of this rather amateurish plot to take over the government of Bavaria by holding a gun to the governor’s face and suchlike antics turned out to be the turning-point of his thinking, and of his life. In Quattrocento Italy it was possible to seize power in such a way ─ though only the Medici with big finance behind them really succeeded on a grand scale ─ and similar coups have succeeded in modern Latin American countries. But in an advanced industrial country like Germany where everyone had the vote, such methods were clearly anachronistic. Even if Hitler and his supporters had temporarily got control of Munich, they would easily have been put down by central authority : they would have been seven-day wonders and no more. It was this fiasco that decided Hitler to obtain power via the despised ballot box rather than by the more glamorous but outmoded methods of an Italian condottiere.
The failed Beer-hall putsch landed Hitler in court and, subsequently, in prison; and most people at the time thought this would be the end of him. However, Hitler, like Napoleon before him in Egypt after the destruction of his fleet, was a strong enough character not to be brought down by the disaster but, on the contrary, to view it as a golden opportunity. This is an example of the ‘law’ of Eventrics that “a disadvantage, once turned into an advantage, is a greater advantage than a straightforward advantage”.
What were the advantages of the situation? Three at least. Firstly, Hitler now had a regional and soon a national audience for his views, and he lost no time in making the court-room a speaker’s platform, with striking success. His ability as a speaker was approaching its zenith : he had the natural flair and already some years of experience. Hitler was given an incredibly lenient sentence and was even at one point thanked by the judge for his informative replies concerning Germany’s recent history! Secondly, while in prison, Hitler had the time to write Mein Kampf which, given his lax, bohemian life-style, he would probably never have got round to doing otherwise. And his temporary court-room celebrity meant the book was sure to sell if written and published rapidly.
Thirdly, and perhaps most important of all, the various nascent extreme Right groups made little or no headway with the ‘leader’ in prison which confirmed them in the view that  Hitler was indispensable. Once out of prison, he found himself without serious competitors on the Right and his position stronger than ever.
But the most important outcome was simply the realization that the forces of the State were far too strong to be overthrown by strong-arm tactics. The eventual break with Röhm and the SA was an inevitable consequence of Hitler’s fateful decision to gain power within the system rather than by openly opposing it.

Combination of opposite abilities

 As a practitioner of Eventrics or ‘handler of events’, Hitler held two trump cards that are rarely dealt to the same individual. Firstly, even though his sense of calling seems to have come relatively late, by the early nineteen-thirties he was entirely convinced that he was a man of destiny. He is credited with the remarkable statement, very similar to one made by Cromwell, “I follow the path set by Providence with the precision and assurance of a sleepwalker”. It was this messianic side that appealed to the masses of ordinary people, and it was something that he retained right up to the end. Even when the Russian armies were at the gates of Berlin, Hitler could still inspire people who visited him in the Bunker. And Speer recounts how, even  at Germany’s lowest ebb, he overheard (without being recognized) German working people in a factory repeating like a mantra that “only Hitler can save us now”.
However, individuals who see themselves as chosen by the gods, usually fail because they do not pay sufficient attention to ordinary, mundane technicalities. Richelieu said that someone who aims at high power should not be ashamed to concern himself with trivial details  ─ an excellent remark. Napoleon has been called a ‘map-reader of genius’ and to prepare for the Battles of Ulm and Austerlitz, he instructed Berthier “to prepare a card-index showing every unit of the Austrian army, with its latest identified location, so that the Emperor could check the Austrian order of battle from day to day” (Note 1). Hitler had a similar capacity for attention to detail, supported by a remarkable memory for facts and figures — there are many records of him reeling off correct data about the range of guns and the populations of certain regions to his amazed generals.
This ‘combination of contraries’ also applies to Hitler as a statesman. Opponents and many subsequent historians could never quite decide whether Hitler, from the beginning, aimed for world domination, or whether he simply drifted along, waiting to see where events would take him. In reality, as Bullock rightly points out, these contradictions are only apparent : “Hitler was at once fanatical and cynical, unyielding in his assertion of will power and cunning in calculation” (Bullock, Hitler and the Origins of the Second World War). This highly unusual combination of two opposing tendencies is the key to Hitler’s success. As Bullock again states, “Hitler’s foreign policy… combined consistency of aim with complete opportunism in method and tactics. (…) Hitler frequently improvised, kept his options open to the last possible moment and was never sure until he got there which of several courses of action he would choose. But this does not alter the fact that his moves followed a logical (though not a predetermined) course ─ in contrast to Mussolini, an opportunist who snatched eagerly at any chance that was going, but never succeeded in combining even his successes into a coherent policy” (Bullock, p. 139).
Certainly, sureness of ultimate aim combined with flexibility in day to day management is a near infallible recipe for conspicuous success. Someone who merely drifts along may occasionally obtain a surprise victory but will be unable to build on it; someone who is completely rigid in aim and means will not  be able to adapt to, and take advantage of, what is unforeseen and unforeseeable. Clarity of goal and unshakeable conviction is the strategic part of Practical Eventrics while the capacity to respond rapidly to the unforeseen belongs to the tactical side.

Why did Hitler ultimately fail?

 Given the favourable political circumstances and Hitler’s unusual abilities, the wonder is, not that he lasted as long as he did, but that he eventually failed. On a personal level, there are two reasons for this. Firstly, Hitler’s racial theories, while they originally helped him to power, eventually proved much more of a drawback than an advantage. For one thing, since Hitler regarded ‘Slavs’ as inferior, this conviction unnecessarily alienated large populations in Eastern Europe, many of whom were originally favourable to German intervention since they had had enough of Stalin. Moreover, Hitler allowed ideological and personal prejudices to influence his choice of subordinates : rightly suspicious of the older Army generals but jealous of brilliant commanders like von Manstein and Guderian, he ended up with a General Staff of supine mediocrities.
Secondly, Hitler, though he had an excellent intuitive grasp of overall strategy, was a poor tactician. Not only did he have no actual experience of command on the battlefield but, contrary to popular belief, he was easily rattled and unable to keep a clear head in emergencies.
Jomini considered that “the art of war consists of six distinct parts:

  1. Statesmanship in relation to war
  2. Strategy, or the art of properly directing masses upon the theatre of war, either for defence or invasion.
  3. Grand Tactics.
  4. Logistics, or the art of moving armies.
  5. Engineering ─ the attack and defence of fortifications.
  6. Minor tactics.”
    Jomini, The Art of War p. 2

Hitler certainly ticks the first three boxes. But certainly not (4), Logistics. Hitler tended to override his highly efficient Chief of General Staff, Halder, whereas Napoleon always listened carefully to what Halder’s equivalent, Berthier, had to say. According to Liddell Hart, the invasion of Russia failed, despite the high quality of the commanders and fighting men, because of an error in logistics.
“Hitler lost his chance of victory because the mobility of his army was based on wheels instead of on tracks. On Russia’s mud-roads its wheeled transport was bogged when the tanks could move on. If the panzer forces had been provided with tracked transport they could have reached Russia’s vital centres by the autumn in spite of the mud” (Liddell Hart, History of the Second World War). On such mundane details does the fate of empires and even of the world often depend.
As for (5), the attack on fortifications, it had little importance in World War II, though the long-drawn-out siege of Leningrad exhausted resources and troops and should probably have been abandoned. Finally, on (6), what Jomini calls ‘minor tactics’, Hitler was so poor as to be virtually incompetent. By ‘minor tactics’, we should understand everything relating to the actual movement of troops on the battlefield (or battle zone) ─ the area in which Napoleon and Alexander the Great were both supreme. Hitler was frequently indecisive and vacillating as well as nervy, all fatal qualities for a military commander.
On two occasions, Hitler made monumental blunders that cost him the war. The first was the astonishing decision to hold back the victorious tank units just as they were about to sweep into Dunkirk and cut off the British forces. And the second was Hitler’s rejection of  Guderian’s plan for a headlong drive towards Moscow before winter set in; instead, following conventional Clausewitzian principles,  Hitler opted for a policy of encirclement and head-on battle. Given the enormous man-power of the Russians and their scorched earth policy, this was a fatal decision.
Jomini, as opposed to Clausewitz, recognized the importance of statesmanship in the conduct of a war, something that professional army officers and even commanders are prone to ignore. Whereas Lincoln often saw things that his generals could not, and on occasion successfully overrode them because he had a sounder long-term view, Hitler, a political rather than a military man, introduced far too much statesmanship into the conduct of war.
It has been plausibly argued, especially by Liddell Hart, that the decision to halt the tank units before Dunkirk was a political rather than a military decision. Blumentritt, operational planner for General Rundstedt, said, at a later date, that “the ‘halt’ had been called for more than military reasons, it was part of a political scheme to make peace easier to reach. If the British Expeditionary Force had been captured at Dunkirk, the British might have felt that their honour had suffered a stain which they must wipe out. By letting it escape, Hitler hoped to conciliate them” (Liddell Hart, History of the Second World War, I, pp. 89-90). This did make some kind of sense : a rapid peace settlement with Britain would have wound up the Western campaign and freed Hitler’s hands to advance eastwards, which had seemingly always been his intention. However, if this interpretation is correct, Hitler made a serious miscalculation, underestimating Britain’s fighting spirit and inventiveness.

Hitler’s abilities and disabilities

It would take us too far from the field of Eventrics proper to go into the details of Hitler’s political, economic and military policies. My overall feeling is that Hitler was a master in the political domain, time and again outwitting his internal and external rivals and enemies, and that he had an extremely good perception of Germany’s economic situation and what needed to be done about it. But he was an erratic and often incapable military commander ─ for we should not forget that, following the resignation of von Brauchitsch, Hitler personally supervised the entire conduct of the war in the East (and everywhere else eventually). This is something like the reverse of the conventional assessment of Hitler, so is perhaps worth explaining.
Hitler is credited with the invention of Blitzkrieg, a new way of waging war and, in particular, with one of the most successful campaigns in military history, the invasion of France, when the tank units moved in through the Ardennes, thought to be impassable. The original idea was in reality not Hitler’s but von Manstein’s (who got little credit for it), though Hitler did have the perspicacity to see the merits of this risky and unorthodox plan of attack, which the German High Command unanimously rejected. It is also true that Hitler took a special interest in the tank and does seem to have had some good ideas regarding tank design.
However, Hitler never seems to have rid himself completely of the conventional Clausewitzian idea that wars are won by large-scale confrontations of armed men, i.e. by modern ‘pitched battles’. Practically all (if not all) the German successes depended on surprise, rapidity of execution and artful manoeuvre ─ that is, on precisely the avoidance of direct confrontation. Thus the invasion of France, the early stages of the invasion of Russia, Rommel in North Africa and so on. When the Germans fought it out on a level playing field, they either lost, as at El Alamein, or achieved ‘victories’ that were so costly as to be more damaging than defeats, as in the latter part of the Russian campaign. Hitler was in fact only a halfway-modernist in military strategy. “The school of Fuller and Basil Liddell Hart [likewise Guderian and Rommel] moved away from using manoeuvre to bring the enemy’s army to battle and destroy it. Instead, it [the tank] should be used in such a way as to numb the enemy’s command, control, and communications and bring about victory through disintegration rather than destruction” (Messenger, Introduction to Jomini’s Art of War).
As to the principle of Blitzkrieg (Lightning War) itself, though it doubtless appealed to Hitler’s imagination, it was in point of fact forced on him by economic necessity : Germany just did not have the resources to sustain a long war. It was make or break. And much the same went for Japan.
Hitler’s duplicity and accurate reading of his opponents’ minds in the realm of politics needs no comment. But what is less readily recognized is how well he understood the general economic situation. Hitler had doubtless never read Keynes ─ though his highly capable Economics Minister, Schacht, doubtless had. But with his talent for simplification, Hitler realized early on that Germany laboured under two crippling economic disadvantages : she did not produce enough food for her growing population and, as an industrial power, lacked indispensable natural resources, especially oil and quality iron-ore. So where to obtain these and a lot more essential items? By moving eastwards, absorbing the cereal-producing areas of the Ukraine and getting hold of the oilfields of the Caucasus. This was the policy expounded to the German High Command in the so-called ‘Hossbach Memorandum’ to justify the invasion of Russia to an unenthusiastic general staff.
The policy of finding Lebensraum in the East was based on a ruthless but shrewd and essentially correct analysis of the economic situation in Europe at the time. But precisely because Germany would need even more resources in a wartime situation, victory had to be rapid, very rapid. The gamble nearly succeeded : as a taster, Hitler’s armies overwhelmed Greece and Yugoslavia in a mere six weeks and at first looked set to do much the same in Russia in three months. Perhaps if Hitler had followed Guderian’s plan of an immediate all-out tank attack on Moscow, instead of getting bogged down in Southern Russia and failing to take Stalingrad, the gamble would actually have paid off, though fortunately for the Russians it did not.

Hitler: Summary from the point of view of Eventrics

The main points to recall from this study of Hitler as a ‘handler of events’ are the following:

  1. The methods chosen must fit the circumstances (witness Hitler’s switch to a strategy based on the ballot box rather than the revolver after the Beer-Hall putsch);
  2. An apparent defeat can be turned into an opportunity, a disadvantage into an advantage (e.g. Hitler’s trial after the Beer-hall putsch);
  3. Combining inflexibility of ultimate aim with extreme flexibility on a day-to-day basis is a near invincible combination (Hitler’s conduct of foreign affairs during the Thirties);
  4. It is disastrous to allow ideological and personal prejudices to interfere with the conduct of a military campaign, and worse still to become obsessed with a specific objective (e.g. Hitler’s racial views, his obsession with taking Stalingrad).


Panspermia and the Black Death

Where and how did life begin?

Can/could there be a universe without life? Life without a universe? Contemporary science says yes to the first question and no to the second, whereas most religions tend towards the opposite point of view; indeed disagreement on the subject is perhaps the main bone of contention between the two camps. Given that there is such a thing as life, and that we know how to recognize it when and where it exists (no easy task), where did it start? For Galileo and Newton, there was only one place where it could possibly start, the Earth. And, as to how and why it began in the first place, few if any of the ‘classical’ physicists troubled themselves about the question since they were all believers, if not in Christ at least in God. The paradigm of a unique universe created once and for all by an omnipotent intelligence, and henceforth forced to obey rules laid down by this intelligence, served physics and mechanics well for several centuries. But it was not clear how ‘life’, especially human life,  could be fitted into this schema which is probably the reason why Newton, having sorted out the physical side of things, tried his hand at alchemy. One can (perhaps) reduce biology, the life science, to chemistry but not to mechanics and in Newton’s day chemistry scarcely existed.

Darwin was extremely reticent on the subject of the origins of life, though he did famously speak of “a warm little pond with all sorts of ammonia and phosphoric salts present”, a place where “a protein compound was chemically formed ready to undergo still more complex changes”. This shows that Darwin was at least firmly committed to the idea that life developed ‘spontaneously’ without the need for supernatural intervention or planning. Eventually, in the nineteen-fifties, Miller caused a sensation by simulating in a test-tube the Earth’s supposed early atmosphere (water vapour, ammonia, methane and hydrogen) and subjecting the mixture to electrical discharges. The result was the spontaneous formation of certain compounds, including amino-acids, the building blocks of proteins. However, it would seem that Miller and Urey got the composition of the early Earth’s environment wrong, and there are other reasons why the Miller/Urey experiment is no longer considered a good indication of how life on Earth started. Since the discovery in the nineteen-seventies of sulphur-consuming microbes in deep-ocean hydrothermal vents, and the copious eco-systems to which they give rise, the theory that life began under the surface of the Earth, rather than on it, has become more and more popular. Such bacteria do not require sunlight to produce energy, i.e. do not photosynthesize, and, because of their location, they would have been at least partially protected against the intense bombardment of the Earth’s surface by meteorites which was so characteristic of the early years of the Earth’s history.

Life from elsewhere

There is, however, another way to explain the sudden appearance of primitive life on Earth about 3.85 billion years ago : it came from somewhere else in the universe. The idea that there might well be life outside the Earth goes back as far as the IVth century B.C. and demonstrates how astonishingly ‘modern’ many of the Greek thinkers were in their outlook. Although better known for his contention that the goal of human life is, and should be, pleasure, Epicurus also wrote extensively on physical matters. Unfortunately these works have been lost and are only known via his Roman follower, Lucretius, author of the long poem De Rerum Natura. The latter writes:

“If atom stocks are inexhaustible
Greater than the power of living things to count,
If Nature’s same creative power were present too
To throw the atoms into unions — exactly as united now
Why then confess you must
That other worlds exist in other regions of the sky
And different tribes of men, and different kinds of beasts.”

Note that this is a reasoned argument based on the premises that (1) there exists an abundant supply of the building-blocks of life (atoms), (2) these building-blocks combine together in much the same way everywhere, and (3) Nature’s ‘creative power’ (‘energy’) is limitless. Therefore, there must exist other habitable worlds containing beings made of the same material as us but different from us. This is almost word for word the argument put forward by contemporary physicists to show that we are not alone in the universe. Lucretius does not go so far as to suggest that there could be, or ever had been, any interaction between these different ‘worlds’. However, the Gnostics (‘Knowers’, ‘Those who Know’), a half-pagan, half-Christian sect that flourished during the declining Roman Empire, taught that humankind did not originate here but came from what we would call ‘outer space’ — indeed this is precisely what they knew and what ordinary persons didn’t (Note 1).


The belief that life did not begin here but was brought from elsewhere in the cosmos seems to have disappeared after the triumph of orthodox Christianity, but eventually re-surfaced at the end of the nineteenth century in a place where one would not normally expect to find it, namely physical science. The 19th-century German scientist von Helmholtz, a hard-nosed physicist if ever there was one, wrote in 1874: “…it appears to me a fully correct scientific procedure to raise the question whether life is not as old as matter itself and whether seeds have not been carried from one planet to another and have developed everywhere that they have found fertile soil.” (Note 2) Lord Kelvin agreed, arguing that collisions could easily transport material around the solar system and thus ‘infect’ other planets with life, as he put it. And the Swedish chemist, Svante Arrhenius, energetically took up the idea, which he dubbed ‘Panspermia’ (‘seeds everywhere’). Nearer our own time, Francis Crick of DNA fame argues in his book Life Itself, Its Origin and Nature that “microorganisms … travelled at the head of an [unmanned] spaceship sent to Earth by a higher civilization which had developed elsewhere billions of years ago.”

 Life as a Cosmic Phenomenon   

But the theory that life came from outer space is above all associated with the work of Fred Hoyle and Chandra Wickramasinghe who have expounded it in great detail in a series of books and scientific papers. They summarize their position thus : “The essential biochemical requirements of life exist in very large quantities within the dense interstellar clouds of gas. This material [eventually] becomes deposited within the solar system, first in comet-type bodies, and then in the collisions of such bodies with the Earth. (…) The picture is of a vast quantity of the right kind of molecules looking for suitable homes, and of there being very many suitable homes [i.e. planets].” (Note 3)

As the two authors never tire of pointing out, ‘Panspermia’ is not just another scientific theory : it constitutes a paradigm shift only to be compared with the shift from a geocentric to a heliocentric viewpoint instigated by Copernicus. Just as humanity previously — and mistakenly — considered itself to be situated at the centre of the universe and the favoured creation of the Almighty, so biologists and astronomers today mistakenly tend to take it for granted that the Earth has been particularly favoured to be the unique seat of intelligent life. In its full development, as propounded in Hoyle’s The Intelligent Universe (the title says it all), the theory of ‘panspermia’ really means what it says : ‘seeds of life everywhere’. Hoyle and Wickramasinghe argue that life is so improbable an eventuality, requiring so much fine-tuning of physical constants, that it has either always existed (Hoyle’s preferred option) or has only come about once. A complicated circulation system, involving interstellar dust, comets and meteorites, is responsible for randomly disseminating the seeds of life throughout the universe in the expectation that at least one or two of them will fall on fertile ground somewhere, sometime. As one might expect, the Hoyle/Wickramasinghe theory was, and is, highly controversial. In the past they would have run foul of the Inquisition — Giordano Bruno was burnt at the stake for propounding something vaguely similar — but, this being the 20th century, their main opponents have come from within the scientific establishment. The two scientists, despite their impressive credentials, were met with what Hoyle describes euphemistically as “a wall of silence”. It is even said that Hoyle’s outspoken advocacy of ‘panspermia’ is the main reason he was not granted the Nobel Prize for his seminal work on nucleosynthesis in stars.

Evidence in favour of the Hoyle/Wickramasinghe Theory

We require a new scientific theory to (1) explain known data in a more elegant and satisfactory way than current theories and (2) make predictions which can be tested experimentally. Now the Hoyle/Wickramasinghe theory certainly does explain something that contemporary astro-physics struggles to make any sense of, namely the surprisingly early appearance of bacterial life on the Earth. The Earth is currently considered to have formed some 4.6 billion years ago, but for most, if not all, of the first seven or eight hundred million years it must have been a boiling inferno uninhabitable even for heat-loving microbes. However, the high level of carbon-12 in certain rocks that date back 3.85 billion years suggests that microbial life already existed at that early date. We are not here talking about organic molecules but about unicellular organisms which, though ‘primitive’ compared to plants and mammals, possess DNA (or RNA). It is scarcely credible that the transition from a few diffuse chemicals to such a highly organized entity as a prokaryotic cell came about in such a short time in evolutionary terms, especially since the subsequent transition from bacterial to multicellular life took around 3 billion years! But for Hoyle and Wickramasinghe this is not a problem : primitive life arrived here ready-made, a seed quite literally falling from the sky, either naked or enclosed in the remnants of a comet that, like Icarus, had ventured too near the Sun and had spilled its contents into the atmosphere (Note 4). So, is there any evidence that a microbial spore could exist in interstellar space and be wafted to Earth riding on a gas cloud or enclosed in the kernel of a wandering comet? In the seventies, when Hoyle and Wickramasinghe first advanced the general idea, it sounded more like science fiction than science — Hoyle was, after all, the author of a successful SF novel, The Black Cloud. But since then the critics have had to eat their words — though in point of fact scientists never seem to actually bring themselves to do such a terrible thing — since it is now common knowledge that interstellar dust is full of molecules, many of them organic, and that comets contain most, and probably all, of the basic ingredients for life. “Analysis of the dust grains streaming from the head [of Halley’s comet] revealed that as much as one third was organic material. Common substances such as benzene, methanol, and acetic acid were detected, as well as some of the building blocks of nucleic acids. If Halley is anything to go by, then comets could easily have supplied Earth with enough carbon to make the entire biosphere.” (Davies, The Origin of Life, p. 136)

It is also generally admitted now that there is a considerable exchange of (not necessarily organic) material throughout the galaxy : the very latest issue of Science (15 August 2014) contains an article “Evidence for interstellar origin of seven dust particles collected by the Stardust spacecraft”. So far, so good. But all this, of course, stops well short of Hoyle and Wickramasinghe’s claim that actual bacterial spores and/or viruses can and do make the perilous journey from a cloud of interstellar dust to a comet in the Oort Cloud and then on to the interior of the solar system and us. A number of claims for the presence of fossilized organic material in meteorites have nonetheless been made, for example by Claus and Nagy in the 1960s and, more recently, by two researchers from the University of Naples, D’Argenio and Geraci, who concluded that their results constituted “clear evidence for the existence of extra-terrestrial life”. Those who wish to pursue the topic further are referred to a recent article by Chandra Wickramasinghe on DNA Sequencing and Predictions of the Cosmic theory of Life (and the extensive bibliography at the end), which is available online free of charge.

Can bacteria and viruses reach the surface of the Earth? 

The suggestion that diseases can come from space follows on from Hoyle and Wickramasinghe’s belief that life came from space in the first place. For the cosmic bombardment continues unabated though, thankfully, on a lesser scale than during the first billion or so years of the Earth’s history. Suppose for the moment that a certain amount of interstellar ‘dust’, some of it organic, finds its way into a comet inhabiting the so-called Oort Cloud situated beyond the orbit of Pluto. Incredibly, there are an estimated billion or more such comets on highly eccentric orbits, and a few enter the inner reaches of the solar system each year. Near the Sun much of the comet melts, ejecting millions of tons of incandescent debris into space, and the Earth, like the other planets, cannot avoid ploughing through this cosmic muck. So much is fact. But can any organic material make it to ground level without being burned up or crushed? If the answer is no, there is no point in discussing the diseases-from-space theory. However, it seems that bacteria and viruses, especially if protected by some sort of coating, can enter the Earth’s atmosphere without burning up if they come in at an oblique angle. H & W have calculated that “to fit the heating requirement, according to our criteria bacteria are about as big as they can possibly be”. If a bacterium exceeds 1 micron in length it must, according to H & W, be rod-shaped, i.e. what we term a ‘bacillus’. (Yersinia pestis is rod-shaped, incidentally.) Small particles will be wafted this way and that by air currents or descend inside raindrops or snowflakes. Have any micro-organisms of suspected cometary origin been identified? This is difficult to establish, since there is always the possibility of terrestrial contamination, but high resistance to ultra-violet radiation would suggest an extra-terrestrial origin. Again various claims have been made, for example by Wainwright, Shivaji et al., and one new bacterial species culled from the stratosphere has even been named Janibacter hoylei sp. nov.!

Tell-tale signs of cosmic origin

What characteristics of epidemics/pandemics does official epidemiology have difficulty in explaining? Or, to put things the other way round, supposing for a moment that pathogens can and do arrive from outer space, what special features would we expect to find in the consequent epidemics? Basically, the following:

  1. We would expect an epidemic/pandemic with an extra-terrestrial origin to be unusually severe, because the immune systems of potential hosts would be taken unawares;
  2. We would expect very rapid spread, since the pathogens would not originally be dependent on human or animal vectors (they would rain down on people from the air);
  3. We would expect a very wide but somewhat patchy distribution, since the pathogens would be taken here and there by air currents;
  4. We would expect the attack to be ‘one-off’, since only after several outbreaks would a new disease be able to establish itself with a permanent terrestrial focus ─ to begin with it would most likely be too lethal and kill off potential hosts.

So, are there any epidemics or pandemics that fit the bill? Yes, I can think of several candidates at once: the Plague of Athens of the Vth century B.C. (1, 2 and 4); the Plague of Justinian (1, 2, 3 and 4); the ‘Sweats’ of the Reformation period (1, 2 and 4); and the 1918-19 outbreak of Spanish Flu (1, 2, 3 and 4). Unfortunately, almost all outbreaks of disease prior to the 19th century are poorly documented, so this only leaves the 1918-19 Spanish Flu Pandemic, on which H & W concentrate in their book (along with the Common Cold). Since the Spanish Flu killed rather more people than WWI worldwide ─ estimates vary from 26 million to 50 million deaths ─ there is no doubt about (1), the severity. Also, because the influenza virus is constantly mutating, it generally manages to keep ahead of the human immune system : each wave of infection is thus essentially new even if given the same general name. So Spanish Flu passes on (4), and also qualifies on (2) and (3).

Admittedly, Spanish flu had the benefit of (from its point of view) modern transportation systems and is known to have been passed on by person-to-person contact (especially by sneezing). However, “The lethal second wave [of Spanish Flu] involved almost the entire world over a very short space of time…. Its epidemiological behaviour was most unusual. Although person-to-person spread occurred in local areas, the disease appeared on the same day in widely separated parts of the world on the one hand, but took days to weeks to spread over relatively short distances. It was detected in Boston and Bombay on the same day, but took three weeks to reach New York City, despite the fact there was considerable travel between the two cities….” Who wrote the above ─ Hoyle and Wickramasinghe? In fact, no. The author is a certain Dr. Louis Weinstein, cited by H & W and presumably a contemporary of the pandemic. Likewise, H & W cite Professor Magrassi commenting on the 1948 influenza epidemic : “We were able to verify…. the appearance of influenza in shepherds who were living for a long time alone, in solitary open country far from any inhabited centre; this occurred absolutely contemporaneously with the appearance of influenza in the nearest inhabited centre”.

It is on the basis of this kind of evidence, along with detailed maps showing the spread and distribution, that H & W make their case for extra-terrestrial origin. But everything they say about the Spanish Flu pandemic applies a fortiori to the best candidate of the lot, at least on counts (1), (2) and (3), namely the Black Death itself.

Did the Black Death come from Space?

H & W devote only five pages of their well-documented book to the Black Death, and the treatment is sketchy indeed. When H & W were writing (in the late nineteen-seventies) the official view was that the Black Death was undoubtedly bubonic plague and that it was spread about by rats : even so learned an author as Shrewsbury does not for a moment question the received wisdom, though he does state that the quantity of rats required to get such a pandemic going would be enormous. H & W also assume that the Plague of Justinian and the Black Death were caused by the same pathogen, an assumption Dr Twigg and others have questioned. H & W do, however, make one or two valid points. They rightly ridicule the idea of an army of rats advancing like killer ants across most of Europe, infecting all and sundry as they march. But, still keeping within the rat/bubonic plague schema, they point out that, if the pestilence was spread about by ship, at least to begin with, we would expect it to spread systematically inland from seaports, something they claim is not the case if we examine the evidence. For example, they cite the testimony of Carbonell, archivist to the Court of Aragon, who reports that the Black Death “began in Aragon, not at the Mediterranean coast or at the eastern frontier, but in the western inland city of Teruel.”

However, we have to distinguish evidence that the Black Death was propagated by air (which I certainly believe) from the belief that it came from outer space in the first place. H & W emphasize the very extensive but strangely patchy distribution of Black Death mortality : while it attained remote hamlets and monasteries, it spared Milan and Nuremberg almost completely. But again this is evidence for airborne dissemination rather than proof of extra-terrestrial origin. Dr Twigg has also pointed out to me that a vast quantity of bacteria would be required to start the pandemic going in the manner H & W suggest. The Japanese experimented with bubonic plague as a biological weapon during WWII and dropped some 36 kilos of plague-infested fleas over Chinese cities. Seemingly only 7,000 humans died, a figure compiled some fifty years later and thus, if anything, more likely to have been inflated than understated. Serious though this must have been for the unwilling recipients of this Japanese manna from heaven, it is not a fantastic death toll. And one would expect dropping infected fleas to be a more efficient method of spreading the disease than simply having bacteria drift down spasmodically.

Conclusion that there is no conclusion

So where does that leave us? As far as I am concerned, I would say that Hoyle and Wickramasinghe have made a fair case for the Black Death coming from outer space, given the extreme severity, rapid spread and extended but patchy distribution of the pandemic, the worst in human history. As to (4), whether the Black Death was a ‘one-off’ outbreak that failed to establish itself or not, this depends on whether one considers that subsequent outbreaks during succeeding centuries were, or were not, the same disease. The jury is still out on this topic and both sides have made valid points. But there is no doubt that ‘plague’, whatever it was, suddenly and mysteriously disappeared from Europe in the early eighteenth century, never to return apart from a few sporadic 20th century cases. But I am not sure that a supposed extra-terrestrial origin has any particular bearing on this circumstance.
More specifically, I do not feel that H & W have said anything to shake my personal conviction that the Black Death was not bubonic plague. If the Black Death was plague and came from space, this suggests that the pandemic started off as the pneumonic variety, since human beings would absorb the pathogen directly from the air or from fresh water. It would subsequently have passed from human beings to rats, the bubonic variety would then have come to dominate, and the ‘normal’ scenario would have developed from then on. Now, an initial outbreak of bubonic plague amongst rodents can spread to humans and become a predominantly pneumonic outbreak (this is what happened in Manchuria in 1910), but I do not know of any case of an initial pneumonic epidemic giving rise to bubonic plague. Moreover, supposing for the moment that the Black Death was plague, it is not possible that the outbreak was exclusively pneumonic, since pneumonic plague kills its victims before buboes have time to form, and we have plenty of contemporary medieval descriptions of buboes, whatever it was that caused them. H & W’s suggestion that plague came from above does not help us to understand the spread of the 14th century pandemic, rather the reverse. It may well be that the cause of the Black Death, whatever it was, did come from space, but this does not advance us in understanding what the disease was and how it propagated. Although H & W’s books have made me a likely convert to the general theory of panspermia, I do not feel they have brought us any closer to understanding the Black Death, which remains as much of an enigma as ever.            SH 12/10/14

Note 1  According to the Gnostics, the universe was not deliberately created by an omnipotent God but was the result of a cosmic accident with tragic consequences, i.e. was the result of Chance rather than Necessity. They identify God with ‘the Light’ and, in one version, relate how Sophia, the ancestress of humanity, ‘fell’ from a domain of light into a dark, cold and empty universe. Some ‘seeds of light’ end up on the Earth, which is ringed by hostile powers (archons) who prevent those who have become aware of their true nature from returning to their place of birth. This schema is really quite close to what we, and especially H & W, currently believe to be the case. It would seem that all the heavier elements, including carbon and oxygen, were created by nuclear fusion in the heart of stars and were subsequently disseminated throughout the universe in supernova explosions. Any material ejected would certainly have found itself in a cold, dark and hostile world. Also, since our bodies are largely made of carbon, hydrogen and oxygen, we are indeed “stardust”. And if Hoyle and Wickramasinghe are right and the first organisms formed in space, we are aliens or the progeny of aliens, since the Earth is not humanity’s place of birth.

Note 2 Quoted by Paul Davies in The Origin of Life (p. 125)

Note 3 From Hoyle & Wickramasinghe, Life Cloud (1978). The original schema given in this book and Diseases from Space (1979) goes something like this :

(1) The cycle starts with outflows of gaseous material from the surfaces of stars;
(2) High-density clouds give rise to ‘dust’, “solid particles with dimensions comparable to the wavelength of visible light” (H & W);
(3) Under compression, organic molecules present in the gas clouds “condense in solid form onto the dust grains” (H & W);
(4) In particular formaldehyde (H2CO), which is “one of the commonest organic molecules actually found by actual observation in interstellar space” condenses onto grains and a process of polymerization is induced by cosmic rays;
(5) Sugars and polysaccharides form — and “it has been shown by C. Ponnamperuma that this can happen when formaldehyde is exposed to ultra-violet radiation”;
(6) All of the basic building blocks of life are formed in a similar way and are transported randomly around the galaxy by comets;
(7) Primitive living organisms evolve inside some comets;
(8) One or two comets are drawn within the inner solar system and, on partially melting, deposit clouds of living material some of which falls onto the Earth.

Wickramasinghe has subsequently extended this model to one where “the genetic products of evolution on a planet like the Earth were mixed on a galactic scale with products of local evolution on other planets elsewhere” (from the paper on DNA sequencing mentioned earlier).

Note 4   At first sight it might seem that Hoyle and Wickramasinghe have not answered the problem of the origins of life but have simply shelved it by situating it somewhere else. Their reply would be that, firstly, rather more suitable environments for the development of life than the early Earth have certainly existed, and, secondly, given the vast number of galaxies and the age of the universe (some 14 billion years), even such an improbable event as the emergence of microbial life might conceivably have occurred (and apparently did). The point is that, given the cosmic circulation system they propose, life need only occur once somewhere for it eventually to reach practically everywhere (though it would of course not catch on equally well everywhere). And this is assuming the Big Bang scenario. If we assume Hoyle’s modified Steady State model, life has always existed and always will.



Napoleon Buonaparte : Case Study in Eventrics

“There is a tide in the affairs of men,
Which, taken at the flood, leads on to fortune”

                                         Shakespeare, Julius Caesar

Eventrics, a term I have coined, is the theory of events and their interactions. Up to now, I have almost entirely given my attention to the ‘micro’ end of Eventrics, that is to Ultimate Event Theory, an ‘ultimate event’ being the smallest possible event, roughly the equivalent of an atom or elementary particle. But it is now time to turn to ‘macro-Eventrics’ and, in particular, human power politics. Any principles that underlie massive human-directed complexes of events have, on the face of it, little or nothing to do with the sort of things I have been discussing up to now [on the website] ─ but that is just as true of matter-based physics when we shift from the world of the electron to the world of matter in bulk. As a disbeliever in continuity, I ought to be prepared for such a difference, but I would never have expected it to be so large. It is notoriously difficult to say exactly what this extra ingredient is, which is why reductionist theories have, in the last two hundred years or so, decisively gained the upper hand over ‘vitalist’ or ‘system’ theories. For the moment, the question of how and at what exact scale groups of causally bonded ultimate events start behaving in a qualitatively different manner from their individual components will be laid aside.

Chief features of ‘power’ event-chains

In the last post I mentioned a few features of power politics viewed from the standpoint of Eventrics and, in particular, enunciated the basic doctrine that “it is events, not human beings, that drive history”. I stressed the importance of a so-called ‘tipping point’ or ‘moment of opportunity’ in the fortunes of famous individuals. Persons become powerful, so I argued, not because they have outstanding intelligence, looks or charm ─ though clearly such things are assets ─ but because they (1) “fit the situation they find themselves in” and (2) seize with both hands the passing opportunity that presents itself (if it presents itself). I also advanced the notion that the recommended way to seize power and hold it is based on two, and essentially only two, items, summed up in the phrase the US made notorious during the invasion of Iraq : “Shock and Awe”.

As I descended the stairs immediately after putting this post on the Internet, my eye was drawn to a battered second-hand book on Napoleon (Napoleon by Paul Johnson) that I remembered only vaguely. I opened it and came across the following passage that I had marked in red in the margin :

“Victor Hugo, a child of one of Bonaparte’s generals, was later to write : Nothing is more powerful than an idea whose time has come. It is equally true to say : No one is more fortunate than a man whose time has come. Bonaparte was thus favoured by fortune and the timing of the parabola, and he compounded his luck by the alacrity and decision with which he snatched at opportunities as they arose.” (Paul Johnson, Napoleon)

Exactly so. The author would seem to be wholly in agreement with the ‘First and Second Principles of Eventrics’. I then wondered how some of the other general principles I tentatively outlined applied to Napoleon. Did he have his ‘moment of opportunity’? Did he apply “Shock and Awe” as his principal method? In fact, yes, on both counts, but first let us enquire a little more into the thesis that it is not the man who commands events but rather the events that offer the opportunity to the man.
This is not quite the same thing as saying a successful person is ‘lucky’, though as a matter of fact a surprising number of very successful people do say this, and not out of modesty.

Three Dictators

If we exclude Russia as being somewhat out on its own, and thus Stalin, the three most powerful non-elected individuals in Western history are Cromwell, Napoleon and Hitler. It is worthwhile examining their backgrounds carefully. None came from a rich, well-educated, aristocratic background ─ but none of them came from a working-class background either (Note 1). Oliver Cromwell, though distantly related to Henry VIII’s all-powerful minister Thomas Cromwell, came from the ‘lower gentry’ and even for a while apparently worked his own land. He did attend Cambridge for one year but made no sort of mark there. More to the point, he had absolutely no formal military training, perhaps a blessing in disguise since it forced him to improvise and innovate. As for Hitler, he was the son of an Austrian customs official and had at least enough education to be able to present samples of his work to the Vienna Academy of Fine Arts (which twice refused him). Bonaparte’s family were small-time impoverished Corsican gentry turned lawyers, “just rich enough to own their own house and garden and employ servants” as Johnson puts it. Coming from the lower gentry or the ‘upwardly mobile’ lower middle class can actually be an advantage if you have your eye on the heights : such persons have enough of a leg up to obtain one or two useful contacts and a minimum of education and professional training  — but not enough to spare them the titanic effort needed if they want to seriously improve their condition.

The Young Napoleon  

Of the three ‘dictators’ Napoleon was without a doubt the most talented in the ‘normal’ sense of the word. He had outstanding powers of concentration and application and a memory for “facts and locations” that never ceased to astonish people; he was a better mathematician than any Western ruler I can think of, and such a good map-reader that he would be considered a ‘genius’ if we treated cartography on a level with, say, astronomy. At school he was not quite a prodigy ─ real prodigies rarely achieve anything in later life ─ but he was not far short of being one. Whereas the military training at the Ecole Militaire, Paris, usually took two or three years, Napoleon passed all his exams in a single year and qualified as a 2nd Lieutenant at the age of just 16. His examiner in mathematics was the world-famous mathematician and astronomer, Laplace.
So, what would Buonaparte have been in another time and place? There have been extremely few eras in history when talent alone could win out over entrenched interests and tradition. All three of our ‘dictators’ came to power in extremely uncertain times, Cromwell during the English Civil War, Hitler during the chaotic post-war German Weimar Republic and Napoleon himself emerged from obscurity after the most famous revolution of all time. Even so, Napoleon, a ‘provincial’ with a Corsican accent and an obscure titre de noblesse that his fellow students didn’t take seriously, and with no family money behind him, nearly missed the boat since most of the best places were already filled by the time he hit Paris. Under the Directory, most important appointments were controlled by a self-serving clique that ruled France from the centre of Paris, and the ambitious young Buonaparte had little direct access to this circle. At one point he thought of offering his services to the Sultan or emigrating to India where he might well have been a rather less successful Clive ─ less successful only because France did not bother so much about India as England did.
My conclusion is that, in another time and place, Buonaparte would certainly have been someone, but equally certainly he would not have been Napoleon.

“A Strange Combination of Circumstances”
As countless historians have pointed out, had Napoleon been born a decade or so earlier, when his native Corsica was owned by Genoa, he would not have been a French citizen at all and so would not have qualified to become a boursier (paid student), first at Brienne and later at the Ecole Militaire in Paris. The teaching seems to have been pretty good and, by a second stroke of luck, the young officer, shortly after receiving his commission, was assigned to an artillery regiment commanded by Baron du Teil, “perhaps the most distinguished gunner in the French army” (Marshall-Cornwall, Napoleon as Military Commander). The Baron was extremely impressed by the cadet officer, who had not yet been under fire, and helped him as much as he could. And the scarcely credible slackness of a young officer’s life under the ancien régime (when there wasn’t actually a war on) meant that the young Napoleon had plenty of time to supplement his formal education with intensive personal study, not just military theory but also politics and ancient history (Note 2).
After the revolution, virtually all the top officers went over to the royalist side, so the fledgling republic needed all the trained soldiers it could find and was ready to promote them accordingly. Nonetheless, Napoleon very nearly squandered the chance of a lifetime because he got embroiled in the struggle for Corsican independence, changing sides more than once, until he finally opted for Revolutionary France. But then, while inspecting the coastal defences of the South of France, he had three amazing strokes of luck : (1) the General in charge of the forces near Toulon, Carteaux, was incompetent and soon to be relieved of his functions; (2) Carteaux’s artillery commander, Dommartin, was badly wounded; and (3) Napoleon ‘happened’ to come across a certain Saliceti, an old Corsican friend of the family, now a leading politician in the Paris Convention. Saliceti arranged for the promising but still relatively inexperienced officer, still only twenty-four, to be put in charge of the artillery for the relief of Toulon. As Marshall-Cornwall says, “It was a strange combination of circumstances”.

First Moment of Opportunity : Awe
Toulon was the first and most important ‘tipping point’ in Napoleon’s career and, though it was ‘Fortune’ that gave him the golden opportunity, it was Napoleon who seized it firmly with both hands and never let it go. The siege of Toulon, more correctly the attack on the forts and batteries surrounding Toulon, was an impeccably executed manoeuvre conducted largely according to plans drawn up by Napoleon himself and enthusiastically endorsed by the new commander, du Teil, the brother of Napoleon’s artillery mentor (another amazing stroke of luck). Napoleon himself took part in the final assault on the key position, Fort Mulgrave, and was wounded by a bayonet. His commander du Teil sent a glowing report to the War Minister : “Words fail me to describe Buonaparte’s merits. He has great knowledge and as much intelligence and courage, and that is only a faint outline of the qualities of this rare officer”.
Toulon was, militarily speaking, an awesome performance, and at the age of 24 the young artillery officer found himself promoted to général de brigade (Brigadier), skipping all the intermediary ranks including Colonel. This could surely never have happened in any national army in the world except perhaps during the American War of Independence. Shortly before the successful conclusion of the siege, Napoleon was brought to the notice of a ‘political commissar’ ─ one feels oneself already in the 20th century ─ sent by the Paris Convention to scout around for promising talent, but doubtless also to check on people’s political correctness. The person in question was a Royalist officer turned Republican, Paul Barras. This remarkable man was to play a key role in Napoleon’s life since he later downloaded onto him an attractive but aging mistress with bad teeth, Josephine de Beauharnais, and, more important still, provided Napoleon with his first chance to show his political mettle. It is well to remember that it was Barras who spotted Napoleon and not the reverse, and that Napoleon came to his notice not through social contacts but by his actions. Although Barras was an opportunist who wanted to use Napoleon for his own purposes, the other senior figures, the two du Teils and Napoleon’s new superior General Dugommier, all war-hardened veterans, seem to have been literally spell-bound by the young officer, somewhat as Baudricourt was spell-bound by Jeanne d’Arc.

Second Moment of Opportunity : Shock
I mentioned in the last post that the recipe for power is Shock & Awe (not necessarily in that order). Occasionally, figures achieve eminence without the first element, but they are almost always religious or artistic figures, not political or military ones. Machiavelli is quite adamant about the importance of Shock and advises the would-be usurper to get this part over with as quickly and decisively as possible : “If you take control of a state, you should make a list of all the crimes you have to commit and do them all at once. He who acts otherwise, either out of squeamishness or out of bad judgment, has to hold a bloody knife in his hand all the time. (…) Do all the harm you must at one and the same time, that way the full extent of it will be noticed, and it will give least offence” (Note 3).
We do not know at what point Napoleon decided he wanted not only military but political power as well. Actually, he did not have a lot of choice. While the chaotic aftermath of the French Revolution gave able young officers like Napoleon their chance, it also made them extremely vulnerable to the vicissitudes of party politics. After such a brilliant start, Napoleon was briefly imprisoned when the Jacobins were guillotined, since he had been on good terms with the younger Robespierre. The ever-ready Corsican Saliceti came to his aid, arguing that France needed officers like Napoleon ─ but from then on Napoleon was regarded by the authorities with some distrust, though he was returned to his duties. After the collapse of the first attempt to invade Italy (with plans partly drawn up by Napoleon himself), Napoleon must have felt that his promising career had been nipped in the bud. There he was, poor, regarded with some suspicion in Paris and, despite his striking looks, gauche and unsuccessful with women.
But Barras was now the leading figure of the Directory. This man, even more than Napoleon in a sense, was an absolute master of what I call Eventrics since, incredibly, having started off as a Royalist officer, he went through all the vicissitudes of the Revolution unscathed, changing sides at exactly the right time and, like a political Soros, always backing the winner. In 1795 Napoleon was called to the bureau topographique (a sort of planning office) in Paris. The political situation was extremely serious : the poorer population of Paris, feeling that the revolution had been snatched out of their hands by a lot of devious politicians and currency speculators, was getting ready for the third stage of the Revolution. Barras was granted full powers to restore order while the other members of the new government barricaded themselves in the Tuileries. Napoleon’s job was to quell the revolt. According to Johnson, on 13 vendémiaire (5 October) about 30,000 malcontents (??), many of them armed, rampaged through the centre of Paris. This sounds like an enormous number of people given the smaller populations of the time, and much of Paris was an ideal battleground for urban guerrilla warfare ─ as parts of it remained right up to the ‘student revolution’ of May 1968.
It is not clear whether Buonaparte seized this second opportunity to distinguish himself because of ambition or because of conviction (most likely both). Politically, Napoleon was what we would today consider ‘Centre-Left’. He was sincere in his dislike of the ancien régime, opposed to the power of the Catholic Church, believed everyone should be equal before the law and was an active patron of the arts and sciences. But, like all other ‘middle-class’ people at the time, he would have been horrified by the idea of giving power to ‘the mob’ (Note 4).
Buonaparte applied the Machiavellian principles to the letter. He realized that in hand-to-hand fighting the rebels, even if poorly armed, would probably get the upper hand by sheer weight of numbers. His plan, then, was to lure the rioters away from the lethal alleyways of central Paris into an open space, of which there were not so many then, where he could unleash his artillery on them. Fortunately for him, the Tuileries, seat of the government and target of the populace’s anger, did have some open space around it. Johnson goes so far as to suggest that Napoleon deliberately chose grape-shot rather than balls or shells because “it scattered over a wide area, tending to produce a lot of blood”, maiming rather than killing its victims. If this is the case, Buonaparte possibly did the ‘right thing’ ─ or so at any rate Machiavelli would have said ─ since it was more politically expedient (and even more humane) to frighten once and for all than to kill. A heavy initial death toll after a massed charge would have enraged the assailants and made them even more desperate, thus more dangerous. As it was, the operation went off as successfully as the raising of the siege of Toulon : the mob recoiled, bloodied and terrified out of their wits by the noise of the big guns at point-blank range, and never got together in such numbers again until the July Revolution of 1830, by which time Napoleon was long dead.

A classic case

Napoleon’s career is almost too pat as a study in Eventrics power politics : first, an ideal situation to step into; then two ‘moments of opportunity’, one military and one political; a steady ascent to absolute power; and finally decline due to overconfidence and unnecessary risk-taking (the invasion of Russia). When the tide of events left him stranded, his dash and mastery changed to bluster and obstinacy. Napoleon could easily have got a better deal for himself, and certainly for France, if he had accepted the Allied offers of returning France to its 1799 or 1792 frontiers instead of fighting on against Allied forces that outnumbered him eight or ten to one. And most historians think that, even if he had won the Battle of Waterloo after his return from Elba (which he nearly did), he could never have remained in power. Nothing particularly surprising here from the point of view of Eventrics, simply the trap of believing yourself to command events when they always, at the end of the day, control you. Byron apparently thought Napoleon should have died fighting, and certainly that would have been better for his posthumous image. Hitler committed suicide on the advice of Goebbels precisely in order to “maintain the Führer legend”, which was judged to be more important than the man himself.     SH 24/3/14

Note 1 Surprisingly, Thomas Cromwell, Henry VIII’s (for a while) all-powerful minister, was the son of a Putney blacksmith. But Thomas Cromwell’s position was always precarious and he did not last long. Henry VIII put into practice Machiavelli’s golden rule of getting someone else to do the dirty work and then, when his services were no longer needed, getting rid of him, thus earning the gratitude of the common people.

Note 2 “Apart from his service duties, Buonaparte plunged into an intensive course of self-education, devouring in particular books on military and political history. In order to train his memory, he wrote out a preface for every book he read, and these voluminous digests still survive; they cover a wide range of subjects…”          Marshall-Cornwall, Napoleon as Military Commander p. 18

Note 3 The quotation is from Machiavelli, The Prince ch. VIII (edited and translated by David Wootton).
Machiavelli praises Cesare Borgia for putting Remiro d’Orco, “a man both cruel and efficient”, in charge of the Romagna : “D’Orco in short order established peace and unity and acquired immense authority. At that point the duke decided such unchecked power was no longer necessary, for he feared people might come to hate it. (…) In order to purge the ill-will of the people and win them completely over to him, he [Cesare Borgia] wanted to make it clear that, if there had been any cruelty, he was not responsible for it and that his hard-hearted minister was to blame. One morning, in the town square of Cesena, he had Remiro d’Orco’s corpse laid out in two pieces, with a chopping board and a bloody knife beside it. This ferocious sight made the people of the Romagna simultaneously happy and dumbfounded.” Machiavelli, The Prince ch. VII, translated Wootton

Note 4 Today, at a safe distance of two centuries, one tends to feel sympathetic to the rioters and, as a socialist and a romantic, I used at one time to think that “the people” were automatically in the right, especially when attacked by the military. However, I hardly think much good would have come of this popular uprising : there would just have been more pointless bloodshed and general chaos. In effect, the ‘Third Revolution’ was postponed until the 1871 Paris Commune. In that case grape-shot was not enough : some 22,000 people, mainly civilians, were killed in a single week, the notorious ‘semaine sanglante’ (21-28 May 1871).
