ORIGINS

“He who examines things in their first growth and origins will obtain the clearest view of them” (Aristotle).

Origin. Origin of what? In the past people were very much concerned about their own personal or extended-personal, i.e. tribal, origins. But, because of Nazism, all this has become suspect, at any rate for contemporary Westerners (though approved of and even encouraged in ethnic minorities). With some exceptions (e.g. Mormons) we in the West are today interested exclusively in the origin of (1) humanity, (2) ‘life’ and (3) the universe. §

Origins can be distinguished as (1) deliberate, i.e. the result of a definite decision, or (2) non-deliberate ─ what happened just happened. Contemporary science emphatically considers all three origins (of humanity, life and the universe) to have been non-deliberate. This is not just informed opinion but scientific dogma.
Most scientists today believe that intelligent life evolved ‘by accident’, because of certain fortuitous circumstances and mutations that are random by definition. As for the universe, one popular theory has it that it started because of a runaway ripple in the quantum vacuum.
Such a schema marks a decisive break with Western cultural traditions: it fits much better with certain Eastern world-views, e.g. that of Buddhism, which sees the physical universe as a cosmic accident, or Taoism, which views the emergence of the physical world as an entirely ‘natural’ process. And this at the very moment when the East is frantically trying to catch up with and outdo the West at its own game (physics-based technology). §

All theories of cosmic origin inevitably go beyond what can be directly tested, though that does not mean they lack evidence or rational support. But fashions change and typically swerve to the opposite; where once intelligent design (by God) was de rigueur and the idea of ‘spontaneous generation’ was ridiculed, today ‘emergence’ has become respectable and any notion of intent or purpose is heresy. From Paley to Dawkins via Darwin.
One unexpected result is that Chance has become a very big player indeed on the world scene. Whereas science was typically defined as ‘applied determinism’ (by Claude Bernard), today indeterminism is enshrined at the heart of the most successful physical theory we have, Quantum Mechanics. §

But in certain respects the discussion, though mathematically much more abstruse, has not advanced that much from Aristotle or Aquinas. ‘Why bother with a beginning at all?’ one might ask. Because a causal world where one event gives rise to another seemingly requires it, cries out for it. If causality exists ─ which few in practice doubt ─ there must be an end to the backwards chain of cause and effect. And, moreover, this ‘universal origin’ must itself not require an origin if we are to avoid infinite regress.
In practice, extremely few cosmologies have adopted a schema of ‘more of the same’ going backwards (and forwards) indefinitely. The Stoics preached something along these lines and, reasonably enough, posited eternal recurrence. For the only way to avoid beginnings altogether is to make the ends of a line segment join up to form a circle: the ‘no-beginning’ proposal of Hartle and Hawking is a contemporary version of the same schema.
And, against all odds, orthodox science has returned to belief in an origin ‘in time’ ─ at least for our universe. Positing a multiverse with universes popping into existence all the time (sic) merely pushes the problem further back: either this multiverse is the Origin or it requires one. §

Essentially the problem is this: there are things that require an origin and things that don’t, and almost everyone is now agreed that our physical universe falls into the first category.
If, then, the present physical world falls into the first category, it seemingly requires the existence of something radically dissimilar to itself. The idea that this ‘original something’ was in any sense a person, with a will and purpose of its own, seems to have outlived its usefulness. But this is not the important point. What is the important point? That the universe, and us within it, require something radically ‘other’ from which it and we derive.
Being so different from what we can observe, anything said about this ‘other thing’ tends to sound weird or absurd. Since the physical world is palpably material (or so we like to think) this ‘other thing’ must be in some sense non-material. The more sophisticated versions of Buddhism posit an original ‘Emptiness’ (Sunyata), while today we have the quantum vacuum, a ‘nothing’ seething with energy and ceaselessly spewing forth ‘virtual particles’. §

“There is an order of things of an entirely different kind lying at the foundation of the physical order”, wrote Schopenhauer.
For many clever people this ‘hidden order’ is mathematical. Pure mathematicians have always been secret Platonists but today, some 2,400 years after Plato, a well-known theoretical physicist, Tegmark, has actually gone so far as to identify physical reality with mathematical reality. This is in some ways an appealing option (at least to mathematicians), but one feels it may simply be a matter of projecting onto the beyond one’s own preoccupations and capacities. A musician would most likely prefer the Vedic doctrine, “In the beginning was the sound”.
Lao Tse lived at a time when the spoken word was the dominant communication system. Today, he would probably phrase the first line of the Tao Te Ching thus:
“The tao that can be mathematized is not the original Tao”. §

All contemporary physical theories of ultimate origin, with very few exceptions, make the ‘laws of physics’ transcendent: they somehow exist prior to, and independently of, the actual universe. Philosophically, this is a very difficult position to defend.
In Newton’s time such a viewpoint made a lot of sense. All the early classical scientists were firm believers in God (not necessarily Christ) and God was, amongst other things, the supreme mathematician. He was also, thanks to the Judeo-Christian tradition, a Lawgiver. Needham has suggested that one reason why China, at one time centuries ahead of the West technologically, did not give birth to the scientific revolution was that it lacked this vital notion of ‘physical law’. Moreover, the idea of an all-powerful God laying down rules that matter must obey was a far better theory than Plato’s idea of perfect (but totally inert) ‘Forms’ that actual objects and beings strive to emulate.
Eventually, God was dispensed with, of course, partly because many of the leading scientists became anti-clerical for political reasons, and partly because successive generations had so refined Newton’s schema that there was no need for any further tinkering on the part of the Almighty (which Newton himself had envisaged in extreme cases). Laplace famously said to Napoleon, “I did not need that hypothesis”, meaning God. But Laplace needed the ‘laws of physics’ more than ever. The latter, bereft of their divine ancestry, became omnipotent in their own right and by and large remain so. §

What we, the public, the ‘ordinary people’, want is some sort of explanation of why things are as they apparently are. And we want an explanation that is not simply ‘correct’ (in the sense that it is not logically flawed or contradicted by the evidence) but one that is believable. But contemporary science, though apparently ‘correct’ (as far as we can judge), is not believable!
The editor of the New Scientist a few years ago made the astonishing admission that he didn’t really believe in Quantum Mechanics because it was so different from the world he (thought he) lived in. Einstein himself, of all people, was sometimes troubled by doubts about the real-life implications of his own theories, remarking notably that “scientific descriptions cannot possibly satisfy our human needs” (quoted in Smolin, Time Reborn). The thoughtful student ─ not the same thing as the intelligent student ─ is in fact more likely to be repulsed by, than attracted to, modern physics (as I was myself).
One or two physicists recognize that there is a problem here. But generally they trot out the sort of reply d’Alembert gave to a student who confessed he had misgivings about the Calculus, namely “Allez en avant, et la foi vous viendra” (“Keep going, faith will come to you”). Again, if challenged, physicists simply pass the buck, arguing that it is not the job of science to answer the ‘why’ questions but only the ‘how’ questions. This is both cowardly and dishonest ─ dishonest because they know very well that people without advanced mathematical training feel themselves unable to pronounce on the matter. §

Bizarre though it is, Quantum Mechanics is not the least believable theory of modern science: that prize must be given to the ‘block universe’ version of General Relativity. Such a theory, which is currently supported by the majority of physicists, is at once very difficult to dismiss and even more difficult to take seriously.
The argument may, very roughly, be presented thus. All first-year mathematics undergraduates, and even most ‘A’ level students, know that it is fairly easy to show mathematically (given the postulates of SR) that whereas for me events A and B occur in a given order, for another ‘observer’ somewhere in the universe the temporal order of events is reversed ─ provided A and B are not causally connectable, i.e. are ‘spacelike separated’. Now, this does not mean that this, usually distant, observer knows what is going to happen to me before it happens as it were, nor that he or she can in any way give me a warning (since this would break the speed of light barrier). Practically speaking, the alleged time reversal is not of the slightest significance ─ but conceptually it is. My perception of the order of events is true for me but a ‘reverse-order’ sequence of events is just as true for the other ‘observer’. According to GR, every possible observer’s version of events is equally valid and there is, in Einstein’s world-view (as opposed to Newton’s) no ‘absolute’, i.e. universally true, order of events.
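For readers who want to see the calculation behind this claim, here is a minimal sketch ─ standard special-relativity algebra, nothing peculiar to the authors discussed. For an observer moving at velocity v relative to me, the Lorentz transformation gives the time separation of events A and B as

\[ \Delta t' = \gamma\left(\Delta t - \frac{v\,\Delta x}{c^{2}}\right), \qquad \gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}} \]

If A and B are spacelike separated, i.e. \( |\Delta x| > c\,\Delta t \), there is always an admissible speed \( |v| < c \) for which \( v\,\Delta x / c^{2} > \Delta t \), making \( \Delta t' \) negative: the moving observer records B before A. If instead the events are causally connectable (\( |\Delta x| \le c\,\Delta t \)), no such observer exists ─ which is why no warning can ever be sent.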
Considerations such as this have led some contemporary physicists to declare that time is an illusion, and a noted theorist, Julian Barbour, has written a book whose title says it all, The End of Time. For Barbour past, present and future have no universally valid, objective existence or, as I would put it, everything that can occur has already, in a very real sense, occurred. This completely rules out the possibility of ‘free’ individual action! In Barbour’s schema, ‘my’ death (and his) is already what he calls a ‘Now’ time-capsule in Platonia, and there is certainly nothing that either I or Julian Barbour can do about this. (Maybe Julian Barbour can live with this but I certainly can’t.)
The only possible objection I can see to this argument is logical, not mathematical. Supposing for a moment that it is true that all events in ‘my’ future have in some sense already happened, it remains the case that I am not aware of their having happened ― since in my blinkered, deluded state I become aware of these future events successively, one by one. However, this step-by-step awareness of what (for me) is yet to happen is not out there in Platonia, or, if it is, I am not aware of it &c. &c. Thus we get either something that contradicts the hypothesis or an infinite regress. I nonetheless don’t feel entirely satisfied by this line of argument. §

It should be emphasized that Einstein himself was very much aware of this problem and, to some degree, faced up to it. Smolin quotes a significant passage from Carnap’s autobiography:

“Once, Einstein said to me that the problem of the Now worried him seriously. He explained that the experience of the Now means something essentially different from the past and future, but that this important difference does not and cannot occur within physics. That this experience cannot be grasped by science seemed to him a matter of painful but inevitable resignation.” (Carnap, Intellectual Autobiography, quoted in Smolin, Time Reborn)
Interestingly, Einstein was not satisfied by Carnap’s glib assertion that physics would one day prove equal to the task (of dealing with the problem of Now). He (Einstein) said, “There is something in the Now that is outside the realm of science”. Quite.
In the end, though, Einstein’s attitude of resignation got the better of his dissatisfaction since, on the occasion of the death of his lifelong friend Michele Besso, he wrote to Besso’s widow:
“In quitting this strange world, he has once again preceded me by a little. That doesn’t mean anything. For those of us who believe in physics, this separation between past, present and future is only an illusion, however tenacious”. §

We have thus a situation where what science says is, for many of us, literally unbelievable ─ without it being apparently wrong. Has this situation occurred before? Yes, and the outcome is instructive.
Newton’s Principia got a mixed reception on the continent. The general feeling was that Newton’s mechanical explanations were entirely convincing when he was dealing with contact forces and the dynamics of moving terrestrial bodies. However, very few people were prepared to take on board Newton’s celestial mechanics since they viewed the hypothesis of ‘universal attraction’ as fantastic, indeed the very sort of ‘occult force’ that they were trying to expel from physics. Newton himself admitted in private that he had no mechanical explanation of the workings of gravity and even went so far, in a letter to Bentley, as to suppose it had something to do with “God’s presence in the universe”.
Nonetheless, the theory of gravity worked whereas Descartes’ rival vortex theory didn’t and eventually even Newton’s continental opponents adopted it, stifling their rational scruples. It is interesting to note, however, that it was precisely this aspect of Newtonian mechanics that Einstein attacked head on and, as most people today would say, did succeed in eliminating from physics. Although we still talk in Newtonian terms, there is strictly speaking no ‘force of gravity’ in Einstein’s universe, the alleged force being replaced by the ‘warping of Space-Time’, a very different concept. So, the tiny chink in Newton’s plate armour let in the poisoned dart that felled him.
I would certainly hope that the weak point in the theory of Relativity has something to do with time, i.e. that, pace General Relativity, time ─ succession ─ really does exist since, without it, life loses its savour. Maybe, one day, this chink will be decisively exploited by someone ─ Smolin himself has written a book entitled Time Reborn. For the Block Universe version of General Relativity is not a psychologically acceptable theory; it is not liveable. Do contemporary physicists ever actually console themselves, when a loved one dies, with the thought that “in physics there is no past, present and future”? §

It is notable that ‘Will’ is something science does not recognize or ever take into account ― because it is not an empirically testable attribute or phenomenon. Contrast this with Schopenhauer’s view, that the universe is at bottom nothing but will (and therefore hateful) or Nietzsche’s, that the mainspring of human action is the Will to Power and that, perhaps with one or two minor reservations, we should wholeheartedly embrace such a fact. Both these philosophers emphasize and throw an intense light on the biological origins of human psychology. §

The consensus at present seems to be that both ‘life’ and the universe itself came about ‘by chance’. What is meant is that neither the origin of the universe nor the development of life within it was the result of a deliberate act. There was no entity that said “Let there be light”, there simply ‘was light’ (i.e. radiation).
But the explanation according to ‘chance’ has its own problems. Firstly, ‘chance’ is a treacherous word that needs close examination. We commonly use it in the loose sense as the opposite of ‘intentional’ ─ ‘I met him by chance’ ─ or as the opposite of predictable ─ ‘The chance movements of a leaf blown by the wind’. Scientifically speaking, my meeting with a friend or the leaf’s movements are not in the least random: I met so-and-so because our trajectories happened to coincide, the leaf moved to the right because the wind took it in that direction. The true sense of ‘chance’ is, or should be, ‘without cause’ and, prior to the advent of Quantum Mechanics, no physical events were considered to be causeless ─ if an event did not have a cause, how on earth could it have come about? Only since QM do we find science stating categorically that certain events, the radioactive decay of an atom for example, are ‘chance events’ in the strong sense, i.e. no previous event brought them into being. This is not the sort of assumption that can possibly be proved or disproved.
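To make the ‘strong sense’ concrete, consider the standard textbook description of radioactive decay (added here purely as illustration). The probability that a given atom decays in the next instant is

\[ P(\text{decay in } [t, t+dt] \mid \text{intact at } t) = \lambda\,dt \]

with \( \lambda \) a constant, so that a population of \( N_0 \) atoms dwindles as \( N(t) = N_0 e^{-\lambda t} \). The point is that \( \lambda \) does not depend on t or on anything in the atom’s history: nothing that happened earlier makes the decay more or less imminent, which is precisely what it means to say that no previous event brings it about.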
Is there any other possibility besides belief in determinism and belief in chance? Perhaps. One can evade the chance/determinism dilemma to some extent by the concept of the ‘potential’, a term that is sparingly used in science (except in a narrow mechanical sense). The universe has given rise to intelligent life, this we know ─ at any rate if we class ourselves as intelligent creatures. Therefore, within the universe, even at its inception when it, the future universe, was nothing but a ripple on the quantum vacuum, it held this potentiality within itself, was pregnant with the potentiality of life, as it were.
Now such a line of thought stops well short of attributing ‘intent’ or ‘aim’ to the universe (or to what preceded it) but nonetheless goes further in its emphasis than a ‘chance’ theory does. Moreover, and here we are getting a little closer to dangerous territory, the combination of ‘potential’ with ‘randomness’ turns out to be a very effective ‘strategy’, always provided one has plenty of time (which the universe has). If only one possibility out of trillions leads to a deterministic route and on to the advent of intelligent life, then endless random experimentation will (probably) eventually lead to this one pathway. And, once embarked on this trajectory, the ‘universe’ ceases to be random ─ the cosmic fruit-machine always pays out if you stand in the casino long enough. So, given a broad and deep enough ‘potential’, ‘chance’ can lead to its own negation, i.e. to determinism. But the reverse is not true: it is hard to see how causality could ever ‘naturally’ lead to randomness. A deterministic chain of events must seemingly either lead to more determinism or, just conceivably, to the total collapse of the system. §
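The ‘cosmic fruit-machine’ point can be given a precise, entirely standard probabilistic form. If a single random trial hits the life-permitting pathway with probability p > 0, however tiny, then the probability of at least one hit in n independent trials is

\[ 1 - (1 - p)^{n} \;\longrightarrow\; 1 \quad \text{as } n \to \infty \]

so that, given enough trials, eventual success is all but certain ─ and from that point on, by hypothesis, the development is deterministic.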

Curiously, there exists a persistent religious tradition concerning a catastrophic ‘falling away’ from an original state of completeness, purity, unity. (That this is not a uniquely Christian concept is shown by Eliade, who demonstrates that many native Amerindian cultures had a similar tradition.) There was, then, abstractly speaking, a prior state which was unitary, continuous, immeasurable and which somehow broke down or, rather, ‘broke apart’, ‘exploded’, leaving the fragmented world we know and live in. This ‘fallen world’ is alma de-peruda, the ‘world of separation’ as the Zohar puts it. If the mystics are to be believed, we nonetheless retain a confused memory of this prior blessed state; it is embedded within us not just as a ‘human memory’ or a ‘species memory’ but even perhaps as a ‘world-memory’. §

In QM we have the “collapse of the wave function” and in cosmology the Big Bang, a collapse of pre-existence into existence. These are catastrophic events, not peaceful ones. The wave function describes a state/object which, unlike the massy particles of Newtonian physics, is ‘all over the place’, radically ‘delocalized’. Once again we have the double schema of a continuous, undifferentiated state which somehow gives rise to a specific, isolated, physical state. In some versions, this ‘collapse’ is attributed to human intervention (i.e. meddling) ─ this is the modern physics equivalent of Adam’s responsibility for the Fall. Interestingly, we talk about the ‘measurement’ problem of QM rather than the ‘intervention problem’, as if measuring, quantifying, something on which all science is based (and which art generally disdains) was the cause of the collapse, i.e. the cause of the fall into materiality, the ‘original sin’. §

What hopefully will eventually emerge from the confused and confusing welter of cosmological and mathematical speculation is the paradigm of a ‘universe’ pre-existing in a more compact and rarefied form which then ‘unfolds’ to give rise to the familiar physical/intellectual one. This is what Bohm is getting at with his theory of the Explicate and Implicate Order, though the terms are not very enlightening. §

Although Hoyle never developed his theory into a proper philosophy, the very title of one of his books, The Intelligent Universe, is a giveaway. According to this view, the Universe contains intelligence which is currently manifested in carbon-based life here on Earth but will perhaps in the future manifest itself otherwise.
“This point of view suggests that in the future the Universe may evolve so that carbon-based life becomes impossible, which in turn suggests that throughout the Universe intelligence is struggling to survive against changing physical laws, and that the history of life on Earth has been only a minor skirmish in this contest.”
(The Intelligent Universe, p. 222)

Hoyle has very little evidence for this appealing theory, though probably rather more than most String theorists have for their cosmic theory of everything. Hoyle’s strongest argument is that bacterial life on Earth developed almost as soon as climatic conditions made it possible ― something that has been a serious problem for biology. Hoyle’s explanation is that the building blocks of life (though not life itself) arrived pre-formed: they were seeds sent out by an advanced life-form elsewhere in the galaxy which fell fortuitously on fertile ground. The point is that life is an extremely unlikely development but, in Hoyle’s scheme, it only needs to take place once in a galaxy ─ since he has given us the details of a supposed galactic dispersal system of the ‘seeds of life’ via comets and meteorites. And, as it happens, Hoyle has been shown to be right at least to the extent that comets do contain chemically complex substances such as formaldehyde ─ but not, as far as we know, proteins.
Interestingly, during the declining years of the Roman Empire, the Gnostics in North Africa advanced the strange theory that human beings came from ‘somewhere else’ that was very far away and that they retained a vague memory of their distant origin. Humans were thus ‘strangers in a strange world’ and what the Gnostics (‘knowers’) knew was simply this, that the Earth was not their home. Many of them drew the conclusion that they owed no allegiance to earthly authorities, secular or religious, but only to the ‘good God’ of their ancient, infinitely distant homeland. Initiation into the cult prepared them for the long and dangerous journey back to their place of origin. The strange thing is that, if there is any truth in Hoyle’s theory, the Gnostics were not entirely mistaken! (Hoyle, as far as I am aware, knew nothing about the Gnostics’ cosmological theories.)
What the theory of man’s distant origin would explain is humankind’s deep sense of alienation ― something that material progress seems powerless to eradicate. This sense of being cut off from one’s origins is notably expressed in a Gnostic-type hymn sung by a future religious sect known as ‘The Remembrancers’:

    Hymn of Remembrance

“We have forgotten who we are
We have forgotten our distant origins,
We have forgotten the tranquil plains of light,
We have forgotten Zoarr¹,
We have forgotten our deepest natures,
Lost as we are within the Web of Sarwhirlia².
But the hidden self that breathes within us,
Buried beneath the debris of the world,
Caged by these ribs of bone,
Smothered by these cells of flesh,
Deafened by the screams of sense,
Diminished by our thoughts and words,
Submerged, defeated, trampled on,
The everlasting seed of light
Remembers all.

Together we shall leave this ancient prison,
We shall break free from our captivity,
We shall set out across the empty skies,
Fearless, we shall traverse the great Abyss,
A thousand perils we shall undergo,
Our only guide the distant gleams of Zoarr,
And we shall reach the tranquil plains of light,
We shall return to our lost home,
Never to depart again.”

¹ Zoarr is the name given to the infinitely distant ‘good God’ of the Remembrancers.
² Sarwhirlia is the Remembrancers’ name for the Earth. See http://www.webofaoullnnia.wordpress.com

PERSEPHONE

Persephone (Roman Proserpine) is the Greek goddess associated with the seasonal death and rebirth of Nature. Daughter of Demeter (Roman Ceres), she was abducted by Hades (Roman Dis), the god of the Underworld, while gathering flowers, as immortalised in Milton’s beautiful lines:

“…nor that fair field
Of Enna, where Proserpine gathering flowers,
Herself a fairer flower, by gloomy Dis
Was gathered…”
(Paradise Lost, Book IV, vv. 268–271)
Demeter wandered about distraught looking for her vanished daughter, with the result that the flowers faded, the plants stopped growing and humanity was in danger of extinction from famine. Eventually, Persephone was traced to the Underworld and a modus vivendi was reached: Persephone was to stay for four months of the year in the Underworld and return to Earth for Spring and Summer — the Greeks do not seem to have bothered with autumn. In some versions of the legend, Persephone is tricked into remaining at least part of the year in the Underworld, because she accepts some pomegranate seeds at the hands of Hades and is thus doomed by the decree that everyone who partakes of food in the Underworld must stay there. The pomegranate thus gained its reputation of being a “dangerous fruit” and as such is celebrated in the song (music by John Baird, CD Aquarius) which appears in my play The Pomegranate Seeds (Samuel French, 2000).

“You the fruit of Dis have taken
You have eaten of the dangerous tree,
Of the pomegranate taken, taken,
Your mother calls, your mother weeps Persephone.

Not for you the blaze of summertime,
Sunlit meadows you will never see,
Nevermore the blaze of summertime, summertime,
All that is gone, all that is gone, Persephone.

Now you have crossed to the land of despair,
To this shadowy realm,
Never to return.

You the fruit of Dis have taken,
In these regions you must always stay,

In the twilight of the Underworld, Underworld,
Those that you love, those that you love are far away.”  

Persephone, or Proserpine, has been a frequent subject in Western poetry and paintings, though, somewhat surprisingly, as far as I know no opera has ever been written about her — I could easily imagine Mozart or Gluck writing one.

The figure of Persephone was so central to the poetess Anna de Noailles (see website annadenoailles.com) that Catherine Perry entitled her recent interpretation of this remarkable Belle Époque French poet Persephone Unbound. The figure of Persephone doubtless resonated with Anna de Noailles because of its ambivalence: on the one hand Persephone’s association with flowers, youth and a carefree existence and, on the other, with violence, sorrow, separation, darkness and death. Persephone belonged to two worlds, to the light and to the dark, and may thus be seen to symbolize the “willingness… to encounter the fateful aspects of existence” (Catherine Perry): this aspect of the myth would have appealed far more to Anna de Noailles, as a follower of Nietzsche, than the more traditional fertility-goddess persona. Like Orpheus, Persephone has a shamanic role as ‘crosser of boundaries’, since, like all humans, she crosses the fateful river of Lethe but, unlike ordinary mortals, she is eventually granted the power to return to Earth at will. There is, interestingly, a late version of the Greek myth in which Persephone, having belatedly fallen in love with her kidnapper, Dis (a strange anticipation of the so-called Stockholm Syndrome), declines to return to Earth and only agrees to do so, most reluctantly, by order of Zeus.

The positive side of the myth of Persephone, as personifying the rebirth of Nature and the return of Spring, has largely been obscured in Romantic and post-Romantic literature by her role as Queen of the Underworld. As such she is a formidable figure since, along with her husband Dis, she is Judge of Souls, deciding impartially whether they deserve reward or punishment. Rimbaud alludes to her in Une Saison en Enfer — « Elle ne finira donc point cette goule reine de millions d’âmes et de corps morts et qui seront jugés ! » — which, given the mention of « le port de la misère, la cité énorme » two lines earlier, suggests that the poet identifies Persephone with Queen Victoria presiding over the hideous ‘City of Night’ that is nineteenth-century London in Rimbaud’s eyes!

George Gomori sees her as the inevitable figure of destiny that awaits us at the end of the line:

“There are no saving miracles —
Nor can remorse win you grace.
There’s no one to deflect the track
Of the knives whistling straight for you.
Persephone, standing at your back,
Proffers her hand. Hold yours out too.”

However, for Swinburne, a vastly underrated poet incidentally, she represents not so much Justice or Destiny as the (welcome) negation of everything vital:

“She waits for each and other,
She waits for all men born;
Forgets the earth her mother,
The life of fruits and corn;
And spring and seed and swallow
Take wing for her and follow
Where summer song rings hollow
And flowers are put to scorn.”

Swinburne makes it quite clear what he is promoting: a Buddhistic total negation of the entire life-process in favour of the permanent quiescence of Nirvana.

“From too much love of living,
From hope and fear set free,
We thank with brief thanksgiving
Whatever gods may be
That no life lives for ever;
That dead men rise up never;
That even the weariest river
Winds somewhere safe to sea.

Then star nor sun shall waken,
Nor any change of light:
Nor sound of waters shaken,
Nor any sound or sight;
Nor wintry leaves nor vernal,
Nor days nor things diurnal;
Only the sleep eternal
In an eternal night.”

Vyvian Grey takes a more nuanced position in her charming Proserpine since, although she presents the Underworld in a surprisingly attractive light and asks her mother not to ‘weep for me’, she does not entirely exclude the idea of eventual return.

Persephone

Here there are no days,
Only a night —
A sort of shimmering half-light
That no sun ever dazzles through;
And the sky — not blue,
No, never blue,
As in the world that once I knew,
But something like a skein of softest grey;
A blossom of white.  

Still are the rivers, are the lakes
Of this quiet land,
Whose silver waters offer up no sound,
Where I have never seen
The summer-coloured sails of mighty ships,
Or their distant gleam;
Only shrouded barges
Gliding soundlessly along through silent mists,
Each like a dream
And by lone oarsmen manned.

Yet flowers grow here,
For I have plucked such quantities
Of sleepy-headed poppies,
Swarthy hellebores
While this weird countryside I roam.
True, they are not the primroses,
The violets of home,
They are not the roses:
Even so, they have their beauty,
A loveliness all of their own.

And do not think that music never plays
Where I have come,
For music plays always; its hum
Is heard, not in raucous revelries,
But between the cypress trees
Where souls of those departed sit and chant
Their melodies
In voices like a sigh,
So tenderly that I must dwell to hear them
Whenever I pass by.

O mother, do not weep for me,
Though in this world I must abide:
Within the year I shall be yours again
And earth will gladden at the sight;
Until then, grieve not for me,
For here I shall remain
With my dark husband by my side,
Although the night is long;
Mother, it is a pleasant night.

Roger Hunt Carroll, for his part, in the Envoi of his Hymns for Persephone, considers that mankind has ‘lost’ not Persephone herself, but the ‘vision’ of Persephone:

You are not lost, no, not you;
never could you be robbed from the orchard of earth,
not from flowers in the fields, the high grasses,
no, nor from soft shrubs that sway along the riverside.

It is we who have misplaced you,
dear and most-loved daughter of the world.
We loosened the cords of your summer gown,
and in our vision you fell in the evening light,
your hair shaded by leaves, your face behind a mask, hidden from our careless sight.

Sebastian Hayes

‘EVENT LANGUAGES’ AND ‘OBJECT LANGUAGES’

Benjamin Lee Whorf seems to have been the first person to point out how much English, and other European languages, are ‘thing-languages’, ‘object-languages’. By far the most important part of speech is the noun and, though it is now accepted that not all sentences are of the subject-predicate form once regarded as universal, quite a lot are. We have a person or thing, the grammatical subject, and the rest of the sentence tells us something about this thing, for example localizes it (‘The cat was sitting on the mat’), or enumerates some property possessed by the ‘thing’ in question (‘The cover of the book is red’). And if we have an active verb, we normally have an agent doing the acting, a person or thing.
There’s nothing ‘wrong’ with such a linguistic structure, of course, but we are so used to it we tend to assume it’s perfectly reasonable and irreplaceable by any other basic structure. However, as Whorf points out, it is not just applied to sentences of the type ‘A is such-and-such’, where it is appropriate, but also to sentences where it makes little sense. “We are constantly reading into nature fictional acting entities, simply because our verbs must have substantives. We have to say “It flashed” or “A light flashed”, setting up an actor to perform what we call an action, “to flash”. Yet the flashing and the light are one and the same!” (from Whorf, Language, Thought and Reality, p. 242, M.I.T. edition).
The quantum physicist and philosopher David Bohm, seemingly unaware of Whorf’s prior work, makes exactly the same point: “Consider the sentence ‘It is raining.’ Where is the ‘It’ that would, according to the sentence, be ‘the rainer that is doing the raining’? Clearly, it is more accurate to say: ‘Rain is going on’” (from Bohm, Wholeness and the Implicate Order, p. 29).

Whorf and Bohm clearly have a point here and the general hostility of the academic world to Whorf’s ‘Theory of Linguistic Relativity’ is doubtless in part due to its irritation at an outsider ─ Whorf trained as a chemical engineer ─ pointing out the obvious. Moreover, one would expect the syntax and vocabulary of languages to tell you something about the general conceptions, day-to-day concerns and modes of thought of the people whose language it is. After all, people talk about what interests them, and languages typically evolve to make communication about common interests more efficient (Note 1).
Even if this is granted for the sake of argument, one might still object that the subject-predicate structure and the role of nouns in English simply reflect ‘how things are’ ─ and there is only ‘one way for things to be’. Since ‘reality’ consists essentially of ‘things’, and relations between these things, isn’t it inevitable that nouns should have pride of place? Well, maybe, but maybe not. And Whorf, one of the very first ‘Westerners’ to actually speak various languages of American Indians, was in a good position to question what practically everyone else had so far taken for granted. Amerindian native languages certainly are very different from any European or even Indo-European language. For a start, “Nearly all American Indian languages are either distinctly ‘polysynthetic’ or have a tendency to be so. At the risk of oversimplification, polysynthetic languages can be thought of as consisting of words that in European languages would occupy whole sentences” (from Lord, Comparative Linguistics). Out-and-out literal translations from other European languages into English may sound clunky but are perfectly comprehensible; literal translations from Shawnee or Nitinat sound not just awkward but half crazy. Whorf writes, “We might ape such a compound sentence in English thus: ‘There is one who is a man who is yonder who does running which traverses-it which is a street which elongates’… the proper translation [being] ‘A man yonder is running down the long street’.” Whorf adds, “Of such a polysynthetic tongue it is sometimes said that all the words are verbs, or again that all the words are nouns with verb-forming elements added. Actually the terms verb and noun in such a language [as Nitinat] are meaningless.”
Secondly, approaching things from the physical/conceptual side, there can be no doubt that native American tribal societies, untouched as they were by Christianity or Newtonian physics, really did have very different conceptions about the world from those of the incoming European settlers, which is one reason why this meeting of the cultures was so catastrophic. Sapir (Whorf’s first teacher) and Whorf believed that this double dissimilarity was not an accident and that the structure of native American languages indeed reflected a very different ‘view of the world’.
So what, in a nutshell, were these linguistic and ‘metaphysical’ differences? According to Whorf, most Amerindian languages are ‘verb-based’ rather than ‘noun-based’ ─ “Most metaphysical words in Hopi are verbs, not nouns as in European languages”. Worse still, “When we come to Nootka, the sentence without subject or predicate is the only type….Nootka has no parts of speech”. Why were they ‘verb-based’, or at any rate not ‘noun-based’? Because, Whorf argues, the Amerindian world-view was not ‘thing-based’ or ‘object-based’ but ‘event-based’. “The SAE (Standard Average European) microcosm has analysed reality largely in terms of what it calls ‘things’ (bodies and quasibodies) plus modes of extensional but formless existence that it calls ’substances’ or ‘matter’. The Hopi microcosm seems to have analysed reality largely in terms of EVENTS” (Whorf, op. cit. p. 147).
Again, there seems little to quarrel with in Whorf’s claim that the SAE world-view, which we can trace right back to Greek atomism for its physics, really was ‘thing-based’ ─ “Nothing exists except atoms and void” as Democritus put it. The subsequent, more sophisticated Newtonian world-view nonetheless reduces to a world consisting of ‘hard, massy’, indestructible atoms colliding with each other and influencing each other from afar through universal attraction. Whether the world of native American Indians really was ‘event-based’ in the way Whorf imagined it to be, few of us today are qualified to say ─ since hardly anyone speaks Hopi any more and even the most remote Amerindian tribes have long since ceased to be independent cultural entities. In any case, the complex metaphysics/physics of the Hopi as interpreted by Whorf is in itself interesting and original enough to be well worth investigating further.
To return to language. Assuming for the moment there is some truth in the Sapir-Whorf theory that language structure reflects underlying physical and metaphysical preconceptions, what sort of structures would one expect an ‘event-language’ to have? Bohm asked himself this but sensibly concluded that “to invent a whole new language implying a radically different structure of thought is… not practicable”. I asked myself a similar question when, in my unfinished SF novel The Web of Aoullnnia, I tried to rough out the principles underlying ‘Lenwhil Katylin’, a future language invented by the Sarlang, the first of the Parthenogenic types that dominate Sarwhirlia (the future Earth).
For his part, Bohm proposes to introduce, “provisionally and experimentally”, a new mode into English that he calls the rheomode (‘rheo’ from the Greek for ‘to flow’). This mode is meant to signal and reflect the “movement of growth, development and evolution of living things” in accordance with Bohm’s ‘holistic’ philosophy. Whorf, on the other hand, found most of what Bohm was looking for already present in the Hopi language, which typically emphasizes ‘process’ and continuity rather than focusing on specific objects and/or moments of time. Although both these thinkers were looking for a ‘verb-based’ language, they were also firm believers in continuity and the ‘field’ concept in physics (as opposed to the particle concept). My preferences, or prejudices if you like, take me in the opposite direction, towards a physics and a language that reflect and represent a ‘universe’ made up of staccato events that never last long enough to become ‘things’ and never overlap enough with their successor events to become bona fide processes.
Thus, in Lenwhil Katylin, a language deliberately concocted to reflect the Sarlang world-view, the verb (for want of a better term) is the pivot of every communication and refers to an event of some kind. In many cases there is no need for a grammatical subject at all ─ events simply happen, or rather ‘become occurrent’, like the ‘lightning flash’ mentioned by Whorf ─ in the Sarlang world-view, all events are, at bottom, ‘lightning flashes’. The rest of a typical LK sentence provides the ‘environment’ or ‘localization’ of the central event, e.g. for a ‘lightning-flash’ the equivalent of our ‘sky’, and also gives the causal origin of the event (if one exists). We have thus a basic structure Event/Localization/Origin ─ although in many cases the ‘localization’ and ‘origin’ might well be what for us is one and the same entity.
As to the central events themselves, the Katylin language applies an inflection to show whether the event is ‘occurrent’ or, alternatively, ‘non-occurrent’. One might compare the inflection with Bohm’s ‘is going on’ in his formulation “Rain is going on” ― in LK we just get Irhil~ where ~ signifies “is occurrent”. Being ‘occurrent’ means that an event occupies a definite location on the Event Locality and has demonstrable physical consequences, i.e. brings into existence at least one other event. Such an event is what we would perhaps call an ‘objective’ event, such as a blow with a hammer, as opposed to a subjective one like a wish to be somewhere else (which does not get you there). But the category ‘non-occurrent’ is much larger than our ‘subjective’ since it covers all ‘general’ entities, indeed everything that is not specific and precisely localized in space and time (as we would put it). On the other hand, the Sarlang consider a mental event that is infused with deep emotion, such as a flash of hatred or empathy, to be ‘occurrent’ even if it is completely private since, they would argue, such events can have observable physical consequences. This is somewhat similar to the Buddhist distinction between ‘karmic’ and ‘non-karmic’ events: the first have consequences (‘karma’ means ‘action’ or ‘activity’) while the second do not.
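As a concrete illustration of the Event/Localization/Origin structure and the occurrent marker just described, here is a minimal sketch in Python. All the names (LKSentence and so on) are my own illustrative inventions, not part of any worked-out grammar of Lenwhil Katylin.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class LKSentence:
        event: str                           # the central event, pivot of the communication
        occurrent: bool                      # the '~' inflection: a definite location on the
                                             # Event Locality, with at least one consequent event
        localization: Optional[str] = None   # the event's 'environment' (e.g. 'sky')
        origin: Optional[str] = None         # causal origin, if any; may coincide
                                             # with the localization

    # 'Irhil~': a lightning-flash simply becomes occurrent; no grammatical
    # subject is needed, only a localization.
    flash = LKSentence(event="Irhil", occurrent=True, localization="sky")

    # A wish to be somewhere else: real as a mental event, but non-occurrent
    # (no demonstrable physical consequences) -- unlike a private flash of
    # hatred, which the Sarlang would class as occurrent.
    wish = LKSentence(event="wish-to-be-elsewhere", occurrent=False)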
After the ‘occurrent/non-occurrent’ dichotomy, the most important category in Lenwhil Katylin is discontinuity/continuity. Although the Sarlang believe that, in the last analysis, all events are a succession of point-like ‘ultimate events’ (the dharma(s) of Hinayana Buddhism), they nonetheless distinguish between ‘strike-events’ such as a blow and ‘extend-events’ such as a ‘walk’, a ‘run’ and so on. Suffixes or inflections make it clear, for example, whether the equivalent of the verb ‘to look’ means a single glance or an extended survey. And the suffix –y or –yia turns a ‘strike-event’ into an ‘extend-event’  when both cases are possible. Moreover, ‘spread-out’ verbs themselves fall into two classes, those that are repetitions of a selfsame ‘strike-event’ and those that contain dissimilar ‘strike-events’. The monotonous beating of a drum is, for example, a ‘strike spread-event’ while even a single note played on a violin is classed as a ‘spread strike-event’ because of the overtones that are immediately brought into play.
A further linguistic category distinguishes between events which are caused by events of the same type and events brought about by events of an altogether different type. In particular, a physical event brought about by a physical event is sharply distinguished from a physical event brought about by a mental or emotional event: the latter case exhibits ‘cause-effect dissimilarity’ and is usually, though not invariably, signalled by the suffix -ez. This linguistic distinction has its origin in the division of perceived reality into what is termed ‘the Manifest Occurrent’, very roughly the equivalent of our objective physical universe, and the Manifest Non-Occurrent, which consists of wishes, dreams, desires, myths, legends, archetypes, indeed the whole gamut of mental and internal emotional occurrences. Nonetheless, these two domains are not absolutely independent and the Sarlang themselves claimed to have developed a technique (known as witr-conseil) that transferred whole complexes of events from the Manifest Non-Occurrent into the Manifest Occurrent and, more rarely, in the opposite direction. Whatever the truth of this claim, the technique, supposing it ever existed, was lost for ever when the Sarlang, reaching the end of their term, committed mass extinction.
SH 13/1/18

 

Note 1 The standard argument against the ‘Linguistic Relativity Theory’ is that, if it were correct, translation would be impossible, which is not the case. This argument carries some weight but we must remember that almost all books successfully translated into English come from societies which share the same general religious and philosophic background and whose languages employ similar grammatical structures. Few books have been translated from so-called ‘primitive’ societies because such societies had a predominantly oral culture, while Biblical translators ‘going the other way’ have typically found it extremely difficult to get their message across when communicating with animists.
There may be something in Whorf’s claim that the Hopi world-view was closer to the modern ‘energy field’ paradigm than to the ‘force and particle’ paradigm of classical physics. ‘Energy’ (a term never used by Newton) is essentially a ‘potential’ entity since it refers to what an object ‘possesses within itself’, not what it is actually doing at any particular moment. Generally speaking, primitive societies were quite happy with ‘potential’ concepts, with the idea of a ‘latent’ force locked up within an object but not accessible to the five senses directly. It is in fact possible to formulate mechanics strictly in energy terms (via the Hamiltonian) rather than on the basis of Newton’s laws of motion, but no one ever learned mechanics this way, and doubtless never will, because it requires such complicated mathematics. It is hard to imagine a society committed from the start to an ‘energy’ viewpoint on the world ever being able to develop an adequate symbolic system to flesh out such an intuition. SH
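For the curious, the ‘energy formulation’ referred to here is standard textbook material. One writes the total energy as a function H(q, p) of positions and momenta, and the motion then follows from Hamilton’s equations:

\[ H(q, p) = T + V, \qquad \dot{q} = \frac{\partial H}{\partial p}, \qquad \dot{p} = -\frac{\partial H}{\partial q} \]

Nothing in these equations mentions ‘force’: everything is driven by how the energy varies with position and momentum, which is what makes the formulation congenial to an ‘energy’ world-view ─ and mathematically forbidding to a beginner.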

Catherine Pozzi: a modern mystic

Catherine Pozzi is one of those rare individuals who inhabit the strange hinterland between sensual and ‘spiritual’ ecstasy and steadfastly refuse to renounce either territory for the sake of the other. In a letter to Valéry she speaks of “seeing in my mind’s eye a sort of non-human paradise, made of a kind of transcendent material…. absolute solitude; the only possible inhabitants you and I”.
Descartes kick-started modern philosophy with his famous formula, Cogito ergo sum, ‘I think, therefore I am’. There has never really been a philosophical movement that takes first-hand physical sensation as its starting point ― empiricism only concerns itself with ‘sense-data’ and dismisses personal experience as ‘anecdotal’. In her more extravagant moments, Catherine Pozzi ― or Karin as she liked to be called ― viewed herself as a prophet ushering in an era when the life of the senses, science and religion would fuse. This is the theme of Peau d’Âme (‘Skin of the Soul’), a rambling would-be manifesto based on the premiss “JE-SENS-DONC-JE-SUIS” (‘I feel, therefore I am’). She adds the curious comment, “Ce n’est pas la pensée, c’est le sentir qui a besoin de JE” (‘It is not thought but feeling that requires an ‘I’’).
In her later years, Karin undertook serious biological and physical studies in an attempt to formulate a new theory of sensation in part based on the ideas of Weber. As she herself admits, she failed in this but one nonetheless finds striking anticipations of Dr. Sheldrake’s contemporary theory of ‘morphic resonance’. For, according to Karin, a single sensation, while being unique, somehow recapitulates all previous sensations of the same type and makes possible further repetitions in the future ― exactly Sheldrake’s idea.
What of that sequence of sensations, her life? Catherine Pozzi (1882–1934) was the daughter of a Parisian surgeon, Samuel Pozzi, while her mother maintained a salon frequented by Sarah Bernhardt, Colette and Proust. Karin started writing a Journal at the age of ten and kept this up for most of her life. She was a proto-feminist and an adolescent ‘rebel without a cause’ long before this became fashionable: indeed she would have been much more at home in California in the Sixties than in Paris during the Belle Époque.
The big question was how to put into practice her philosophy of mystical sensualism without prostrating herself before a man. One possibility was to seek a ‘kindred spirit’ rather than a lover; her adolescent Journal celebrates her passionate friendship with a young American girl who died a year after their first meeting on the ‘Day of the Holy Spirit’ (Pentecost), a timing that Catherine found significant. Love, kindred spirit, illness, death: these four strands were henceforth to be forever intertwined. Eventual marriage to a stockbroker with some literary pretensions barely survived the honeymoon, though it did produce a son. Three years later Karin was diagnosed with tuberculosis and spent much of the rest of her life undergoing cures and abusing prescribed mood-changing drugs. During WWI she met a young aviator, André Fernet, who firmly believed that ‘true love’ should be strictly Platonic. His death in action in 1916 (which Catherine claimed to have foreseen in a dream) was as much a fitting consummation of their love as it was an interruption.
In 1920 Karin embarked on a tempestuous affair with Paul Valéry, a married man with a family and a poet much less gifted than herself. She eventually disclosed the relation to Valéry’s wife, and henceforth the doors of Parisian society were firmly closed to this latter-day Anna Karenina. But she no longer cared.
Karin in her lifetime published only one or two short articles in magazines and (under the name C.K.) Agnès, a fictionalized account of her own adolescent crises. It is thus on the Six Poems that her reputation must rest and it is to be regretted that the NRF Gallimard edition of her works has them in the wrong order. Karin wrote in her Journal on 6 November 1934, “J’ai écrit VALE, AVE, MAYA, NOVA, SCOPOLAMINE, NYX. Je voudrais qu’on en fasse une plaquette”.

These intense, concise poems remind one of the ‘Stations of the Cross’, marking as they do the stages in an agonising spiritual journey, or perhaps resting points on the pathway that candidates for initiation followed at Eleusis. They could also be viewed as snapshots of a substance undergoing successive changes of state, the substance being, as it happens, a human being — and I fancy that Karin would have approved of this analogy. According to Karin’s ‘chemistry of the soul’, the essential elements of her current personality have already existed in previous reincarnations (‘MAYA’), will somehow persist after death albeit momentarily dissociated from each other (‘AVE’) and will eventually all come together again in a future time (‘NOVA’).
VALE (‘Farewell’) shows the pilgrim looking back at the old life she is now leaving for ever. She starts by lamenting her lover’s betrayal not so much of herself as of their shared life. But she turns the tables on him, as it were, by absorbing the high points of the experience into her body (not mind), so all has not been lost after all:
“Il [cet amour] est mon corps et sera mon partage / Après mourir”
(‘This love is my body and will remain my portion after death’).
In this way, what is worth remembering remains with her for ever duly integrated into her inner self.
AVE (‘Hail’) begins the sequence proper. It is a passionate invocation not to a real person but to a higher being — one can imagine Psyche writing such a poem between visits from the god Eros, while the tone also recalls Saint Theresa of Avila addressing Christ. For the being is at once beloved, guide and controller of her destiny: he will be responsible for her rebirth even though she will first be broken entirely into pieces (‘You will remake my name and image out of the thousand bodies dispersed by time’). The author intimates that, for a while, she will cease to exist as an individual, will be “sans nom et sans visage” (‘nameless and faceless’), but will be given a new ‘name and face’ which is yet the same, since underlying these transformations is a “vive unité” (‘living unity’).
The tone of this poem is rapt, ecstatic, and it ends by invoking “Cœur de l’esprit, O centre du mirage” (‘Heart of my spirit, centre of the mirage’). ‘Mirage’ is the world of the senses which Buddhism teaches is ‘maya’, illusion ― but, for Karin, the ‘centre’ of the mirage is not illusory.
In MAYA, the speaker returns to a previous idyllic life amongst the Mayans which she views as a recovery of her cosmic childhood ― ‘I retrace my steps into childhood’s abyss’. Indeed, the voyager hesitates, vainly wishing the process could stop here, in this lost paradise refound: “Que s’arrête le temps, que s’affaisse la trame” (‘If only time would stand still and the weft [of destiny] grow slack’).
After MAYA, NOVA comes as something of a shock. Instead of greeting with rapture a being from another realm, this time the spiritual traveller recoils with horror from a being (at once herself and another) that is cannibalistically sucking the speaker’s vitality: its birth is the present speaker’s death. She desperately pleads with it not to be born at all:
‘Undo ! Unmake yourself, dissolve, refuse to be
Denounce what was desired but not chosen by me’

After anticipations of the future and a reliving of the past, SCOPOLAMINE and NYX return us to the present. (Scopolamine is incidentally not a hallucinatory drug but a ‘truth drug’ used by the Nazis on prisoners of war, though prescribed to Karin by her doctor.) This time there is no holding back: on the contrary, the spiritual voyage is imaged as the launching of a spacecraft with (what we would now call) an astronaut aboard it. Whatever it is that survives physical decomposition is already detaching itself from its earthly frame:
        ‘My heart has left my life behind,
        The world of Shape and Form I’ve crossed,
        I am saved, I am lost
        Into the unknown am tossed
        A name without a past to find’ 

NYX was written in a single burst on Karin’s deathbed and brings us even closer to the moment of metamorphosis. The tone is a mixture of awe, regret, rapture and incomprehension:
‘O deep desire amazement spread abroad
O splendid journey of the spellstruck mind
O worst mishap O grace descended from above
O open door through which not one has passed

I know not why I sink, expire
Before the eternal place is mine
I know not who made me his prey
Nor who it was made me his love’

Note: The full text of the Six Poems, in both the original French and my translation, can be found on the website www.catherinepozzi.org or, if this expires, directly from the author.

SH 29/09/2016

 

NUMBER

What is number? By ‘number’ I mean whole number, a positive integer such as 2, 34, 1457… Conceptually, number is hard to pin down, and the modern mathematical treatment, which makes number depend on formal logical principles, does not help us to understand what number ‘really’ is. “He who sees things in their growth and first origins will obtain the clearest view of them” wrote Aristotle. So maybe we should start by asking why mankind ever bothered with numbers in the first place and what mental/physical capacities are required to use them with confidence.

It would seem that numbers were invented principally to record data. Animals get along perfectly well without a number system and many ‘primitive’ peoples possessed only a very rudimentary one, sometimes just one, two, three. The aborigine who often possessed no more than four tools did not need to count them and the pastoralist typically had a highly developed visual memory of his herd, an ability we have largely lost precisely because we don’t use it in our daily life (Note 1). It was the large, centrally controlled empires of the Middle East like Assyria and Babylon that developed both arithmetic and writing. The reasons are obvious: a hunter, goatherd or subsistence farmer in constant contact with his small store of worldly goods does not need records, but a state official in charge of a large area does. Numbers were developed for the purposes of trade, stock-taking, assessment of military strength and above all taxation ― even today I would guess that numbers are employed more for bureaucratic purposes than scientific or pure mathematical ones (Note 2).

What about the required mental capacities? There are two, and seemingly only two, essential requirements. Firstly, one must be able to make a hard and fast distinction between what is ‘singular’ and ‘plural’, between a ‘one’ and a ‘more-than-one’. Secondly, one must be capable of ‘pairing off’ objects taken from two different sets, what mathematicians call carrying out a ‘one-one correspondence’.

The difference between ‘one’ and ‘more-than-one’ is basic to human culture (Note 3). If we actually lived in a completely unified world, or felt that we did, there would be no need for numbers. Interestingly, in at least one ancient language, the word for ‘one’ is the same as the word for ‘alone’: without an initial sense of estrangement from the rest of the universe, number systems, and everything that is built upon them, would never have been invented.

‘Pairing off’ two sets, apples and pears for example, is the most basic procedure in the whole of mathematics and it was only relatively recently (end of the 19th century) that it became clear that the basic arithmetic operations, and numbers themselves, originate in messing about with collections of objects. Given any two sets, the first set A can either be paired off exactly, member for member, with the second set B, or it cannot be. In the negative case, the reason may be that A does not ‘stretch to’ B, i.e. is ‘less than’ (<), or alternatively that it ‘goes beyond’ B (>), i.e. has at least one object to spare after a pairing off. Given two clearly defined sets of objects, one, and only one, of these three cases must apply.
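The procedure is so mechanical that it can be turned directly into a few lines of code. Here is a minimal sketch in Python (the names are mine, purely illustrative) which decides which of the three cases applies without ever counting either collection:

    def pair_off(a, b):
        """Compare two collections by one-one correspondence alone:
        returns '<', '=' or '>' without counting either side."""
        a, b = list(a), list(b)
        while a and b:       # remove one member from each side in turn
            a.pop()
            b.pop()
        if not a and not b:
            return '='       # paired off exactly, member for member
        return '<' if not a else '>'   # the side that ran out first is 'less'

    print(pair_off(['apple'] * 4, ['pear'] * 6))   # '<' : A does not 'stretch to' B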

But as Piaget points out, the ability to pair off, say, apples and pears is not sufficient. A child may be able to do this but baulk at pairing off shoes and chairs, or apples and people. To be fully numerate, one needs to be able, at least in principle, to ‘pair off’ any two collections of discrete objects. Not only children but whole societies hesitated at this point, considering that certain collections of things are ‘not to be compared’ because they are too different. One collection might be far away like the stars, the other near at hand; one collection composed of living things, the other of dead things; and so on.

This gives us the cognitive baggage to be ‘numerate’ but a further step is necessary before we have a fully functioning number system. The society, tribe or social group needs to decide on a set of more or less identical objects (later marks) which are to be the standard set against which all other sets are to be compared. So-called ‘primitive’ peoples used shells, beans or sticks as numbers for thousands of years and within living memory the Wedda of Ceylon carried out transactions with bundles of ‘number sticks’. Yoruba state officials of the Benin empire in Nigeria performed quite complicated additions and subtractions using only heaps of cowrie shells. Note that the use of a standard set is an enormous cultural and social advance from simply pairing off specific sets of objects. The cowboy who had “so many notches on his gun” was (presumably) doing the latter, i.e. pairing off one dead man with one notch, and doubtless used other marks or words to refer to other objects. In many societies there were several sets of number words, marks or objects in use simultaneously, the choice depending on context or the objects being counted (Note 4).

So what are the criteria for the choice of a standard set? It is essential that the objects (or marks) chosen should be more or less identical since the whole principle of numbering is that individual differences such as colour, weight, shape and so on are irrelevant numerically speaking. Number is a sort of informational minimum: of all the information available we throw away practically everything since all that matters is how the objects concerned pair off with those of our standard set. Number, which is based on distinction by quantity, required a cultural and social revolution since it had to replace distinction by type, which was far more important to the hunter/food-gatherer ― comestible or poisonous, friend or foe, male or female.

Secondly, we want a plentiful supply of object numbers so the chosen ‘one-object’ must be abundant or easy to make, thus the use of shells, sticks and beans. Thirdly, the chosen ‘one-object’ must be portable and thus fairly small and light. Fourthly, it is essential that the number objects do not fuse or adhere to each other when brought into close proximity.

All these requirements make the choice of a basic number object (or object-number) by no means as simple as it might appear and eventually led to the use of marks on a background such as charcoal strokes on plaster, or knots in a cord, rather than objects as such.

Numbering has come a long way since the use of shells or scratches on bones but the ingenious improvements leading up to our current Hindu-Arabic place-value number system have largely obscured the underlying principles of numbering. The choice of a ‘one-object’, or mark, plus the ability to replicate this object or mark more or less indefinitely is the basis of a number system. The principal improvements subsequent to the replacement of number-objects by ‘number-marks’ have been ‘cipherisation’ and the use of bases.

In the case of cipherisation we allow a single word or mark to represent what is in fact a ‘more than one’, thus contradicting the basic distinction on which numbering depends. If we take 1 as our ‘one-mark’, 11111 ought by rights to represent what we call five and write as 5. Though this step was long in coming, the motivation is obvious: simple repetition is cumbersome and leads to error ― one can only with difficulty distinguish 1111111 from 111111. Verbal number systems seem to have led the way in this: no languages I know of say ‘one-one-one’ for 3 and very few simply repeat a ‘one-word’ and a ‘two-word’ (though there are examples of this).
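The step from tally to cipher can be illustrated in Python (a toy sketch only, with an arbitrary cipher table of my own devising):

    # One agreed symbol now stands for a whole run of one-marks.
    CIPHERS = {1: '1', 2: '2', 3: '3', 4: '4', 5: '5',
               6: '6', 7: '7', 8: '8', 9: '9'}

    def cipherise(tally):
        """Replace a run of identical one-marks, e.g. '11111', by a single cipher."""
        return CIPHERS[len(tally)]

    print(cipherise('11111'))    # '5'
    print(cipherise('1111111'))  # '7', no longer confusable with '111111'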

The use of bases, such as our base 10, depends on the idea of a ‘greater one’, i.e. an object that is at once ‘one’ and ‘more-than-one’ such as a tight bundle of similar sticks. And if we now extend the principle and make an even bigger bundle out of the previous bundles while keeping the ‘scaling’ the same, we have a fully fledged base number system. The choice of ten for a base is most likely a historical accident since we have exactly five digits (fingers and thumb) on each hand. The hand was the first computer and finger counting was widely practiced until quite recent times: the Venerable Bede wrote a treatise on the subject.
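The bundling procedure itself takes only half a dozen lines of Python (an illustrative sketch, names my own): repeatedly make bundles of the chosen size, then bundles of bundles, noting the loose counters left over at each stage.

    def bundle(counters, base=10):
        """Turn a heap of counters into base-`base` digits by repeated bundling."""
        digits = []
        while counters:
            counters, loose = divmod(counters, base)  # loose counters left over
            digits.append(loose)                      # least significant first
        return digits[::-1] or [0]

    print(bundle(4567))      # [4, 5, 6, 7]
    print(bundle(4567, 5))   # the same heap bundled in fives: [1, 2, 1, 2, 3, 2]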

The final advance was the use of ‘place value’: by shifting the mark to the left (or right in some cases) you make it ‘bigger’ by the same factor as your chosen base. Although we don’t see it like this, 4567 is a concise way of writing four thousands, five hundreds, six tens and seven ones.
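Reading a place-value numeral back again is the reverse procedure, as this companion sketch (again purely illustrative) shows: each shift to the left multiplies by the base.

    def expand(digits, base=10):
        """Read a place-value numeral back into the quantity it abbreviates."""
        total = 0
        for d in digits:               # leftmost (largest) digit first
            total = total * base + d   # shifting left multiplies by the base
        return total

    print(expand([4, 5, 6, 7]))  # 4567: four thousands, five hundreds, six tens, seven ones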

Human beings, especially in the modern world, spend a vast amount of time and effort moving objects from one location to another, from one country to another or from supermarket to kitchen. One set of possessions increases in size, another decreases, giving rise to the arithmetic operations of ‘addition’ and ‘subtraction’. And to make the vast array of material things manageable we need to divide them up neatly into subsets. And for stock-taking and related purposes, we need agreed numerical symbols for the objects and people being shifted about. A tribal society can afford to ignore numbers (but not shape), an empire cannot.    SH


Note 1  A missionary in South America noted with amazement that some tribes like the Abipone had only three number words but during migration could see at a glance from their saddles whether a single one of their dogs was missing out of the ‘immense horde’. From Menninger, Number Words and Number Symbols p. 10

Note 2  To judge by the sort of problems they tackled, the Babylonian and Egyptian scribes were obviously interested in numbers for their own sake as well, i.e. were already pure mathematicians, but the primary motivation was undoubtedly socio-economic. Even geometry, which comes from the Greek word for ‘land-measurement’, was originally developed by the Egyptians in order to tax peasants with irregularly shaped plots bordering the Nile.

 

Note 3  Some historians and ethnologists argue that the tripartite distinction ‘one-two-more than two’, rather than ‘one-many’, is the basic distinction. Thus the cases singular, dual and plural of certain ancient languages such as Greek.

 

Note 4  The Nootkans, for example, had different terms for counting or speaking of (a) people or salmon; (b) anything round in shape; (c) anything long and narrow. And modern Japanese retains ‘numerical classifiers’.

What is time?

What is time? Time is succession. Succession of what? Of events, occurrences, states. As someone put it, time is Nature’s way of stopping everything happening at once.

In a famous thought experiment, Descartes asked himself what it was not possible to disbelieve in. He imagined himself alone in a quiet room cut off from the bustle of the world and decided he could, momentarily at least, disbelieve in the existence of France, the Earth, even other people. But one thing he absolutely could not disbelieve in was that there was a thinking person, cogito ergo sum (‘I think, therefore I am’).
Those of us who have practiced meditation, and many who have not, know that it is quite possible to momentarily disbelieve in the existence of a thinking/feeling person. But what one absolutely cannot disbelieve in is that thoughts and bodily sensations of some sort are occurring and, not only that, that these sensations (most of them anyway) occur one after the other. One outbreath follows an inbreath, one thought leads on to another and so on and so on until death or nirvana intervenes. Thus the grand conclusion: There are sensations, and there is succession.  Can anyone seriously doubt this? 

Succession and the Block Universe

That we, as humans, have a very vivid, and more often than not acutely painful, sense of the ‘passage of time’ is obvious. A considerable body of the world’s literature is devoted to bewailing the transience of life, while one of the world’s four or five major religions, Buddhism, has been well described as an extended meditation on the subject. Cathedrals, temples, marble statues and so on are attempts to defy the passage of time: ars longa, vita brevis.
However, contemporary scientific doctrine, as manifested in the so-called ‘Block Universe’ theory of General Relativity, tells us that everything that occurs happens in an ‘eternal present’, the universe ‘just is’. In his latter years, Einstein took the idea seriously enough to mention it in a letter of consolation to the son of his lifelong friend, Besso, on the occasion of the latter’s death. “In quitting this strange world he [Michel Besso] has once again preceded me by a little. That doesn’t mean anything. For those of us who believe in physics, this separation between past, present and future is an illusion, however tenacious.”
Never mind the mathematics, such a theory does not make sense. For, even supposing that everything that can happen during what is left of my life has in some sense already happened, this is not how I perceive things. I live my life day to day, moment to moment, not ‘all at once’. Just possibly, I am quite mistaken about the real state of affairs but it would seem nonetheless that there is something not covered by the ‘eternal present’ theory, namely my successive perception of, and participation in, these supposedly already existent moments (Note 1). Perhaps, in a universe completely devoid of consciousness,  ‘eternalism’ might be true but not otherwise.
Barbour, the author of The End of Time, argues that we do not ever actually experience ‘time passing’. Maybe not, but this is only because the intervals between different moments, and the duration of the moments themselves, are so brief that we run everything together like movie stills. According to Barbour, there exists just a huge stack of moments, some of which are interconnected, some not, but this stack has no inherent temporal order. But even if it were true that all that can happen is already ‘out there’ in Barbour’s Platonia (his term), picking a pathway through this dense undergrowth of discrete ‘nows’ would still be a successive procedure.
I do not think time can be disposed of so easily. Our impressions of the world, and conclusions drawn by the brain, can be factually incorrect ― we see the sun moving around the Earth for example ― but to deny either that there are sense impressions or that they appear successively, not simultaneously, strikes me as going one step too far. As I see it, succession is an absolutely essential component of lived reality and either there is succession or there is just an eternal now; I see no third possibility.
What Einstein’s Special Relativity does demonstrate is that there is seemingly no absolute ‘present moment’ applicable right across the universe (because of the speed of light barrier). But in Special Relativity at least, succession and causality still very much exist within any particular local section, i.e. inside a particular event’s light cone. One can only surmise that the universe as a whole must have a complicated mosaic of successiveness made up of interlocking pieces (tesserae).

Irreversibility
In various areas of physics, especially thermodynamics, there is much discussion of whether certain sequences of events are reversible or not, i.e. could take place other than in the usual observed order. This is an important issue but is a quite different question from whether time (in the sense of succession) exists. Were it possible for pieces of broken glass to spontaneously reform themselves into a wine glass, this process would still occur successively and that is the point at issue.

Time as duration

‘Duration’ is a measure of how long something lasts. If time “is what the clock says” as Einstein is reported to have once said, duration is measured by what the clock says at two successive moments (‘times’). The trick is to have, or rather construct, a set of successive events that we take as our standard set and relate all other sets to this one. The events of the standard set need to be punctual and brief, the briefer the better, and the interval between successive events must be both noticeable and regular. The tick-tock of a pendulum clock provided such a standard set for centuries though today we have the much more regular oscillations of quartz crystals or the microwave transitions of caesium atoms.

Continuous or discontinuous?

 A pendulum clock records and measures time in a discontinuous fashion: you can actually see, or hear, the minute or second hand flicking from one position to another. And if we have an oscillating mechanism such as a quartz crystal, we take the extreme positions of the cycle which comes to the same thing.
However, this schema is not so evident if we consider ‘natural’ clocks such as sundials which are based on the apparently continuous movement of the sun. Hence the familiar image of time as a river which never stops flowing. Newton viewed time in this way, which is why he analysed motion in terms of ‘fluxions’, or ‘flowings’. Because of the Calculus, which Newton invented (independently of Leibniz), it is the continuous approach which has overwhelmingly prevailed in the West. But a perpetually moving object, or one perceived as such, is useless for timekeeping: we always have to home in on specific recurring configurations such as the longest or shortest shadow cast. We have to freeze time, as it were, if we wish to measure temporal intervals.

Event time

The view of time as something flowing and indivisible is at odds with our intuition that our lives consist of a succession  of moments with a unique orientation, past to future, actual to hypothetical. Science disapproves of the latter common sense schema but is powerless to erase it from our thoughts and feelings: clearly the past/present/future schema is hard-wired and will not go away.

If we dispense with continuity, we can also get rid of  ‘infinite divisibility’ and so we arrive at the notion, found in certain early Buddhist thinkers, that there is a minimum temporal (and spatial) interval, the ksana. It is only recently that physicists have even considered the possibility that time  is ‘grainy’, that there might be ‘atoms of time’, sometimes called chronons. Now, within a minimal temporal interval, there would be no possible change of state and, on this view, physical reality decomposes into a succession of ‘ultimate events’ occupying  minimal locations in space/time with gaps between these locations. In effect, the world becomes a large (but not infinite) collection of interconnected cinema shows proceeding at different rates.

Joining forces with time

The so-called ‘arrow of time’ is simply the replacement of one localized moment by another and the procedure is one-way because, once a given event has occurred, there is no way that it can be ‘de-occurred’. Awareness of this gives rise to anxiety ― “The Moving Finger writes; and, having writ,/ Moves on: nor all thy Piety nor Wit/ Shall lure it back to cancel half a Line…” Most religious, philosophic and even scientific systems attempt to allay this anxiety by proposing a domain that is not subject to succession, is ‘beyond time’. Thus Plato and Christianity, the West’s favoured religion. And even if we leave aside General Relativity, practically all contemporary scientists have a fervent belief in the “laws of physics” which are changeless and in effect wholly transcendent.
Eastern systems of thought tend to take a different approach. Instead of trying desperately to hold on to things such as this moment, this person, this self, Buddhism invites us to  ‘let go’ and cease to cling to anything. Taoism goes even further, encouraging us to find fulfilment and happiness by identifying completely with the flux of time-bound existence and its inherent aimlessness. The problem with this approach is, however, that it is not clear how to avoid simply becoming a helpless victim of circumstance. The essentially passive approach to life seemingly needs to be combined with close attention and discrimination ― in Taoist terms, Not-Doing must be combined with Doing.

Note 1 And if we start playing with the idea that  not only the events but my perception of them as successive is already ‘out there’, we soon get involved in infinite regress.

Note 2 I have attempted to develop this schema on the website www.ultimateeventtheory.com

Even and Odd

Animals and so-called primitive peoples do not bother to make nice distinctions between entities on the basis of number and even today, when deprived of technological aids, we are not at all good at it (Note 1). What people do ‘naturally’ is to make distinctions of type not number, and the favourite principle of division by type is the two-valued either/or principle. Plato thought that this principle, dichotomy, was so fundamental that all knowledge was based on it; the reason may be that the brain itself works in this way: a nerve synapse is either ‘on’ or ‘off’. Psychologically human beings have a very strong inclination to proceed by straight two-valued distinctions, light/dark, this/that, on/off, sacred/profane, Greek/Barbarian, Jew/Gentile, good/evil and so on — more complex gradations are only introduced later and usually with great reluctance. Science has eventually recognized the complexity of nature and, apart from gender, there are not many true scientific dichotomies left, though we still have the classification of animals into vertebrates and invertebrates.

Numbers themselves very early on got classified into even and odd, the most fundamental numerical distinction after the classification into one and many, which is even more basic.

The classification even/odd is radical: it provides what modern mathematicians call a partition of the whole set. That is, the classification principle is exhaustive: with the possible exception of the unit, all numbers fall into one or other of the two categories. Moreover, the two classes are mutually exclusive: no number appearing in the list of evens will appear in the list of odds. This is by no means true of all classification principles for numbers, as one might perhaps at first assume. Numbers can be classified, for example, as triangular or as rectangular according to whether they can be (literally) made into equilateral triangles or rectangles. But ΟΟΟΟΟΟ turns out to be both since it can be formed either into a triangle or a rectangle:

ΟΟΟ                                         ΟΟΟ
ΟΟΟ                                         ΟΟ
                                            Ο
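The double classification is easy to verify by brute force. In the following Python sketch (the definitions are mine, for illustration only), six counters pass both tests:

    def is_rectangular(n):
        """Can n counters form a rectangle at least two counters wide each way?"""
        return any(n % w == 0 and n // w >= 2 for w in range(2, n))

    def is_triangular(n):
        """Can n counters form rows of exactly 1, 2, 3, ... counters?"""
        row, total = 1, 0
        while total < n:
            total += row
            row += 1
        return total == n

    print(is_rectangular(6), is_triangular(6))   # True True: six is both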
The Greeks, like practically all cultures in the ancient world, viewed the odd and even numbers as male and female respectively — presumably because a woman has ‘two’ breasts and a male only one penis. And since oddness (though in Greek the term did not have the same associations as in English) was nonetheless defined with respect to evenness, and not the reverse, this made an odd number a sort of female manqué. This must have posed a problem for their strongly patriarchal society but the Greek philosophers and mathematicians got round it by arguing that ‘one’ (and not ‘two’) was the basis of the number system and that ‘one’ was the ‘father of all numbers’.

On the other hand a matriarchal society, or a species where females were dominant, would almost certainly, and with better reasoning, have made ‘one’ a female number, the primeval egg from which the whole numerical progeny emerged. Those who consider that mathematics is in some sense ‘eternally true’ should reflect on the question of how mathematics would have developed within a hermaphroditic species, or in a world where there were three and not two humanoid genders as in Iain M. Banks’s science-fiction novel The Player of Games.

Evenness is not easy to define — nor, for that matter, to recognize, as I have just realized: coming across an earlier version of this section, I found I was momentarily incapable of deciding which of the rows of balls pictured at the head of this chapter represented odd or even numbers. We have to appeal to some very basic feeling for ‘symmetry’ — what is on one side of a dividing line is exactly matched by what is on the other side of it. A definition could thus be:

If you can pair off a collection within itself and nothing remains over, then the collection is called even; if you cannot do this, the collection is termed odd.
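Turned into a procedure, the definition runs as follows (a minimal Python sketch of the pairing-off test, names my own):

    def is_even_by_pairing(collection):
        """Pair the collection off within itself, two at a time:
        even if nothing remains over, odd if a single item is left."""
        items = list(collection)
        while len(items) >= 2:
            items.pop()
            items.pop()          # remove one matched pair
        return not items         # True (even) if nothing is left over

    print(is_even_by_pairing('ΟΟΟΟΟΟ'))  # True
    print(is_even_by_pairing('ΟΟΟΟΟ'))   # False: one Ο left unpaired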

This makes oddness anomalous and less basic than evenness, which intuitively one feels to be right — we would not, I think, ever dream of defining oddness first and then say “If a collection is not odd, it is even”. And although it is only in English and a few other languages that ‘odd’ also means ‘strange’, the pejorative sense the word has picked up suggests that we expect, or at least desire, things to match up, i.e. to be ‘even’ — the figure of Justice holds a pair of evenly balanced scales.

The sense of even as ‘level’ may well be the original one. If we have two collections of objects which, individually,  are more or less identical, then a pair of scales remains level if the collections are placed on each arm of the lever (at the same distance).  One could define even and odd thus pragmatically:

“If a collection of identical standard objects can be divided up in a way which keeps the arms of a balance level, then the collection is termed even. If this is not possible it is termed odd.”

This definition avoids using the word ‘two’, which is preferable since the sense of things being ‘even’ is much more fundamental than a feeling for ‘twoness’ — for this reason the distinction even/odd, like the even more fundamental one/many, belongs to the stage of pre-numbering rather than that of numbering.

Early man would not have had a pair of scales, of course, but he would have been familiar with the procedure of ‘equal division’, and the simplest way of dividing up a collection of objects is to separate it into two equal parts. If there was an item left over it could simply be thrown away. Evenness is thus not only the simplest way of dividing up a set of objects but the principle of division which makes the remainder a minimum: any other method of division  runs the risk of having more objects left over.

Euclid’s definition is that of equal division. He says “An even number is that which is divisible into two equal parts” (Elements, Book VII, Definition 6) and “An odd number is that which is not divisible into two equal parts, or that which differs by a unit from an even number” (Elements, Book VII, Definition 7). Incidentally, in Euclid ‘number’ not only always has the sense ‘positive integer’ but has a concrete sense — he defines ‘number’ as a “multitude composed of units”.

Note that Euclid defines odd first privatively (by what it is not) and then as something deficient with reference to an even number. The second definition is still with us today: algebraically the formula for the odd numbers is (2n-1) where n is given the successive values 1, 2, 3…. or sometimes (in order to leave 1 out of it) by giving n the successive values 2, 3, 4….  In concrete terms,  we have the sequence

Ο     ΟΟ     ΟΟΟ     …

Duplicating them gives us the ‘doubles’ or even numbers

Ο     ΟΟ     ΟΟΟ     …
Ο     ΟΟ     ΟΟΟ     …

and removing a unit each time gives us the ‘deficient’ odd numbers.

The unit itself is something out on its own and was traditionally regarded as  neither even nor odd. It is certainly not even according to the ‘equal division’ definition since it cannot be divided at all (within the context of whole number theory) and it cannot be put on the scales without disturbing equilibrium. In practice it is often convenient to treat the unit as if it were odd, just as it is to consider it a square number, cube number and so forth, otherwise many theorems would have to be stated twice over. Context usually makes it clear whether the term ‘number’ includes the unit or not.

Note that distinguishing between even and odd has nothing to do with counting or even with distinguishing between greater or less – knowing that a number is even tells you nothing about its size. And vice-versa, associating a number word or symbol with a collection of objects will not inform you as to  whether the quantity is even or odd — there are no ‘even’ or ‘odd’ endings to the spoken word like those showing whether something is singular or plural,  masculine or feminine.

It is significant that we do not have words for numbers which, for example, are multiples of four or which leave a remainder of one unit when divided into three. (The Greek mathematicians did, however, speak of ‘even-even’ numbers.) If our species had three genders instead of two, as in the world described in The Player of Games, we would maybe tend to divide things into threes and classify all numbers according to whether they could be divided into three parts exactly, were a counter short or a counter over. This, however, would have made things so much more complicated that such a species would most likely have taken even longer to develop numbering and arithmetic than in our own case.

The distinction even/odd is the first and simplest case of what is today called a congruence. The integers can be separated out into so-called equivalence classes according to the remainder left when they are divided by a given number termed the modulus. All numbers are in the same class (modulus 1) since, when they are separated out into ones, there is only one possible remainder: nothing at all. In Gauss’s notation the even numbers are the numbers which leave a remainder of zero when divided by 2, i.e. are ‘0 (mod 2)’ where mod is short for modulus. And the odd numbers are all 1 (mod 2), i.e. leave a unit when separated into twos. What is striking is that although the distinction between even and odd, i.e. between numbers that are 0 or 1 (mod 2), is prehistoric, congruence arithmetic as such was invented by Gauss a mere two centuries ago.
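In Python the sorting of numbers into equivalence classes takes only a few lines (a sketch, with names of my own choosing):

    def residue_classes(numbers, modulus):
        """Sort numbers into Gauss's equivalence classes by remainder."""
        classes = {r: [] for r in range(modulus)}
        for n in numbers:
            classes[n % modulus].append(n)
        return classes

    print(residue_classes(range(1, 11), 2))
    # {0: [2, 4, 6, 8, 10], 1: [1, 3, 5, 7, 9]}, i.e. the evens and the odds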

In concrete terms we can set up equivalence classes relative to a given modulus by arranging collections of counters (in fact or in imagination) between parallel lines of set width starting with unit width, then a width which allows two counters only, then three and so on. This image enables us to see at once that the sum of any two or more even numbers is always even.

And since an odd number has an extra Ο, a pair of odd numbers have each an extra unit and so, if we fit them together so that the extra units face each other, we have an even result. Thus “Even plus even equals even” and “Odd plus odd equals even” are not just jingles we have to learn at school but correspond to what actually happens if we try to arrange actual counters or squares so that they match up.

We end up with the following two tables, which may well have been among the earliest ever drawn up by mathematicians.

      +   | odd   even                ×   | odd   even
    ------+-------------            ------+-------------
     odd  | even  odd                odd  | odd   even
     even | odd   even               even | even  even
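Both tables can be generated mechanically from remainders on division by two, 0 standing for ‘even’ and 1 for ‘odd’, as this small Python sketch (illustrative only) confirms:

    PARITY = {0: 'even', 1: 'odd'}

    for name, op in (('+', lambda a, b: a + b), ('x', lambda a, b: a * b)):
        print(name.rjust(5), 'odd'.rjust(6), 'even'.rjust(6))
        for a in (1, 0):                     # odd row first, as in the tables above
            cells = [PARITY[op(a, b) % 2] for b in (1, 0)]
            print(PARITY[a].rjust(5), cells[0].rjust(6), cells[1].rjust(6))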

 

All this may seem so obvious that it is hardly worth stating but simply by appealing to these tables many results can be deduced that are far from being self-evident. For example, we find by experience that certain concrete numbers can be arranged as rectangles and that, amongst these rectangular numbers, some can be separated into two smaller rectangles and some cannot. However, if I am told that a certain collection can be arranged as a rectangle with one side just a unit greater than the other, then I can immediately deduce that it can be separated into two equal smaller rectangles. Why am I so sure of this? Because, referring to the tables above,

1.) the ‘product’ of an even and an odd number is even;
2.) an even number can by definition always be separated into two equal parts.

I could deduce this even if I were a member of a society which had no written number system and no more than a handful of number words.
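The deduction can also be checked mechanically. A short Python sketch (illustrative only): a rectangle with sides n and n+1 always holds an even number of counters, and halving along the even side exhibits the two smaller rectangles.

    def split_rectangle(n):
        """Split an n by (n+1) rectangle into two equal smaller rectangles."""
        assert (n * (n + 1)) % 2 == 0     # even times odd is always even
        if n % 2 == 0:                    # halve along whichever side is even
            return (n // 2, n + 1), (n // 2, n + 1)
        return (n, (n + 1) // 2), (n, (n + 1) // 2)

    for n in range(2, 6):
        print((n, n + 1), '->', split_rectangle(n))
    # (2, 3) -> ((1, 3), (1, 3))  and so on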

This is only the beginning: the banal distinction between even and odd, and reference to the entries in the tables above, crops up in a surprising number of proofs in number theory. The famous proof that the square root of 2 is not a rational number — as we would put it — is based on the fact that no quantity made up of so many equal bits can be at once even and odd. (If √2 = p/q with p/q in lowest terms, then p² = 2q², so p is even, say p = 2r; but then q² = 2r², making q even as well, and a fraction whose top and bottom are both even is not in lowest terms.)          SH 5/03/15

 

Note 1  This fact (that human beings are not naturally very good at assessing numerical quantity) is paradoxical since mankind is the numerical animal par excellence. Mathematics is the classic case of the weakling who makes himself into an Arnold Schwarzenegger. It is because we are so bad at quantitative assessment that playing cards are obliged to show the number symbols in the corner of the card and why the dots on a dice are arranged in set patterns to avoid confusion.

 

Poetry and Contemporary Attitudes towards Death

Note: On the brink of undergoing my first major surgical intervention, I came across the following piece amongst my papers. It was apparently written two years after the death of my father, an event which took place at least twelve to fifteen years ago. There are plenty of things I could add but I thought it best to leave the piece unchanged. SH

Increasingly, people today are arranging their own funeral services or those of their family and partners, whether the service is a standard cremation or a woodland burial. Instead of, or in association with, passages from the Bible or other sacred books, there is an increasing demand for readings from contemporary or at least relatively modern authors. Unfortunately, leafing through late twentieth-century literature one finds very little indeed on the subject of death and that little is generally of extremely poor quality. Why is this? The present society, whatever its other merits, seems incapable of facing up to death and by and large we sweep it under the carpet and pretend it isn’t really there. For most of my life death was something that happened in the past or to other people and I saw my first actual dead body (my father’s) only a couple of years ago when I was in my late fifties. Confronted with death, one tends to be flummoxed, embarrassed, at a loss. Although I suppose one could write a good poem saying exactly this ─ that one doesn’t really know what to feel ─ I don’t think a funeral service would be the place to read it out loud.

What exactly do most people require from readings at a funeral service? I think most people require something solemn. Since the language of the King James Bible and the Prayer Book is solemn in an absolutely magnificent way, a lot of people who don’t believe a word of it are quite happy for extracts to be read at funerals ─ they ‘sound right’ and up to a point that is all that really matters. Death can be treated as a joke but a funeral service is not, one feels, the place for fooling around. I have been to a funeral where supposedly funny pieces were read: practically everyone present, including myself, found this tasteless and objectionable. (Irish Catholics have their wakes, of course, but they have a full-blown funeral service first.) And as it happens, modern poetry ─ I mean poetry from the nineteen-twenties onwards ─ has very largely been against solemnity, against anything high-sounding, and has become deliberately prosaic and matter-of-fact. That is all very well but it goes some way to explaining why few people today can write well about death, for the theme of death somehow does require one to pull out all the stops.

Also, people generally desire to have something consoling if possible read out at a funeral service. Once again, the traditional religions score heavily here since they do offer serious consolations, in particular the consolation that, contrary to appearances, death is not the end. Humanism finds it hard to compete here.

What is indubitable when confronted with a corpse is that something has gone, has ended. How can one attempt to console oneself for this? One solution is to argue that the ‘true self’ does not reside in the body and so does not die with the body. It may surprise some people to learn that this was not originally a Christian doctrine ─ Christianity still officially affirms the ‘resurrection of the body’ ─ but a Greek idea which some historians trace even further back to the ‘out-of-the-body’ experiences of Siberian shamans (see Note). Today, however, science has considerably weakened belief in the reality of this incorporeal entity, the soul; also, we are not too keen today on a system of belief which implicitly or explicitly downgrades the body. The ‘soul’ option is losing ground fast.

This more or less only leaves two broad options: belief in reincarnation and pantheism. Most people today who consider themselves pagans seem to believe in reincarnation or pantheism or both combined: certainly I myself am attracted to both. The difficulty with reincarnation as a ‘solution’ to the problem of death is that, either you believe in an immaterial ‘something’ which keeps on persisting, in which case you are driven back to the ‘soul option’, or, as in traditional Buddhism, you deny that there is anything that persists, in which case the whole system ceases to be so consoling. Hinduism, or certain forms of it, affirms that the ‘individual soul’ (atman) eventually gets merged completely in the Absolute (Brahman) from which it came.

Although Plato and some Greeks and Romans believed in reincarnation, the idea is basically Indian and, if one is looking for passages in the English language affirming reincarnation there is not a lot available.

Pantheism has the great advantage that it is actually in some sense true! We do end up merged into ‘Nature’, and modern science, in affirming that “energy cannot be destroyed but only changed in form” (First Law of Thermodynamics), has actually reinforced pantheistic belief. The difficulties are of a different order. The Romantics identified ‘Nature’ with everything admirable and good, but since Darwin, and even worse since Dawkins, it seems we have to believe that Nature demonstrates the fascist principle of ‘survival of the fittest’. Also, since Nature obviously cares nothing for the individual, it is debatable to what extent pantheism can provide consolation when confronted with the death of an individual.

Finally, one should perhaps mention a sort of paradoxically ‘consoling’ solution to the problem of death, namely the belief that there is, and can be, no consolation. This option can at least claim to look things squarely in the face ─ or does it?

Sebastian Hayes

Note: E. R. Dodds takes this view in Chapter V of his remarkable book The Greeks and the Irrational.

Case Study in Eventrics: Adolf Hitler

                “There is a tide in the affairs of men
                Which, taken at the flood, leads on to fortune”

                                                Shakespeare, Julius Caesar

In a previous post I suggested that the three most successful non-hereditary ‘power figures’ in Western history were Cromwell, Napoleon and Hitler. Since none of the three had advantages that came by birth, as, for example, Alexander the Great or Louis XIV did, the meteoric rise of these three persons suggests either very unusual abilities or very remarkable ‘luck’.
From the viewpoint of Eventrics, success depends on how well a particular person fits the situation and there is no inherent conflict between ‘luck’ and ability. Quite the reverse: the most important ‘ability’ that a successful politician, military commander or businessman can have is precisely the capacity to handle events, especially unforeseen ones. In other words, success to a considerable extent depends on how well a person handles his or her ‘good luck’ if and when it occurs, or how well a person can transform ‘bad luck’ into ‘good luck’. Whether everyone gets brilliant opportunities that they fail to seize one may doubt; but certainly most of us are blind to the opportunities that do arise and, when not blind, lack the self-confidence to seize such an offered ‘chance’ and turn it to our advantage.
The above is hardly controversial though it does rule out the view that everything is determined in advance, or, alternatively, the exact opposite, that ‘more or less anything can happen at any time anywhere’. I take the commonsense view that there are certain tendencies that really exist in a given situation. It is, however, up to the individual to reinforce or make use of such ‘event-currents’ or, alternatively, to ignore them and, as it were, pass by on the other side like the Levite in the Parable of the Good Samaritan. The driving forces of history are not people but events and ‘event dynamics’; however, this does not reduce individuals to the status of puppets, far from it. Either through instinct or correct analysis (or a judicious mixture of the two) the successful person identifies a ‘rising’ event current, gets with it if it suits him or her, and abandons it abruptly when it ceases to be advantageous. This is easy enough to state, but supremely difficult to put into practice. Everyone who speculates on the Stock Exchange knows that the secret of success is no secret at all: it consists in buying when the price of stock is low but just about to rise, and selling when the price is high but just about to fall. For one Soros, there are a hundred thousand or maybe a hundred million ‘ordinary investors’ who either fail entirely or make very modest gains.
But why, one might ask, is it advantageous to identify and go with an ‘event trend’ rather than simply decide what you want to do and pursue your objective off your own bat? Because the trend will do a good deal of the work for you: the momentum of a rising trend is colossal and, for a while, seems unstoppable. Pit yourself against a rising trend and it will overwhelm you; identify yourself with it and it will take you along with a force equivalent to that of a million individuals. If you can spot coming trends accurately and go with them, you can succeed with only moderate intelligence, knowledge, looks, connections, what have you.

Is charisma essential for success?

It is certainly possible to succeed spectacularly without charisma, since Cardinal Richelieu, the most powerful man in the France and Europe of his day, had none, whereas Joan of Arc, who had plenty, had a pitifully short career. Colbert, finance minister of Louis XIV, is another example; indeed, in the case of ministers it is probably better not to stick out too much from the mass, even to the extent of appearing a mediocrity.
Nonetheless, Richelieu and Colbert lived during an era when it was only necessary to obtain the support of one or two big players such as kings or popes, whereas, in a democratic era, it is necessary to inspire and fascinate millions of ‘ordinary people’. No successful modern dictator lacked charisma: Stalin, Mao Tse-tung and Hitler all had plenty and this made up for much else. Charisma, however, is not enough, or not enough if one wishes to remain in power: to do this, an intuitive or pragmatic grasp of the behaviour of event patterns is a sine qua non and this is something quite different from charisma.

Hitler as failure and mediocrity

Many historians, especially British, are not just shocked but puzzled by Hitler ─ though less now than they were fifty years ago. For how could such an unprepossessing individual, with neither looks, polish, connections nor higher education, succeed so spectacularly? One British newspaper writer described Hitler, on the occasion of his first big meeting with Mussolini, as looking like “someone who wanted to seduce the cook”.
Although he had participated in World War I and shown himself to be a dedicated and brave ‘common soldier’, Hitler never had any experience as a commander on the battlefield even at the level of a platoon ─ he was a despatch runner who was told what to do (deliver messages) and did it. Yet this was the man who eventually got control of the greatest military machine in history and blithely disregarded the opinions of seasoned military experts, initially with complete success. Hitler also proved to be a vastly successful public speaker, but he never took elocution lessons and, when he started, even lacked the experience of handling an audience that an amateur actor or stand-up comedian possesses.
Actually, Hitler’s apparent disadvantages proved to be more of a help than a hindrance once he had begun to make his mark, since they gave his adversaries and rivals the erroneous impression that he would be easy to manipulate and outwit. Hitler learned about human psychology, not by reading learned tomes written by Freud and Adler, but by eking out a precarious living in Vienna as a seller of picture postcards and sleeping in workingmen’s hostels. This was learning the hard way which, as long as you last the course (which the majority don’t), is generally the best way.
It is often said that Hitler was successful because he was ruthless. But ruthlessness is, unfortunately, not a particularly rare human trait, at any rate in the lower levels of a not very rich society. Places like Southern Italy or Colombia by all accounts have produced and continue to produce thousands or tens of thousands of exceedingly ruthless individuals, but how many ever get anywhere? At the other end of the spectrum, one could argue that it is impossible to be a successful politician without a certain degree of ruthlessness ─ though admittedly Hitler took it to virtually unheard of extremes. Even ‘good’ successful political figures such as Churchill were ruthless enough to happily envisage dragging neutral Norway into the war (before the Germans invaded), to authorise the deliberate bombing of civilian centres and even to approve in theory the use of chemical weapons. Nor did de Gaulle bother unduly about the bloody repercussions for the rural population that the activities of partisans would inevitably bring  about. Arguably, if people like Churchill and de Gaulle had not had a substantial dose of ‘ruthlessness’ (aka ‘commitment’), we would have lost the war long before the Americans ever got involved  ─ which is not, of course, to put such persons on a level with Hitler and Stalin.
To return to Hitler. Prior to the outbreak of WWI, Hitler, though by all accounts already quite as ruthless and opinionated as he subsequently proved himself to be in a larger arena, was a complete failure. He had a certain, rather conventional, talent for pencil drawing and some vague architectural notions but that is about it. Whether Hitler would or could have made a successful architect we shall never know, since he was twice refused entry by the Vienna Academy of Fine Arts. He certainly retained a deep interest in the subject and did succeed in spotting and subsequently promoting an architect of talent, Speer. But there is no reason to think we would have heard of Hitler if he had been accepted as an architectural student and subsequently articled to a Viennese firm of Surveyors and Architects.
As for public speaking, Hitler didn’t do any in his Vienna pre-war days, only discovering his flair in Munich in the early twenties. And although Hitler enlisted voluntarily for service at the outbreak of  WWI, he was for many years actually a draft-dodger wanted for national service by Austria, his country of birth. Hardly a promising start for a future grand military strategist.

Hitler’s Decisive Moment : the Beer Hall Putsch

Hitler did, according to the few accounts we have by people who knew him at the time, have boyhood dreams of one day becoming a ‘famous artist’ — but what adolescent has not? Certainly, Hitler did not, in his youth and early manhood, see himself as a future famous political or military figure, far from it. Even when Hitler started his fiery speeches about Germany’s revival and the need for strong government, he did not at first cast himself in the role of ‘Leader’. On the contrary, it would seem that awareness of his own mission as saviour of the German nation came to him gradually and spasmodically. Indeed, one could argue that it was only after the abortive Munich Beer-Hall putsch that Hitler decisively took on this role: it was in a sense thrust on him.
The total failure of this rather amateurish plot to take over the government of Bavaria by holding a gun to the governor’s face and suchlike antics turned out to be the turning-point of his thinking, and of his life. In Quattrocento Italy it was possible to seize power in such a way ─ though only the Medici, with big finance behind them, really succeeded on a grand scale ─ and similar coups have succeeded in modern Latin American countries. But in an advanced industrial country like Germany where everyone had the vote, such methods were clearly anachronistic. Even if Hitler and his supporters had temporarily got control of Munich, they would easily have been put down by central authority: they would have been seven-day wonders and no more. It was this fiasco that decided Hitler to obtain power via the despised ballot box rather than by the more glamorous but outmoded methods of an Italian condottiere.
The failed Beer-Hall putsch landed Hitler in court and, subsequently, in prison; and most people at the time thought this would be the end of him. However, Hitler, like Napoleon before him in Egypt after the destruction of his fleet, was a strong enough character not to be brought down by the disaster but, on the contrary, to view it as a golden opportunity. This is an example of the ‘law’ of Eventrics that “a disadvantage, once turned into an advantage, is a greater advantage than a straightforward advantage”.
What were the advantages of the situation? Three at least. Firstly, Hitler now had a regional and soon a national audience for his views and he lost no time in making the court-room a speaker’s platform, with striking success. His ability as a speaker was approaching its zenith: he had the natural flair and already some years of experience. Hitler was given an incredibly lenient sentence and was even at one point thanked by the judge for his informative replies concerning Germany’s recent history! Secondly, while in prison, Hitler had the time to write Mein Kampf which, given his lax, bohemian life-style, he would probably never have got round to doing otherwise. And his temporary court-room celebrity meant the book was sure to sell if written and published rapidly.
Thirdly, and perhaps most important of all, the various nascent extreme Right groups made little or no headway with the ‘leader’ in prison which confirmed them in the view that  Hitler was indispensable. Once out of prison, he found himself without serious competitors on the Right and his position stronger than ever.
But the most important outcome was simply the realization that the forces of the State were far too strong to be overthrown by strong-arm tactics. The eventual break with Röhm and the SA was an inevitable consequence of Hitler’s fateful decision to gain power within the system rather than by openly opposing it.

Combination of opposite abilities

 As a practitioner of Eventrics or ‘handler of events’, Hitler held two trump cards that are rarely dealt to the same individual. Firstly, even though his sense of calling seems to have come relatively late, by the early nineteen-thirties he was entirely convinced that he was a man of destiny. He is credited with the remarkable statement, very similar to one made by Cromwell, “I follow the path set by Providence with the precision and assurance of a sleepwalker”. It was this messianic side that appealed to the masses of ordinary people, and it was something that he retained right up to the end. Even when the Russian armies were at the gates of Berlin, Hitler could still inspire people who visited him in the Bunker. And Speer recounts how, even  at Germany’s lowest ebb, he overheard (without being recognized) German working people in a factory repeating like a mantra that “only Hitler can save us now”.
However, individuals who see themselves as chosen by the gods, usually fail because they do not pay sufficient attention to ordinary, mundane technicalities. Richelieu said that someone who aims at high power should not be ashamed to concern himself with trivial details  ─ an excellent remark. Napoleon has been called a ‘map-reader of genius’ and to prepare for the Battles of Ulm and Austerlitz, he instructed Berthier “to prepare a card-index showing every unit of the Austrian army, with its latest identified location, so that the Emperor could check the Austrian order of battle from day to day” (Note 1). Hitler had a similar capacity for attention to detail, supported by a remarkable memory for facts and figures — there are many records of him reeling off correct data about the range of guns and the populations of certain regions to his amazed generals.
This ‘combination of contraries’ also applies to Hitler as a statesman. Opponents and many subsequent historians could never quite decide whether Hitler, from the beginning, aimed at world domination, or whether he simply drifted along, waiting to see where events would take him. In reality, as Bullock rightly points out, these contradictions are only apparent: “Hitler was at once fanatical and cynical, unyielding in his assertion of will power and cunning in calculation” (Bullock, Hitler and the Origins of the Second World War). This highly unusual combination of two opposing tendencies is the key to Hitler’s success. As Bullock again states, “Hitler’s foreign policy… combined consistency of aim with complete opportunism in method and tactics. (…) Hitler frequently improvised, kept his options open to the last possible moment and was never sure until he got there which of several courses of action he would choose. But this does not alter the fact that his moves followed a logical (though not a predetermined) course ─ in contrast to Mussolini, an opportunist who snatched eagerly at any chance that was going, but never succeeded in combining even his successes into a coherent policy” (Bullock, p. 139).
Certainly, sureness of ultimate aim combined with flexibility in day to day management is a near infallible recipe for conspicuous success. Someone who merely drifts along may occasionally obtain a surprise victory but will be unable to build on it; someone who is completely rigid in aim and means will not  be able to adapt to, and take advantage of, what is unforeseen and unforeseeable. Clarity of goal and unshakeable conviction is the strategic part of Practical Eventrics while the capacity to respond rapidly to the unforeseen belongs to the tactical side.

Why did Hitler ultimately fail?

Given the favourable political circumstances and Hitler’s unusual abilities, the wonder is not that he lasted as long as he did but that he eventually failed. On a personal level, there are two reasons for this. Firstly, Hitler’s racial theories, while they originally helped him to power, eventually proved much more of a drawback than an advantage. For one thing, since Hitler regarded ‘Slavs’ as inferior, this conviction unnecessarily alienated large populations in Eastern Europe, many of whom were originally favourable to German intervention since they had had enough of Stalin. Moreover, Hitler allowed ideological and personal prejudices to influence his choice of subordinates: rightly suspicious of the older Army generals but jealous of brilliant commanders like von Manstein and Guderian, he ended up with a General Staff of supine mediocrities.
Secondly, Hitler, though he had an excellent intuitive grasp of overall strategy, was a poor tactician. Not only did he have no actual experience of command on the battlefield but, contrary to popular belief, he was easily rattled and unable to keep a clear head in emergencies.
Jomini considered that “the art of war consists of six distinct parts:

  1. Statesmanship in relation to war
  2. Strategy, or the art of properly directing masses upon the theatre of war, either for defence or invasion.
  3. Grand Tactics.
  4. Logistics, or the art of moving armies.
  5. Engineering ─ the attack and defence of fortifications.
  6. Minor tactics.”
    Jomini, The Art of War p. 2

Hitler certainly ticks the first three boxes. But not (4), Logistics. Hitler tended to override his highly efficient Chief of the General Staff, Halder, whereas Napoleon always listened carefully to what Halder’s equivalent, Berthier, had to say. According to Liddell Hart, the invasion of Russia failed, despite the high quality of the commanders and fighting men, because of an error in logistics.
“Hitler lost his chance of victory because the mobility of his army was based on wheels instead of on tracks. On Russia’s mud-roads its wheeled transport was bogged when the tanks could move on. If the panzer forces had been provided with tracked transport they could have reached Russia’s vital centres by the autumn in spite of the mud” (Liddell Hart, History of the Second World War). On such mundane details does the fate of empires and even of the world often depend.
As for (5), the attack on fortifications, it had little importance in World War II though the long-drawn out siege of Leningrad exhausted resources and troops and should probably have been abandoned. Finally, on (6), what Jomini calls ‘minor tactics’, Hitler was so poor as to be virtually incompetent. By ‘minor tactics’, we should understand everything relating to the actual movement of troops on the battlefield (or battle zone) ─ the area in which Napoleon and Alexander the Great were both supreme.  Hitler was frequently indecisive and vacillating as well as nervy, all fatal qualities for a military commander.
On two occasions, Hitler made monumental blunders that cost him the war. The first was the astonishing decision to hold back the victorious tank units just as they were about to sweep into Dunkirk and cut off the British forces. The second was Hitler’s rejection of Guderian’s plan for a headlong drive towards Moscow before winter set in; instead, following conventional Clausewitzian principles, Hitler opted for a policy of encirclement and head-on battle. Given the enormous man-power of the Russians and their scorched-earth policy, this was a fatal decision.
Jomini, as opposed to Clausewitz, recognized the importance of statesmanship in the conduct of a war, something that professional army officers and even commanders are prone to ignore. Whereas Lincoln often saw things that his generals could not, and on occasion successfully overrode them because he had a sounder long-term view, Hitler, a political rather than a military man, introduced far too much statesmanship into the conduct of war.
It has been plausibly argued, especially by Liddell Hart, that the decision to halt the tank units before Dunkirk was a political rather than a military decision. Blumentritt, operational planner for General Rundstedt, said at a later date that “the ‘halt’ had been called for more than military reasons, it was part of a political scheme to make peace easier to reach. If the British Expeditionary Force had been captured at Dunkirk, the British might have felt that their honour had suffered a stain which they must wipe out. By letting it escape, Hitler hoped to conciliate them” (Liddell Hart, History of the Second World War I, pp. 89-90). This did make some kind of sense: a rapid peace settlement with Britain would have wound up the Western campaign and freed Hitler’s hands to advance eastwards, which had seemingly always been his intention. However, if this interpretation is correct, Hitler made a serious miscalculation, underestimating Britain’s fighting spirit and inventiveness.

Hitler’s abilities and disabilities

It would take us too far from the field of Eventrics proper to go into the details of Hitler’s political, economic and military policies. My overall feeling is that Hitler was a master in the political domain, time and again outwitting his internal and external rivals and enemies, and that he had an extremely good perception of Germany’s economic situation and what needed to be done about it. But he was an erratic and often incapable military commander ─ for we should not forget that, following the resignation of von Brauchitsch, Hitler personally supervised the entire conduct of the war in the East (and eventually everywhere else). This is something like the reverse of the conventional assessment of Hitler, so is perhaps worth explaining.
Hitler is credited with the invention of Blitzkrieg, a new way of waging war, and, in particular, with one of the most successful campaigns in military history, the invasion of France, when the tank units moved in through the Ardennes, thought to be impassable. The original idea was in reality not Hitler’s but von Manstein’s (who got little credit for it), though Hitler did have the perspicacity to see the merits of this risky and unorthodox plan of attack, which the German High Command unanimously rejected. It is also true that Hitler took a special interest in the tank and does seem to have had some good ideas regarding tank design.
However, Hitler never seems to have rid himself completely of the conventional Clausewitzian idea that wars are won by large-scale confrontations of armed men, i.e. by modern ‘pitched battles’. Practically all (if not all) the German successes depended on surprise, rapidity of execution and artful manoeuvre ─ that is, precisely on the avoidance of direct confrontation. Witness the invasion of France, the early stages of the invasion of Russia, Rommel in North Africa and so on. When the Germans fought it out on a level playing field, they either lost, as at El Alamein, or achieved ‘victories’ so costly as to be more damaging than defeats, as in the latter part of the Russian campaign. Hitler was in fact only a halfway-modernist in military strategy. “The school of Fuller and Basil Liddell Hart [likewise Guderian and Rommel] moved away from using manoeuvre to bring the enemy’s army to battle and destroy it. Instead, it [the tank] should be used in such a way as to numb the enemy’s command, control, and communications and bring about victory through disintegration rather than destruction” (Messenger, Introduction to Jomini’s Art of War).
As to the principle of Blitzkrieg (Lightning War) itself, though it doubtless appealed to Hitler’s imagination, it was in point of fact forced on him by economic necessity: Germany simply did not have the resources to sustain a long war. It was make or break. And much the same went for Japan.
Hitler’s duplicity and accurate reading of his opponents’ minds in the realm of politics need no comment. But what is less readily recognized is how well he understood the general economic situation. Hitler had doubtless never read Keynes ─ though his highly capable Economics Minister, Schacht, doubtless had. But with his talent for simplification, Hitler realized early on that Germany laboured under two crippling economic disadvantages: she did not produce enough food for her growing population and, as an industrial power, lacked indispensable natural resources, especially oil and quality iron-ore. So where to obtain these and a lot of other essential items? By moving eastwards, absorbing the cereal-producing areas of the Ukraine and getting hold of the oilfields of the Caucasus. This was the policy set out in the so-called ‘Hossbach Memorandum’ and used to justify the invasion of Russia to an unenthusiastic General Staff.
The policy of finding Lebensraum in the East was based on a ruthless but shrewd and essentially correct analysis of the economic situation in Europe at the time. But precisely because Germany would need even more resources in a wartime situation, victory had to be rapid, very rapid. The gamble nearly succeeded: as a taster, Hitler’s armies overwhelmed Greece and Yugoslavia in a mere six weeks and at first looked set to do much the same in Russia in three months. Perhaps if Hitler had followed Guderian’s plan of an immediate all-out tank attack on Moscow, instead of getting bogged down in Southern Russia and failing to take Stalingrad, the gamble would actually have paid off, though fortunately for the Russians it did not.

Hitler: Summary from the point of view of Eventrics

The main points to recall from this study of Hitler as a ‘handler of events’ are the following:

  1. The methods chosen must fit the circumstances (witness Hitler’s switch to a strategy based on the ballot box rather than the revolver after the Beer-Hall putsch);
  2. An apparent defeat can be turned into an opportunity, a disadvantage into an advantage (e.g. Hitler’s trial after the Beer-Hall putsch);
  3. Combining inflexibility of ultimate aim with extreme flexibility on a day-to-day basis is a near-invincible combination (Hitler’s conduct of foreign affairs during the Thirties);
  4. It is disastrous to allow ideological and personal prejudices to interfere with the conduct of a military campaign, and worse still to become obsessed with a specific objective (e.g. Hitler’s racial views, his obsession with taking Stalingrad).

 

Panspermia and the Black Death

Where and how did life begin?

Can/could there be a universe without life? Life without a universe? Contemporary science says yes to the first question and no to the second, whereas most religions tend towards the opposite point of view; indeed disagreement on the subject is perhaps the main bone of contention between the two camps. Given that there is such a thing as life, and that we know how to recognize it when and where it exists (no easy task), where did it start? For Galileo and Newton there was only one place where it could possibly have started: the Earth. And, as to how and why it began in the first place, few if any of the ‘classical’ physicists troubled themselves about the question since they were all believers, if not in Christ at least in God. The paradigm of a unique universe created once and for all by an omnipotent intelligence, and henceforth forced to obey rules laid down by this intelligence, served physics and mechanics well for several centuries. But it was not clear how ‘life’, especially human life, could be fitted into this schema, which is probably the reason why Newton, having sorted out the physical side of things, tried his hand at alchemy. One can (perhaps) reduce biology, the life science, to chemistry but not to mechanics, and in Newton’s day chemistry scarcely existed.

Darwin was extremely reticent on the subject of the origins of life, though he did famously speak of “a warm little pond with all sorts of ammonia and phosphoric salts present”, a place where “a protein compound was chemically formed ready to undergo still more complex changes”. This shows that Darwin was at least firmly committed to the idea that life developed ‘spontaneously’, without the need for supernatural intervention or planning. Eventually, in the nineteen-fifties, Miller caused a sensation by simulating in a test-tube the Earth’s supposed early environment (water, ammonia, methane and hydrogen) and subjecting it to electrical discharges. The result was the spontaneous formation of certain compounds, including amino-acids, the building blocks of proteins. However, it would seem that Miller and Urey got the composition of the early Earth’s atmosphere wrong, and there are other reasons why the Miller/Urey experiment is no longer considered a good indication of how life on Earth started. Since the discovery in the nineteen-seventies of sulphur-consuming microbes in deep-ocean hydrothermal vents, and the copious eco-systems to which they give rise, the theory that life began under the surface of the Earth, rather than on it, has become more and more popular. Such bacteria do not require sunlight to produce energy, i.e. do not photosynthesize, and, because of their location, they would have been at least partially protected against the intense bombardment of the Earth’s surface by meteorites which was so characteristic of the early years of the Earth’s history.

Life from elsewhere

There is, however, another way to explain the sudden appearance of primitive life on Earth about 3.85 billion years ago: it came from somewhere else in the universe. The idea that there might well be life outside the Earth goes back as far as the IVth century B.C. and demonstrates how astonishingly ‘modern’ many of the Greek thinkers were in their outlook. Although better known for his contention that the goal of human life is, and should be, pleasure, Epicurus also wrote extensively on physical matters. Unfortunately these works have been lost and are only known via his Roman follower, Lucretius, author of the long poem De Rerum Natura. The latter writes:

“If atom stocks are inexhaustible
Greater than the power of living things to count,
If Nature’s same creative power were present too
To throw the atoms into unions — exactly as united now
Why then confess you must
That other worlds exist in other regions of the sky
And different tribes of men, and different kinds of beasts.”

Note that this is a reasoned argument based on the premises that (1) there exists an abundant supply of the building-blocks of life (atoms), (2) these building-blocks combine together in much the same way everywhere, and (3) Nature’s ‘creative power’ (‘energy’) is limitless. Therefore, there must exist other habitable worlds containing beings made of the same material as us but different from us. This is almost word for word the argument put forward by contemporary physicists that we are not alone in the universe. Lucretius does not go so far as to suggest that there could be, or ever had been, any interaction between these different ‘worlds’. However, the Gnostics (‘Knowers’, ‘Those who Know’), a half-pagan, half-Christian sect that flourished during the declining Roman Empire, taught that humankind did not originate here but came from what we would call ‘outer space’ — indeed this is precisely what they knew and what ordinary persons didn’t (Note 1).

Panspermia    

The belief that life did not begin here but was brought from elsewhere in the cosmos seems to have disappeared after the triumph of orthodox Christianity but eventually re-surfaced at the end of the nineteenth century in a place where one would not normally expect to find it, namely physical science. The 19th-century German physicist von Helmholtz, a hard-nosed physicist if ever there was one, wrote in 1874: “…it appears to me a fully correct scientific procedure to raise the question whether life is not as old as matter itself and whether seeds have not been carried from one planet to another and have developed everywhere that they have found fertile soil” (Note 2). Lord Kelvin agreed, arguing that collisions could easily transport material around the solar system and thus ‘infect’ other planets with life, as he put it. And the Swedish chemist Svante Arrhenius energetically took up the idea, which he dubbed ‘Panspermia’ (‘seeds everywhere’). Nearer our own time, Francis Crick of DNA fame argues in his book Life Itself, Its Origin and Nature that “microorganisms … travelled at the head of an [unmanned] spaceship sent to Earth by a higher civilization which had developed elsewhere billions of years ago.”

 Life as a Cosmic Phenomenon   

But the theory that life came from outer space is above all associated with the work of Fred Hoyle and Chandra Wickramasinghe, who have expounded it in great detail in a series of books and scientific papers. They summarize their position thus: “The essential biochemical requirements of life exist in very large quantities within the dense interstellar clouds of gas. This material [eventually] becomes deposited within the solar system, first in comet-type bodies, and then in the collisions of such bodies with the Earth. (…) The picture is of a vast quantity of the right kind of molecules looking for suitable homes, and of there being very many suitable homes [i.e. planets]” (Note 3).

As the two authors never tire of pointing out, ‘Panspermia’ is not just another scientific theory: it constitutes a paradigm shift only to be compared with the shift from a geocentric to a heliocentric viewpoint instigated by Copernicus. Just as humanity previously — and mistakenly — considered itself to be situated at the centre of the universe and the favoured creation of the Almighty, so biologists and astronomers today mistakenly tend to take it for granted that the Earth has been particularly favoured to be the unique seat of intelligent life. In its full development, as propounded in Hoyle’s The Intelligent Universe (the title says it all), the theory of ‘panspermia’ really means what it says: ‘seeds of life everywhere’. Hoyle and Wickramasinghe argue that life is so improbable an eventuality, requiring so much fine-tuning of physical constants, that it has either always existed (Hoyle’s preferred option) or has only come about once. A complicated circulation system, involving interstellar dust, comets and meteorites, is responsible for randomly disseminating the seeds of life throughout the universe in the expectation that at least one or two of them will fall on fertile ground somewhere, sometime. As one might expect, the Hoyle/Wickramasinghe theory was, and is, highly controversial. In the past they would have run foul of the Inquisition — Giordano Bruno was burnt at the stake for propounding something vaguely similar — but, this being the 20th century, their main opponents have come from within the scientific establishment. The two scientists, despite their impressive credentials, were met with what Hoyle euphemistically describes as “a wall of silence”. It is even said that Hoyle’s outspoken advocacy of ‘panspermia’ is the main reason he was not granted the Nobel Prize for his seminal work on the synthesis of carbon in stars.

Evidence in favour of the Hoyle/Wickramasinghe Theory

We require a new scientific theory to (1) explain known data in a more elegant and satisfactory way than current theories and (2) make predictions which can be tested experimentally. Now the Hoyle/Wickramasinghe theory certainly does explain something that contemporary astrophysics struggles to make any sense of, namely the surprisingly early appearance of bacterial life on the Earth. The Earth is currently considered to have formed some 4.6 billion years ago, but for most, if not all, of the first seven or eight hundred million years it must have been a boiling inferno uninhabitable even for heat-loving microbes. However, the high level of carbon-12 in certain rocks that date back 3.85 billion years suggests that microbial life already existed at that early date. We are not here talking about organic molecules but about unicellular organisms which, though ‘primitive’ compared to plants and mammals, possess DNA (or RNA). It is scarcely credible that the transition from a few diffuse chemicals to such a highly organized entity as a prokaryotic cell came about in such a short time in evolutionary terms, especially since the subsequent transition from bacterial to multicellular life took around 3 billion years! But for Hoyle and Wickramasinghe this is not a problem: primitive life arrived here ready-made, a seed quite literally falling from the sky, either naked or enclosed in the remnants of a comet that, like Icarus, had ventured too near the Sun and had spilled its contents into the atmosphere (Note 4).
So, is there any evidence that a microbial spore could exist in interstellar space and be wafted to Earth riding on a gas cloud or enclosed in the kernel of a wandering comet? In the seventies, when Hoyle and Wickramasinghe first advanced the general idea, it sounded more like science fiction than science — Hoyle was after all the author of a successful SF novel, The Black Cloud. But since then the critics have had to eat their words — though in point of fact scientists never seem able to bring themselves to do such a terrible thing — since it is now common knowledge that interstellar dust is full of molecules, many of them organic, and that comets contain most, and probably all, of the basic ingredients for life. “Analysis of the dust grains streaming from the head [of Halley’s comet] revealed that as much as one third was organic material. Common substances such as benzene, methanol, and acetic acid were detected, as well as some of the building blocks of nucleic acids. If Halley is anything to go by, then comets could easily have supplied Earth with enough carbon to make the entire biosphere” (Davies, The Origin of Life, p. 136).
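In passing, the arithmetic behind this ‘window’ argument is easy to make explicit. The following is a minimal sketch, using only the rounded dates quoted above (not precise measurements; the date for multicellular life is simply what the text’s ‘around 3 billion years’ figure implies):

```python
# Timeline argument sketched from the rounded dates quoted in the text
# (all figures in billions of years before the present).
earth_formed = 4.6      # formation of the Earth
first_microbes = 3.85   # carbon-12 evidence of microbial life in ancient rocks
multicellular = 0.85    # implied by the text's 'around 3 billion years' figure

# Window for going from diffuse chemicals to a prokaryotic cell; the initial
# molten period mentioned above shrinks it still further.
abiogenesis_window = earth_formed - first_microbes          # 0.75 Gyr at most
bacteria_to_multicellular = first_microbes - multicellular  # about 3 Gyr

print(f"Abiogenesis window: at most {abiogenesis_window:.2f} billion years")
print(f"Bacteria to multicellular life: about {bacteria_to_multicellular:.1f} billion years")
print(f"The supposedly harder step had roughly "
      f"{bacteria_to_multicellular / abiogenesis_window:.0f} times less time available")
```

In other words, on the conventional view the far more mysterious step (abiogenesis) was accomplished in roughly a quarter of the time taken by the seemingly easier one, which is the oddity Hoyle and Wickramasinghe seize upon.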

It is also generally admitted now that there is a considerable exchange of (not necessarily organic) material throughout the galaxy: the very latest issue of Science (15 August 2014) contains an article “Evidence for interstellar origin of seven dust particles collected by the Stardust spacecraft”. So far, so good. But all this, of course, stops well short of Hoyle and Wickramasinghe’s claim that actual bacterial spores and/or viruses can and do make the perilous journey from a cloud of interstellar dust to a comet in the Oort Cloud and then on to the interior of the solar system and us. A number of claims for the presence of fossilized organic material in meteorites have nonetheless been made, for example by Claus and Nagy in the 1960s and, more recently, by two researchers from the University of Naples, D’Argenio and Geraci, who concluded that their results constituted “clear evidence for the existence of extra-terrestrial life”. Those who wish to pursue the topic further are referred to a recent article by Chandra Wickramasinghe, DNA Sequencing and Predictions of the Cosmic Theory of Life (with an extensive bibliography at the end), available free of charge at http://arxiv.org/ftp/arxiv/papers/1208/1208.5035.pdf

Can bacteria and viruses reach the surface of the Earth? 

The suggestion that diseases can come from space follows on from Hoyle and Wickramasinghe’s belief that life came from space in the first place. For the cosmic bombardment continues unabated, though thankfully on a lesser scale than during the first billion or so years of the Earth’s history. Suppose for the moment that a certain amount of interstellar ‘dust’, some of it organic, finds its way into a comet inhabiting the so-called Oort Cloud situated beyond the orbit of Pluto. Incredibly, there are an estimated billion or more such comets on highly eccentric orbits, and a few enter the inner reaches of the solar system each year. Near the Sun much of the comet melts, ejecting millions of tons of incandescent debris into space, and the Earth, like the other planets, cannot avoid ploughing through this cosmic muck. So much is fact. But can any organic material make it to ground level without being burned up or crushed? If the answer is no, there is no point in discussing the diseases-from-space theory. However, it seems that bacteria and viruses, especially if protected by some sort of coating, can enter the Earth’s atmosphere without burning up if they come in at an oblique angle. H & W have calculated that “to fit the heating requirement, according to our criteria bacteria are about as big as they can possibly be”. If a bacterium exceeds 1 micron in length it must, according to H & W, be rod-shaped, i.e. what we term a ‘bacillus’. (Yersinia pestis is rod-shaped, incidentally.) Small particles will be wafted this way and that by air currents or descend inside raindrops or snowflakes. Have any micro-organisms of suspected cometary origin been identified? This is difficult to establish since there is always the possibility of terrestrial contamination, but high resistance to ultra-violet radiation would suggest an extra-terrestrial origin. Again various claims have been made, for example by Wainwright, Shivaji et al., and one new bacterial species culled from the stratosphere has even been named Janibacter hoylei sp. nov.!

Tell-tale signs of cosmic origin

What characteristics of epidemics/pandemics does official epidemiology have difficulty in explaining? Or, to put things the other way round, supposing for a moment that pathogens can and do arrive from outer space, what special features would we expect to find in the consequent epidemics? Basically, the following (a schematic scoring of the candidate outbreaks against these criteria is sketched below):

  1. We would expect an epidemic/pandemic with an extra-terrestrial origin to be unusually severe, because the immune systems of potential hosts would be taken unawares;
  2. We would expect very rapid spread, since the pathogens would not originally be dependent on human or animal vectors (they would rain down on people from the air);
  3. We would expect a very wide but somewhat patchy distribution, since the pathogens would be taken here and there by air currents;
  4. We would expect the attack to be ‘one-off’, since only after several outbreaks would a new disease be able to establish itself with a permanent terrestrial focus − to begin with it would most likely be too lethal and kill off potential hosts.
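Purely as an illustration of how the argument of the next paragraph works ─ this is nothing H & W themselves formalized ─ the four criteria can be treated as a crude checklist and each candidate outbreak scored by how many it satisfies. The candidate names and tick-marks below simply transcribe the assessments given in the following paragraphs:

```python
# Illustrative checklist only: scores each candidate outbreak by which of the
# four 'cosmic origin' criteria it satisfies, as assessed in the text.
CRITERIA = {
    1: "unusual severity",
    2: "very rapid spread",
    3: "wide but patchy distribution",
    4: "'one-off' attack",
}

# Criteria satisfied by each candidate, as given in the next paragraph.
candidates = {
    "Plague of Athens (5th c. B.C.)": {1, 2, 4},
    "Plague of Justinian": {1, 2, 3, 4},
    "'Sweats' of the Reformation": {1, 2, 4},
    "Spanish Flu (1918-19)": {1, 2, 3, 4},
    "Black Death": {1, 2, 3},  # criterion 4 disputed (see the Conclusion)
}

for name, met in sorted(candidates.items(), key=lambda kv: -len(kv[1])):
    missing = [CRITERIA[i] for i in CRITERIA if i not in met]
    note = f" (lacking: {', '.join(missing)})" if missing else ""
    print(f"{name}: {len(met)}/4 criteria{note}")
```

The point of the exercise is only to show that the Black Death’s candidacy turns entirely on criterion (4), which is exactly where the discussion below ends up.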

So, are there any epidemics or pandemics that fit the bill? Yes, I can think of several candidates at once: the fifth-century B.C. Plague of Athens (1, 2 and 4); the Plague of Justinian (1, 2, 3 and 4); the ‘Sweats’ of the Reformation period (1, 2 and 4); and the 1918-19 outbreak of Spanish Flu (1, 2, 3 and 4). Unfortunately, almost all outbreaks of disease prior to the 19th century are poorly documented, so this only leaves the Spanish Flu pandemic, on which H & W concentrate in their book (along with the Common Cold). Since the Spanish Flu killed rather more people worldwide than WWI − estimates vary from 26 million to 50 million deaths − there is no doubt about (1), the severity. Also, because the influenza virus is constantly mutating, it generally manages to keep ahead of the human immune system: each wave of infection is thus essentially new even if given the same general name. So Spanish Flu passes on (4), and also qualifies on (2) and (3). However, Spanish Flu had the benefit of (from its point of view) modern transportation systems and is known to have been passed on by person-to-person contact (especially by sneezing). And yet:
“The lethal second wave [of Spanish Flu] involved almost the entire world over a very short space of time…. Its epidemiological behaviour was most unusual. Although person-to-person spread occurred in local areas, the disease appeared on the same day in widely separated parts of the world on the one hand, but took days to weeks to spread over relatively short distances. It was detected in Boston and Bombay on the same day, but took three weeks to reach New York City, despite the fact there was considerable travel between the two cities….”
Who wrote the above − Hoyle and Wickramasinghe? In fact, no. The author is a certain Dr. Louis Weinstein, cited by H & W and presumably a contemporary of the pandemic. Likewise, H & W cite Professor Magrassi commenting on the 1948 influenza epidemic: “We were able to verify…. the appearance of influenza in shepherds who were living for a long time alone, in solitary open country far from any inhabited centre; this occurred absolutely contemporaneously with the appearance of influenza in the nearest inhabited centre”.
It is on the basis of this kind of evidence, along with detailed maps showing the spread and distribution, that H & W make their case for extra-terrestrial origin. But everything they say about the Spanish Flu pandemic applies a fortiori to the best candidate of the lot, at least on counts (1), (2) and (3): namely the Black Death itself.

Did the Black Death come from Space?

H & W only devote five pages of their well-documented book to the Black Death, and the treatment is sketchy indeed. When H & W were writing (late nineteen-seventies) the official view was that the Black Death was undoubtedly bubonic plague and that it was spread about by rats: even so learned an author as Shrewsbury does not for a moment question the received wisdom, though he does state that the quantity of rats required to get such a pandemic going would be enormous. H & W also assume that the Plague of Justinian and the Black Death were caused by the same pathogen, an assumption Dr Twigg and others have questioned. H & W do, however, make one or two valid points. They rightly ridicule the idea of an army of rats advancing like killer ants across most of Europe, infecting all and sundry as they march. But, still keeping within the rat/bubonic-plague schema, they point out that, if the pestilence was spread about by ship, at least to begin with, we would expect it to spread systematically inland from seaports, something they claim is not the case if we examine the evidence. For example, they cite the testimony of Carbonell, archivist to the Court of Aragon, who reports that the Black Death “began in Aragon, not at the Mediterranean coast or at the eastern frontier, but in the western inland city of Teruel.”
However, we have to distinguish between evidence that the Black Death was propagated by air (which I certainly believe) and the belief that it came from outer space in the first place. H & W emphasize the very extensive but strangely patchy distribution of Black Death mortality: while it attained remote hamlets and monasteries, it spared Milan and Nuremberg almost completely. But again this is evidence for airborne dissemination rather than proof of extra-terrestrial origin. Dr Twigg has also pointed out to me that a vast quantity of bacteria would be required to start the pandemic going in the manner H & W suggest. The Japanese experimented with bubonic plague as a biological weapon and dropped 36 kilos of plague-infested fleas; seemingly only 7,000 humans died, a statistic that was compiled fifty years later and so is more likely to have been amplified than reduced. Serious though this must have been for the unwilling recipients of this Japanese manna from heaven, it is not a fantastic death toll. And one would expect dropping infected fleas to be a more efficient method of spreading the disease than simply having bacteria drift down spasmodically.

Conclusion that there is no conclusion

So where does that leave us? As far as I am concerned, I would say that Hoyle and Wickramasinghe have made a fair case for the Black Death coming from outer space, given the extreme severity, rapid spread and extended but patchy distribution of the pandemic, the worst in human history. As to (4), whether the Black Death was a ‘one-off’ outbreak that failed to establish itself or not, this depends on whether one considers that subsequent outbreaks during succeeding centuries were, or were not, the same disease. The jury is still out on this topic and both sides have made valid points. But there is no doubt that ‘plague’, whatever it was, suddenly and mysteriously disappeared from Europe in the early eighteenth century, never to return apart from a few spasmodic 20th-century cases. But I am not sure that a supposed extra-terrestrial origin has any particular bearing on this circumstance.
More specifically, I do not feel that H & W have said anything to shake my personal conviction that the Black Death was not bubonic plague. If the Black Death was plague and came from space, this suggests that the pandemic started off as the pneumonic variety, since human beings would absorb it directly from the air, or from fresh water. It would subsequently pass from human beings to rats, the bubonic variety would come to dominate, and the ‘normal’ scenario would develop from then on. Now, an initial outbreak of bubonic plague amongst rodents can spread to humans and become a predominantly pneumonic outbreak − this is what happened in Manchuria in 1910 − but I do not know of any case of an initial pneumonic epidemic giving rise to bubonic plague. Moreover, supposing for the moment that the Black Death was plague, it is not possible that the outbreak was exclusively pneumonic, since in the pneumonic variety buboes do not have time to form, and yet we have plenty of contemporary medieval descriptions of buboes, whatever it was that caused them. H & W’s suggestion that plague came from above does not help us to understand the spread of the 14th-century pandemic, rather the reverse. It may well be that the cause of the Black Death, whatever it was, did come from space, but this does not advance us in understanding what the disease was and how it propagated. Although H & W’s books have made me a likely convert to the general theory of panspermia, I do not feel they have brought us any closer to understanding the Black Death, which remains as much of an enigma as ever.
SH 12/10/14

Note 1  According to the Gnostics, the universe was not deliberately created by an omnipotent God but was the result of a cosmic accident with tragic consequences, i.e. was the result of Chance rather than Necessity. They identify God with ‘the Light’ and, in one version, relate how Sophia, the ancestress of humanity, ‘fell’ from a domain of light into a dark, cold and empty universe. Some ‘seeds of light’ end up on the Earth, which is ringed by hostile powers (archons) who prevent those who have become aware of their true nature from returning to their place of birth. This schema is really quite close to what we, and especially H & W, currently believe to be the case. It would seem that all the heavier elements, including carbon and oxygen, were created by nuclear fusion in the heart of stars and were subsequently disseminated throughout the universe in supernova explosions. Any material ejected would certainly have found itself in a cold, dark and hostile world. Also, since our bodies are made of carbon, hydrogen and oxygen, we are indeed “stardust”. And if Hoyle and Wickramasinghe are right and the first organisms formed in space, we are aliens or the progeny of aliens, since the Earth is not humanity’s place of birth.

Note 2 Quoted by Paul Davies in The Origin of Life (p. 125)

Note 3 From Hoyle & Wickramasinghe, Life Cloud (1978). The original schema given in this book and in Diseases from Space (1979) goes something like this:

(1) The cycle starts with outflows of gaseous material from the surfaces of stars;
(2) High-density clouds give rise to ‘dust’, “solid particles with dimensions comparable to the wavelength of visible light” (H & W);
(3) Under compression, organic molecules present in the gas clouds “condense in solid form onto the dust grains” (H & W);
(4) In particular, formaldehyde (H2CO), which is “one of the commonest organic molecules actually found by actual observation in interstellar space”, condenses onto grains and a process of polymerization is induced by cosmic rays;
(5) Sugars and polysaccharides form — and “it has been shown by C. Ponnamperuma that this can happen when formaldehyde is exposed to ultra-violet radiation”;
(6) All of the basic building blocks of life are formed in a similar way and are transported randomly around the galaxy by comets;
(7) Primitive living organisms evolve inside some comets;
(8) One or two comets are drawn within the inner solar system and, on partially melting, deposit clouds of living material some of which falls onto the Earth.

Wickramasinghe has subsequently extended this model to one where “the genetic products of evolution on a planet like the Earth were mixed on a galactic scale with products of local evolution on other planets elsewhere” (from the paper on DNA sequencing mentioned earlier).

Note 4   At first sight it might seem that Hoyle and Wickramasinghe have not answered the problem of the origins of life but have simply shelved it by situating it somewhere else. Their reply would be that, firstly, rather more suitable environments for the development of life than the early Earth have certainly existed and, secondly, given the vast number of galaxies and the age of the universe (around 14 billion years), even such an improbable event as the emergence of microbial life might conceivably have occurred (and apparently did). The point is that, given the cosmic circulation system they propose, life need only occur once, somewhere, for it eventually to reach practically everywhere (though it would of course not catch on equally well everywhere). And this is assuming the Big Bang scenario. If we assume Hoyle’s modified Steady State model, life has always existed and always will.
