Adam's Opticks

A science and philosophy blog by Joe Boswell

Lee Smolin Interview

[This interview was originally published over at Massimo Pigliucci’s Scientia Salon; my thanks to Massimo for letting me duplicate it here.]

As regular readers of this extremely irregular blog will know, about a year and a half ago I published an in-depth critique of Lee Smolin’s Time Reborn (2013), a radical reappraisal of the role of ‘the present moment’ in physics. My article was highly critical, but also something of a labour of love, and I’m completely thrilled to say that Lee has now read the piece and would like to respond. What follows is a Q&A, with most of the questions derived from the earlier post.

I’d like to start, however, with an apology to Professor Smolin for some of the rhetoric in my original article, which ill-advisedly bordered on suggesting that as a physicist he was not au fait with philosophy as a discipline. This couldn’t be further from the truth: half of Lee’s undergraduate degree was in philosophy, and his personal and professional engagements with philosophy and philosophers such as David Chalmers, Galen Strawson and Roberto Mangabeira Unger have been extensive. Indeed, standing in opposition to a lamentable culture of hostility towards philosophy within contemporary physics, Lee has been quite explicit in his endorsement of Einstein’s pronouncement that ‘a knowledge of the historical and philosophical background [of science] … is — in my opinion — the mark of distinction between a specialist and a real seeker after truth.’

In any case, as an amateur philosopher myself, with no formal qualifications, I’d be loath to be caught trying to police the boundaries of philosophical conversation! I hope, also, that the depth of my engagement with Professor Smolin’s ideas stands as testament to my respect for them. Nevertheless, I have been keen to hear Lee’s response to various philosophical objections to his view, both historical and contemporary, that I felt were left unaddressed by Time Reborn – and I thank him heartily for agreeing to speak to Adam’s Opticks.

Adam’s Opticks:

Hi Lee,

Central to your thesis outlined in Time Reborn, and its recent follow-up The Singular Universe (co-authored with Roberto Mangabeira Unger), is a rejection of the ‘block universe’ interpretation of physics in which timeless laws of nature dictate the history of the universe from beginning to end. Instead, you argue, all that exists is ‘the present moment’ (which is one of a flow of moments). And as such the regularities we observe in nature must emerge from the present state of the universe as opposed to following a mysterious set of laws that exist ‘out there’. If this is true, you also foresee the possibility that the regularities in nature may be open to forms of change and evolution.

My first question is this: Does it make sense to claim that ‘the present moment is all that exists’ if one has to qualify that statement by saying that there is also a ‘flow of moments’? Does the idea of a flow of time not return us to the block universe? Or at the very least to the idea that the present moment represents the frontier of an ever ‘growing’ or ‘evolving’ block as the cosmologist George Ellis might say?

Lee Smolin:

Part of our view is that an aspect of moments, or events, is that they are generative of other moments. A moment is not a static thing; it is an aspect of a process (or vice versa) which generates new moments. The activity of time is a process by which present events bring forth or give rise to the next events.

I studied this idea together with Marina Cortes. We developed a mathematical model of causation from a thick present which we called energetic causal sets.

Our thought is that each moment or event may be a parent of future events. A present moment is one that has not yet exhausted or spent its capability to parent new events. There is a thick present of such events. Past events are those that have exhausted their potential and so are no longer involved in the process of producing new events; they play no further role, and therefore there is no reason to regard them as still existing. (So no to Ellis’s growing block universe.)

Adam’s Opticks:

Can you help me understand what you mean by a ‘thick present’? I’m confused because if the present moment is ‘thick’ rather than instantaneous, and may contain ‘events’, it seems like you’re defining the present moment as a stretch of time, which looks like a contradiction in terms. Similarly, when you say that the ‘activity’ of time is a ‘process’ I’m left thinking that ‘events’, ‘activities’ and ‘processes’ are all already temporal notions, and so to account for time in those terms seems circular.

Lee Smolin:

I can appreciate your confusion, but look, think about it this way: the world is complex. Whatever it is, it contains many elements in a complicated network of relations. To say that what exists is events in the present does not mean it is one thing. The present is not one simple thing; it is the whole world, and therefore it contains a vast complexity and plurality. Of what? Of processes, which are dual to events.

Adam’s Opticks:

One of your main objections to the idea of eternal laws comes in the form of what you diagnose as the ‘Cosmological Fallacy’ in physics. Your argument runs that the regularities we identify in small subsystems of the universe – laboratories mainly! – ought never be upscaled to apply to the universe as a whole. You point out that in general we gain confidence in scientific hypotheses by running experiments again and again, and define our laws in terms of what stays the same over the course of many repetitions. But this is obviously impossible at a cosmological scale because the universe only happens once.

But what’s wrong with the idea of cautiously extrapolating from the laws we derive in the lab, and treating them as working hypotheses at the cosmological scale? If they fit the facts and find logical coherence with other parts of physics then great… if not, then they’re falsified and we can move on. As an avowed Popperian yourself, are you not committed to the idea that this is how science works?

In addition, wouldn’t the very idea of ‘laws that evolve and change’ make science impossible? How could we ever confirm or falsify a hypothesis if, at the back of our minds, we always had to contend with the possibility that nature might be changing up on us? Don’t we achieve as much by postulating fixed laws and revising them on the basis of evidence as we might by speculating about evolving laws that would be impossible to confirm or falsify?

Lee Smolin:

To be clear: The Cosmological Fallacy is to scale up the methodology or paradigm of explanation, not the regularities.

Nevertheless, there are several problems with extrapolating the laws that govern small subsystems to the universe as a whole. They are discussed in great detail in the books, but in brief:

  1. Those laws require initial conditions. Normally we vary the initial conditions to test hypotheses as to the laws. But in cosmology we must test simultaneously hypotheses as to the laws AND hypotheses as to the initial conditions. This weakens the adequacy of both tests, and hence weakens the falsifiability of the theory.
  2. There is no possible explanation for the choice of laws, nor for the initial conditions, within the standard framework (which we call the Newtonian paradigm).

Regarding your questions about falsifiability, one way to address them is to study specific hypotheses outlined in the books. Cosmological Natural Selection, for instance, is a hypothesis about how the laws may have changed which implies falsifiable predictions. Take the time to work out how that example works and you will have the answer to your question.

Another way to reconcile evolving laws with falsifiability is by paying attention to large hierarchies of time scales. The evolution of laws can be slow in present conditions, or only occur during extreme conditions which are infrequent. On much shorter time scales and far from extreme conditions, the laws can be assumed to be unchanging.

Adam’s Opticks:

I’m actually a big fan of Cosmological Natural Selection (which suggests that black holes may give birth to new regions of spacetime, fixing their laws and cosmological constants at the point of inception) – and I can see how that is both falsifiable in itself, and would still allow for falsifiable science on shorter time scales.

Far more radical, however, is your alternative theory which you dub the Principle of Precedence. The suggestion here is that we replace the metaphysical extravagance of universal ‘laws of nature’ with the more modest notion that ‘nature repeats itself’. The promise of this idea is that it makes sense of the success of current science whilst leaving open the possibility that truly novel experiments or situations – for which the universe has no precedent – will yield truly novel results.

To my mind, however, this notion begs many more questions than it answers. You claim, for instance, that the Principle of Precedence does away with all needless metaphysics and is itself checkable by experiment. But is it? You suggest setting up quantum experiments of such complexity that they’ve never been done before in the history of the universe and seeing if something truly novel pops out. But how could we ever tell the difference between a spontaneously generated occurrence and one that was always latent in nature and simply unexpected on the basis of our limited knowledge? And once again, as a falsificationist, shouldn’t you count the thwarting of expectations as evidence against individual theories, rather than positive proof of a deeper principle?

Lee Smolin:

My paper on the principle of precedence is a first proposal of a new idea. Of course it raises many questions. Of course there is much work to do. New ideas are always fragile at first.

As to how to tell the difference between ‘a spontaneously generated occurrence and one that was always latent in nature’ – this is a question for the detailed experimental design. Roughly speaking, the statistics of the fluctuations of the outcomes would be different in the two cases. I fail to see how such an experiment would violate falsificationist principles.

In addition, we believe we know the laws as they apply to complex systems: they are the same laws that apply to elementary particles. To posit new laws which apply only to complex systems, and are not derivative from elementary laws, would be as radical a step as the one I propose.

Adam’s Opticks:

Can you tell me how the universe is supposed to distinguish between precedented and unprecedented situations? On the face of it, it seems like unprecedented things are happening all the time. You and I have never had this conversation before. Are we establishing a new law of nature right now, and if not, why not?

Another objection: can you tell me where novelty is supposed to come from? If the ‘present moment’ is both the source of all regularity in the universe, and the blank slate upon which formative experiences are recorded – then what could introduce any change? Are you assuming that human consciousness and free will may be sources of genuine novelty?

Lee Smolin:

How nature generates unprecedented events and how precedent may build up are important questions that need to be addressed to develop the idea of precedence in nature. What I published so far is just the beginning of a new idea.

It’s intriguing to speculate about the implications for intentional and free actions on the part of living things. But in my view this is very premature. I am not assuming that consciousness is a source of novelty; I am only making a hypothesis about quantum physics. There is a very long way to go before the implications could be developed for living things.

Adam’s Opticks:

Nevertheless, it seems readily apparent from your collaborations with the social theorist Roberto Mangabeira Unger, and also the computer scientist Jaron Lanier, that you see many connections between your conception of physics and the prospects of human freedom and human flourishing. It concerns me, however, that in pursuit of a singular – very beautiful – solution to so many problems in science, philosophy, politics and our personal lives, a lot of awkward details may get overlooked.

In philosophy, for instance, you claim to show that the reality of the present moment – conceived in terms of unresolved quantum possibilities – may at last solve the problem of free will. But what of the history of compatibilism in philosophy – from David Hume to Daniel Dennett – that purports to show that our freedom as biological and psychological agents is not only compatible with the regularity of nature, but may in fact depend upon it?

Lee Smolin:

There are certainly common themes and influences in my work and those of Jaron Lanier and Roberto Mangabeira Unger. And I’m happy at times to indulge in some speculation about these influences. But these are very much to be distinguished from the science. The point is that I am happy to do the scientific work I can do now and trust for future generations to develop any implications for how we see ourselves in the universe. There is much serious, hard work to be done, and it will take a long time. Especially given the present confusions of actual science with the science fiction fantasies of many worlds and AI (these two ideas are expressions of the same intellectual pathology) I agree we have to build a counter view carefully.

I don’t claim to show that my work solves the problem of free will. I suggest there may be possibilities worthy of careful development as we learn more. As for compatibilism, I am unconvinced, but I haven’t yet done the hard work needed to develop the alternative. Dan Dennett is a generous, serious and warm-hearted thinker who works hard to produce arguments which are crystal clear. But talking with him or reading him, both of which are great pleasures, I sometimes find that at the climax of one of his beautifully constructed arguments, the clarity fades and there is a step which I can’t follow. I hope someday to have the time to do the hard work to convince myself whether the fault is with his reasoning or my understanding.

Adam’s Opticks:

Since I have you here, let me try to make the compatibilist objection compelling with three more questions, inspired to a great extent by Dennett’s Freedom Evolves (2003):

  1. If we turn to physics (as opposed to biology or psychology) in search of free will, are we not likely to end up granting as much free will to rocks or tables or washing machines – or indeed computers – as we do to human beings? If we are to be able to change and adapt in response to the problems we face, surely the science of free will must be the science of a human plasticity that outstrips the plasticity of nature more generally?
  2. You claim that the openness of physics may enable us to transcend the fatalism inherent to predictions from climate science, for example: ‘In 2080 the average temperature on earth will be six degrees warmer than it is now’. But what of those other predictions stemming from climate science such as: ‘A concerted effort to reduce carbon emissions will avert disaster’? If the true nature of physics undermines the certainty of the first prediction, does it not also undermine the certainty of the second?
  3. Setting yourself against a long history of thinkers who would write off the sensation of ‘now’ as a psychological quirk incompatible with timeless physics, you go so far as to call it ‘the deepest clue we have as to the nature of reality’. But I wonder what you make of the innumerable psychological and neuroscientific studies that demonstrate the problematic nature of humans’ perception of time over short intervals? Benjamin Libet’s apparent prediction of conscious decisions from unconscious brain activity seems particularly troubling. Might you be persuaded to push in the direction urged by Dennett and resist such a conclusion by arguing that an instantaneous ‘you’ cannot be contrasted with your slow-moving brain activity, and that the search for free will and consciousness in ‘the present moment’ is fundamentally misguided? Can we not look, instead, to the mechanically-possible processes of decision making, learning and adaptation that take place over seconds, minutes, weeks and years?

Lee Smolin:

I don’t see why grounding human capabilities in an understanding of what we are as natural beings implies that every capability we have is shared with rocks. We have a physical understanding of metabolism, or the immune system, but rocks and tables have neither. My guess is that when we know enough to seriously address these issues, the vocabulary of concepts and principles at our disposal will be greatly enhanced compared to what we have now. Certainly we are aspects of nature and every capability we have is an aspect of the natural world.

Regarding climate change, the first is a prediction of what could happen if we don’t take action to strongly reduce GHG emissions. My point is not that the climate models are completely accurate. My point instead is that the intrinsic uncertainties in their projections are the strongest reason to act to reduce emissions so we can avert disaster however the uncertainties develop. In national defence we prepare for war because the future is uncertain. Climate change is not an environmental issue, it’s a national security issue and should be treated as such.

As for the objections from neuroscience, I completely fail to see the force in this kind of argument. Those studies are fascinating but I don’t think they remotely show what is claimed. Certainly the present moment is thick and the self is not instantaneous. But giving up the instantaneous moment for the thick and active or generative present (as I sketched above) does not imply that consciousness or time or becoming are illusions.

Adam’s Opticks:

Lee Smolin – thank you!


On Time Reborn as modern myth: Why Lee Smolin may be right about physics (but probably wrong about free will, consciousness, computers and the limits of knowledge)

The renowned mythologist Joseph Campbell once argued that any existential narrative worth its salt must combine four essential elements: the cosmological, the psychological, the sociological, and the mystical. That is to say that in rendering an image of the world – its cosmology – a good myth must also soothe the psychological pains of existence, answer the sociological questions of how to live as part of a group, and – crucially – suffuse our lives with a mystical awe such that ‘through every feature of the experienced world, the sense of an ideal harmony resting on a dark dimension of wonder should be communicated’.

In the 21st century our cosmologists are rather more humble. Though their theories may strive to provide our most accurate and comprehensive view of nature, and (at a push) the sort of mystical wonder that Brian Cox does well to bang on about, the majority of scientists shun the notion that their every discovery ought to comfort us, or tell us how to live. This is a very good thing in my view: maintaining a distinction between what-there-is and what-we-ought-to-value protects both our science from wishful thinking, and our ethics from God On Our Side dogmatism (or Nature On Our Side, or what have you).

So when a theoretical physicist publishes a book that promises not only a potential solution to the longest standing puzzles in physics and cosmology, but also a ‘scientific’ reinvigoration of free will and human agency, and on top of that a series of deep insights into the problems of politics, social organization, economics, climate change, and our personal lives… well, I think it sensible to be cautious. Lee Smolin’s Time Reborn (2013) is science as modern myth and draws its philosophical vigour from a revolutionary physics of time that overturns universal determinism and asserts the absolute reality of the present moment.

Or something. Smolin freely admits that he is not a philosopher, but that his work aspires to be philosophical. But philosophy is a dangerous business for eminent scientists – a point well made by Dan Dennett in a recent interview with the Guardian:

The history of philosophy is the history of very tempting mistakes made by very smart people, and if you don’t learn that history you’ll make those mistakes again and again and again. One of the ignoble joys of my life is watching very smart scientists just reinvent all the second-rate philosophical ideas because they’re very tempting until you pause, take a deep breath and take them apart.

Many of Smolin’s philosophical ideas, I would like to suggest, are nothing new – and have either been put forth, or persuasively debunked, by numerous philosophers including Dennett himself. That said, his science is fascinating, and what I’d like to do now is to try to sift the best of Smolin’s scientific ideas out from what I’m convinced is some sadly shaky philosophy.

But before I do that I should mention that I had the great pleasure of attending a talk by Smolin as part of the Bristol Festival of Ideas earlier this year – and found him to be a profoundly humane scientist with a fizzing profusion of exciting and iconoclastic ideas. And not to mention a wonderful orator. Cop a blast of this:

Stay tuned for a clip of the challenge I managed to put to Smolin at the end of the talk.

N.B. I’m going to try to recount as much of Smolin’s argument as is possible within this article, but if you want to skip straight to the source there’s this excellent talk given on Smolin’s home turf at the Perimeter Institute for Theoretical Physics in Ontario, and this revealing interview with the Santa Barbara Independent. And then, of course, there’s the book.

Timeless mathematics and the computer universe

So what’s Smolin’s big idea? In his own words: that Time Is Real. ‘Well of course time is real!’ you may be tempted to respond, ‘… and who ever said it was illusory?’ Well, according to Smolin, that’s exactly what physics has been saying ever since Newton.

The ‘Newtonian Paradigm’ is one that suggests that the future may be calculated with perfect fidelity from two inputs: accurate knowledge of a system’s initial state plus the application of physical law. This is what Newton himself achieved with the planets and his laws of gravity and motion. Take the position of the planets around the sun at any one moment, apply the laws, and it is possible to predict their future positions indefinitely. The same logic underlies general relativity, the standard model of elementary particles, and quantum physics (although the latter is contentious, as we’ll see).
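The logic of the Newtonian Paradigm can be made concrete with a toy sketch of my own (nothing in it comes from the book; the force law and step sizes are illustrative). A planet’s orbit is stepped forward from an initial state under Newtonian gravity, and because the law is deterministic, the same initial state always yields the identical future:

```python
def step(state, dt, gm=1.0):
    """Advance (x, y, vx, vy) one time step under an inverse-square
    central force (semi-implicit Euler). This function is the 'law'."""
    x, y, vx, vy = state
    r3 = (x * x + y * y) ** 1.5
    vx -= gm * x / r3 * dt
    vy -= gm * y / r3 * dt
    return (x + vx * dt, y + vy * dt, vx, vy)

def evolve(initial, dt=0.001, steps=10000):
    """The Newtonian Paradigm in miniature: initial state + law -> future."""
    state = initial
    for _ in range(steps):
        state = step(state, dt)
    return state

# A roughly circular orbit: the same inputs always give the same output.
future_a = evolve((1.0, 0.0, 0.0, 1.0))
future_b = evolve((1.0, 0.0, 0.0, 1.0))
print(future_a == future_b)  # prints True: identical histories, every time
```

Nothing about this computation refers to a present moment: every future state is already implicit in the inputs, which is exactly the feature Smolin finds objectionable.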

So what does this say about time? Well here’s the bombshell (according to Smolin): Any version of physics that claims to be able to predict the universe with timeless logic has done away with time completely. If the history of the universe is a perfect continuum, equivalent to a perfectly logical deduction, then there is no such thing as the present moment and the universe may as well be a computer (cf. David Deutsch) or perhaps a ‘mathematical structure’ (cf. Max Tegmark).

So what’s wrong with that? The usual – and, as I will argue, fallacious – objection runs that an entirely deterministic universe denies free will, and indeed Smolin goes along with this. In fact at one point in his Bristol talk Smolin conceded that ‘the secret history’ of his project was to offer some cosmological support to the work of Jaron Lanier, a computer scientist whose One Half A Manifesto (2000) pits itself against evolutionary psychology and its attendant vision of human beings as biological computers. I actually think that the analogy between humans and computers is a pretty good one – but either way I’m highly suspicious of the attempt to contrive an entire cosmology to fit one’s preferred self-image for humanity (whatever image that might be). Therein lies myth. That said, Smolin does have a number of independent arguments for the validity of his position that are well worth considering. I’ll deal with those first before turning to the wider implications for human life.

The Cosmological Fallacy or the Cosmological Hypothesis?

As we’ve established, the Newtonian Paradigm that founds the computational vision of nature consists in two components: initial state plus physical law. In arguing that ‘time is real’ Smolin would collapse the distinction between the two. His radical thesis asserts that all that exists is the present moment (which is one of a succession of moments) and that the state of the universe in any one moment is therefore generative of physical law rather than slavish to it. (And perhaps, as a consequence, open to change).

Smolin’s main ‘battering ram’ against the prevailing belief in eternal laws is what he diagnoses as the ‘the Cosmological Fallacy’ in physics. All laws of physics, Smolin would remind us, are inferred from experiments conducted in small subsystems of the universe – laboratories mainly! – and ought not to be automatically extended to the universe as a whole. And if a law describes that which remains constant over many repetitions of an experiment then the idea of a universal law could never be scientifically warranted because, after all, the universe only happens once.

Smolin’s arguments concerning the Cosmological Fallacy are extensive and interesting, but to my mind almost entirely pre-empted by David Hume (a figure notable by his absence amongst the philosophers cited in Time Reborn). In An Enquiry Concerning Human Understanding (1748), Hume famously argued that though the patterns in the behaviour of physical objects appear remarkably consistent, our faith in the uniformity of nature will always be impossible to prove outright. This is because whenever we try to justify our belief that the future will continue to resemble the past we inevitably fall back on the excuse ‘well, it’s always done so in the past!’ – which is a perfect example of circular reasoning. This is Hume in his own words:

When I see, for instance, a Billiard-ball moving in a straight line towards another… may I not conceive that a hundred different events might as well follow from that cause? May not both these balls remain at absolute rest? May not the first ball return in a straight line, or leap off from the second in any line or direction? All these suppositions are consistent and conceivable.

Hume’s arguments that the regularity of nature may be ultimately unprovable and that radical changes are both ‘consistent and conceivable’ have long worried those philosophers keen to justify science’s claim to eternal truth. Their cogitations, however, have not been without fruit. Indeed it is my view that Karl Popper’s twentieth century reply to Hume takes much of the apparent force out of the issue, and, by extension, out of Smolin’s Cosmological Fallacy. Popper’s strategy was to concede that no amount of positive evidence could ever prove any theory to be ultimately true – but that this is beside the point. Science, for Popper, should never be about demonstrating the ‘truth’ of theories (lasting or otherwise) so much as throwing a thousand guesses at a problem and seeing which ones fall foul of experiment. Those hypotheses which could in some way be proved false – but hold up – are to be accepted as good science until counter-evidence comes along.

Smolin is, in fact, an avowed Popperian, but it strikes me that he is being inconsistent on this point. According to Popper’s ‘falsificationist’ model of science the idea that the laws of nature may be timeless should not be considered a truth that requires justification so much as a working hypothesis that demands falsification. And, as Smolin himself admits, we have yet to see any hard evidence that the laws have changed since the big bang.

Cosmological Natural Selection and the Principle of Precedence

But what about that big bang? Rather more persuasive than his attacks on the a priori validity of timeless generalisations is Smolin’s drawing attention to the two inputs the Newtonian Paradigm takes for granted and could never itself explain: the initial state of the universe, and the existence of laws in the first place. If these things explain everything else, then what explains them?

Smolin’s solution to this perennial mystery is his theory of Cosmological Natural Selection (first published in 1992) – and it is both wonderfully outlandish and deeply elegant. The argument runs that black holes may ‘give birth’ to new regions of space-time whose laws undergo a subtle ‘mutation’ in the process. The initial singularity of our big bang is thereby explained in terms of a quantum ‘bounce’ from a previous era. The argument concerning laws, meanwhile, is essentially Darwinian: those universes with laws conducive to the generation of black holes will have many progeny while those with dramatically different laws will not. If our universe is typical, as the great majority will be, it will belong to the former camp. The theory – expressed mathematically – has even made some potentially falsifiable predictions about the nature of our universe which enable Smolin (on this count at least) to boast that he has satisfied the demands of good Popperian science.
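The Darwinian logic here can be sketched in a few lines of toy code (my own construction: the single ‘law parameter’, the fitness curve peaked at 0.5, and the mutation scale are all illustrative assumptions, not anything from Smolin’s papers). Universes reproduce in proportion to their black-hole yield, daughters inherit slightly mutated laws, and the population drifts towards black-hole-friendly laws:

```python
import math
import random

random.seed(1)

def fecundity(p):
    """Hypothetical black-hole yield of a universe whose 'laws' are
    summarised by one parameter p, peaked (arbitrarily) at p = 0.5."""
    return math.exp(-((p - 0.5) ** 2) / 0.02)

def generation(population, sigma=0.02):
    """Universes parent daughters in proportion to fecundity; each
    daughter inherits p with a small 'mutation' at birth."""
    weights = [fecundity(p) for p in population]
    parents = random.choices(population, weights=weights, k=len(population))
    return [min(1.0, max(0.0, p + random.gauss(0.0, sigma))) for p in parents]

population = [0.1] * 200          # start far from the fitness peak
for _ in range(300):
    population = generation(population)

mean_p = sum(population) / len(population)
print(round(mean_p, 2))           # the population has drifted near 0.5
```

The point of the sketch is only that selection plus inheritance suffices: no universe needs to ‘know’ which laws are typical; typicality emerges because black-hole-prolific laws leave more descendants.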

According to Cosmological Natural Selection physical law is only ever augmented at the birth of universes – but it requires that law emerge from some fact about the present state of the universe in order for law to be malleable at all. If laws existed ‘outside of time’ they could never be subject to mutation, and hence natural selection, and their origins would remain mysterious. But do the pressure-cooker singularities at the hearts of black holes provide the only circumstances under which laws could change? Could they ever change during the lifespan of a universe? Or even during our lifetimes? Smolin’s more recent proposal for the means by which laws may change – the Principle of Precedence – appears to raise that tantalising prospect.

Smolin lays the groundwork for this idea with an uncontroversial fact about quantum physics:

If we prepare and measure a quantum system we have studied many times in the past, the response will be as if the outcome were randomly chosen from the ensemble of past instances of that preparation and measurement.

This apparent randomness is normally held to be consistent with timeless law because of the fact that the statistical distribution of the data remains constant over time: study the position of an electron orbiting a hydrogen atom today and you’ll get the same spread of readings you got fifty years ago. But for Smolin, for whom the only reality is the present moment, all that exists is the ensemble of possibilities itself, and there is no timeless law governing their regularity. In the place of such a law it is Smolin’s contention that all we really need to assume is that ‘nature repeats itself’. This is the Principle of Precedence. To elaborate: it may be the case that any physical system, when ‘confronted with a question’, will ‘look to the past’ at all the other times a similar question has been asked and ‘pick randomly’ from previous responses. The promise of this idea is that it makes sense of the success of Newtonian science whilst leaving open the possibility that truly novel experiments or situations – for which the universe has no precedent – will yield truly novel results. Thus, for Smolin, ‘the future is open’ and we ‘may be liberated from the idea that we can’t do anything about it’.
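One way to see what the Principle of Precedence claims is to code up a deliberately naive reading of it (my sketch, not Smolin’s formalism; the preparation names and the two-outcome ensemble are invented for illustration). Each measurement ‘looks to the past’ and picks randomly from prior outcomes of the same preparation; an unprecedented preparation has no history to consult and must improvise:

```python
import random
from collections import defaultdict

random.seed(0)

class PrecedenceWorld:
    """Toy model: nature has no standing laws, only a growing record
    of precedents keyed by experimental preparation."""

    def __init__(self):
        self.precedents = defaultdict(list)   # preparation -> past outcomes

    def measure(self, preparation, possible=(0, 1)):
        history = self.precedents[preparation]
        if history:
            outcome = random.choice(history)   # nature repeats itself
        else:
            outcome = random.choice(possible)  # no precedent: genuine novelty
        history.append(outcome)                # today's outcome is tomorrow's precedent
        return outcome

world = PrecedenceWorld()
# Seed a well-worn experiment with a 90/10 ensemble of past outcomes.
world.precedents["spin-z"] = [0] * 90 + [1] * 10
repeats = [world.measure("spin-z") for _ in range(1000)]
novel = world.measure("never-before-prepared")
print(sum(repeats) / len(repeats))  # frequency set by precedent, not by law
```

Note that because each chosen outcome is itself appended to the record, this model is a Pólya urn: precedent reinforces itself, and its fluctuation statistics differ from those of a fixed-law ensemble – one concrete handle on how the two pictures might be told apart experimentally.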

Problems with the Principle of Precedence

It is at this point that I’m going to take a leaf out of Dan Dennett’s book, pause, take a deep breath, and take the Principle of Precedence apart. I’m readily attracted to Smolin’s proposal that the regularity of nature may emerge from some fact about the present state of the universe, and may – as a consequence – be malleable in the singularities of black holes. The pressing need to account for the initial state of our universe and for the existence of its regularities strike me as perfectly valid motivations for taking these ideas seriously. I want to argue that the Principle of Precedence, by contrast, lacks any of the explanatory power of Cosmological Natural Selection, may well rely upon a deeply unscientific notion of consciousness in order to work, and actually serves to diminish the prospects of human freedom in the universe.

I’ll start, once again, with the issue of scientific methodology. It is Smolin’s bold claim that, unlike the idea of timeless laws, the Principle of Precedence is truly scientific because it does away with all needless metaphysics and is itself checkable by experiment. But is it? Smolin cites, by way of a possible demonstration, the already extant practice of setting up quantum mechanical systems so complicated that they’ve never yet existed in the history of the universe. If his principle is correct then such systems ought to thwart all theoretical expectations and produce some genuinely novel phenomena. My problem with this is that expectations in science are thwarted all the time, especially in new areas of research – and when this happens scientists can be relied upon to busy themselves with the search for alternative models and explanations. I find it hard to avoid the reductio ad absurdum that every thwarted expectation in the history of science ought to count as proof of Smolin’s idea. How to tell the difference between a hitherto undiscovered but timeless fact about the universe that falsifies our expectations in this new area, and a spontaneously generated fact about the universe that confirms the Principle of Precedence? A true retreat from metaphysics ought to avoid begging such unanswerable questions – which is, I submit, the virtue of the falsificationist approach I outlined earlier. A consistent falsificationist is agnostic about the ultimate truth of any theory: all they can know is that it holds up in the face of potentially falsifying evidence. It would make science an entirely trivial exercise to celebrate such theories when they work and at the same time claim to have validated some deeper principle every time they fail.

Another problem with the Principle of Precedence is the lack of a convincing account of where ‘genuinely novel’ phenomena are supposed to come from. If we are to imagine that for the most part the universe glides along well-worn grooves, and only occasionally hits some bump in the road that necessitates a spin of the quantum wheel and the settling upon some new mode of behaviour, then what exactly constitutes that bump? To put it another way: if ‘the present moment’ is both the source of all regularity in the universe and the blank slate upon which formative experiences are recorded – then from whence do such influences emerge? One possible answer might be ‘from conscious human intervention’ – but if Smolin’s thesis promises to justify the existence of a miraculous form of free will then it ought not to presuppose it. Resisting that reading for the moment, perhaps it is the case that a generally deterministic universe will occasionally produce some set of circumstances so extraordinary as to bend itself out of shape. We’ve discussed black holes, but perhaps Smolin would have it that human intervention in the universe is likewise entirely determined right up until the point when – wholly governed by laws of physics, biology, psychology – we are moved to create some unprecedented situation or experiment which only then forges a new universal habit.

But, supposing we grant that possibility, what are we to suppose counts as an ‘unprecedented situation’? How is the universe supposed to distinguish between unprecedented situations and familiar ones? Can it really tell that quantum physicists in 2013 are asking ‘the same question’ when they repeat an experiment performed fifty years ago when (for instance) our universe was smaller, our solar system was in another part of it, and the previous experimenter had a cold? And aren’t unprecedented things happening all the time if only we avail ourselves of this kind of combinatory power? As I type these words I am sat in a café in Berlin, staring out the window, and watching the precise fluctuation of every leaf in the trees that line the street. The couple next to me are paying for their meal and murmuring their thanks to the waitress. There are three young girls outside petting a tiny dog on a bright red leash. Is this combination of circumstances novel enough to create a new law of physics, and if not why not? Isn’t a new-fangled quantum laboratory set-up precisely this kind of recombination of existing phenomena in a novel way?

A second and much more important problem with the idea that the universe may determine humans to tickle some novel response out of itself is that there is still no room for free will whatsoever: our lives remain entirely dictated by universal regularities right up to the point when something unpredictable – and hence equally beyond our control – just happens. If Smolin would like to suggest, à la Jaron Lanier, that human beings are more than biological computers then his argument requires some account of how our species in particular is capable of harnessing the plasticity of nature and making it our own. In the absence of that account the mere fact of evolving laws grants as much free will to rocks or tables or washing machines – or indeed computers – as it does to human beings.

Consciousness and the problems of essentialism

So perhaps it is the case that Smolin has brought some unscientific presuppositions about free will and consciousness to bear in the formulation of his thesis after all. How else to break the tautology inherent in his assertion that novel situations introduce novelty into the universe? Where does that novelty come from? I think Smolin may be tacitly presupposing that human consciousness is some miraculous source of novelty that is not in itself caused by anything else. This is Smolin’s only explicit discussion of consciousness, drawn from the closing pages of Time Reborn:

The problem of consciousness is an aspect of the question of what the world really is. We don’t know what a rock really is, or an atom, or an electron. We can only observe how they interact with other things and thereby describe their relational properties. Perhaps everything has external and internal aspects. The external properties are those that science can capture and describe — through interactions, in terms of relationships. The internal aspect is the intrinsic essence; it is the reality that is not expressible in the language of interactions and relations. Consciousness, whatever it is, is an aspect of the intrinsic essence of brains.

This argument is harmless as far as it goes. However, as the above passage freely concedes, the very notion of inner essences is by definition well beyond the reach of scientific validation. And if Smolin does mean to suggest that consciousness has a practical role to play in explaining human inventiveness, and perhaps even the evolution of the laws of physics, then he faces a problem familiar to all adherents of a ‘dualistic’ theory of mind, namely: How is it that a purely ‘inner’ essence may simultaneously reach out and influence ‘external’ events? There are games being played with language here. One cannot define consciousness as distinct from the cut-and-thrust of external relations and simultaneously require that it does all the work that Smolin needs it to.

Neuroscience, free will and the sensation of ‘now’

Even more problematic than Smolin’s discussion of inner essences is the emphasis he places on our species’ subjective awareness of ‘the present moment’. Setting himself against a long history of thinkers who would write off the sensation of ‘now’ as a psychological quirk incompatible with timeless physics, Smolin goes so far as to call it ‘the deepest clue we have as to the nature of reality’. Once again, I deeply distrust this desire to twist the universe to fit particular hunches – but if it were true that humans had a particularly intimate sensitivity to what Smolin characterises as an ‘open’ moment teeming with unresolved quantum possibilities then it is easy to imagine that our agency may in some way stem from this. Unfortunately – and quite apart from physicists’ objections to the very idea of a ‘present moment’ – there is robust neuroscientific evidence that humans’ perception of time over short intervals is not all that intimate or acute.

Take ‘flash and beep’ tests, for example. These expose experimental participants to near-simultaneous bursts of light and sound. Even when the respective transmission speeds are controlled so that either the beep or the flash reliably reaches the participant’s senses a few hundred milliseconds in advance of the other, individuals find it notoriously difficult to identify which came first in their subjective experience. And this is for the entirely unsurprising reason that it takes the brain a certain amount of time (i.e. longer than an instant) to get a handle on any given sense data – and often varying amounts of time depending on the sorts of signals being received. Sounds, for instance, are often processed more rapidly than visuals, and so may appear to arrive first in subjective experience, even if objectively speaking they turned up after a flash. In everyday life it seems likely that we make use of other cues – such as our own motor actions – to put such disparate signals into registration (our brains may ‘expect’, for instance, that when we click our fingers the sight and the sound are simultaneous). But under laboratory conditions designed to eliminate such cues it is possible to reveal that our sensation of ‘now’ is actually a blurry concatenation of real-world events that are in fact spread out over short periods of time. Smolin may of course be right for independent reasons that reality consists in a series of discrete moments, but this experiment would seem to rule out the idea that we have any kind of immediate access to them.
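The reordering effect is easy to model numerically. In the sketch below, subjective timing is simply physical arrival time plus a modality-specific processing latency; the latency figures are illustrative assumptions of mine, not measured values.

```python
def perceived_order(flash_time_ms, beep_time_ms,
                    visual_latency_ms=100.0, auditory_latency_ms=60.0):
    """Return which stimulus arrives first in subjective experience.

    Perceived time = physical arrival time + processing latency.
    The default latencies are illustrative only: audition is modelled
    as faster to process than vision."""
    perceived_flash = flash_time_ms + visual_latency_ms
    perceived_beep = beep_time_ms + auditory_latency_ms
    if perceived_flash < perceived_beep:
        return "flash first"
    if perceived_beep < perceived_flash:
        return "beep first"
    return "simultaneous"
```

On these numbers a flash at t = 0 ms followed by a beep at t = 20 ms is nevertheless experienced beep-first (0 + 100 > 20 + 60): the objective order and the subjective order come apart.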

But even if we did, could we choose to influence them? Once again, the picture coming from neuroscience is not particularly encouraging. There is, in fact, a rather troubling history of pronouncements that free will must be illusory on account of the fact that conscious decisions can be entirely predicted from unconscious brain activity observed up to several seconds earlier in fMRI scanners. (See the work of Benjamin Libet, but also his followers such as John-Dylan Haynes, as discussed in a previous entry on this blog.) Thankfully, such fatalistic research has been fairly conclusively undermined by the analyses of Dennett and others, who point out two key flaws. The first flaw is in the set-up of the experiments: researchers have only found it possible to relate decisions to rather small and uninteresting mechanisms in the brain by requiring that their participants make strictly ‘spontaneous’ choices (to flick their wrists, for instance) with no forethought or planning. The second flaw lies in the interpretation of the results: by assuming that you may be predicted by prior brain activity, Libet et al. betray their doggedly ‘dualist’ assumption that one’s consciousness is somehow instantaneous and may therefore be contrasted with the slow mechanical chugging of the brain.

In short, the neuroscience of free will has often proceeded with precisely Smolin’s hunch that consciousness exists in ‘the present moment’ and is capable of issuing directives from that dimensionless space – and it has come up wanting. A far more sensible alternative is that you are your brain’s processes, and that ‘thinking in time’, as Smolin would have it, might actually take some time.

The naturalistic alternative: why predictability sets us free

But if consciousness is simply the brain ‘chugging away’ according to the laws of physics then what kind of freedom is that? It’s not freedom from the universe, that’s for sure. But I don’t think we need that. What we really need is freedom from natural disasters, from hunger, from disease, from those other members of our species who would exploit us – and also freedom from ourselves in those instances where we have become stuck in self-destructive ruts. It is my view that those kinds of freedoms, where they exist, emerge from our species’ non-miraculous capacity to perceive patterns in the world (and ourselves) and to make wise decisions accordingly. To learn and adapt, in other words.

By these lights Smolin’s prospect of radical unpredictability only limits our capabilities. I put this point to Smolin at his talk in Bristol.

Clarifying my point at the book signing afterwards, I suggested to Smolin that free will not only requires some predictability – it actually correlates with the degree to which predictability is possible. How else can we channel nature, or fortify against it, if we don’t know what’s going to happen next? Smolin’s response, as best I recall, was that we never know exactly what’s going to happen next and that life always involves some degree of risk in the face of uncertainty. This is certainly true, up to a point, but I fear that Smolin may have missed my point, which is this: who could possibly benefit from the increased risks of an unstable universe? A bridge is built to withstand a certain strength of gravity. A computer is reliant for its functionality on the precise electrical conductivity of certain components. A cancer treatment will exploit some regularity in body chemistry. We do not want these patterns to dissipate. I embrace absolutely Smolin’s advocacy of perpetual innovation, and his warnings against the dangers of dogmatism – but one can only ever adapt to circumstances that are either static, or at least changing more slowly than one can. Once again, Smolin may well be right for independent reasons that the laws of physics are open to change – but the science of free will cannot benefit from this fact and must necessarily be the science of a human plasticity that outstrips the plasticity of nature more generally. (For some clues as to the blind material processes underlying that plasticity, see my discussion of learning as an evolutionary process here.)

Tackling climate change and negotiating personal relationships

Before I close, here’s a couple more examples designed to show how the naturalistic approach to free will can tackle those issues close to Smolin’s heart more effectively than his own more radical thesis.

Smolin opened his Bristol talk with a foreboding prediction drawn from climate science: ‘In 2080 the average temperature on earth will be six degrees warmer than it is now’. He then asked whether this was a fact set in stone, or whether we could choose to influence it – and claimed that we could, as a direct result of the openness of physics. But what of those other predictions stemming from climate science, such as: ‘A concerted effort to reduce carbon emissions will avert disaster’? If the true nature of physics undermines the certainty of the first prediction, does it not also undermine the certainty of the second? What these awkward questions are intended to reveal is that climate science does not predict the actual future – it lays out possible futures based on different sorts of assumptions. The first prediction, as cited by Smolin, assumes that humans will do nothing. The second, which I just introduced, assumes that we will wise up and take action. Consider the analogous forecasts we make when we see a brick hurled in our direction. First: It’s gonna hit me! And then: I’ll avoid it if I duck! Do we have to change the laws of physics in order to undermine the fatalism inherent to the prediction that the brick will hit us? Of course not. We simply have to take whatever evasive action seems wise, and recognise that our choice to do so may always have been part of the equation. And in fact we’d better hope that the future is predictable, because if the laws of physics go changing up on us after we’ve committed to a course of action then perhaps we’ll get hit by the brick after all (or disastrous climate change, for that matter).

Finally, in the case of personal relationships, Smolin would have it that a faith in an open future may be the key to a healthy union:

If you’re in a relationship or a marriage and you’ve been stuck in fixed positions and battling from them for ten years, and you think ‘there’s just her position and my position and things will never change’ then you’re thinking outside of time. But if you think ‘this is a process and I don’t know where this is going – but I’m in’, then you’re thinking in time.

I wouldn’t wish to say that in an otherwise healthy relationship this kind of easy-going flexibility couldn’t be a wonderful thing – but I would like to add that it is also true that many people, and many relationships, get stuck in fixed positions precisely because of a misplaced hope that the future will be different. Charlie Kaufman’s screenplay for Eternal Sunshine of the Spotless Mind (2004) is a good illustration of this idea. Choosing to free themselves from the trauma of a relationship turned sour, Joel and Clementine undergo a radical surgical procedure to have their memories of each other eradicated. The slate wiped clean, the couple happen to encounter each other afresh – only to fall in love for precisely the same reasons as before. The bittersweet conclusion of the film hints at the fact that the relationship may be doomed to forever repeat this cycle of initial ecstasy and subsequent pain. Dennett has a lovely phrase to describe what Joel and Clementine are doing: by attempting to escape their memories and reclaim their ability to live in the present, they are ‘making themselves small’. That is to say that they are denying themselves the chance to encompass the patterns of the past, and hence surrendering themselves to those patterns absolutely.

Conclusion: Towards a better kind of myth

In a recent Guardian article, ‘Philosophy isn’t dead yet’, the philosopher Raymond Tallis heaped praise on Time Reborn, identifying Lee Smolin as one of a number of thinkers engaged in a broader campaign to rescue contemporary physics from its floundering mathematical excesses by relating it back to our everyday experience of reality. ‘It is time’ Tallis concludes, citing the physicist Neil Turok, ‘to connect our science to our humanity, and in doing so to raise the sights of both’.

I’m all for interrogating foundational assumptions in science – that’s certainly how progress is made. I’m also fascinated by the question of how human experience fits into the grand scheme of things. But the idea that we should conflate these two quests is a great mistake – and is itself somewhat ‘momentary’ and free of an appreciation of history. As a cursory glance at the study of mythology reveals, humankind has long attempted to weave our hunches about ourselves into our picture of the world. It is only in recent centuries that we have begun to learn from the shortcomings of these fledgling cosmologies, and to separate out what we’d like to be true from what may actually be demonstrated. To try to return to a form of enquiry based upon what feels right is no novel innovation; it is a relapse into old habits.

Of course I can readily understand the temptations of Smolin’s world view. In the face of those who would seem to dehumanise our species in order to make us consistent with the laws of physics (Richard Dawkins’ phrase about our being ‘lumbering robots’ springs to mind), it strikes me that Smolin is offering the precise logical opposite: he would humanise the universe, suffusing it with inner essences, and granting it a kind of memory and the agent-like capacity to interpret and ‘answer’ questions posed by curious experimenters.

But the trouble is – as I’ve taken pains to demonstrate – such talk is not science. At least not yet. In choosing to present his ideas in popular form before offering a rigorously falsifiable hypothesis, Smolin is speaking over the heads of his colleagues and attempting to set the scientific agenda by intuition alone. It is my view that if we are to forge some kind of neat and easily digestible picture of the world that encompasses everything from the success of physics to the existence of human freedom then it must be read off from science, and not used to direct it. This ‘naturalistic’ approach is the strength of Dan Dennett’s accounts of consciousness and free will upon which I have drawn heavily in this essay, as well as James Ladyman’s account of metaphysics which I hope will be the topic of a future post. It is also, according to Joseph Campbell, a typical feature of traditional mythologies, the cosmological function of which has always been ‘that of formulating and rendering an image of the universe … in keeping with the science of the time’.

UPDATE: Philosopher Raymond Tallis responds:

Dear Joe,

Many thanks indeed for your excellent paper. It has inspired me to complete a very tricky paper on causation that is also part of my ‘Of Time and Lamentation’ – due out next year.

I am drowning at present – 3 books due by spring plus endless other things (NHS defence, assisted dying decriminalisation) – so I haven’t had a chance to formulate a proper response to your paper. Meanwhile, many thanks for a terrific, thought-provoking read.

Kindest regards,

References and further reading:

Baggini, Julian (2013), ‘Daniel Dennett: “You can make Aristotle look like a flaming idiot”’ in The Guardian. Available online here.

Campbell, Joseph (1971), ‘Mythological Themes in Creative Literature and Art’ in Myths, Dreams, and Religion (New York: Viking)

Dennett, Dan (1991), Consciousness Explained (London: Viking)

Dennett, Dan (2003), Freedom Evolves (London: Penguin)

Hume, David (1748), An Enquiry Concerning Human Understanding. Available online here.

Hunt, Tam (2013), ‘Time Reborn: A Conversation with Lee Smolin about Time and Physics’ in The Santa Barbara Independent. Available online here.

Kaufman, Charlie (2004), Eternal Sunshine of the Spotless Mind (Screenplay) (London: Nick Hern Books)

Lanier, Jaron (2000), One Half A Manifesto. Available online here.

Parsons, B. D., Novich, S. D. and Eagleman, D. M. (2013), ‘Motor-sensory recalibration modulates perceived simultaneity of cross-modal events at different distances’ in Frontiers in Psychology, Vol 26, No. 4. Available online here.

Smolin, Lee (2013), Time Reborn: From the Crisis of Physics to the Future of the Universe (London: Penguin)

Tallis, Raymond (2013), ‘Philosophy isn’t dead yet’ in The Guardian. Available online here.

Can the part explain the whole? A video interview with Dr Guy Saunders, UWE

This video debate is a kind of sequel to the written debate I conducted with Guy between October 2013 and May 2013. That’s available here. And here’s a direct link to the Horizon clip that we discuss.

Notes on an exchange:

I first got talking to Guy in the corridors at UWE where I work as a notetaker for disabled students. The joy of the job is that I get to attend a great many lectures and seminars – with the catch being that my professional responsibilities prevent me from engaging in any class discussion. Guy has a lovely attitude to free speech – he insists that everyone must be allowed a voice – and indeed I’m very thankful to him for letting me attend a few of his seminars on my own time where I was permitted to pipe up!

We disagree about a lot of things, and our starting points are very different. Nevertheless, I like to think that over the course of our interaction we’ve worn each other down on a few issues. Guy’s discussion of John-Dylan Haynes’ work has been especially enlightening to me – there are some real problems with the interpretation of neuroscientific results in terms of human freedom, and with the ways these conclusions are sold to a wider audience. For my part, I’m hopeful that my attempts to collapse the distinctions between reductionism and holism, physicalism and non-physicalism, and patterns and stuffs have been persuasive to Guy. My feeling is that such dichotomies say more about academic tribalism than the nature of the world, and my hope is that a more comprehensive vision of science – as the search for predictive patterns – might preserve the best of all possible worlds, and promote exactly the kind of ‘collegiate’ approach that Guy himself advocates.

Finally, I’d just like to wish Guy the best of luck with his book, ‘Acts of Consciousness’. I was originally moved to disagree with Guy over his arguments against those sciences and scientists that I find valuable and interesting. It will be a real delight to hear his positive vision of a ‘Cubist psychology’ outlined in full.

Social Psychology vs. Neuroscience: Adam’s Opticks debates with Dr Guy Saunders, UWE

Dr Guy Saunders is Senior Lecturer in Social Psychology and Consciousness at the University of the West of England and author of the forthcoming ‘Acts of Consciousness’ for Cambridge University Press. He has kindly agreed to say a few words for Adam’s Opticks about his views on the overreach of neuroscience and the problems inherent to a ‘purely physicalist’ science of the world. A video follow-up to our exchange may be found here.

Adam’s Opticks:

Hi Guy. I’ve been lucky enough this past year to attend a number of your lectures on consciousness and the philosophical underpinnings of psychology as a discipline. One of your most resounding themes has been a criticism of neuroscience, and what you perceive to be an attempt on the part of its proponents to reduce psychology to the scientific study of the brain. Following the philosopher Mary Midgley (whose recent appearance on Radio 4 can be found here) you assert that the ‘unit’ of psychological explanation can be nothing less than ‘the whole person’. Perhaps you’d like to begin by explaining to the uninitiated what you mean by a ‘unit’ in this instance, and what you think is at stake when neuroscientists identify that unit as the brain?

Dr Guy Saunders:

Hi Joe. Thanks for your comments and the opportunity to engage in this way. I hope the following begins to address the questions you put.

If I wish to carry out research on consciousness using supposedly ‘traditional’ scientific methods, it will be necessary to take ‘the person’ as the object of my enquiry. In traditional scientific research there needs to be a ‘unit’ of analysis in order to observe something in action, manipulate and measure it. I wouldn’t conceive of science or research on consciousness in this way – I believe the atomising of the world to be part of the problem – but I accept sometimes the need to offer alternatives to the traditional physical unit; hence suggesting ‘the person’ as an alternative to ‘the brain’.

Part of what it is to study ‘the person’ will include the person’s body and the person’s body necessarily includes the person’s brain. If we have to unitize the world, this stacking makes conceptual sense. But I believe that it is nonsensical (conceptually) to treat the brain as a unit that could stand in for the person: the part cannot stand in for the whole. The brain enables me to act as a person but the brain cannot by itself act as a person. Brains cannot interact, for example, only persons can do this.

I’ve just read ‘Fahrenheit 451’ by Ray Bradbury in which persons become the living embodiment of books. It is only because they no longer carry the physical books that the book-burning regime does not see them as a threat. A person can carry more than a physical book; they can carry the ideas conveyed within. They can do this privately, subjectively, personally and if or when the regime loses power, they can help to restore the cultural legacy that might have been wiped out forever had persons not acted in this way. A brain may enable a person to remember the words in the books, but it is only whole persons who can read. Reading is not merely the simple mechanics of speaking words aloud – it is a person’s acting freely to pick up or set down a particular text, the how-it-is-for-them to do it, the what-is-brought-to-a-text in terms of memories and reading history, and the wealth generated by their reflexive reading. These important features of a person’s reading are not given in their neuroscience.

Adam’s Opticks:

You seem to adopt a suspicious stance toward the ‘traditional’ conception of science – especially where it concerns human beings! – and at the same time you appear to raise the tantalising prospect of an alternative method that does away with ‘atomisation’ in order to do justice to ‘the whole person’. I’d love to hear your thoughts on what this radical science might look like, but before that I’d just like to ask whether there isn’t a great danger that your objections to neuroscience are only conceptual? In other words, if you don’t even conceive of science in the same way, how can you be sure you are not simply talking past neuroscientists on the topic of personhood (and vice versa)? Are they really saying that persons are reducible to brains? Or are they innocuously assuming that the brain is a very important part of the whole, and then proceeding to uncover the neural-level mechanisms necessary (but not sufficient) for the human activities you list: interacting with other people, making decisions, reading reflexively and so on? I’m not sure the distinction between mechanically saying words and imaginatively responding to a book casts light on the debate – surely persons need their brains at every stage in the richer reading process you draw attention to? And so oughtn’t neuroscientists contribute to (if not close down) the attendant scientific discussions of such phenomena?

Dr Guy Saunders:

Regarding method, it is not my intention to advocate a single ‘alternative’ to traditional science; I am talking about the variety of methods that already exist in social psychology, the arts, mathematics and the humanities, among other fields of enquiry. I am not biased against any particular method; I think that the problem is the other way around. Some scientists are prejudiced against fruitful methods that fall outside their jurisdiction and expertise. Likewise, certain ideas about science preclude those methods that may in fact fit with less conventional ideas about what counts as ‘scientific’. This is one of the things that Mary Midgley is getting at in her essay ‘Against Humanism’:

The search for a “scientific explanation of consciousness” … still centres not on trying to be scientific in the sense of using suitable methods, but on making consciousness respectable by somehow bringing it within the range of physics and chemistry, mainly at present through neurobiology.

Note the distinction between ‘respectable’ and ‘suitable’ methods. When it is the observer and their subjective experience that is the object in question, neuroscience may not be the only suitable method and may not be a suitable method at all.

There is the case of the patient Scott Routley in the news currently that relates to the issue. The BBC News article is here. He is completely paralysed without, as far as I can tell, any voluntary muscle movement – but he has communicated that he is conscious via a brain scanning procedure. The doctors have asked him whether or not he is suffering any pain and he has answered: ‘No’. How did he answer? They asked him to imagine walking round his home for ‘Yes’, and playing tennis for ‘No’. The brain activity in this instance is being used as a form of code by both patient and neuroscientist in order to establish a means of communication (much as Morse code can be used to send messages to someone who knows the code). But I argue that it is Routley as a person – and not just his brain! – that is talking to his doctors. If it becomes possible for the means of communication to be more sophisticated then a full conversation might ensue (See also Jean-Dominique Bauby (1997) and the film ‘The Diving Bell and The Butterfly’ (2007), which put his story on screen).

Are neuroscience and its critics such as myself talking past each other? Yes, I think we are – and this comment has been made before by David Chalmers, for example. He has suggested that we keep the word ‘consciousness’ for subjective experience and use the term ‘awareness’ for the kinds of consciousness normally spoken of as if you can have it or not, lose it or not. Similarly, I believe we should reserve ‘person’ for the way the term is popularly used by people and avoid a kind of anthropomorphising of brain function. In this conception, ‘person’ includes our dealings with other people, our experiences of the past, our imagined futures, the times in which we live and our place in society. It is pointless and potentially inaccurate to try to pack into the brain features of the world that already have an accepted existence as part of the fabric of human society, culture, history. If we were to pack all aspects of human society into the brain, we would not have explained it, we would simply have moved the problem to a different location.

But I’d rather not play a defensive role here; it’s too easy for the person asking questions. This is meant to be a conversation, so I think it’s your turn. If you think that the brain is important to decision making, can you say how the brain does it? It is not sufficient to show correlative brain activity because the sight of the brain doing x during the decision making process would only be illustrative of a brain enabling a person to make a decision. If the brain is to be given some kind of independent causal role, this is an extraordinary claim that requires extraordinary evidence and explanation. Can you explain how the brain doing x simply is the person doing x? How do brains interact with other brains? If they can, how do they get to be able to do this? Would anything prevent brains interacting with other brains? How do brains become the person that they are identified as being, given that it seems unlikely that such an identity exists at birth?

Adam’s Opticks:

Thank you for the challenge, Guy! I’ll start by trying to answer the first of your questions. I take the rest to be illustrative of misunderstandings and deeper faultlines between our positions. I hope that the following serves to identify some of these, and to justify why I come down on the sides I do.

How does the brain make decisions? One type of answer that I have been finding increasingly persuasive is expressed by competitive or threshold-based theories. The following Horizon clip, featuring Bristol University’s Nigel Franks, is instructive:

Translating the lesson of the rock ants back into neuronal activity, we might speculate that when faced with two possibilities – should I cross the road, or hold back? – two camps of neurones will fire (one for each possibility). The sources of neural excitement feeding into either one of these camps will be things like sense data (the sights and sounds of the road), estimates about my ability to cover the requisite distance in the time available to me, tacit memories of prior crossings (perhaps I was almost hit by a car last time and will proceed with above-average levels of caution), and a sense of the time pressures facing me (perhaps I am late for work, and will throw caution to the wind). At some point, the larger or more excitable of the two camps will win out – with the threshold for victory determined by the level of urgency – and a decision will be made, resulting in action.
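For concreteness, here is a toy sketch of that kind of threshold (or ‘race’) model in Python. Nothing in it comes from the Horizon clip or from Franks’ papers; the evidence values, noise level and threshold are all invented purely for illustration:

```python
import random

# A toy 'race' (accumulator) model of a two-option decision. All numbers
# are made up for illustration; this is a sketch of the general idea, not
# a model drawn from the Horizon clip or from any specific paper.

def decide(evidence_a, evidence_b, threshold, noise=0.1, seed=0):
    """Race two 'camps' of neural excitation toward a threshold.

    evidence_a / evidence_b: average excitation per time step for each
    option (summed contributions from sense data, memories, urgency, etc.).
    threshold: set lower when the decision is urgent, so it is reached
    sooner (at the cost of more influence from noise).
    Returns the winning option and the number of steps it took.
    """
    rng = random.Random(seed)
    a = b = 0.0
    steps = 0
    while a < threshold and b < threshold:
        a += evidence_a + rng.gauss(0, noise)   # camp for option A fires
        b += evidence_b + rng.gauss(0, noise)   # camp for option B fires
        steps += 1
    return ("A" if a >= threshold else "B"), steps

# Late for work: strong evidence for crossing ('A'), weaker evidence for
# holding back ('B'), and a low threshold reflecting the urgency.
choice, t = decide(evidence_a=0.9, evidence_b=0.3, threshold=5.0)
```

Note how the decision emerges from nothing but accumulation and a threshold – there is no central ‘chooser’ anywhere in the code, which is exactly the point of the analogy.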

Notice, here, that I do not wish to exclude the effects of a person’s surroundings, or their past, or their imagined futures from the process of decision-making. I reject the notion that neuroscience attributes to the brain ‘some kind of independent causal role’. To be truly ‘independent’ of one’s surroundings in space and time runs counter to the very notion of causality itself! Causes must themselves always be the effect of something else. I would submit that the brain is an important set of gears in a larger deterministic machine (i.e. the universe) – it is no magical ‘source’ of decisions.

But – you may well counter – if I’m willing to concede that persons make decisions based upon their senses, abilities, memories, and preferences, then what is to be gained by adding to this perfectly adequate list the qualification ‘but really the neurones did it’? On one level, not much. If you want to understand why I always hesitate when crossing the road outside my house you’d do better to ask me about my recent accident than to peer at my brain through an MRI scanner. But – and here’s the nub – I don’t think that’s the kind of explanation that neuroscience is at all interested in providing or competing over. And where individual neuroscientists care to differ, that is where I will depart from them. I think neuroscience achieves something rather different.

So what’s that then? To answer the question in a general way (I’ll come to the specifics in a moment), I’d like to take issue with your insistence that ‘the part cannot stand in for the whole’. I’d agree with you, superficially. But how often in science – or social science, or history, or engineering for that matter – is it necessary to focus our attention on a particularly important constituent part? Are historians committing a catastrophic conceptual howler when they refer to ‘Stalin’s Five Year Plan’, when really the policy emerged from the ceaseless interaction of the entire body-politic, and not one man in isolation? Are architects embarrassing themselves when they admonish their protégés for turning in beautiful conceptual designs that fail to take account of the physical reality of what can actually be achieved using bricks? Did you know that the seemingly extravagant design of the human kidney (in which water must be directed along concentration gradients at all times) is an ornate work-around necessitated by the impossibility of a simple molecular H2O pump of the sort that works so well for other molecules? Sometimes the nature of the parts – their powers and limitations – plays an integral role in the design and functioning of the whole. Sometimes it is the bricks that matter.

Specifically, then, I would submit that the nature of neurones explains not what decisions we make, but the mechanisms by which we make them – and that those mechanisms offer us some deep insight into our (whole) selves. The threshold theory outlined above says something important – at least to me – about the fragmentary and confusing nature of being alive. I am not a unified entity so much as a battleground of competing influences. As I type, for instance, I can feel mounting hunger competing against my desire to finish this line of argument. Will some central authority keep the hunger at bay in the meantime, so as to allow me to concentrate and thereby finish faster? That might seem like a good design solution, but no – the hunger will continue to nag, and nag, until it becomes unbearable and the threshold is reached. Neuroscience explains why this must be the case.

In your next response, Guy, perhaps you could let me know if you find this definition of neuroscience’s reach satisfying. And if so, I wonder if you might provide some examples of neuroscientific discourse explicitly breaching the parameters I identify – perhaps claiming that the brain is a source of human or societal phenomena, rather than an important mechanism embedded within those wider domains.

As for me, I’m off for a sandwich.

Dr Guy Saunders:

Just such an example of neuroscience identifying the brain as the ‘source’ of decisions can be found in the following clip. This is taken from another Horizon episode entitled ‘The Secret You’, broadcast in 2009 and presented by Marcus du Sautoy.

In the clip, the neuroscientist John-Dylan Haynes and his team claim that

Up to six seconds before you make up your mind, we can predict what decision you’re going to make.

Except they can’t do this and I want to elaborate on why.

N.B. – Scroll to the bottom of this post for John-Dylan Haynes’ response to our discussion.

Firstly, there is a severe case of ‘bait and switch’ going on here. We are sold the experiment on the basis that it can predict freely made conscious decisions: watch the entire set-up as du Sautoy arrives, and the way that the significance of what is to take place is ramped up. But wait a minute: the experimenters have reduced the freedom to act to a forced choice between physically pressing one of two buttons (left or right), and du Sautoy says nothing to draw attention to the substitution. Worse still, the experimenters ask du Sautoy to ‘randomly decide and then immediately press’ one of the buttons. Um… how do you ‘randomly decide’ to do anything? I wouldn’t know how and, I suggest, neither does du Sautoy and nor does John-Dylan Haynes.

The experiment is carried out and du Sautoy returns to get the results via a narrative device much like being tested and then receiving an expert medical diagnosis. He is told that his decision was foretold by patterns in his brain activity: certain regions become more active when you are going to choose left or right. But there’s a contradiction here. At first, Haynes confirms du Sautoy’s suggestion that his ‘conscious decision’ was a very ‘secondary thing’ to the ‘actual’ brain activity observed six seconds earlier. But then, when du Sautoy proposes that this makes him a ‘hostage’ to his earlier brain activity, Haynes changes tack: du Sautoy is no hostage to his brain because his conscious decision making and his patterns of brain activity are two aspects of the ‘same thing’. So how can brain activity be said to predict conscious decision making if the two are not discretely different?

At other stages in the interview Haynes reverts back to a kind of dualism by differentiating between conscious brain activity and unconscious brain activity: we are told that ‘there’s a lot of unconscious brain activity that is shaping your decisions’ – but not to worry because ‘your unconscious is in harmony with your beliefs and desires’. What? In what sense is this ‘unconscious activity’? The kind of brain activity Haynes is discussing might more accurately be described as ‘non-conscious’ in that it has no more of a ‘say’ in what I do than that which makes my feet move when I’m about to play a shot in tennis.

There is also the issue of Haynes’ personal stake in the explanation – as a neuroscientist he of course wants to suggest that thoughts and decisions are the ‘same thing’ as the ‘physical processes’ in which he is interested. But there’s a problem here. If it’s all brain activity, Haynes hasn’t solved anything. He’s taken the problem indoors – that is, he’s moved the problem of decision making from the world of people into the world of brains, but he hasn’t stated how the brain makes decisions. Let Haynes repeat his experiment with his participants asked to make a more meaningful decision: ‘Click right to submit a plagiarised essay on your university course’ or ‘Click left to submit a document with the essay title only’. The physical action would remain the same, but we would not expect to reduce the nature of such a decision to a mere engineering solution. The rock ants you described earlier are doing no more and no less than du Sautoy in his forced-choice experiment; but this only serves to confirm the level of engineering on offer. If we conflate forced-choice, context-free, computational switching with the kind of decision making in the example of submitting an essay, we will fail to take account of the social, historical, cultural and economic factors acting on individuals over longer periods of time.

Ultimately I think John-Dylan Haynes’ example betrays a fundamental problem in our ideas of ‘existence’: of what counts as existing and how it is characterised. The popular view that all existence must be somehow ‘physical’ is, to my mind, a form of madness dressed up as a foregone conclusion. We are psychological, social, historical, cultural and political beings, and physicalist explanations must therefore omit, misrepresent, or misconceive these kinds of existence. We each have our favoured viewpoints. Your metaphor of being a ‘battleground of competing influences’, for instance, says much about the explanations you’re likely to prefer. But if we work with others who do not share our beliefs, we may be forced to consider questions we would otherwise fail to address. We will need a collegiate way of inquiring if we are seriously to tackle issues such as ‘what it is to have subjective experience and to make free and conscious decisions’.

Adam’s Opticks:

Thanks once again for this exchange, Guy! It’s been a fantastic exercise in clarifying my thinking on these topics. I’ll use my final reply to try to summarise where I’ve got to, but also to critique the dangers implicit in your suggestion that we must allow everyone their view when it comes to scientific theorising. I heartily agree that science must be ‘collegiate’, though I think we mean different things by it.

Firstly, regarding John-Dylan Haynes, this is where we agree: you’ve persuaded me that by denying du Sautoy the opportunity to weigh up a decision based upon his desires, past experiences, and imagined futures, the experiment by its very design systematically excludes from its domain of study all those things we would normally consider conscious decision making to be. In fact, requiring du Sautoy to ‘randomly decide’ which button to press probably does leave him ‘hostage’ to those rather more mundane aspects of his brain function which you refer to as ‘non-conscious’ – but only by coercing him to leave all the most interesting aspects of his consciousness at the door.

That said, I think your criticisms are overstated. If there’s to be one guiding principle that defines the collegiate approach to science, I think it must be a commitment to understanding competing schools of thought on their own terms. That ought to prevent us wasting our time on purely semantic disagreements, and help us identify those genuine areas of controversy that require more attention. In that spirit, I’d like to point out that though John-Dylan Haynes certainly over-inflates the importance of his results (they say little about conscious decisions), this does not automatically invalidate their value or the methods by which they were achieved. The fact remains that his experiment delivers a non-trivial prediction about which way his participants will swing – and that must mean he’s tracking something of importance in the brain. He’s certainly had to quieten down the majority of brain function in order to isolate this more modest mechanism – and he’d do well to recognise and acknowledge that – but in fairness I don’t think there are any other methods available to him. Reductionism works by turning the volume down on all other factors, or holding them still, in order to get a handle on the role of a constituent part. This is entirely sensible in my view – just so long as you remember to bring all the other factors back in when you’re done; to put the world back together once you’ve finished breaking it down.

In light of this, I’m minded to reject your suggestion that holistic approaches could replace atomistic ones – I think the distinction between them is false, and perpetuated by a misconception of reductionist practice. I find the following analogy – courtesy of Richard Dawkins – a much more enlightened way to think about reductionism. Imagine, says Dawkins, that you have a recipe for a cake. It would be ludicrous to claim that any single word in the recipe could explain the whole cake, or even that a single word in the recipe could explain a single crumb of the cake. This is because you need the whole recipe, and the precise set of interactions it specifies, in order to explain the whole cake. However, if you did choose to focus your attention on a particular word in the recipe – for instance ‘tablespoon’ – there is a sense in which you might meaningfully claim that it was responsible for a feature of the whole cake. Perhaps in context ‘tablespoon’ refers to a tablespoon of sugar, in which case you might say that the word was a cause of the cake’s sweetness, for example.

This particular image was originally proposed by Dawkins as a clarification of his arguments regarding gene reductionism (in response to criticisms from Mary Midgley among others). Nevertheless, I think the point generalises to brain reductionism (and all other forms). What Dawkins demonstrates is that to pick out a unit in the world is not the same thing as saying that the unit acts in isolation. The claim is never that all else is irrelevant. Rather, reductionism rests on the idea that all else being equal the part makes an important difference to the whole (we can imagine, for instance, the effect that substituting the word ‘teaspoon’ for ‘tablespoon’ would have upon the sweetness of the cake). This is why I think your insistence that brains cannot be studied in isolation flies wide of the mark – I just don’t think that’s the aim of the game.
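This ceteris paribus logic can even be put in code. The sketch below is my own rendering, not Dawkins’, and the gram-per-spoonful figures are rough kitchen conversions chosen purely for illustration:

```python
# A toy rendering of the recipe analogy. The gram-per-spoon figures are
# rough kitchen conversions, chosen purely for illustration.

SUGAR_PER_SPOON = {"tablespoon": 12.5, "teaspoon": 4.2}  # grams of sugar

def sweetness(recipe):
    """Total grams of sugar implied by the recipe's wording."""
    return sum(SUGAR_PER_SPOON[unit] * count for unit, count in recipe)

original = [("tablespoon", 2)]   # "two tablespoons of sugar"
amended = [("teaspoon", 2)]      # same recipe, one word substituted

# No single word explains the cake; but holding all else equal, the
# substitution makes a real, measurable difference to the whole.
delta = sweetness(original) - sweetness(amended)
```

The word in isolation explains nothing; the counterfactual difference it makes, with the rest of the recipe held fixed, is what the reductionist is pointing at.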

However, with all that said, I do hear your frustration regarding the prioritisation of ‘physicalist’ explanations over broader psychological, or social, or historical approaches. I don’t blame individual scientists for this so much as the psychology of our species – I think we’re inherently more comfortable dealing with that which may be kicked, poked at, or rendered visible by an MRI scanner. Thoughts, economies, cultures, histories – these seem somehow less real to us than actual physical stuff. I’ve read Midgley argue that we ought to accord ‘patterns’ as much respect as we accord ‘stuffs’ – and I think she’s on the right track, though I prefer the more comprehensive suggestion made by Dan Dennett in his paper ‘Real Patterns’ and developed by James Ladyman and Don Ross in their book ‘Every Thing Must Go: Metaphysics Naturalised’. These philosophers argue that patterns are all there is (or more precisely that a kind of ‘relational structure’ is all there is, and that multiple patterns may be used to capture aspects of that structure at different scales of space, time and accuracy). In this framework, actual physical stuff is simply the sort of pattern we humans have evolved to deal with in our everyday environment – and should have no greater claim to existence than those entities like quanta, or centres of gravity, or cultural trends invoked by scientists and social scientists to help us get a grip on phenomena too big, or too small, or too complicated for us to grasp with our evolved intuition alone. The only criterion by which a pattern ought to be judged ‘real’ (i.e. having a genuine grip on the world) is whether or not it enables us to make reliable predictions of the phenomena in which we’re interested. It is my feeling that a general recognition of this philosophy across the scientific establishment would go a very long way to promoting the even standing of the disciplines you’re arguing for – and without tipping into relativism.

And that’s a real danger, I think. I can see that you’re at least partly motivated to reject a physicalist science of persons on humanitarian grounds – if you describe a person as a thing, then that paves the way to treat them like a thing. This is a valid concern. But rejecting a science on the basis that it doesn’t conform to your politics – that’s also an extremely dangerous precedent to set in an era when tackling the most important humanitarian cause of our age – dangerous climate change – depends upon the majority accepting science at its word. It is not, in my view, an unproblematically wonderful thing to foster the ideology in which Every Person Has Their View. I’m genuinely worried that the academic culture of postmodernism has left generations of students with a vague entitlement to reject any part of science they just don’t like the look of. Teaching the philosophy of real patterns, meanwhile, would allow us the best of both worlds. We would be able to argue on a principled basis that people, beliefs, desires, intentions, societies and cultures are every bit as real as supposedly ‘physical’ things like brains, neurons, atoms or quarks. And we would be able to do this without throwing the traditional scientific principles of rigour, evidence and testability out of the window.

So where does this leave psychology and neuroscience, then? I would conceive of their relationship as akin to, say, molecular chemistry and evolutionary biology. Though both those sciences are necessary to describe biological life, they do so at vastly different scales of space and time and thereby achieve very different things. Nevertheless, their union – which allows scientists to relate evolutionary traits to the A, C, T and G of molecules on chromosomes – has been of vast scientific importance. With the comparison drawn as such I do not think there is any danger that neuroscience could ever replace psychology – but nevertheless I think it remains an open and interesting question as to the ways in which it might inform it.

UPDATE: Neuroscientist John-Dylan Haynes responds to the discussion of his work:

Haynes AvatarDear Joe,

That is quite enjoyable, thanks!

Actually, in the scientific papers I’ve written you will find a much more nuanced position (see attached paper). The media picture tends to be much more black and white than the actual science. Most importantly, I think now we need 20 years of research on “free decisions”, rather than another swathe of theory papers discussing the handful of available studies.

With best wishes,

FURTHER UPDATE: Philosopher James Ladyman offers the following comments:

James LadymanDear Joe,

Thanks for your message. This is really good stuff. My view roughly is that Guy Saunders has a point but it needs to be separated from the Midgley/Tallis/holistic axis. I very much enjoyed the way you did that and your analysis and argument was excellent in general I thought and I very much agree with your general line. On ‘Every Thing Must Go’ I would stress that the key to the way I think about the relation among the sciences is integration rather than reduction and strong emergence, but with some kind of asymmetry between fundamental physics and the rest.

Take care,

Dr Saunders’ references and suggested further reading:

Bauby, Jean-Dominique (1997) The Diving-bell and the Butterfly (London: Fourth Estate).

Bennett, Max and Hacker, Peter (2003) Philosophical Foundations of Neuroscience (London: Blackwell).

Hofstadter, Douglas and Dennett, Daniel (1981) The Mind’s I (London: Penguin).

Diving Bell and the Butterfly, The (2007). Directed by Julian Schnabel (Pathé).

Gergen, Ken (1973) ‘Social Psychology as History’ in Journal of Personality and Social Psychology, Vol. 26, pp. 309-320.

Harré, Rom (1992) ‘What Is Real in Psychology: A Plea for Persons’ in Theory and Psychology, Vol. 2, No. 2, pp. 153-158.

Harré, Rom (1993) Social Being (Oxford: Blackwell).

Harré, Rom (1998) A Singular Self (London: Sage).

Harré, Rom (2000) ‘Social Construction and Consciousness’ in Max Velmans (ed.) Investigating Phenomenal Consciousness (Philadelphia, PA: John Benjamins Publishing Company).

Harré, Rom and Gillett, Grant (1994) The Discursive Mind (London: Sage).

Midgley, Mary (2012), A reaction to Colin Blakemore on the Today programme, BBC Radio 4, 5 September 2012 – Available here.

Midgley, Mary (2010) ‘Against Humanism’ on the Rationalist Association website – Available here.

Rose, Hilary and Rose, Stephen (2001) Alas, Poor Darwin: Arguments Against Evolutionary Psychology (London: Vintage).

Joe’s references:

Dawkins, Richard (1981), ‘In Defence of Selfish Genes’ in Philosophy, Vol. 56, No. 218, pp. 556-573 – This is Dawkins’ hot-blooded but beautifully clear defence of The Selfish Gene in the wake of Mary Midgley’s criticisms. I came away with a renewed appreciation of the book’s logic. It is downloadable here.

Dennett, Dan (1991), ‘Real Patterns’ in The Journal of Philosophy, Vol. 88, No. 1, pp. 27-51 – Also downloadable from here.

Dennett, Dan (2009), What Does My Body Need Me For? – A speech made at the Bristol Festival of Ideas, downloadable as a podcast from here. This includes an excellent discussion of the advantages that competitive models of the brain may have in explaining phenomena like anxiety and obsessive compulsive disorder.

Ladyman, James & Ross, Don (2009), Every Thing Must Go: Metaphysics Naturalised (Oxford: Oxford University Press) – An attempt to unify James Ladyman’s philosophy of fundamental physics, according to which ‘structure is all there is’, and Don Ross’s account of the rich population of ‘things’ in the special sciences. The linchpin is Dan Dennett’s ‘Real Patterns’. The book is, however, extremely heavy duty, and I can recommend Massimo Pigliucci’s online exegesis (parts one and two) as a good starting point.

Midgley, Mary (2003), The Myths We Live By (London: Routledge) – Available as a download from here. Includes the chapter ‘Thought Is Not Granular’ in which Midgley makes the distinction between ‘stuffs’ and ‘patterns’.

Robinson, EJH, Franks, NR, Ellis, S, Okuda, S & Marshall, JAR (2011) ‘A simple threshold rule is sufficient to explain sophisticated collective decision-making’ in PLoS ONE, Vol. 6, No. 5 – An academic reference for the Horizon clip featuring Nigel Franks and his rock ants.

Wilson, E. O. (1998) Consilience (New York: Knopf) – An inspiring survey of the state of the natural and social sciences, with a view to promoting fruitful cooperation between them. A good antidote to the suspicion that ‘physicalists’ such as Wilson wish to dominate the social sciences by reducing them back to biology and physics. Wilson advocates a spirit of collaboration.

On David Mitchell’s ‘Cloud Atlas’, and why the humanities ought to pay more attention to science…

So the film adaptation of David Mitchell’s ‘Cloud Atlas’ is out this week, and seeing as this is (probably) the last time there’s likely to be a 3-hour, multimillion-dollar advertisement for my undergraduate dissertation, I thought I’d post it up.

The essay’s a few years old now, but I’m still very proud of it. ‘Cloud Atlas’ is essentially an epic meditation on human nature across the ages, and I was determined to show that the controversial science of evolutionary psychology could do a better job of teasing out the novel’s depths than the vacuous postmodernist philosophy so in-vogue across the humanities and social sciences. I discuss figures such as Michel Foucault and Jacques Derrida on the one side, and E. O. Wilson and Noam Chomsky on the other.

Here it is!

Cloud Atlas Dissertation

7,000,000 years in 700 words: A Brief History of the Human Race

‘Guns, Germs and Steel’: The spread of humans around the world.

I normally steer well clear of history as a subject because I’m useless at remembering detail. I prefer patterns. It is for this reason that I’m currently deeply absorbed in Jared Diamond’s Guns, Germs, and Steel: The Fates of Human Societies. As a history book written by a biologist, it brings to the traditional conception of the subject not only a breathtaking multi-million-year perspective, but also – controversially – a scientist’s ambition to perceive order in history’s chaos. Diamond’s theory seeks to explain why some societies – most notably Eurasian ones – got such a head start over others (such as Native Americans, Australian Aboriginals, southern Africans and Pacific Islanders), and it does so largely with recourse to geographical factors like the worldwide distribution of domesticable plant and animal species. Gone are the tales of individual genius, great leaders, or the innate superiority of certain peoples. In their place sits the guiding assumption that everyone, everywhere, is pretty smart – and that getting ahead is just a matter of what resources you have to work with.

I’m not going to recount Diamond’s full argument here. You can read the book for that. Instead, I’d just like to offer a quick summary of chapter one, ‘Up to the Starting Line’, because it provides a wonderful foundation for all the subsequent detail – and for an understanding of humankind in general. What follows is essentially a whirlwind account of the last 7 million years of human development and expansion across the globe.

A note regarding accuracy: Obviously, most of the dates listed here are the best guesses based upon the available evidence (which is sometimes scant). Diamond discusses at length the various controversies surrounding each claim, but for the sake of simplicity I’ve relegated most of these concerns to a brief footnote.

And so…

c. 7,000,000 years ago: A group of African apes splits into four separate populations which eventually evolve into four distinct species: gorillas, the common chimp, the bonobo chimp, and humans.*

c. 4,000,000 – 1,700,000 years ago: Proto-humans evolve in sequence from what we now call ‘Australopithecus africanus’ to ‘Homo habilis’ to ‘Homo erectus’. There are advances in upright posture, body size and brain size (though the latter is still only half as large as modern humans’). Some very crude stone tools are innovated.*

c. 1,000,000 years ago: Homo erectus makes it out of Africa, and all the way to South-East Asia (with the earliest evidence of a human ancestor outside of Africa found in Java).*

c. 500,000 years ago: Proto-humans – now classified as ‘Homo sapiens’ – make it to Europe. Despite sharing our classification, these people were yet to develop the brain size and behaviour patterns of modern humans. The use of fire was innovated.*

c. 500,000 – 40,000 years ago: There are evolutionary divergences between the three main populations of proto-humans in Africa, Asia, and Europe. The most famous amongst these were Europe’s ‘Neanderthals’, who had larger brains, buried their dead, and cared for their sick. The Africans at this time were the most similar to modern humans. Tools and hunting skills remained rudimentary across the board, however.*

c. 50,000 years ago: ‘The Great Leap Forward’ occurs. Modern (‘Cro-Magnon’) man appears. The revolution in human development probably centred on the emergence of language. Its effects included the invention of standardised stone tools, jewellery, bone tools (including fishhooks), harpoons, spears, bows and arrows, rope, houses, sewn clothing, and incredible art (most famously at Lascaux, in France). It seems likely that this ‘Great Leap’ happened first in Africa and then spread to other continents as those more advanced humans displaced their evolutionary counterparts. This is certainly what happened in Europe, with African Cro-Magnons killing or displacing the Neanderthals with little hybridisation. In China and Indonesia the picture is less clear: there is some (controversial) evidence that the indigenous people of those areas have been established for hundreds of thousands of years, suggesting a parallel ‘Great Leap’.*

[According to Andrew Marr’s History of the World, currently airing on the BBC, all modern humans may trace their line of descent back to a single tribe – and a single pregnant woman – who left Africa at this time. This supports the hypothesis that modern humans spread to every corner of the world by force.]

c. 40,000 – 30,000 years ago: Modern humans reach the (then-combined) Australia-New Guinea continent. Much of Indonesia was reachable on foot (due to lower sea levels during the Ice Age), but this is the first indication of watercraft being used. There was the first major extinction of indigenous ‘megafauna’ (including giant kangaroos, rhino-like diprotodonts, marsupial ‘leopards’ and ostrich-like birds). Unlike large African or European mammals, these species had not had a chance to co-evolve with increasingly threatening humans. Instead they were probably completely tame, and likely wiped out by the burgeoning human population.*

c. 20,000 years ago: Humans (in possession of needles, sewn clothing and warm-housing) reach Northern Europe and Siberia. The woolly mammoth and woolly rhino go extinct.*

c. 14,000 years ago (12,000 BC): With the thawing of the previously impassable Canadian ice sheet, humans reach the Americas via Alaska, and spread all the way south as far as Amazonia and Patagonia within 1,000 years. Native elephants, horses, lions, cheetahs, camels and giant sloths go extinct.*

c. 10,500 – 6,000 years ago (8,500 – 4,000 BC): Mediterranean peoples reach the islands of Crete, Cyprus, Corsica and Sardinia.*

c. 4000 years ago (2,000 BC): Native Americans (latterly Inuits) reach the High Arctic.*

c. 3,200 – 1,500 years ago (1,200 BC – 500 AD): One group of sea-faring New Guineans (from the Bismarck Archipelago) spread out across the hundreds of Polynesian and Micronesian islands (including New Zealand, Tonga and Hawaii).*

c. 1,700 – 1,200 years ago (300 AD – 800 AD): Indonesians (rather than Africans) discover Madagascar by canoe across the Indian Ocean.*

c. 1,100 – 1000 years ago (900 AD – 1000 AD): Norse peoples reach Iceland (though they may have been preceded by Scottish or Irish Celts).*

c. 700 years ago onward (1,300 AD – ): European explorers discover the last remaining islands in the remote Atlantic and Indian Oceans (such as the Azores and Seychelles), plus Antarctica.*


* Probably.

Love, science, and meeting Steven Pinker

See bottom of post for Steven Pinker’s response.

Reading popular science works by rationalist, neo-Enlightenment thinkers like Richard Dawkins, Daniel Dennett, E. O. Wilson and Steven Pinker is something that has a powerful emotional effect upon me. It’s not a purely cerebral thing – it also fills me with a great sensation of understanding, and certainty, and control.

When dealing with a bad break-up earlier this year, it occurred to me how valuable it would be to me to read what these men might have to say about ‘love’ as a phenomenon. What I wanted was to extend those same feelings of certainty and control into an area of my life that had recently made me feel so fraught and powerless. And yet amongst all the talk of brains, evolution and consciousness in their literature, there is next to no mention of love.

I had the briefest of chances to mention this thought to Steven Pinker himself last night, after a talk at the Bristol Festival of Ideas. He laughed knowingly, and pointed me to a 3-page extract in his book, How the Mind Works, which I’ve not found yet, though I suspect corresponds to this warbly Youtube clip. It’s a brief game-theoretical analysis that argues that because it is almost always irrational to make a lasting commitment to another person (who knows – you might meet someone better?), the sensation of love must be dramatically irrational in order for people to pair up at all.

Still… only 3 pages, in a 700-page book, amongst shelves and shelves of tomes by Pinker and his ilk. Why is it that the single most important concern of art and fiction is so neglected by these men of science? Why don’t Dennett or Dawkins or Wilson say anything at all? Pinker’s suggestion, at the autograph table last night, was that they are philosophers and biologists rather than psychologists – but I found this notion unsatisfying. Don’t biology and philosophy have very important things to say about love? And since when have any of the above authors proved shy about crossing disciplinary boundaries in order to weave sweeping syntheses on questions of religion, society, or the origin of human instincts?

Of course it may be that love is either too simple or too fuzzy a concept to waste much ink on, but my suspicion is that perhaps Pinker’s bite-sized theory gets at something quite important. If it is true that love must be ‘irrational’ (i.e. fiercely emotional) in order to make sense in evolutionary terms, then perhaps it is that fact that makes even scientists a little queasy about subjecting it to too much cold analysis. Or perhaps they’re just afraid of offending their partners with too much talk of cost-benefit trade-offs. Either way, it seems to me that explaining love may be a taboo even greater than denouncing God.

UPDATE: Steven Pinker responds via email:

Thanks, Joe! The two main discussions of love in my books can be found in the section “Fools for Love” in chapter 6 of How the Mind Works (with a bit more in the section “Men and Women” in the following chapter), and the conclusion to the chapter “The Many Roots of Our Suffering” in The Blank Slate.


Had better get reading then!

On Universal Darwinism, the nature of foresight, and the virtues of flailing aimlessly…

I first saw this extract – culled from the BBC’s ‘Life’ series – during my final year at university, and to this day I honestly cannot help but gasp for breath on every re-watch. Mesmerising, isn’t it?

Placing sheer wonder-value to one side, however, I want to focus in particular on the image of the passionflower tentacle ‘flailing aimlessly’ in search of a hold, and to use that coiling limb to pull together my thoughts on a variety of topics I’ve encountered in my science-based reading over the past couple of years. These include things like evolution, consciousness, cosmology, creativity, culture and the history of ideas. But first: back to wonder.

The illusion of foresight in the natural world

What is it about seeing plant-life through the medium of time-lapse that fascinates us so? I rather think the key to answering that lies in the type of language employed by David Attenborough in the above clip: for the young plants too little light means death; this poses them a problem; though they need not be passive. Even before the comparison with ‘fingertips searching for a hold’ is made, it’s clear that the narration has slipped – quite understandably – into anthropomorphism. Following the American philosopher Daniel Dennett, we might say that Attenborough is adopting ‘the intentional stance’: a cognitive filter (normally reserved for other human beings) through which the plants are rendered as wanters, believers, strategists.

‘They’re alive!’ yells a voice in our heads, ‘And they know what they’re doing!’

Is this an error? I’ll come back to that question, but for now it is enough to say that of course plants are alive – but that they certainly don’t foresee their own deaths, nor strategise to avoid them, as Attenborough (perhaps carelessly) implies. They’re blind, mechanistic protein-processes, responding only to their genetic recipes and basic cues from their environment. They may look like they know what they’re doing, but that’s only because they’ve inherited their lifecycles from a billion-year chain of ancestors, each of which enjoyed at least a base-level of reproductive fitness bequeathed them by their parents, and every now and then a particularly potent genetic reshuffle (or perhaps a novel mutation) that gave them an incremental edge over their counterparts.

But there is another, perhaps more important, kind of fortune enjoyed by every organism that belongs to a successful lineage of plant or indeed animal life, and that’s this: Not being one of the missteps. It’s a cheering thought that not a single one of our direct ancestors ever failed to pass on their genes; to recognise that each of us is the latest in an unbroken chain of successful procreators that stretches right back to some great granddaddy bacterium. But (and here’s the nub) this vision of a perfect line, or lines, of descent through the generations is only one half of the great evolutionary narrative: in every generation of every species, there are always creepers with pads not quite sticky enough (or too sticky), or giraffes with necks not quite long enough (or too long), or those organisms of any kind that suffer deleterious rather than adaptive mutations. Each of our successful ancestors will likely have had brothers and sisters who did not quite make the grade. Richard Dawkins has referred to natural selection as ‘The Blind Watchmaker’ – and it is true to say that its workshop floor is littered with the corpses of the Not-Quite-Rights.

As distressing as that might sound, however, we ought not to bemoan it – because without a little trial and error there would be no natural selection at all. Or, to grasp my theme more fully, I should say that evolution – just like the passionflower – has no foresight, and must flail aimlessly before striking upon success.

The temptations of teleology and the evolution of scientific knowledge

In many ways Darwin’s elimination of foresight from biology strikes me as analogous to a similar reversal in physics. Right up until the scientific revolution of the late sixteenth and seventeenth centuries, the prevailing view of the physical world in Western learning was Aristotle’s, in which all matter – divided neatly into the four elements – was said to tend toward its God-given place: ‘earth’ (the very stuff of our planet) sat in the middle of creation, about which rested layers of water, air and fire in that order. It’s still a readily attractive worldview in many ways: the ground, the sea, the sky, and the sun are all put in their proper places – and simple physical processes like a stone sinking in water, or the upward flickering of an open fire, can be explained in terms of the elements attempting to reach their ‘natural’ positions.

Nevertheless, this kind of ‘teleological’ explanation, in which final destinations are given prominence over prior causes, is a dying breed in most sciences. One still finds it in the ‘equilibrium’ models of economists (according to which financial systems tend toward a perfectly balanced state), or in James Lovelock’s new-agey Gaia theory (in which the planetary ecosystem does much the same thing) – but in general scientists these days would rather uncover the lower level mechanisms that do the pushing than posit the existence of a never-actually-observed ideal that somehow pulls the phenomena toward it. (What balanced economy? What balanced ecosystem? Doesn’t anybody watch the news?).

As such theories die off, and new ones take their place, it has tempted philosophers of science to account for scientific progress itself as a kind of evolutionary process. Karl Popper is the touchstone here, but I’m going to attempt my own account. The evolutionary view of science is one in which:

  1. every intellectual innovation of every working scientist (good or bad) is to be considered a comparatively minor mutation in the larger body of ideas inherited from the previous generation, and
  2. the practice of science (with its attendant insistence on logical argument and empirical support) is understood to act as the selection pressure by which the useful mutations are singled out and perpetuated (at the expense of the ineffectual ones that simply wither away).

What I find especially appealing about this argument is the challenge it poses to that popular history of science in which a few intellectual juggernauts (the Darwins, Newtons, Einsteins et al) are portrayed as having cast aside the prevailing intellectual myopia of their day and paved the way into the future with their great genius. The challenge is two-fold. Firstly, the evolutionary view of science points to the fact that even the most revolutionary ideas build upon an existing inherited framework. Einstein himself once wrote of his forebear Ernst Mach,

[His writing] clearly recognised the weak spots in Classical Mechanics and was not far from requiring a General Theory of Relativity … it is not improbable that Mach would have come across Relativity if, at the time when he was in his prime, physicists had concerned themselves with the significance of the constancy of the speed of light.

It is worth remembering, likewise, poor old Alfred Russel Wallace – the second man to have struck upon the theory of natural selection, independently of Darwin, and nigh-on simultaneously. The temporal proximity of their discoveries can only be explained, to my mind, by the two men’s existence in a shared intellectual culture – one that made the great leap of evolutionary theory a small step capable of being taken by anybody with sufficient immersion in the debates of the time (and at least a modest sense of iconoclasm).

The second challenge posed by evolutionary science to the traditional history of scientific genius has to do with the notion of luck: if all theoretical advances are mere chance mutations, then progress may be less a matter of great individuals being able to ‘see’ the solution to a problem, and more to do with an entire crowd of scientists each throwing suggestions at it, and each hoping that theirs is the one that sticks. As Popper puts it in Conjectures and Refutations,

So my answer to the questions ‘How do you know? What is the source or the basis of your assertion? What observations have led you to it?’ would be: ‘I do not know: my assertion was merely a guess. Never mind the source … if you are interested in the problem which I tried to solve by my tentative assertion, you may help me by criticizing it as severely as you can…

Perhaps it is the case that science and great scientists – just like the passionflower – have no foresight, and must flail aimlessly before striking upon success.

Universal Darwinism and the ‘survival of the stable’

The application of Darwinian logic to subjects other than biology is known as Universal Darwinism, and it’s a difficult game to stop playing once you’ve grasped the rules. Perhaps the most famous example is Richard Dawkins’ ‘meme theory’, first propounded in The Selfish Gene, in which the entirety of human culture is recast as the history of the differential survival of self-replicating ideas, or ‘memes’. The overall picture is akin to the account of science I’ve just given – only with a rather less formal set of selection pressures. Scientific ideas are whittled down and developed over the course of generations under the weight of questions like ‘does-it-work-as-an-explanation?’ and ‘what-testable-predictions-does-it-make?’. Culture, meanwhile, asks only ‘is-it-catchy?’. Catchy songs, paintings, practices, religions – these spread, while boring ones die out.

I’m simplifying – but meme theory is an effective meme in itself, and so there’s little reason to go into the detail here. Instead, I’d like to take the opportunity to quote another passage from The Selfish Gene (one of my favourites):

Darwin’s ‘survival of the fittest’ is really a special case of a more general law of ‘survival of the stable’. The universe is populated by stable things. A stable thing is [that which] is permanent enough or common enough to deserve a name.

I find this simple observation incredibly profound, and a powerful inoculation against the counter-intuitive unease that evolutionary arguments often provoke. It just seems crazy – does it not? – to suggest that those creepers don’t know what they’re doing. Or that life on earth, the General Theory of Relativity, or Catholicism, weren’t designed by conscious agents. Even if you accept the logic of the evolutionary argument, it’s still hard to really believe it sometimes. Nevertheless, if one imagines (contra Dawkins) a universe populated exclusively by unstable things – organisms that didn’t reliably survive and reproduce, sciences that answered no questions, melodies that no one could hum – then one can quickly appreciate that whenever this ceaseless chaos of nonsense, discordance and twisted limbs just so happened to produce something capable of lasting or perpetuating itself, then that ‘thing’ would soon become ubiquitous. Or at least catch our attention like nothing else. This is why the universe – perfectly capable of chaos – is populated for the most part by stable things.

I’ve even heard it suggested that the laws of physics themselves may have ‘evolved’ by such a process. It is quite easy to imagine that at one time nothing in the universe exhibited any regularity whatsoever; that no one particle colliding with any other could be relied upon to bounce off in predictable directions, or even last long enough to come into collision in the first place. Perhaps they just popped in and out of existence with genuine randomness. Perhaps they weren’t even particles, and so forth. Yet, from amongst this chaos, whenever phenomena did emerge – by accident – that happened to have the properties of lasting, and behaving in regular ways, then this would slowly become the norm. The central idea is that chaos must, by its nature, chance upon order sooner or later.
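This ‘survival of the stable’ is easy to caricature in a few lines of code. The toy simulation below is my own illustration (nothing here comes from The Selfish Gene): a population of ‘things’, each with a random chance of persisting from one moment to the next, is simply left alone. Nothing selects for stability directly – unstable things just stop existing.

```python
import random

random.seed(1)

# Each 'thing' is just a number: its probability of persisting
# from one moment to the next.
population = [random.random() for _ in range(10000)]

# Let time pass. Unstable things simply cease to exist.
for step in range(50):
    population = [p for p in population if random.random() < p]

# The survivors are overwhelmingly the most stable things.
average_persistence = sum(population) / len(population)
print(len(population), round(average_persistence, 2))
```

With this seed a couple of hundred survivors remain out of ten thousand, and their average persistence sits close to 1.0: the model universe, perfectly capable of chaos, ends up populated by stable things.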

(I should point out that when I say I’ve ‘heard this suggested’, that’s quite literally true. I owe it to my friend Steve McKellar, who done told me it down the pub. You can find some of Steve’s fantastic evolutionary programming and artwork at

Conclusion: Dancing children and the everyday evolution of ourselves

Time for one last video:

When I first saw this documentary, I was putting together a DVD of home movie footage taken by my parents in the early nineties. What astonished me was the similarity between these digitally-evolved walkers and the sight of my one-year-old sister – emerging through the static – as she learnt to walk. Time and time again she would struggle to her feet, take a gamble on a few paces, and suffer the inevitable tumble. And yet, by the end of the year encompassed by the grimy old VHS tape, she’d learnt enough to be dancing with my dad and cousins on Christmas day.

It’s a very beautiful and moving sight – but one that got me thinking: ‘hey, is learning an evolutionary process too?’. That is, do we learn to walk (or talk, or master cryptic crosswords) by ‘allowing’ our bodies and brains to make a thousand mistakes amongst which the accidentally successful ones get reinforced by the rewards they bring us? The following passage from Dennett (on neuron growth) seems to chime with this idea:

It has been recognised for years that the human genome, large as it is, is much too small to specify (in its gene recipes) all the connections that are formed between neurons. What happens is that the genes specify processes that set in motion huge population growths of neurons – many more neurons than our brains will eventually use – and these neurons send out exploratory branches at random (at pseudo-random, of course), and many of these happen to connect to  other neurons in ways that are detectably useful (detectable by the mindless process of brain-pruning). These winning connections tend to survive, while the losing connections die, to be dismantled so that their parts can be recycled in the next generation of hopeful neuron growths a few days later.

And so it seems that even our brains are passionflower tentacles, sending out lightning fork ‘exploratory branches’ until we grasp a thought or habit toward which we may haul ourselves. Is this the recipe for all thought, all creativity? I like the idea that it might be. Perhaps, then, it is rather unfair to suggest that – unlike us – plants ‘don’t know what they’re doing’. Of course the speed and plasticity of our neuron growth far outstrips the once-in-a-lifetime ascent toward the sun exhibited by the creepers, but the essential mechanism remains the same. We should not speak of plants’ – or science’s – ‘lack of foresight’ when what we perceive as foresight in ourselves is precisely their ability to cast around in search of a hold.
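The generate-and-prune recipe can itself be sketched in a dozen lines. This is a deliberately crude toy of my own devising (the ‘reward’ function and all the numbers are invented for illustration): a ‘behaviour’ improves by nothing but blind variation plus selective retention of whatever happens to work.

```python
import random

random.seed(0)

target = 0.7     # the 'hold' being groped for (hypothetical)
behaviour = 0.0  # the current habit

def reward(b):
    # Reinforcement: the closer the behaviour is to the target, the better.
    return -abs(b - target)

for attempt in range(1000):
    tweak = behaviour + random.gauss(0, 0.05)  # a random exploratory branch
    if reward(tweak) > reward(behaviour):      # reinforced only if it helps...
        behaviour = tweak                      # ...otherwise simply discarded

print(round(behaviour, 2))
```

No step in the loop ‘sees’ the target; yet with this seed the behaviour ends up within a whisker of it – success by flailing.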

On Inductivism and Falsificationism: Why do scientists value evidence?

Why do scientists value evidence? The question may appear absurd, and the answer blindingly obvious: because evidence demonstrates the truth of theories. But is this really true?

I’m currently reading James Ladyman’s textbook, ‘Understanding Philosophy of Science’, which outlines a number of debates on the nature of evidence and the practice of science in general. In this post I’ll relate some highlights gleaned from the chapters on ‘inductivism’ and ‘falsificationism’ (friendly definitions to follow, I promise).

A popular conception of science sees scientists making a large number of observations regarding a particular phenomenon and then proceeding to make generalisations that account for every instance. For example, if time and time again it is observed that metals of every type expand upon heating it would seem sensible to conclude that the statement ‘all metals expand when heated’ is true. This extraction of generalisations from evidence is called ‘induction’. ‘Inductivism’ is the view that science may be defined by this method.

There is, however, a long-standing debate within the philosophy of science about the validity of induction and inductivism. The various objections to evidence-based generalisations arise from the following, comparatively simple argument: No amount of positive evidence can ensure against the eventuality that a negative instance may yet be found. For instance, it is conceivable that a metal may yet be discovered that does not expand upon heating. Extending the argument into the realm of metaphysics, the eighteenth-century Scottish philosopher David Hume goes so far as to suggest that all scientific generalisations are based upon the assumption that the future will resemble the past, and that we have no rational reason to believe that it will. So even if we had subjected all metal in the universe to heating and found that every sample conformed to our expectations of expansion, we would still – according to Hume – have no reason to extract any kind of general law. Perhaps the next time we hold a lump of copper over a Bunsen burner it might shrink. This certainly seems very counter-intuitive – but drawing attention to the nature of our intuitions is Hume’s intention. We only believe that the future will resemble the past because it has always done so in the past. Though this certainly seems a very good assumption, Hume’s point is that we must admit that it is an assumption in that it cannot conceivably be justified by evidence.

What such objections to induction have in common is the challenge they pose to science. If it is true that a theory may at any time be disproved by a negative instance (whether or not this involves the regularity of the universe unravelling, or simply new evidence coming to light), then the ability of scientists to pronounce with certainty upon any subject appears fatally undermined.

Perhaps the most successful rebuttal to this argument is Karl Popper’s ‘falsificationism’. Popper, a twentieth-century Austrian-British philosopher, sought to undermine, rather than solve, the problem of induction by suggesting that science is never about proving theories to be true; quite the reverse. Science, according to Popper, should busy itself with the falsification of theories; with the whittling down of the available possibilities. Truth is only ever to be approached and never claimed absolutely. The best theories are those battle-hardened formulations that have survived whatever tests have thus far been devised. They are not to be considered correct; merely the least-wrong.

An important consequence of this argument is that scientific theories must be potentially falsifiable. If one suggests a theory that could not be proven wrong under any circumstances, then no debate can take place and truth is reduced to a matter of assertion. Popper was especially scathing towards Marxism and psychoanalysis on these grounds. If, for instance, a government made some efforts to look after its nation’s poor, then a die-hard Marxist could simply explain away this apparent contradiction of their favoured theory as an attempt by the ruling elite to thwart the oncoming proletariat revolution. Likewise, those expressing criticisms of psychoanalysis may be dismissed by its practitioners as suffering from deep-rooted repression.

Critics of falsificationism have pointed out that scientists do, in some cases, appear to believe things for positive reasons. Many successful theories posit the existence of things that cannot be directly observed; atoms, black holes, DNA and so forth. According to Popper, such entities are merely conceptual devices employed by the least-wrong theories to make predictions. A true adherent of falsificationism cannot assert the literal truth of their existence – and yet many scientists do. Speaking personally, I think this is splitting hairs: it is possible that in conversations, especially with journalists, scientists may simply use the word ‘exist’ as shorthand for ‘is a reasonable inference of our least-wrong theory’.

Another criticism of falsificationism is that some scientific principles are not falsifiable: their apparent violation would send scientists seeking any explanation other than the refutation of the theory. One such example is the principle of the conservation of energy, which states that energy may take different forms but is never created or destroyed. If a system is observed to be creating energy from nothing, scientists would rather question the accuracy of their observations, or postulate the existence of some non-observable energy source interfering with their measurement devices. In such instances, scientists do seem to value the sheer weight of positive confirmations of a theory over one negative instance. The danger here is that such scientists are indistinguishable from self-deluding Marxists. I would argue, however, that it is certainly sensible to make sure that all options have been considered before abandoning a long-standing principle. One may remain open-minded to the prospect of its refutation at the same time as exploring the possibilities for its continued relevance.

On Imaginary Numbers: How ‘unreal’ concepts may help us understand ‘real’ phenomena

Something of a place-holder entry this time, I’m afraid. With more questions than answers.

First question: What are imaginary numbers? There’s a fascinating introduction to the subject available as part of the BBC’s In Our Time archive, but I’ll précis the basics for you now:

An imaginary number is one that gives a negative result when multiplied by itself. The square root of minus one – also known as i – is an example. The most astonishing thing about imaginary numbers (though perhaps their name ought to have given us fair warning) is that they don’t ‘exist’ in the real world. One cannot count or measure with them. And yet – when embedded in equations – they have proven extraordinarily helpful in providing verifiably accurate solutions to real world problems. Imaginary numbers are crucial conceptual tools in contemporary scientific models of electromagnetism, fluid dynamics and quantum mechanics for example.
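Python happens to have complex arithmetic built in, so the astonishing fact is easy to see for yourself. A quick illustration of my own (not from the In Our Time programme): an equation with no real solutions is solved by ‘unreal’ numbers, and plugging a solution back in yields a perfectly real zero.

```python
import cmath

i = complex(0, 1)  # Python writes the imaginary unit as 1j
print(i * i)       # (-1+0j): i squared is minus one

# x^2 + 2x + 5 = 0 has no real roots, but it has two complex ones...
a, b, c = 1, 2, 5
root = (-b + cmath.sqrt(b**2 - 4*a*c)) / (2*a)
print(root)                  # (-1+2j)
print(root**2 + 2*root + 5)  # 0j: the 'unreal' root genuinely satisfies it

# Euler's formula: a complex exponential yields plain real trigonometry.
print(cmath.exp(1j * cmath.pi).real)  # -1.0
```

The imaginary parts cancel along the way, leaving answers one could count or measure with – which is precisely the trick physicists and engineers exploit.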

How can this possibly be? How can something that doesn’t exist describe something that does?

A helpful analogy – though not really a solution – can be found in negative numbers. After all, negatives don’t really ‘exist’ either. And that doesn’t forestall their use in equations that come out with positive results. Imagine a healthy balance sheet. So long as your income (modelled by ‘real’ positive numbers) outweighs your debts (modelled by ‘conceptual’ negative numbers), then your bottom line will be a ‘real’ number insofar as you could convert it into tangible purchases if you so wished. It doesn’t matter that you’ve used unreal negative numbers to get there. The only difference between negative numbers and imaginary numbers, then, is that the former may be attached to an intuitively graspable concept: debt.

Perhaps a more comprehensive explanation could be found by going one stage further and admitting that even positive numbers are, in a sense, unreal. Mathematical concepts are just like nouns in any spoken language: they divide a continuous universe up into discrete chunks that may be talked about. That some mathematical concepts (such as positive numbers) ‘make more sense’ to us as human beings is interesting, but this should have no bearing upon whether they (or any other concept) should be considered ‘real’. All concepts are representational, their definitions man-made. The question is whether they are useful, by which I mean that they aid our ability to comprehend and/or predict observable phenomena.

I apologise if I’m bordering on incomprehensibility here. This idea that scientific concepts ought to be judged on their usefulness rather than their essence is one that has been intriguing me for a while now. I’m going to try and write about it with more clarity (and perhaps some nice pictures) before too long.