Summary and Review: ‘Philosophy and the Sciences’ (Manchester, May 2015)
I had a fantastic time last week attending a conference entitled ‘Philosophy and the Sciences’ organised by Helen Beebee and Michael Rush at the University of Manchester. Here’s the blurb:
Scientists sometimes express the view that science, in principle, has all the answers to the important questions about the universe and our place in it, and that philosophy therefore no longer has anything to contribute. In this public conference we consider ways in which philosophy and various sciences can usefully engage with one another.
The speakers included Raymond Tallis, Barry C. Smith, Patrick Haggard, John Dupré, Hans Westerhoff, Eleanor Knox, Fay Dowker and James Ladyman. Good crowd, I thought. What follows is a rough summary of the proceedings, with one or two personal interjections. Many thanks to Bristol University’s Centre for Science and Philosophy for inviting me to blog about the event.
Opening remarks: Raymond Tallis
Raymond Tallis set the scene for the ensuing debates between scientists and philosophers with the now infamous quote from Stephen Hawking’s The Grand Design (2010):
How can we understand the world in which we find ourselves? How does the universe behave? What is the nature of reality? Where did all this come from? Did the universe need a creator? … Traditionally these are questions for philosophy, but philosophy is dead. Philosophy has not kept up with modern developments in science, particularly physics. Scientists have become the bearers of the torch of discovery in our quest for knowledge.
Ouch. Is philosophy really “an air balloon in a jet age”? asked Tallis. Few of the philosophers in the room were likely to assent, though opinions differed as to what exactly philosophy ought to contribute. Tallis’s own conception of the philosopher’s purview centred on notions of consciousness and human experience. He illustrated this view with the following quote from Richard Feynman:
The next great awakening of human intellect may well produce a method of understanding the qualitative content of equations. Today, we cannot see whether Schrödinger’s equation contains frogs, musical composers, or morality – or whether it does not.
In addition to frogs, composers and morality, Tallis also pointed towards the seeming absence of consciousness, free will, and the flow of time from the austere worldview of mathematical physics. Lee Smolin’s recent collaboration with the philosopher Roberto Mangabeira Unger in The Singular Universe (2015) was held up as a paradigmatic example of “natural philosophy” i.e. a return to the kind of inquiry practiced before science and philosophy became separate disciplines, combining rigorous empirical investigation with the philosopher’s ambition to paint a broader and more meaningful worldview.
Philosophy and neuroscience: Patrick Haggard (the neuroscientist) vs. Barry C. Smith (the philosopher of mind)
Taking against voguish studies on the neuroscience of love and other “high end” states of consciousness, Patrick Haggard began his talk by urging the audience to consider the “ground level” of experience in terms of “labelled lines” in the brain. These are neuronal pathways that correlate with specific types of sensation. Haggard noted that the sense of touch subdivides into two sets of labelled lines; one carrying information about vibrations on the skin, the other about sustained pressure (like someone standing on your toe). Haggard’s neuroscientific work concerns “molecular hijacking” i.e. inducing familiar sensations by unfamiliar means, for example applying Sichuan pepper to the lips. This “hijacks” the labelled lines for vibration, and produces a tingling sensation. Regarding the philosophy of mind, Haggard was dismissive of naïve concepts such as “qualia” or “sense data”. He claimed that his own reductive methodology – breaking complex experiences down into their constituent sensations that correlate with neuronal pathways – was a step-by-step means of solving the big mysteries that philosophers try and fail to solve in one fell swoop.
Haggard’s view found resounding support in Barry Smith’s presentation. Smith was highly critical of the explanatory “stopping points” introduced by those philosophers of mind who would demand that neuroscience explain, all at once, “the smell of a rose” – once again because seemingly simple sensations tend to stem from the activation of many neuronal channels simultaneously. Smith pointed out that many “tastes” are in fact tastes, textures and smells combined – and that we can learn to appreciate this from a first-person perspective by eating strawberries with a nose-clip on (for instance). Smith also noted that different cultures parse sensory experiences in more or less “accurate” ways; the Japanese, for instance, recognise umami as the fifth basic type of taste (common to peas, tomatoes, parmesan cheese and seafood). This corresponds in a very specific way to one of the taste receptors on the tongue – and yet Westerners find it harder to pick out without training.
Much of the resulting discussion with the audience focussed on whether Haggard and Smith had side-stepped the “hard problem” of consciousness. One conference attendee pressed the pair on whether consciousness correlated with brain events, was caused by brain events or was identical with brain events (with the former two presumably requiring some spooky “extra” substance). Smith responded by saying that neuroscience has demonstrated a very fine-grained correlation, time and time again – with the other two options left open as philosophical possibilities. He leaned towards identity theory. Haggard expressed optimism about a purely operational theory of consciousness; he admitted that this might ultimately bypass the question of “what consciousness is”, but seemed to think it wouldn’t.
Thomas Nagel’s question “What is it like to be a bat?” was given a thorough dressing down by Smith. “What is it like…?” questions make sense in the context of particular experiences, he argued e.g. “What is the tingling on your tongue like? It’s like licking a battery”. But they make little sense when asked about experience itself. “What is it like to experience that tingling? It’s not like anything”.
On the relationship between science and philosophy, Smith suggested that science doesn’t always answer philosophical questions, but more often than not it helps get the phenomena in view, and enables us to ask better, more informed questions about the world.
Philosophy and biology: Hans Westerhoff (the biologist) vs. John Dupré (the philosopher of biology)
Hans Westerhoff began with a brief presentation on systems biology entitled “Irreducible complexity and computable emergence”. He noted that biological systems are irreducibly complex in the sense that they require a minimum number of genes to get going. But their emergence from constituent parts (such as genes) remains completely computable, he claimed. Westerhoff noted that this makes him an adherent of “weak emergence” in philosophers’ parlance. As a consequence of this view, Westerhoff noted that doctors may soon be able to utilise computer simulations of their patients (generated from their genetic profile) in order to administer drug treatments with greater efficacy.
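Westerhoff’s notion of computable (“weak”) emergence can be sketched in a few lines. The toy model below is my own illustration, not one of Westerhoff’s: low-level per-cell birth and death rules are stepped forward numerically, and a population-level carrying capacity emerges that is written into neither rule.

```python
# Toy illustration of "weak emergence" (my invention, not Westerhoff's):
# a population-level property computed entirely from low-level rules.

def simulate(birth_rate=0.5, death_rate=0.0005, n0=10, dt=0.01, steps=4000):
    """Euler-integrate dn/dt = birth_rate*n - death_rate*n**2."""
    n = float(n0)
    for _ in range(steps):
        n += dt * (birth_rate * n - death_rate * n * n)
    return n

# The carrying capacity birth_rate/death_rate = 1000 "emerges" from the
# low-level rates, even though it appears in neither rule explicitly.
print(round(simulate()))  # → 1000
```

The higher-level behaviour is fully determined by, and computable from, the lower-level dynamics – which is roughly what “weak emergence” asserts, scaled up enormously in Westerhoff’s actual patient simulations.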
John Dupré was critical of Westerhoff’s reductionistic ambition on the grounds that there are no “fundamental particles” in biology. All biological entities – genes, cells, organisms and so on – operate in response to the larger systems that they are supposed to explain from the bottom up; top-down approaches are often more realistic for this reason. He complained that biologists tend to postulate “closed systems” with neat boundaries – but there are no such systems. Cell membranes, for instance, are highly active tissues that constantly maintain inflows and outflows of material in response to their environment. Even individual humans must be studied in their social contexts; it makes little sense to conduct psychology on individuals in isolation. On the question of “supervenience” (the idea that the lower-level properties of a system determine its higher-level properties), Dupré argued that since it is impossible to fully separate any lower-level system from its place in the higher-level system, the only way to prove that the lower levels are doing the work would be to study the universe as a whole. In the absence of that (impossible) investigation, he insisted, supervenience is rendered idle metaphysical speculation.
In the debate that followed, Westerhoff denied that biologists postulate closed systems; their models take full account of the inflows and outflows. Taking the example of an E. coli bacterium in a test tube, he claimed that biologists have a “complete” model that allows perfect computation of the consequences given any environmental input. Dupré countered that E. coli swarm in hordes in digestive systems, often exchanging DNA; no study of an individual bacterium could take account of the indefinite range of environmental interactions. Westerhoff replied that the number of receptors on an individual bacterium was limited and that this represented a manageable framework within which it was possible to understand increasingly complex combinations of stimuli.
Dupré also took issue with Westerhoff’s conception of a medical patient’s virtual doppelganger. A genetic portrait would not be sufficient to model an individual’s response to drugs because it would fail to take account of their life history, and the effects this would have had on their bodies, including epigenetic adaptation. There was some laughter in the audience when Westerhoff was asked about the ethical implications of “experimenting” on our virtual selves; “That’s your problem!” he replied.
Attempting to find common ground between Westerhoff and Dupré, Raymond Tallis suggested that since modelling the entire universe was impossible, the reductionist program was a necessary evil; we have to subdivide in order to understand. On a similar note, Barry Smith suggested to Dupré that broad appeals to “context” were empty without some notion of mechanism. I had to agree with both Tallis and Smith. I take Dupré’s point that the world is incredibly complicated and interactive – but I struggled (as Westerhoff did) to find any practical worth in his broad appeals to context and complexity. Science just is the search for comparatively simple models of the world, in my view, and reductionism makes inroads into understanding complexity by identifying individual parts and holding external factors steady in order to get a handle on their role. Dupré’s demand that scientists “take complexity seriously” echoed, for me, the explanatory “stopping points” that Barry Smith had railed against in the previous debate. It seems to me that one explains complex phenomena step-by-step, and not by brute insistence on the reality of complexity itself.
Philosophy and physics: Fay Dowker (the physicist) and Eleanor Knox (the philosopher of physics)
Fay Dowker is a physicist working on quantum gravity, and her talk consisted in a cogent introduction to her discipline. In fundamental physics, Dowker explained, General Relativity represents our best account of gravity, and quantum mechanics represents our best account of matter. The project of quantum gravity is to unite the two. According to General Relativity there is a two-way relationship between matter and spacetime; spacetime bends and warps according to the distribution of matter within it, and matter moves according to the curvature of spacetime (and that’s what we call gravity). The theory works well to predict, in a fully deterministic way, how configurations of matter change in spacetime. In quantum mechanics, however, the configurations of matter change in probabilistic ways. Thus far we have no theory that describes how spacetime responds to quantum mechanical matter. Dowker’s particular approach to the problem invokes a “granular” rather than continuous account of reality; she believes the universe consists in “atoms” of spacetime. Her motivation for pursuing this tack follows from the theoretical “discovery” by Stephen Hawking and others that black holes have a temperature and an entropy (proportional to the area of the event horizon as measured in Planck units). Since entropy is traditionally conceived of as an emergent property of thousands of molecules, this immediately suggests that event horizons have a granular structure.
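For readers who want the formulas behind this claim, the standard Bekenstein–Hawking results are as follows (these are textbook statements, not specifics from Dowker’s talk):

```latex
% Bekenstein–Hawking entropy: one quarter of the horizon area A,
% measured in units of the Planck length \ell_P.
S_{\mathrm{BH}} = \frac{k_B A}{4 \ell_P^{2}},
\qquad
\ell_P = \sqrt{\frac{\hbar G}{c^{3}}}

% Hawking temperature of a black hole of mass M:
% larger black holes are colder.
T_H = \frac{\hbar c^{3}}{8 \pi G M k_B}
```

The entropy scales with the horizon’s area counted in Planck-sized cells – precisely the feature that suggests, to Dowker and others, an underlying granular structure.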
Eleanor Knox opened with some comments about the relationship between physics and philosophy. The philosophy of physics has nothing to do with age-old philosophical problems, she noted. It consists in trying to think through specific paradoxes thrown up by contemporary physics. Within the physics community the very same activity gets called “foundations of physics”. Typical questions include “What is spacetime?”, “How do theories relate to one another?”, and “Are the differences between General Relativity and quantum mechanics purely interpretational, or are there predictive payoffs to be had in combining the two?”. Knox appeared to suggest that quantum gravity is much more than an interpretational exercise, and pushes into “crucial” questions for physics. With regard to Dowker’s “granular” spacetime, however, she called for caution and clarity; applying the language of thermodynamics (traditionally conceived of in terms of the aggregate properties of collections of atoms) to the atom-free event horizons of black holes represents an enormous conceptual leap that may well be justified but requires serious analysis. Is thermodynamics an emergent property of gasses, or is it a fundamental property of spacetime? That’s a big difference, she stressed.
Attempting to justify the leap identified by Knox, Dowker recounted the reasoning which led Hawking and his collaborators to speak of black holes in thermodynamic terms. A 1973 paper by Hawking, James Bardeen and Brandon Carter entitled “The Four Laws of Black Hole Mechanics” noted the astonishing similarity between the mathematics of black holes and the laws of thermodynamics. The authors initially believed this to be a coincidence, but when others pointed out that the event horizon of a black hole must have an entropy proportional to its area in Planck units, the laws of thermodynamics became literally true of black holes. Moreover, when Hawking introduced quantum mechanics into his model of black holes, he discovered to his surprise that they do indeed have a temperature.
Keen to press the point about the inadequacy of physicists’ mathematical portrait of the world, Raymond Tallis asked Dowker if her granular depiction of spacetime was an artefact of mathematics. Dowker insisted that it was the other way around; the mathematics of continuums is much more developed, and the notion of a granular spacetime pushes mathematicians outside of their comfort zone. She conceded, however, that physicists’ worldviews are certainly shaped by the mathematical tools they have available; it’s much easier to use existing mathematics than to invent new kinds. Knox concurred that physicists are much more likely than other scientists to conflate the mathematical representation of the world with the world itself.
Tallis also pressed Dowker and Knox on whether philosophers can help keep physicists on track by bringing their models back into alignment with human experience. Knox answered that it would be obtuse to try to reduce cutting-edge physics back to traditional concepts – but that there was a genuine project for philosophers interested in trying to reconceptualise familiar categories in light of new knowledge. Dowker provided an example of a contemporary physicist pushing in a direction that might restore a familiar notion of the passage of time: Rafael Sorkin’s work on granular spacetime suggests that the present moment may consist in the “birth” of spacetime atoms. Knox countered that perhaps we don’t need a traditional notion of the passage of time. The explanation for the “flow of time” may lie in human psychology rather than in physics. In some circumstances philosophers may be necessary to prevent physicists from returning to homely, old-fashioned conceptions of the world when in fact we don’t need them.
Closing lecture: James Ladyman (the philosopher of science)
James Ladyman’s closing lecture addressed the relationship between science and philosophy in general terms. He began with the assertion that there is no sharp distinction between the two enterprises – but they are driven apart by a division of labour. This arises in our epoch because of the success of science, and the success of philosophy in spawning sciences; the result is an enormous industry that requires intense specialisation. Ladyman quipped that you will find physicists protesting “Oh no, I don’t know anything about the other part of the strong force…” Particle physics is such a collaborative exercise that papers will often have hundreds of authors; you need a committee just to decide which scientists deserve a credit. This level of specialisation is anathema to the philosopher’s love of general knowledge and the quest for an overall worldview. As a consequence philosophers can spend an entire career studying the history of science and philosophy without getting anywhere close to the cutting edge of scientific knowledge. On this model of the relationship between science and philosophy you might define the latter as the broadest and most theoretical side of any given science.
Ladyman was keen, however, to say that there is more to philosophy. As well as contributing to science, philosophers are well placed to reflect on the nature of scientific inquiry. Scientists themselves tend to give the wider public the impression that there is a singular scientific “method”, and hence a hard and fast line between science and non-science – but philosophers disagree. There is always a tension in science between those who say “I’ve seen enough, this is true” and others who say “No, I want more proof” – and there is no algorithm to say when the search for certainty should stop. It remains a matter of judgement. You won’t often find scientists talking like this, and this is a problem because if science is seen as all or nothing, and the wider public catches glimpses of uncertainty or ideological distortion in science, then the temptation is to reject the enterprise wholesale. There are the infamous examples of asbestos, tobacco, and Big Pharma where science has colluded with industry in not telling the truth. Philosophers should encourage a more nuanced attitude toward the sciences, drawing attention to varying levels of certainty depending on the area of inquiry.
Regarding notions of philosophy’s “sacred space”, Ladyman was generally sceptical, pointing to historical examples of science’s successful encroachment on philosophical turf. In the 19th century the epithet “scientism” was coined to describe the incursion of science into the study of human beings and society – and however critical we might be of certain facets of psychology, sociology, or economics, it seems crazy to suggest that the behavioural sciences have taught us nothing. When it comes to the philosopher’s special claim to expertise regarding consciousness and human experience – a vision championed by Raymond Tallis – Ladyman sided with Barry Smith in suggesting that such philosophies are often conceptually confused on their own terms.
Addressing in turn the hubris of science, Ladyman was highly critical of those public intellectuals given free rein by the title “scientist” to speculate about matters well beyond their specialism. Such individuals tend to lack intellectual humility, and communicate little of the difficulty and complexity of science – which is a shame because it is the constant state of flux and uncertainty that makes the scientific enterprise so exciting. Scientific hubris also leads to an ill-informed dismissal of philosophy. Foundational questions of interest to philosophers can have enormous practical benefits in the long run. In the 19th century few were interested in Frege’s quest to define the number one, but the logics he developed were carried forward by Russell and Turing, and ultimately led to modern computing.
Ladyman concluded with a critique of a related attitude in government funding of scientific research. The insistent demand that research generate instantaneous “impact” works against the far more productive practice of pursuing questions for their own sake. The history of science is littered with examples of great discoveries chanced upon as a result of open-ended rather than goal-directed research.
In the Q&A, Raymond Tallis asked Ladyman’s opinion of those philosophers working on the nature of time, independent of physics. Ladyman replied that they were largely wasting their time, though he suggested that they may develop interesting logics in the process.
Hans Westerhoff suggested that philosophy could be doing far more to keep science in check; he had noticed many poor arguments and methodological errors in contemporary papers in biology, and hoped that philosophers would step in to help. Ladyman concurred that one of the problems in complexity science is that there are more free variables in the models than there are data points; such work was allowed to pass funding bodies without proper scrutiny.
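Ladyman’s worry about free variables outnumbering data points is easy to demonstrate with a toy example (my own, not one raised at the conference): a model with as many adjustable parameters as observations will fit pure noise perfectly, so a “perfect fit” by itself carries no evidential weight.

```python
# Toy demonstration of overfitting: a polynomial with as many
# coefficients as there are data points fits any noise exactly.
import random

import numpy as np

random.seed(0)
x = np.arange(5.0)
y = np.array([random.random() for _ in range(5)])  # pure noise

# Degree-4 polynomial: 5 free coefficients for 5 observations.
coeffs = np.polyfit(x, y, deg=4)
residual = np.max(np.abs(np.polyval(coeffs, x) - y))
print(residual < 1e-8)  # the fit is "perfect" despite the data being noise
```

This is why methodologists insist on held-out data or other checks: with enough free parameters, fitting the data you have is trivial and tells you nothing about the model.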
The conference concluded with an anecdote from Ladyman concerning the value of the humanities in general. He noted the frequency with which he encounters critics of a humanities education who would rather devote their careers to earning lots of money – but when asked why they respond: “So I can retire and pursue my interests in the arts and humanities…”