## Wednesday, 25 January 2017

### Careful With That Dial!

*Image from steelguitarforum.com*
You can tune a piano but you can't tuna fish. Or a universe.

Among the arguments that get trotted out with cyclical regularity is that the universe is fine-tuned for life. On the face of it, this is actually one of the best arguments in the apologist's arsenal for several reasons. It's certainly not an easy argument to debunk with any rigour. That said, defeating it is fairly straightforward once you grasp the issues.

The problem for the counter-apologist is that this argument, unlike most of the arguments we see erected, actually seems to play by all the rules. It looks at the data, draws conclusions that actually relate to the data - or seem to, at any rate - and has what looks an awful lot like solid support for its premises. But does it really stack up?

The place to begin is to deal with what fine-tuning actually is. In the lexicon of the apologist, it's an indication that somebody had to set the parameters of the universe to specific values in order to allow life - and ultimately humans - to exist. There are deep issues with this, but I'm going to defer them for the knockout blow at the end.

In the lexicon of the physicist, it simply means that certain parameters must fall within a narrow range of values if the model under scrutiny is correct.

In some cases, fine-tuning can be seen as a problem for a model if there's no explanation. We met one of them in Before the Big Bang Part I, where we discussed some problems with the classic big bang. Among them was the flatness problem, an issue with the energy density of the cosmos having to fall within a very narrow range of values in order for the cosmos to be flat (Euclidean) on large scales. There is a kind of resolution offered by Alan Guth's inflationary theory, although there are those in the physics community who see this as something of a fudge, because inflation requires new physics that hasn't yet been observed (it also suffers some fine-tuning issues itself, not least the fine-tuning of the Lambda term in GR, also known as the cosmological constant). That said, there are at least candidate explanations with observational evidence behind them; dark energy, for example, the name we give to whatever drives the observed acceleration of the expansion of the cosmos.

Ultimately all the fine-tuning 'problems' in physics are of this nature. They're parameters whose values require an explanation. Of course, what the apologist wants to do is to insert the default explanation that - because it's unexplained itself and isn't falsifiable in any way - doesn't, in fact, explain anything. This is even more of a fudge than inflation. We looked closely at this sort of thinking in Mind the Gap!

Let's look at a few common examples. The following text is from an exchange on Facebook, though a quick google tells me that this is nothing more than copypasta from somewhere else, which raises its own problems. I'll come back to those shortly, but for now, here's the text.

> 1. If the initial explosion of the big bang had differed in strength by as little as 1 part in 1060, the universe would have either quickly collapsed back on itself, or expanded too rapidly for stars to form. In either case, life would be impossible. [See Davies, 1982, pp. 90-91. (As John Jefferson Davis points out (p. 140), an accuracy of one part in 10^60 can be compared to firing a bullet at a one-inch target on the other side of the observable universe, twenty billion light years away, and hitting the target.)

The first and most obvious problem here, and what alerted me to the copypasta nature of it (beyond just experience, which tells me that such arguments are almost always copied from somewhere), is that first number. 1 part in 1060 isn't even a small number. Of course, what's actually happened (and it's been corrected in the second citation) is that the 60 should be an exponent, so it should read 1 part in $$10^{60}$$, which is a one with sixty zeroes after it.

As it turns out, the source of this was almost certainly the Discovery Institute, not least because their citation in an article from 1998 included the same text with the same missing exponent and the same later correction. At the risk of committing the genetic fallacy, any citation whose source is the Duplicity Institute should raise red flags in abundance at the very least, given their penchant for using what the new Drumpf administration has termed 'alternative facts'.

Another issue is that the citation of Paul Davies is from about the same time that inflationary theory was being developed, and well before the DI even erected this argument.

That said, we should, as always, be wary of accepting a claim at face value, even when the source is a reputable physicist, so let's have a look at what Davies actually said and see if the apologetic actually reflects it.
> It follows from (4.13) that if p > p_crit then k > 0, the universe is spatially closed, and will eventually contract. The additional gravity of the extra-dense matter will drag the galaxies back on themselves. For p < p_crit, the gravity of the cosmic matter is weaker and the universe ‘escapes’, expanding unchecked in much the same way as a rapidly receding projectile. The geometry of the universe, and its ultimate fate, thus depends on the density of matter or, equivalently, on the total number of particles in the universe, N. We are now able to grasp the full significance of the coincidence (4.12). It states precisely that nature has chosen N to have a value very close to that required to yield a spatially flat universe, with k = 0 and p = p_crit.
Well, this is exactly what the discussion in Before the Big Bang Part I was dealing with, namely the flatness problem. This is concerned with the energy density of the cosmos p and the critical density p_crit to attain a spatially flat (Euclidean) cosmos and, apart from the fact that Davies has used slightly different notation, he's expressing precisely what that post dealt with. In other words, there's a resolution on the table for this, so it's not so much a problem as an open question. Moreover, as Davies goes on to say:

> At the Planck time – the earliest epoch at which we can have any confidence in the theory – the ratio was at most an almost infinitesimal $$10^{-60}$$. If one regards the Planck time as the initial moment when the subsequent cosmic dynamics were determined, it is necessary to suppose that nature chose p to differ from p_crit by no more than one part in $$10^{60}$$.
In short, he's dealing with an instance of what we were looking at right at the head of this post. The apologist wants it to say that the universe is fine-tuned, and that therefore somebody had to twiddle some dials, but what he's actually saying is that this particular parameter, the energy density of the cosmos, must fall within a narrow range of values if the theory under scrutiny is correct. No measurement has taken place here; it's simply that, if space is flat, the energy density must fall very close to that critical value. If the early density is too high, expansion is attenuated by gravity and the cosmos recollapses; too low, and it expands too rapidly for structure to form. Moreover, as discussed in Scale Invariance and the Cosmological Constant, we know it was discovered in the '90s, more than a decade after this was written, that the expansion of the cosmos began to accelerate some four billion years or so ago. What this means is that, wait for it... the expansion rate of the cosmos is a variable! Far from being restricted to a particular value, it can change over time. More importantly for our purposes here, and going back to that definition of fine-tuning employed by physicists, the model upon which Davies' calculations are predicated - the classic big bang - is wrong! Thus, the fine-tuning he cites vanishes in a puff of future research.
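To see why this counts as fine-tuning in the physicist's sense, note that in the classic matter-dominated big bang model, any deviation of the density parameter from 1 grows as the universe expands, roughly in proportion to the scale factor. Here's a minimal toy sketch of that scaling in Python; the function name and the specific numbers are mine, chosen purely for illustration (a full calculation would also track the radiation-dominated era, where the deviation grows faster still):

```python
# Toy model of the flatness problem. Assumption: in a matter-dominated
# FRW universe, the deviation |Omega - 1| grows in proportion to the
# scale factor a. All specific numbers here are illustrative only.

def early_deviation(today_dev, a_initial, a_final=1.0):
    """Work backwards: how small must |Omega - 1| have been at a_initial
    for it to be today_dev now, given growth proportional to a?"""
    return today_dev * (a_initial / a_final)

# For the cosmos to be within 1% of flat today, the deviation at an
# early epoch (a ~ 1e-30, purely illustrative) had to be fantastically small.
required = early_deviation(today_dev=0.01, a_initial=1e-30)
print(required)  # 1e-32: the early density had to sit absurdly close to critical
```

The point of the sketch is only the direction of the logic: the "tuning" is an inference forced by the model's own dynamics, not a measured act of adjustment.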

The simple fact is that there is no instance of fine-tuning mentioned by physicists that doesn't fall under this rubric, because this is what fine-tuning means to a physicist.

Let's move on and look at some other instances for completeness before we deliver the fatal blow.
2. Calculations indicate that if the strong nuclear force, the force that binds protons and neutrons together in an atom, had been stronger or weaker by as little as 5%, life would be impossible. (Leslie, 1989, pp. 4, 35; Barrow and Tipler, p. 322.)
This is a nice example, mostly because it furnishes me with an opportunity to talk about some really interesting physics. This is going to get a bit quantum.

In The Certainty of Uncertainty, we looked at some of the implications of Heisenberg's Uncertainty Principle. One of those implications, later demonstrated experimentally, is the idea that, because the value and rate of change of any field are a pair of conjugate variables, they're subject to the same uncertainty relationship as the position and momentum of a particle. The corollary to this is that field values must fluctuate. This is the now-famous 'zero-point energy', which manifests as virtual particle pairs that arise as a differential in their field, move apart, and then come back together in annihilation to equalise the differential. These have an interesting effect when it comes to the strength and range of forces, so it's apposite to mention them here.
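To get a feel for the scales involved, the energy-time uncertainty relation lets us estimate how long a virtual electron-positron pair can persist before annihilating: the 'borrowed' energy is about twice the electron's rest energy, and the lifetime is of order ħ divided by that. A back-of-envelope sketch, good for order of magnitude only:

```python
# Back-of-envelope estimate of a virtual electron-positron pair's lifetime,
# using the energy-time uncertainty relation Delta_E * Delta_t ~ hbar.
HBAR = 1.054571817e-34            # reduced Planck constant, J*s
ELECTRON_REST_ENERGY = 8.187e-14  # m_e * c^2 in joules (~0.511 MeV)

borrowed_energy = 2 * ELECTRON_REST_ENERGY  # creating the pair costs 2*m*c^2
lifetime = HBAR / borrowed_energy
print(f"{lifetime:.1e} s")  # roughly 6e-22 seconds
```

That fleeting existence is why these pairs never appear directly, yet still measurably modify the forces between real particles.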

Let's start with the electromagnetic force. Take two charged particles a little way apart, and bring them slowly together. When they're some distance apart (we're talking about extremely short distance scales here, well below the size of an atom), they're somewhat attracted, but as they come closer together, the attraction ramps up dramatically. Why? Because the process of pair production acts as an insulator for the electromagnetic force: the further away you get, the more insulation there is, but when you get really close, the screening is reduced to pretty much nothing and the attraction climbs steeply. So the electromagnetic force is attenuated by virtual particles.

Now, in the case of the strong and weak nuclear forces, the opposite occurs, and the pair production acts as a conductor, so that when you are further away (still on extremely short distance scales, as these are very short range forces) they are actually more strongly attractive. As you get close enough to penetrate the barrier of pair production, they fall away in strength dramatically.

So how is that a problem for this apologetic? Simple; this is an entirely random process. Thus, we're being asked to accept that a completely random process is fine-tuned. Since this process plays a part in the strengths of the forces, we have to accept that the fine-tuning applies there as well, which is patently nonsense.
> 3. Calculations by Brandon Carter show that if gravity had been stronger or weaker by 1 part in 10 to the 40th power, then life-sustaining stars like the sun could not exist. This would most likely make life impossible. (Davies, 1984, p. 242.)
>
> 4. If the neutron were not about 1.001 times the mass of the proton, all protons would have decayed into neutrons or all neutrons would have decayed into protons, and thus life would not be possible. (Leslie, 1989, pp. 39-40.)
>
> 5. If the electromagnetic force were slightly stronger or weaker, life would be impossible, for a variety of different reasons. (Leslie, 1988, p. 299.)
I'm going to lump these three together, because the resolution to them is the same.

First, we should note that, in isolation, these claims are nonsense, and I don't need to inconvenience any physicists to demonstrate it.

Mass, according to our best current models, is a function of interaction with the Higgs field. Gravity is the curvature of spacetime in the presence of mass. In other words, there's a deep relationship between mass and gravity. This is why treating these things in isolation is nonsense. If we increase the strength of gravity on its own, then stars would tend to be more dense, which can cause problems. However, if you simultaneously reduce the value of interaction with the Higgs field, the result is... no change! In short, these are not independent variables, so treating them as independent is silly.
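One way to make the 'not independent variables' point concrete is the dimensionless gravitational coupling, $$\alpha_G = Gm^2/\hbar c$$, which is what actually governs how strongly gravity acts at the particle level. Scale G up and scale the mass down in compensation, and the coupling - and hence the physics - doesn't budge. A sketch using the proton mass, where the compensating mass rescaling stands in, loosely, for weakening the interaction with the Higgs field:

```python
# The dimensionless gravitational coupling alpha_G = G * m^2 / (hbar * c).
# Rescaling G and m together leaves it untouched, illustrating that
# 'strength of gravity' and 'particle mass' are not independent dials.
G = 6.674e-11         # gravitational constant, m^3 kg^-1 s^-2
HBAR = 1.055e-34      # reduced Planck constant, J*s
C = 2.998e8           # speed of light, m/s
M_PROTON = 1.673e-27  # proton mass, kg

def alpha_g(g_const, mass):
    return g_const * mass**2 / (HBAR * C)

original = alpha_g(G, M_PROTON)
# Double gravity, but reduce the mass by a factor of sqrt(2)...
rescaled = alpha_g(2 * G, M_PROTON / 2**0.5)
print(original, rescaled)  # both ~5.9e-39: the physics hasn't changed
```

The real relationship between the Higgs mechanism and gravity is of course far richer than one rescaling, but the toy calculation shows why twiddling one 'dial' in isolation is meaningless.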

Secondly, the last of those claims, concerning the electromagnetic force, fails for the reasons identified with pair production above.

I said that the source of the claims was an issue, especially the obvious fact that the apologist who presented this had copied the text from somewhere else. The biggest problem is that he didn't actually understand it. That missing exponent is a big clue. Anybody who truly understands the arguments they're making wouldn't overlook something as critical as an exponent, especially where they're trying to blind you with big numbers. This is exactly the same approach to argumentation we met before in Probably the Worst Argument in the World, wherein big numbers were used to try to show that something was impossible. The only difference here is that the numbers are being used in a slightly different context, but the goal is exactly the same, namely to show that such numbers can only be achieved by magical intervention.

When arguments are copied wholesale from elsewhere, the problem the counter-apologist faces is that the person copying the arguments rarely understands them well enough to recognise a valid objection. Indeed, I went through a period at one time of simply responding 'one-eyed trouser-trout' to such instances, because the apologist hasn't a hope of getting whether this rebuts his argument or not.

In this case, the apologist simply got shrill and insisted that I refute the arguments as presented, and that they must be true because 'physicists accept fine-tuning', entirely overlooking the fact that I'd already refuted them. In short, the apologist can erect these arguments in the belief that they make him look clever but, because he can't actually own them, they become self-defeating in his hands.

Here's the thing: We have, as yet, no good scientific reasons for supposing that the constants and values we experience in the universe could be any different. We certainly engage in thought experiments in which they are because, often, looking at things from an alternative perspective leads us to ask questions that might not have occurred to us had we just taken them at face value. Rather than asking why water finds a level, we ask what would happen if water didn't find a level, but stacked up. This leads to some really interesting questions about the nature of water that otherwise may not have arisen.

We've also looked at the idea of a 'multiverse' (a term I have some distaste for), because the mathematics physicists use to deal with the evolution of the cosmos seem to point in that direction but, beyond that, they're devices for generating questions. Maybe those values can vary, and give rise to incredibly different kinds of cosmos. It's fruitful from a heuristic perspective to think about such things.

So, are there any other problems? Once again conscious that this is becoming a quite voluminous entry, I'm going to very briefly touch on some glaring issues with any argument that the universe is fine-tuned for intelligent life and, ultimately, humans.

The first station on our whistle-stop tour of culmination will be the weak nuclear force. This is one of the four fundamental forces of the universe (we should probably say three, since it's far from clear that gravity is actually a force, but that's an unnecessary complication for now). It's responsible for some forms of radioactive decay and nothing else, as near as we can tell. There was a marvellous study conducted in 2006 by Harnik, Kribs and Perez that suggests that this force could be removed entirely from the universe without appreciably affecting the evolution of the cosmos. How does this help? Well, without the weak interactions driving the decay of some isotopes, there would be more stable isotopes capable of forming long complex chains, something restricted to carbon and silicon when the weak force is included. Thus, any designer fine-tuning the universe for intelligent life would not, given a choice, include this force.

Another issue arises when we consider two areas of research that have really found their feet in the last century or so: chaos theory and quantum mechanics. I won't treat chaos theory in any depth here, because it's the topic of a future post, wherein it will be covered comprehensively. However, it's apposite to understand just what chaos is, because it's another one of those terms that gets lobbed around in arguments without being properly understood.

Chaos is, in the simplest terms possible, sensitivity to initial conditions. What this means is that if the initial conditions of a complex system are changed in a tiny way, the long-term evolution of the system can show dramatic differences as a result.

The classic analogy is known as the 'butterfly effect'. In this thought-experiment, we're asked to imagine a butterfly flapping its wings in China and the result being a hurricane in the Caribbean. This is a bit of a simplification and doesn't capture the true essence of chaos. The thing we really need to focus on is that the flap of wings is a change in the initial conditions of the system, and that the long-term evolution of the system is affected by it.
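The sensitivity itself is easy to demonstrate with the logistic map, a standard toy chaotic system (my choice of illustration here, nothing to do with the cosmology): two trajectories starting a trillionth apart become completely uncorrelated within a few dozen iterations.

```python
# Sensitivity to initial conditions in the logistic map x -> r*x*(1-x),
# a standard toy chaotic system in its chaotic regime at r = 4.
def logistic(x, r=4.0):
    return r * x * (1.0 - x)

x, y = 0.2, 0.2 + 1e-12   # initial conditions differing by ~1 part in 10^12
max_separation = 0.0
for step in range(100):
    x, y = logistic(x), logistic(y)
    max_separation = max(max_separation, abs(x - y))

print(max_separation)  # of order 1: the trajectories have fully diverged
```

The tiny initial difference roughly doubles each iteration until it saturates, which is exactly the 'flap of wings' changing the long-term evolution of the system.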

So, let's wind the universe back to the beginning so that we can run it again. One might think that starting with exactly the same conditions will mean that the outcome will be the same, but then we have to consider quantum effects, which are truly random. As discussed in Has Evolution Been Proven, evolution is stochastic. The same is true of the evolution of the universe. This means that the future evolution of the system depends upon initial conditions plus one or more random variables. Since QM tells us that there are many, many random events involved in the evolution of the universe, not least quantum fluctuations during inflation being stretched to macroscopic scales, defining the inhomogeneities that eventually give the cosmos the structure we see, the chances of rerunning the cosmos and getting Earth again are vanishingly small, let alone humans. Thus, fine-tuning of initial conditions with any goal in mind is utter nonsense, and not to be given credence by anybody with more than three or four functioning neurons.
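The 'initial conditions plus random variables' point can be sketched the same way: hold the starting state absolutely fixed, inject random events along the way, and the outcomes still scatter. A toy sketch, in which a seeded random walk is my stand-in for quantum events and nothing more:

```python
import random

# Identical initial conditions, different random histories: a toy stand-in
# for a universe whose evolution folds in genuinely random quantum events.
def run_toy_universe(seed, steps=1000):
    rng = random.Random(seed)
    state = 0            # every run starts from exactly the same state
    for _ in range(steps):
        state += rng.choice((-1, 1))  # one random 'quantum event' per step
    return state

outcomes = {run_toy_universe(seed) for seed in range(10)}
print(sorted(outcomes))  # many distinct end states from one starting point
```

Even with the initial conditions nailed down perfectly, the randomness alone guarantees that no particular outcome, Earth and humans included, was ever the destination.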

It's also worth noting briefly that our idea of what constitutes the possibility of life is eternally subject to revision. The last several decades have revealed organisms that thrive in conditions we'd previously thought impossible. I won't list all the different environments that extremophiles have been discovered in, I'll simply link to the Wikipedia classification section. Ultimately, all assertions regarding what conditions are necessary for life are extremely (see what I did there?) anthropocentric, and should be treated with some scepticism.

In the end, though, there's one reason above all others that fine-tuning arguments for the existence of god all fail, and it's this: They suggest that god had no choice but to set those values where they are. In other words, god was constrained, and those values were imposed, which about wraps it up for omnipotence. In reality, fine-tuning can serve neither as an argument for nor against god. If god is omnipotent, then whatever values he deems fit will seem to us to be fine tuned, meaning we can take no meaning from the values whatsoever.

So, while this is quite probably the best argument the apologist has available to him, it fails to withstand any sort of rigorous scrutiny.

Here's the late, great, hugely-missed Douglas Adams, from The Salmon of Doubt:
> This is rather as if you imagine a puddle waking up one morning and thinking, 'This is an interesting world I find myself in — an interesting hole I find myself in — fits me rather neatly, doesn't it? In fact it fits me staggeringly well, must have been made to have me in it!' This is such a powerful idea that as the sun rises in the sky and the air heats up and as, gradually, the puddle gets smaller and smaller, frantically hanging on to the notion that everything's going to be alright, because this world was meant to have him in it, was built to have him in it; so the moment he disappears catches him rather by surprise. I think this may be something we need to be on the watch out for.
Not to be believed by a thinking person.

Nits, crits and typos always welcome.