Saturday, 2 April 2016

You Must Be Off Your Brane!

So, branes, eh? What's that all about?

To begin with, we need to cover what motivated string theory in the first place, which will require a potted history of physics. The best place to start here is with General Relativity. 

General relativity is a theory of gravity. Newton's theory, which had stood for well over two centuries, was taken to be the final answer, but it faced a problem in 1905, when a paper under the unassuming title On the Electrodynamics of Moving Bodies was published in the German physics journal Annalen der Physik (the Annals of Physics). This paper turned out to be a revolution in thought about the nature of space and time. Nowadays, we simply call it the Special Theory of Relativity. One of the interesting things about relativity is the popular misconception that Einstein had formulated a theory in which everything suddenly became subjective, but this wasn't actually the case. Einstein himself had wanted to call it invariance theory. What he'd actually done was to make the subjective objective.

So, what was all the hoo-ha about, and why did a theory that had nothing to do with gravity present a problem for Newton, whose theory of gravity had withstood all scrutiny for more than two centuries? Well, Newton's theory was based on some assumptions, among them that space and time were absolute and immutable. In other words, if it's 9am on Monday 4th October 1672 in Cambridge, it's 9am on Monday 4th October 1672 on an as-yet-undiscovered planet in the Andromeda galaxy, and in the heart of a star in the Kalium galaxy (strictly, although this picture has been fairly comprehensively undermined, we still use a universal standard of time, namely UTC, which is pretty much Greenwich Mean Time universalised).

Furthermore, because simultaneity held between all bodies in the universe, and because the range of gravity was infinite, gravity propagated instantaneously. This has some interesting implications, not least that, if the sun were to pop out of existence right this second, Earth, along with all the other bodies that are gravitationally bound to the sun, would instantly go careening off into space, likely with a few collisions along the way (although not as many as you might think; space is big; really big; I mean, you might think it's a long way down the road to the chemist's...).

Einstein's paper changed all that. The Special Theory of Relativity comprehensively demolished the assumption that space and time were immutable and absolute. Einstein saw that the speed of light appears in Maxwell's equations for electromagnetism as a term introduced purely for mathematical consistency, as far as we can tell, with no reference to how the source or the observer might be moving. He ran with the conclusion that light must travel at the same speed for all observers, and tried to work out what that might mean.

The result was that space and time must shift, stretch and squeeze in order to accommodate this constancy of the speed of light. It wasn't about gravity, but now we had a new picture of space and time in which neither existed independently; they were different facets of the same entity, spacetime. Newton's theory was simply not compatible with it, not least because the new picture placed a limit on the speed at which gravity could propagate. What this means is that if, as discussed above, the sun were to pop out of existence this instant, Earth would happily continue in its orbit for some 8 minutes or so before careening off, while the change in the curvature of space propagated outwards.
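That figure of 8 minutes, incidentally, is just the time light (or any change in spacetime curvature, which propagates at the same speed) takes to cross the Earth–Sun distance, using rounded values:

\begin{equation} t = \frac{d}{c} = \frac{1.5\times10^{11}\ \mathrm{m}}{3\times10^{8}\ \mathrm{m/s}} \approx 500\ \mathrm{s} \approx 8.3\ \mathrm{minutes} \end{equation}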

This troubled Einstein, and for the next 10 years he worked on producing a theory of gravity that was compatible with this new picture of space and time. He said that the moment it all made sense was when he thought about an elevator falling in its shaft, and what that would mean for an observer inside. He worked out that being immersed in a gravitational field and being accelerated are basically the same thing (the equivalence principle). He extrapolated this to the General Theory of Relativity, which was published in 1915.


So now we had a successful theory of gravity, and people were really happy with it. People went and played with it for a bit, and some interesting results popped up. For example, Theodor Kaluza, a then little-known German mathematician, was mucking about with the equations of GR and decided to try them out in 5 dimensions. To his surprise, out fell Maxwell's field equations for electromagnetism. This wasn't the first time something like this had happened; Gunnar Nordström, a Finnish physicist who'd independently formulated a theory of gravitation in terms of the geometry of curved spacetime, had worked out in 1914 that gravity in 5 dimensions contained electromagnetism in 4, and was working toward unifying electromagnetism with gravity as it appeared in his theory. That unification was dropped when General Relativity was published, because it comprehensively superseded Nordström's picture of gravity. A few years later, Kaluza's result was recast in a quantum setting by Oskar Klein (not to be confused with Felix Klein, of 'Klein bottle' fame). The upshot is that the idea of extra dimensions was firmly with us, albeit in a form that didn't fully resurface for many years.


There was still a problem, though. Some work had been going on in a different field for some years, starting with Max Planck and black-body radiation. Planck had been trying to work out the energy inside an oven. He'd begun by adding up the contributions from all the frequencies that should be present and, to his surprise, he discovered that the energy should be infinite. This was clearly bunk, or we'd never have had any use for microwaves. Clearly something was wrong, but what was it? After much mucking about with the equations, he realised something interesting, and it's all to do with how waves behave.

[Image: a single cycle of a sine wave, with the zero point marked]

Look at this picture. It shows a periodic sine wave. You can see that the wave cycle begins at one edge of the image and ends at the other. It also illustrates the zero point, which is where the amplitude of the wave is zero. What Planck realised was that, if he included in his calculations only those frequencies whose wave returned to the zero point exactly at the wall of the oven, the calculations worked out and gave the correct energies.

This principle allows any frequency whose wave returns to the zero point at the wall, even if the wall falls halfway through a cycle; what matters is that a whole number of half-wavelengths fits between the walls.

He realised that this meant that energy was quantised, which meant it came in discrete units. If you couldn't get back to the zero line at the wall, you couldn't join the party. This meant that any of the following were perfectly acceptable:

[Image: wave modes that return exactly to the zero point at the oven wall]

While the following are not:

[Image: wave modes that do not return to the zero point at the wall]
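To put the same two ingredients into symbols (a sketch of my own rather than Planck's derivation, with L standing for the distance between the oven walls, n counting how many half-wavelengths fit, and c the wave speed), the allowed modes are:

\begin{equation} \lambda_n = \frac{2L}{n}, \qquad f_n = \frac{c}{\lambda_n} = \frac{nc}{2L}, \qquad n = 1, 2, 3, \ldots \end{equation}

and Planck's quantum hypothesis is that each such mode can only carry energy in whole-number multiples of a basic unit:

\begin{equation} E = m\,h f_n, \qquad m = 0, 1, 2, \ldots \end{equation}

where h is the Planck constant we'll meet properly below.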



This was the birth of Quantum Mechanics. Now, QM presents a bit of a problem. Underpinning QM is a principle known as Heisenberg's Uncertainty Principle, after Werner Heisenberg, who formulated it and whom we'll be meeting again soon. This principle tells us that, for any quantum entity, there are pairs of variables, known as conjugate variables, that are related by a rule about what we can know about them.

Here's the critical equation, our first:

\begin{equation} \Delta p \Delta x \geq \hbar/2 \end{equation}

Where Δ (delta) denotes uncertainty, p denotes momentum, x denotes position, and ħ (h-bar) is the reduced Planck constant. The Planck constant is given in joule seconds and has the value 6.626×10^-34 Js. The reduced Planck constant (also known as Dirac's constant) is obtained by dividing this value by 2π to give 1.055×10^-34 Js.*
 
The pair of conjugate variables most discussed is the momentum and position of a particle, but there are many such pairs, such as the value and rate of change of a field, angular momentum and orientation, energy and time, etc. What the equation tells us is that the uncertainty in momentum multiplied by the uncertainty in position can never be less than this tiny number, ħ/2. In a nutshell, the more accurately we can pin down one of these values, the less certain we can be about the other. My current favourite illustration is a photograph. If we take a photograph of, say, a housefly, with a high shutter speed, we can pin down the position of the fly to extreme accuracy, but we can't know much about its momentum. If, on the other hand, we use a slow shutter speed, we can get some sense of how fast the fly is moving but, because of the blur, we can't tell a great deal about its position.
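To put a number on it (a rough worked example of my own, nothing special about the values chosen): pin down a particle's position to about the size of an atom, Δx ≈ 10^-10 m, and the minimum uncertainty in its momentum is

\begin{equation} \Delta p \geq \frac{\hbar}{2\,\Delta x} = \frac{1.055\times10^{-34}\ \mathrm{Js}}{2\times10^{-10}\ \mathrm{m}} \approx 5.3\times10^{-25}\ \mathrm{kg\,m/s} \end{equation}

which sounds tiny, but for something as light as an electron (about 9.1×10^-31 kg) it corresponds to a velocity uncertainty of several hundred kilometres per second.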

All well and good, but why is this a problem for General Relativity? Well, General Relativity requires spacetime to be smooth, with geometry that varies gently from place to place. The uncertainty principle, applied to the conjugate variables 'value' and 'rate of change' for the field 'spacetime', tells us that, at the smallest scales, spacetime is anything but smooth. It's a seething, roiling mess. Now, for the most part, this isn't an issue. Physicists working in QM generally stick to what they're doing, working with the very small, and physicists working with GR generally stick to what they're doing, working with the very large. Everybody knew there was a problem, but it wasn't causing any major issues. A few people toyed around with trying to get the two to play nicely together, but what almost invariably resulted was infinities. That's not necessarily a problem in itself but, given that the outputs of these calculations were basically probabilities, it was clear that something was wrong, because a probability cannot exceed 1, let alone reach infinity. So peeps got on with their work, aware that there was a problem looming on the horizon, but not massively troubled by it.
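One way to get a feel for why (a heuristic of my own using the energy–time pair from the list above, rather than anything more rigorous): apply the uncertainty relation to ever shorter time intervals and the unavoidable energy fluctuations grow without limit,

\begin{equation} \Delta E \gtrsim \frac{\hbar}{2\,\Delta t} \end{equation}

and by the time Δt is down around the Planck scale, those fluctuations are violent enough to wreck any notion of a gently varying geometry.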


Fast-forward to the 1940s and Werner Heisenberg. He was attempting to construct a theory of particle interactions that was independent of local notions of space and time, because he thought such notions were problematic at quantum scales, not least in the context of point particles. He employed an S-matrix, a tool which had been introduced by John Wheeler a few years previously. Heisenberg's calculations turned out to be wildly out of accord with observation, off by miles in fact, but it was clear that the approach might be useful to a quantum theory of gravity.


Fast forward again, this time to the 60s, and we see the emergence of string theory proper, as a theory of the strong interaction (interactions between hadrons: composite particles whose constituents are bound by the strong nuclear force). It was never very successful in this context, but it started the ball rolling.


String theory went through several revisions, and eventually emerged as a theory in which all fundamental particles are actually little vibrating strings. The basic idea is extremely straightforward. We know that particles have mass, and that mass corresponds to energy. The idea underpinning string theory, then, is that these strings vibrate with different energies and in different patterns, each of which corresponds to a particular particle. Vibrating one way, a string has the mass and charge of one particle; vibrating a different way, it's a different particle. One of the key things concerning these strings is that they have a minimum length, the Planck length. Two things got everybody excited: first, one of the string vibrational configurations corresponds to a graviton, the boson thought to transmit gravity in the same way that the photon transmits the electromagnetic force; second, the minimum length imposed by the strings smooths spacetime out just enough at small scales for General Relativity to hold. This is why many physicists talk about it as the only contender for a quantum theory of gravity.
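The mass–energy link the idea leans on is just Einstein's familiar relation, so a string vibrating with more energy simply shows up as a heavier particle:

\begin{equation} E = mc^{2} \quad\Longrightarrow\quad m = \frac{E}{c^{2}} \end{equation}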


One of the early issues with string theory was that the name didn't actually fit very well, because there wasn't just one string theory, there were five. This was a cause of some consternation. Then, in 1995, Edward Witten, one of the pioneers of string theory, noticed something about the theories. Each theory has a feature called the 'coupling constant'. When doing calculations in any of the theories using a perturbative approach, a large coupling constant makes the calculations horrendously difficult, while a small coupling constant makes them considerably easier. What Witten noticed was that the different theories contained dualities, the result of which was that, where the coupling constant was large in one of the theories, it corresponded to a small coupling constant in one of the other theories. All of the theories were dual to one another (except one which, it turned out, was self-dual). This allowed theorists to unify all the string theories, along with a framework called 11-dimensional supergravity, into a single framework, which became known as M-Theory.
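Schematically (this is the standard way of writing the strong–weak duality, not something specific to this post), a theory with string coupling g_s is mapped to its dual partner with coupling

\begin{equation} g_s \longleftrightarrow \frac{1}{g_s} \end{equation}

so a horrendous strong-coupling calculation in one theory becomes a tractable weak-coupling calculation in its dual.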


One of the results that came up early on in this newly-unified framework was the suggestion that the constituent entities, the strings themselves, needn't be restricted to one dimension. Physicists started to play around with higher-dimensional versions of these strings, and came up with 'branes', as in 'membranes', a membrane being a two-dimensional brane. This was generalised to branes of any number of dimensions, with that number denoted 'p', giving p-branes. Also, they needn't be restricted in scale, so they could be any size, right down to the Planck length. This finally brings us to cosmology.


Physicists Paul Steinhardt, one of the pioneers of inflationary theory, and Neil Turok, then professor of mathematical physics at Cambridge, were playing around with the idea of branes when an idea struck them: what if the universe we experience actually resides on a 3-brane?


What they came up with is the idea that the Big Bang was simply the collision of two 3-branes. The beauty of this idea is that it completely removes the singularity, known to be problematic since shortly after Hawking and Penrose first presented their singularity theorem in 1970, as already discussed. Moreover, it provides a ready explanation for all sorts of things.


So, in essence, the theory says that the Big Bang was the collision of these two 3-branes, which were (and are) separated by an additional dimension of space, one so small that we can't detect it. The classic analogy employed for how this works is a garden hose seen from a distance. From a long way away, the hose looks one-dimensional, but as you get closer, you can see that it has girth. The additional dimensions of M-Theory are the same, massively compactified, so small that they lie below our ability to detect them, not least because the most powerful particle accelerator we currently have, the Large Hadron Collider, can only probe down to around 10^-19 metres, while the Planck length is around 10^-35 metres. To probe that scale would take a particle accelerator about the size of the solar system which, as Hawking put it, is unlikely to be built in the current economic climate.
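For a rough sense of where that collider figure comes from (a back-of-the-envelope estimate of my own, using the rule of thumb that the distance a collider can resolve is about ħc divided by the collision energy, taken here as the LHC's roughly 13 TeV, or about 2.1×10^-6 J):

\begin{equation} \lambda \sim \frac{\hbar c}{E} \approx \frac{3.2\times10^{-26}\ \mathrm{J\,m}}{2.1\times10^{-6}\ \mathrm{J}} \approx 1.5\times10^{-20}\ \mathrm{m} \end{equation}

Call it somewhere between 10^-19 and 10^-20 metres; either way, still a good fifteen orders of magnitude short of the Planck length.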

Anyhoo, the energy input at the Big Bang was simply the collision of these branes. Strings can be open or closed. Open strings are tethered at their ends to the brane on which they reside, while closed strings are free to travel between the branes. This provides a ready candidate for a dark matter solution: gravitons are closed strings, so everything is transparent to gravity, which matches our experience. In this framework, dark matter is simply ordinary matter residing on the adjacent brane. Photons are open strings, which is why we can't see anything on the other brane; any photons over there are tethered to that brane. That's why the only interaction we can detect is via gravity.


Once the branes have collided, expansion proceeds in pretty much the same way as in inflationary theory. The thing that distinguishes the two is their explanation for the inhomogeneities in the CMBR. In inflation, these are caused by quantum fluctuations during the inflationary period getting stretched to macro scale. In the brane model, they're caused by the branes rippling slightly on approach, meaning that some parts of the branes make contact before others. This has observable consequences that will allow sensitive experiments to distinguish between the two. The first is that, due to the nature of the explanation for the inhomogeneities in the CMBR, the spectrum of primordial gravitational waves is predicted to be blue-tilted (more power at shorter wavelengths) in the brane model, as compared to the slightly red-tilted spectrum of the inflationary model. Also, because of the way these inhomogeneities are generated in the brane model, the B-mode polarisation we discussed in the context of inflationary theory will not be observed. If we observe that B-mode polarisation in the CMBR, brane-worlds is falsified. If we observe the gravitational-wave spectrum tilted toward the red end, brane-worlds is falsified. If it's tilted toward the blue end, inflationary theory is falsified.
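In the notation cosmologists tend to use for this (a standard convention rather than anything from the original post), the gravitational-wave spectrum is written as a power law in wavenumber k:

\begin{equation} P_{t}(k) \propto k^{\,n_{t}}, \qquad n_{t} < 0 \ \text{(red tilt: inflation)}, \qquad n_{t} > 0 \ \text{(blue tilt: colliding branes)} \end{equation}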


I heartily recommend Steinhardt and Turok's book on the subject, Endless Universe: Beyond the Big Bang.


It's also worth noting at this point that eternal inflationary theory is likewise rooted in string theory.

That'll do for now, I think. Feel free to raise any questions.


*Some points on notation:

Because we're working with extremely large and extremely small numbers, we'll use a condensed notation in which exponents are used, just like real physicists. Thus, where a 10 is followed by a positive exponent, it denotes the number of zeroes after the 1, so 10^34 is 1 with 34 zeroes after it. Where 10 is followed by a negative exponent, it denotes the number of zeroes before the 1, including the zero to the left of the decimal point, so 10^-34 is 0.0000000000000000000000000000000001.

Edit: Added an animation illustrating the answer to a question in the comments below.

[Animation: three waves of different frequencies passing a fixed marker at the same speed]

As you can see, all three waves are moving at the same velocity, which we can take to be c. However, their peaks are passing our marker at different rates. Those with more peaks in a given time carry more energy. This difference in energy we perceive as colour. Einstein showed, with his 1905 paper dealing with the photoelectric effect, that increasing the intensity of the light (analogous to amplitude) didn't trigger the effect; only an increase in frequency (bluer light) did.
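In equation form (with f for the frequency of the light and φ standing in, for illustration, for the minimum energy needed to free an electron from the metal, the 'work function'), each photon carries energy

\begin{equation} E = h f, \qquad E_{\mathrm{kinetic}} = h f - \phi \end{equation}

so turning up the intensity just delivers more photons of the same individual energy; only raising the frequency gets each photon over the threshold.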