Why String Theory?

  The most striking feature of the universe revealed by the microwave background is its homogeneity. In all directions, it looks the same, at a single temperature of 2.73 degrees above absolute zero. East, north, west and south the microwave background is identical – almost. Just barely visible, at a level of one part in a hundred thousand, are tiny irregularities. The temperatures of different parts of the sky are, in the end, not quite the same. By one hundred-thousandth of a degree, some parts are hotter and some parts are cooler. In these tiny blemishes of the past, we see our present and future. Time and gravity have grown these dimples into enormous wrinkles. What was then a small local surplus of energy is now a galactic supercluster – a colossal region of overdense space. An underdensity at the level of one part in a hundred thousand has turned into a giant void bereft of galaxies.

  Such is the universe as we rewind cosmic time: younger, smaller, hotter and smoother. The more we go back in time, the more it becomes so. How far back can we go?

  There is strong and more or less incontrovertible evidence that this picture holds back to the period when the universe was one second old. The temperature at this point was around ten billion degrees, and all of the universe we see today could be contained in a box around one light second – three hundred thousand kilometres – in size. It was at this point that the first stable nuclei were formed by nuclear reactions in the expanding universe. It is measurements of the primordial abundances of such nuclei, through observations of very old stars, that provide the support for this picture. Looking further back in time than the one second point, our equations show a universe continuing to contract. Beyond this instant, however, there is no definitive observational support for what the equations suggest.

  What did happen earlier than one second? We do not know. In this epoch, all is speculation. There is good speculation and bad speculation. There are indeed ideas such as cosmic inflation, which come with a good deal of indirect empirical support. However, we do not know. Inflation is a plausible, indeed maybe highly plausible, idea. It may explain the detailed form of the tiny blemishes that are present in the microwave background, but it is not clear that it should be the only possibility.

  There is also bad speculation. Much has been said or written about what may have happened before the Big Bang, including anthropic landscapes of different laws of physics and eternally reproducing multiverses of many possible universes. It is not that the ideas are necessarily wrong. It is rather that the extravagance of these conjectures is matched only by the paucity of either rigorous calculation or observational motivation. The danger is that this replaces physics’ long-standing chaste marriage of solid theory and careful experiment with a form of scientific soft porn best suited for the pages of glossy magazines.

  While we will return to this topic in chapter 10, I will for now let sobriety be the better part of speculation and be silent whereof I cannot speak.

  3.7 DIFFERENT CAN BE THE SAME

  I next want to describe a result that is more theoretical than experimental. It is more a discovery about the structure of physical theories than a statement about the meaning of some measured phenomenon. As such, it is not directly about the observable world in the way that the other parts of this chapter are. Nonetheless, the conceptual and intellectual significance of this discovery merits it a place in the pantheon of ideas in theoretical physics. This discovery is the existence of dualities.

  In plain English a duality is the statement that one theory described one way is equivalent to another theory described a different way. Put in this form, this does not sound remarkable. There are many different but equivalent pairs where the equivalence is neither profound nor interesting. The expressions ‘The fourth power of the number of green bottles originally hanging on a wall’ and ‘The number of men the grand old Duke of York marched up a hill’ both describe the number ten thousand. However, any attempt to find a deep relationship between these two statements will end in a mental health unit. Dualities are important because they reveal profound and useful equivalences between structures and theories that on the surface appear totally unrelated.

  Large numbers of dualities have been enumerated, mostly involving relations between different theories. However, some dualities – called self-dualities – can exist between the same theory with different choices for its parameters.

  I describe here in some more detail how this case works and what this entails. I focus in particular on language appropriate to quantum field theories, as these are where some of the most interesting applications of duality occur. Almost all such theories have a parameter – let us call it λ – which measures the strength of interactions between different particles. This is analogous to the idea of electric charge, which measures how charged particles respond to an electromagnetic field. The trajectories of moving charged particles bend within a magnetic field, and the larger the charge, the more they bend. Particles with no charge do not interact at all and continue in a straight line.

  If λ is zero, there are no interactions. We now imagine what happens as we change λ. Initially, λ is almost zero and the particles do not care about one another. They each move past with no deflection, being neither attracted nor repelled. As we slowly increase λ, the interactions become small but non-zero. Particles that pass each other deviate from their original paths. However, these paths are only slightly modified and differ merely by a small perturbation. Now increase λ further so that it becomes large, and the mutual attraction between different particles becomes enormous. The particles no longer simply pass by each other but interact strongly, with trajectories that differ totally from their original routes.

  We can make a human analogy for these cases of free theories with no interactions, theories that interact only very weakly and theories that interact strongly. We replace particles with humans and consider a theory of human interactions. We imagine two people – let us call them Arabella and Bert – walking down a street in opposite directions. Both have definite plans: Arabella is on her way to a rugby match and Bert is going clothes shopping. What happens as they pass each other? If Arabella and Bert are complete strangers to each other, they will walk by with no acknowledgments, continuing to their separate destinations. This is the analogue of the zero coupling case. If Arabella and Bert are weakly coupled – perhaps they are work colleagues – they may stop and greet each other before resuming their journeys. They may be a few minutes later than they would otherwise have been, but their destination is unchanged. The final case is the strong coupling case, where Arabella and Bert are cousins who have not seen each other for years. Here they may abandon their journeys in favour of lunch with each other and an extended catch-up. The effect of the interaction is not a small perturbation on the original state of affairs, but instead a total alteration.

  It is ‘easy’ to predict what happens in a theory with small values of the coupling constant ‘λ’. The result is a small perturbation on the case of zero coupling, and various approximation techniques can be used to evaluate this perturbation. In practice these calculations do become intricate, but their underlying principles are clear. However, as the coupling gets large, ‘easy’ becomes ‘difficult’, ‘difficult’ becomes ‘hard’ and ‘hard’ becomes ‘almost impossible’. Returning to our human example, it is as if we asked what would happen in Times Square if everyone there suddenly realised they had all been at elementary school together.

  Once the coupling has been ratcheted up to the ‘impossible’ level, what happens as we increase it further? This is where dualities enter. In a theory with a duality, something remarkable happens. Instead of continuing to ascend a gradated ladder of difficulty, we start descending. ‘Impossible’ becomes ‘hard’, ‘hard’ becomes ‘difficult’, ‘difficult’ becomes ‘easy’ and ‘easy’ becomes ‘trivial’. The theory with infinite coupling turns out to be indistinguishable from the theory with zero coupling. The theory with coupling constant ‘λ’ is, in a sense that can be made precise, exactly the same as the theory with coupling constant ‘1/λ’. Such a theory is called self-dual. We can imagine turning a dial to adjust the value of one of its parameters. As we do this, we end up bringing the theory back to itself.
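As a caricature of this dial-turning, and nothing more, the toy sketch below (a hypothetical map, not any actual field theory) sends a coupling λ to 1/λ: strongly coupled values land in the weakly coupled regime of the dual description, and λ = 1 is the self-dual point that maps back to itself.

```python
# Toy sketch of a self-duality map (illustrative only, not a real
# quantum field theory): the map lam -> 1/lam exchanges the strong
# and weak coupling regimes, and lam = 1 is the self-dual point.

def dual_coupling(lam: float) -> float:
    """Hypothetical duality map sending a coupling to its inverse."""
    return 1.0 / lam

for lam in [0.01, 0.5, 1.0, 2.0, 100.0]:
    regime = "weak" if lam < 1 else ("strong" if lam > 1 else "self-dual")
    print(f"lam = {lam:>6}  ({regime:9}) -> dual lam = {dual_coupling(lam)}")
```

Applying the map twice returns the original coupling, which is what lets the same theory be read in either description.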

  For this case of self-duality, the duality relates a theory in one regime to the same theory in a different regime. The other common kind of duality relates a theory in one regime to a second theory in a different regime. Here, we start with theory A at zero coupling. As you dial this coupling up, this original theory becomes progressively harder and harder to solve. However, as you continue increasing the coupling, you realise that you have arrived at a second theory, B, in its own weakly coupled regime. The enormous calculational boon from this is that it allows the replacement of theory A in the ‘help-I-can’t-solve-it’ regime with theory B in the ‘small-couplings-yes-I-can-do-this’ regime.

  There is one example of duality that has been especially prominent in the last fifteen years. This is called the AdS/CFT correspondence, or sometimes the gauge/gravity correspondence, and was formulated by the Argentine physicist Juan Maldacena in 1997. We shall discuss this duality and its applications at greater length in chapter 6 and particularly in chapter 8. Here I shall give simply a brief introduction to what is one of the most important manifestations of duality. The AdS/CFT correspondence is an example of an equivalence between one theory in a certain regime with a different theory in another regime. It states the equivalence of a four-dimensional quantum field theory of particle interactions in the large coupling, hard-to-solve regime with five-dimensional gravity in the easily tractable weak coupling regime.20

  The word ‘equivalence’ should attract arch scepticism. What is really meant here by equivalent? After all, one theory lives in five spacetime dimensions and the other theory lives in four spacetime dimensions. Furthermore, one theory involves gravitational forces and the other theory does not. Nonetheless, ‘exactly equivalent’ means precisely that – there exists a dictionary such that every quantity in one theory can be translated into a quantity in the other theory. Calculating a quantity in one theory gives the same result as calculating the equivalent quantity in the other theory. Information is neither lost nor gained, and there is exactly the same content on either side of the translation. As with the inscriptions on the Rosetta stone, the same information is available however you choose to express it.

  A good indication of the importance of this result is that at first sight it seems obviously wrong. The gravitational theory is in five dimensions. The gauge theory is in four dimensions. How can a five-dimensional theory be equivalent to a four-dimensional one? There is an extra dimension to the theory, and so one would expect, not unreasonably, that there are more degrees of freedom, more internal knobs you can tune, in a theory with an extra dimension.

  It is true that many of the checks of this conjecture are too technical for this book, although I shall describe some at greater length in chapter 8. Here, however, I only wish to explain, at least heuristically, why this dimensionality argument is wrong, and why it is possible for a five-dimensional gravitational theory to be equivalent to a non-gravitational four-dimensional theory.

  To do so, we imagine what happens in each theory as you put lots and lots of stuff into a box of fixed size. The simplest example of a field theory is pure electromagnetism, the theory of light. In pure electromagnetism, we can in principle put as much radiation as we wish – even infinite amounts – inside a box. Such a box is called an oven. The more radiation you put in, the hotter the oven gets – but as long as the walls are strong enough, there is no actual limit on how hot the oven can be. Now consider a gravitational oven. We again take a box of fixed size and try to squeeze more and more matter into it. However, the difference is that in the gravitational theory, at some point a limit is reached. Once we reach this limit, any more matter we put in will cause the box to collapse and form a black hole. The game of feeding the oven with particles is over, and in a gravitational theory there is a maximum amount of energy that can be stuffed into a box of any given size.
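This gravitational cut-off can be made quantitative with the standard Schwarzschild-radius formula (textbook physics, assumed here rather than stated in the text): a sphere of radius r collapses to a black hole once its mass exceeds rc²/2G. A minimal numerical sketch:

```python
# Sketch of the 'gravitational oven' limit, using the standard
# Schwarzschild condition r_s = 2GM/c^2: a region of radius r can hold
# at most M = r c^2 / (2G) before it lies within its own horizon.

G = 6.674e-11   # Newton's constant, m^3 kg^-1 s^-2
c = 2.998e8     # speed of light, m/s

def max_mass_in_box(radius_m: float) -> float:
    """Largest mass (kg) a sphere of this radius can hold before
    it collapses to a black hole."""
    return radius_m * c**2 / (2 * G)

# A one-metre sphere: roughly 6.7e26 kg, about a hundred Earth masses,
# before gravity wins and a black hole forms.
print(f"{max_mass_in_box(1.0):.2e} kg")
```

The limit grows only linearly with the radius, which is the seed of the area-not-volume counting discussed below for black hole entropy.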

  Black holes are among the best studied objects in theoretical physics, and one of their most insightful students has been the Cambridge University physicist and science icon Stephen Hawking. In 1973 he, concurrently with related proposals by Jacob Bekenstein, was able to show that black holes have an entropy proportional to the area of the black hole.

  First, what is meant by the area of a black hole? For every black hole, there is a surface around it such that every object within that surface, no matter how fast it is moving or in what direction, will fall into the black hole. Everything within this surface is trapped: once inside, all paths lead to the singularity. The area of a black hole is simply the area of this surface, commonly called the event horizon.

  Second, what is entropy? Entropy is a measure of the number of ‘degrees of freedom’ of an object: the total number of ways to rearrange its internals while keeping its externals unaltered. For example, the entropy of the gas in a room is a measure of the total number of ways the gas molecules can arrange themselves within the room. The fact that the entropy of a black hole is proportional to its area tells us that the number of ways of rearranging the black hole internals depends upon the surface area of the black hole and not upon the volume. In contrast, the entropy of the non-gravitational electromagnetic oven grows, precisely as one would naively expect, with the volume of the oven and not its surface area.
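The contrast can be made concrete. The sketch below assumes the standard Bekenstein–Hawking formula S = k_B A / (4 l_p²), where A is the horizon area and l_p the Planck length (textbook physics, not spelled out in the text), and then illustrates the scaling difference: doubling the size of a region multiplies a volume-law entropy by eight, but a horizon-area entropy by only four.

```python
import math

# Standard constants, SI units
G = 6.674e-11       # Newton's constant
c = 2.998e8         # speed of light
hbar = 1.055e-34    # reduced Planck constant
k_B = 1.381e-23     # Boltzmann constant

def bh_entropy(mass_kg: float) -> float:
    """Bekenstein-Hawking entropy S = k_B * A / (4 * l_p^2)."""
    r_s = 2 * G * mass_kg / c**2      # Schwarzschild radius
    area = 4 * math.pi * r_s**2       # horizon area
    l_p_sq = hbar * G / c**3          # Planck length squared
    return k_B * area / (4 * l_p_sq)

# A solar-mass black hole carries an entropy of order 1e54 J/K,
# vastly more than the Sun itself.
print(f"{bh_entropy(1.989e30):.1e} J/K")

# Schematic scaling contrast: doubling a region's linear size scales a
# volume law (ordinary radiation) by 2^3 = 8, an area law by 2^2 = 4.
print(2.0**3, 2.0**2)
```

Only the scaling exponents matter for the argument in the text; the absolute entropy value simply shows how enormous the area-law count already is.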

  This result is deep. The fact that the entropy of a black hole depends on its area tells us that in a gravitational theory the amount of stuff we can pack into a given volume is set not by the volume of the region, but instead by its surface area – which is one dimension lower. For a five-dimensional gravitational theory, as arises in the AdS/CFT correspondence, what this tells us is that the number of degrees of freedom is actually counted by a quantity which is four-dimensional and not five-dimensional.

  This is very, very far from a proof of the AdS/CFT correspondence. However, it is an illustration of one way in which this apparently obviously wrong correspondence is neither obvious nor wrong.

  I shall return to dualities later in this book. For now, let me restate why dualities are so important. In particular, why are they so important that, despite being entirely theoretical, they deserve a place in a list that includes quantum mechanics and relativity? Dualities matter because they have allowed the landscape of physical theories to be shrunk dramatically. Once upon a time, the strong coupling regime in the map of theoretical physics was marked as Terra Incognita and decorated with pictures of dragons and sea monsters. The explorers have now arrived, and they have found this land to be filled with semi-detached houses from Milton Keynes.

  3.8 NATURE IS SMARTER THAN WE ARE

  There is a final discovery – better, a meta-discovery – that I want to describe. This is not directly a scientific result. It is neither a physical law about nature nor a statement about the structure of physical theories. It is rather an observation founded on the other discoveries. It is an observation of the essential conservatism of nature compared to the radicalism of scientists.

  It can be summarised as the statement that true revolutions are rare. Most big discoveries do not change the laws of physics, but rather reveal unexpected consequences of the existing laws. Scientific research is hard, and some of the problems tackled are very hard indeed. It is easy to spend a lot of time thinking, working and calculating, and yet not make any progress. It can be psychologically easy to give up on a problem, believing its solution requires new principles, when what in the end is required is simply clear thinking about the consequences of existing laws.

  The history of quantum mechanics provides a striking example of this. The benefit of hindsight tells us that the laws of quantum mechanics as formulated in 1926 were correct. The high road that led to the Standard Model and an understanding of three of the forces of nature was marked not by modifying these laws but by understanding and applying them. However, the historical record shows that on
encountering difficulties many physicists instead tried to modify the foundations. We have already discussed two examples: Einstein’s troubles with quantum mechanics, and the problem of infinities in quantum electrodynamics. We can add a third to these: the problem of understanding the strong nuclear force.

  While the existence of the strong nuclear force had been known since the early days of nuclear physics, its detailed experimental study only began in the 1950s and 1960s, when particle colliders obtained sufficient energy to reach the characteristic nuclear energy scales. These colliders soon discovered a menagerie of strongly interacting particles. The particles appearing in quantum electrodynamics had been limited in number: the photon (the electromagnetic force carrier), the electron and positron, and muon and antimuon. In contrast, the study of the strong force revealed the proton and neutron, the neutral and charged pions, the neutral and charged Kaons, the K-star, the rho, the Lambda, the Omega, the phi, the eta, the eta-prime, the delta, the sigma, the xi … prompting the Italian Nobel Laureate Enrico Fermi to quip ‘If I could remember the names of all these particles, I’d be a botanist’. The experimental manifestation of the strong force looked nothing like that of the electromagnetic force. It was entirely bizarre, and there seemed no organising logic to how and where these particles came from. Freeman Dyson, one of the pioneers of quantum electrodynamics, had remarked in 1970 that he thought the correct theory of strong interactions would not be found for a hundred years.

  The combination of unexplained experimental results and the revolutionary air of 1968 Berkeley led to further radical proposals: total nuclear democracy, the abandonment of fundamental particles and even the fusion of quantum theory and Eastern philosophy into something called the Tao of Physics. All heady, all exciting – and all wrong. In the end, the strong force is a close cousin of the electromagnetic force, and it was explained by conventional quantum field theory, revealing aspects that had not previously been appreciated.