Why String Theory?


  Quantum mechanics is also the great organising principle on which chemistry is built. It is empirically true that the different chemical elements can be classified into different types. Some elements are highly reactive and immediately bind into compounds – for example sodium and chlorine. Others are inert and stable, such as the noble gases neon, argon and xenon. This knowledge is summarised in the periodic table, made famous by the Russian chemist Dmitri Mendeleev. By itself, this is an alchemical almanac of mysterious patterns and unexplained principles. The behaviour of elements repeats periodically at intervals, but the sizes of the intervals – 2, 8, 8, 18 … – are unexplained. There is a structure, but it is unclear where the structure comes from. Quantum mechanics reveals the origin of these apparently mysterious numbers as simply the number of solutions of a particular equation that describes the quantum mechanics of atoms, Legendre’s equation, named after the eighteenth-century French mathematician Adrien-Marie Legendre.

  Quantum mechanics provides a reason for the rules of chemistry. It explains both qualitatively and quantitatively why different elements have the properties they do, and why certain elements like to bond with other elements. One of the great eureka moments as an undergraduate physicist or chemist comes from solving the fundamental equation of quantum mechanics, called the Schrödinger equation, for the system of a heavy positive charge – a nucleus – and a light negative charge – an electron. Given the right preparation the calculations are not hard, and the result immediately reveals the basic structure of the periodic table.
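
  The counting behind that structure can be sketched directly. For a given principal quantum number n, the solutions allow orbital quantum numbers l = 0, …, n−1, each with 2l+1 values of the magnetic quantum number m, and each orbital holds two electrons of opposite spin. A few lines of Python (purely illustrative, not anything from the original calculation) reproduce the shell capacities:

```python
# Count the electron states in shell n of a hydrogen-like atom.
# For each n: l runs over 0..n-1, each l contributes 2l+1 orbitals,
# and every orbital holds two electrons (spin up and spin down).
def shell_capacity(n):
    orbitals = sum(2 * l + 1 for l in range(n))  # this sum equals n**2
    return 2 * orbitals                          # two spin states per orbital

print([shell_capacity(n) for n in range(1, 5)])  # [2, 8, 18, 32]
```

  The periodic intervals 2, 8, 8, 18 of the table itself then follow from the order in which these shells are filled as the elements get heavier.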

  It is true that only for simple systems can the equations of quantum mechanics be solved exactly. ‘Simple’ here means atoms with only one electron, while neglecting any effects associated with special relativity. However, a requirement for exact solutions with no approximations – really none at all – is a requirement born in mathematics and not physics. It may not be possible to solve the equations exactly, but with large enough computers they can be solved using approximation methods to any accuracy that is physically useful. The use of such approximation methods has become standard, even if big problems with big equations may require big calculations and big computers.
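
  As a small concrete example of such approximation methods, the hydrogen ground state energy can be found numerically by integrating the radial Schrödinger equation on a grid and adjusting the trial energy until the solution behaves correctly at large distance – the so-called shooting method. A sketch in Python (working in atomic units, where the exact answer is −0.5 hartree):

```python
def radial_wavefunction_end(E, r_max=25.0, h=0.005):
    """Integrate the l=0 hydrogen radial equation u''(r) = (-2/r - 2E) u(r)
    outward from r ~ 0 (atomic units) and return u at r_max."""
    # Numerov scheme for u'' + g(r) u = 0, here with g(r) = 2E + 2/r.
    def g(r):
        return 2.0 * E + 2.0 / r
    u_prev, u = h, 2.0 * h  # near the origin the solution behaves as u ~ r
    r = 2.0 * h
    while r < r_max:
        c_prev = 1.0 + h * h * g(r - h) / 12.0
        c_mid = 1.0 - 5.0 * h * h * g(r) / 12.0
        c_next = 1.0 + h * h * g(r + h) / 12.0
        u_prev, u = u, (2.0 * c_mid * u - c_prev * u_prev) / c_next
        r += h
    return u

# Bisect on the trial energy: below the true ground state the integrated
# solution diverges upward at large r; above it, the solution has crossed
# zero and diverges downward.
lo, hi = -0.6, -0.4
for _ in range(40):
    mid = 0.5 * (lo + hi)
    if radial_wavefunction_end(mid) > 0:
        lo = mid
    else:
        hi = mid

energy = 0.5 * (lo + hi)
print(round(energy, 3))  # close to the exact answer of -0.5 hartree
```

  Making the grid finer and the box larger improves the answer systematically – exactly the sense in which approximation methods can reach any accuracy that is physically useful.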

  None of this affects the underlying truth: chemistry and chemical reactions are driven by quantum mechanics. The chemical bonds that bind all molecules from simple diatomic combinations to the largest biological proteins are all products of quantum mechanics, and they arise from the solution of its equations.

  The physics of quantum mechanics enters not just the microscopic but also the macroscopic properties of matter. As will be discussed at greater length in chapter 8, the bulk properties of metals and materials are also described by quantum mechanics. The flow of electric current is set by whether a material is a conductor, an insulator or a semiconductor. Metals conduct, and ceramics insulate. The reasons for this originate in the quantum mechanics of electrons in matter. For example, metallic conductance arises from a sea of electrons that are free to transport electric charge throughout the material. Why is this sea of electrons present for metals but not for insulators? It is there because the equations of quantum mechanics say it has to be there.

  The discovery of quantum mechanics belongs to the early parts of the 20th century. It took a quarter of a century to go from nothing to the full equations of quantum mechanics. The first awakening came with the introduction in 1900 by Max Planck of his eponymous constant, through which he was able to solve a puzzle whereby the classical theory of light predicted that an oven would have infinite energy.3 Planck’s constant formed a key part of the 1905 proposal by Einstein that light energy came in individual packets of energy quanta – called photons. At first quantum mechanics existed as a mongrel theory, either quassical or clantum – one part classical, one part quantum. The need for a full theory of quantum mechanics was recognised, but the form it took was not known. The correct theory of quantum mechanics was suddenly formulated within a few years in the middle of the 1920s, and the names associated with this formulation – Paul Dirac, Werner Heisenberg, Wolfgang Pauli, Erwin Schrödinger – have entered the scientific pantheon. This was one of the truly great moments in science, where a clear and correct revolution took place over the course of a few years, brushing away results that went back centuries.

  All the protagonists except Schrödinger were in their twenties. The Germans called it Knabenphysik – boy physics. Wordsworth’s words about the French Revolution can be applied to this period with no dark overtones:

  Bliss was it in that dawn to be alive,

  But to be young was very heaven!

  3.4 THE WORLD REALLY, HONESTLY, TRULY IS DESCRIBED BY QUANTUM MECHANICS

  The next great insight is that the world really, honestly, truly is described by quantum mechanics. Quantum mechanics is sufficiently strange and sufficiently counterintuitive that this point merits repeating. Looking back, much of the history of 20th century physics consists of smart people trying, ultimately in vain, to deny the implications of quantum mechanics. Time and again, apparently paradoxical results have led to doubt that the formalism of quantum mechanics was correct, when the correct attitude was to spend less time doubting and more time thinking hard about how to understand the theory.

  A famous early example of this is the EPR paradox. E, P and R stand for Einstein – even Homer nods! – Podolsky and Rosen, and this gang of three put forward this paradox in 1935. The original formulation involved the spin of electrons, but we will make it more vivid. We consider a double Schrödinger cat experiment featuring Patches and Milky. Both Patches and Milky are shut up in separate boxes. The experiment involves a single atomic nucleus, which has a 50 per cent chance of undergoing radioactive decay within a given time period. The respective fate of the cats is determined by whether (or not) this decay occurs. If the nucleus has decayed, Milky gets cyanide and Patches fresh mousemeat. In the absence of the decay, it is Milky who feasts and Patches who perishes. After the fateful time, there are two possible states of the system. Either Milky is in cat heaven and Patches is alive, or Patches has passed on and Milky is eating. It is a fact that in quantum mechanics, we do not know the state of a system until we measure it. The system is therefore in a superposition of these two cases, and we cannot know which case is true until we look inside one of the boxes and perform a measurement. However, the fate of Milky and the fate of Patches are entwined and not separable. If we see that Milky is alive we know that Patches is dead, and vice-versa. Technically, such a quantum state is said to be entangled.
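
  The essential point – that a single measurement settles the fate of both cats at once – can be mimicked with a toy simulation (illustrative only; no quantum amplitudes are needed to display this particular feature):

```python
import random

# Toy model of the entangled two-cat state. The system is a superposition
# of exactly two branches: (Milky dead, Patches alive) and (Milky alive,
# Patches dead). One measurement selects a whole branch; there is no
# separate coin flip for the second cat, however far away its box is.
def measure_entangled_pair(rng):
    decayed = rng.random() < 0.5  # 50 per cent chance the nucleus decayed
    milky_alive = not decayed     # decay means cyanide for Milky
    patches_alive = decayed       # ... and mousemeat for Patches
    return milky_alive, patches_alive

rng = random.Random(0)
outcomes = [measure_entangled_pair(rng) for _ in range(10_000)]
# Every single trial is perfectly anticorrelated:
assert all(milky != patches for milky, patches in outcomes)
```

  Each cat individually is alive in about half the trials, yet knowing one outcome fixes the other with certainty – the signature of entanglement.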

  To set up the EPR paradox, we first wait until the fateful time. We then do not look inside, but instead load each of the boxes, both the one containing Milky and the one containing Patches, into separate spaceships. These spaceships are launched and travel several lightyears apart in opposite directions. At this point, a crew member on one spaceship opens Milky’s box and looks in. The experiment has been set up so that there are only two states that can possibly apply – either Milky is dead and Patches is alive, or Patches is dead and Milky is alive. The essence of the EPR paradox is that when the crew member looks at Milky, this automatically determines the fate of Patches – who by now is many light years distant. How, EPR asked, can a measurement here on Milky immediately affect the state of Patches there? This is in apparent contradiction to the basic postulate of relativity that nothing can travel faster than the speed of light. The attitude EPR took was that the paradox implied quantum mechanics was incomplete: the full theory must contain ‘hidden variables’ that we are unable to see.

  On this occasion, Einstein was on the wrong side of the argument. Quantum mechanics is indeed how the world works. A careful analysis shows that there is no conflict with relativity. Information is not sent faster than the speed of light, as the results of the measurement on Milky cannot be immediately communicated to the crew on Patches’ spaceship. It is only this that is the key requirement of relativity, and this is not challenged by the paradox.

  Furthermore, the ‘hidden variables’ theories that were thought to be necessary have now been chased into so many foxholes that they are almost entirely ruled out. During the 1960s the CERN physicist John Bell devised a series of tests to distinguish experimentally between the proposed hidden variable theories and the standard formulation of quantum mechanics. These tests were carried out experimentally by a group in Paris led by Alain Aspect throughout the 1970s and 1980s, finding in every case agreement with standard quantum mechanics.
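
  The flavour of these tests can be captured in a few lines. For an entangled singlet pair, quantum mechanics predicts a correlation E(a, b) = −cos(a − b) between spin measurements made at detector angles a and b. Any local hidden variable theory must satisfy |S| ≤ 2 for the CHSH combination computed below (after Clauser, Horne, Shimony and Holt, who sharpened Bell’s original inequality), whereas quantum mechanics at the standard optimal angles gives 2√2 ≈ 2.83 – and experiment agrees with quantum mechanics:

```python
import math

# Quantum correlation between spin measurements on a singlet pair,
# for detector angles a and b (the standard quantum mechanical result).
def correlation(a, b):
    return -math.cos(a - b)

# The two settings each experimenter can choose.
a1, a2 = 0.0, math.pi / 2
b1, b2 = math.pi / 4, 3 * math.pi / 4

# CHSH combination: bounded by 2 in any local hidden variable theory.
S = (correlation(a1, b1) - correlation(a1, b2)
     + correlation(a2, b1) + correlation(a2, b2))

print(abs(S))  # 2.828..., equal to 2*sqrt(2) and clearly above 2
```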

  A second reaction against quantum mechanics occurred almost contemporaneously with the EPR paradox. Once the quantum mechanics of individual particles was understood, it was natural to extend this to the quantum mechanics of the fields that exist throughout spacetime – in particular, the electromagnetic field. The equations of electromagnetism were written down and quantised so that they became quantum equations rather than classical equations. No sooner had this been done than a serious problem was encountered. Classical electrodynamics – the theory written down by James Clerk Maxwell in the 1860s – works. It describes electromagnetic effects well in almost all circumstances. Given it works so well, quantum corrections should be tiny in almost all circumstances. They were not. They were not even small, or even close to small. Every time the Nobel Prize-winning physicists of the 1930s attempted to calculate quantum effects in electromagnetism, they got, instead of a small number, infinity. The quantum ‘corrections’ entirely dominated the original classical result. Something had clearly gone badly wrong, and that something had arisen in the quantum mechanical calculation.

  Faced with these apparently meaningless results, a natural reaction was to discard the formalism. There appeared to be something sick with quantum mechanics applied to fields such as electromagnetism. Quantum mechanics was a truly weird theory, and there was still a residual suspicion of it. Perhaps quantum mechanics needed to be reformulated, or perhaps another theory altogether was necessary. Quantum mechanics itself had been so radical it did not seem unreasonable that another radical idea would be necessary again. The 1965 Nobel Laureate Julian Schwinger said of this period,

  The preoccupation of the majority of involved physicists was not with analysing and carefully applying the known relativistic theory of coupled electrons and electromagnetic fields but with changing it.

  Even Paul Dirac, one of the original founders of quantum mechanics, went as far as to propose modifying quantum mechanics through the introduction of negative probability, whatever that might be.

  Confusion existed for a long time before physicists eventually realised that they already had the correct equations and the correct formalism. The necessary priorities were first, to learn how to calculate correctly with the infinities, and second, to understand them.

  The string-and-sealing-wax approach to dealing with the infinities was separately developed in the 1940s by Richard Feynman, Julian Schwinger and Sin-Itiro Tomonaga. The three were very different. Feynman was lively, informal, charismatic and intuitive. He played the bongo drums and frequented a strip club. Schwinger was a former child prodigy with immense calculational prowess: it would be said of Schwinger that other physicists gave talks to show how to do a calculation, but Schwinger gave talks to show that only he could do it. His technical excellence extended to his lectures, which were virtuoso models of clarity and preparation, but as a person he was private and reserved. Tomonaga was from Japan, descended from the samurai class and born to a father who was a professor of western philosophy. He had been inspired to study physics after hearing Einstein lecture in Kyoto in the 1920s, and his Nobel Prize-winning research was done in the isolation of wartime Japan.

  Together, their achievement was to understand how to reduce the plethora of infinities to a set of finite answers. Their approach established the theory of quantum electrodynamics with a set of defined calculational rules, and they were subsequently jointly awarded the 1965 Nobel Prize. The methods used were not entirely pretty. These methods acquired and have not yet shed a slightly inglorious reputation for systematically adding and subtracting infinities so that the eventual answer was finite – ‘Just because something is infinite does not mean it is zero’. This can lead to mumblings among the uninitiated about whether quantum field theory really is deep science or just a black magic recipe from a witches’ cookbook.

  The systematic understanding was provided by Kenneth Wilson in the 1970s. However, first let me try and unpackage the older techniques a little bit. It is true that, whatever quantity in quantum electrodynamics you try to compute, you end up with infinity. However, there is a precise sense in which the sources of these infinities are the same each time. To illustrate this, imagine trying to compute three separate quantities, and getting first

  (1 + 2 + 3 + 4 + …) + 1/3,

  next

  (1 + 2 + 3 + 4 + …) + 1/2,

  and finally

  (1 + 2 + 3 + 4 + …) + 2/3.

  Each sum may not be meaningful by itself, but the differences certainly are meaningful. The difference of the first two sums is one sixth, and that of the second and third sums is also one sixth.

  The methodology is that of isolation and elimination. First, carefully identify the ‘same infinity’ that crops up each time, and systematically subtract it to get finite answers. While the absolute value of any single quantity cannot be determined, the difference of two quantities can be predicted. This approach works, and it underlies the spectacularly successful predictions of quantum electrodynamics.4 However, it does not really explain why this method works and it does not give a physical understanding of why the infinities appeared in the first place: at best, they are ugly, and at worst they cast suspicion on the whole approach.
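
  The logic can be mimicked with a toy regularisation in Python (the finite parts 1/3 and 1/2 are made up for illustration): cap the divergent sum at a cutoff N, so every quantity becomes finite but cutoff-dependent, and observe that the differences are independent of N:

```python
# Toy regularisation: replace the divergent sum 1 + 2 + 3 + ... by a
# finite sum up to a cutoff N. Each regulated quantity grows without
# bound as N grows, but quantities sharing the 'same infinity' have
# N-independent differences: subtracting the common divergence leaves
# a finite, meaningful answer.
def regulated(finite_part, N):
    return sum(range(1, N + 1)) + finite_part

for N in (10, 1_000, 100_000):
    A = regulated(1 / 3, N)
    B = regulated(1 / 2, N)
    assert abs((B - A) - 1 / 6) < 1e-4  # the difference is 1/6, whatever N is
```

  The choice of cutoff stands in for the ‘same infinity’; only differences survive it, which is why only differences of quantities are predicted.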

  As just mentioned, this understanding was supplied by Kenneth Wilson in the 1970s. In broad terms, Wilson’s contribution was to explain where the infinities really came from. They arose from assuming that a theory such as quantum electrodynamics can hold down to arbitrarily small distance scales. It is familiar in physics that equations have a finite realm of validity and break down outside it. For example, Newtonian mechanics fails both when objects move close to the speed of light and also at distances comparable to the size of atoms. Quantum electrodynamics might be correct down to distances of a nanometre, a picometre or a femtometre – but this does not mean it will work for all length scales, however small. In fact, we know that it should not work at all length scales. To take the most extreme example, at some point the quantum effects of the gravitational force will become important. While we might not know exactly what will happen there, it is a fair bet that our existing theories will be modified. The way to get infinite answers is by making the naive assumption that the existing theory holds entirely unmodified over the twenty-one powers of ten in length scale down to the quantum gravity scale – and then also all the way beyond that.5

  Wilson’s work also offered an additional technical insight (these following paragraphs can be freely skipped if desired). Even if this understanding removes the infinities by explaining why we expect some new physics to come in at some incredibly short length scale, it still appears to leave the puzzle of why measured quantities do not end up finite but enormous, far beyond the known reach of experiments. If we compute the mass of the electron in quantum field theory, we do get an infinite answer. However, in practice the measured mass-energy of the electron is smaller than the expected energy of quantum gravity effects by a factor of 10²¹. Even if the infinite corrections are capped at some high scale to give finite answers, why are these still not so large that they drag the electron mass far above its measured value?

  The answer is that the infinite corrections involve the logarithm of the high scale, multiplied by the electromagnetic fine structure constant – which is around 1/137. Even with the high scale set at the quantum gravity scale, the numerical size of the correction is small.6 Wilson was therefore both able to explain why the calculational infinities were not physical, and also why physics at low energies is only minimally sensitive to physics at high energies.
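
  The arithmetic is quick to check (taking 10²¹ for the ratio of scales and ignoring numerical factors of order one):

```python
import math

# The correction scales as alpha * log(high scale / low scale), not as
# the ratio of scales itself. With alpha ~ 1/137 and a ratio of 10^21,
# the logarithm tames twenty-one powers of ten down to a number of
# order one-third.
alpha = 1 / 137
scale_ratio = 1e21  # quantum gravity scale over electron mass-energy

log_correction = alpha * math.log(scale_ratio)
print(round(log_correction, 2))  # 0.35
```

  A modest multiplicative correction, rather than the factor of 10²¹ that a linear sensitivity to the high scale would have produced.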

  With or without Wilson’s understanding of the infinities, quantum electrodynamics was a triumph. It gave accurate predictions for quantum effects in electromagnetism – in some cases, accurate down to one part in ten billion. The moral to draw from this success should have been that quantum field theory worked, that it was the correct framework to describe particle interactions, and that more time needed to be spent understanding it.

  This is the moral that should have been drawn. In the modern telling, all the non-gravitational forces – the strong force, the weak force and the electromagnetic force – are indeed described by quantum field theories. That of the electromagnetic force is the simplest, but they are all just different examples of quantum field theories. In the 1960s, the view looked different. The weak interactions did not satisfy what was believed to be a basic tenet of quantum field theory. Technically, they were not renormalisable – a property that was believed to be essential to allow any description in terms of a quantum field theory. Theories that were not renormalisable appeared to have an infinite number of infinities in them, which would invalidate the technique of extracting the ‘same infinity’ and systematically subtracting it off each time.