12.3.11

Dark Matter Revisited

Using the Hubble telescope, it took NASA scientists eight years to get a measurement of the expanding universe. This spiral galaxy, NGC 4603, was the most distant galaxy helpful to the study. (Image: Jeffrey Newman/NASA)
Unfortunately for previous cosmological theories, the universe appears, reasonably convincingly, to be expanding at an accelerating rate.  This, all physicists know, is no small problem.  It is so severe that physicists now invoke "Dark Energy" to explain this accelerating expansion.
Here is the central problem: Throw a ball into the air. If you do so at less than the escape velocity from the earth's gravitational field, it will fly upward, trading kinetic energy for gravitational potential energy until the kinetic energy is exhausted at the top of its trajectory, then fall back to earth.  If the initial velocity of the ball is just equal to the escape velocity, the ball will "coast to infinity" ever more slowly, reaching infinity at zero velocity, in the absence of other matter in the universe to provide additional gravitational fields. If the initial velocity of the ball is still higher, it will fly from the earth and coast to infinity at an ever decreasing, but finite, velocity.
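Here is a minimal sketch of the ball analogy in code (a toy calculation that ignores the atmosphere and all other bodies); the three launch speeds play the roles of the three possible fates of the universe described next.

```python
# Toy sketch: Earth's escape velocity and the three fates of a thrown ball.
# Constants are standard values; the launch speeds are illustrative only.
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_earth = 5.972e24     # kg
R_earth = 6.371e6      # m

v_escape = math.sqrt(2 * G * M_earth / R_earth)   # ~11.2 km/s

def fate(v_launch):
    # exact float equality is used only because we pass v_escape itself below
    if v_launch < v_escape:
        return "falls back to Earth (like the recollapsing universe)"
    if v_launch == v_escape:
        return "coasts to infinity, arriving at zero speed (the critical case)"
    return "coasts to infinity at an ever-decreasing but finite speed"

print(f"escape velocity ~ {v_escape / 1e3:.1f} km/s")
for v in (8e3, v_escape, 15e3):
    print(f"launch at {v / 1e3:5.1f} km/s -> {fate(v)}")
```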
The Big Bang, well, Banged. To put the issue over-simply, vast energy arising from nothing became a quark-gluon soup, then matter and fields containing energy (and energy, by E = mc², is equivalent to mass) as the universe expanded. The mutual gravitational attraction of all the mass and energy in the universe is supposed to forever slow the expansion of the universe. Given General Relativity, the universe should do one of three things: expand and then collapse in a Big Crunch, like the ball falling back to earth; expand forever, reaching infinity at a zero rate of expansion, the critical state of affairs; or expand forever at an ever-slowing but finite rate.
The first and last cases correspond, by General Relativity, to a positively curved spacetime and a negatively curved spacetime, respectively. The critical case corresponds to a flat spacetime.  Cosmologists thought the universe was critical.
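For reference, these three fates correspond to the three cases of the standard Friedmann equation of General Relativity (written here without a Cosmological Constant), where a is the scale factor of the universe, ρ the mass-energy density, and k the spatial curvature (+1 closed, 0 flat, -1 open):

$$\left(\frac{\dot a}{a}\right)^{2} \;=\; \frac{8\pi G}{3}\,\rho \;-\; \frac{k c^{2}}{a^{2}}$$

A closed universe recollapses like the falling ball, the flat case is the critical one, and an open universe expands forever at a slowing but finite rate.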
What the universe should not be doing is expanding at an accelerating rate.
But it seems to be.
Einstein made a “mistake.”
He thought the universe was of constant size, but his equations did not predict this. So he "fudged" and added the simplest correction term to his equations, the "Cosmological Constant."  Using it, Einstein could achieve a universe of constant size. Hubble soon discovered that galaxies recede from one another at speeds proportional to their distance apart, and concluded the universe was expanding.  Einstein admitted the Cosmological Constant was the worst mistake of his scientific life.
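What Hubble found is usually summarized as a simple proportionality between recession velocity and distance,

$$v = H_{0}\, d$$

where H₀, the Hubble constant, is measured today to be roughly 70 kilometers per second per megaparsec.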
But the Cosmological Constant has re-emerged as a starting point to account for Dark Energy. With it, one can achieve an accelerating expansion of the universe.  The Cosmological Constant is as if there were a "repulsive force" in space, pushing any two regions of space further apart, and the more so as their distance apart increases.
Einstein realized that his Cosmological Constant implied an increasing energy in the expanding universe, such that the vacuum energy per unit volume remained constant. This flies in the face of the Big Bang belief that all the energy of the universe came to exist in that Bang.  Well, OK, so it does.
The problem is that there is, in a deep sense, no physics behind the Cosmological Constant; it is just a constant added in the only place Einstein could add it in his equations while keeping the rest of the physics of General Relativity unmodified.
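For the record, here is the single place the constant enters: Einstein's field equations, with the one extra term Λg_{μν} added on the left-hand side,

$$R_{\mu\nu} - \tfrac{1}{2} R\, g_{\mu\nu} + \Lambda g_{\mu\nu} = \frac{8\pi G}{c^{4}}\, T_{\mu\nu}.$$

Everything else in General Relativity is left untouched; Λ is simply a new constant of nature to be measured.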
Thus, the deep question is: What conceivable physical process(es) could constitute the Cosmological Constant?
I am going to propose a related set of concepts that might do the job:
  1. Space is NOT CONTINUOUS. Space, on the smallest length scale, the Planck length of roughly 10^-33 centimeters, comes in discrete units. These units cannot interpenetrate; otherwise, once again, space would be continuous. An approach to quantum gravity, Loop Quantum Gravity, kindly taught to me by Lee Smolin, a founder of that field, proposes that on the Planck scale, space is comprised of tetrahedra.
  2. These units of space also cannot expand. But this implies that space itself cannot expand!  The only way “space” can expand is if more units of space, the tetrahedra, come into existence.  In Loop Quantum Gravity this is allowed: tetrahedra can build new tetrahedra on any of their four faces in what are called Pachner moves.  So for space to “expand” it must build itself by “cloning” itself.
  3. Tetrahedra, or units of space, have a constant "rate," or probability per unit time, of cloning themselves.  This postulate implies that space builds itself and expands EVER FASTER. Indeed, like a bacterial colony freshly plated, space expands exponentially.  We are stepping towards Dark Energy and a Cosmological Constant which yields an exponentially expanding space.  Before I go further, note that the more space now exists, the more tetrahedra there are, so any two regions of space are being "pushed apart" by the cloning of tetrahedra between them.
  4. Every time a tetrahedron of space is created, new energy enters that tetrahedron from nowhere, so the vacuum energy density is constant as space expands at an accelerating rate.  This accords with Einstein's realization that the Cosmological Constant could imply a constant energy per unit volume of space as space expanded: Dark Energy, we now say. I note that it makes us uncomfortable to imagine energy arising from "nothing," but we accept this magic at the Big Bang. It is no more magical here.  Some energy per unit of space is needed for point 5 below. Also, physicists say that once there is matter, energy, expansion, and gravitation, energy is not conserved.
  5. Newborn tetrahedra are quantum objects. More, by Pachner moves, quantum geometry can be in a superposition of more than one "graph structure" connecting the nodes of the tetrahedra in a diversity of ways. But because they contain energy, they contain mass.  At some scale and volume of cloning tetrahedra, considered as a "system" within a still larger spatial volume considered as the "environment" of that "system," the mass in the space of the "system" is sufficient that some number of quantum tetrahedra DECOHERE into CLASSICAL SPACE.
  6. Now two alternative postulates: 6A, even classical space can give birth to new tetrahedra; 6B, decoherence can occasionally be reversed, and newly quantum tetrahedra can clone themselves and give birth to new tetrahedra.  These alternative postulates matter, of course, but I will not favor either.

Taking the above non-mathematized schema of a physical theory of Dark Energy as the cloning of space by space, the first thing to do is estimate the rate of cloning of space needed to account for the observed accelerating expansion of the universe.  This is a relatively straightforward job.
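Here is a minimal back-of-the-envelope sketch of that estimate, under the assumption that dark energy drives pure exponential, de Sitter-like expansion at the currently measured rate. The Hubble constant and dark energy fraction below are the standard observed values, used only for illustration.

```python
# Sketch, assuming de Sitter expansion: if every discrete unit of space clones at a
# constant rate r, then dN/dt = r*N and volume grows as exp(r*t).  A cosmological
# constant makes volume grow as exp(3*H_Lambda*t), so the required rate is r ~ 3*H_Lambda.
import math

H0 = 70e3 / 3.086e22          # Hubble constant, ~70 km/s/Mpc converted to 1/s
Omega_Lambda = 0.7            # approximate observed dark energy fraction

H_Lambda = H0 * math.sqrt(Omega_Lambda)   # asymptotic de Sitter expansion rate
r_clone = 3 * H_Lambda                    # required cloning rate per unit of space

print(f"H_Lambda ~ {H_Lambda:.2e} per second")
print(f"required cloning rate r ~ {r_clone:.2e} per second per unit of space")
print(f"i.e. each unit of space clones about once every {1 / r_clone / 3.15e7:.1e} years")
```

On these numbers, each unit of space would need to clone itself only about once every several billion years to drive the observed acceleration.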
The Real Line is a Real Mess. Best if space is discrete on some small scale.
There is a separate reason to consider space on some smallest scale to be discrete: the real line is a real mess.  Recall that the real line, a human invention, is a continuous line, for example from 0.0 to 1.0 on some scale like a meter stick.  The real line contains an infinity of rational numbers, numbers that can be expressed as the ratio of two whole numbers, such as 34/6501.  But as Pythagoras proved, there are numbers that cannot be expressed as such ratios. These are the irrational numbers, and their decimal expansions are INFINITELY LONG, never-repeating sequences of the digits 0,1,2,3,4,5,6,7,8,9 after the decimal point.  How many irrational numbers are there?  Well, between any two rational numbers, no matter how close, there are an infinite number of irrational numbers.  The rational numbers are of the same order of infinity as the integers; the irrationals are of a higher, second order of infinity.
Here is the problem. Mathematics is largely founded on set theory.  But suppose I want to form two sets of irrational numbers, each with specific irrational numbers in it.  How can I do so?  Each irrational number is an infinite sequence of the ten digits above. How can I pick a "specific" irrational number?
One answer would seem easy: have a computer algorithm to compute each such number, like Pi, or the square root of 2, both of which are irrational, and then just form the sets.
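As a minimal sketch of what such an algorithm looks like, here is a short program that produces as many digits of the square root of 2 as one asks for: a finite program standing in for an infinite, never-repeating decimal expansion.

```python
# Sketch of an "effectively computable" irrational number: digits of sqrt(2) on demand.
from math import isqrt

def sqrt2_digits(n):
    """Return the first n decimal digits of sqrt(2) after the decimal point."""
    # isqrt(2 * 10**(2*n)) is the integer part of sqrt(2) * 10**n
    scaled = isqrt(2 * 10 ** (2 * n))
    return str(scaled)[1:]          # drop the leading "1" before the decimal point

print("sqrt(2) = 1." + sqrt2_digits(30))   # 1.414213562373095048801688724209...
```

Pi can be generated in the same spirit by a somewhat longer program; the point is only that a finite recipe can name an infinite expansion.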
Unfortunately, Alan Turing, of the Turing Machine that is the basis of all digital classical and quantum computers today, proved that almost no irrational numbers are "effectively computable" by a Universal Turing Machine.  So we cannot use this idea for our two sets.
The mathematicians, wishing to preserve set theory, postulated an Axiom of Choice asserting, well what the heck, that one can just go ahead and form the two sets.
Axioms are fine, but Turing tells us there is no effectively computable way to achieve the formation of these two sets for arbitrary choices of irrational numbers.
Why should we care?  Because, on the one hand, classical physics is based on the continuous line, rational and irrational numbers included. But then, classical physics works at length scales far above the Planck length scale of about 10^-33 centimeters.
We don’t need a continuum at that scale for classical physics.
On the other hand, we should remember that it is we who invented the idea of the real line and have learned to love it; that does not mean that the real line is the right description of reality.  And it is filled with problems. As I said, the real line is a mess.
If we give up the real line, then space must have a smallest length scale, and the Planck length is the obvious scale.  Then we get to Loop Quantum Gravity's tetrahedra, or some other minimal discrete units of space, and avoid the continuous line. With the postulates that these minimal units of space cannot interpenetrate, cannot expand, and can clone themselves, we have the start of an exponentially expanding space.  If we add energy per new unit of space and decoherence of quantum units of space, we get classical space and a constant vacuum energy.  And if, per 6A or 6B, space tetrahedra can continue to clone themselves at some rate per unit volume of space, we have Dark Energy.
There is as yet no matter or other energy in this space, of course, to add its mutual gravitational attraction to that of the vacuum energy, as described by General Relativity. But given CLASSICAL SPACE, achieved by the decoherence of quantum regions of space considered as "systems" within larger regions of quantum space considered as "environments," General Relativity seems safe.
This could conceivably constitute a theory, or part of a theory, of quantum gravity. The ideas could just possibly unite General Relativity and Quantum Mechanics.  It seems worth thinking about.

Discrete Space, General Relativity, Dark Matter And The Casimir Effect

In an earlier blog, Marcelo discussed Dark Matter and stated that, with the exception of hopes for exotic particles called WIMPs (Weakly Interacting Massive Particles), which have not been found, we have no idea what Dark Matter might be.
Fools rush in where physicists, no doubt wisely, do not tread.  That said, I wish to propose concepts that may prove helpful with the issue of Dark Matter, relate to the hypothesis of discrete space on the Planck length scale that I discussed two blogs ago, seem to have potential relevance to General Relativity and perhaps quantum gravity, and appear to be testable now using the Casimir effect.  I shall predict that the strength of the Casimir effect is weaker in a local gravitational field exactly in the proportion that General Relativity shows that space-time curves locally in a gravitational field.
While the ideas are novel, and of course I am not a physicist, they have the virtue of being simply testable.
What is Dark Matter?  If one observes the rotation of galaxies or clusters of galaxies, the outer margins are rotating too rapidly to be accounted for by the visible matter under either Newtonian or Einsteinian gravity.  There is essentially no doubt about this excess velocity.  To explain it, physicists have for years postulated unknown Dark Matter in the outskirts of a galaxy or cluster of galaxies, in a "squashed sphere" hovering about the galaxy or cluster.
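To make the discrepancy concrete, here is a small sketch with illustrative, Milky-Way-like numbers; the visible mass and the observed outer speed below are rough, representative values, not data.

```python
# Sketch of the rotation curve discrepancy: the speed expected from the visible mass
# alone falls off roughly as 1/sqrt(r) at large radii, while observed curves stay flat.
import math

G = 6.674e-11                      # m^3 kg^-1 s^-2
M_visible = 1.0e41                 # ~5e10 solar masses of visible matter (illustrative)
kpc = 3.086e19                     # meters per kiloparsec

def v_keplerian(r):
    """Circular speed if essentially all the visible mass lies inside radius r."""
    return math.sqrt(G * M_visible / r)

v_observed_flat = 220e3            # a typical observed outer rotation speed, m/s (illustrative)

for r_kpc in (5, 10, 20, 40):
    r = r_kpc * kpc
    print(f"r = {r_kpc:>2} kpc: expected ~{v_keplerian(r) / 1e3:5.0f} km/s, "
          f"observed ~{v_observed_flat / 1e3:.0f} km/s")
```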
My attempt to offer a new line of thought about Dark Matter begins with the postulates of my blog on Discrete Space and Dark Energy, two blogs ago.  There, borrowing the general idea from Loop Quantum Gravity, I proposed that on the Planck length scale of about 10^-33 centimeters, space is discrete.  Again borrowing from, but not limited to, Loop Quantum Gravity, I supposed that these discrete units of space are tetrahedra that cannot interpenetrate, or else space is again continuous. And I supposed that tetrahedra cannot expand. The latter implies that space must "clone" itself, like bacteria, giving rise to an exponential growth of space, hence the start of a theory of Dark Energy.  I further proposed that when a new tetrahedron is "born," new energy comes with the tetrahedron "from nowhere," thus keeping the energy density of space constant.
You may cavil at energy entering space units "from nowhere," but we accept energy from nowhere in the Big Bang, with no explanation, so one magic is no worse than the other. In addition, my physicist friends say that with an expanding universe, matter, energy, and gravity, energy is not conserved for the universe as a whole in any case.
Finally, I proposed that space units are quantum degrees of freedom, with alternative topologies connecting the nodes of the tetrahedra. Because this space has energy, it has mass, and thus will decohere at some spatial scale, considering a volume of space a "system" and the surrounding space as an "environment."  Then quantum space becomes classical because quantum space decoheres.
Now, on to considering a testable hypothesis for Dark Matter.
I begin with General Relativity and the deformation of space in the vicinity of a mass, M.  General Relativity speaks in terms of a space "metric."  In the absence of masses, space is thought to be flat.  One uses two coordinates, dR and dL, where dR is the flat-space metric and dL is the curved-space metric.
In General Relativity, one can define dL in relation to dR as a massive object, M, is approached from a distance.  What occurs is that space is curved into a funnel shape, ever steeper as the center of mass of M is approached.
In terms of dR and dL, as the center of mass of M is approached, one can plot dL/dR for each small change in either coordinate.  One obtains the funnel described above: far from M, small changes in L, dL, are approximately equal to small changes in R, dR; that is, space is locally almost flat.  As the center of mass of M is approached, small changes in R, dR, are associated with ever larger changes in L, dL, yielding the funnel.  In short, General Relativity states that space is stretched, ever more strongly, as the center of mass of M is approached.
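For concreteness, here is a small numerical sketch of this stretching using the standard Schwarzschild geometry outside a spherical mass, which is the textbook case the dR and dL above appear to describe; there, the proper radial distance obeys dL = dR / sqrt(1 - 2GM/(r c^2)).

```python
# Sketch of the "stretching" dL/dR outside a spherical mass M, using the standard
# Schwarzschild form of General Relativity: dL = dR / sqrt(1 - r_s / r), r_s = 2GM/c^2.
G = 6.674e-11          # m^3 kg^-1 s^-2
c = 2.998e8            # m/s

def stretch_factor(M_kg, r_m):
    """Ratio dL/dR of proper radial distance to coordinate distance at radius r."""
    r_s = 2 * G * M_kg / c**2
    return 1.0 / (1.0 - r_s / r_m) ** 0.5

# Example: the Sun (M ~ 2e30 kg, approximate).  Far away the factor is ~1 (flat space);
# at the solar surface (r ~ 7e8 m) space is stretched by only ~2 parts per million;
# the factor grows without bound as r approaches the Schwarzschild radius r_s ~ 3 km.
for r in (1.5e11, 7.0e8, 1.0e4):
    print(f"r = {r:.1e} m  ->  dL/dR = {stretch_factor(2.0e30, r):.8f}")
```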
How can General Relativity be brought into accord with the hypothesis that space comes in Planck-scale discrete units that can neither interpenetrate nor stretch, i.e., expand?
One possibility seems a bad idea: if space cannot expand, perhaps the stretching of space in the vicinity of a mass, M, requires the generation of more tetrahedra, proportional to the stretching.
This seems a terrible idea, for there is no physical basis then for the “stretching” of space near M.
I turn then to the alternative simple hypothesis. But it has many consequences.
Marcelo pointed out, with respect to my blog on Discrete Space and Dark Energy (which yielded a Cosmological Constant in the rate at which space clones itself and so expands exponentially), that the Cosmological Constant is rich in physical interpretation. Most critically, the Cosmological Constant is identified with the 0 point quantum fluctuations of quantum field theories.
Then I propose a very simple postulate:  The local curvature of space is identical with local changes in the 0 point energy of the vacuum, hence with local changes in the Cosmological Constant.  In particular, the local stretching of space demanded by General Relativity in the vicinity of a mass, M, is exactly equal to a local proportional decrease in the 0 point energy of the vacuum, hence of the Cosmological Constant.  This hypothesis proposes a possible underlying quantum “mechanism” for the curvature of classical space in General Relativity.
Why can this hypothesis explain Dark Matter?  To my knowledge, vacuum energy is not accounted for in the excess rotational velocity of the outer margins of galaxies or clusters of galaxies.
But if the hypothesis I propose is correct, then as the center of mass of a galaxy or cluster of galaxies is approached, the vacuum energy density, i.e., the 0 point quantum energy, i.e., the Cosmological Constant, decreases in proportion to the "stretching" of space given by the metric of General Relativity.  Thus there is less energy per unit of space where space is stretched, near the center of mass, than at the outskirts, where space is nearly flat.  Then, by E = mc², there is less mass due to vacuum energy near the center of the galaxy and more vacuum-energy mass in the outskirts of the galaxy or cluster of galaxies.  This is my proposed explanation of the dark matter postulated in the peripheries of galaxies or clusters of galaxies.
One can consider a further idea:  If the increased mass on the periphery of the galaxy or cluster of galaxies must itself be in an orbit to account for the increased rotational velocity, one can think of wave packets of energy (hence mass) flowing in orbit around the galaxy or cluster of galaxies.
One can consider a second idea: Relax the assumption that the volume of a discrete unit of space cannot expand. Then postulate that the expansion of a unit of space is tied to its 0 point energy: as space curves and the 0 point energy per unit of space falls, units of space expand proportionally.
The Casimir Effect
The hypothesis I put forth can, I believe, be tested by the Casimir Effect, which should be weaker in a strong gravitational field than far from such a field.  More, the hypothesis of discrete space on the Planck scale may help with an embarrassing infinity in the mathematical formulation of the Casimir effect, which is "solved" by a questionable "renormalization."
The Casimir Effect was predicted, and later confirmed, in a setting in which two large conducting metal plates are placed very close to one another in a vacuum; a close distance is a micrometer or less.  Casimir realized that for a quantized field, as is common throughout quantum field theory, one could think of the field as a set of quantum oscillators, one at each point in space.  These oscillators can harbor waves, but the waves must match the boundary conditions imposed by the two metal plates.
The predicted attractive force between the plates, the Casimir Effect, has been measured both with two parallel plates and with a plate and a large sphere whose surface is brought very near the plate.  It is a well confirmed effect.
But there is a critical problem.  If space is continuous, then any possible wavelength meeting the boundary conditions, from a micrometer long to arbitrarily short wavelengths, can fit into the vacuum between the plates. The total energy of the Casimir effect should be the sum of the energies of these wavelengths. Each wavelength has an energy which is inversely proportional to its wavelength.  The sum is, unfortunately, infinite because wavelengths can become ever shorter and of ever higher energy.
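Here is a small sketch of both points: a simplified, one-dimensional toy version of the zero-point mode sum, which grows without bound as ever shorter wavelengths are admitted, followed by the standard renormalized expression for the attractive pressure between ideal parallel plates.

```python
# Part (1): toy 1D zero-point sum between plates a distance d apart.  Allowed
# wavelengths are 2d/n, each mode contributes (1/2)*hbar*omega_n, and the total grows
# without bound as shorter wavelengths (larger n) are admitted.
# Part (2): the standard renormalized result for ideal parallel plates in 3D,
# P = pi^2 * hbar * c / (240 * d^4), about 1.3 millipascal at d = 1 micrometer.
import math

hbar = 1.0546e-34      # J*s
c = 2.998e8            # m/s
d = 1e-6               # plate separation: 1 micrometer

def naive_zero_point_sum(n_modes):
    """Toy 1D sum of zero-point energies (1/2)*hbar*(n*pi*c/d) for n = 1..n_modes."""
    return sum(0.5 * hbar * (n * math.pi * c / d) for n in range(1, n_modes + 1))

for n_modes in (10**3, 10**5, 10**7):      # shorter and shorter wavelength cutoffs
    print(f"modes up to n = {n_modes:>8}: toy zero-point sum = {naive_zero_point_sum(n_modes):.2e} J")

# The measured, renormalized attractive pressure between ideal plates:
pressure = math.pi**2 * hbar * c / (240 * d**4)
print(f"Casimir pressure at d = 1 micrometer: {pressure:.2e} Pa")
```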
In standard quantum field theory, this is “handled” with a “renormalization” that many physicists are not comfortable with.
Now consider the hypothesis that space is discrete on the Planck length scale.  Then there is a natural cutoff of wavelengths at this length scale: no shorter wavelengths can occur, and the sum of energies is FINITE.   This is, of course, an argument for discrete space, but not a conclusive one.  The calculations have been done, reported for a general audience, I believe, in Leonard Susskind's "The Cosmic Landscape."  If I remember correctly, even with a cutoff at the Planck length scale, the energy is still much too high.
The hypothesis for Dark Matter which I suggest above has an experimental consequence, even without detailed calculations:  The Casimir Effect should be weaker in a strong gravitational field than in a weaker one.  This is rather simply testable by measuring the Casimir effect on the surface of the Earth and in space at different distances from the Earth, including the point of balance between the lunar and terrestrial gravitational fields.
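For a rough sense of scale, and assuming (this is only an assumption of the sketch, not a worked-out prediction) that the predicted weakening is proportional to the Schwarzschild stretching factor used in the earlier sketch, the stretching at the Earth's surface relative to distant, essentially flat space works out to roughly seven parts in ten billion.

```python
# Rough sizing of the predicted effect, assuming the relevant quantity is the
# Schwarzschild stretching factor dL/dR itself (an assumption of this sketch).
G, c = 6.674e-11, 2.998e8
M_earth, R_earth = 5.972e24, 6.371e6          # kg, m

def stretch_factor(M, r):
    r_s = 2 * G * M / c**2
    return 1.0 / (1.0 - r_s / r) ** 0.5

surface = stretch_factor(M_earth, R_earth)
print(f"dL/dR at Earth's surface : {surface:.12f}")
print(f"fractional stretching    : {surface - 1:.2e}")   # of order 1e-9
```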
I close with the obvious caveats: I am not a physicist. Physics is a richly interwoven skein of consistent theory, and these are crude steps.  I am much more likely to be entirely wrong than even partially right. For example, consider the central hypothesis, which relates the local curvature of space to a corresponding decrease in the 0 point energy of a quantum field. This seems radical and may of course be entirely wrong.  I do not know the ways in which the 0 point energy of a quantum field is directly testable locally by experiment. But the Casimir effect would seem to measure the 0 point energy of a local volume of vacuum, so it seems a good test of this hypothesis.
The theory I propose in outline seems to contain a conversion of space to matter and energy, for it relates the local curvature of space to a local loss of 0 point energy, hence mass. Thus when space is "stretched" by a nearby mass, M, the energy per volume of space decreases.   Then space, matter, and energy are all interconvertible. Since on this proposal a stretching of space is related to a proportional decrease in energy per unit of space, it would be wonderful if "space, matter and energy" were conserved together. I confess I do like this possibility a lot.
