4/8/2002, added 11/21/02

Critique of the META Model of the Universe*

 
Roger A. Rydin, rarydin@earthlink.net
Associate Professor Emeritus of Nuclear Engineering
University of Virginia, Charlottesville, VA 22901

 

Introduction

In reading Van Flandern's book [1] Dark Matter, Missing Planets & New Comets, which is aptly subtitled, "Paradoxes Resolved and Origins Illuminated", it is possible to agree with many of the author's arguments, and yet be unsatisfied with the thrust of many others. It is only fair to examine these points of agreement and disagreement in some depth, if for no other reason than to set a healthy dialog in motion. The present discussion will be limited to the material presented in the first five chapters of the book that describe the META Model of the universe.

If the reader expects to find that the META Model is a phenomenological description of how the universe started, how it evolved, and what is going to happen to it, he will be sorely disappointed. As I did, the reader will reach the end of Chapter 5 and its last sentence, "This completes the exposition of the META Model", without the slightest idea of how the various arguments presented in those chapters fit together to describe the universe! In fact, he will have to jump to the very end of Chapter 22, to a section denoted "Note added in proof", to find out that the essence of the META Model is that the "universe is infinite in both space and time, and is not expanding at all"! In other words, it has always been the way it is, and it will continue to be that way in the future. Since the META Model says that the universe is constant, the expansion of the universe must apparently be an illusion!

Just prior to this startling conclusion is the statement, "If the field of astronomy were not presently over-invested in the expanding universe paradigm, it is clear that modern observations would now compel us to adopt a static universe model as the basis of any sound cosmological theory". The seven tests that are used in the book only compare Friedmann uniform expansion models to static models, and do not include any other alternatives, so that the comparison is incomplete. What if Einstein's General Theory of Relativity has nothing at all to do with the evolution of the universe?

On the subject of what might cause the redshift if it is not due to the expansion velocity of the universe, the author states that he favors the explanation that the particles or waves serving as the carriers of gravity, dubbed "gravitons", would cause an apparent redshift by inelastic scattering interactions with light passing large distances through the universe. This is hardly a proof of the validity of the META Model, nor a ringing endorsement of how it works. It is also at odds with Arp's [2] explanation of redshift using Narlikar's theory, which lets mass grow as a function of time. In the absence of such a META proof, we must examine the individual concepts that make up the META Model.

Methodology

Let us begin with the author's own words in the preface to the book. "One procedure I have learned to favor is to adopt a starting point and reason deductively; that is, from cause to effect. The advantages of this are easy to understand: inductive reasoning (from the effect, usually an observation or experimental data, back to its cause) does not, in general, lead to unique answers, while deductive reasoning, if valid, generally does. Obviously a model, which allows us to deduce experimental results that were not used in formulating the model, is intrinsically more reliable than one that merely explains the results after the fact. And deductions made from an incorrect starting point do not usually resemble the experimental data or reality. So only deductions made from a correct starting point might be expected to lead to models which add true insight into phenomena, agree with observations, and make successful predictions."

It is very appealing to think of deductive reasoning as being pure and compelling, while inductive reasoning is ad hoc and error prone. Deductive reasoning should lead to clean answers, while inductive reasoning is a patchwork process. Nonetheless, the positive feature of inductive reasoning is the tendency to keep revisiting the problem to see if anything has been forgotten or erroneously added. The basic flaw in deductive reasoning is thinking that the process is so pure that, once an answer has been obtained, there is no need to inquire further. As a matter of fact, Sir Isaac Newton used a combination of inductive and deductive reasoning to make his amazing advances in expressing the Laws of Motion.

Those who do inductive reasoning are worriers. Each new piece of data or idea has to fit and make sense. This can lead to a patchwork answer, but to open-minded researchers it can also lead to alternate explanations that do a better job of explaining the data. This is a living process that is forced to confront new findings. In this regard, the Hubble telescope and recent computer-controlled telescopic searches of the heavens are providing new data faster than the conventional theorists can cope with the implications!

Those who do deductive reasoning are in danger of complacency. Again, paraphrasing the quote in the book, "only deductions made from a correct starting point, using valid reasoning, might be expected to lead to models which add true insight into phenomena." How does one know when one is at the correct starting point? What is the criterion that determines valid reasoning? When, if ever, is the answer revisited? The author's explanation of these three points is not very satisfying to this particular reader.

In any event, let's examine the META Model from the author's viewpoint.

Critique

Infinite Universe

In Chapter 1, a very abstract argument is made that the universe must logically be infinite in extent and time. The modeling begins as simply as possible, starting from a single particle, and moving to multi-particles, solving Zeno's paradox in the process. It results in five dimensions, where the fifth dimension is one of scale, whatever that means! Scale really has to be a parameter rather than a dimension.

Regardless of the validity of this derivation as it pertains to the real universe, the following implications are deduced:

The entire universe must indeed be infinite in space and time, but this conclusion can be based on philosophical and physical reasons! The philosophical argument says that if there are billions and billions of stars and galaxies, why shouldn't there also be billions of universes? The physical argument is that our universe cannot really be surrounded by nothingness, because that would doom it to always lose mass and radiation at the outer edge so that it could never repeat a collapse and another expansion. The beginning would remain a mystery.

My own concept follows the work of Velan [3], and contains multiple universes in an equilibrium repeating cycle of birth and death, where adjacent spherical universes are nestled against one another like springy rubber balls. Of course, if there were multi-universes, they would have to be of quite similar sizes in order to preserve the spectacular balance we see in ours. And they would have to be very far separated indeed, since we don't know when ours will get to its turn-around point. Even the turn-around points could temporarily invade an adjacent space. The key argument is that if this doesn't describe the true picture, then losses of photons, neutrinos and gravitons at the effective outer surface would lead to a dissipative and irreversible process. The META Model does not even consider such a possibility.

Gravity

In Chapter 2, Van Flandern shows that the effect of gravitational attraction moves very much faster than the speed of light, as evidenced by the motion of the sun and planets that respond to the actual instantaneous positions of each other rather than the apparent light-delayed positions. He is correct on this point.
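To see the size of the effect at stake, here is a back-of-the-envelope sketch in Python of my own, using standard approximate values rather than anything from the book: if the Sun's pull on the Earth lagged by the light travel time, its direction would miss the Sun's instantaneous position by roughly the ratio of the orbital speed to the speed of light.

    # Back-of-the-envelope check of the light-delay argument (my own numbers,
    # standard approximate values, not figures taken from the book).
    import math

    c = 2.998e8        # speed of light, m/s
    au = 1.496e11      # mean Earth-Sun distance, m
    v_earth = 2.98e4   # Earth's orbital speed, m/s

    light_delay = au / c                 # about 500 s for light to cross 1 AU
    aberration_rad = v_earth / c         # angle the Earth moves, as seen from the Sun, in that time
    aberration_arcsec = math.degrees(aberration_rad) * 3600.0

    print(f"light travel time Sun to Earth: {light_delay:.0f} s")
    print(f"aberration angle if gravity lagged at speed c: {aberration_arcsec:.1f} arcsec")
    # Roughly 20 arcsec. A tangential force component of that size would make
    # the planetary orbits spiral outward measurably, which is not observed.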

However, no one has yet succeeded in explaining the process behind instantaneous gravitational attraction. It acts as if there were an effective gravitational potential field that instantaneously knows the positions of all masses at all times, even when they are moving or located a universe apart. It may operate by a bootstrap effect, resembling a kind of whispering gallery in which each mass continuously tells its neighbors what it knows in terms of positions and higher derivatives, giving the effect of a faster-than-light analytic continuation of space. Since these are not the properties of a scalar potential field, can this imply that gravitation follows a vector field? Could a vector field explain the bending of space-time? Is this an argument for the existence of an ether that expresses the gravitational field?

The book makes a pretty good case for needing a tremendously high background flux of something related to gravitons to account for gravitational attraction. In the words of the author, "The properties of Newton's Law of Universal Gravitation imply that 'action at a distance' in its purest form, with no agents passing between the acting body and the affected one, must logically be impossible. But if such agents exist, they must propagate and interact in order to have an effect. Gravity would then be an inverse square force, because whatever propagates spreads out in two dimensions while moving in a third."

It continues, "If the action of the agents results from collisions, then their source must be external to the acting body; because if they came from inside, the force of their collisions would be repulsive. For gravity to be a 'universal' force, the universe must be filled with a flux of such agents. To affect every atom of matter in a body, the agents must be small enough to be able to pass through ordinary matter with ease. And the mean distance between mutual collisions of agents with each other must be large compared to the distances separating the acting and affected bodies, or the flux would behave like a 'perfect gas' and produce no force between separate bodies. But if these conditions are met, bodies would 'shadow' one another from some of this universal flux, resulting in a net force toward each other, which would behave exactly like Newton's gravitation."
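The geometrical core of this quoted argument is easy to check numerically. The short Python sketch below is my own illustration of the shadowing geometry, not anything taken from the book, and the helper function is purely illustrative: the fraction of an isotropic flux at one body that is blocked by a second body falls off as the square of their separation, which is exactly the inverse-square behavior claimed.

    # Check that mutual "shadowing" in an isotropic flux of agents yields an
    # inverse-square dependence (my own sketch of the geometry described above).
    import math

    def shadow_fraction(sigma, r):
        """Fraction of the isotropic flux arriving at one body that is blocked
        by a second body of effective cross-section sigma at distance r
        (small-angle limit, sigma much smaller than 4*pi*r**2)."""
        return sigma / (4.0 * math.pi * r**2)

    sigma = 1.0                        # arbitrary blocking cross-section
    for r in (1.0, 2.0, 4.0):
        print(f"r = {r}: shadowed fraction = {shadow_fraction(sigma, r):.4f}")
    # Doubling the separation cuts the shadowed fraction, and hence the net push
    # of the two bodies toward each other, by a factor of four.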

Now comes an apparent assumption that the agents themselves must move faster than the speed of light in order to make gravitational attraction instantaneous. The author states, "So the picture of gravity we have arrived at here demands a universe filled with gravitational agents moving at velocities much faster than light, in order to explain the nearly instantaneous action of gravity on the local scale." To distinguish them from classical bound graviton particles, the author calls the flux agents that give rise to gravitation "C-gravitons" (CGs), and names the largest entities through which CGs cannot pass "matter ingredients" (MIs). He implies, but does not directly state, that the active ingredients of the MIs are, in fact, gravitons.

In a totally different context, namely during the process of positron-negatron pair production and annihilation, it can be argued that, for pair production, gravitational properties must be created for each of the electrons by abstracting entities from the continuum to create bound gravitons. For positron annihilation, gravitational properties must be destroyed by returning these same entities back to the continuum. Hence, since the same two entities that produce the gravitational force are involved in the creation and destruction of that force, they must be related by being dissociation products.

In this paper, the CGs will henceforth be called graviphotons. They will be assigned the theoretical wavelength of the graviton, which is about 800 million light years, but they will be assumed to propagate at the speed of light. Hence, just like the extremely high neutrino flux that is known to pass through us continuously almost undetected, we agree with the author that there exists an even higher flux of graviphotons that also passes through us and accounts for the effects of gravity.
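To put numbers on this assignment (my own unit conversions; the 800-million-light-year wavelength is the figure adopted above, and the solar neutrino flux quoted in the comment is the usual textbook value, roughly 6 x 10^10 per square centimeter per second):

    # Rough arithmetic for the wavelength assigned to the graviphotons above.
    c = 2.998e8              # speed of light, m/s
    ly = 9.461e15            # meters per light year

    wavelength_m = 8.0e8 * ly            # 800 million light years, in meters
    frequency_hz = c / wavelength_m      # corresponding frequency at speed c
    period_years = 8.0e8                 # at speed c, the period in years equals the wavelength in light years

    print(f"wavelength: {wavelength_m:.2e} m")
    print(f"frequency:  {frequency_hz:.2e} Hz, one cycle every {period_years:.1e} years")
    # For comparison, the solar neutrino flux at the Earth is roughly 6e10 per
    # square centimeter per second; the graviphoton flux proposed here would
    # have to be higher still.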

The author uses a new argument to assert that black holes cannot exist. He states, "Another new property is 'shielding.' If matter exhibits gravitation because of the shadowing of other matter from the action of a sea of agents, it follows that at some density the shielding is complete, and no gravitational agents can penetrate at all. If a sphere of matter were to collapse to such a high density that no gravitational agents could penetrate, then only the surface layers would reflect the gravitational agents. None of the matter in the interior of the body would make any contribution to the strength of its gravitational field. It follows that a mathematical singularity could not exist in gravitation, since the force exerted by a finite body cannot approach infinity at its surface as the body collapses. So the concept of 'black holes' is physically impossible in the META Model."

It is agreed that a singularity cannot exist, but where is it proven that a black hole is a singularity? It is entirely possible that a black hole has an internal structure. Let us assume that a neutron star, which is basically a single giant nuclear crystal of neutrons, attracts enough additional mass to fracture the neutrons into much smaller tri-quarks, which then form a tri-quark crystal. The result would be a black hole with a finite internal diameter corresponding to the crystal size, and an apparent external event horizon, or Schwarzschild radius, from within which no light escapes. Theoretically, the event horizon expands in proportion to the mass contained, and shrinks with increased rotation, while the internal tri-quark crystal grows at a much slower rate.
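The proportionality cited here is just the usual Schwarzschild relation, R_s = 2GM/c^2, which grows linearly with mass. A quick numerical check, my own arithmetic with standard constants, gives the sizes involved for the masses discussed in this section and in the next paragraph.

    # Schwarzschild radius R_s = 2*G*M/c**2 grows in direct proportion to mass.
    G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
    c = 2.998e8          # speed of light, m/s
    M_sun = 1.989e30     # solar mass, kg

    def schwarzschild_radius_m(mass_kg):
        return 2.0 * G * mass_kg / c**2

    for solar_masses in (1.4, 500.0, 1.0e6):
        r_s_km = schwarzschild_radius_m(solar_masses * M_sun) / 1000.0
        print(f"{solar_masses:>10} solar masses -> R_s = {r_s_km:,.0f} km")
    # About 4 km at 1.4 solar masses (versus a neutron star radius near 10 km),
    # about 1,500 km at 500 solar masses (well inside the moon-sized region of
    # the Chandra detection cited below), and millions of km for the black holes
    # at the centers of galaxies.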

Now it may be possible to argue theoretically that even finite-sized black holes do not exist, but astronomers seem to report sighting them regularly. On September 12, 2000, NASA announced that the Chandra X-ray observatory had been used to discover a "middleweight" black hole of about 500 solar masses packed into a region the size of the moon. Until then, only the smaller ones had been found, those with the mass of a few suns, and the gargantuan ones, those with a mass of millions to billions of suns that inhabit the centers of galaxies. In September 2001, the Chandra X-ray observatory detected an x-ray flare from the black hole that inhabits the center of the Milky Way as it digested a comet-sized cloud of dusty gas. In November 2002, the Chandra observatory observed two massive black holes in galaxy NGC 6240.

If this argument is carried one step further, it would seem that neutron stars, commonly called pulsars, would have nowhere near enough density to be self-shielded. Nevertheless, the author extends his argument saying, "It therefore comes as no surprise to learn that pulsars, which are believed to be remnants of supernovas, often have measured masses close to 1.4 solar masses. In standard theory this is coincidence. But in the META Model, we see that it is not truly mass we are measuring but collapsed surface area." An alternate explanation is that this is close to the mass limit where black holes form. The only reasonable conclusion to make is that gravitational self-shielding is not a problem, even for the larger black holes!

On Relativity

The basic tenet of the META Model is that wave behavior provides an alternate explanation for almost everything. The author summarizes this as follows, "Light and other electromagnetic phenomena propagate as three dimensional waves, with properties more nearly like underwater waves. Gravity influences the density of the light-carrying medium near matter ingredients, which in turn can change the speed of propagation of light and electromagnetic forces. Their behavior follows the laws of refraction for light moving through a medium of higher density: propagation slows, directions of propagation bend, and wavelengths shift toward the red. This is why, in the META Model, light bends near the Sun, radar beams to the planets slow their round trip travel times, and light escaping a gravitational field gets red-shifted. The refraction model likewise can exactly predict the advance of Mercury's perihelion. These are the famous tests of Einstein's General Relativity Theory, which are clearly all obeyed in a natural way by the META Model, but without the need for 'curved space-time.' Einstein's Special Relativity Theory predicts not only that all motion is relative, which is required in the META Model, but also that space and time will seem to contract for moving observers."
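For reference, the solar-limb deflection that any such refraction model must reproduce is the familiar general-relativistic value of about 1.75 arc seconds. The short calculation below is my own, using standard constants, and simply shows where that number comes from.

    # Deflection of starlight grazing the solar limb: alpha = 4*G*M/(c**2 * R).
    import math

    G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
    c = 2.998e8          # speed of light, m/s
    M_sun = 1.989e30     # solar mass, kg
    R_sun = 6.96e8       # solar radius, m

    alpha_rad = 4.0 * G * M_sun / (c**2 * R_sun)
    alpha_arcsec = math.degrees(alpha_rad) * 3600.0
    print(f"deflection at the solar limb: {alpha_arcsec:.2f} arcsec")   # about 1.75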

Now we come to one of the truly profound conclusions of the META Model. In effect, because gravity fundamentally differs from electromagnetism, a theoretical Grand Unification of the four forces of nature is impossible! This is a conclusion that merits support. The author states, "Einstein's theory is also said to predict the existence of a phenomenon called 'gravitational radiation' or 'gravity waves.' The prediction is based on an assumed analogy between gravity and electromagnetism (EM). However, such an analogy is defective in several particulars. Both gravity and EM give rise to an inverse square force, but there the similarity ends. EM forces are both attractive and repulsive, while gravity is attractive only. A body's own charge affects its motion in EM, but its own mass does not affect its own acceleration in gravity. There is no Equivalence Principle in EM, and no analog of magnetism or Maxwell's equations for gravity. EM forces between two bodies act with light-time delay, while gravity acts with no detectable delay. Under gravity, masses are free to move wherever the forces direct them; but electrons are confined to discrete energy levels, and cannot stably orbit at intermediate levels. And the relative strengths of the two forces differ by 40 orders of magnitude."
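The last figure in this quotation is easily verified; the arithmetic below is my own, with standard constants. For an electron and a proton, the ratio of the electrostatic to the gravitational attraction between them is about 2 x 10^39, independent of their separation, which is roughly the 40 orders of magnitude cited.

    # Ratio of electrostatic to gravitational attraction between an electron and
    # a proton; the separation cancels, so the ratio is distance-independent.
    k_e = 8.988e9        # Coulomb constant, N m^2 C^-2
    G   = 6.674e-11      # gravitational constant, N m^2 kg^-2
    e   = 1.602e-19      # elementary charge, C
    m_e = 9.109e-31      # electron mass, kg
    m_p = 1.673e-27      # proton mass, kg

    ratio = (k_e * e**2) / (G * m_e * m_p)
    print(f"F_electric / F_gravity = {ratio:.2e}")   # about 2.3e39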

However, it is also possible to carry these arguments a bit too far. The author concludes, "At the heart of the META Theory, the 'sound analogy' shows how it can be that Special Relativity is valid and yet faster-than-light communication in forward time is still possible." This answer is of no great practical significance unless it is simply a rationalization for the apparently instantaneous action of gravity.

Stars, Galaxies and the Universe

There is strong evidence indeed that the Cosmological Model (CM) of the Big Bang is seriously flawed, if not completely erroneous. It has patches on its patches, and its proponents await new experimental evidence with great anticipation in hopes of a bailout from the latest difficulties. An example of this is the current furious and completely unsuccessful search for any type of exotic dark matter that would explain why the universe is nearly balanced, or "flat", when there is apparently not enough mass to account for this behavior. Dark matter is also needed in the CM to trigger the heterogeneous formation of galaxies through small initial random fluctuations that supposedly occurred at the time of the Big Bang.
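To give a sense of scale to the "flatness" at issue here, the sketch below uses the standard textbook relation for the critical density, with my own arithmetic and an assumed Hubble constant of 70 km/s/Mpc: the density dividing an open from a closed Friedmann universe amounts to only a few hydrogen atoms per cubic meter.

    # Critical density of a Friedmann universe: rho_c = 3*H0**2 / (8*pi*G).
    import math

    G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
    Mpc = 3.086e22           # meters per megaparsec
    m_H = 1.674e-27          # mass of a hydrogen atom, kg
    H0 = 70.0 * 1000.0 / Mpc # assumed Hubble constant of 70 km/s/Mpc, in s^-1

    rho_c = 3.0 * H0**2 / (8.0 * math.pi * G)
    print(f"critical density: {rho_c:.2e} kg/m^3")
    print(f"about {rho_c / m_H:.1f} hydrogen atoms per cubic meter")
    # Visible matter falls far short of even this tiny density, which is why the
    # exotic dark matter searches described above are pursued so vigorously.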

The entire CM is based upon only three experimental observations: the redshift-distance relation of the galaxies, the cosmic microwave background radiation, and the observed abundances of the light elements.

These observations have been taken in support of the CM hypothesis that the Big Bang was a tremendous hot explosion of energy coming out of nowhere at an infinitesimal point some 14 billion years ago.

But there are serious difficulties with the CM as well, as mentioned in the Preface and partially enumerated in Chapter 4 of the book. Among the difficulties are the following:

A much more detailed discussion of the flaws in the CM can be found in Mitchell's new book [4].

With regard to the above points, the book's analysis leaves a great deal to be desired by focusing strongly upon some problems, while ignoring others. For example:

Summary

The deductive methodology used in deriving the META Model of the universe has much to commend it, but many of the mysteries in the universe exposed by recent astronomical measurements have simply been left out of consideration. The two major items omitted pertain to:

In these cases, the inductive method may supply new answers by considering particle physics on the sub-nuclear scale. Whether or not these new ideas are simply patches on old theories, or are truly new approaches, will have to be demonstrated by future work.

* An abridged version of this article, with comments and refutations by T. Van Flandern, appears in the Meta Research Bulletin, Volume 11, Number 2, pp. 17-23, June 15, 2002.

References

1. T. Van Flandern, Dark Matter, Missing Planets, & New Comets, North Atlantic Books, Berkeley, California, 1998.

2. H. Arp, Seeing Red, Apeiron Press, 1998.

3. A. K. Velan, The Multi-Universe Cosmos, Plenum Press, 1992.

4. W. Mitchell, Bye Bye Big Bang - Hello Reality, in press 2001.

5. D. C. Koo, N. Ellman, R.G. Kron, J.A. Munn, A.S. Szalay, T.J. Broadhurst, and R.S. Ellis, "Deep Pencil-Beam Redshift Surveys as Probes of Large Scale Structures", Astronomical Society of the Pacific, Conference Series, Vol. 51, 1993, and S.R. Majewski, class notes, Department of Astronomy, University of Virginia, March 1996.

6. C.A. Bly, "Neutrino-Driven Nucleon Fission Reactors: Supernovas, Quasars, and the Big Bang", Transactions of the American Nuclear Society, Vol. 66, pp 529-532, 1992.