April 8, 2002; revised November 21, 2002
Critique of the META Model of the Universe*
Roger A. Rydin, email@example.com
Associate Professor Emeritus of Nuclear Engineering
University of Virginia, Charlottesville, VA 22901
In reading Van Flandern's book Dark Matter, Missing Planets & New Comets, which is aptly subtitled, "Paradoxes Resolved and Origins Illuminated", it is possible to agree with many of the author's arguments, and yet be unsatisfied with the thrust of many others. It is only fair to examine these points of agreement and disagreement in some depth, if for no other reason than to set a healthy dialog in motion. The present discussion will be limited to the material presented in the first five chapters of the book that describe the META Model of the universe.
If the reader had expected to find that the META Model is a phenomenological description of how the universe started, how it evolved, and what is going to happen to it, he will be sorely disappointed. Like this reviewer, the reader will reach the end of Chapter 5 and the last sentence, "This completes the exposition of the META Model", and not have the slightest idea of how the various arguments presented in those chapters fit together to describe the universe! In fact, he will have to jump to the very end of Chapter 22, to a section denoted "Note added in proof", to find out that the essence of the META Model is that the "universe is infinite in both space and time, and is not expanding at all"! In other words, it has always been the way it is, and it will continue to be that way in the future. Since the META Model says that the universe is constant, the expansion of the universe must apparently be an illusion!
Just prior to this startling conclusion is the statement, "If the field of astronomy were not presently over-invested in the expanding universe paradigm, it is clear that modern observations would now compel us to adopt a static universe model as the basis of any sound cosmological theory". The seven tests that are used in the book only compare Friedmann uniform expansion models to static models, and do not include any other alternatives, so that the comparison is incomplete. What if Einstein's General Theory of Relativity has nothing at all to do with the evolution of the universe?
On the subject of what might cause the redshift if it is not due to the expansion velocity of the universe, the author states that he favors the explanation that the particle or wave serving as the carrier of gravity, dubbed "gravitons", would cause an apparent redshift by inelastic scattering interactions with the light passing large distances through the universe. This is hardly either a proof of validity of the META Model, or a ringing endorsement of how it works. It is also at odds with Arp's explanation of redshift using Narlikar's theory that lets mass grow as a function of time. In the absence of such a META proof, we must examine the individual concepts that make up the META Model.
Let us begin with the author's own words in the preface to the book. "One procedure I have learned to favor is to adopt a starting point and reason deductively; that is, from cause to effect. The advantages of this are easy to understand: inductive reasoning (from the effect, usually an observation or experimental data, back to its cause) does not, in general, lead to unique answers, while deductive reasoning, if valid, generally does. Obviously a model, which allows us to deduce experimental results that were not used in formulating the model, is intrinsically more reliable than one that merely explains the results after the fact. And deductions made from an incorrect starting point do not usually resemble the experimental data or reality. So only deductions made from a correct starting point might be expected to lead to models which add true insight into phenomena, agree with observations, and make successful predictions."
It is very appealing to think of deductive reasoning as being pure and compelling, while inductive reasoning is ad hoc and error prone. Deductive reasoning should lead to clean answers, while inductive reasoning is a patchwork process. Nonetheless, the positive feature of inductive reasoning is the tendency to keep revisiting the problem to see if anything has been forgotten or erroneously added. The basic flaw in deductive reasoning is thinking that the process is so pure that, once an answer has been obtained, there is no need to inquire further. As a matter of fact, Sir Isaac Newton used a combination of inductive and deductive reasoning to make his amazing advances in expressing the Laws of Motion.
Those who do inductive reasoning are worriers. Each new piece of data or idea has to fit and make sense. This can lead to a patchwork answer, but to open-minded researchers it can also lead to alternate explanations that do a better job of explaining the data. This is a living process that is forced to confront new findings. In this regard, the Hubble telescope and recent computer-controlled telescopic searches of the heavens are providing new data faster than the conventional theorists can cope with their implications!
Those who do deductive reasoning are in danger of complacency. Again, paraphrasing the quote in the book, "only deductions made from a correct starting point, using valid reasoning, might be expected to lead to models which add true insight into phenomena." How does one know when one is at the correct starting point? What is the criterion that determines valid reasoning? When, if ever, is the answer revisited? The author's explanation of these three points is not very satisfying to this particular reader.
In any event, let's examine the META Model from the author's viewpoint.
In Chapter 1, a very abstract argument is made that the universe must logically be infinite in extent and time. The modeling begins as simply as possible, starting from a single particle, and moving to multi-particles, solving Zeno's paradox in the process. It results in five dimensions, where the fifth dimension is one of scale, whatever that means! Scale really has to be a parameter rather than a dimension.
Regardless of the validity of this derivation as pertains to the real universe, the following implications are deduced:
1) "Since dimensions are unlimited in space and time, the Big Bang theory of the origin of our universe can, at best, refer only to a local 'large-scale' event in our region of the universe." Precisely! However, what if this is not a center-less expansion of space itself, but rather a real spherical expansion of the matter in some region of space? The precursor of this expansion does not have to be an explosion. The only new criterion is that the center of the expansion must accidentally be located somewhat near the Earth, of the order of a few hundred million light years away, so that the expansion looks uniform to us;
2) "More likely, some other explanation exists for the red shift of distant light than a big initial explosion of all matter". This is probably true, but how does this follow from the previous argument? Strange as it may seem, what is there to prevent galaxies from actually moving away from the local origin with speeds that increase with distance? Is this impossible if the point of observation is "near" the origin? What is needed is to find a physical mechanism that gives galaxies initial radial velocities that follow Hubble's Law;
3) "Some other explanation exists for the cosmic microwave background (CMB) radiation than a big initial explosion of all matter". This is also probably true, but again how does this follow from the previous argument? All that is needed is a mechanism that produces a spatially uniform CMB in the vicinity of where it has been measured.
The entire universe must indeed be infinite in space and time, but this conclusion can be based on philosophical and physical reasons! The philosophical argument says that if there are billions and billions of stars and galaxies, why shouldn't there also be billions of universes? The physical argument is that our universe cannot really be surrounded by nothingness, because that would doom it to always lose mass and radiation at the outer edge so that it could never repeat a collapse and another expansion. The beginning would remain a mystery.
My own concept follows the work of Velan, and contains multiple universes in an equilibrium repeating cycle of birth and death, where adjacent spherical universes are nestled against one another like springy rubber balls. Of course, if there were multi-universes, they would have to be of quite similar sizes in order to preserve the spectacular balance we see in ours. And they would have to be very far separated indeed, since we don't know when ours will get to its turn-around point. Even the turn-around points could temporarily invade an adjacent space. The key argument is that if this doesn't describe the true picture, then losses of photons, neutrinos and gravitons at the effective outer surface would lead to a dissipative and irreversible process. The META Model does not even consider such a possibility.
In Chapter 2, Van Flandern shows that the effect of gravitational attraction moves very much faster than the speed of light, as evidenced by the motion of the sun and planets that respond to the actual instantaneous positions of each other rather than the apparent light-delayed positions. He is correct on this point.
However, no one has yet succeeded in explaining the process behind instantaneous gravitational attraction. It acts as if there were an effective potential gravity field that instantaneously knows the positions of all masses at all times, even when they are moving or located a universe apart. It may operate by a bootstrap effect, which resembles a kind of whispering gallery where each mass continuously tells its neighbors what it knows in terms of positions and higher derivatives, and this gives the effect of faster-than-light-speed analytic continuation of space. Since these are not the properties of a scalar potential field, can this imply that gravitation follows a vector field? Could a vector field explain the bending of space-time? Is this an argument for the existence of an ether that expresses the gravitational field?
The book makes a pretty good case for needing a tremendously high background flux of something related to gravitons to account for gravitational attraction. In the words of the author, "The properties of Newton's Law of Universal Gravitation imply that 'action at a distance' in its purest form, with no agents passing between the acting body and the affected one, must logically be impossible. But if such agents exist, they must propagate and interact in order to have an effect. Gravity would then be an inverse square force, because whatever propagates spreads out in two dimensions while moving in a third."
It continues, "If the action of the agents results from collisions, then their source must be external to the acting body; because if they came from inside, the force of their collisions would be repulsive. For gravity to be a 'universal' force, the universe must be filled with a flux of such agents. To affect every atom of matter in a body, the agents must be small enough to be able to pass through ordinary matter with ease. And the mean distance between mutual collisions of agents with each other must be large compared to the distances separating the acting and affected bodies, or the flux would behave like a 'perfect gas' and produce no force between separate bodies. But if these conditions are met, bodies would 'shadow' one another from some of this universal flux, resulting in a net force toward each other, which would behave exactly like Newton's gravitation."
Now comes an apparent assumption that the agents themselves must move faster than the speed of light in order to make gravitational attraction instantaneous. The author states, "So the picture of gravity we have arrived at here demands a universe filled with gravitational agents moving at velocities much faster than light, in order to explain the nearly instantaneous action of gravity on the local scale." To distinguish from classical bound graviton particles, the author calls the flux agents which give rise to gravitation "C-gravitons" (CGs), and names the largest entities through which CGs cannot pass "matter ingredients" (MIs). He implies, but does not directly state, that the active ingredients of the MIs are, in fact, gravitons.
In a totally different context, namely during the process of positron-negatron pair production and annihilation, it can be argued that, for pair production, gravitational properties must be created for each of the electrons by abstracting entities from the continuum to create bound gravitons. For positron annihilation, gravitational properties must be destroyed by returning these same entities back to the continuum. Hence, since the same two entities that produce the gravitational force are involved in the creation and destruction of that force, they must be related by being dissociation products.
In this paper, the CGs will henceforth be called graviphotons. They will be assigned the theoretical wavelength of the graviton, which is about 800 million light years, but they will be assumed to propagate at the speed of light. Hence, just like the extremely high neutrino flux that is known to pass through us continuously almost undetected, we agree with the author that there exists an even higher flux of graviphotons that also passes through us and accounts for the effects of gravity.
The author uses a new argument to assert that black holes cannot exist. He states, "Another new property is 'shielding.' If matter exhibits gravitation because of the shadowing of other matter from the action of a sea of agents, it follows that at some density the shielding is complete, and no gravitational agents can penetrate at all. If a sphere of matter were to collapse to such a high density that no gravitational agents could penetrate, then only the surface layers would reflect the gravitational agents. None of the matter in the interior of the body would make any contribution to the strength of its gravitational field. It follows that a mathematical singularity could not exist in gravitation, since the force exerted by a finite body cannot approach infinity at its surface as the body collapses. So the concept of 'black holes' is physically impossible in the META Model."
It is agreed that a singularity cannot exist, but where is it proven that a black hole is a singularity? It is entirely possible that a black hole has an internal structure. Let us assume that a neutron star, which is basically a single giant nucleus crystal of neutrons, attracts enough additional mass to fracture the neutrons into much smaller tri-quarks, which then form a tri-quark crystal. The result would be a black hole with a finite internal diameter corresponding to the crystal size, and an apparent external event horizon, or Schwarzschild radius, where no light penetrates. Theoretically, the event horizon expands in proportion to the mass contained, and shrinks with increased rotation, while the internal tri-quark crystal gets bigger at a much slower rate.
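The claimed scaling of the event horizon with mass follows directly from the Schwarzschild formula r_s = 2GM/c². A minimal sketch of that standard formula (the masses chosen are examples only; this does not model the tri-quark interior proposed above):

```python
# Schwarzschild radius r_s = 2*G*M/c^2: the event horizon grows linearly
# with the contained mass. Masses below are illustrative examples.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # one solar mass, kg

def schwarzschild_radius_km(mass_in_suns):
    """Event-horizon radius in kilometers for a mass given in solar masses."""
    return 2.0 * G * (mass_in_suns * M_SUN) / C**2 / 1000.0

for m in (1.4, 500.0, 1.0e6):
    print(f"{m:>10} M_sun -> r_s = {schwarzschild_radius_km(m):12.1f} km")
```

A 1.4-solar-mass object has a horizon of only a few kilometers, while a 500-solar-mass "middleweight" would have one of order a thousand kilometers, well within the moon-sized region mentioned below.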
Now it may be possible to argue theoretically that even finite-sized black holes do not exist, but astronomers seem to regularly report sighting them. On September 12, 2000, NASA announced that it had used the Chandra X-ray observatory to discover a "middleweight" black hole of about 500 solar masses packed into a region the size of the moon. Until then, astronomers had only found the smaller ones, those with the mass of a few suns, and the gargantuan ones, those with a mass of millions to billions of suns that inhabit the centers of galaxies. In September 2001, the Chandra X-ray observatory detected an x-ray flare from the black hole that inhabits the center of the Milky Way as it digested a comet-sized cloud of dusty gas. In November 2002, the Chandra observatory observed two massive black holes in galaxy NGC 6240.
If this argument is carried one step further, it would seem that neutron stars, commonly called pulsars, would have nowhere near enough density to be self-shielded. Nevertheless, the author extends his argument saying, "It therefore comes as no surprise to learn that pulsars, which are believed to be remnants of supernovas, often have measured masses close to 1.4 solar masses. In standard theory this is coincidence. But in the META Model, we see that it is not truly mass we are measuring but collapsed surface area." An alternate explanation is that this is close to the mass limit where black holes form. The only reasonable conclusion to make is that gravitational self-shielding is not a problem, even for the larger black holes!
The basic tenet of the META Model is that wave behavior provides an alternate explanation for almost everything. The author summarizes this as follows, "Light and other electromagnetic phenomena propagate as three dimensional waves, with properties more nearly like underwater waves. Gravity influences the density of the light-carrying medium near matter ingredients, which in turn can change the speed of propagation of light and electromagnetic forces. Their behavior follows the laws of refraction for light moving through a medium of higher density: propagation slows, directions of propagation bend, and wavelengths shift toward the red. This is why, in the META Model, light bends near the Sun, radar beams to the planets slow their round trip travel times, and light escaping a gravitational field gets red-shifted. The refraction model likewise can exactly predict the advance of Mercury's perihelion. These are the famous tests of Einstein's General Relativity Theory, which are clearly all obeyed in a natural way by the META Model, but without the need for 'curved space-time.' Einstein's Special Relativity Theory predicts not only that all motion is relative, which is required in the META Model, but also that space and time will seem to contract for moving observers."
Now we come to one of the truly profound conclusions of the META Model. In effect, because gravity fundamentally differs from electromagnetism, a theoretical Grand Unification of the four forces of nature is impossible! This is a conclusion that merits support. The author states, "Einstein's theory is also said to predict the existence of a phenomenon called 'gravitational radiation' or 'gravity waves.' The prediction is based on an assumed analogy between gravity and electromagnetism (EM). However, such an analogy is defective in several particulars. Both gravity and EM give rise to an inverse square force, but there the similarity ends. EM forces are both attractive and repulsive, while gravity is attractive only. A body's own charge affects its motion in EM, but its own mass does not affect its own acceleration in gravity. There is no Equivalence Principle in EM, and no analog of magnetism or Maxwell's equations for gravity. EM forces between two bodies act with light-time delay, while gravity acts with no detectable delay. Under gravity, masses are free to move wherever the forces direct them; but electrons are confined to discrete energy levels, and cannot stably orbit at intermediate levels. And the relative strengths of the two forces differ by 40 orders of magnitude."
However, it is also possible to carry these arguments a bit too far. The author concludes, "At the heart of the META Theory, the 'sound analogy' shows how it can be that Special Relativity is valid and yet faster-than-light communication in forward time is still possible." Here is an answer of no great practical significance unless it is simply a rationalization for the apparent instantaneous action of gravity.
Stars, Galaxies and the Universe
There is strong evidence indeed that the Cosmological Model (CM) of the Big Bang is seriously flawed, if not completely erroneous. It has patches on its patches, and its proponents await new experimental evidence with great anticipation in hopes of a bailout from the latest difficulties. An example of this is the current furious and completely unsuccessful search for any type of exotic dark matter that would explain why the universe is nearly balanced, or "flat", when there is apparently not enough mass to account for this behavior. Dark matter is also needed in the CM to trigger the heterogeneous formation of galaxies through small initial random fluctuations that supposedly occurred at the time of the Big Bang.
The entire CM is based upon only three experimental observations:
1) The Hubble expansion, where far galaxies apparently recede from us faster than near galaxies;
2) The isotropic spatial constancy and purity of the Cosmic Microwave Background as a blackbody at approximately 2.73 K; and
3) The relative abundances in the universe of helium and other light elements, which could only have been formed by fusion in hot plasma and not in stars.
These observations have been taken in support of the CM hypothesis that the Big Bang was a tremendous hot explosion of energy coming out of nowhere at an infinitesimal point some 14 billion years ago.
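The first of these observations is purely kinematic: Hubble's Law is a linear relation v = H₀d between recession velocity and distance. A minimal sketch (the value of H₀ used here is an illustrative round number, roughly the figure in common use when this article was written, not a claim about the best measurement):

```python
# Hubble's Law: v = H0 * d, recession velocity proportional to distance.
# H0 = 70 km/s per megaparsec is an illustrative round number.
H0 = 70.0  # km/s/Mpc

def recession_velocity_km_s(distance_mpc):
    """Recession velocity implied by Hubble's Law at a given distance."""
    return H0 * distance_mpc

for d in (10, 100, 1000):
    print(f"d = {d:5d} Mpc -> v = {recession_velocity_km_s(d):8.0f} km/s")
```

The dispute throughout this article is not over the linear relation itself, which is observed, but over whether it reflects expanding space, real radial motion, or an energy-loss mechanism.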
But there are serious difficulties with the CM as well, as mentioned in the Preface and partially enumerated in Chapter 4 of the book. Among the difficulties are the following:
1) There is no evidence that space itself expands! It certainly doesn't expand inside the Solar system.
2) There is not enough observed mass in the universe to explain the delicate balance that has allowed the universe to exist for at least 14 billion years;
3) There may not even be enough mass present to explain the rotational motion in and near galaxies;
4) No triggering mechanism has yet been found to explain how the shapes and distributions of galaxies arose from an otherwise homogeneous and isotropic distribution of expanding matter;
5) There is serious disagreement about the age of the universe as derived from measurements of the Hubble constant and inferred from other data that say that it is much older;
6) There is no explanation whatsoever for the apparent large scale correlation of galactic Great Walls and Voids, or for large scale galactic drifts; and
7) There is no explanation whatsoever for the energy sources that power quasars, hyper-novae and gamma-bursts.
A much more detailed discussion of the flaws in the CM can be found in Mitchell's new book.
With regard to the above points, the book's analysis leaves a great deal to be desired by focusing strongly upon some problems, while ignoring others. For example:
1) "The META Model also predicts additional properties for gravitation that are not a part of the Newtonian or Einstein models. One of these is that there must be a limited range for gravitational fields, corresponding to the mean distance between mutual collisions of C-gravitons. Over greater distances, CGs should start to behave like a perfect gas, with no net force. Interestingly, this is just how galaxies actually behave at distances over 2 kiloparsecs. Their rotation velocities remain constant at all distances from the center, in defiance of Newton's laws, but just as a perfect gas would do." This "chosen" but not derived saturation distance conveniently makes the existence of either visibly unseen matter or "exotic" dark matter unnecessary in order to explain the virial motion of numerous galaxies. But it neglects to provide any theoretical proof that this mean free path should be of the order of the visible size of a galaxy, or prove that CGs even collide with one another at all! In fact, it is more reasonable to assume that the effect of gravity essentially stops and goes to zero at some magical distance rather than suddenly remaining stronger than one-over-r-squared at some other magical distance.
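The flat rotation curves quoted in this passage can be contrasted with the Newtonian expectation numerically: if essentially all of a galaxy's mass sat inside a central region, orbital speed would fall off as v = √(GM/r). A sketch under illustrative assumptions (the enclosed mass and the radii below are placeholders, not measured values for any real galaxy):

```python
import math

# Keplerian rotation: v = sqrt(G*M/r), assuming all mass M lies inside r.
# Observed galactic rotation curves instead stay roughly flat at large
# radii. M and the radii below are illustrative placeholders.
G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
M = 1e41                 # enclosed central mass, kg (~5e10 solar masses)
KPC = 3.086e19           # one kiloparsec in meters

def keplerian_v_km_s(r_kpc):
    """Newtonian orbital speed (km/s) at radius r_kpc if mass M is central."""
    return math.sqrt(G * M / (r_kpc * KPC)) / 1000.0

for r in (2, 8, 32):
    print(f"r = {r:2d} kpc -> Keplerian v = {keplerian_v_km_s(r):5.0f} km/s")
# Each factor of 4 in radius halves the Keplerian speed; observed curves
# do not fall off this way, which is the discrepancy at issue.
```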
2) The book mentions that there doesn't seem to have been enough time for the formation of mature galaxies if the universe is only 14 billion years old. Therefore, it concludes that only a static universe model can account for this maturity and not a conventional Big Bang. Unfortunately, the choice is posed as either/or, and since it is easy to find fault with the Big Bang, any single alternative wins. But this is not a proof that the static model is valid, especially if a third and better model can be found. After all, Hoyle proposed his steady state model after he gave up on a repeating universe because he could see no way for all of the matter to escape a central black hole, or if it could, how it could be reconverted into hydrogen to begin a new cycle. Without a process to replenish hydrogen, static models will eventually use up the hydrogen producing heavy elements, up to iron in stars and up to uranium in supernovae. So it is an irreversible process that leads to eventual death.
3) The author does not seem to make any comments about the age controversy, except to point out that the deep red-shift photos show immature galaxies at the limit of visibility. But this entire controversy could be resolved if there were a better model of the Big Bang that gave the universe an age of about 30 billion years and a mechanism to speed galaxy formation.
4) The periodic Great Walls receive strong attention and are treated thusly, "The existence of structure in the universe at the largest observable scales then implies the existence of forces other than gravitation operating on those scales. The first and largest peak in the distribution, at about 420 million light years, is what is called 'The Great Wall'. Such a feature has now been found in two opposite directions from the plane of the Milky Way, one in the northern and one in the southern hemisphere. These peaks alone are said to be incompatible with all galaxy formation mechanisms. The existence of numerous other possible 'great walls' out to the limits of the surveys merely compounds the problems for theorists. In the latest related work, 13 evenly spaced 'walls' of galaxies were found, each 420 million light years apart, covering a total distance of seven billion light years. A line of sight passing through a random pattern of 'bubbles' has less than a 2% chance of producing the observed sequence, which implies that the observations don't fit a random-cell pattern. Big Bang theoreticians don't know what to make of this structure yet. But using the META Model axiom that the universe should look essentially the same at any scale, it would be a reasonable conjecture that we are looking at waves on a huge scale." Note that the walls are attributed to a force and not a process. Then it is assumed that the walls are compatible with the META Model without even questioning whether the waves are planar or spherical, moving or standing, or asking why the galaxy densities are symmetric about the Milky Way and exponentially damped!
One really needs to look at the actual data. The galaxy-count data are amazingly symmetric about the plane of the Milky Way, suggesting that we are somehow centered in the universe! When these data are corrected for the angular spread of the pencil cone, which Koo confirmed had not been done, the resulting radial distribution resembles a symmetric damped sinusoidal distribution! Consider the autocorrelation function for these data. For a purely random distribution of galaxies, the autocorrelation should be a Dirac delta function, that is, a single tall narrow peak at 0 with zero magnitude everywhere else. For a correlated distribution, such as a Big Bang expansion from a small original region, the peak at 0 should be much lower, and there should also be an intersecting curve with a smooth drop-off because the correlation would lessen with distance as motion randomized the distribution. In both cases, the area under the curve would be exactly the same. The actual autocorrelation function exhibits a very low peak at 0, with most of the area under the curve spreading out slowly with distance, indicating a very high degree of correlation over the entire set of data. Furthermore, a sinusoidal distribution with a fixed period of 410 million light years is superimposed on the slowly decreasing portion, visible until the statistics of the data mask it! What else but a correlated wave resembles this kind of data?
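The autocorrelation reasoning above can be checked on synthetic data: a damped sinusoidal count profile produces an autocorrelation that oscillates at the same period, whereas a purely random profile produces only the spike at zero lag. A minimal sketch (bin width and damping length are illustrative; the 410-million-light-year period is the one discussed above):

```python
import numpy as np

# Synthetic radial "galaxy count" profiles binned every 10 Mly to 7000 Mly.
rng = np.random.default_rng(0)
period = 410.0                        # Mly, periodicity discussed above
r = np.arange(0.0, 7000.0, 10.0)

periodic = np.exp(-r / 3000.0) * np.cos(2.0 * np.pi * r / period)
random_counts = rng.standard_normal(r.size)

def autocorr(x):
    """Autocorrelation at non-negative lags, normalized to 1 at zero lag."""
    x = x - x.mean()
    ac = np.correlate(x, x, mode="full")[x.size - 1:]
    return ac / ac[0]

ac_p = autocorr(periodic)
ac_r = autocorr(random_counts)

# The periodic profile shows a secondary peak near one period; the random
# profile stays near zero away from lag 0 (the Dirac-delta-like case).
search = ac_p[20:60]                       # lags 200 to 590 Mly
peak_lag = 10.0 * (int(np.argmax(search)) + 20)
print(f"periodic profile: secondary peak near lag {peak_lag:.0f} Mly")
print(f"random profile: autocorrelation at lag 410 Mly = {ac_r[41]:+.3f}")
```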
A second pencil survey was taken at an angle of 45 degrees from the plane of the Milky Way. These data have the same periodicity as the polar data, and are strongly cross-correlated with it, implying that the wave behavior is spherical and not planar! For the plane wave META Model, as pictured in Chapter 4, the period would be lengthened by the reciprocal cosine of the incidence angle. Since this is not the case, the wave must be spherical and centered near the Milky Way.
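The geometric test described here is easy to quantify: if the walls were parallel planes, a pencil beam inclined at 45 degrees would cross them with an apparent spacing stretched by 1/cos 45° ≈ 1.41, about 594 rather than 420 million light years, while concentric spherical shells centered near the observer show the same spacing at any angle. A small check:

```python
import math

# Apparent wall spacing seen by a pencil beam crossing parallel plane
# walls at an angle theta from the wall normal. 420 Mly is the spacing
# cited for the polar direction.
WALL_SPACING_MLY = 420.0

def apparent_plane_wave_spacing(angle_deg):
    """Spacing along a beam tilted angle_deg from the plane-wall normal."""
    return WALL_SPACING_MLY / math.cos(math.radians(angle_deg))

print(f"plane-wall prediction at 45 deg: "
      f"{apparent_plane_wave_spacing(45):.0f} Mly")
print("spherical shells centered near the Milky Way: ~420 Mly at any angle")
```

Since the 45-degree survey shows the same ~420 Mly spacing rather than ~594 Mly, the plane-wave picture fails this test.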
5) An effort is made in the book to prove that red shifts are not necessarily a measure of velocity. The book states, "In the META Model, the red shift of starlight is an energy-loss phenomenon due to waves attempting to propagate through the resisting C-graviton medium. But if space itself expands, as in the Big Bang theory, then observations within the solar system that show a lack of such expansion at the Hubble rate violate the spirit of that theory." This is a correct observation relative to the expansion of space. All expansion must be actual motion, not some real motion superimposed upon an expanding space. However, it needs to be pointed out that the correlated periodic galactic wall separations were obtained only when Hubble red shift velocity corrections were made to place all of the galaxy count data at a common time. Therefore, these must have been real velocities!
6) The book attempts to explain the CMB as a local event, "The observed cosmic microwave radiation could be produced, as one possibility, by any uniform explosion fireball which has since encompassed the Earth. Indeed it appears that the theoretical equilibrium temperature of the interstellar medium is quite close to the observed 2.7-degree radiation. This seems to imply a possible source for the 'background' radiation within our own galaxy, as well as other sources within other galaxies." Unfortunately, this seems to neglect the fact that the reference frame of the CMB appears to be centered near the super-cluster Virgo, which is quite a distance from us! The corresponding "Dipole Anomaly" is quite disturbing to CM theorists!
7) Finally, the question of new energy release mechanisms is only mentioned in passing, although a great deal is made of the need for an energy source to power quasars. The book states, "It is easy to make a case that the very high-red-shift quasars are not at the cosmological distances implied when their red-shifts are interpreted in the Big Bang theory. If these objects are at such great distances, then we have the following dilemmas, which would not exist if quasars were 'local'. There must exist an unknown energy mechanism to produce such intrinsically high-luminosity objects, enabling them to be so bright at such great distances. Energies are often equivalent to thousands of supernovas per year. The existence of rapid light variations implies that most quasar light must come from a small source, of solar-system dimensions. Light variations could not be coordinated in different parts of a larger object, because that would require faster-than-light communication. Yet some quasars that can be resolved are seen to be as big as giant galaxies (if assumed to be at great distances). The number of quasars versus red shift, z, is nearly flat, with a slight drop out to z=2, then a sharp drop. So they do not increase in number as space increases in volume with red shift. Quasars near our part of the universe (i.e., with small red-shifts) are quite rare, seeming to imply that most of the universe's quasars died out long ago. What we see now would be the light of quasars that existed long ago that is just now reaching us. Sources are 'quasi-stellar' by definition, implying little visible angular extent of the sort that galaxies have. Yet quasars must have galaxy-like masses to produce so much energy (if they are far away)." But all of these conclusions might have to change if a new energy source mechanism were found! 
It is proposed here that such a mechanism, based upon the fission of protons by neutrinos and the fission of neutrons by antineutrinos, is indeed feasible under certain conditions in quasars, collapsing star cores and colliding neutron stars.
The deductive methodology used in deriving the META Model of the universe has much to commend it, but many of the mysteries in the universe exposed by recent astronomical measurements have simply been left out of consideration. The two major items omitted pertain to:
1) The possible existence of as yet unrecognized prompt energy release mechanisms that can power quasars, supernovae, hyper-novae, and gamma bursts; and
2) A new process that accounts for the production of the periodic Great Walls and Voids, and reconciles the controversy about the age of the universe.
In these cases, the inductive method may supply new answers by considering particle physics on the sub-nuclear scale. Whether or not these new ideas are simply patches on old theories, or are truly new approaches, will have to be demonstrated by future work.
* An abridged version of this article, with comments and refutations by T. Van Flandern, appears in the Meta Research Bulletin, Volume 11, Number 2, pp 17-23, June 15, 2002.
1. T. Van Flandern, Dark Matter, Missing Planets, & New Comets, North Atlantic Books, Berkeley, California, 1998.
2. Halton Arp, Seeing Red, Apeiron, Montreal, 1998.
3. A. K. Velan, The Multi-Universe Cosmos, Plenum Press, 1992.
4. W. Mitchell, Bye Bye Big Bang - Hello Reality, in press 2001.
5. D. C. Koo, N. Ellman, R.G. Kron, J.A. Munn, A.S. Szalay, T.J. Broadhurst, and R.S. Ellis, "Deep Pencil-Beam Redshift Surveys as Probes of Large Scale Structures", Astronomical Society of the Pacific, Conference Series, Vol. 51, 1993, and S.R. Majewski, class notes, Department of Astronomy, University of Virginia, March 1996.
6. C.A. Bly, "Neutrino-Driven Nucleon Fission Reactors: Supernovas, Quasars, and the Big Bang", Transactions of the American Nuclear Society, Vol. 66, pp 529-532, 1992.