MAPPING THE UNIVERSE


THREE MILLION GALAXIES, each one containing billions of stars, appear on the map of just 15% of the sky centered on the constellation Sculptor. Although galaxies fill the sky, making it look roughly the same in each and every direction, they actually tend to fall into clusters, clumps and chains.


The map, in which the brightness of each dot is proportional to the number of galaxies it represents, was pieced together by the Automated Plate Measuring Galaxy Survey from black-and-white photographs from the U.K. Schmidt Telescope. On this color-enhanced version, blue, green and red dots depict bright, medium and faint galaxies, respectively. The black patches are areas around bright stars that the survey telescope was unable to probe.


What did the universe look like at the dawn of time? How did it grow and develop into the universe we live in today? What forms of matter, both mundane and exotic, does it contain?




The recent work has followed two decades of exciting discoveries. In the late 1970s and early 1980s, cosmologists began to map galaxies in a systematic way. In so doing, they sought to measure the distribution of all matter, including the intergalactic “dark matter” that, unlike galaxies, does not give off light. (The assumption that luminous galaxies trace the total mass is no more than an approximation, albeit a constructive one; other research has attempted to quantify the bias that results.)


Cosmo-cartographers discovered that on scales of up to 100 million light-years, galaxies are distributed as a fractal with a dimension of between one and two. The fractal arrangement of matter would be a severe problem for the cosmological principle if it extended to larger scales, because a fractal distribution is never homogeneous and isotropic. Unlike, say, a crowd of people, a fractal does not approach homogeneity when viewed from a distance; like a coastline, it looks uneven on every scale. In a fractal universe of dimension two, the expected mass within a spherical volume centered on a random galaxy would increase as the square of the radius instead of the cube. In such a universe, the mean density would be a function of scale, and other universal parameters such as the cosmic expansion rate would lose their meaning. In short, the fractal findings seemed to pull the rug out from under modern cosmology.
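The scaling argument can be made explicit. For a distribution of fractal dimension $D$, the mass enclosed within radius $r$ of a typical galaxy, and hence the mean density averaged over that scale, behave as

$$M(r) \propto r^{D}, \qquad \bar{\rho}(r) = \frac{M(r)}{\tfrac{4}{3}\pi r^{3}} \propto r^{D-3}.$$

For $D = 2$ the mean density falls as $1/r$ and never settles to a single cosmic value; homogeneity corresponds to $D = 3$, for which $\bar{\rho}$ is independent of scale.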


Subsequent surveys, however, indicated that on scales of hundreds of millions of light-years, the fractal nature broke down. The broader distribution of galaxies could be described in terms of a simple statistical process with a well-defined mean and variance—a noise process. The cosmological principle was saved. But in the late 1980s new problems rose to threaten it. A high-resolution survey detected a “Great Wall” of galaxies 750 million light-years long, more than 250 million light-years wide and 20 million light-years thick. A noise process could not readily explain such a colossal, coherent structure. These discoveries motivated still larger mapping projects, including the Las Campanas Redshift Survey that my colleagues and I conducted from 1988 to 1994.


SLICING THROUGH THE UNIVERSE


Because the Las Campanas survey sought to measure the distribution of galaxies on a scale several times that of previous studies, it encountered a number of observational challenges. The most distant galaxies were faint, so photographing them would require a lengthy exposure time. The large survey volume increased the number of objects that had to be observed. In sum, we had to observe more objects with a longer exposure but with only limited telescope time. For these reasons, we decided to construct a survey that would be very deep (out to two billion light-years) and wide (85 degrees across the sky) but thin (1.5 degrees), effectively sampling the galaxy distribution in only two dimensions. Though thinness compromised the signal, it allowed a first glimpse of the organization of the cosmos on scales of several billion light-years. The survey made six separate maps and recorded the positions of more than 26,000 galaxies. The data were collected at the Carnegie Observatories on Cerro Las Campanas in the Atacama Desert of Chile. This information was analyzed by Stephen A. Shectman of the Carnegie Observatories, Robert P. Kirshner and Huan Lin of the Harvard-Smithsonian Center for Astrophysics, Augustus Oemler and Douglas L. Tucker of Yale University, Paul L. Schechter of the Massachusetts Institute of Technology and me.
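This geometry is easy to check with a few lines of arithmetic. The Python sketch below uses only the figures quoted above, treats each slice as a thin spherical wedge and ignores all cosmological corrections, so it is a back-of-the-envelope estimate rather than the survey's actual selection function:

```python
import math

# Survey geometry as quoted in the text (rough illustration only).
depth_ly = 2.0e9                    # depth: two billion light-years
width_deg, thick_deg = 85.0, 1.5    # angular extent of each slice
n_slices = 6

# Solid angle of one slice, treated as a small rectangle on the sky.
slice_sr = math.radians(width_deg) * math.radians(thick_deg)
sky_fraction = n_slices * slice_sr / (4.0 * math.pi)

# Euclidean volume of the surveyed wedges.
volume_ly3 = sky_fraction * (4.0 / 3.0) * math.pi * depth_ly**3

print(f"fraction of sky surveyed: {sky_fraction:.3f}")           # ~0.019
print(f"approximate volume: {volume_ly3:.2e} cubic light-years")
```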


The survey involved several steps. First, we made photometric observations— basically, highly sensitive photographs of the sky—with a charge-coupled device (CCD) camera mounted on the one-meter Swope telescope at Las Campanas. For maximum efficiency, we used a specialized technique known as drift-scan photometry, in which we pointed the telescope at the beginning of a survey field and then turned off its automated drive. The telescope stood still as the sky drifted past. Computers read information from the CCD detector at exactly the same rate as the rotation of the earth, producing one long, continuous image at a constant celestial latitude. Completing the photometry took a total of 450 hours.
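The heart of drift scanning is clocking the CCD at exactly the sky's drift rate. Below is a minimal sketch of that calculation; the plate scale and declination are illustrative stand-ins, not the Swope telescope's actual parameters:

```python
import math

SIDEREAL_RATE = 15.041  # arcseconds of right ascension per second of time

def row_transfer_rate(plate_scale: float, dec_deg: float) -> float:
    """CCD rows to shift per second so the charge tracks the drifting sky.

    plate_scale -- arcseconds of sky per pixel
    dec_deg     -- declination of the scan; east-west drift slows by
                   cos(declination) away from the celestial equator
    """
    drift_arcsec_per_s = SIDEREAL_RATE * math.cos(math.radians(dec_deg))
    return drift_arcsec_per_s / plate_scale

# Illustrative values: a 0.7 arcsec/pixel camera scanning at -42 degrees.
print(f"{row_transfer_rate(0.7, -42.0):.2f} rows per second")
```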


Second, we analyzed the strips to determine which objects were likely to be galaxies and suitable for inclusion in the survey. Candidates were chosen based on their brightness and galaxylike fuzziness. Finally, we observed these objects with a spectrograph at the 2.5-meter du Pont telescope at Las Campanas. The spectrograph broke the light down into a spectrum of colors, from which we calculated each galaxy’s redshift, a measure of its distance.
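To make that last step concrete, here is a minimal sketch of how a redshift, and from it a rough distance, follows from a shifted spectral line. The wavelengths are illustrative and the Hubble constant is an assumed value; a real pipeline fits many spectral features at once:

```python
# Rest and observed wavelengths of one absorption line, in angstroms.
LAMBDA_REST = 3933.7    # Ca II K line (illustrative choice)
lambda_obs = 4170.0     # hypothetical position of the line in a galaxy spectrum

z = (lambda_obs - LAMBDA_REST) / LAMBDA_REST    # redshift

C_KM_S = 299_792.458    # speed of light, km/s
H0 = 70.0               # assumed Hubble constant, km/s per megaparsec

d_mpc = C_KM_S * z / H0     # Hubble's law, valid at low redshift
d_mly = d_mpc * 3.262       # megaparsecs -> millions of light-years

print(f"z = {z:.4f}, distance ~ {d_mly:.0f} million light-years")
```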



Gathering enough light to measure the spectrum of a galaxy in this survey took about two hours; had we observed only one galaxy at a time, a survey of this size would have been impossible. But Shectman designed a multiple-fiber-optic system to measure the spectra of 112 galaxies simultaneously. This system worked as follows: Once we had chosen the prospective galaxies, we drilled holes in a metal plate to be mounted at the focus of the telescope. These holes corresponded to the positions of the galaxies in the sky. Into these holes we plugged fiber-optic cables that carried the light from each galaxy down to a separate channel on the spectrograph. Even with this parallel processing, it took us 600 hours of observing time over 100 nights to measure all the spectra.
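The payoff of taking 112 spectra at once can be checked against the quoted figures with trivial arithmetic (a rough consistency check; plate changes, calibration and weather are ignored):

```python
galaxies, fibers, hours_per_plate = 26_000, 112, 2

plates = -(-galaxies // fibers)        # pointings needed, rounded up: 233
print(plates * hours_per_plate)        # ~466 hours of pure exposure time
print(galaxies * hours_per_plate)      # 52,000 hours taking spectra one at a time

# ~466 hours of exposure is consistent with the 600 hours quoted once
# overhead is included; a single-fiber survey would have been impossible.
```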


SOUNDING OUT THE UNIVERSE


Looking at the maps produced by the survey, the eye is struck by the sense that the galaxies are not randomly distributed but instead tend to bunch together. Yet one must be careful of visual impressions. Our brains often seek patterns where none exist. In this case, however, statistical techniques bear out the existence of clustering.


The simplest way to measure clustering is to use correlation functions, which represent the number of pairs of objects as a function of their separation. For example, the distribution of deer in a forest is highly clustered on small scales—say, a few tens of yards. In randomly picking out deer in the forest, you would notice that you are much more likely to find another deer a few yards away than a few hundred yards away. The correlation function would show a strong positive signal on the scale of a few tens of yards and a weak or negative signal on the scale of a few hundred yards. It mathematically describes the well-known fact that deer tend to travel in small groups.
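In code, the idea boils down to counting pairs in bins of separation and comparing against an unclustered random catalogue. The one-dimensional toy model below, with hypothetical "herds" standing in for deer or galaxies, is only a sketch; real surveys work in three dimensions and use more refined pair-count estimators:

```python
import numpy as np

rng = np.random.default_rng(0)

def pair_separations(x: np.ndarray) -> np.ndarray:
    """All pairwise separations among 1-D positions x."""
    return np.abs(x[:, None] - x[None, :])[np.triu_indices(len(x), k=1)]

# Clustered catalogue: 50 herds of 5 objects each, herds ~2 units wide,
# scattered along a line 1,000 units long.
centers = rng.uniform(0, 1000, 50)
data = (centers[:, None] + rng.normal(0, 2, (50, 5))).ravel()
random = rng.uniform(0, 1000, data.size)   # unclustered comparison catalogue

bins = np.linspace(0, 50, 11)              # separation bins, 5 units wide
dd, _ = np.histogram(pair_separations(data), bins)
rr, _ = np.histogram(pair_separations(random), bins)

# Excess of data pairs over random pairs per bin: strongly positive in the
# first bin (clustering), hovering near zero at larger separations.
print(dd / rr.clip(min=1) - 1)
```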


A similar analysis can be done on galaxies, and it works well on scales that are much smaller than the size of a survey. But on larger scales it is not very informative. The problem is that the number of galaxies—and therefore the number of galaxy pairs—is set. If there is an excess of pairs at small separations, there must be a deficit of pairs at larger separations, because the total number of pairs is fixed. This zero-sum game contaminates the clustering signal on larger scales.


Fortunately, a complementary technique can reliably measure clustering at large scales: harmonic analysis, also known as power spectrum analysis. Harmonic analysis, as its name suggests, is closely allied to the study of sound. In fact, the mathematical analysis of the distribution of galaxies and of random noise is identical. (The power spectrum is conceptually related to but physically distinct from the kind of spectrum that astronomers usually study, that of light.)


Many common phenomena, such as the waves on the surface of the sea and the air pressure fluctuations in a room, are most naturally described in terms of their power spectra. In fact, the human ear performs a similar analysis on pressure fluctuations—that is, sound. The fluctuations can be thought of as a collection of pure tones, each with a certain strength. The cochleas in our ears decompose the fluctuations into their constituent tones (or frequencies). The signal sent to the brain describes the strength (or amplitude) of each tone.


The power spectrum is a measure of the strength of the pressure fluctuations as a function of frequency. It is what the graphic equalizer of a stereo displays. Large musical instruments, such as a bass or a tuba, put out a large fraction of their power at long wavelengths, which correspond to low frequencies. The sound of breaking glass consists primarily of high frequencies.


Random noise is special because it can be completely described in terms of its power spectrum. Consider two people who go to visit the same waterfall several minutes apart. Each records several minutes of sound. Although their recordings will not be the same—the sound made by the waterfall is always changing—both will record the characteristic sound of the waterfall. If the observers take their recordings and perform a harmonic analysis, they will each find the same power spectrum. The statistical properties of their two recordings are identical.
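This property is easy to demonstrate numerically. In the toy sketch below, white noise stands in for the waterfall: two independent "recordings" disagree sample by sample, yet their power spectra, obtained by averaging squared Fourier amplitudes over segments, agree closely:

```python
import numpy as np

rng = np.random.default_rng(1)

def power_spectrum(signal: np.ndarray, n_segments: int = 64) -> np.ndarray:
    """Average squared Fourier amplitudes over equal-length segments."""
    segments = signal.reshape(n_segments, -1)
    return np.mean(np.abs(np.fft.rfft(segments, axis=1)) ** 2, axis=0)

# Two visitors record different stretches of the same noise process.
recording_a = rng.normal(size=65_536)
recording_b = rng.normal(size=65_536)

p_a, p_b = power_spectrum(recording_a), power_spectrum(recording_b)

print(np.corrcoef(recording_a, recording_b)[0, 1])   # waveforms: ~0, unrelated
print(np.median(np.abs(p_a - p_b) / (p_a + p_b)))    # spectra: small (~0.06)
```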


THE COLOR OF SOUND


Noise with a flat power spectrum, corresponding to equal power at all frequencies, is called white noise. The term comes from an analogy with color. Each color has a different frequency; if you add all the colors together equally, you get white. In terms of sound, white noise is the static between radio stations. Its sound is perfectly random; at each instant the sound is unrelated to, or uncorrelated with, the sound that came before. Another special power spectrum is that of pink noise, in which each octave delivers the same power. A waterfall produces pink noise.
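The equal-power-per-octave property can be stated in one line: if the power spectrum is $P(f) = A/f$, the power contained in any octave from $f$ to $2f$ is

$$\int_{f}^{2f} \frac{A}{f'}\,df' = A \ln 2,$$

independent of $f$. A white-noise spectrum $P(f) = A$ instead delivers power $Af$ per octave, rising toward high frequencies.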


Harmonic analysis can reconcile the cosmological principle with the clustering of matter. If the universe is homogeneous and isotropic, observers sitting on planets in separate galaxies should measure the same properties for the universe on its largest scales. Of course, they will see different galaxy distributions, just as any two slices in the Las Campanas survey are different. But given enough surveys, or a survey of sufficient size, the two observers should measure the same statistical fluctuations. These fluctuations, like those of the sound of a waterfall, can be described in terms of the power spectrum.


As the universe expands and evolves, various physical processes modify the power spectrum of its large-scale structure. Cosmologists generally believe that quantum-mechanical fluctuations imparted the initial power spectrum shortly after the start of the big bang. In the late 1960s English physicist Edward R. Harrison and Russian physicist Yakov B. Zel’dovich derived a shape for this primordial power spectrum—namely, a power law with the functional form of frequency to the negative third power, a pink-noise spectrum in three dimensions.
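One way to see how "frequency to the negative third power" and "pink noise in three dimensions" fit together: in three dimensions an octave in wavenumber $k$ spans a shell of Fourier modes whose volume grows as $k^{3}$, so equal power per octave requires

$$\int_{k}^{2k} P(k')\, 4\pi k'^{2}\, dk' = \mathrm{const} \quad\Longrightarrow\quad P(k) \propto k^{-3}.$$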


Harrison and Zel’dovich both reasoned that most natural forces, including gravity, have no intrinsic length scale; they are power laws. Therefore, the initial power spectrum should be some form of power law in frequency, so it does not single out any particular length scale. They also grasped the role of the horizon size in the evolution of the universe. The horizon size is simply the distance a beam of light could have traveled in the universe since the big bang up to any particular moment. Because the influence of gravity also travels at the speed of light, two points in the universe can interact gravitationally only if they are separated by a distance less than or equal to the horizon size. As the universe ages, the horizon size grows. Therefore, the horizon size defines a natural length scale over which gravity can operate.
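In symbols, the horizon size at cosmic time $t$ is the distance light can have covered since the big bang,

$$d_H(t) = a(t)\int_{0}^{t} \frac{c\,dt'}{a(t')},$$

where $a(t)$ is the cosmic scale factor; this works out to $2ct$ while radiation dominates and $3ct$ while matter dominates. The essential point is simply that it grows steadily with time.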


What Harrison and Zel’dovich realized was that if the initial power-law spectrum was not precisely frequency to the negative third power, then one of two things would occur. If the power law were steeper—say, frequency to the negative fourth power—then fluctuations on very small scales would have been greater. In calculating the density fluctuations in the early history of the universe, when the horizon size was small, they found that many regions would have contained such a high density of matter that they would have quickly collapsed, filling the cosmos with black holes. Fortunately, this did not happen. Our very existence rules out such a power spectrum. On the other hand, if the power law were shallower, then at later times the density on large scales would fluctuate hugely. No such fluctuations exist.


Although this argument is quite persuasive to cosmologists, it does not explain how such a spectrum would have arisen. The theory of cosmological inflation provides an explanation; deriving this spectrum was an early success for the theory and remains one of its few testable consequences.


A GREAT NUMBER OF GREAT WALLS


The power spectrum of the universe today is very different from the primordial Harrison-Zel’dovich spectrum. Gravity has amplified the initial fluctuations and led to the growth of such structures as clusters of galaxies. At earlier times, the growth of fluctuations on specific scales was enhanced or retarded depending on whether the universe was dominated by matter or by radiation and whether elementary particles were light and fast-moving or heavy and slow-moving. One of the great challenges for modern cosmology is to determine how the initial power spectrum evolved into the spectrum observed today. Only in the past several years have observations, such as those of galaxy distribution and of the cosmic microwave background radiation, acquired enough data to put theories to the test.


So-called cold dark matter models are now the most popular explanations for the growth of structure. Their premise is that most of the mass in the universe resides in some unseen (hence, “dark”), relatively massive type of particle. It is “cold” because, being heavy, it travels slowly. The particle, which would interact with ordinary matter only through the force of gravity, could also account for the missing mass in galaxies and galaxy clusters.


One of the surprising results from our survey is its deviation from the cold dark matter model on scales of around 600 million light-years. At smaller scales the predictions of the model match our findings, but something strange singles out the large scales. Previous surveys had suggested such a discrepancy, and one of the principal results of Las Campanas has been to substantiate it. From the strength of the deviation and the size of the survey, we calculated the probability of seeing such a deviation purely by chance as one in several thousand. What is very interesting about this deviation is that it can be traced back to the huge structures seen in the galaxy distribution. These structures are defined by the sharp boundaries, filaments and voids in the galaxy maps. The largest are almost three billion light-years across, several times the size of the Great Wall. The association of these walls and voids with the deviation in the power spectrum is a crucial finding of the Las Campanas survey. It means that on this scale, the galaxy distribution cannot be fully characterized using the mathematics of random noise. Some other physical process must have acted to imprint this characteristic scale on the density fluctuations.


In fact, this inconsistency is what allows these walls and voids to be properly called structures. With a pure-noise process, walls and voids would occasionally appear by chance. But they would be much rarer, in keeping with the statistics of noise. They would be statistical fluctuations or chance superpositions rather than true structures. What could be responsible for the mammoth walls and voids? Gravity might be a good explanation, except that it causes smaller-scale fluctuations to collapse more quickly, simply because it takes less time for gravity to pull matter together on small scales. If gravity were the culprit, galaxy clustering should have begun on small scales and then worked its way up to large scales. For the past two decades, such a bottom-up scenario, known as hierarchical clustering, has been the paradigm for explaining structure on scales smaller than about 150 million light-years. Yet the deviations in our survey begin to appear at much larger scales. Hierarchical clustering may still apply on the small scales, but it cannot explain the walls and voids on the larger scales.


THE NEW MUSIC OF THE SPHERES


Several hypotheses have emerged, although none can yet be reconciled with all the data. The first is a hot dark matter scenario wherein the universe is dominated by light, fast-moving particles such as neutrinos. The result would be a top-down progression in structure formation starting on large scales. Unfortunately, this theory has the side effect of washing out structure on small scales, so it fails to account for the small-scale galaxy clustering.


A second hypothesis posits that the universe is less dense than cosmologists now suppose. Most of this decrease in density comes at the expense of exotic dark matter. Ordinary particles such as protons and electrons thus have a proportionately greater influence. They would have constituted a viscous fluid in the early universe. Before the universe was cool enough for the protons and electrons to combine and form atoms, sound waves reverberated through this fluid. When the protons and electrons recombined, the acoustic waves gave a boost to the gravitational collapse on certain scales. Intriguingly, an underdense universe would also resolve other cosmological conundrums.


A third hypothesis points out that 600 million light-years is roughly the horizon distance at the time when the average density of matter in the universe overtook that of radiation. Such a profound change would presumably have affected the power spectrum somehow. Whatever the final explanation, it may be that astronomers are detecting the largest unique length scale associated with any physical process in nature.


Even a survey the size of Las Campanas contains only about 50 independent measurements of the power spectrum at these large scales. Larger surveys are needed, and several are now either in the development stages or under way. An Anglo-Australian consortium called the 2dF Survey is mapping more than a quarter-million galaxies. Soon the American-Japanese Sloan Digital Sky Survey will begin to measure the distances to almost one million galaxies over half of the sky, sampling a volume 20 times greater than that of the Las Campanas survey.


These studies are not the first use of harmonic analysis in the history of astronomy. It was originally Pythagoras in the sixth century B.C. who applied musical analysis to the motion of the sun, moon, stars and planets. He believed that the celestial bodies were holes in a set of crystal spheres through which shone the celestial light. The motions of these spheres, he reasoned, must produce sounds. Their distances and their speeds must be in the same ratios as musical harmonies. This was the first “music of the spheres.”


In the 17th century Johannes Kepler, before formulating his famous laws of celestial motion, believed that the orbits of the planets could be described in terms of heavenly spheres inscribed between the five perfect Pythagorean solids. He reasoned that the harmonious ratios of music might be derived from these solids, and thus he argued for a fundamental relation between the orbits of the planets and these harmonies. This was the second music of the spheres.


Today our notion of harmonic analysis is quite different. It is based on analyzing the harmonic components of random distributions, and the sound is more like the gush of a waterfall than that of divine instruments. Although this modern endeavor may seem neither as pleasing nor as spiritual as those of the past, the concept of an isotropic universe wedded with an understanding of random fields now allows us once again to hear the music of the spheres.


THE AUTHOR


STEPHEN D. LANDY first became interested in cosmology when he was lost in the woods one night and had nothing to do but stare up at the stars. After receiving his Ph.D. in physics from Johns Hopkins University in 1994, he did postdoctoral research work at the Carnegie Observatories in Pasadena, Calif., and at the University of California, Berkeley. Currently he is a visiting scientist at the College of William and Mary.


SOURCE:

SCIENTIFIC AMERICAN Magazine, June 1999, pp. 38-45.




