University Lowbrow Astronomers

Computational Cosmology and the NSF Technology Grid.

by Dave Snyder
Printed in Reflections:  February, 1998.

(Note:  the term 10^18 means 10 to the 18th power).

October 1, 1997 marked the beginning of the NSF PACI (National Science Foundation Partnership for Advanced Computational Infrastructure).  The NSF will spend up to $340 million over the next five years to build a new information infrastructure (also known as the NSF Technology Grid).  In this article I will explain what PACI is and how cosmologists intend to use it to improve our understanding of the early universe.

In 1985, the NSF started the NSF Supercomputer Centers Program, which gave scientists and engineers across the country access to a number of high performance computers.  Use of these facilities has increased by a factor of ten every four years, and this growth is expected to continue.  The Supercomputer Centers Program has officially ended and has been replaced by PACI.  However, this is more than a simple name change.  In the past, older computers were periodically replaced with newer, more powerful ones, and PACI will ensure that this continues.  But PACI has a different focus than the Supercomputer Centers Program:  from its onset it has emphasized collaboration between government, universities and business to make sure that new hardware meets the needs of users.  In addition, there are plans to create new computer software to manage these resources; the intent is to develop a common set of tools and make them available to the research community.  (Note that PACI should not be confused with the Internet-2 Project.  PACI may make use of Internet-2, but they are not the same.)

The impetus for PACI was several large applications that cannot reasonably be executed with current technology, spanning a number of scientific and engineering areas.

I will discuss only one of these areas, namely cosmology.  Cosmology is the study of how the universe has changed during its history.  One of the first steps in understanding the evolution of the universe was the construction of a map showing where all galaxies are located within our section of the universe.  The current map shows all known galaxies within a cube of one billion billion billion cubic light years.  The study of this map has raised more questions than it has answered.  It appears that the galaxies are not evenly distributed; rather, there are vast regions of space devoid of galaxies and other regions with many.  It is not clear at the moment whether any of the current theories accurately explains the observed distribution.

The only way to test these theories is to run a computer simulation and see whether the result of the simulation matches the observations.  This requires an enormous number of calculations and doesn’t appear to be feasible with present technology.  To see why, let’s look at what is involved.

Computer scientists typically use rough back of the envelope calculations to estimate the running time of a program.  In this way it is possible to make a good guess about whether the program is feasible.  Within the cube mentioned above, there is a mass approximately 10^18 times the mass of our sun.  Let’s assume there are 10^18 objects, each with one solar mass (this isn’t true, but you have to make some simplifications when doing a simulation).  In order to simulate the evolution of the universe, you need to calculate the gravitational force on each object and use that force to predict the position and velocity of every object at the next instant in time.  A naive calculation would require computing the force between every pair of objects, which takes on the order of 10^36 operations.  There is simply no way with current technology that this calculation could be completed within the lifetime of any scientist.
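The back of the envelope arithmetic above can be written out in a few lines of Python (a sketch:  the teraflop machine rate is an assumed round number, not a measured figure):

```python
N = 10**18                      # objects, one solar mass each
pair_ops = N * (N - 1) // 2     # one force evaluation per pair, per time step
flops = 10**12                  # assume a teraflop machine (10^12 ops/second)

seconds = pair_ops / flops
years = seconds / (3600 * 24 * 365)
print(f"{pair_ops:.1e} operations -> {years:.1e} years per time step")
```

The answer comes out to roughly 10^16 years for a single time step, which is why the naive approach is hopeless no matter how fast computers get in the near term.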

Fortunately this is not necessary:  several different mathematical tricks, such as tree methods, FFT (fast Fourier transform) based methods, and adaptive grids, can be used to approximate the force acting on each object in far less time.

These terms may not be familiar to everyone, and there isn’t space here to explain them.  However, any of them allows the complicated calculation to be performed much faster with only a slight introduced error.  The most promising possibility is to use an adaptive grid along with a tree method.  Tree based programs run slower than FFT based programs, but the former seem to give much more resolution (which is necessary for useful results).  It also is likely that the first use of this simulation will use 10^9 objects, each with 10^9 solar masses.  Unfortunately, tree based methods and adaptive grids make the programming more complicated.  Furthermore, adaptive grids are hard to implement efficiently on parallel computers (and most high performance computers are parallel).  However, these problems probably can be resolved.
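To give a flavor of what a tree method looks like, here is a minimal two dimensional sketch of one well known tree algorithm (Barnes-Hut):  distant clumps of mass are treated as single points, so each force evaluation touches far fewer than N objects.  This is an illustration only; real cosmological codes work in three dimensions with force softening, adaptive refinement, and parallel domain decomposition, and the opening angle THETA and the tiny three-body setup below are assumptions chosen for the example.

```python
import math

THETA = 0.5  # opening angle: smaller values trade speed for accuracy

def build(bodies, cx, cy, half):
    """Recursively build a quadtree node over `bodies` in a square cell
    centered at (cx, cy) with half-width `half`.  Each node stores the
    total mass and center of mass of everything inside it."""
    m = sum(b[2] for b in bodies)
    comx = sum(b[0] * b[2] for b in bodies) / m
    comy = sum(b[1] * b[2] for b in bodies) / m
    node = {"m": m, "x": comx, "y": comy, "half": half, "kids": []}
    if len(bodies) > 1:                       # subdivide into four quadrants
        for dx in (-1, 1):
            for dy in (-1, 1):
                sub = [b for b in bodies
                       if (b[0] >= cx) == (dx > 0) and (b[1] >= cy) == (dy > 0)]
                if sub:
                    node["kids"].append(
                        build(sub, cx + dx * half / 2, cy + dy * half / 2,
                              half / 2))
    return node

def accel(node, x, y, eps=1e-3):
    """Approximate gravitational acceleration at (x, y), with G = 1.
    A cell that looks small from (x, y) is treated as one point mass."""
    dx, dy = node["x"] - x, node["y"] - y
    r = math.hypot(dx, dy)
    if node["half"] * 2 / (r + eps) < THETA or not node["kids"]:
        if r < eps:
            return 0.0, 0.0                   # skip self-interaction
        f = node["m"] / r**3
        return f * dx, f * dy
    ax = ay = 0.0
    for kid in node["kids"]:                  # cell is too close: open it up
        kx, ky = accel(kid, x, y, eps)
        ax += kx
        ay += ky
    return ax, ay

# Usage: three unit-mass bodies in a unit square centered on the origin.
bodies = [(-0.4, -0.3, 1.0), (0.2, 0.1, 1.0), (0.35, -0.25, 1.0)]
root = build(bodies, 0.0, 0.0, 0.5)
ax, ay = accel(root, -0.4, -0.3)              # force on the first body
```

With only three bodies the tree gives the same answer as summing every pair directly; the payoff comes at large N, where the cost drops from order N^2 to roughly N log N force evaluations per step.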

Doing the calculation is not the only problem facing the cosmologists.  Once the calculation is finished, approximately one million million bytes of data (a “terabyte”) will be produced.  This is a large amount of data that will be difficult to store, let alone move from place to place or analyze.  While moving files of a million bytes is commonly done now over the internet, I haven’t heard of anyone transmitting terabyte files over the internet (while possible, it currently would take at best many days to transmit such a file).  In addition, the analysis of such a large amount of data is almost as big a problem as the simulation itself.
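The "many days" claim is easy to check.  Here is the transfer-time arithmetic for a few line speeds (the choice of links is an illustrative assumption, roughly of late-1990s vintage):

```python
terabyte_bits = 10**12 * 8      # one terabyte, expressed in bits

for name, bits_per_sec in [("T1 line (1.5 Mbit/s)", 1.5e6),
                           ("OC-3 (155 Mbit/s)", 155e6),
                           ("OC-48 (2.5 Gbit/s)", 2.5e9)]:
    days = terabyte_bits / bits_per_sec / 86400
    print(f"{name}: {days:.1f} days")
```

Even on a fast backbone link, dedicating the entire line to one file for hours or days at a time is not something the shared internet can offer, which is why moving the data is a problem in its own right.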

It is expected that both statistical analysis and so called “visualization” will be used by cosmologists.  The latter is the process by which a large data set is presented in graphical form.  The simplest type of visualization is to produce a two dimensional picture of the data, but this will not be adequate for the cosmological data.  Here it will be necessary to show a three dimensional representation and allow a researcher to “move” through the data set in real time.  This often provides insights into the data that are impossible to gain through statistical analysis alone.  It is also desirable for one researcher to show his/her results to other researchers (some of whom might be located thousands of miles away).  Technology exists now to do all of these tasks, but with the existing internet and existing computers it is impractical to perform them on terabyte size data sets.

Cosmologists have an application that is not feasible with current technology.  But it seems likely that, with a dedicated effort, facilities could be created that will make applications like the cosmological simulation possible, and even routine, within the next few years.  This is the goal PACI is attempting to achieve.  PACI will need to include leading edge parallel computers that can perform over a million million operations per second (a “teraflop”).  These computers will need to store hundreds of gigabytes in main memory (a gigabyte is approximately one thousand million bytes) and be able to transfer this data quickly to storage devices.  This is several orders of magnitude more powerful than a typical home computer and an order of magnitude more powerful than the fastest machines in existence.
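A quick estimate shows why hundreds of gigabytes of main memory are needed for the planned 10^9-object run.  The per-object layout below (three position coordinates, three velocity components, and a mass, each an 8-byte floating point number) is an assumption for illustration; real codes store additional bookkeeping for the tree and grid:

```python
objects = 10**9                      # the planned first run
doubles_per_object = 3 + 3 + 1       # x, y, z, vx, vy, vz, mass
bytes_total = objects * doubles_per_object * 8
print(f"{bytes_total / 1e9:.0f} gigabytes just to hold the particles")
```

That is 56 gigabytes before counting the tree structure, the adaptive grid, or multiple output snapshots, so the total working set easily reaches the hundreds of gigabytes mentioned above.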

For further reading there are a series of seven articles in the Communications of the ACM, November 1997-Volume 40, Number 11, pp. 28-94, which go into much more detail about these topics (ACM stands for “Association for Computing Machinery”).  One of the articles is devoted to Cosmology.

Copyright Info

Copyright © 2013, the University Lowbrow Astronomers. (The University Lowbrow Astronomers are an amateur astronomy club based in Ann Arbor, Michigan).
This page originally appeared in Reflections of the University Lowbrow Astronomers (the club newsletter).
This page revised Sunday, March 9, 2014 4:30 PM.