GEOPHYSICS, VOL. 70, NO. 6 (NOVEMBER-DECEMBER 2005); P. 63ND–89ND, 6 FIGS.

10.1190/1.2133785

75th Anniversary

Historical development of the gravity method in exploration

M. N. Nabighian^1, M. E. Ander^2, V. J. S. Grauch^3, R. O. Hansen^4, T. R. LaFehr^5, Y. Li^1, W. C. Pearson^6, J. W. Peirce^7, J. D. Phillips^3, and M. E. Ruder^8

ABSTRACT

The gravity method was the first geophysical technique to be used in oil and gas exploration. Despite being eclipsed by seismology, it has continued to be an important and sometimes crucial constraint in a number of exploration areas. In oil exploration the gravity method is particularly applicable in salt provinces, overthrust and foothills belts, underexplored basins, and targets of interest that underlie high-velocity zones. The gravity method is used frequently in mining applications to map subsurface geology and to directly calculate ore reserves for some massive sulfide orebodies. There is also a modest increase in the use of gravity techniques in specialized investigations for shallow targets.

Gravimeters have undergone continuous improvement during the past 25 years, particularly in their ability to function in a dynamic environment. This and the advent of global positioning systems (GPS) have led to a marked improvement in the quality of marine gravity and have transformed airborne gravity from a regional technique to a prospect-level exploration tool that is particularly applicable in remote areas or transition zones that are otherwise inaccessible. Recently, moving-platform gravity gradiometers have become available and promise to play an important role in future exploration.

Data reduction, filtering, and visualization, together with low-cost, powerful personal computers and color graphics, have transformed the interpretation of gravity data. The state of the art is illustrated with three case histories: 3D modeling of gravity data to map aquifers in the Albuquerque Basin, the use of marine gravity gradiometry combined with 3D seismic data to map salt keels in the Gulf of Mexico, and the use of airborne gravity gradiometry in exploration for kimberlites in Canada.

Manuscript received by the Editor May 9, 2005; revised manuscript received July 27, 2005; published online November 3, 2005.
1 Colorado School of Mines, 1500 Illinois St., Golden, Colorado 80401-1887. E-mail: mnabighi@mines.edu; ygli@mines.edu.
2 Ander Laboratory LLC, 3604 Aspen Creek Parkway, Austin, Texas 78749. E-mail: mark@anderlab.com.
3 U. S. Geological Survey, Box 25046, Federal Center MS 964, Denver, Colorado 80225. E-mail: tien@usgs.gov; jeff@usgs.gov.
4 PRJ Inc., 12640 W. Cedar Dr., Suite 100, Lakewood, Colorado 80228. E-mail: rohansen@prj.com.
5 Colorado School of Mines (retired), 1500 Illinois Street, Golden, Colorado 80401-1887. E-mail: lafehr@bresnan.net.
6 Pearson Technologies Inc., 1801 Broadway, Suite 600, Denver, Colorado 80202. E-mail: bpearson@pearsontechnologies.com.
7 GEDCO, 815 Eighth Ave. S.W., Calgary, Alberta T2P 3E2, Canada. E-mail: jwpeirce@gedco.com.
8 Wintermoon Geotechnologies, Inc., 280 Columbine, Suite 301, Denver, Colorado 80206. E-mail: meruder@wintermoon.com.
© 2005 Society of Exploration Geophysicists. All rights reserved.


HISTORICAL OVERVIEW

Modern gravity exploration began during the first third of the twentieth century and continues to this day as a small but important element in current exploration programs (Appendix A). The first geophysical oil and gas discovery, the Nash dome in coastal Texas, was the result of a torsion-balance survey (LaFehr, 1980). A historical outline of the early development of the gravity method of exploration, from pendulums to torsion balances to gravimeters, is given by Eckhardt (1940).

Recent reviews (LaFehr, 1980; Paterson and Reeves, 1985; Hansen, 2001) document the continuous evolution of instruments, field operations, data-processing techniques, and methods of interpretation and refer to unpublished works to help provide an accurate understanding of the usefulness of gravity and magnetic methods. They also comment on the state of the geophysical literature, which allows mathematical sophistication to overshadow geologic utility (LaFehr, 1980; Paterson and Reeves, 1985). A steady progression in instrumentation (torsion balance, a very large number of land gravimeters, underwater gravimeters, shipborne and airborne gravimeters, borehole gravimeters, modern versions of absolute gravimeters, and gravity gradiometers) has enabled the acquisition of gravity data in nearly all environments, from inside boreholes and mine shafts in the earth's shallow crust to the undulating land surface, the sea bottom and surface, in the air, and even on the moon. This has required a similar progression in improved methods for correcting for unwanted effects (terrain, tidal, drift, elevation, and motion-induced) and the parallel increase in precision of positioning data.

One of the pleasant surprises in recent exploration history has been the marked improvement in gravity data acquired aboard 3D seismic vessels. In combination with better control systems, closely spaced seismic traverses, and larger, more stable ships, the quality of marine gravity data acquired at the sea surface now surpasses underwater gravity accuracy, a claim that could not have been made in 1980. And of course, modern global positioning system (GPS) instrumentation and data processing have significantly increased accuracies.

Gravity interpreters have been able to take advantage of these improvements in data acquisition and processing because of the wide availability of inexpensive workstations and personal computers. Significant early contributions (e.g., Skeels, 1947; Henderson and Zietz, 1949; Bhattacharyya, 1967; Fuller, 1967; Bhattacharyya and Chan, 1977) are still relevant today. The fundamentals of interpretation are the same today as they were 25 years ago, but GPS and small, powerful computers have revolutionized the speed and utility of the gravity method. With the availability of software running on laptop computers rather than mainframes or UNIX-based workstations, data are acquired automatically and even processed and interpreted routinely in the field during data acquisition. Information can now be transmitted from the field via satellite link, stored on centralized data servers, and retrieved on the Web. In hydrocarbon exploration, seismic models derived from prestack depth migration are routinely used as input to gravity modeling, and the latter is being used to further refine seismic depth and velocity models.

APPLICATIONS OF GRAVITY MEASUREMENTS

Gravity measurements are used at a wide range of scales and for a wide variety of purposes. On an interstellar scale, understanding the shape of the gravity field is critical to understanding the nature of the space-time fabric of the universe. On a global scale, understanding the details of the gravity field is critical in military applications which, since World War II, have stimulated much of the research and development in the areas of gravity instrumentation and building global databases. On an exploration scale, the gravity method has been used widely for both mining and oil exploration, and at the reservoir scale it is used for hydrocarbon development.

The use of gravity for exploration has included all manner of targets, beginning with the use of the torsion balance in exploring for salt domes, particularly on the U. S. Gulf Coast. The methodology was so integral to oil exploration that from 1930 to 1935, the total number of gravity crews exceeded the total number of seismic crews (Ransone and Rosaire, 1936). With the advent of more practical field instruments (see section titled History of Gravity Instrumentation), the use of gravity techniques rapidly expanded in both mining and hydrocarbon exploration for any targets for which there was a density contrast at depth, such as salt domes, orebodies, structures, and regional geology.

Gravity measurements for exploration were often made on a relative basis, where an arbitrary datum for a particular survey was established and all values were mapped relative to it. In 1939, George P. Woollard (1943) undertook a series of gravity and magnetic traverses across the United States with observations at 10-mile intervals to determine the degree to which regional geologic features were reflected in the data (Woollard and Rose, 1963). As coverage expanded in the 1940s, the need became apparent to establish a regional set of datum references, all tied back to the reference measurement of the absolute gravity field of the earth in Potsdam, Germany. This led to a program started in 1948 by the U. S. military to test the reliability of the world network of first-order international gravity bases and to simultaneously build up a secondary network of gravity control bases at airports throughout the world (Woollard and Rose, 1963). The final result was the establishment of the International Gravity Standardized Network (IGSN) (Morelli, 1974; Woollard, 1979), to which almost all modern gravity measurements are now tied.

Unlike seismic data, land and underwater gravity data seldom are outdated because the basic corrections have not changed significantly over the years. In older foothills data, the terrain corrections can be updated with modern digital elevation models, but the basic data are still valuable if the original position quality was good and the original observed gravity values are still available. However, modern airborne gravity surveys do offer the possibility of collecting much more evenly spaced data that can alleviate serious problems with spatial aliasing associated with irregularly spaced ground stations (Peirce et al., 2002).

With the advent of gravity measurements derived from satellite altimetry (see Satellite-derived Gravity), the density of gravity measurements in the oceans took a quantum leap forward. Many details of the geometry of tectonic plates, particularly in the southern hemisphere, became clear for the first time. In the 1980s, large national gravity databases


an initial precision of about 10 mGal (10^-4 m/s^2). Over the next 100 years, several incremental improvements to the reversible pendulum culminated with Helmert's substantial revision of the theory of reversible pendulums (Helmert, 1884, 1890), which brought its absolute gravity measurement precision to about 1 mGal (10^-5 m/s^2). Between 1898 and 1904, Kühnen and Furtwängler performed absolute gravity measurements to this precision in Potsdam, which became the base for the Potsdam Gravity System, introduced in 1908 and later extended worldwide by converting previous gravity measurements to this datum (Torge, 1989).

In the first half of the 1900s, several different pendulums were in use, including Sterneck's pendulum developed in 1887 (Swick, 1931), the Mendenhall pendulum developed in 1890 (Swick, 1931), the first pendulum developed for use in a submarine by Vening Meinesz (Meinesz, 1929), the Gulf pendulum developed in 1932 (Gay, 1940; Wyckoff, 1941), and the Holweck-Lejay pendulum (Dobrin, 1960). Sterneck's pendulum was used primarily as a relative instrument. With the exception of the Gulf pendulum, these pendulums were used almost exclusively for geodetic purposes. The Gulf pendulum, developed by Gulf Research and Development Company, was used extensively for oil exploration for about 10 years, with data collected from more than 8500 stations along the U. S. Gulf Coast (Gay, 1940). [For further discussions of pendulum instruments, see Heiskanen and Meinesz (1958) and Torge (1989).]
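The principle behind these instruments can be illustrated with a minimal sketch (not taken from the paper): for an idealized pendulum of equivalent length L, the period obeys T = 2π√(L/g), so g = 4π²L/T², and averaging many timed swings improves the estimate. The length and periods below are hypothetical.

    # Minimal sketch (illustrative only): recovering g from timed pendulum
    # swings via the ideal relation T = 2*pi*sqrt(L/g)  =>  g = 4*pi^2*L/T^2.
    # length_m is the (assumed known) equivalent pendulum length in meters.
    import math

    def g_from_pendulum(length_m, periods_s):
        """Average g (m/s^2) from repeated period measurements."""
        t_mean = sum(periods_s) / len(periods_s)
        return 4.0 * math.pi ** 2 * length_m / t_mean ** 2

    # A 1-m equivalent length and periods near 2.006 s give g of about 9.81 m/s^2.
    print(g_from_pendulum(1.0, [2.0060, 2.0062, 2.0059]))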

Free-fall gravimeter

Free-fall gravimeters have advanced rapidly since they were first developed in 1952. The method involves measuring the time of flight of a falling body over a measured distance, where the measurements of time and distance are tied directly to internationally accepted standards. The method requires a very precise measurement of a short time period, which only became possible with the introduction of the quartz clock in the 1950s. The first free-fall instruments used a white-light Michelson interferometer, a photographic recording system, a quartz clock, and a falling body, typically a 1-m-long rod made of quartz, steel, or invar. The final value of gravity was obtained by averaging 10 to 100 drops of several meters. These first instruments had a crude resolution of greater than 1 mGal.

By 1963, the use of a corner-cube mirror for the falling body, a laser interferometer, and an atomic clock substantially improved the sensitivity of free-fall instruments. A second corner cube was fixed and used as a reference. A corner-cube mirror always reflects a laser beam back in the direction from which it came, regardless of the orientation of the corner cube. A beam splitter divides the laser beam into a reference beam and a measurement beam, each beam forming an arm of the Michelson interferometer. Each beam is reflected directly back from its respective corner cube and again passes through the beam splitter, where the beams are superimposed to produce interference fringes at a photodetector; the fringe frequency is proportional to the velocity of the falling body.

By the early 1970s, the best measurements were in the range of 0.01 to 0.05 mGal, and by about 1980, free-fall gravimeters had replaced pendulums for absolute gravity measurements. Over time, the falling distances became shorter and the number of drops increased, making the instruments more portable. The only commercially available free-fall gravimeters are manufactured by Micro-g Solutions, Inc., and are capable of a resolution of about 1 μGal, which rivals the sensitivity of the best relative spring gravimeters. Their disadvantages are that they are still larger, slower, and much more expensive than relative gravimeters. Micro-g Solutions has built about 40 absolute free-fall gravimeters. [For reviews of free-fall gravimeters, see Torge (1989), Brown et al. (1999), and Faller (2002).]
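The time-of-flight principle described above can be made concrete with a small least-squares sketch (illustrative only, not the processing used by any particular instrument): each interferometer fringe provides a (time, distance) pair, and fitting z(t) = z0 + v0 t + ½gt² to those pairs yields g.

    # Illustrative sketch only: estimate g by fitting z = z0 + v0*t + 0.5*g*t^2
    # to (time, distance) samples of a falling corner cube.
    import numpy as np

    def g_from_drop(t, z):
        """Least-squares estimate of g (m/s^2) from fall times t (s) and distances z (m)."""
        A = np.column_stack([np.ones_like(t), t, 0.5 * t ** 2])
        z0, v0, g = np.linalg.lstsq(A, z, rcond=None)[0]
        return g

    # Synthetic drop sampled every 10 ms over roughly 0.2 m of free fall.
    t = np.linspace(0.0, 0.2, 21)
    z = 0.5 * 9.80665 * t ** 2
    print(g_from_drop(t, z))   # about 9.80665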

Torsion-balance gravity gradiometer

Starting in 1918 and continuing to about 1940, the torsion-balance gravity gradiometer, developed by Baron Roland von Eötvös in 1896, saw extensive use in oil exploration. It was first used for oil prospecting by Schweydar (1918) over a salt dome in northern Germany and then in 1922 over the Spindletop salt dome in East Texas. The sensitivity, accuracy, and relative portability of the torsion balance made it the most useful gravity exploration technology of its era. By 1930, about 125 of these instruments were being used in oil exploration worldwide. Several different torsion-balance designs were developed by various manufacturers, including Askania in Berlin and Suess in Budapest.

The Eötvös torsion balance consists of a vertically suspended torsion fiber, usually made of platinum-iridium or tungsten, with a horizontal aluminum bar suspended from its lower end. One end of the bar carries a proof mass, usually made of platinum or gold, and the other end carries an identical proof mass suspended by another fiber several centimeters below the horizontal plane of the bar. The balance bar rotates when a differential horizontal force acts on the two masses, which happens when the earth's gravitational field in the neighborhood of the balance is distorted by mass differences at depth, such that the horizontal component of gravity at one proof mass is different from that at the other proof mass. The horizontal movement of the bar twists the torsion fiber until the resistance to torsion becomes equivalent to the torque of rotation and the bar comes to rest. The magnitude of the torque can be determined by measuring the angle through which the balance has been rotated by the torque. The angular displacement is measured optically by a mirror mounted on the vertical axis of rotation of the balance bar, which reflects an image of a fixed scale to a fixed telescope or reflects a fixed beam of light to a fixed photographic plate. Because the proof masses are suspended from the torsion fiber at unequal heights, one can measure two components of the gravity gradient: the horizontal gradient perpendicular to the horizontal component of the field (the torsion), which is what can be obtained with masses at equal heights (effectively a Cavendish balance), and the difference between the two inline horizontal gradients, a quantity known as the curvature because it is the difference between the two sectional curvatures of the potential in the horizontal plane. [Detailed discussions can be found in Rybar (1923) and Barton (1928).]

With careful measurement procedures, accuracies of a few Eötvös units (1 E = 10^-9 s^-2, or 0.1 mGal/km) could be obtained; but in field operations a torsion balance typically took about three to six hours to obtain a gravity station, including setup and teardown, so four to eight stations per day could be surveyed. The instrument was placed inside a tent during measurements to protect it from the perturbing influences of wind and solar radiation.

In general, gravity gradiometers, including the torsion balance, are more sensitive to near-sensor mass changes than are gravity sensors. As a consequence, gravity gradiometers have a significant advantage over gravimeters in sensing topographic effects in land and airborne environments or bathymetric effects in marine environments. But if mass changes exist close to a gradiometer, this sensitivity can mask gravity gradient signals from deeper structures. In the 1920s and 1930s, terrain mapping was not as sophisticated as it is today; as a consequence, the torsion balance could only be used in relatively flat areas, e.g., less than 3 m of elevation change within 100 m of the station. Although the instrument had a precision of approximately 1–3 EU, uncertainties in terrain plus influences of the mass of the observer limited the practical field resolution to about ±10 EU. Today, modern gravity gradiometers are coupled with high-resolution terrain or bathymetric mapping to take advantage of the tool's sensitivity to nearby masses.

The torsion balance became obsolete with the development of spring gravimeters, although the Geophysical Institute in Budapest continued to develop the torsion balance into the 1950s (Rybar, 1957). The compact spring gravimeter proved to be a portable, rugged, and robust instrument, capable of taking dozens of measurements daily.

Spring gravimeters

Spring gravimeters measure the change in the equilibrium position of a proof mass that results from the change in the gravity field between different gravity stations. This measurement can be accomplished in one of three ways:

  1. measuring the deflection of the equilibrium position (typically done mechanically by measuring the deflection of a light beam reflected off a mirror mounted on or connected to the proof mass; see the sketch after this list);
  2. measuring the magnitude of a restoring force (typically using a capacitive feedback system, although a magnetic system also works) used to return the equilibrium position to its original state;
  3. measuring the change in a force (typically capacitive feedback) required to keep the equilibrium position at some predefined null point.
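As a minimal sketch of the first approach (with hypothetical numbers, not values from the paper), a stable "straight-line" sensor converts a proof-mass displacement into a gravity change through the spring constant, Δg = (k/m)Δx; the lever-arm magnification of the inclined-spring designs discussed below simply scales this sensitivity.

    # Illustrative sketch of a stable (straight-line) spring sensor: at
    # equilibrium m*g = k*x, so a gravity change maps to a displacement change
    # via delta_g = (k/m) * delta_x.  All numbers are hypothetical.
    def delta_g_mgal(spring_k, proof_mass_kg, delta_x_m):
        """Gravity change (mGal) implied by a proof-mass displacement (m)."""
        delta_g_si = (spring_k / proof_mass_kg) * delta_x_m   # m/s^2
        return delta_g_si * 1e5                               # 1 mGal = 1e-5 m/s^2

    # A soft spring (k = 5 N/m) and a 15-g proof mass: a 3-nm displacement
    # corresponds to about 0.1 mGal.
    print(delta_g_mgal(5.0, 0.015, 3e-9))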

Historically, spring gravimeters have been classed as stable or unstable. For stable gravimeters, the displacement of the proof mass is proportional or approximately proportional to the change in gravity. An example of a stable gravimeter is a straight-line gravimeter. For unstable gravimeters, the displacement of the proof mass introduces other forces that magnify the displacement caused by gravity and hence increase system sensitivity. Examples of unstable instruments are those with inclined zero-length springs, such as the LaCoste & Romberg (L&R) G-meters, the Worden meter, and the Scintrex meter.

Most spring gravimeters use an elastic spring for the restoring force, but a torsion wire may also be used. The theory and practical understanding of such instruments has been known since Robert Hooke formulated the law of elasticity in 1678, and various spring balance instruments have been in practical use since the start of the 18th century. John Herschel first proposed using a spring balance to measure gravity in 1833. But it was not until the 1930s that the demands of oil exploration, which required that large areas be surveyed quickly, and advances in material science led to the development of a practical spring gravimeter.

The simplest design is the straight-line gravimeter, which consists of a proof mass hung on the end of a vertical spring. Straight-line gravimeters are used primarily as marine meters. The first successful straight-line marine gravimeter was developed by A. Graf in 1938 (Graf, 1958) and was manufactured by Askania. L&R also manufactured a few straight-line marine gravimeters.

To obtain the higher resolution required for land gravimetry, a more sophisticated spring balance system was developed, involving a mass on the end of a lever arm with an inclined spring. The added mechanical advantage of the lever arm increased sensitivity by a factor of up to 2000. The first such system, developed by O. H. Truman in 1930, was manufactured by Humble Oil Company and had a sensitivity of about 0.5 mGal.

Between 1930 and 1950 more than 30 types of spring gravimeter designs were introduced, but by far the most successful was developed in 1939 by Lucien LaCoste and was manufactured by L&R. Since then, L&R has built more than 1200 G-meters and 232 two-screw D-meters for use on land, 142 air/sea meters for use on ships and airplanes, about 20 ocean-bottom meters, 16 borehole gravity meters, and 2 moon meters, one of which was deployed during the Apollo 17 mission (the moon meter did not work).

The key to the L&R sensor was the zero-length spring invented by LaCoste (LaCoste, 1934). The zero-length spring made relative gravimeters much easier to make, calibrate, and use (LaCoste, 1988). The L&R gravity sensor makes routine relative gravity measurements to an accuracy of about 20 μGal without corrections for instrument errors (Valliant, 1991) and, with great care taken in correcting for both instrumental and external errors, down to 1–5 μGal in the field and 0.2 μGal in a laboratory (Ander et al., 1999). To obtain such high precision, range changing must not be performed, and system corrections must be adjusted for temperature and pressure changes, sensor drift (primarily spring hysteresis and creep), and vibration. LaCoste's creative genius dominated the field of gravity instrumentation for more than half a century (Clark, 1984; Harrison, 1995).

In 1948, Sam Worden of Worden Gravity Meter Company introduced an inclined zero-length spring gravimeter that uses a spring made of fused quartz. The Worden system configuration is similar to the L&R meter except that it uses a lighter proof mass (5 mg) than the L&R instrument (15 grams). Worden enclosed his sensor in a vacuum flask, which greatly reduces the instrument's temperature and pressure sensitivity. As a result, the Worden meter is smaller, lighter, and faster and uses less power than the L&R meter. The practical sensitivity of the Worden meter is about 0.01 mGal. There are several advantages and disadvantages to using a quartz spring rather than a metal spring. Quartz springs are easier and faster to manufacture than metal springs; metal springs fatigue, and quartz springs do not. However, quartz springs are much more


The L&R BHGM is thermostatically controlled to operate at temperatures up to 125°C. It can only access well casings with at least a 5 1/2-in diameter, and it can only make gravity measurements up to 14° from vertical, which severely limits access to petroleum wells and gives almost no access to mining boreholes. Despite its severe limitations, the L&R BHGM has proven to be a valuable tool in a variety of applications. L&R manufactured 16 BHGMs, of which 13 still exist today.

Underwater gravity instruments

In the 1940s, extensive seafloor gravity measurements were made for oil exploration in the Gulf of Mexico using specially designed diving bells developed by Robert H. Ray Company (Frowe, 1947). Diving operations were hazardous; therefore, remote-control underwater gravimeters were created. The underwater gravimeters consist of a gravity sensor, a pressure housing, a remote control and display unit on the vessel, an electronic cable connection, and a winch with a rope or cable to lower and raise the system. In addition, the system remotely levels, clamps/unclamps, reranges, and reads the sensor. The sensor must be strongly damped to operate on the ocean bottom. One of the first ocean-bottom systems was the Gulf underwater gravimeter (Pepper, 1941). Ocean-bottom gravimeters have also been built by Western and by L&R. The L&R U-meter is the most popular underwater gravimeter today and has a maximum depth of about 60 m. Underwater gravity measurements have accuracies on the order of 0.01 to 0.3 mGal, depending on sea state, seafloor conditions, and drift rate control, with survey rates on the order of 10 to 20 stations per day, depending on the depth of the measurement and the distribution of stations.

An adaptation of the underwater meter was the long-line system developed by Airborne Gravity Ltd. It was designed to operate remotely on a cable suspended from a helicopter. This allowed surveying in rough or forested terrain without having to land the helicopter. Scintrex developed a similar system for their meters.

Moving-platform gravity instruments

Large areas can be covered quickly using gravity sensors attached to moving platforms such as trucks, trains, airplanes, helicopters, marine vessels, or submarines. In such systems, large, disturbing accelerations result from vehicle motion and shock and are a function of (1) external conditions such as wind, sea state, and turbulence; (2) the platform type and model; (3) the navigational system; and (4) the type and setup of the gravimeter.

The primary factor limiting moving-platform gravity measurement resolution is how well the external accelerations are known, particularly vertical acceleration, because the vertical component of acceleration adds directly to the gravity measurement. The other components of acceleration couple differently to the gravity sensor. The horizontal components of acceleration, depending on their orientation to the gravity sensor and the orientation of the gravity sensor to vertical, will have a more indirect and damped effect on gravity measurements and may exhibit cross-coupling effects. In cross-coupling, which depends on instrument design, components of horizontal acceleration couple through the instrument to produce an effect similar to vertical acceleration. In addition, corrections must also be made for Coriolis acceleration resulting from the direction of the moving platform relative to the rotation of the earth. Finally, all platforms are subject to high-frequency vibrations.

Gravity measurements on moving platforms are made primarily with spring gravimeters; vibrating-string and force-balanced accelerometers are used much less, particularly in airborne systems. The gravity sensor and the setup must be heavily damped against vibrations. Setup includes platform stabilization such as a gyrostabilizer or gimbaled suspension. Low-pass filtering is typically applied to minimize the effect of high-frequency accelerations of the platform. On marine vessels, vertical accelerations can have effects as large as 10^5 mGal, with frequencies from 0.05–1 Hz. On an aircraft, the effects are on the order of 20 000 mGal but have a broader frequency range of 0.002–1 Hz. In addition, airborne gravimeters require short averaging times because of their high relative velocities.

The first shipborne gravity instruments were gas-pressured gravimeters, developed in 1903 and used until about 1940. They used atmospheric pressure as the counterforce to gravity acting on the mass of a mercury column (Hecker, 1903; Haalck, 1939). Extensive gravity measurements began in submarines starting in 1929, when Vening Meinesz modified a pendulum to operate on a submarine (Meinesz, 1929). This instrument was used in submarines until about 1960 and had a precision of about 2 mGal. The first marine spring gravimeter was the straight-line marine gravimeter developed by A. Graf in 1938 (Graf, 1958) and manufactured by Askania as the Seagravimeters Gss2 and Gss3. This instrument was used from 1939 to the 1980s and had a precision of about 1 mGal (Worzel, 1965). The L&R spring gravimeters were first modified for use on a submarine in 1954 (Spiess and Brown, 1958) and then on a ship in 1958 (Harrison, 1959; LaCoste, 1959). In 1959, an L&R gravimeter was used to make the first airborne gravity measurement tests (Nettleton et al., 1960; Thompson and LaCoste, 1960). In 1965, L&R developed its S-meter, a stabilized-platform gravimeter for use on ships and in airplanes (LaCoste, 1967; LaCoste et al., 1967). The first helicopter gravity surveys using the S-meter were made in 1979 and were accurate to about 2 mGal. Then in the mid-1980s, the S-meter was adapted for use in deep-sea submersibles (Luyendyk, 1984; Zumberge et al., 1991). Today, the L&R air-sea gravimeter, which uses a gyroscopically stabilized platform, is the most widely used moving-platform gravimeter, with 142 systems built since 1967. Another important system is the Bodenseewerk KSS30, an improved version of the Askania Gss2.

Carson Services, Inc., has offered airborne gravity surveys using L&R meters in helicopters and Twin Otters since 1978. The advent of GPS dramatically improved accuracy, and several companies offered airborne gravity services in fixed-wing aircraft. A series of unfortunate plane crashes caused a major reevaluation of safety procedures, resulting in the creation of the International Airborne Geophysics Safety Association (IAGSA) in 1995 and a realignment of the airborne gravity business. In 1997, Sander Geophysics developed its AirGrav airborne inertially referenced gravimeter system, based on a three-axis accelerometer with a wide dynamic range. The system has extremely low cross-coupling. Sander Geophysics has built four AirGrav systems so far. A Russian system that uses accelerometers has been introduced recently by Canadian MicroGravity, and two of these systems are being operated by Fugro Airborne Surveys. Today, airborne gravity surveying is offered by several companies, including Carson, Fugro, and Sander Geophysics.

Currently, the best commercial marine gravity measurements have a resolution of about 0.1 mGal over not less than a 500-m half-wavelength. The best commercial airborne gravity measurements have a resolution of better than 1 mGal over not less than a 2-km half-wavelength from an airplane and better than 0.5 mGal over less than a 1-km half-wavelength from a helicopter. These performance figures are hotly debated, and it is often difficult to find comparable data from different companies because there are many ways to present resolution performance. [A thorough discussion of gravimetry applied to moving platforms can be found in Torge (1989).]
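One concrete example of the motion corrections discussed above is the Eötvös (Coriolis) correction for a platform moving over the rotating earth. The sketch below uses the widely quoted formula E (mGal) = 7.503 V cos φ sin α + 0.004154 V², with speed V in knots, latitude φ, and heading α measured clockwise from north; the numerical example is illustrative only and is not drawn from the paper.

    # Illustrative sketch of the standard Eotvos (Coriolis) correction for a
    # moving platform: E [mGal] = 7.503*V*cos(lat)*sin(heading) + 0.004154*V^2,
    # with speed V in knots and heading measured clockwise from true north.
    import math

    def eotvos_correction_mgal(speed_knots, latitude_deg, heading_deg):
        """Eotvos correction (mGal) applied to gravity measured on a moving platform."""
        lat = math.radians(latitude_deg)
        hdg = math.radians(heading_deg)
        return (7.503 * speed_knots * math.cos(lat) * math.sin(hdg)
                + 0.004154 * speed_knots ** 2)

    # A ship steaming east at 10 knots at 30 degrees latitude: roughly 65 mGal,
    # which is why accurate GPS velocities matter so much for marine and
    # airborne gravity.
    print(eotvos_correction_mgal(10.0, 30.0, 90.0))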

Rotating-disk gravity gradiometers

Since World War II, gravity instrumentation and the proliferation of global gravity data have been strongly fueled by various national defense needs. Today, there are two commercially available gravity gradiometers: the FTG by Bell Aerospace (now Lockheed Martin) and the Falcon by BHP Billiton. Both are a direct result of gravity gradiometry developments by the U. S. Navy. The FTG is used for land, marine, submarine, and airborne surveys, and the Falcon is used for airborne surveys only. In addition, Stanford University, the University of Western Australia, and ArkEx are each designing their own new airborne gravity gradiometer systems.

With the development of intercontinental ballistic missiles (ICBMs) early in the Cold War, a need arose for gravity mapping around all launch sites to correct a missile's flight path for perturbing gravitational effects resulting from local mass differences. With the advent of missile-launch-capable submarines, instruments were needed to collect detailed high-resolution gravity data on board submarines to map underwater missile launch sites. Although various gravity instruments had been used on board submarines since 1929 (Meinesz, 1929), they were inadequate for the Navy's requirements. To meet this need, the Navy developed a modern gravity gradiometer. In the late 1960s, Bell Aerospace (now Lockheed Martin), Hughes Aircraft, and MIT each began developing a classified gravity gradiometer for use on Navy submarines. The U. S. Navy chose to develop and deploy the Bell gravity gradiometer, known as the Full Tensor Gradient System, or FTG. The U. S. government dropped the development of the MIT instrument, but the Hughes Aircraft instrument, known as the Forward Gravity Gradiometer (named after its inventor, Robert L. Forward, a well-known gravity physicist and celebrated science-fiction author), continued its development under the auspices of the National Security Agency. After many years, the work on the Forward gradiometer was also discontinued.

The FTG system uses three small-diameter gravity gradient instruments (GGIs) mounted on an inertially stabilized platform. Each GGI contains four gravity accelerometers mounted on a rotating disk in a symmetric arrangement such that each of the individual accelerometer input axes is in the plane of the rotating disk, parallel to the circumference of the disk, and separated by 90°. The individual accelerometers consist of a proof mass on a pendulum-like suspension that is sensed by two capacitive pick-off rings located on either side of the mass. The signal generated by the pick-off system is amplified and converted to a current that forces the proof mass into a null position. The current is proportional to the acceleration. Vehicle accelerations are eliminated by frequency separation, where the gradient measurement is modulated at twice the disk-rotation frequency (0.25 Hz), leading to a forced harmonic oscillation. Any acceleration from a slight imbalance of opposing pairs of accelerometers is modulated by the rotation frequency. This permits each opposing pair of accelerometers to be balanced precisely and continuously. Six gravity gradient components are measured and referenced to three different coordinate frames. From these six components, five independent components can be reconstructed in a standard geographic reference frame. The remaining components of the gravity gradient tensor are constructed from Laplace's equation and the symmetry of the tensor. [For a review of the FTG sensor design, see Jekeli (1988) or Torge (1989).]

Although FTGs were initially intended for mapping underwater launch sites, submarines collected gravity data the entire time they were at sea. As submarine captains gained experience, they started to use underwater gravity as a navigational aid. The FTG data provided highly accurate mapping of seamounts and other sources of ocean-floor relief. The FTG, a very large instrument, was not portable but fit well into large nuclear submarines. In the mid-1980s, the U. S. Air Force began operating the FTG in a large van for land measurements. Later, the van was loaded onto a C130 aircraft for airborne measurements (Eckhardt, 1986).

In the mid-1990s, after the Cold War, both the Bell and the Forward instruments were declassified. In 1994, as a consequence of the downsizing of the U. S. missile submarine fleet, the U. S. government allowed both instruments to become commercialized to maintain the technology for future use. Bell Geospace acquired commercial rights to the FTG for marine surveying and immediately began acquiring marine data in the Gulf of Mexico for the hydrocarbon industry. At the same time, Lockheed Martin began to reengineer the FTG design to fit into a more portable platform, and by 2002, the FTG was deployed on fixed-wing airborne platforms. The FTG has also been used for land surveys over oil fields that are in secondary and tertiary recovery (DiFrancesco and Talwani, 2002).

Meanwhile, BHP Billiton, in agreement with Lockheed Martin, developed the Falcon system (Lee, 2001) for operation in small surveying aircraft aimed at shallow targets of interest to mineral exploration (see Case History). The Falcon, which collected its first airborne data in 1997, uses a single, large-diameter GGI with its axis of rotation close to vertical. The GGI is kept referenced to geographic coordinates so that the Falcon measures the differential curvature components of the gravity gradient tensor. The data may be transformed to the vertical gravitational acceleration, its vertical gradient, or any of the other components of the gravity gradient tensor by Fourier transform or equivalent-source techniques.
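The reconstruction step described above (using Laplace's equation and the symmetry of the tensor) can be made concrete with a small sketch: given five independent components, the symmetric, trace-free gravity gradient tensor is filled in completely. The component names, units, and values below are illustrative and do not represent the FTG's actual output format.

    # Illustrative sketch: complete a symmetric, trace-free gravity gradient
    # tensor from five independent components (units: Eotvos).  Laplace's
    # equation requires Txx + Tyy + Tzz = 0; symmetry supplies the lower triangle.
    import numpy as np

    def full_gradient_tensor(txx, tyy, txy, txz, tyz):
        tzz = -(txx + tyy)                 # Laplace's equation
        return np.array([[txx, txy, txz],
                         [txy, tyy, tyz],  # symmetry: Tyx = Txy, etc.
                         [txz, tyz, tzz]])

    # Hypothetical measured values in Eotvos:
    T = full_gradient_tensor(12.0, -5.0, 3.5, -8.0, 2.0)
    print(T)
    print(np.trace(T))   # ~0 by construction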
The tremendous advantage that airborne gradiometry systems provide over conventional land/marine/airborne gravity systems is in their noise-reduction capabilities, speed of acquisition, and accuracy. The gradiometer design is relatively insensitive to aircraft accelerations, and with modern GPS


owning and operating gravity equipment. To the surprise of many, a new era of very high-quality surface-ship gravity data was ushered in by the very technology that had been thought to put gravity out of business: the closely spaced traverses of 3D seismic operations. Better determination of the Eötvös effects through improved GPS acquisition and processing; more stable platforms on the new, very large ships; and more comprehensive data reduction required by the very large data sets have led to better definition, in both wavelength and amplitude, of gravity anomalies acquired by surface ships than those obtained from underwater surveys. The definition of submilligal anomalies over subkilometer wavelengths became a reality.

Airborne operations

In reviewing the history of dynamic gravity, it is worth recalling the initial skepticism endured by early advocates of both marine and airborne operations. Even one of the most prominent pioneers of gravity exploration technology, L. L. Nettleton, commented in the early 1960s that it was impossible to obtain exploration-quality data in moving environments. He reasoned that accelerations caused by sensor motion are mathematically indistinguishable from accelerations caused by subsurface density contrasts. (Of course, he had no way of knowing the impact GPS technology would have in identifying and removing motional effects.) Another prominent pioneer, Sigmund Hammer, held similar views until he was introduced to airborne gravity data flown over a large, shallow salt dome in the Gulf of Mexico. He was so excited by the potential of the method that he named his resulting paper "Airborne gravity is here," which generated intense and clamorous discussions (Hammer, 1983).

The important changes in marine gravity brought about by modern GPS acquisition and processing also revolutionized airborne operations. The high damping constant gives the L&R meter sensor a time constant of 0.4 ms (LaCoste, 1967). In the early 1970s, the highly filtered output of the gravimeter was sampled and recorded at 1-min intervals. Sometime around 1980, better digital recording and the desire to fine-tune cross-coupling corrections led to 10-s sampling, which was the standard until the early 1990s. Improved control electronics have helped to optimize the inherent sensitivity of gravity sensors, and GPS has provided accurate corrections for ship and airplane motion. At present, 1-Hz sampling is common for marine gravity acquisition, and 10-Hz sampling is common for airborne gravity.

GPS has provided the means to measure boat and aircraft velocity changes very accurately. This increased accuracy has led to faster reading of the gravimeter, more accurate corrections, less filtering, and minimized signal distortion as a result of filtering.

Borehole gravity

The acquisition of gravity data in boreholes was discussed as early as 1950, when Neal Smith (1950) suggested that sampling large volumes of rock could improve rock-density information. Hammer (1950) reported on the determination of density by underground gravity measurements. Although a considerable amount of time and money was expended in the early development of downhole measurements (Howell et al., 1966), this activity did not become a viable commercial enterprise until the advent of the L&R instrument in the 1960s and 1970s. During this era, data acquired by the U. S. Geological Survey (McCulloh, 1965) and Amoco Corp. (Bradley, 1974) confirmed the assessment previously made by Smith and resulted in L&R designing and building a new tool with dimensions and specifications more suitable for oil and gas exploration and exploitation.

In a BHGM survey, the average formation density is determined from Δg = 4πGρΔz, where Δz is the height difference between two points on the profile, Δg is the gravity difference between those two points, G is the universal gravitational constant, and ρ is the average formation density between those two points. The BHGM is the only logging tool capable of directly measuring average density at tens of meters from a well, and it is the only logging tool that can reliably obtain bulk density through casing. Because the BHGM is the only density logging tool that samples a large volume of formation, to first order it is not affected by near-borehole effects such as drilling mud, fluid invasion, formation damage, and casing or cement inhomogeneities.

Since 1970, about 1100 wells have been logged with the L&R instrument, but the prediction that borehole gravity use would increase (LaFehr, 1980) has not yet been borne out, primarily because the physical limitations of the BHGM have yet to be overcome. The difficulty lies in improving the limits of temperature, hole size, and deviation in such a way as to increase the applicability of the tool.

BHGMs have been used in exploration, formation evaluation, early and mature field development, enhanced oil recovery, and structural delineation (Chapin and Ander, 1999a, b). In particular, oil companies have used BHGMs in multiyear, time-lapse oil-production monitoring (Schultz, 1989; Popta et al., 1990). The BHGM is also an outstanding tool in exploration for bypassed oil and gas, reliably indicating deposits previously overlooked. In addition, it has played a role in the study of possible sites for the burial of nuclear waste and has yielded interesting confirmation of the use of the normal free-air correction (LaFehr and Chan, 1986).
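A minimal sketch of the interval-density calculation implied by the relation above (solving Δg = 4πGρΔz for ρ); the unit conventions (mGal for Δg, meters for Δz) are assumptions made for illustration, and real surveys also account for the free-air gradient and other corrections.

    # Minimal sketch: average interval density from the relation
    # delta_g = 4*pi*G*rho*delta_z, rearranged for rho.  Units assumed:
    # delta_g in mGal, delta_z in meters, result in kg/m^3.
    import math

    G = 6.674e-11   # universal gravitational constant, m^3 kg^-1 s^-2

    def bhgm_interval_density(delta_g_mgal, delta_z_m):
        delta_g = delta_g_mgal * 1e-5            # 1 mGal = 1e-5 m/s^2
        return delta_g / (4.0 * math.pi * G * delta_z_m)

    # A 2.1-mGal gravity difference over a 10-m station interval implies an
    # average formation density of roughly 2500 kg/m^3.
    print(bhgm_interval_density(2.1, 10.0))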

Satellite-derived gravity techniques

The modern era of satellite radar altimetry, beginning with Seasat in 1978, ushered in a golden age for imaging and mapping the global marine geoid and its first vertical derivative, the marine free-air gravity field. The advent of a public-domain global marine gravity database with uniform coverage and measurement quality (Sandwell and Smith, 1997, 2001) provided a significant improvement in our understanding of plate tectonics. This database represents the first detailed and continuously sampled view of marine gravity features throughout the world's oceans, enabling consistent mapping of large-scale structural features over their entire spatial extent. Understanding of the marine free-air gravity field continues to improve as additional radar altimeter data are acquired by new generations of satellites. The current CHAMP mission directly measures the global terrestrial and marine gravity fields at an altitude of 400 km (Reigber et al., 1996). This direct measurement provides important information on the long-wavelength components of the global gravity field.


The GRACE and GOCE satellites are also designed for direct measurement of the global gravity field (Tapley et al., 1996; Tapley and Kim, 2001).

The Seasat mission, launched by NASA in 1978, was equipped with oceanographic monitoring sensors and a radar altimeter. The altimeter was designed to measure sea-surface topography in an attempt to document the relief caused by water displacement from either large-scale ocean currents (e.g., the Gulf Stream) or water mounding caused by local gravity anomalies within the earth's crust and upper mantle. Earlier missions [Skylab (1973) and GEOS-3 (1975–1978)] provided proof of concept that radar altimetry could image ocean-surface relief. The three-month Seasat mission provided the first complete imaging of sea-surface relief and completely changed earth scientists' understanding of the tectonic processes at work, from the continental margins to abyssal plains and from midoceanic ridges to subduction zones.

Sea-surface topography can be used to compute the marine gravity field (Sandwell and Smith, 1997; Hwang et al., 1998). The approach is based on the ocean's ability to deform and flow in the presence of an anomalous mass excess or deficiency. The ocean surface, as a liquid, is capable of responding dynamically to lateral density contrasts within the solid earth: denser columns of rock will amass more seawater above them. The mean sea surface is the geoid, the equipotential surface defined throughout the world's oceans (and continuing onshore). Prior to the Seasat mission, geodesists understood that geoidal relief should be in the tens to hundreds of meters (Rapp and Yi, 1997). With the Seasat results, however, geodesists and geophysicists could finally document the geoid's relief, wavelength, and anomaly character (Haxby et al., 1983; Stewart, 1985). Once the marine geoid could be mapped, deriving the vertical component of the gravity field became a simple derivative computation because the geoid is its integral.

Fueled by the success of Seasat and the insight it provided the geologic community, new missions were planned and successfully launched by NASA and the European Space Agency (ESA). The Topex/Poseidon, Geosat, ERS1, and ERS2 satellites provided improved resolution and accuracy in mapping sea-surface topography. Continued research into better defining the gravity field and its implications for the earth's tectonic history has been conducted by Sandwell and Smith (1997), Cazenave et al. (1996), Cazenave and Royer (2001), and others.

For more than 15 years, the exploration community has made tremendous use of the global marine satellite-derived gravity field. Our ability to image structural components within the continental margins worldwide has produced countless important new leads offshore. Despite the relatively long-wavelength resolution (7–30 km) of the satellite-derived gravity field (Yale et al., 1998), its ubiquitous coverage and consistent quality are invaluable (Figure 1). Follow-up missions have been proposed (Sandwell et al., 2003) for flying higher-resolution and more precise altimeters with more closely spaced orbital paths. These would further enhance the resolution of the derived gravity field, allowing 5–10-km anomaly wavelength resolution with 2–5-mGal accuracy. Although the gravity anomalies from individual salt domes may never be imaged from satellite-based radar altimeters, individual basins and their structural complexity have already been mapped with greater accuracy.
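The "simple derivative computation" mentioned above can be sketched as follows: in a planar, spectral approximation, the free-air anomaly is obtained from geoid height N by multiplying its Fourier transform by γ|k|, with γ ≈ 9.81 m/s² and |k| the radial wavenumber in rad/m. This is a simplified stand-in for the slope-based processing actually used by Sandwell and Smith; the grid spacing and synthetic field below are assumptions for illustration.

    # Simplified planar-approximation sketch (not Sandwell & Smith's actual
    # processing): free-air gravity anomaly from a gridded geoid height N via
    # delta_g(k) = gamma * |k| * N(k) in the wavenumber domain.
    import numpy as np

    def gravity_from_geoid(N, dx, gamma=9.81):
        """N: 2D geoid height grid (m); dx: grid spacing (m); returns mGal."""
        ny, nx = N.shape
        kx = 2.0 * np.pi * np.fft.fftfreq(nx, d=dx)
        ky = 2.0 * np.pi * np.fft.fftfreq(ny, d=dx)
        kmag = np.sqrt(kx[np.newaxis, :] ** 2 + ky[:, np.newaxis] ** 2)
        dg = np.real(np.fft.ifft2(gamma * kmag * np.fft.fft2(N)))
        return dg * 1e5                      # m/s^2 -> mGal

    # Example: a 0.1-m geoid undulation with a 128-km wavelength on a 5-km grid
    # maps to roughly 5 mGal of free-air anomaly.
    x = np.arange(256) * 5000.0
    N = np.tile(0.1 * np.sin(2 * np.pi * x / 128000.0), (256, 1))
    print(round(gravity_from_geoid(N, 5000.0).max(), 1))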

DATA REDUCTION AND PROCESSING

Gravity data reduction is a process that begins with a gravimeter reading at a known location (the gravity station) and ends with one or more gravity anomaly values at the same location. The gravity anomaly values are derived through corrections to remove various effects of a defined earth model. The basic reduction of gravity data has not changed substantially during the past 75 years; what has changed is the speed of the computations. In the late 1950s, Heiskanen and Meinesz (1958) maintained that barely more than one rough-mountain station a day could be reduced by one computer. In 1958, a "computer" was a person who calculated data. Today, with digital terrain data and electronic computers, full terrain and isostatic corrections can be calculated in seconds.

Corrections leading to the complete Bouguer anomaly are relatively independent of the geology and are called the standard reduction by LaFehr (1991a). The isostatic correction, on the other hand, requires selecting a geologic geodynamic model for isostatic compensation. Additional processing options, such as the specification of a nonstandard reduction density to remove residual terrain effects (Nettleton, 1939) or terrain corrections using variable densities (Vajk, 1956; Grant and Elsaharty, 1962), stray even farther into the realm of data interpretation.
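As a deliberately simplified sketch of the standard reduction mentioned above, a simple Bouguer anomaly at a land station is observed gravity minus theoretical gravity, plus the free-air correction (about 0.3086 mGal/m) and minus the infinite-slab Bouguer correction (about 0.04193·ρ mGal per meter, with ρ in g/cm³). Terrain, tidal, and drift corrections are omitted, and the use of the 1980 International Gravity Formula and the station values below are assumptions of the sketch.

    # Simplified sketch of a simple Bouguer anomaly (no terrain, tide, or
    # drift corrections).  Theoretical gravity uses the 1980 International
    # Gravity Formula; station values are illustrative.
    import math

    def simple_bouguer_anomaly(obs_mgal, lat_deg, elev_m, density_gcc=2.67):
        lat = math.radians(lat_deg)
        g_theory = 978032.7 * (1.0 + 0.0053024 * math.sin(lat) ** 2
                               - 0.0000058 * math.sin(2.0 * lat) ** 2)
        free_air = 0.3086 * elev_m                      # mGal
        bouguer_slab = 0.04193 * density_gcc * elev_m   # mGal (infinite slab)
        return obs_mgal - g_theory + free_air - bouguer_slab

    # Hypothetical station at 45 N and 1200 m elevation with 980 340 mGal observed:
    print(simple_bouguer_anomaly(980340.0, 45.0, 1200.0))   # roughly -44 mGal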

Figure 1. Satellite-derived marine free-air gravity field, merged with the terrestrial gravity field, published by Sandwell and Smith (2001), courtesy of the NOAA-NGDC.


variable surface density models have been proposed by Vajk (1956) and Grant and Elsaharty (1962).

Additional corrections

Even after topographic correction, the Bouguer anomaly contains large negative anomalies over mountain ranges, indicating the need for additional corrections, such as isostatic and decompensation corrections, which require some knowledge or assumptions about geologic models. Isostatic corrections are intended to remove the effect of masses in the deep crust or mantle that isostatically compensate for topographic loads at the surface. Under an Airy model (Airy, 1855), the compensation is accomplished by crustal roots under the high topography, which intrude into the higher-density material of the mantle to provide buoyancy for the high elevations. Over oceans, the situation is reversed. The Airy isostatic correction assumes that the Moho is like a scaled mirror image of the smoothed topography, that the density contrast across the Moho is a constant, and that the thickness of the crust at the shoreline is a known constant. Scaling is determined by the density contrast and by the fact that the mass deficiency at depth must equal the mass excess of the topography for the topography to be in isostatic equilibrium. Isostatic corrections can also be made for the Pratt model, in which the average densities of the crust and upper mantle vary laterally above a fixed compensation depth. Isostatic corrections are relatively insensitive to the choice of model and to the exact parameter values used (Simpson et al., 1986). The isostatic residual gravity anomaly is preferred for displaying and modeling the density structure of the middle and upper crust.

Like terrain corrections, early isostatic corrections were accomplished by means of templates and tables. Some implementations of isostatic corrections using digital computers and digital terrain data include Jachens and Roberts (1981), Sprenke and Kanasewich (1982), and Simpson et al. (1983). The latter algorithm was used to produce an isostatic residual gravity anomaly map for the conterminous United States (Simpson et al., 1986).

The isostatic correction is designed to remove the gravity effect of crustal roots produced by topographic highs or lows but not the effect of crustal roots derived from regions of increased crustal density without topographic expression. The decompensation correction (Zorin et al., 1985; Cordell et al., 1991) is an attempt to remedy this. It is calculated as an upward-continued isostatic residual anomaly, taken to represent the anomalies produced in the deeper crust and upper mantle. The correction is subtracted from the isostatic residual anomaly to produce the decompensation gravity anomaly. The decompensation correction has been applied to isostatic residual gravity anomaly data of Western Australia to compare the oceanic crust with the shallow continental crust (Lockwood, 2004).
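A minimal sketch of the Airy mass-balance rule described above: equating the mass excess of the topography to the mass deficiency of the root gives a root thickness r = h·ρc/(ρm − ρc) beneath topography of height h. The density values used below are common textbook assumptions, not values from the paper.

    # Minimal sketch of Airy isostatic compensation: root thickness r under
    # topography of height h from mass balance, r = h * rho_c / (rho_m - rho_c).
    # Densities below are common textbook assumptions (kg/m^3).
    def airy_root_thickness(h_m, rho_crust=2670.0, rho_mantle=3270.0):
        """Crustal root thickness (m) compensating topography of height h (m)."""
        return h_m * rho_crust / (rho_mantle - rho_crust)

    # 2 km of topography implies a root of roughly 8.9 km for these densities.
    print(airy_root_thickness(2000.0))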

Gridding

Once gravity data are reduced to the form of gravity anomalies, the next step usually involves gridding the data to produce a map, to apply filters, or to facilitate 3D interpretation. Because gravity data can be collected along profiles such as ship tracks or roads, as well as in scattered points, the standard gridding algorithm is minimum curvature (Briggs, 1974). For situations such as marine surveys with parallel ship tracks or airborne surveys, the gridding algorithm may be required to reduce cross-track aliasing. Here, an algorithm with some degree of trend enhancement, such as anisotropic kriging (Hansen, 1993) or gradient-enhanced minimum curvature (O'Connell et al., 2005), could be used.
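The gridding step itself can be sketched as follows for hypothetical scattered stations. SciPy's general-purpose cubic interpolator is used here only as a stand-in; it is not a minimum-curvature or kriging algorithm, so the example shows the workflow rather than the preferred interpolant.

```python
import numpy as np
from scipy.interpolate import griddata

# Hypothetical scattered stations (x, y in metres; anomaly in mGal)
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 10_000.0, 300)
y = rng.uniform(0.0, 10_000.0, 300)
g = 5.0 * np.exp(-((x - 5000.0) ** 2 + (y - 5000.0) ** 2) / 2000.0 ** 2)

# Target 100-m grid
xi = np.arange(0.0, 10_001.0, 100.0)
yi = np.arange(0.0, 10_001.0, 100.0)
Xi, Yi = np.meshgrid(xi, yi)

# Cubic interpolation onto the grid (a stand-in for a minimum-curvature gridder)
Gi = griddata((x, y), g, (Xi, Yi), method="cubic")
```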

DATA FILTERING AND ENHANCEMENT

A common first step before interpretation is to render the observed data into a different form by filtering or enhancement techniques. The goal may be to support subsequent application of other techniques, facilitate comparison with other data sets, enhance gravity anomalies of interest, and/or gain some preliminary information on source location or density contrast. Most filter and interpretation techniques apply to both gravity and magnetic data. As such, it is common to reference a paper describing a technique for filtering magnetic data when processing gravity data and vice versa.

Regional-residual separation

Anomalies of interest are commonly superposed on a regional field caused by sources larger than the scale of study or too deep to be of interest. In this situation, it is important to perform a regional-residual separation, a crucial first step in data interpretation. Historically, this problem has been approached either by using a simple graphical approach (manually selecting data points to represent a smooth regional field) or by using various mathematical tools to obtain the regional field. Many of the historical methods are still in common use today.

The graphical approach initially was limited to analyzing profile data and, to a lesser extent, gridded data. The earliest nongraphic approach used a regional field defined as the average of field values over a circle of a given radius, with the residual being the difference between the observed value at the center of the circle and this average (Griffin, 1949). Henderson and Zietz (1949) and Roy (1958) showed that such averaging was equivalent to calculating the second vertical derivative except for a constant factor. Agocs (1951) proposed using a least-squares fit to the data to determine the regional field, an approach criticized by Skeels (1967) because the anomalies themselves somewhat affect the determined regional. Hammer's (1963) paper on stripping proposed that the effect of shallow sources could be removed through gravity modeling. Zurflueh (1967) proposed using 2D linear-wavelength filtering with filters of different cutoff wavelengths to solve the problem. The method was further expanded by Agarwal and Kanasewich (1971), who also used a crosscorrelation function to obtain trend directions from magnetic data. Syberg (1972a) described a method using a matched filter for separating the residual field from the regional field. A method based on frequency-domain Wiener filtering was proposed by Pawlowski and Hansen (1990).

Spector and Grant (1970) analyzed the shape of power spectra calculated from observed data. Clear breaks between low- and high-frequency components of the spectrum were used to design either band-pass or matched filters. Guspi and Introcaso (2000) used the spectrum of observed data and looked for clear breaks between the low- and high-frequency components of the spectrum to separate the regional and residual fields. Matched filters and Wiener filters have much in common with other linear band-pass filters but have the distinct advantage of being optimal for a class of geologic models. Based on experience, it seems that significantly better results can be obtained by using appropriate statistical geological models rather than by attempting to adjust band-pass filter parameters manually.

The existence of so many techniques for regional-residual separation demonstrates that unresolved problems still exist in this area. There is no single right answer for how to highlight one's target of interest.
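As an illustration of the least-squares approach to estimating the regional, the sketch below fits a low-order polynomial surface and subtracts it; the profile values are invented, and Skeels's (1967) caution applies because the residual anomalies themselves bias the fitted regional.

```python
import numpy as np

def polynomial_regional(x, y, g, degree=1):
    """
    Fit a low-order polynomial surface to gravity values by least squares
    and return (regional, residual) at the data points.
    """
    cols = [x ** i * y ** j
            for i in range(degree + 1)
            for j in range(degree + 1 - i)]
    A = np.column_stack(cols)
    coeffs, *_ = np.linalg.lstsq(A, g, rcond=None)
    regional = A @ coeffs
    return regional, g - regional

# Hypothetical profile: a linear regional plus a local 2-mGal high
x = np.linspace(0.0, 20_000.0, 201)
y = np.zeros_like(x)
g = 0.0005 * x + 2.0 * np.exp(-((x - 10_000.0) / 1500.0) ** 2)
regional, residual = polynomial_regional(x, y, g, degree=1)
```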

Upward-downward continuation

In most cases, gravity anomaly data are interpreted on (or near) the original observation surface. In some situations, however, it is useful to move (or continue) the data to another surface for interpretation or for comparison with another data set. One example might involve upward-continuing land or sea-surface gravity data for comparison with airborne or satellite gravity or magnetic data.

Gravity data measured on a given plane can be transformed mathematically to data measured at a higher or lower elevation, thus either attenuating or emphasizing shorter-wavelength anomalies (Kellogg, 1953). These analytic continuations lead to convolution integrals that can be solved either in the space or the frequency domain. The earliest attempts were done in the space domain by deriving a set of weights that, when convolved with field data, yielded approximately the desired transform (Peters, 1949; Henderson, 1960; Byerly, 1965). Fuller (1967) developed a rigorous approach to determine the required weights and to analyze their performance. The space-domain operators were soon replaced by frequency-domain operators. Dean (1958) was the first to recognize the utility of using Fourier transform techniques in performing analytic continuations. Bhattacharyya (1965), Mesko (1965), and Clarke (1969) contributed to the understanding of such transforms, which now are carried out routinely. Whereas upward continuation is a very stable operation, the opposite is true for downward continuation, where special techniques, including filter-response tapering and regularization, must be applied to control noise.

Standard Fourier filtering techniques only permit analytic continuation from one level surface to another. To overcome this limitation, Syberg (1972b) and Hansen and Miyazaki (1984) extended potential-field theory to allow continuation between arbitrary surfaces, and Parker and Klitgord (1972) used a Schwarz-Christoffel transformation to upward-continue uneven profile data. Cordell (1985) introduced the chessboard technique, which calculates the field at successively higher elevations followed by a vertical interpolation between various strata, and an analytic continuation based on a Taylor-series expansion. An alternative approach uses an equivalent-source gridding routine (Cordell, 1992; Mendonça and Silva, 1994, 1995) to determine a buried equivalent mass distribution that produces the observed gravity anomaly and then uses the obtained equivalent masses to calculate the gravity anomaly on the topographic grid. An equivalent-point-source inversion method for spherical-earth gravity analysis was proposed by von Frese et al. (1981) to facilitate geologic interpretation of satellite-elevation potential-field data. An equivalent-source method in the wavenumber domain was proposed by Xia et al. (1993). For faster convergence and better representation of shorter wavelengths, Phillips (1996) extended this method into a hybrid technique using the approach of Cordell (1992).
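The level-to-level continuation described above reduces to a single frequency-domain multiplication; a minimal sketch of the standard upward-continuation operator exp(-|k| dz) for a gridded anomaly follows (the function name and interface are illustrative).

```python
import numpy as np

def upward_continue(grid, dx, dy, dz):
    """
    Continue a gridded anomaly upward by dz > 0 using the level-to-level
    operator exp(-|k| dz).  In practice the grid is first expanded and
    tapered to suppress edge effects.
    """
    ny, nx = grid.shape
    kx = 2.0 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2.0 * np.pi * np.fft.fftfreq(ny, d=dy)
    KX, KY = np.meshgrid(kx, ky)
    k = np.hypot(KX, KY)
    return np.real(np.fft.ifft2(np.fft.fft2(grid) * np.exp(-k * dz)))
```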

Derivative-based filters

First and second vertical derivatives are commonly computed from gravity data to emphasize short-wavelength anomalies resulting from shallow sources. They can be calculated in both the space and frequency domains using standard operators. Unfortunately, these operators amplify the higher-frequency noise, so special tapering of the frequency response is usually required to control noise. A stable calculation of the first vertical derivative was proposed by Nabighian (1984) using 3D Hilbert transforms in the x- and y-directions.

Horizontal derivatives, which can easily be computed in the space domain, are now the most common method for detecting target edges. Cordell (1979) first demonstrated that peaks in the magnitude of the total horizontal gradient of gravity data (square root of the sum of squares of the x- and y-derivatives) could be used to map near-vertical boundaries of contrasting densities, such as faults and geologic contacts. The method became more widely used following subsequent papers discussing various aspects of the method and showing its utility (Cordell and Grauch, 1982, 1985; Blakely and Simpson, 1986; Grauch and Cordell, 1987; Sharpton et al., 1987). In map form, the magnitude of the horizontal gradient can be gridded to display maximum ridges situated approximately over the near-vertical lithological contacts and faults. Blakely and Simpson (1986) developed a useful method for gridded data, automatically locating and plotting the maxima of the horizontal gradient magnitude. A method by Pearson (2001) finds breaks in the horizontal-derivative direction by applying a moving-window artificial-intelligence operator. A similar technique is skeletonization (Eaton and Vasudevan, 2004); this produces both an image and a database of each lineament element, which can be sorted and decimated by length or azimuth criteria. Thurston and Brown (1994) developed convolution operators for controlling the frequency content of the horizontal derivatives and, thus, of the resulting edges. Cooper and Cowan (2003) introduced the combination of visualization techniques and fractional horizontal gradients to more precisely highlight subtle features of interest.

The tilt angle, introduced by Miller and Singh (1994), is the arctangent of the ratio of the first vertical derivative to the horizontal gradient. The tilt angle enhances subtle and prominent features evenly, so the edges mapped by the tilt derivative are not biased toward the largest gradient magnitudes. Grauch and Johnston (2002) address the same problem by computing horizontal gradients within moving windows to focus on regional versus local gradients. Another form of filter that can be used to highlight faults is the Goussev filter, which is the scalar difference between the total gradient and the horizontal gradient (Goussev et al., 2003). This filter, in combination with a depth-separation filter (Jacobsen, 1987), provides a different perspective than other derivative-based filters.
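A minimal sketch of the two edge-detection quantities discussed above, the total horizontal gradient magnitude and the tilt angle, is given below for gridded data; the first vertical derivative is assumed to be supplied separately, for example from a frequency-domain |k| filter.

```python
import numpy as np

def horizontal_gradient_magnitude(grid, dx, dy):
    """Total horizontal gradient, sqrt(gx^2 + gy^2), of a gridded anomaly."""
    gy, gx = np.gradient(grid, dy, dx)   # axis 0 varies in y, axis 1 in x
    return np.hypot(gx, gy)

def tilt_angle(grid, dvert, dx, dy):
    """
    Tilt angle (radians): arctangent of the first vertical derivative
    divided by the total horizontal gradient.  dvert is the vertical
    derivative grid, e.g., from a frequency-domain |k| filter.
    """
    return np.arctan2(dvert, horizontal_gradient_magnitude(grid, dx, dy))
```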


Terracing

Terracing (Cordell and McCafferty, 1989) is an iterative filtering method that gradually increases the slopes of anomalies until they become vertical while simultaneously flattening the field between gradients. The resulting map is similar to a terraced landscape; hence, the name applied to this technique. When imaged as a color map and illuminated from above, a terraced map resembles a geologic map in which the color scheme reflects relative density contrasts. The terrace map can be further refined into a density map by iterative forward modeling and scaling of the terraced values.
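The sketch below is a loose reading of the terracing idea rather than the published operator: at each pass, the value in every cell is pushed toward the local neighborhood maximum or minimum according to the sign of the local curvature, which steepens gradients and flattens the field between them into uniform domains.

```python
import numpy as np
from scipy.ndimage import laplace, maximum_filter, minimum_filter

def terrace(grid, n_iter=50, size=3):
    """
    Loose sketch of terracing: where the local curvature (Laplacian) is
    negative, push the value toward the neighbourhood maximum; where it is
    positive, toward the minimum.  Repeating steepens gradients and
    flattens the field between them.
    """
    out = grid.copy()
    for _ in range(n_iter):
        curv = laplace(out)
        out = np.where(curv < 0.0,
                       maximum_filter(out, size=size),
                       minimum_filter(out, size=size))
    return out
```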

Density mapping

Several popular magnetic-anomaly interpretation techniques can easily be adapted to gravity data. Grant (1973) introduced a special form of inversion in which magnetic data are inverted in the frequency domain to provide the apparent magnetic susceptibility of a basement represented by a large number of infinite vertical prisms. The resulting maps provided a better geologic interpretation in the survey area. The method was later extended to the space domain by solving a large system of equations relating the observed data to magnetic sources in the ground (Bhattacharyya and Chan, 1977; Misener et al., 1984; Silva and Hohmann, 1984).

Another approach was taken by Keating (1992), who assumed an earth model consisting of right rectangular prisms of finite depth extent and used Walsh transforms to determine the density of each block. Granser et al. (1989) used the power spectrum of gravity data to separate it into long- and short-wavelength components and then applied an inverse density deconvolution filter to the short-wavelength components to obtain density values for the upper crust. Finally, as mentioned previously, the terracing method (Cordell and McCafferty, 1989) can be used for density determinations.

GRAVITY FORWARD MODELING

Before the use of electronic computers, gravity and magnetic anomalies were interpreted using characteristic curves calculated from simple models (Nettleton, 1942). The publication of Talwani et al.'s (1959) equations for computing gravity anomalies produced by 2D bodies of polygonal cross section provided the impetus for the first use of computers for gravity modeling. The 2D sources were later modified to have a finite strike length (Rasmussen and Pedersen, 1979; Cady, 1980). This led to publicly available computer programs for 2.5-D gravity modeling (Webring, 1985; Saltus and Blakely, 1983, 1993).

Three-dimensional density distributions initially were modeled by Talwani and Ewing (1960) using thin, horizontal, polygonal plates. Plouff (1975, 1976) showed that, in certain cases, the use of finite-thickness horizontal plates was a practical and preferable alternative. Right rectangular prisms (Nagy, 1966) and dipping prisms (Hjelt, 1974) remain popular for building complex density models, especially as inexpensive computers become faster. Barnett (1976) used triangular facets to construct 3D bodies of arbitrary shape and to compute their gravity anomalies, whereas Okabe (1979) used polygonal facets.
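The elementary building block behind such schemes is the closed-form anomaly of a simple body; the sketch below evaluates the vertical attraction of a uniform sphere (equivalently, a point mass at its center) along a profile, with the body geometry and density contrast invented for illustration.

```python
import numpy as np

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def sphere_gz(x, y, depth, radius, delta_rho):
    """
    Vertical attraction (mGal) on the z = 0 plane of a uniform sphere
    (centre depth and radius in m, density contrast in kg/m^3); identical
    to the field of a point mass at the sphere's centre.
    """
    mass = 4.0 / 3.0 * np.pi * radius ** 3 * delta_rho
    r3 = (x ** 2 + y ** 2 + depth ** 2) ** 1.5
    return 1e5 * G * mass * depth / r3      # 1 m/s^2 = 1e5 mGal

# Invented body: 400-m radius, +300 kg/m^3, centred 1 km below the profile
x = np.linspace(-5000.0, 5000.0, 201)
print(sphere_gz(x, 0.0, 1000.0, 400.0, 300.0).max())   # ~0.5 mGal peak
```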

Parker (1972) was the first to use Fourier transforms to calculate 2D and 3D gravity anomalies from complexly layered models. Because the gravity anomaly is calculated on a flat observation surface above all sources, this approach is particularly well suited to modeling marine gravity data. Fourier methods can provide an alternative to spatial-domain approaches for modeling simple sources such as a point mass or a uniform sphere, a vertical line mass, a horizontal line mass, or a vertical ribbon mass (Blakely, 1995). Blakely (1995) also presented the theory and computer subroutines for computing gravity fields of simple bodies in the spatial domain, including a sphere, a horizontal cylinder, a right rectangular prism, a 2D body of polygonal cross section, and a horizontal layer.

Today, forward gravity modeling is often done using commercial software programs based on the theory and early software efforts mentioned above but incorporating inversion algorithms and sophisticated computer graphics. A relatively recent development in forward modeling is the concept of structural geophysics (Jessell et al., 1993; Jessell and Valenta, 1996; Jessell, 2001; Jessell and Fractal Geophysics, 2001), in which a layered-earth model having specified physical properties is subjected to a deformation history involving tilting, faulting, folding, intrusion, and erosion. The resulting gravity field is computed using deformed prisms based on the model of Hjelt (1974).
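To indicate the flavor of the Fourier-domain approach, the sketch below evaluates only the first (linear) term of Parker's series for the anomaly of a density contrast across an undulating interface; the higher-order terms, and the bookkeeping of sign conventions in the full derivation, are omitted, so it applies only to gentle relief about the mean interface depth.

```python
import numpy as np

G = 6.674e-11  # m^3 kg^-1 s^-2

def interface_gravity_first_order(h, dx, dy, z0, delta_rho):
    """
    First term only of Parker's (1972) series: anomaly (mGal) of a density
    contrast delta_rho (kg/m^3) across an interface whose relief h (m,
    positive toward the observation plane) departs from a mean depth z0 (m).
    Valid for gentle relief; higher-order terms F[h^n] are neglected.
    """
    ny, nx = h.shape
    kx = 2.0 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2.0 * np.pi * np.fft.fftfreq(ny, d=dy)
    KX, KY = np.meshgrid(kx, ky)
    k = np.hypot(KX, KY)
    spectrum = 2.0 * np.pi * G * delta_rho * np.exp(-k * z0) * np.fft.fft2(h)
    return 1e5 * np.real(np.fft.ifft2(spectrum))
```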

GRAVITY INVERSE MODELING

Inversion is defined here as an automated numerical procedure that constructs a model of subsurface physical-property (density) variations from measured data and any prior information independent of the measured data. Quantitative interpretation is then carried out by drawing geologic conclusions from the inverted models. A model is either parameterized to describe source geometry or is described by a distribution of a physical property such as density or magnetic susceptibility contrast. The development of inversion algorithms naturally followed these two directions. Bott (1960) first attempted to invert for basin depth from gravity data by adjusting the depth of vertical prisms through trial and error. Danes (1960) used a similar approach to determine the top of salt. Oldenburg (1974) adopted Parker's (1972) forward procedure in the Fourier domain to formulate an inversion algorithm for basin depth by applying formal inverse theory. A number of authors extended the approach to different density-depth functions or imposed various constraints on the basement relief (e.g., Pedersen, 1977; Chai and Hinze, 1988; Reamer and Ferguson, 1989; Guspi, 1992; Barbosa et al., 1997).

Recently, this general methodology has been used extensively in inversion for base of salt in oil and gas exploration (e.g., Jorgensen and Kisabeth, 2000; Nagihara and Hall, 2001; Cheng et al., 2003). A similar approach has been used to invert for the geometry of isolated causative bodies by representing them as polygonal bodies in two dimensions or polyhedral bodies in three dimensions (Pedersen, 1979; Moraes and Hansen, 2001), in which the vertices of the objects are recovered as the unknowns.

Alternatively, one may invert for density contrast as a function of position in the subsurface. Green (1975) applied the Backus-Gilbert approach to invert 2D gravity data and guided the inversion by using reference models and associated weights constructed from prior information. In a similar direction, Last and Kubik (1983) guided the inversion by minimizing the total volume of the causative body, and Guillen and Menichetti (1984) chose to minimize the inertia of the body with respect to the center of the body or an axis passing through it. While these approaches are effective, they are limited to recovering only single bodies.

Li and Oldenburg (1998) formulated a generalized 3D inversion of gravity data by using Tikhonov regularization and a model objective function that measures the structural complexity of the model. Lower and upper bounds are also imposed on the recovered density contrast to further stabilize the solution. A similar approach has been extended to the inversion of gravity gradient data (Li, 2001; Zhdanov et al., 2004). More recently, there have been efforts to combine the strengths of these two approaches. Krahenbuhl and Li (2002, 2004) formulated the base-of-salt inversion as a binary problem, and Zhang et al. (2004) took a similar approach for crustal studies. Interestingly, in these last approaches the genetic algorithm has been used as the basic solver. This is an area of growing interest, especially when refinement of inversion is desired with constraints based on prior information.
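For the linear problem, the regularized formulation can be reduced to a very small sketch: given a sensitivity matrix relating cell densities to observed gravity, the smallest-model Tikhonov solution follows from the normal equations. The published algorithms add depth weighting, structural model-objective functions, and proper bound-constrained optimization; here the bounds are imposed only by crude clipping, and all inputs are placeholders.

```python
import numpy as np

def tikhonov_inversion(A, d, beta, lower=None, upper=None):
    """
    Minimal smallest-model inversion: minimise ||A m - d||^2 + beta ||m||^2
    via the normal equations.  A is the sensitivity matrix mapping cell
    densities to observed gravity, d the data vector, beta the trade-off
    parameter.  Bounds are imposed afterwards by simple clipping, a crude
    stand-in for the bound-constrained solvers used in practice.
    """
    n = A.shape[1]
    m = np.linalg.solve(A.T @ A + beta * np.eye(n), A.T @ d)
    if lower is not None or upper is not None:
        m = np.clip(m, lower, upper)
    return m
```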

GEOLOGIC INTERPRETATION

Observed gravity anomalies are direct indications of lateral contrasts in density between adjacent vertical columns of rock. These columns extend from the earth's terrestrial or sea surface to depths ranging from about 10 m to more than 100 km. The gravity surveying process measures the sum of all lateral density contrasts at all depths within this column of rock. Data filtering allows one to isolate portions of the gravity anomaly signal that are of exploration interest. These target signatures can then be interpreted together with ancillary geologic information to construct a constrained shallow-earth model.

Density contrasts

Gravity signatures from relatively shallow density-contrast sources have historically been used in both mineral and hydrocarbon exploration to identify important geologic targets for which there is a density contrast Δρ at depth. Examples include ore bodies (+Δρ), salt domes within sediments (+ or −Δρ, depending on the surrounding sediments), sulfur (complex Δρ distribution), basin geometry (−Δρ), reefs (+ or −Δρ, depending on porosity and depth), carbonate leading edges of thrust sheets (+Δρ), faults (gradient lineaments, with the downthrown or footwall side indicated by lower gravity), anticlines and horst blocks (+Δρ), regional geology (complex Δρ distribution and gradient lineaments), and kimberlite pipes (+ or −Δρ, depending on country rock and degree of weathering).

The wealth of density databases compiled over the years (Dobrin, 1976; Carmichael, 1984) is valuable for establishing standardized relationships between rock properties and gravity signatures. However, it is recommended that, whenever possible, one measure densities on relevant rock samples.

Physical properties in boreholes

Hundreds of thousands of individual density determinations have been accumulated in thousands of wells around the earth, generally at depths shallower than 5 km. Computed densities are often called bulk density or in-situ density (McCulloh, 1965; Robbins, 1981). McCulloh et al. (1968) explained in some detail the need for a variety of corrections (for example, hole rugosity, terrain, and nonlevel geologic effects) to justify the use of the term formation density.

LaFehr (1983) suggested the term apparent density to account for structural effects (a nonhorizontal, uniformly layered earth) at or near the well in a manner analogous to the use of the term apparent velocity when working with seismic refraction data or other apparent geophysical measurements. Thus, the apparent density is not the actual rock density, even if measurements are error free. An interesting result of potential-field theory is the Poisson jump phenomenon, in which borehole gravity measurements can yield the actual change in density across geologic boundaries because the difference between the bulk and apparent densities is the same on both sides of the boundary (for wells penetrating geologic boundaries at normal incidence). The Poisson jump has been observed in many different geologic settings. An example is the Mors salt dome in Denmark (LaFehr et al., 1983), in which laboratory-studied cores as well as surface seismic information helped to confirm the borehole gravity results.

The physical properties obtainable from borehole gravity surveys include density and porosity. The latter requires independent knowledge of matrix and fluid densities. Two general classes of applications can be addressed: (1) formation evaluation or, in the case of known reservoirs, reservoir characterization and (2) remote sensing. In the latter application, a rule of thumb for 2D geology is that the apparent-density anomaly caused by a remote density boundary is proportional to the angle subtended in the wellbore by the remote boundary. For 3D geologic structures symmetric about the wellbore, as approximated in some carbonate reef cases, the apparent-density anomaly is proportional to the sine of the subtended angle.
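The interval densities discussed here follow from the standard borehole-gravity relation (not quoted explicitly in the text), in which the apparent density is proportional to the difference between the free-air gradient and the measured vertical gradient of gravity; the sketch below applies it to an invented station pair.

```python
import numpy as np

G = 6.674e-11           # m^3 kg^-1 s^-2
FREE_AIR_GRAD = 0.3086  # nominal free-air gradient, mGal/m

def interval_density(delta_g_mgal, delta_z_m, free_air_grad=FREE_AIR_GRAD):
    """
    Apparent density (g/cm^3) of the interval between two borehole gravity
    stations a vertical distance delta_z_m apart, from the downward increase
    in gravity delta_g_mgal:  rho = (F - dg/dz) / (4 pi G).
    """
    dgdz_si = (delta_g_mgal / delta_z_m) * 1e-5   # s^-2
    f_si = free_air_grad * 1e-5                   # s^-2
    return (f_si - dgdz_si) / (4.0 * np.pi * G) / 1000.0

# Invented interval: gravity increases by 3.0 mGal over 30 m of depth
print(interval_density(3.0, 30.0))   # ~2.5 g/cm^3
```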

DATA INTEGRATION AND PRESENTATION

The goal of the explorationist is to use knowledge derived from the gravity field to improve understanding of the local or regional geologic setting and, in turn, to better grasp the exploration potential of the area of interest. To minimize the nonuniqueness of this endeavor, constant and rigorous integration of gravity data with other geophysical and geological information is required in all interpretation projects. Two-dimensional and three-dimensional modeling software can now readily incorporate geologic information in the form of well-log densities, top-structure grids or horizons, locations of faults, and other geologic constraints. This flexibility allows the interpreter to establish end-member geometries and density contrasts for earth models that can (or cannot) satisfy the observed gravity signature. A complete modeling effort includes several models that demonstrate the range of geologically plausible models that fit the data. When modeling, one can choose to match the complete gravity signature or a residual component. If the residual is modeled, the earth model must be consistent with this signal, i.e., the same gravity effects must be removed from the earth model that were subtracted from the complete gravity signal.


Figure 3 compares the Gzz (first vertical derivative of gravity) computed from a regular gravity survey (Figure 3a), as measured from the FTG survey (Figure 3b), and as computed from all the FTG vectors (Figure 3c). Note that Figure 3c has higher frequencies, and the subsequent higher-resolution representation of the gravity field can help provide more detail in the base-of-salt structure.

Full integration of 3D seismic data and FTG gravity data is accomplished by first constructing an earth-model cube of structure, velocity, and density. This starting earth model incorporates well control where present and a 3D seismic-velocity cube constructed from an initial 3D seismic interpretation. A 3D FTG gravity model is then computed for the initial starting model. FTG gravity misfits (observed minus computed) larger than the FTG noise level are used to adapt the gravity model until an acceptable fit is obtained. Figure 4 shows a cross section through the K2 discovery with the initial 3D Kirchhoff seismic image (Figure 4a), the image with the base of salt from the FTG model in yellow (Figure 4b), and a wave-equation prestack-depth-migration seismic image derived in an independent study (Figure 4c). The close agreement between the FTG-Kirchhoff solution shown in yellow and that of the wave-equation migration provides a high degree of confidence in the position of the base of salt, allowing the updip limits and, thus, the size of the field to be identified. Without this information, the field-development options would have been (1) to drill an updip well to test whether the reservoir extends into the seismic no-data zone (a deepwater well drilled to this depth is expensive and, based on the results of this study, would have been a dry hole) or (2) not to drill updip and possibly leave some stranded pay sands untapped.

Detecting kimberlite pipes at Ekati with airborne gravity gradiometry

A BHP Billiton high-resolution airborne gravity gradiometry (AGG) survey over the Ekati diamond mine in the Northwest Territories of Canada detected clear gravity anomalies over known and suspected kimberlites (Liu et al., 2001). The airborne survey, the first gravity gradiometry survey conducted for kimberlite exploration, was flown at a height of 80 m above ground with a 100-m flight-line spacing.

Figure 3. Gzz, the partial derivative with respect to the z-axis of the vertical force of gravity. (a) Gzz derived from conventional Bouguer gravity. (b) Gzz as measured with an FTG survey. (c) Best estimate of Gzz obtained by using all tensor components.

AGG, aeromagnetic, and laser-profiler data were collected with a single-engine, fixed-wing system. The high-resolution laser terrain profiler is important for making terrain corrections, since the most significant density contrast, and the contrast closest to the airplane, is the air-ground interface. Terrain is modeled with a 3D gravity-modeling algorithm using an appropriate density for surface geology. In outcropping granitic areas, a density of 2.67 g/cm³ is appropriate; in areas of local glacial till, a lower density (1.6–2.0 g/cm³) is applied. Figure 5 is an image of the vertical gravity gradient from the Ekati survey scaled in Eötvös units. Three known kimberlites are identified as dark spots, corresponding to the lower density of the weathered kimberlite. Near-surface geology is represented by high-density linear dikes in the magnified area below.

In addition to the AGG, gravity gradient measurements can be used to compute gravity. Figure 6 represents the same area as Figure 5 but shows the terrain-corrected vertical component of gravity. The noise level is estimated at 0.2 mGal with a spatial wavelength of 500 m. For this map to conform to either free-air or Bouguer standards, it must be calibrated to tie base-station data points. However, the vertical gravity gradient data are preferred for picking potential targets, especially in kimberlite exploration.

The AGG survey was successful at imaging 55% of the 136 known kimberlites in the survey area. Additional lead areas not previously mapped as kimberlites prior to the survey were subsequently drilled and were found to be kimberlites. Data resolution was determined to be 7 EU with a 300-m wavelength. Tightening the flight-line spacing to 50 m in a local test area slightly improved measurement accuracy and improved the horizontal wavelength to less than 300 m. Integration of the high-sensitivity terrain and airborne magnetic data significantly improved the sensitivity of the AGG survey data.

THE FUTURE

Operating systems will continue to migrate to the widely used Windows™ and Linux™ platforms. Efficient data management will receive more emphasis, and data-retrieval applications will become easier to use. Geophysicists will continue to improve their access to data from remote field offices. Interpretation using detailed and more realistic 3D models with new and improved modeling systems will become more commonplace. Tighter integration with seismic models in potential-field data interpretation will help to improve the seismic-velocity model in a more timely fashion. Joint interpretation with other nonseismic methods, such as the emerging marine electromagnetic methods, is rapidly finding acceptance in oil companies. New functionalities will take advantage of the additional information and resolution provided by gravity and magnetic gradient data.

Much of what has happened over the last 25 years is a refinement of the major breakthroughs of the preceding 50 years; an example is the steady improvement in both marine and airborne gravity operations. Reservoir monitoring and characterization are becoming major activities in the oil and gas industry. By combining the less-expensive relative gravimeters with the calibration of absolute gravimeters,

Figure 4. Prestack depth-migration profile along line B through the K-2 field. (a) Kirchhoff migration. (b) Kirchhoff migration with base-of-salt horizon shown in yellow, as determined by FTG inversion. (c) Wave-equation prestack depth migration, which also shows the presence of a salt keel. Yellow horizon shows the FTG inversion result.

corrected for tidal effects. Reservoir monitoring may introduce an entirely new and robust activity for gravity specialists. The present pace of technical innovation will likely continue into the future, with maximum sensor resolution moving well into the submicrogal range. The time it takes to make a high-resolution measurement will continue to become shorter: 1 min or less to make a 1-μGal resolution measurement

Figure 5. Vertical gravity gradient Gzz from the Ekati survey. The upper figure is a shaded-relief image of Gzz with a vertical scale from dark (negative, local low-density features) to white (positive, local high-density features). The image below is an enlarged section of the southeastern part of the image. Two higher-density (white) dikes separated by just over 300 m are resolved in the lower image. The white bar has a horizontal dimension of 300 m.

Figure 6. Vertical gravity Gz as a color-shaded image from the Ekati survey. Gz is equivalent to terrain-corrected Bouguer gravity in mGal.