Abstract

The past century has seen remarkable advances in technologies associated with positioning and the measurement of depth. Lead lines have given way to single beam echo sounders, which in turn are being replaced by multibeam sonars and other means of remotely and rapidly collecting dense bathymetric datasets. Sextants were replaced by radio navigation, then the Transit satellite system, GPS, and now differential GPS. Each new advance has brought tremendous improvement in the accuracy and resolution of the data we collect. Given these changes, and given the vastness of the ocean areas we must map, the charts we produce are mainly compilations of multiple datasets collected over many years and representing a range of technologies. Yet despite our knowledge that the accuracy of these technologies differs, our compilations have traditionally treated each sounding with equal weight. We address these issues in the context of generating regularly spaced grids of bathymetric values. Gridded products are required for a number of earth science studies, and generating such a grid often demands a complex interpolation scheme because the input data points are sparse and irregular. Consequently, we face the difficult task of assessing the confidence that we can assign to the final grid product, a task that most bathymetric compilations do not address. Traditionally the hydrographic community has considered each sounding equally accurate, and there has been no error evaluation of the bathymetric end product. This has important implications for use of the gridded bathymetry, especially when it is used to generate further scientific interpretations. In this paper we approach the problem of assessing the confidence of the final gridded bathymetric product via a direct-simulation Monte Carlo method. We start with a small subset of data from the International Bathymetric Chart of the Arctic Ocean (IBCAO) grid model [Jakobsson et al., 2000].
This grid is compiled from a mixture of data sources ranging from single beam soundings with available metadata, to spot soundings with no available metadata, to digitized contours; the test dataset shows examples of all of these types. From this database, we assign a priori error variances based on available metadata and, where none is available, based on a worst-case scenario in an essentially heuristic manner. We then generate a number of synthetic datasets by randomly perturbing the base data using normally distributed random variates, scaled according to the predicted error model. These datasets are next re-gridded using the same methodology as the original product, generating a set of plausible grid models of the regional bathymetry from which we can estimate standard deviations. Finally, we repeat the entire random estimation process and analyze each run's standard deviation grids in order to examine sampling bias and standard error in the predictions. The final products of the estimation are a collection of standard deviation grids, which we combine with the source data density to create a grid that contains information about the bathymetric model's reliability.
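The direct-simulation Monte Carlo procedure described above can be sketched in a few lines of code. The sketch below is illustrative only, not the authors' implementation: it uses synthetic soundings, hypothetical per-source error standard deviations, and a simple cell-mean binning as a stand-in for the interpolation scheme used to build the actual IBCAO grid. The names (`grid_mean`, `sigma`, `n_runs`) and all numeric values are assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic soundings: positions and depths, each with an a priori
# error standard deviation. (Hypothetical values; the paper assigns
# these from metadata or, failing that, a worst-case heuristic.)
n_pts = 500
x = rng.uniform(0.0, 10.0, n_pts)
y = rng.uniform(0.0, 10.0, n_pts)
depth = 100.0 + 20.0 * np.sin(x) * np.cos(y)
sigma = rng.choice([0.5, 2.0, 5.0], n_pts)  # per-source error model

def grid_mean(x, y, z, nx=20, ny=20, extent=(0.0, 10.0, 0.0, 10.0)):
    """Bin soundings into a regular grid by cell mean -- a simple
    stand-in for the gridding methodology used for the real product."""
    ix = np.clip(((x - extent[0]) / (extent[1] - extent[0]) * nx).astype(int), 0, nx - 1)
    iy = np.clip(((y - extent[2]) / (extent[3] - extent[2]) * ny).astype(int), 0, ny - 1)
    total = np.zeros((ny, nx))
    count = np.zeros((ny, nx))
    np.add.at(total, (iy, ix), z)
    np.add.at(count, (iy, ix), 1.0)
    with np.errstate(invalid="ignore"):
        return total / count  # NaN where a cell contains no soundings

# Direct-simulation Monte Carlo: perturb each sounding by a normally
# distributed variate scaled by its predicted error, re-grid, and
# collect the resulting plausible grid realizations.
n_runs = 100
grids = np.stack([
    grid_mean(x, y, depth + rng.normal(0.0, sigma))
    for _ in range(n_runs)
])

# Per-cell standard deviation across the realizations: the basic
# building block of the reliability grid described in the abstract.
std_grid = np.nanstd(grids, axis=0)
```

In a full analysis the outer loop would itself be repeated to examine sampling bias and standard error, and `std_grid` would be combined with the per-cell sounding density (`count` above) to form the final reliability product.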

Department

Center for Coastal and Ocean Mapping

Publication Date

5-2001

Journal Title

U.S. Hydrographic Conference

Conference Date

May 21 - May 24, 2001

Publisher Place

Norfolk, VA, USA

Publisher

Hydrographic Society of America

Document Type

Conference Proceeding
