Error estimation of the international bathymetric chart of the Arctic Ocean

Abstract

In early 2000, a grid model representing the bathymetry of the Arctic Ocean was released for public use via a web page hosted by the National Geophysical Data Center (NGDC). The data set is a first beta version compiled under the auspices of the International Bathymetric Chart of the Arctic Ocean (IBCAO) [Jakobsson et al., 2000]. Since the release of the bathymetry grid, the IBCAO web page has had an average of 700 visitors per week, and the data have been widely used in various GIS applications.

To construct the IBCAO grid, several vintages of public-domain observations were extracted from world and national data centers and complemented by newly released measurements collected by US and British submarines operating beneath the permanent polar pack from 1958 to 1988. These were further enhanced by original observations collected in recent years by US Navy submarines during unclassified SCICEX missions from 1993 to 1999, and by Swedish and German icebreakers from 1990 to 1997. The sum of these digital holdings represented a substantial quantity of information, but their geographical distribution was not uniform; in several areas, therefore, additional depth values in the form of point soundings or bathymetric contours were derived from charts and maps published by Russian and US agencies. In combining these data to create a regularly spaced grid of bathymetric values, we were forced to use a complex interpolation scheme because of the sparseness and irregularity of the input data points. Consequently, we are now faced with the difficult task of assessing the confidence that we can assign to the final grid product, a task that is not usually addressed in bathymetric compilations. This has important implications for the use of the gridded bathymetry, especially when it is used for generating further scientific interpretations.
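As a minimal illustration of the gridding step, the sketch below interpolates irregularly distributed soundings onto a regular grid. The function name grid_soundings, its parameters, and the use of scipy.interpolate.griddata are assumptions for illustration only; the actual IBCAO compilation relies on its own, more complex interpolation scheme that is not detailed here.

```python
# Illustrative sketch only: gridding scattered depth points onto a regular
# grid. scipy's griddata is a stand-in for the (unspecified) IBCAO scheme.
import numpy as np
from scipy.interpolate import griddata

def grid_soundings(x, y, z, cell_size, bounds):
    """Interpolate scattered depths (x, y, z) onto a regular grid."""
    x_min, x_max, y_min, y_max = bounds
    xi = np.arange(x_min, x_max + cell_size, cell_size)
    yi = np.arange(y_min, y_max + cell_size, cell_size)
    grid_x, grid_y = np.meshgrid(xi, yi)
    # Linear interpolation of the scattered points; nodes outside the convex
    # hull of the data remain NaN.
    return griddata((x, y), z, (grid_x, grid_y), method="linear")
```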

We approach the problem of assessing this confidence via a direct-simulation Monte Carlo method. We have started with a small subset of the grid model; this test dataset contains examples of all of the data types included in the entire compilation. From this database, we assign a priori error variances based on available meta-data and, when these are not available, on a worst-case scenario estimated in an essentially heuristic manner. We then generate a number of synthetic datasets by randomly perturbing the base data with normally distributed random variates, scaled according to the predicted error model. These datasets are next re-gridded using the same methodology as the original product, generating a set of plausible grid models of the regional bathymetry from which standard error estimates can be computed. Finally, we repeat the entire random estimation process and analyze each run's standard error grids in order to examine sampling bias and variance in the predictions. The final products of the estimation are a collection of standard error grids at different resolutions, a measure of estimation reliability, and an overall assessment of gridding algorithm stability as a function of grid resolution. Our goal is to use this approach to estimate the error of the entire IBCAO grid and to offer the results to the scientific community for use when interpreting the bathymetry of the Arctic Ocean as portrayed by the IBCAO grid.
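The Monte Carlo procedure can be sketched as follows. The function monte_carlo_error_grid, the sigma array of a priori standard deviations, the number of realizations, and the reuse of the grid_soundings stand-in from the previous sketch are illustrative assumptions rather than the exact IBCAO implementation.

```python
# Hedged sketch of the direct-simulation Monte Carlo error estimate described
# above. sigma, n_realizations, and grid_soundings() are assumptions used
# only to make the example self-contained.
import numpy as np

def monte_carlo_error_grid(x, y, z, sigma, cell_size, bounds,
                           n_realizations=100, seed=0):
    """Per-node standard error of the gridded bathymetry.

    sigma holds one a priori standard deviation per sounding, derived from
    meta-data or from a worst-case assumption where meta-data are missing.
    """
    rng = np.random.default_rng(seed)
    grids = []
    for _ in range(n_realizations):
        # Perturb each depth with normally distributed noise scaled by its
        # predicted error, then re-grid with the same method as the product.
        z_perturbed = z + rng.normal(0.0, sigma)
        grids.append(grid_soundings(x, y, z_perturbed, cell_size, bounds))
    stack = np.stack(grids)
    # The spread of the plausible grids at each node is the standard error.
    return np.nanstd(stack, axis=0)
```

Re-running this function with different seeds, and at different cell sizes, corresponds to the repeated estimation runs used to examine sampling bias and variance and to assess gridding stability as a function of grid resolution.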

Department

Center for Coastal and Ocean Mapping

Publication Date

1-2001

Journal Title

Arctic Geographic Information Systems (AGIS)

Conference Date

Jan 22 - Jan 24, 2001

Publisher Place

Seattle, WA, USA

Publisher

ARCUS

Document Type

Poster
