harmonica.EquivalentSourcesSph

class harmonica.EquivalentSourcesSph(damping=None, points=None, relative_depth=500, parallel=True)[source]

Equivalent sources for generic harmonic functions in spherical coordinates

These equivalent sources can be used for:

  • Spherical coordinates (geographic coordinates must be converted before use)

  • Regional or global data where Earth’s curvature must be taken into account

  • Gravity and magnetic data (including derivatives)

  • Single data types

  • Interpolation

  • Upward continuation

  • Finite-difference based derivative calculations

They cannot be used for:

  • Joint inversion of multiple data types (e.g., gravity + gravity gradients)

  • Reduction to the pole of magnetic total field anomaly data

  • Analytical derivative calculations

Point sources are located beneath the observed potential-field measurement points by default [Cordell1992]. Custom source locations can be used by specifying the points argument. Coefficients associated with each point source are estimated through linear least-squares with damping (Tikhonov 0th order) regularization.
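The damped least-squares estimation described above can be sketched in plain NumPy. The Jacobian and data below are illustrative stand-ins, not Harmonica's internals:

```python
import numpy as np

# Synthetic stand-ins for the Jacobian (Green's functions) and observed data.
rng = np.random.default_rng(seed=0)
jacobian = rng.normal(size=(20, 5))  # (n_data, n_sources)
true_coefs = np.array([1.0, -2.0, 0.5, 3.0, -1.0])
data = jacobian @ true_coefs

# Tikhonov 0th-order (damping) regularization:
# solve (J^T J + damping * I) c = J^T d
damping = 1e-8
hessian = jacobian.T @ jacobian + damping * np.identity(jacobian.shape[1])
coefs = np.linalg.solve(hessian, jacobian.T @ data)
print(np.allclose(coefs, true_coefs, atol=1e-4))  # True
```

With a negligible damping value the recovered coefficients match the true ones; increasing damping trades data fit for smoother coefficients.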

The Green’s function used is that of a point mass: the inverse of the Euclidean distance between the observation point and the point source:

\[\phi(\bar{x}, \bar{x}') = \frac{1}{||\bar{x} - \bar{x}'||}\]

where \(\bar{x}\) and \(\bar{x}'\) are the coordinate vectors of the observation point and the source, respectively.
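A minimal NumPy sketch of this Green’s function, assuming the spherical coordinates are first converted to Cartesian (the conversion helper below is illustrative; Harmonica's internal implementation may differ):

```python
import numpy as np

def sph_to_cartesian(longitude, latitude, radius):
    """Convert spherical coordinates (degrees, meters) to Cartesian."""
    lon, lat = np.radians(longitude), np.radians(latitude)
    return np.array([
        radius * np.cos(lat) * np.cos(lon),
        radius * np.cos(lat) * np.sin(lon),
        radius * np.sin(lat),
    ])

def greens_func(observation, source):
    """Inverse Euclidean distance between two Cartesian points."""
    return 1 / np.linalg.norm(np.asarray(observation) - np.asarray(source))

# A source 5 m away from the observation point: phi = 1/5
print(greens_func((0.0, 0.0, 0.0), (3.0, 4.0, 0.0)))  # 0.2

# Observation directly above a source placed 500 m deeper
obs = sph_to_cartesian(10.0, -30.0, 6378137.0)
src = sph_to_cartesian(10.0, -30.0, 6378137.0 - 500.0)
print(np.linalg.norm(obs - src))  # approximately 500.0
```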

Parameters:
damping : None or float

The positive damping regularization parameter. Controls how much smoothness is imposed on the estimated coefficients. If None, no regularization is used.

points : None or list of arrays (optional)

List containing the coordinates of the equivalent point sources. Coordinates are assumed to be in the following order: (longitude, latitude, radius). Both longitude and latitude must be in degrees and radius in meters. If None, one point source will be placed beneath each observation point at a fixed relative depth [Cordell1992]. Defaults to None.

relative_depth : float

Relative depth at which the point sources are placed beneath the observation points. Each source is set at a radius equal to that of its data point minus this constant relative_depth. Use positive numbers (negative numbers would place the sources above the data points). Ignored if points is specified.

parallel : bool

If True, predictions and Jacobian building are carried out in parallel through Numba’s jit.prange, reducing computation time. If False, these tasks run on a single CPU. Defaults to True.
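The default source placement controlled by relative_depth can be sketched with NumPy (the coordinate values below are illustrative):

```python
import numpy as np

# One source beneath each data point, at the data point's radius
# minus relative_depth.
longitude = np.array([10.0, 10.5, 11.0])
latitude = np.array([-30.0, -30.0, -30.0])
radius = np.array([6378137.0, 6378150.0, 6378120.0])  # meters, illustrative

relative_depth = 500
points = (longitude, latitude, radius - relative_depth)
print(points[2])  # [6377637. 6377650. 6377620.]
```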

Attributes:
points_ : 2d-array

Coordinates of the equivalent point sources.

coefs_ : array

Estimated coefficients of every point source.

region_ : tuple

The boundaries ([W, E, S, N]) of the data used to fit the interpolator. Used as the default region for the grid method.

Methods

filter(coordinates, data[, weights])

Filter the data through the gridder and produce residuals.

fit(coordinates, data[, weights])

Fit the coefficients of the equivalent sources.

get_metadata_routing()

Get metadata routing of this object.

get_params([deep])

Get parameters for this estimator.

grid(coordinates[, dims, data_names])

Interpolate the data onto a regular grid.

jacobian(coordinates, points[, dtype])

Make the Jacobian matrix for the equivalent sources.

predict(coordinates)

Evaluate the estimated equivalent sources on the given set of points.

profile(point1, point2, size[, dims, ...])

Not implemented for spherical coordinates.

scatter([region, size, random_state, dims, ...])

Not implemented; will be deprecated in Verde v2.0.0.

score(coordinates, data[, weights])

Score the gridder predictions against the given data.

set_fit_request(*[, coordinates, data, weights])

Request metadata passed to the fit method.

set_params(**params)

Set the parameters of this estimator.

set_predict_request(*[, coordinates])

Request metadata passed to the predict method.

set_score_request(*[, coordinates, data, ...])

Request metadata passed to the score method.

EquivalentSourcesSph.filter(coordinates, data, weights=None)

Filter the data through the gridder and produce residuals.

Calls fit on the data, evaluates the residuals (data - predicted data), and returns the coordinates, residuals, and weights.

Not very useful by itself but this interface makes gridders compatible with other processing operations and is used by verde.Chain to join them together (for example, so you can fit a spline on the residuals of a trend).

Parameters:
coordinates : tuple of arrays

Arrays with the coordinates of each data point. Should be in the following order: (easting, northing, vertical, …). For the specific definition of coordinate systems and what these names mean, see the class docstring.

data : array or tuple of arrays

The data values of each data point. If the data has more than one component, data must be a tuple of arrays (one for each component).

weights : None or array or tuple of arrays

If not None, then the weights assigned to each data point. If more than one data component is provided, you must provide a weights array for each data component (if not None).

Returns:
coordinates, residuals, weights

The coordinates and weights are the same as the input. Residuals are the input data minus the predicted data.

EquivalentSourcesSph.fit(coordinates, data, weights=None)[source]

Fit the coefficients of the equivalent sources.

The data region is captured and used as default for the grid method.

All input arrays must have the same shape.

Parameters:
coordinates : tuple of arrays

Arrays with the coordinates of each data point. Should be in the following order: (longitude, latitude, radius, …). Only longitude, latitude, and radius will be used; all subsequent coordinates will be ignored.

data : array

The data values of each data point.

weights : None or array

If not None, then the weights assigned to each data point. Typically, this should be 1 over the data uncertainty squared.

Returns:
self

Returns this estimator instance for chaining operations.

EquivalentSourcesSph.get_metadata_routing()

Get metadata routing of this object.

Please check the User Guide on how the routing mechanism works.

Returns:
routing : MetadataRequest

A MetadataRequest encapsulating routing information.

EquivalentSourcesSph.get_params(deep=True)

Get parameters for this estimator.

Parameters:
deep : bool, default=True

If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns:
params : dict

Parameter names mapped to their values.

EquivalentSourcesSph.grid(coordinates, dims=None, data_names=None, **kwargs)[source]

Interpolate the data onto a regular grid.

The coordinates of the regular grid must be passed through the coordinates argument as a tuple containing three arrays in the following order: (longitude, latitude, radius). They can be easily created through the verde.grid_coordinates function. If the grid points should all be at the same radius, it can be specified through the extra_coords argument of verde.grid_coordinates.
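A plain-NumPy sketch of a valid coordinates tuple for grid(), built without verde.grid_coordinates for illustration (the values are hypothetical):

```python
import numpy as np

# 1d longitude and latitude arrays defining the grid.
longitude = np.linspace(-5, 5, 21)  # degrees
latitude = np.linspace(-4, 4, 17)   # degrees

# The radius array must be 2d with shape (latitude.size, longitude.size),
# here a constant observation radius 2 km above a reference of 6378137 m.
radius = np.full((latitude.size, longitude.size), 6378137.0 + 2000.0)

coordinates = (longitude, latitude, radius)
print(radius.shape)  # (17, 21)
```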

Use the dims and data_names arguments to set custom names for the dimensions and the data field(s) in the output xarray.Dataset. Default names will be provided if none are given.

Parameters:
coordinates : tuple of arrays

Tuple of arrays containing the coordinates of the grid in the following order: (longitude, latitude, radius). The longitude and latitude arrays can be 1d or 2d; if they are 2d, they must be part of a meshgrid. The radius array should be a 2d array with the same shape as longitude and latitude (if they are 2d) or with a shape of (latitude.size, longitude.size) (if they are 1d).

dims : list or None

The names of the latitude and longitude data dimensions, respectively, in the output grid. Default is determined from the dims attribute of the class. Must be defined in the following order: latitude dimension, longitude dimension. NOTE: This is an exception to the “longitude” then “latitude” pattern but is required for compatibility with xarray.

data_names : list or None

The name(s) of the data variables in the output grid. Defaults to ['scalars'].

Returns:
grid : xarray.Dataset

The interpolated grid. Metadata about the interpolator is written to the attrs attribute.

EquivalentSourcesSph.jacobian(coordinates, points, dtype='float64')[source]

Make the Jacobian matrix for the equivalent sources.

Each column of the Jacobian is the Green’s function for a single point source evaluated on all observation points.
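A NumPy sketch of such a Jacobian, using Cartesian coordinates for simplicity (Harmonica derives the distances from the spherical coordinates internally):

```python
import numpy as np

# Entry (i, j) is the Green's function (inverse distance) between data
# point i and source j.
data_points = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])  # (n_data, 3)
sources = np.array([[0.0, 3.0, 4.0], [0.0, 0.0, 10.0]])     # (n_points, 3)

# Pairwise differences via broadcasting, then norms: (n_data, n_points)
diff = data_points[:, np.newaxis, :] - sources[np.newaxis, :, :]
jacobian = 1 / np.linalg.norm(diff, axis=-1)
print(jacobian.shape)  # (2, 2)
```

Each column holds the Green's function of one source evaluated at every data point, matching the (n_data, n_points) shape documented below.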

Parameters:
coordinates : tuple of arrays

Arrays with the coordinates of each data point. Should be in the following order: (longitude, latitude, radius, …). Only longitude, latitude, and radius will be used; all subsequent coordinates will be ignored.

points : tuple of arrays

Tuple of arrays containing the coordinates of the equivalent point sources in the following order: (longitude, latitude, radius).

dtype : str or numpy dtype

The type of the Jacobian array.

Returns:
jacobian : 2D array

The (n_data, n_points) Jacobian matrix.

EquivalentSourcesSph.predict(coordinates)[source]

Evaluate the estimated equivalent sources on the given set of points.

Requires a fitted estimator (see fit).

Parameters:
coordinates : tuple of arrays

Arrays with the coordinates of each data point. Should be in the following order: (longitude, latitude, radius, …). Only longitude, latitude, and radius will be used; all subsequent coordinates will be ignored.

Returns:
data : array

The data values evaluated on the given points.

EquivalentSourcesSph.profile(point1, point2, size, dims=None, data_names=None, projection=None, **kwargs)[source]

Warning

This method is not implemented. A profile in spherical coordinates should be computed using great-circle distances through the Haversine formula.

EquivalentSourcesSph.scatter(region=None, size=None, random_state=None, dims=None, data_names=None, projection=None, **kwargs)[source]

Warning

This method is not implemented. The scatter method will be deprecated in Verde v2.0.0.

EquivalentSourcesSph.score(coordinates, data, weights=None)

Score the gridder predictions against the given data.

Calculates the R² coefficient of determination between the predicted values and the given data values. A maximum score of 1 means a perfect fit. The score can be negative.

Warning

The default scoring will change from R² to negative root mean squared error (RMSE) in Verde 2.0.0. This may change model selection results slightly. The negative version will be used to maintain the behaviour of larger scores being better, which is more compatible with current model selection code.

If the data has more than 1 component, the scores of each component will be averaged.
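In a minimal NumPy sketch, the (optionally weighted) R² score amounts to:

```python
import numpy as np

def r2_score(data, predicted, weights=None):
    """Coefficient of determination, optionally weighted."""
    if weights is None:
        weights = np.ones_like(data)
    residual = np.average((data - predicted) ** 2, weights=weights)
    total = np.average(
        (data - np.average(data, weights=weights)) ** 2, weights=weights
    )
    return 1 - residual / total

data = np.array([1.0, 2.0, 3.0, 4.0])
print(r2_score(data, data))         # 1.0 (perfect fit)
print(r2_score(data, np.zeros(4)))  # -5.0 (worse than predicting the mean)
```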

Parameters:
coordinates : tuple of arrays

Arrays with the coordinates of each data point. Should be in the following order: (easting, northing, vertical, …). For the specific definition of coordinate systems and what these names mean, see the class docstring.

data : array or tuple of arrays

The data values of each data point. If the data has more than one component, data must be a tuple of arrays (one for each component).

weights : None or array or tuple of arrays

If not None, then the weights assigned to each data point. If more than one data component is provided, you must provide a weights array for each data component (if not None).

Returns:
score : float

The R² score.

EquivalentSourcesSph.set_fit_request(*, coordinates: bool | None | str = '$UNCHANGED$', data: bool | None | str = '$UNCHANGED$', weights: bool | None | str = '$UNCHANGED$') → EquivalentSourcesSph

Request metadata passed to the fit method.

Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config). Please see the User Guide on how the routing mechanism works.

The options for each parameter are:

  • True: metadata is requested, and passed to fit if provided. The request is ignored if metadata is not provided.

  • False: metadata is not requested and the meta-estimator will not pass it to fit.

  • None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.

  • str: metadata should be passed to the meta-estimator with this given alias instead of the original name.

The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.

New in version 1.3.

Note

This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.

Parameters:
coordinates : str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED

Metadata routing for coordinates parameter in fit.

data : str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED

Metadata routing for data parameter in fit.

weights : str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED

Metadata routing for weights parameter in fit.

Returns:
self : object

The updated object.

EquivalentSourcesSph.set_params(**params)

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.

Parameters:
**params : dict

Estimator parameters.

Returns:
self : estimator instance

Estimator instance.

EquivalentSourcesSph.set_predict_request(*, coordinates: bool | None | str = '$UNCHANGED$') → EquivalentSourcesSph

Request metadata passed to the predict method.

Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config). Please see the User Guide on how the routing mechanism works.

The options for each parameter are:

  • True: metadata is requested, and passed to predict if provided. The request is ignored if metadata is not provided.

  • False: metadata is not requested and the meta-estimator will not pass it to predict.

  • None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.

  • str: metadata should be passed to the meta-estimator with this given alias instead of the original name.

The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.

New in version 1.3.

Note

This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.

Parameters:
coordinates : str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED

Metadata routing for coordinates parameter in predict.

Returns:
self : object

The updated object.

EquivalentSourcesSph.set_score_request(*, coordinates: bool | None | str = '$UNCHANGED$', data: bool | None | str = '$UNCHANGED$', weights: bool | None | str = '$UNCHANGED$') → EquivalentSourcesSph

Request metadata passed to the score method.

Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config). Please see the User Guide on how the routing mechanism works.

The options for each parameter are:

  • True: metadata is requested, and passed to score if provided. The request is ignored if metadata is not provided.

  • False: metadata is not requested and the meta-estimator will not pass it to score.

  • None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.

  • str: metadata should be passed to the meta-estimator with this given alias instead of the original name.

The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.

New in version 1.3.

Note

This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.

Parameters:
coordinates : str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED

Metadata routing for coordinates parameter in score.

data : str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED

Metadata routing for data parameter in score.

weights : str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED

Metadata routing for weights parameter in score.

Returns:
self : object

The updated object.


Examples using harmonica.EquivalentSourcesSph

Gridding in spherical coordinates