harmonica.EquivalentSourcesSph
- class harmonica.EquivalentSourcesSph(damping=None, points=None, relative_depth=500, parallel=True)[source]
Equivalent sources for generic harmonic functions in spherical coordinates
These equivalent sources can be used for:
- Spherical coordinates (geographic coordinates must be converted before use)
- Regional or global data where Earth’s curvature must be taken into account
- Gravity and magnetic data (including derivatives)
- Single data types
- Interpolation
- Upward continuation
- Finite-difference based derivative calculations
They cannot be used for:
- Joint inversion of multiple data types (e.g., gravity + gravity gradients)
- Reduction to the pole of magnetic total field anomaly data
- Analytical derivative calculations
Point sources are located beneath the observed potential-field measurement points by default [Cordell1992]. Custom source locations can be used by specifying the points argument. Coefficients associated with each point source are estimated through linear least-squares with damping (Tikhonov 0th order) regularization.
The Green’s function used for the point mass effects is the inverse Euclidean distance between the grid coordinates and the point source:
\[\phi(\bar{x}, \bar{x}') = \frac{1}{||\bar{x} - \bar{x}'||}\]
where \(\bar{x}\) and \(\bar{x}'\) are the coordinate vectors of the observation point and the source, respectively.
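The effect of a single source can be sketched directly from this formula. The snippet below is a minimal illustration (not Harmonica’s internal implementation): it converts spherical coordinates (longitude and latitude in degrees, radius in meters) to Cartesian and evaluates the inverse distance. All function and variable names are choices made for this example only.

import numpy as np

def point_source_green(longitude, latitude, radius, longitude_s, latitude_s, radius_s):
    """Inverse Euclidean distance between an observation point and a point source."""

    def to_cartesian(lon, lat, r):
        # Geocentric spherical coordinates to Cartesian coordinates (meters)
        lon, lat = np.radians(lon), np.radians(lat)
        return np.array(
            [r * np.cos(lat) * np.cos(lon), r * np.cos(lat) * np.sin(lon), r * np.sin(lat)]
        )

    distance = np.linalg.norm(
        to_cartesian(longitude, latitude, radius)
        - to_cartesian(longitude_s, latitude_s, radius_s)
    )
    return 1 / distance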
- Parameters:
  - damping : None or float
    The positive damping regularization parameter. Controls how much smoothness is imposed on the estimated coefficients. If None, no regularization is used.
  - points : None or list of arrays (optional)
    List containing the coordinates of the equivalent point sources. Coordinates are assumed to be in the following order: (longitude, latitude, radius). Both longitude and latitude must be in degrees and radius in meters. If None, will place one point source below each observation point at a fixed relative depth below the observation point [Cordell1992]. Defaults to None.
  - relative_depth : float
    Relative depth at which the point sources are placed beneath the observation points. Each source point will be set beneath each data point at a depth calculated as the radius of the data point minus this constant relative_depth. Use positive numbers (negative numbers would mean point sources are above the data points). Ignored if points is specified.
  - parallel : bool
    If True, any predictions and Jacobian building are carried out in parallel through Numba’s jit.prange, reducing the computation time. If False, these tasks will be run on a single CPU. Defaults to True.
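As a sketch of typical usage (hypothetical values throughout: the coordinate and data arrays below are made-up stand-ins, and the damping and depth choices are arbitrary), sources can either be placed automatically below the data or passed explicitly through points:

import numpy as np
import harmonica as hm

# Synthetic stand-in observations: longitude/latitude in degrees, radius in meters
rng = np.random.default_rng(42)
longitude = rng.uniform(-5, 5, 200)
latitude = rng.uniform(40, 50, 200)
radius = np.full_like(longitude, 6_371_000 + 1_000)  # ~1 km above an assumed mean radius
data = np.sin(np.radians(longitude)) * np.cos(np.radians(latitude))  # synthetic values

# Default placement: one source 10 km below each observation, with damping regularization
eqs = hm.EquivalentSourcesSph(damping=1e-3, relative_depth=10_000)
eqs.fit((longitude, latitude, radius), data)

# Alternatively, pass explicit source coordinates through the points argument
sources = (longitude, latitude, radius - 20_000)
eqs_custom = hm.EquivalentSourcesSph(damping=1e-3, points=sources)
eqs_custom.fit((longitude, latitude, radius), data)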
- Attributes:
Methods
- filter(coordinates, data[, weights]): Filter the data through the gridder and produce residuals.
- fit(coordinates, data[, weights]): Fit the coefficients of the equivalent sources.
- get_metadata_routing(): Get metadata routing of this object.
- get_params([deep]): Get parameters for this estimator.
- grid(coordinates[, dims, data_names]): Interpolate the data onto a regular grid.
- jacobian(coordinates, points[, dtype]): Make the Jacobian matrix for the equivalent sources.
- predict(coordinates): Evaluate the estimated equivalent sources on the given set of points.
- profile(point1, point2, size[, dims, ...]): Not implemented for spherical coordinates.
- scatter([region, size, random_state, dims, ...]): Not implemented; will be deprecated in Verde v2.0.0.
- score(coordinates, data[, weights]): Score the gridder predictions against the given data.
- set_fit_request(*[, coordinates, data, weights]): Request metadata passed to the fit method.
- set_params(**params): Set the parameters of this estimator.
- set_predict_request(*[, coordinates]): Request metadata passed to the predict method.
- set_score_request(*[, coordinates, data, ...]): Request metadata passed to the score method.
- EquivalentSourcesSph.filter(coordinates, data, weights=None)
Filter the data through the gridder and produce residuals.
Calls fit on the data, evaluates the residuals (data - predicted data), and returns the coordinates, residuals, and weights. Not very useful by itself, but this interface makes gridders compatible with other processing operations and is used by verde.Chain to join them together (for example, so you can fit a spline on the residuals of a trend).
- Parameters:
  - coordinates : tuple of arrays
    Arrays with the coordinates of each data point. Should be in the following order: (easting, northing, vertical, …). For the specific definition of coordinate systems and what these names mean, see the class docstring.
  - data : array or tuple of arrays
    The data values of each data point. If the data has more than one component, data must be a tuple of arrays (one for each component).
  - weights : None or array or tuple of arrays
    If not None, then the weights assigned to each data point. If more than one data component is provided, you must provide a weights array for each data component (if not None).
- Returns:
  - coordinates, residuals, weights
    The coordinates and weights are the same as the input. Residuals are the input data minus the predicted data.
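A short hedged sketch of filter, assuming the synthetic arrays and the fitted eqs from the usage example above (those names belong to that example, not to the API):

# Fit the sources and return the residuals (data minus predicted data)
coords, residuals, weights = eqs.filter((longitude, latitude, radius), data)
print(residuals.shape)  # same shape as the input data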
- EquivalentSourcesSph.fit(coordinates, data, weights=None)[source]
Fit the coefficients of the equivalent sources.
The data region is captured and used as default for the grid method.
All input arrays must have the same shape.
- Parameters:
  - coordinates : tuple of arrays
    Arrays with the coordinates of each data point. Should be in the following order: (longitude, latitude, radius, …). Only longitude, latitude, and radius will be used; all subsequent coordinates will be ignored.
  - data : array
    The data values of each data point.
  - weights : None or array
    If not None, then the weights assigned to each data point. Typically, this should be 1 over the data uncertainty squared.
- Returns:
  - self
    Returns this estimator instance for chaining operations.
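A short sketch of weighted fitting, assuming the synthetic arrays from the earlier example and a made-up uncertainty of 0.1 for every observation (weights as 1 over the squared uncertainty, as suggested above):

import harmonica as hm
import numpy as np

uncertainty = np.full_like(data, 0.1)  # assumed one-sigma data errors
weights = 1 / uncertainty**2           # 1 / uncertainty^2, as recommended above
eqs = hm.EquivalentSourcesSph(damping=1e-3, relative_depth=10_000)
eqs.fit((longitude, latitude, radius), data, weights=weights)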
- EquivalentSourcesSph.get_metadata_routing()
Get metadata routing of this object.
Please check User Guide on how the routing mechanism works.
- Returns:
  - routing : MetadataRequest
    A MetadataRequest encapsulating routing information.
- EquivalentSourcesSph.get_params(deep=True)
Get parameters for this estimator.
- EquivalentSourcesSph.grid(coordinates, dims=None, data_names=None, **kwargs)[source]
Interpolate the data onto a regular grid.
The coordinates of the regular grid must be passed through the coordinates argument as a tuple containing three arrays in the following order: (longitude, latitude, radius). They can be easily created through the verde.grid_coordinates function. If the grid points must all be at the same radius, it can be specified in the extra_coords argument of verde.grid_coordinates.
Use the dims and data_names arguments to set custom names for the dimensions and the data field(s) in the output xarray.Dataset. Default names will be provided if none are given.
- Parameters:
  - coordinates : tuple of arrays
    Tuple of arrays containing the coordinates of the grid in the following order: (longitude, latitude, radius). The longitude and latitude arrays could be 1d or 2d arrays; if they are 2d, they must be part of a meshgrid. The radius array should be a 2d array with the same shape as longitude and latitude (if they are 2d arrays) or with a shape of (latitude.size, longitude.size) (if they are 1d arrays).
  - dims : list or None
    The names of the latitude and longitude data dimensions, respectively, in the output grid. Default is determined from the dims attribute of the class. Must be defined in the following order: latitude dimension, longitude dimension. NOTE: This is an exception to the “longitude” then “latitude” pattern but is required for compatibility with xarray.
  - data_names : list or None
    The name(s) of the data variables in the output grid. Defaults to ['scalars'].
- Returns:
  - grid : xarray.Dataset
    The interpolated grid. Metadata about the interpolator is written to the attrs attribute.
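A sketch of gridding onto a regular grid at a constant radius, assuming the fitted eqs from the earlier usage example; the region, spacing, output radius, and variable name are arbitrary choices for illustration:

import verde as vd

# Regular grid over the data region; the constant radius goes in as an extra coordinate
grid_coords = vd.grid_coordinates(
    region=(-5, 5, 40, 50), spacing=0.5, extra_coords=6_371_000 + 2_000
)
grid = eqs.grid(grid_coords, data_names=["dummy_field"])
print(grid.dummy_field)  # an xarray.DataArray inside the returned Dataset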
- EquivalentSourcesSph.jacobian(coordinates, points, dtype='float64')[source]
Make the Jacobian matrix for the equivalent sources.
Each column of the Jacobian is the Green’s function for a single point source evaluated on all observation points.
- Parameters:
  - coordinates : tuple of arrays
    Arrays with the coordinates of each data point. Should be in the following order: (longitude, latitude, radius, …). Only longitude, latitude, and radius will be used; all subsequent coordinates will be ignored.
  - points : tuple of arrays
    Tuple of arrays containing the coordinates of the equivalent point sources in the following order: (longitude, latitude, radius).
  - dtype : str or numpy dtype
    The type of the Jacobian array.
- Returns:
  - jacobian : 2D array
    The (n_data, n_points) Jacobian matrix.
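A sketch of building the Jacobian directly, using the synthetic arrays from the earlier example and sources placed 10 km below each observation (the variable names are this example's, not the API's):

points = (longitude, latitude, radius - 10_000)  # one assumed source below each data point
jac = eqs.jacobian((longitude, latitude, radius), points)
print(jac.shape)  # (n_data, n_points)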
- EquivalentSourcesSph.predict(coordinates)[source]
Evaluate the estimated equivalent sources on the given set of points.
Requires a fitted estimator (see fit).
- Parameters:
  - coordinates : tuple of arrays
    Arrays with the coordinates of each data point. Should be in the following order: (longitude, latitude, radius, …). Only longitude, latitude, and radius will be used; all subsequent coordinates will be ignored.
- Returns:
  - data : array
    The data values evaluated on the given points.
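A sketch of upward continuation with predict, assuming the fitted eqs and the coordinate arrays from the earlier example: evaluate the sources at the same horizontal positions but at a larger radius (1 km higher, an arbitrary choice):

upward = eqs.predict((longitude, latitude, radius + 1_000))
print(upward.shape)  # same shape as the input coordinate arrays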
- EquivalentSourcesSph.profile(point1, point2, size, dims=None, data_names=None, projection=None, **kwargs)[source]
Warning
Not implemented method. The profile on spherical coordinates should be done using great-circle distances through the Haversine formula.
- EquivalentSourcesSph.scatter(region=None, size=None, random_state=None, dims=None, data_names=None, projection=None, **kwargs)[source]
Warning
Not implemented method. The scatter method will be deprecated on Verde v2.0.0.
- EquivalentSourcesSph.score(coordinates, data, weights=None)
Score the gridder predictions against the given data.
Calculates the R² coefficient of determination between the predicted values and the given data values. A maximum score of 1 means a perfect fit. The score can be negative.
Warning
The default scoring will change from R² to negative root mean squared error (RMSE) in Verde 2.0.0. This may change model selection results slightly. The negative version will be used to maintain the behaviour of larger scores being better, which is more compatible with current model selection code.
If the data has more than 1 component, the scores of each component will be averaged.
- Parameters:
- coordinates
tuple
of
arrays
Arrays with the coordinates of each data point. Should be in the following order: (easting, northing, vertical, …). For the specific definition of coordinate systems and what these names mean, see the class docstring.
- data
array
ortuple
of
arrays
The data values of each data point. If the data has more than one component, data must be a tuple of arrays (one for each component).
- weights
None
orarray
ortuple
of
arrays
If not None, then the weights assigned to each data point. If more than one data component is provided, you must provide a weights array for each data component (if not None).
- coordinates
- Returns:
- score
float
The R^2 score
- score
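A sketch of scoring, assuming the synthetic arrays from the earlier example. For simplicity it scores on the same data used for fitting; in practice you would score on held-out points (for example, split with verde.train_test_split) to judge interpolation quality:

import harmonica as hm

eqs = hm.EquivalentSourcesSph(damping=1e-3, relative_depth=10_000)
eqs.fit((longitude, latitude, radius), data)
print(eqs.score((longitude, latitude, radius), data))  # close to 1 for a good fit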
- EquivalentSourcesSph.set_fit_request(*, coordinates: bool | None | str = '$UNCHANGED$', data: bool | None | str = '$UNCHANGED$', weights: bool | None | str = '$UNCHANGED$') → EquivalentSourcesSph
Request metadata passed to the fit method.
Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config). Please see User Guide on how the routing mechanism works.
The options for each parameter are:
- True: metadata is requested, and passed to fit if provided. The request is ignored if metadata is not provided.
- False: metadata is not requested and the meta-estimator will not pass it to fit.
- None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.
- str: metadata should be passed to the meta-estimator with this given alias instead of the original name.
The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.
New in version 1.3.
Note
This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.
- Parameters:
  - coordinates : str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED
    Metadata routing for coordinates parameter in fit.
  - data : str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED
    Metadata routing for data parameter in fit.
  - weights : str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED
    Metadata routing for weights parameter in fit.
- Returns:
  - self : object
    The updated object.
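A minimal sketch of what this looks like, assuming scikit-learn >= 1.3 with metadata routing enabled; it only has an effect when the estimator is wrapped by a meta-estimator:

import sklearn
import harmonica as hm

sklearn.set_config(enable_metadata_routing=True)
# Ask meta-estimators to forward a `weights` argument to this estimator's fit
eqs = hm.EquivalentSourcesSph().set_fit_request(weights=True)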
- EquivalentSourcesSph.set_params(**params)
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.
- Parameters:
  - **params : dict
    Estimator parameters.
- Returns:
  - self : estimator instance
    Estimator instance.
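A short sketch of updating a parameter on an existing instance (for example when trying several damping values); the values are arbitrary:

import harmonica as hm

eqs = hm.EquivalentSourcesSph(damping=None)
eqs.set_params(damping=1e-2)
print(eqs.get_params()["damping"])  # 0.01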
- EquivalentSourcesSph.set_predict_request(*, coordinates: bool | None | str = '$UNCHANGED$') → EquivalentSourcesSph
Request metadata passed to the predict method.
Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config). Please see User Guide on how the routing mechanism works.
The options for each parameter are:
- True: metadata is requested, and passed to predict if provided. The request is ignored if metadata is not provided.
- False: metadata is not requested and the meta-estimator will not pass it to predict.
- None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.
- str: metadata should be passed to the meta-estimator with this given alias instead of the original name.
The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.
New in version 1.3.
Note
This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.
- EquivalentSourcesSph.set_score_request(*, coordinates: bool | None | str = '$UNCHANGED$', data: bool | None | str = '$UNCHANGED$', weights: bool | None | str = '$UNCHANGED$') → EquivalentSourcesSph
Request metadata passed to the score method.
Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config). Please see User Guide on how the routing mechanism works.
The options for each parameter are:
- True: metadata is requested, and passed to score if provided. The request is ignored if metadata is not provided.
- False: metadata is not requested and the meta-estimator will not pass it to score.
- None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.
- str: metadata should be passed to the meta-estimator with this given alias instead of the original name.
The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.
New in version 1.3.
Note
This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.
- Parameters:
  - coordinates : str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED
    Metadata routing for coordinates parameter in score.
  - data : str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED
    Metadata routing for data parameter in score.
  - weights : str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED
    Metadata routing for weights parameter in score.
- Returns:
  - self : object
    The updated object.
Examples using harmonica.EquivalentSourcesSph
Gridding in spherical coordinates