verde.synthetic.CheckerBoard#
- class verde.synthetic.CheckerBoard(amplitude=1000, region=(0, 5000, -5000, 0), w_east=None, w_north=None)[source]#
Generate synthetic data in a checkerboard pattern.
The mathematical model is:
\[f(e, n) = a \sin\left(\frac{2\pi}{w_e} e\right) \cos\left(\frac{2\pi}{w_n} n\right)\]
in which \(e\) is the easting coordinate, \(n\) is the northing coordinate, \(a\) is the amplitude, and \(w_e\) and \(w_n\) are the wavelengths in the east and north directions, respectively.
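The model above can be sketched directly in NumPy. This is a standalone re-implementation for illustration, not the class itself; the default wavelengths of 2500 used here are the ones implied by the default region:

```python
import numpy as np

def checkerboard(easting, northing, amplitude=1000, w_east=2500, w_north=2500):
    """Evaluate f(e, n) = a sin(2*pi*e/w_e) cos(2*pi*n/w_n)."""
    return (
        amplitude
        * np.sin((2 * np.pi / w_east) * easting)
        * np.cos((2 * np.pi / w_north) * northing)
    )

# The full amplitude is reached where both factors are 1, for example
# at easting = w_east / 4 and northing = 0:
print(checkerboard(625, 0))  # 1000.0
```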
- Parameters:
- amplitude : float
The amplitude of the checkerboard undulations.
- region : tuple
The boundaries ([W, E, S, N]) of the region used to generate the synthetic data.
- w_east : float
The wavelength in the east direction. Defaults to half of the West-East size of the evaluating region.
- w_north : float
The wavelength in the north direction. Defaults to half of the South-North size of the evaluating region.
Examples

>>> synth = CheckerBoard()
>>> # Default values for the wavelengths are selected automatically
>>> print(synth.w_east_, synth.w_north_)
2500.0 2500.0
>>> # CheckerBoard.grid produces an xarray.Dataset with data on a grid
>>> grid = synth.grid(shape=(11, 6))
>>> # scatter and profile generate pandas.DataFrame objects
>>> table = synth.scatter()
>>> print(sorted(table.columns))
['easting', 'northing', 'scalars']
>>> profile = synth.profile(point1=(0, 0), point2=(2500, -2500), size=100)
>>> print(sorted(profile.columns))
['distance', 'easting', 'northing', 'scalars']
Methods

- filter(coordinates, data[, weights]): Filter the data through the gridder and produce residuals.
- fit(coordinates, data[, weights]): Fit the gridder to observed data.
- get_metadata_routing(): Get metadata routing of this object.
- get_params([deep]): Get parameters for this estimator.
- grid([region, shape, spacing, dims, ...]): Interpolate the data onto a regular grid.
- predict(coordinates): Evaluate the checkerboard function on a given set of points.
- profile(point1, point2, size[, dims, ...]): Interpolate data along a profile between two points.
- scatter([region, size, random_state, dims, ...]): Generate values on a random scatter of points.
- score(coordinates, data[, weights]): Score the gridder predictions against the given data.
- set_fit_request(*[, coordinates, data, weights]): Request metadata passed to the fit method.
- set_params(**params): Set the parameters of this estimator.
- set_predict_request(*[, coordinates]): Request metadata passed to the predict method.
- set_score_request(*[, coordinates, data, ...]): Request metadata passed to the score method.
Attributes#
- CheckerBoard.data_names_defaults = [('scalars',), ('east_component', 'north_component'), ('east_component', 'north_component', 'vertical_component')]#
- CheckerBoard.dims = ('northing', 'easting')#
- CheckerBoard.extra_coords_name = 'extra_coord'#
- CheckerBoard.region_#
The data region. Used so that the BaseGridder methods can find a default region.
- CheckerBoard.w_east_#
The east wavelength. Set to half of the West-East extent of the region by default.
- CheckerBoard.w_north_#
The north wavelength. Set to half of the South-North extent of the region by default.
Methods#
- CheckerBoard.filter(coordinates, data, weights=None)#
Filter the data through the gridder and produce residuals.
Calls fit on the data, evaluates the residuals (data - predicted data), and returns the coordinates, residuals, and weights. Not very useful by itself, but this interface makes gridders compatible with other processing operations and is used by verde.Chain to join them together (for example, so you can fit a spline on the residuals of a trend).
- Parameters:
- coordinates : tuple of arrays
Arrays with the coordinates of each data point. Should be in the following order: (easting, northing, vertical, ...). For the specific definition of coordinate systems and what these names mean, see the class docstring.
- data : array or tuple of arrays
The data values of each data point. If the data has more than one component, data must be a tuple of arrays (one for each component).
- weights : None, array, or tuple of arrays
If not None, then the weights assigned to each data point. If more than one data component is provided, you must provide a weights array for each data component (if not None).
- Returns:
- coordinates, residuals, weights
The coordinates and weights are the same as the input. Residuals are the input data minus the predicted data.
- CheckerBoard.fit(coordinates, data, weights=None)#
Fit the gridder to observed data. NOT IMPLEMENTED.
This is a dummy placeholder for an actual method.
- Parameters:
- coordinates : tuple of arrays
Arrays with the coordinates of each data point. Should be in the following order: (easting, northing, vertical, ...).
- data : array or tuple of arrays
The data values of each data point. If the data has more than one component, data must be a tuple of arrays (one for each component).
- weights : None, array, or tuple of arrays
If not None, then the weights assigned to each data point. If more than one data component is provided, you must provide a weights array for each data component (if not None).
- Returns:
- self
This instance of the gridder. Useful to chain operations.
- CheckerBoard.get_metadata_routing()#
Get metadata routing of this object.
Please check the User Guide on how the routing mechanism works.
- Returns:
- routing : MetadataRequest
A MetadataRequest encapsulating routing information.
- CheckerBoard.get_params(deep=True)#
Get parameters for this estimator.
- CheckerBoard.grid(region=None, shape=None, spacing=None, dims=None, data_names=None, projection=None, coordinates=None, **kwargs)#
Interpolate the data onto a regular grid.
The grid can be specified in two ways:
- Pass the actual coordinates of the grid points, as generated by verde.grid_coordinates or from an existing xarray.Dataset grid.
- Let the method define a new grid by passing either the number of points in each dimension (the shape) or the grid node spacing. If the interpolator collected the input data region, then it will be used if region=None. Otherwise, you must specify the grid region. See verde.grid_coordinates for details. Other arguments for verde.grid_coordinates can be passed as extra keyword arguments (kwargs) to this method.
Use the dims and data_names arguments to set custom names for the dimensions and the data field(s) in the output xarray.Dataset. Default names will be provided if none are given.
- Parameters:
- region : list = [W, E, S, N]
The west, east, south, and north boundaries of a given region. Use only if coordinates is None.
- shape : tuple = (n_north, n_east) or None
The number of points in the South-North and West-East directions, respectively. Use only if coordinates is None.
- spacing : tuple = (s_north, s_east) or None
The grid spacing in the South-North and West-East directions, respectively. Use only if coordinates is None.
- dims : list or None
The names of the northing and easting data dimensions, respectively, in the output grid. Default is determined from the dims attribute of the class. Must be defined in the following order: northing dimension, easting dimension. NOTE: This is an exception to the "easting" then "northing" pattern but is required for compatibility with xarray.
- data_names : str, list, or None
The name(s) of the data variables in the output grid. Defaults to 'scalars' for scalar data, ['east_component', 'north_component'] for 2D vector data, and ['east_component', 'north_component', 'vertical_component'] for 3D vector data.
- projection : callable or None
If not None, then should be a callable object projection(easting, northing) -> (proj_easting, proj_northing) that takes in easting and northing coordinate arrays and returns projected easting and northing coordinate arrays. This function will be used to project the generated grid coordinates before passing them into predict. For example, you can use this to generate a geographic grid from a Cartesian gridder.
- coordinates : tuple of arrays
Tuple of arrays containing the coordinates of the grid in the following order: (easting, northing, vertical, ...). The easting and northing arrays can be 1d or 2d; if 2d, they must be part of a meshgrid. If coordinates are passed, region, shape, and spacing are ignored.
- Returns:
- grid : xarray.Dataset
The interpolated grid. Metadata about the interpolator is written to the attrs attribute.
See also
verde.grid_coordinates: Generate the coordinate values for the grid.
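As a rough illustration of how a shape defines the grid coordinates, here is a simplified NumPy version of the coordinate generation (verde.grid_coordinates handles more cases, such as spacing-based grids and region adjustment; this sketch assumes the region boundaries fall exactly on grid nodes):

```python
import numpy as np

region = (0, 5000, -5000, 0)   # (W, E, S, N), the CheckerBoard default
shape = (11, 6)                # (n_north, n_east)

# Evenly spaced nodes along each direction, then a 2D meshgrid.
north = np.linspace(region[2], region[3], shape[0])
east = np.linspace(region[0], region[1], shape[1])
easting, northing = np.meshgrid(east, north)
print(easting.shape)  # (11, 6): n_north rows by n_east columns
```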
- CheckerBoard.predict(coordinates)[source]#
Evaluate the checkerboard function on a given set of points.
- CheckerBoard.profile(point1, point2, size, dims=None, data_names=None, projection=None, **kwargs)#
Interpolate data along a profile between two points.
Generates the profile along a straight line assuming Cartesian distances. Point coordinates are generated by verde.profile_coordinates. Other arguments for this function can be passed as extra keyword arguments (kwargs) to this method.
Use the dims and data_names arguments to set custom names for the dimensions and the data field(s) in the output pandas.DataFrame. Default names are provided.
Includes the calculated Cartesian distance from point1 for each data point in the profile.
To specify point1 and point2 in a coordinate system that would require projection to Cartesian (geographic longitude and latitude, for example), use the projection argument. With this option, the input points will be projected using the given projection function prior to computations. The generated Cartesian profile coordinates will be projected back to the original coordinate system. Note that the profile points are evenly spaced in projected coordinates, not the original system (e.g., geographic).
Warning
The profile calculation method with a projection has changed in Verde 1.4.0. Previous versions generated coordinates (assuming they were Cartesian) and projected them afterwards. This led to "distances" being incorrectly handled and returned in unprojected coordinates. For example, if projection is from geographic to Mercator, the distances would be "angles" (incorrectly calculated as if they were Cartesian). After 1.4.0, point1 and point2 are projected prior to generating coordinates for the profile, guaranteeing that distances are properly handled in a Cartesian system. With this change, the profile points are now evenly spaced in projected coordinates and the distances are returned in projected coordinates as well.
- Parameters:
- point1 : tuple
The easting and northing coordinates, respectively, of the first point.
- point2 : tuple
The easting and northing coordinates, respectively, of the second point.
- size : int
The number of points to generate.
- dims : list or None
The names of the northing and easting data dimensions, respectively, in the output dataframe. Default is determined from the dims attribute of the class. Must be defined in the following order: northing dimension, easting dimension. NOTE: This is an exception to the "easting" then "northing" pattern but is required for compatibility with xarray.
- data_names : str, list, or None
The name(s) of the data variables in the output dataframe. Defaults to 'scalars' for scalar data, ['east_component', 'north_component'] for 2D vector data, and ['east_component', 'north_component', 'vertical_component'] for 3D vector data.
- projection : callable or None
If not None, then should be a callable object projection(easting, northing, inverse=False) -> (proj_easting, proj_northing) that takes in easting and northing coordinate arrays and returns projected easting and northing coordinate arrays. Should also take an optional keyword argument inverse (default False) that, if True, will calculate the inverse transform instead. This function will be used to project the profile end points before generating coordinates and passing them into predict. It will also be used to undo the projection of the coordinates before returning the results.
- Returns:
- table : pandas.DataFrame
The interpolated values along the profile.
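A simplified sketch of how Cartesian profile coordinates and distances can be generated between two points (roughly what verde.profile_coordinates does when no projection is given; this is an illustrative re-implementation, not verde's code):

```python
import numpy as np

point1, point2 = (0, 0), (2500, -2500)
size = 100

# Straight-line length between the end points, then evenly spaced
# distances from point1 along the profile.
length = np.hypot(point2[0] - point1[0], point2[1] - point1[1])
distances = np.linspace(0, length, size)

# Linear interpolation between the end points for each coordinate.
fraction = distances / length
easting = point1[0] + (point2[0] - point1[0]) * fraction
northing = point1[1] + (point2[1] - point1[1]) * fraction
print(easting[-1], northing[-1])  # 2500.0 -2500.0
```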
- CheckerBoard.scatter(region=None, size=300, random_state=0, dims=None, data_names=None, projection=None, **kwargs)[source]#
Generate values on a random scatter of points.
Point coordinates are generated by verde.scatter_points. Other arguments for this function can be passed as extra keyword arguments (kwargs) to this method.
By default, the region specified when creating the class instance will be used if region=None.
Use the dims and data_names arguments to set custom names for the dimensions and the data field(s) in the output pandas.DataFrame. Default names are provided.
- Parameters:
- region : list = [W, E, S, N]
The west, east, south, and north boundaries of a given region.
- size : int
The number of points to generate.
- random_state : numpy.random.RandomState or an int seed
A random number generator used to define the state of the random permutations. Use a fixed seed to make sure computations are reproducible. Use None to choose a seed automatically (resulting in different numbers with each run).
- dims : list or None
The names of the northing and easting data dimensions, respectively, in the output dataframe. Default is determined from the dims attribute of the class. Must be defined in the following order: northing dimension, easting dimension. NOTE: This is an exception to the "easting" then "northing" pattern but is required for compatibility with xarray.
- data_names : str, list, or None
The name(s) of the data variables in the output dataframe. Defaults to 'scalars' for scalar data, ['east_component', 'north_component'] for 2D vector data, and ['east_component', 'north_component', 'vertical_component'] for 3D vector data.
- projection : callable or None
If not None, then should be a callable object projection(easting, northing) -> (proj_easting, proj_northing) that takes in easting and northing coordinate arrays and returns projected easting and northing coordinate arrays. This function will be used to project the generated scatter coordinates before passing them into predict. For example, you can use this to generate a geographic scatter from a Cartesian gridder.
- Returns:
- table : pandas.DataFrame
The interpolated values on a random set of points.
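The effect of a fixed random_state can be sketched with NumPy alone (a hypothetical re-creation of the point generation, assuming uniform draws within the region, which is what verde.scatter_points does):

```python
import numpy as np

region = (0, 5000, -5000, 0)  # (W, E, S, N)
size = 300

# A fixed seed makes the scatter reproducible across runs.
rng = np.random.RandomState(0)
easting = rng.uniform(region[0], region[1], size)
northing = rng.uniform(region[2], region[3], size)

# The same seed always yields the same points.
rng2 = np.random.RandomState(0)
print(np.allclose(easting, rng2.uniform(region[0], region[1], size)))  # True
```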
- CheckerBoard.score(coordinates, data, weights=None)#
Score the gridder predictions against the given data.
Calculates the R² coefficient of determination between the predicted values and the given data values. A maximum score of 1 means a perfect fit. The score can be negative.
Warning
The default scoring will change from R² to negative root mean squared error (RMSE) in Verde 2.0.0. This may change model selection results slightly. The negative version will be used to maintain the behaviour of larger scores being better, which is more compatible with current model selection code.
If the data has more than one component, the scores of each component will be averaged.
- Parameters:
- coordinates : tuple of arrays
Arrays with the coordinates of each data point. Should be in the following order: (easting, northing, vertical, ...). For the specific definition of coordinate systems and what these names mean, see the class docstring.
- data : array or tuple of arrays
The data values of each data point. If the data has more than one component, data must be a tuple of arrays (one for each component).
- weights : None, array, or tuple of arrays
If not None, then the weights assigned to each data point. If more than one data component is provided, you must provide a weights array for each data component (if not None).
- Returns:
- score : float
The R² score.
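The unweighted R² coefficient of determination can be sketched as follows (the standard formula, shown here for illustration; the actual method also supports weights and multiple components):

```python
import numpy as np

def r2(data, predicted):
    """R^2 = 1 - SS_res / SS_tot. 1 is a perfect fit; can be negative."""
    ss_res = np.sum((data - predicted) ** 2)
    ss_tot = np.sum((data - np.mean(data)) ** 2)
    return 1 - ss_res / ss_tot

data = np.array([1.0, 2.0, 3.0, 4.0])
print(r2(data, data))                      # 1.0 for a perfect fit
print(r2(data, np.full_like(data, 10.0)))  # negative for a bad fit
```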
- CheckerBoard.set_fit_request(*, coordinates: bool | None | str = '$UNCHANGED$', data: bool | None | str = '$UNCHANGED$', weights: bool | None | str = '$UNCHANGED$') CheckerBoard #
Request metadata passed to the fit method.
Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config). Please check the User Guide on how the routing mechanism works.
The options for each parameter are:
- True: metadata is requested, and passed to fit if provided. The request is ignored if metadata is not provided.
- False: metadata is not requested and the meta-estimator will not pass it to fit.
- None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.
- str: metadata should be passed to the meta-estimator with this given alias instead of the original name.
The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.
New in version 1.3.
Note
This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.
- Parameters:
- coordinates : str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED
Metadata routing for the coordinates parameter in fit.
- data : str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED
Metadata routing for the data parameter in fit.
- weights : str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED
Metadata routing for the weights parameter in fit.
- Returns:
- self : object
The updated object.
- CheckerBoard.set_params(**params)#
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it's possible to update each component of a nested object.
- Parameters:
- **params : dict
Estimator parameters.
- Returns:
- self : estimator instance
Estimator instance.
- CheckerBoard.set_predict_request(*, coordinates: bool | None | str = '$UNCHANGED$') CheckerBoard #
Request metadata passed to the predict method.
Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config). Please check the User Guide on how the routing mechanism works.
The options for each parameter are:
- True: metadata is requested, and passed to predict if provided. The request is ignored if metadata is not provided.
- False: metadata is not requested and the meta-estimator will not pass it to predict.
- None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.
- str: metadata should be passed to the meta-estimator with this given alias instead of the original name.
The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.
New in version 1.3.
Note
This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.
- CheckerBoard.set_score_request(*, coordinates: bool | None | str = '$UNCHANGED$', data: bool | None | str = '$UNCHANGED$', weights: bool | None | str = '$UNCHANGED$') CheckerBoard #
Request metadata passed to the score method.
Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config). Please check the User Guide on how the routing mechanism works.
The options for each parameter are:
- True: metadata is requested, and passed to score if provided. The request is ignored if metadata is not provided.
- False: metadata is not requested and the meta-estimator will not pass it to score.
- None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.
- str: metadata should be passed to the meta-estimator with this given alias instead of the original name.
The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.
New in version 1.3.
Note
This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.
- Parameters:
- coordinates : str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED
Metadata routing for the coordinates parameter in score.
- data : str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED
Metadata routing for the data parameter in score.
- weights : str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED
Metadata routing for the weights parameter in score.
- Returns:
- self : object
The updated object.