Changelog
Version 1.4.0
Released on: 2020/04/06
Bug fixes:
Profile distances are now returned in projected (Cartesian) coordinates by the profile method of gridders if a projection is given. The method has the option to apply a projection to the coordinates before predicting, so we can pass geographic coordinates to Cartesian gridders. In these cases, the distance along the profile was calculated by the profile_coordinates function with the unprojected coordinates (in the geographic case it would be degrees). The profile point calculation was also done assuming that coordinates are Cartesian, which is clearly wrong if the inputs are longitude and latitude. To fix this, we now project the input points prior to passing them to profile_coordinates. This means that the distances are Cartesian and the generation of profile points is also Cartesian (as is assumed by the function). The generated coordinates are projected back so that the user gets longitude and latitude, but distances are still projected Cartesian meters. (#231)
Function verde.grid_to_table now sets the correct order for coordinates. We were relying on the order of the coords attribute of the xarray.Dataset for the order of the coordinates. This is wrong because xarray takes the coordinate order from the dims attribute instead, which is what we should also have been doing (see the example below). (#229)
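As a minimal sketch of the corrected grid_to_table ordering (the dimension names, variable name, and values below are hypothetical, chosen only for illustration), the coordinate columns follow the dims attribute of the grid:

```python
import numpy as np
import xarray as xr
import verde as vd

# Hypothetical grid whose dimension order is ("northing", "easting").
grid = xr.Dataset(
    {"temperature": (("northing", "easting"), np.arange(6.0).reshape(2, 3))},
    coords={"northing": [0.0, 1.0], "easting": [10.0, 20.0, 30.0]},
)
# The coordinate columns of the table follow the dims order of the grid,
# not the order of the coords mapping.
table = vd.grid_to_table(grid)
print(table.head())
```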
Documentation:
Generalize coordinate system specifications in verde.base.BaseGridder docstrings. Most methods don’t really depend on the coordinate system, so use more generic language to allow derived classes to specify their coordinate systems without having to overload the base methods just to rewrite the docstrings. (#240)
New features:
New function verde.convexhull_mask to mask points in a grid that fall outside the convex hull defined by data points (see the example below). (#237)
New function verde.project_grid that transforms 2D gridded data using a given projection. It re-samples the data using ScipyGridder (by default) and runs a blocked mean (optional) to avoid aliasing when the points aren’t evenly distributed in the projected coordinates (like in polar projections). Finally, it applies a convexhull_mask to the grid to avoid extrapolation to points that had no original data. (#246)
New function verde.expanding_window for selecting data that falls inside of an expanding window around a central point. (#238)
New function verde.rolling_window for rolling window selections of irregularly sampled data. (#236)
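A rough sketch of how the new masking and windowing functions might be used together (the region, window size, and synthetic points below are arbitrary choices for illustration, not taken from the release):

```python
import verde as vd

# Hypothetical scattered observations; the convex hull is computed from these.
data_coordinates = vd.scatter_points(region=(2, 8, -8, -2), size=300, random_state=0)
# A regular grid that extends beyond the data region.
grid_coordinates = vd.grid_coordinates(region=(0, 10, -10, 0), spacing=0.5)
# Boolean array that is True for grid points inside the convex hull of the data.
mask = vd.convexhull_mask(data_coordinates, coordinates=grid_coordinates)

# Rolling 2 x 2 windows spaced 1 unit apart over the same scattered points.
window_centers, indices = vd.rolling_window(data_coordinates, size=2, spacing=1)
# Each element of "indices" can be used to index the coordinates of one window.
first_window_easting = data_coordinates[0][indices[0, 0]]
```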
Improvements:
Allow verde.grid_to_table to take xarray.DataArray as input. (#235)
Maintenance:
Use newer MacOS images on Azure Pipelines. (#234)
This release contains contributions from:
Leonardo Uieda
Santiago Soler
Jesse Pisel
Version 1.3.0
Released on: 2020/01/22
DEPRECATIONS (the following features are deprecated and will be removed in Verde v2.0.0):
The functions verde.datasets.fetch_rio_magnetic and verde.datasets.setup_rio_magnetic_map, and the associated sample dataset, are deprecated. Please use another dataset instead. (#213)
Class verde.VectorSpline2D is deprecated. The class is specific to GPS/GNSS data and doesn’t fit the general-purpose nature of Verde. The implementation will be moved to the Erizo package instead. (#214)
The client keyword argument for verde.cross_val_score and verde.SplineCV is deprecated in favor of the new delayed argument (see below). (#222)
New features:
Use the dask.delayed interface for parallelism in cross-validation instead of the futures interface (dask.distributed.Client). It’s easier and allows building the entire graph lazily before executing. To use the new feature, pass delayed=True to verde.cross_val_score and verde.SplineCV (see the example below). The argument client in both of these is deprecated (see above). (#222)
Expose the optimal spline in verde.SplineCV.spline_. This is the fitted verde.Spline object using the optimal parameters. (#219)
New option drop_coords to allow verde.BlockReduce and verde.BlockMean to reduce extra elements in coordinates (basically, treat them as data). Defaults to True to maintain backwards compatibility. If False, will no longer drop coordinates after the second one but will apply the reduction in blocks to them as well. The reduced coordinates are returned in the same order in the coordinates. (#198)
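A minimal sketch of the delayed interface, assuming dask is installed (the synthetic coordinates and data below are only illustrative):

```python
import dask
import numpy as np
import verde as vd

# Synthetic scattered data, purely for illustration.
coordinates = vd.scatter_points(region=(0, 10, -10, 0), size=500, random_state=0)
data = np.sin(coordinates[0]) + np.cos(coordinates[1])

# With delayed=True nothing is computed yet; we get a list of dask.delayed objects.
scores = vd.cross_val_score(vd.Spline(), coordinates, data, delayed=True)
# Build and execute the whole graph at once (possibly in parallel).
scores = dask.compute(*scores)
print(np.mean(scores))
```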
Improvements:
Use the default system cache location to store the sample data instead of ~/.verde/data. This is so users can more easily clean up unused files. Because this location is system specific, the function verde.datasets.locate was added to return the cache folder location (see the example below). (#220)
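For example, the cache folder can be printed like this (a small sketch; the returned path depends on the operating system):

```python
import verde as vd

# Print the system-specific folder where Verde caches the sample data.
print(vd.datasets.locate())
```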
Bug fixes:
Correctly use parallel=True and numba.prange in the numba compiled functions. Using it on the Green’s function was raising a warning because there is nothing to parallelize. (#221)
Maintenance:
Add testing and support for Python 3.8. (#211)
Documentation:
Fix a typo in the JOSS paper Bibtex entry. (#215)
Wrap docstrings to 79 characters for better integration with Jupyter and IPython. These systems display docstrings using 80 character windows, causing our larger lines to wrap around and become almost illegible. (#212)
Use napoleon instead of numpydoc to format docstrings. Results in a slightly different layout in the website documentation. (#209)
Update contact information to point to the Slack chat instead of Gitter. (#204)
This release contains contributions from:
Santiago Soler
Leonardo Uieda
Version 1.2.0
Released on: 2019/07/23
Bug fixes:
Return the correct coordinates when passing pixel_register=True and shape to verde.grid_coordinates. The returned coordinates had 1 too few elements in each dimension (and the wrong values). This is because we generate grid-line registered points first and then shift them to the center of the pixels and drop the last point. This only works when specifying spacing because it will generate the right amount of points. When shape is given, we need to first convert it to “grid-line” shape (with 1 extra point per dimension) before generating coordinates (see the example below). (#183)
Reset force coordinates when refitting splines. Previously, the splines set the force coordinates from the data coordinates only the first time fit was called. This means that when fitting on different data, the spline would still use the old coordinates leading to a poor prediction score. Now, the spline will use the coordinates of the current data passed to fit. This only affects cases where force_coords=None. It’s a slight change and only affects some of the scores for cross-validation. (#191)
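A small sketch of the corrected behavior (region and shape chosen arbitrarily): a 5 x 5 pixel-registered request now yields exactly 5 x 5 pixel-center coordinates.

```python
import verde as vd

# Pixel-registered coordinates from a shape: 5 x 5 pixel centers, not 4 x 4.
east, north = vd.grid_coordinates(
    region=(0, 10, 0, 10), shape=(5, 5), pixel_register=True
)
print(east.shape)  # (5, 5)
print(east[0, :])  # centers of the 2-unit wide pixels: [1. 3. 5. 7. 9.]
```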
New functions/classes:
New class verde.SplineCV: a cross-validated version of Spline that performs grid search cross-validation to automatically tune the parameters of a Spline (see the example below). (#185)
New function verde.longitude_continuity to format longitudes to a continuous range so that they can be indexed with verde.inside. (#181)
New function verde.load_surfer to load grid data from a Surfer ASCII file (a contouring, gridding, and surface mapping software from Golden Software). (#169)
New function verde.median_distance that calculates the median near-neighbor distance for each point in the given dataset. (#163)
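An illustrative sketch of SplineCV; the candidate dampings and mindists and the synthetic data below are arbitrary values for demonstration, not recommendations:

```python
import numpy as np
import verde as vd

# Synthetic scattered data, purely for illustration.
coordinates = vd.scatter_points(region=(0, 10, -10, 0), size=500, random_state=0)
data = np.sin(coordinates[0]) + np.cos(coordinates[1])

# Grid-search cross-validation over damping and mindist candidates.
spline = vd.SplineCV(dampings=(1e-5, 1e-3, 1e-1), mindists=(1, 10))
spline.fit(coordinates, data)

# After fitting, SplineCV behaves like a regular gridder with the best parameters.
grid = spline.grid(region=(0, 10, -10, 0), spacing=0.5)
```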
Improvements:
Allow verde.block_split and verde.BlockReduce to take a shape argument instead of spacing. Useful when the size of the block is less meaningful than the number of blocks (see the example below). (#184)
Allow zero degree polynomials in verde.Trend, which represents a mean value. (#162)
Function verde.cross_val_score returns a numpy array instead of a list for easier computations on the results. (#160)
Function verde.maxabs now handles inputs with NaNs automatically. (#158)
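A brief sketch of the shape argument and the NaN handling (synthetic points over an arbitrary region):

```python
import numpy as np
import verde as vd

coordinates = vd.scatter_points(region=(0, 10, -10, 0), size=200, random_state=0)
# Ask for a 5 x 5 grid of blocks instead of specifying a block size.
block_coordinates, labels = vd.block_split(coordinates, shape=(5, 5))
print(labels.size)  # one block label per input point

# maxabs ignores NaNs when finding the maximum absolute value.
print(vd.maxabs(np.array([1.5, -3.2, np.nan])))  # 3.2
```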
Documentation:
New tutorial to explain the intricacies of grid coordinates generation, adjusting spacing vs region, pixel registration, etc. (#192)
Maintenance:
Drop support for Python 3.5. (#178)
Add support for Python 3.7. (#150)
More functions are now part of the base API: n_1d_arrays, check_fit_input and least_squares are now included in verde.base. (#156)
This release contains contributions from:
Goto15
Lindsey Heagy
Jesse Pisel
Santiago Soler
Leonardo Uieda
Version 1.1.0
Released on: 2018/11/06
New features:
New verde.grid_to_table function that converts grids to xyz tables with the coordinate and data values for each grid point. (#148)
Add an extra_coords option to coordinate generators (grid_coordinates, scatter_points, and profile_coordinates) to specify a constant value to be used as an extra coordinate (see the example below). (#145)
Allow gridders to pass extra keyword arguments (**kwargs) for the coordinate generator functions. (#144)
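For instance (region, spacing, and the height value are arbitrary), a constant extra coordinate can be generated alongside the grid coordinates:

```python
import verde as vd

# Every grid point gets a constant third coordinate, e.g. a height of 1000 m.
east, north, height = vd.grid_coordinates(
    region=(0, 10, -10, 0), spacing=1, extra_coords=1000
)
print(height.min(), height.max())  # 1000.0 1000.0
```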
Improvements:
Don’t use the Jacobian matrix for predictions to avoid memory overloads. Use dedicated and numba wrapped functions instead. As a consequence, predictions are also a bit faster when numba is installed. (#149)
Set the default n_splits=5 when using KFold from scikit-learn. (#143)
Bug fixes:
Use the xarray grid’s pcolormesh method instead of matplotlib to plot grids in the examples. The xarray method takes care of shifting the pixels by half a spacing when grids are not pixel registered. (#151)
New contributors to the project:
Jesse Pisel
Version 1.0.0
Released on: 2018/09/13
First release of Verde. Establishes the gridder API and includes blocked reductions, bi-harmonic splines [Sandwell1987], coupled 2D interpolation [SandwellWessel2016], chaining operations to form a pipeline, and more.