Changelog
2.1.1 [2022-09-01]
- ``unpad()`` introduced: effectively a helper function to undo ``pad()``
- ``from_transforms()`` signature extended by a ``padding`` argument: a convenience when creating a padded flow field, automatically adjusting the shape and the relevant transform parameters
- ``select()`` parameter ``item`` can be ``None``, returning ``self``
- ``get_padding()`` signature extended by an ``item`` argument, used to select an item in the batched flow. Returns a simple list of padding values, rather than a list of lists.
- Minor performance improvement in ``combine()``
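A minimal sketch of what an ``unpad()`` that inverts ``pad()`` amounts to, written in plain PyTorch rather than the library's own API (the helper names and the ``[top, bottom, left, right]`` padding convention here are illustrative assumptions, not the library's actual signatures):

```python
import torch
import torch.nn.functional as F

def pad_flow(flow, padding):
    # padding given as [top, bottom, left, right] (assumed convention);
    # F.pad expects (left, right, top, bottom) for the last two dims
    top, bottom, left, right = padding
    return F.pad(flow, (left, right, top, bottom))

def unpad_flow(flow, padding):
    # slice the padded border away again, inverting pad_flow
    top, bottom, left, right = padding
    h, w = flow.shape[-2], flow.shape[-1]
    return flow[..., top:h - bottom, left:w - right]

flow = torch.randn(2, 2, 10, 20)           # batched flow vectors
padded = pad_flow(flow, [3, 1, 2, 4])      # H: 10 -> 14, W: 20 -> 26
restored = unpad_flow(padded, [3, 1, 2, 4])
```

Here ``restored`` is exactly equal to ``flow`` again, which is the sense in which one function undoes the other.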
2.1.0 [2022-06-21]
- ``combine()`` introduced: efficient, generalised combination of flows with any frame of reference ``ref``
- ``combine_with()`` improvements, but it will become deprecated in a future release in favour of ``combine()``
- Test coverage improved
- Documentation updated and extended
2.0.0 [2022-05-13]
Major update, enhancing usability for deep learning applications.

- Flow vectors and masks are now batched, meaning the shape is \((N, H, W)\) instead of \((H, W)\). This enables easy integration with any deep learning application or network, harnessing the efficiencies of batch-wise processing.
- A differentiable PyTorch function to approximately replace ``scipy.interpolate.griddata()`` was implemented
- A toolbox-wide boolean setting called ``PURE_PYTORCH`` has been introduced. If it is set to ``True``, non-Torch operations are avoided as far as possible. Specifically, this means avoiding the slow SciPy-based function ``scipy.interpolate.griddata()`` in favour of a more approximate, but significantly faster, PyTorch-only method that interpolates unstructured data on a defined regular grid.
- If ``PURE_PYTORCH`` is set to ``True``, all oflibpytorch methods that output a float torch tensor are differentiable, again allowing for easy integration with deep learning algorithms.
- Some utility functions made available
- Documentation and unit test updates
- Minor bugfixes
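The differentiability point can be illustrated with a plain-PyTorch sketch (not the library's actual implementation): resampling with ``torch.nn.functional.grid_sample`` keeps the whole operation inside autograd, whereas routing data through ``scipy.interpolate.griddata()`` would detach it from the computation graph:

```python
import torch
import torch.nn.functional as F

# A flow field we want gradients for
flow = torch.zeros(1, 2, 4, 4, requires_grad=True)

# Identity sampling grid in normalised [-1, 1] coordinates
ys, xs = torch.meshgrid(torch.linspace(-1, 1, 4),
                        torch.linspace(-1, 1, 4), indexing="ij")
grid = torch.stack((xs, ys), dim=-1).unsqueeze(0)   # (1, H, W, 2)

img = torch.rand(1, 3, 4, 4)
# Shift the grid by the flow and resample: every step is a torch op,
# so the result is differentiable with respect to the flow field
warped = F.grid_sample(img, grid + flow.permute(0, 2, 3, 1),
                       align_corners=True)
warped.sum().backward()   # gradients reach flow.grad
```

A SciPy call in the middle of this chain would require ``.detach().numpy()``, and the backward pass above would no longer work.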
1.1.1 [2022-01-28]
- Type of the flow attribute ``device`` changed from string to the ``torch.device`` class
- If the CUDA device index is left undefined, it defaults to ``torch.cuda.current_device()``. This avoids ambiguities and possible CUDA device mismatches when working with multiple GPUs.
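A small sketch of the behaviour described above, with a hypothetical helper name (the library handles this internally; this is not its API):

```python
import torch

def resolve_device(device):
    # Hypothetical helper: strings become torch.device objects, and a
    # CUDA device without an explicit index is pinned to the current one
    dev = torch.device(device)
    if dev.type == "cuda" and dev.index is None and torch.cuda.is_available():
        dev = torch.device("cuda", torch.cuda.current_device())
    return dev

cpu_dev = resolve_device("cpu")   # a torch.device, not a string
```

Pinning the index up front means two flows created "on cuda" cannot silently end up on different GPUs.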
1.1.0 [2021-11-30]
- Introduced functions that largely replicate the functionality of the flow class methods, but for Torch tensor and NumPy array flow inputs
- Documentation updated with the above functions; some older errors corrected
- Minor bugfixes
- A BibTeX citation to use to acknowledge the authors added
1.0.1 [2021-07-09]
- Fixed a bug in ``visualise()`` (the ``range_max`` calculation)
- Removed all usages of the torch tensor attribute ``ndim`` for improved backwards compatibility with older torch versions
- Removed a print statement in ``test_utils``
- Minor documentation corrections, and the addition of this changelog
1.0.0 [2021-06-09]

- First full release