# Transforms
## FlattenSamplesIntoChannels

Bases: `ImageOnlyTransform`
FlattenSamplesIntoChannels is an image transformation that merges the sample (and optionally temporal) dimensions into the channel dimension.
This transform rearranges an input tensor by flattening the sample dimension, and if specified, also the temporal dimension, thereby concatenating these dimensions into a single channel dimension.
Source code in terratorch/datasets/transforms.py
### `__init__(time_dim=True)`
Initialize the FlattenSamplesIntoChannels transform.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `time_dim` | `bool` | If True, the temporal dimension is included in the flattening process. | `True` |
Source code in terratorch/datasets/transforms.py
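The rearrangement can be sketched in plain NumPy. The axis order below is an illustrative assumption (albumentations conventionally works channels-last); the authoritative layout is in the source file above.

```python
import numpy as np

# Hypothetical input: 2 samples, 3 timesteps, 8x8 pixels, 4 channels,
# assuming a channels-last layout (samples, time, H, W, C).
x = np.arange(2 * 3 * 8 * 8 * 4, dtype=np.float32).reshape(2, 3, 8, 8, 4)

# With time_dim=True, sample and temporal axes are merged into channels:
# (S, T, H, W, C) -> (H, W, S*T*C)
flat = x.transpose(2, 3, 0, 1, 4).reshape(8, 8, 2 * 3 * 4)
print(flat.shape)  # (8, 8, 24)
```

With `time_dim=False` the input would carry no temporal axis and only the sample dimension would be folded into the channels.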
## FlattenTemporalIntoChannels

Bases: `ImageOnlyTransform`
FlattenTemporalIntoChannels is an image transformation that flattens the temporal dimension into the channel dimension.
This transform rearranges an input tensor with a temporal dimension into one where the time and channel dimensions are merged. It expects the input to have a fixed number of dimensions defined by N_DIMS_FOR_TEMPORAL.
Source code in terratorch/datasets/transforms.py
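A minimal NumPy sketch of the merge, assuming the channels-last layout albumentations conventionally uses; the exact dimension count is governed by `N_DIMS_FOR_TEMPORAL` in the source above.

```python
import numpy as np

# Hypothetical channels-last input with a leading temporal axis:
# (time, height, width, channels) = (3, 8, 8, 4)
x = np.arange(3 * 8 * 8 * 4, dtype=np.float32).reshape(3, 8, 8, 4)

# Merge time into channels: (T, H, W, C) -> (H, W, T*C)
flat = x.transpose(1, 2, 0, 3).reshape(8, 8, 3 * 4)
print(flat.shape)  # (8, 8, 12)
```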
## MultimodalTransforms
MultimodalTransforms applies albumentations transforms to multiple image modalities.
This class supports both shared transformations across modalities and separate transformations for each modality. It also handles non-image modalities by applying a specified non-image transform.
Source code in terratorch/datasets/transforms.py
### `__init__(transforms, shared=True, non_image_modalities=None, non_image_transform=None)`
Initialize the MultimodalTransforms.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `transforms` | `dict` or `Compose` | The transformation(s) to apply to the data. | *required* |
| `shared` | `bool` | If True, the same transform is applied to all modalities; if False, separate transforms are used. | `True` |
| `non_image_modalities` | `list[str] \| None` | List of keys corresponding to non-image modalities. | `None` |
| `non_image_transform` | `object \| None` | A transform to apply to non-image modalities. If None, a default transform is used. | `None` |
Source code in terratorch/datasets/transforms.py
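The shared-vs-separate dispatch can be illustrated with a simplified sketch; this is not the actual source, and the `apply_multimodal` helper and toy transforms are hypothetical stand-ins for albumentations pipelines.

```python
def apply_multimodal(transforms, data, shared=True):
    """Simplified sketch of MultimodalTransforms' dispatch logic."""
    if shared:
        # One transform is applied uniformly to every modality.
        return {key: transforms(value) for key, value in data.items()}
    # Otherwise, transforms is a dict mapping each modality to its own transform.
    return {key: transforms[key](value) for key, value in data.items()}

sample = {"optical": [1, 2], "sar": [3, 4]}
double = lambda v: [2 * x for x in v]
print(apply_multimodal(double, sample, shared=True))
# {'optical': [2, 4], 'sar': [6, 8]}
```

In the real class, non-image modalities listed in `non_image_modalities` are routed to `non_image_transform` instead of the image pipeline.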
## Padding

Bases: `ImageOnlyTransform`

Padding is an image transformation that pads inputs to smooth out slight size discrepancies between images.
Source code in terratorch/datasets/transforms.py
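A NumPy sketch of the idea, assuming a channels-last image padded up to a target size; the padding mode and target-size handling here are assumptions, as the docstring only states that slight discrepancies are adjusted.

```python
import numpy as np

# Hypothetical image slightly smaller than the expected 256x256.
img = np.ones((254, 255, 3), dtype=np.float32)
target_h, target_w = 256, 256

# Zero-pad the bottom/right edges up to the target size (illustrative choice).
pad_h = target_h - img.shape[0]
pad_w = target_w - img.shape[1]
padded = np.pad(img, ((0, pad_h), (0, pad_w), (0, 0)))
print(padded.shape)  # (256, 256, 3)
```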
## Rearrange

Bases: `ImageOnlyTransform`
Rearrange is a generic image transformation that reshapes an input tensor using a custom einops pattern.
This transform allows flexible reordering of tensor dimensions based on the provided pattern and arguments.
Source code in terratorch/datasets/transforms.py
### `__init__(rearrange, rearrange_args=None, always_apply=True, p=1)`
Initialize the Rearrange transform.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `rearrange` | `str` | The einops rearrangement pattern to apply. | *required* |
| `rearrange_args` | `dict[str, int] \| None` | Additional arguments for the rearrangement pattern. | `None` |
| `always_apply` | `bool` | Whether to always apply this transform. | `True` |
| `p` | `float` | The probability of applying the transform. | `1` |
Source code in terratorch/datasets/transforms.py
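To illustrate what a pattern plus `rearrange_args` achieves, here is the NumPy equivalent of one hypothetical einops pattern (the pattern and sizes are examples, not defaults):

```python
import numpy as np

# Hypothetical pattern: "h w (t c) -> t h w c" with rearrange_args={"c": 4}.
# Equivalent NumPy sketch: split the merged axis, then move time to the front.
x = np.arange(8 * 8 * 12, dtype=np.float32).reshape(8, 8, 12)  # (h, w, t*c)
out = x.reshape(8, 8, 3, 4).transpose(2, 0, 1, 3)  # -> (t, h, w, c)
print(out.shape)  # (3, 8, 8, 4)
```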
## SelectBands

Bases: `ImageOnlyTransform`
SelectBands is an image transformation that selects a subset of bands (channels) from an input image.
This transform uses specified band indices to filter and output only the desired channels from the image tensor.
Source code in terratorch/datasets/transforms.py
### `__init__(band_indices)`
Initialize the SelectBands transform.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `band_indices` | `list[int]` | A list of indices specifying which bands to select. | *required* |
Source code in terratorch/datasets/transforms.py
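The selection amounts to indexing the channel axis; the channels-last layout below is an assumption based on albumentations convention.

```python
import numpy as np

# Hypothetical channels-last image with 6 spectral bands.
x = np.random.rand(8, 8, 6).astype(np.float32)

# Keep only bands 0, 2, and 5 (illustrative indices).
band_indices = [0, 2, 5]
out = x[..., band_indices]
print(out.shape)  # (8, 8, 3)
```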
## UnflattenSamplesFromChannels

Bases: `ImageOnlyTransform`
UnflattenSamplesFromChannels is an image transformation that restores the sample (and optionally temporal) dimensions from the channel dimension.
This transform is designed to reverse the flattening performed by FlattenSamplesIntoChannels and is typically applied after converting images to a channels-first format.
Source code in terratorch/datasets/transforms.py
### `__init__(time_dim=True, n_samples=None, n_timesteps=None, n_channels=None)`
Initialize the UnflattenSamplesFromChannels transform.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `time_dim` | `bool` | If True, the temporal dimension is considered during unflattening. | `True` |
| `n_samples` | `int \| None` | The number of samples. | `None` |
| `n_timesteps` | `int \| None` | The number of time steps. | `None` |
| `n_channels` | `int \| None` | The number of channels per time step. | `None` |

Raises:

| Type | Description |
|---|---|
| `Exception` | If `time_dim` is True and fewer than two of `n_channels`, `n_timesteps`, and `n_samples` are provided. |
| `Exception` | If `time_dim` is False and neither `n_channels` nor `n_samples` is provided. |
Source code in terratorch/datasets/transforms.py
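A NumPy sketch of the inverse operation in the channels-first layout the docstring describes (e.g. after `ToTensorV2`); it also shows why at least two of the three sizes are needed, since the third is inferred from the merged axis.

```python
import numpy as np

# Flattened channels-first input: (S*T*C, H, W) with S=2, T=3, C=4.
x = np.arange(24 * 8 * 8, dtype=np.float32).reshape(24, 8, 8)

# Given two sizes, the third follows from the merged channel axis.
n_samples, n_timesteps = 2, 3
n_channels = x.shape[0] // (n_samples * n_timesteps)  # -> 4

# (S*T*C, H, W) -> (S, T, C, H, W)
out = x.reshape(n_samples, n_timesteps, n_channels, 8, 8)
print(out.shape)  # (2, 3, 4, 8, 8)
```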
## UnflattenTemporalFromChannels

Bases: `ImageOnlyTransform`
UnflattenTemporalFromChannels is an image transformation that restores the temporal dimension from the channel dimension.
This transform is typically applied after converting images to a channels-first format (e.g., after ToTensorV2) and rearranges the flattened temporal information back into separate time and channel dimensions.
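A NumPy sketch of the restoration in the channels-first layout described above; the assumption here is that the number of timesteps is known so the per-step channel count can be inferred.

```python
import numpy as np

# Flattened channels-first input after ToTensorV2: (T*C, H, W) with T=3, C=4.
n_timesteps = 3
x = np.arange(12 * 8 * 8, dtype=np.float32).reshape(12, 8, 8)

# (T*C, H, W) -> (T, C, H, W); -1 infers the per-timestep channel count.
out = x.reshape(n_timesteps, -1, 8, 8)
print(out.shape)  # (3, 4, 8, 8)
```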