API¶
The following documentation is automatically extracted from the corresponding original backend libraries.
Operator APIs¶
- class beacon_aug.operators.Add(library=None, *args, **kwargs)¶
Bases: object
- library (str): flag for library. Should be one of: imgaug. Default: imgaug.
- Args:
  if library = imgaug: (see: Add) [Source]
  - value (number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional):
    Value to add to all pixels.
  - per_channel (bool or float or imgaug.parameters.StochasticParameter, optional):
    Whether to use (imagewise) the same sample(s) for all channels (False) or to sample value(s) for each channel (True). Setting this to True will therefore lead to different transformations per image and channel, otherwise only per image. If this value is a float p, then for p percent of all images per_channel will be treated as True. If it is a StochasticParameter, it is expected to produce samples with values between 0.0 and 1.0, where values >0.5 will lead to per-channel behaviour (i.e. the same as True).
e.g.
import beacon_aug as BA
aug = BA.Add(library="imgaug")
image_auged = aug(image=image)["image"]
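For intuition, the per_channel behaviour described above can be sketched in plain Python. This is an illustrative re-implementation on nested lists, not beacon_aug's or imgaug's actual code; the name add_value is hypothetical:

```python
import random

def add_value(image, value_range=(-10, 10), per_channel=False):
    """Add a uniformly sampled value to every pixel of an H x W x C image
    (nested lists). per_channel=False draws one sample for the whole image;
    per_channel=True draws an independent sample per channel."""
    channels = len(image[0][0])
    if per_channel:
        samples = [random.uniform(*value_range) for _ in range(channels)]
    else:
        samples = [random.uniform(*value_range)] * channels
    # Clip results to the valid uint8 range [0, 255].
    return [[[min(255, max(0, px[c] + samples[c])) for c in range(channels)]
             for px in row] for row in image]
```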
- class beacon_aug.operators.AddElementwise(library=None, *args, **kwargs)¶
Bases: object
- library (str): flag for library. Should be one of: imgaug. Default: imgaug.
- Args:
  if library = imgaug: (see: AddElementwise) [Source]
  - value (int or tuple of int or list of int or imgaug.parameters.StochasticParameter, optional):
    Value to add to the pixels.
  - per_channel (bool or float or imgaug.parameters.StochasticParameter, optional):
    Whether to use (imagewise) the same sample(s) for all channels (False) or to sample value(s) for each channel (True). Setting this to True will therefore lead to different transformations per image and channel, otherwise only per image. If this value is a float p, then for p percent of all images per_channel will be treated as True. If it is a StochasticParameter, it is expected to produce samples with values between 0.0 and 1.0, where values >0.5 will lead to per-channel behaviour (i.e. the same as True).
e.g.
import beacon_aug as BA
aug = BA.AddElementwise(library="imgaug")
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.AddToHue(library=None, *args, **kwargs)¶
Bases: object
- library (str): flag for library. Should be one of: imgaug. Default: imgaug.
- Args:
  if library = imgaug: (see: AddToHue) [Source]
  - value (None or int or tuple of int or list of int or imgaug.parameters.StochasticParameter, optional):
    Value to add to the hue of all pixels. This is expected to be in the range -255 to +255 and will automatically be projected to an angular representation using (hue/255) * (360/2) (OpenCV's hue representation is in the range [0, 180] instead of [0, 360]).
e.g.
import beacon_aug as BA
aug = BA.AddToHue(library="imgaug")
image_auged = aug(image=image)["image"]
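The (hue/255) * (360/2) projection described above maps an additive value in [-255, 255] onto OpenCV's [0, 180] hue scale. A minimal sketch (helper names are hypothetical, not part of any library):

```python
def project_hue_shift(value):
    """Map an additive hue value in [-255, 255] to OpenCV's hue scale,
    where 180 units correspond to 360 degrees: (value/255) * (360/2)."""
    return (value / 255) * (360 / 2)

def add_to_hue(h, value):
    """Shift an OpenCV hue channel value (range [0, 180)) and wrap around."""
    return (h + project_hue_shift(value)) % 180
```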
- class beacon_aug.operators.AddToHueAndSaturation(library=None, *args, **kwargs)¶
Bases: object
- library (str): flag for library. Should be one of: imgaug. Default: imgaug.
- Args:
  if library = imgaug: (see: AddToHueAndSaturation) [Source]
  - value (None or int or tuple of int or list of int or imgaug.parameters.StochasticParameter, optional):
    Value to add to the hue and saturation of all pixels. It is expected to be in the range -255 to +255.
  - value_hue (None or int or tuple of int or list of int or imgaug.parameters.StochasticParameter, optional):
    Value to add to the hue of all pixels. This is expected to be in the range -255 to +255 and will automatically be projected to an angular representation using (hue/255) * (360/2) (OpenCV's hue representation is in the range [0, 180] instead of [0, 360]). Only this or value may be set, not both.
  - value_saturation (None or int or tuple of int or list of int or imgaug.parameters.StochasticParameter, optional):
    Value to add to the saturation of all pixels. It is expected to be in the range -255 to +255. Only this or value may be set, not both.
  - per_channel (bool or float, optional):
    Whether to sample per image only one value from value and use it for both hue and saturation (False) or to sample independently one value for hue and one for saturation (True). If this value is a float p, then for p percent of all images per_channel will be treated as True, otherwise as False.
e.g.
import beacon_aug as BA
aug = BA.AddToHueAndSaturation(library="imgaug")
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.AddToSaturation(library=None, *args, **kwargs)¶
Bases: object
- library (str): flag for library. Should be one of: imgaug. Default: imgaug.
- Args:
  if library = imgaug: (see: AddToSaturation) [Source]
  - value (None or int or tuple of int or list of int or imgaug.parameters.StochasticParameter, optional):
    Value to add to the saturation of all pixels. It is expected to be in the range -255 to +255.
e.g.
import beacon_aug as BA
aug = BA.AddToSaturation(library="imgaug")
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.AdditiveLaplaceNoise(library=None, *args, **kwargs)¶
Bases: object
- library (str): flag for library. Should be one of: imgaug. Default: imgaug.
- Args:
  if library = imgaug: (see: AdditiveLaplaceNoise) [Source]
  - loc (number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional):
    Mean of the Laplace distribution that generates the noise.
  - scale (number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional):
    Standard deviation of the Laplace distribution that generates the noise. Must be >=0. If 0, then only loc will be used. Recommended to be around 255*0.05.
  - per_channel (bool or float or imgaug.parameters.StochasticParameter, optional):
    Whether to use (imagewise) the same sample(s) for all channels (False) or to sample value(s) for each channel (True). Setting this to True will therefore lead to different transformations per image and channel, otherwise only per image. If this value is a float p, then for p percent of all images per_channel will be treated as True. If it is a StochasticParameter, it is expected to produce samples with values between 0.0 and 1.0, where values >0.5 will lead to per-channel behaviour (i.e. the same as True).
e.g.
import beacon_aug as BA
aug = BA.AdditiveLaplaceNoise(library="imgaug")
image_auged = aug(image=image)["image"]
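For intuition, a noise sample with the documented loc/scale semantics (scale >= 0; scale == 0 falls back to loc) can be drawn in pure Python via inverse-CDF sampling. This is a sketch, not the backend's implementation, which uses numpy's generator:

```python
import math
import random

def laplace_sample(loc=0.0, scale=255 * 0.05):
    """Draw one sample from Laplace(loc, scale) by inverting the CDF.
    scale must be >= 0; with scale == 0 only loc is used, as documented."""
    if scale < 0:
        raise ValueError("scale must be >= 0")
    if scale == 0:
        return loc
    u = random.random() - 0.5          # uniform in [-0.5, 0.5)
    sign = 1.0 if u >= 0 else -1.0
    return loc - scale * sign * math.log(1.0 - 2.0 * abs(u))
```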
- class beacon_aug.operators.AdditivePoissonNoise(library=None, *args, **kwargs)¶
Bases: object
- library (str): flag for library. Should be one of: imgaug. Default: imgaug.
- Args:
  if library = imgaug: (see: AdditivePoissonNoise) [Source]
  - lam (number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional):
    Lambda parameter of the Poisson distribution. Must be >=0. Recommended values are around 0.0 to 10.0.
  - per_channel (bool or float or imgaug.parameters.StochasticParameter, optional):
    Whether to use (imagewise) the same sample(s) for all channels (False) or to sample value(s) for each channel (True). Setting this to True will therefore lead to different transformations per image and channel, otherwise only per image. If this value is a float p, then for p percent of all images per_channel will be treated as True. If it is a StochasticParameter, it is expected to produce samples with values between 0.0 and 1.0, where values >0.5 will lead to per-channel behaviour (i.e. the same as True).
e.g.
import beacon_aug as BA
aug = BA.AdditivePoissonNoise(library="imgaug")
image_auged = aug(image=image)["image"]
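A Poisson sample with the documented lam semantics can be drawn with Knuth's classic multiplication algorithm; a pure-Python sketch for intuition (the backend samples via numpy):

```python
import math
import random

def poisson_sample(lam):
    """Draw one Poisson(lam) sample with Knuth's multiplication algorithm.
    lam must be >= 0; the docs above recommend roughly 0.0 to 10.0."""
    if lam < 0:
        raise ValueError("lam must be >= 0")
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1
```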
- class beacon_aug.operators.AdjustGamma(library=None, *args, **kwargs)¶
Bases: object
- library (str): flag for library. Should be one of: mmcv. Default: mmcv.
- Args:
  if library = mmcv: (see: AdjustGamma) [Source]
  - gamma (float or int): Gamma value used in gamma correction. Default: 1.0.
e.g.
import beacon_aug as BA
aug = BA.AdjustGamma(library="mmcv")
image_auged = aug(image=image)["image"]
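Standard gamma correction (which is what mmcv's adjust_gamma implements) maps out = 255 * (in / 255) ** gamma, so the default gamma = 1.0 leaves the image unchanged. A sketch on a flat list of uint8 values, assuming the usual lookup-table formulation:

```python
def adjust_gamma(pixels, gamma=1.0):
    """Apply gamma correction to a flat list of uint8 values using a
    256-entry lookup table: out = 255 * (in / 255) ** gamma."""
    table = [round(255 * (i / 255) ** gamma) for i in range(256)]
    return [table[p] for p in pixels]
```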
- class beacon_aug.operators.Affine(library=None, *args, **kwargs)¶
Bases:
object
Augmentation to apply affine transformations to images. This is mostly a wrapper around the corresponding classes and functions in OpenCV.
Affine transformations involve:
- Translation ("move" image on the x-/y-axis)
- Rotation
- Scaling ("zoom" in/out)
- Shear (move one side of the image, turning a square into a trapezoid)
All such transformations can create “new” pixels in the image without a defined content, e.g. if the image is translated to the left, pixels are created on the right. A method has to be defined to deal with these pixel values. The parameters cval and mode of this class deal with this.
Some transformations involve interpolations between several pixels of the input image to generate output pixel values. The parameter order deals with the method of interpolation used for this.
- Args:
  - library (str): flag for library. Should be one of: albumentations, imgaug, torchvision, keras. Default: albumentations.
  - scale (number, tuple of number or dict): Scaling factor to use, where 1.0 denotes "no change" and 0.5 is zoomed out to 50 percent of the original size.
    If a single number, then that value will be used for all images.
    If a tuple (a, b), then a value will be uniformly sampled per image from the interval [a, b]. That value will be used identically for both x- and y-axis.
    If a dictionary, then it is expected to have the keys x and/or y. Each of these keys can have the same values as described above. Using a dictionary allows setting different values for the two axes; sampling will then happen independently per axis, resulting in samples that differ between the axes.
  - translate_percent (None, number, tuple of number or dict): Translation as a fraction of the image height/width (x-translation, y-translation), where 0 denotes "no change" and 0.5 denotes "half of the axis size".
    If None, then equivalent to 0.0 unless translate_px has a value other than None.
    If a single number, then that value will be used for all images.
    If a tuple (a, b), then a value will be uniformly sampled per image from the interval [a, b]. That sampled fraction value will be used identically for both x- and y-axis.
    If a dictionary, then it is expected to have the keys x and/or y, with the same values as described above and independent sampling per axis.
  - translate_px (None, int, tuple of int or dict): Translation in pixels.
    If None, then equivalent to 0 unless translate_percent has a value other than None.
    If a single int, then that value will be used for all images.
    If a tuple (a, b), then a value will be uniformly sampled per image from the discrete interval [a..b]. That number will be used identically for both x- and y-axis.
    If a dictionary, then it is expected to have the keys x and/or y, with the same values as described above and independent sampling per axis.
  - rotate (number or tuple of number): Rotation in degrees (NOT radians), i.e. expected value range is around [-360, 360]. Rotation happens around the center of the image, not the top left corner as in some other frameworks.
    If a number, then that value will be used for all images.
    If a tuple (a, b), then a value will be uniformly sampled per image from the interval [a, b] and used as the rotation value.
  - shear (number, tuple of number or dict): Shear in degrees (NOT radians), i.e. expected value range is around [-360, 360], with reasonable values being in the range of [-45, 45].
    If a number, then that value will be used for all images as the shear on the x-axis (no shear on the y-axis will be done).
    If a tuple (a, b), then two values will be uniformly sampled per image from the interval [a, b] and be used as the x- and y-shear values.
    If a dictionary, then it is expected to have the keys x and/or y, with the same values as described above and independent sampling per axis.
  - interpolation (int): OpenCV interpolation flag.
  - mask_interpolation (int): OpenCV interpolation flag.
  - cval (number or sequence of number): The constant value to use when filling in newly created pixels (e.g. translating by 1px to the right will create a new 1px-wide column of pixels on the left of the image). The value is only used when mode=constant. The expected value range is [0, 255] for uint8 images.
  - cval_mask (number or tuple of number): Same as cval but only for masks.
  - mode (int): OpenCV border flag.
  - fit_output (bool): Whether to modify the affine transformation so that the whole output image is always contained in the image plane (True) or accept parts of the image being outside the image plane (False). This can be thought of as first applying the affine transformation and then applying a second transformation to "zoom in" on the new image so that it fits the image plane. This is useful to avoid corners of the image being outside of the image plane after applying rotations. It will however negate translation and scaling.
  - p (float): probability of applying the transform. Default: 0.5.
  if library = imgaug: (see: Affine) [Source]
  - scale (number or tuple of number or list of number or imgaug.parameters.StochasticParameter or dict {"x": number/tuple/list/StochasticParameter, "y": number/tuple/list/StochasticParameter}, optional):
    Scaling factor to use, where 1.0 denotes "no change" and 0.5 is zoomed out to 50 percent of the original size.
  - translate_percent (None or number or tuple of number or list of number or imgaug.parameters.StochasticParameter or dict {"x": number/tuple/list/StochasticParameter, "y": number/tuple/list/StochasticParameter}, optional):
    Translation as a fraction of the image height/width (x-translation, y-translation), where 0 denotes "no change" and 0.5 denotes "half of the axis size".
  - translate_px (None or int or tuple of int or list of int or imgaug.parameters.StochasticParameter or dict {"x": int/tuple/list/StochasticParameter, "y": int/tuple/list/StochasticParameter}, optional):
    Translation in pixels.
  - rotate (number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional):
    Rotation in degrees (NOT radians), i.e. expected value range is around [-360, 360]. Rotation happens around the center of the image, not the top left corner as in some other frameworks.
  - shear (number or tuple of number or list of number or imgaug.parameters.StochasticParameter or dict {"x": int/tuple/list/StochasticParameter, "y": int/tuple/list/StochasticParameter}, optional):
    Shear in degrees (NOT radians), i.e. expected value range is around [-360, 360], with reasonable values being in the range of [-45, 45].
  - order (int or iterable of int or imgaug.ALL or imgaug.parameters.StochasticParameter, optional):
    Interpolation order to use. Same meaning as in skimage.
  - cval (number or tuple of number or list of number or imgaug.ALL or imgaug.parameters.StochasticParameter, optional):
    The constant value to use when filling in newly created pixels (e.g. translating by 1px to the right will create a new 1px-wide column of pixels on the left of the image). The value is only used when mode=constant. The expected value range is [0, 255] for uint8 images. It may be a float value.
  - mode (str or list of str or imgaug.ALL or imgaug.parameters.StochasticParameter, optional):
    Method to use when filling in newly created pixels. Same meaning as in skimage (and numpy.pad()).
  - fit_output (bool, optional):
    Whether to modify the affine transformation so that the whole output image is always contained in the image plane (True) or accept parts of the image being outside the image plane (False). This can be thought of as first applying the affine transformation and then applying a second transformation to "zoom in" on the new image so that it fits the image plane. This is useful to avoid corners of the image being outside of the image plane after applying rotations. It will however negate translation and scaling. Note also that activating this may lead to image sizes differing from the input image sizes. To avoid this, wrap Affine in KeepSizeByResize, e.g. KeepSizeByResize(Affine(...)).
  - backend (str, optional):
    Framework to use as a backend. Valid values are auto, skimage (scikit-image's warp) and cv2 (OpenCV's warp). If auto is used, the augmenter will automatically try to use cv2 whenever possible (order must be in [0, 1, 3]). It will silently fall back to skimage if order/dtype is not supported by cv2. cv2 is generally faster than skimage. It also supports RGB cvals, while skimage will resort to intensity cvals (i.e. 3x the same value as RGB). If cv2 is chosen and order is 2 or 4, it will automatically fall back to order 3.
  if library = torchvision: (see: RandomAffine) [Source]
  - degrees (sequence or number): Range of degrees to select from. If degrees is a number instead of a sequence like (min, max), the range of degrees will be (-degrees, +degrees). Set to 0 to deactivate rotations.
  - translate (tuple, optional): tuple of maximum absolute fractions for horizontal and vertical translations. For example, with translate=(a, b) the horizontal shift is randomly sampled in the range -img_width * a < dx < img_width * a and the vertical shift is randomly sampled in the range -img_height * b < dy < img_height * b. Will not translate by default.
  - scale (tuple, optional): scaling factor interval, e.g. (a, b), then scale is randomly sampled from the range a <= scale <= b. Will keep original scale by default.
  - shear (sequence or number, optional): Range of degrees to select from. If shear is a number, a shear parallel to the x axis in the range (-shear, +shear) will be applied. Else if shear is a sequence of 2 values, a shear parallel to the x axis in the range (shear[0], shear[1]) will be applied. Else if shear is a sequence of 4 values, an x-axis shear in (shear[0], shear[1]) and a y-axis shear in (shear[2], shear[3]) will be applied. Will not apply shear by default.
  - interpolation (InterpolationMode): Desired interpolation enum defined by torchvision.transforms.InterpolationMode. Default is InterpolationMode.NEAREST. If input is Tensor, only InterpolationMode.NEAREST and InterpolationMode.BILINEAR are supported. For backward compatibility, integer values (e.g. PIL.Image.NEAREST) are still acceptable.
  - fill (sequence or number): Pixel fill value for the area outside the transformed image. Default is 0. If given a number, the value is used for all bands respectively.
  - fillcolor (sequence or number, optional): deprecated argument, to be removed in v0.10.0. Please use the fill parameter instead.
  - resample (int, optional): deprecated argument, to be removed in v0.10.0. Please use the interpolation parameter instead.
  if library = keras: (see: apply_affine_transform) [Source]
  - x: 2D numpy array, single image.
  - theta: Rotation angle in degrees.
  - tx: Width shift.
  - ty: Height shift.
  - shear: Shear angle in degrees.
  - zx: Zoom in x direction.
  - zy: Zoom in y direction.
  - row_axis: Index of axis for rows in the input image.
  - col_axis: Index of axis for columns in the input image.
  - channel_axis: Index of axis for channels in the input image.
  - fill_mode: Points outside the boundaries of the input are filled according to the given mode (one of {'constant', 'nearest', 'reflect', 'wrap'}).
  - cval: Value used for points outside the boundaries of the input if mode='constant'.
  - order: int, order of interpolation.
- Targets:
image, mask, keypoints, bboxes
- Image types:
uint8, float32
e.g.
import beacon_aug as BA
aug = BA.Affine(p=1, mode="reflect", order=cv2.INTER_LINEAR, scale=[1, 1], translate_px=[0, 0], library="imgaug")
aug = BA.Affine(p=1, degrees=0, interpolation=torch_f.InterpolationMode.BILINEAR, shear=0, library="torchvision")
aug = BA.Affine(p=1, fill_mode="reflect", order=4, library="keras")
image_auged = aug(image=image)["image"]
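The note above that rotation happens around the image center (not the top-left corner) amounts to conjugating the rotation with a translation to the center. A self-contained sketch that builds a 2x3 affine matrix (illustrative only; the backends delegate to OpenCV/skimage warps):

```python
import math

def affine_matrix(angle_deg=0.0, scale=1.0, tx=0.0, ty=0.0, center=(0.0, 0.0)):
    """Build a 2x3 matrix that scales and rotates about `center`,
    then translates by (tx, ty)."""
    a = math.radians(angle_deg)
    c, s = scale * math.cos(a), scale * math.sin(a)
    cx, cy = center
    return [[c, -s, cx - c * cx + s * cy + tx],
            [s,  c, cy - s * cx - c * cy + ty]]

def apply_affine(m, x, y):
    """Apply a 2x3 affine matrix to a point (x, y)."""
    return (m[0][0] * x + m[0][1] * y + m[0][2],
            m[1][0] * x + m[1][1] * y + m[1][2])
```

Rotating the center point maps it to itself, which is exactly the "rotation around the center" behaviour the docs describe.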
- class beacon_aug.operators.AffineCv2(library=None, *args, **kwargs)¶
Bases: object
- library (str): flag for library. Should be one of: imgaug. Default: imgaug.
- Args:
  if library = imgaug: (see: AffineCv2) [Source]
  - scale (number or tuple of number or list of number or imgaug.parameters.StochasticParameter or dict {"x": number/tuple/list/StochasticParameter, "y": number/tuple/list/StochasticParameter}, optional):
    Scaling factor to use, where 1.0 denotes "no change" and 0.5 is zoomed out to 50 percent of the original size.
  - translate_percent (number or tuple of number or list of number or imgaug.parameters.StochasticParameter or dict {"x": number/tuple/list/StochasticParameter, "y": number/tuple/list/StochasticParameter}, optional):
    Translation as a fraction of the image height/width (x-translation, y-translation), where 0 denotes "no change" and 0.5 denotes "half of the axis size".
  - translate_px (int or tuple of int or list of int or imgaug.parameters.StochasticParameter or dict {"x": int/tuple/list/StochasticParameter, "y": int/tuple/list/StochasticParameter}, optional):
    Translation in pixels.
  - rotate (number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional):
    Rotation in degrees (NOT radians), i.e. expected value range is around [-360, 360]. Rotation happens around the center of the image, not the top left corner as in some other frameworks.
  - shear (number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional):
    Shear in degrees (NOT radians), i.e. expected value range is around [-360, 360].
  - order (int or list of int or str or list of str or imgaug.ALL or imgaug.parameters.StochasticParameter, optional):
    Interpolation order to use. Allowed are:
  - cval (number or tuple of number or list of number or imgaug.ALL or imgaug.parameters.StochasticParameter, optional):
    The constant value to use when filling in newly created pixels (e.g. translating by 1px to the right will create a new 1px-wide column of pixels on the left of the image). The value is only used when mode=constant. The expected value range is [0, 255] for uint8 images. It may be a float value.
  - mode (int or str or list of str or list of int or imgaug.ALL or imgaug.parameters.StochasticParameter, optional):
    Method to use when filling in newly created pixels. Same meaning as in OpenCV's border mode. Let abcdefgh be an image's content and | be an image boundary after which new pixels are filled in; then the valid modes and their behaviour are the following:
e.g.
import beacon_aug as BA
aug = BA.AffineCv2(library="imgaug")
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.AllChannelsCLAHE(library=None, *args, **kwargs)¶
Bases: object
- library (str): flag for library. Should be one of: imgaug. Default: imgaug.
- Args:
  if library = imgaug: (see: AllChannelsCLAHE) [Source]
  - clip_limit (number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional):
    See imgaug.augmenters.contrast.CLAHE.
  - tile_grid_size_px (int or tuple of int or list of int or imgaug.parameters.StochasticParameter or tuple of tuple of int or tuple of list of int or tuple of imgaug.parameters.StochasticParameter, optional):
    See imgaug.augmenters.contrast.CLAHE.
  - tile_grid_size_px_min (int, optional):
    See imgaug.augmenters.contrast.CLAHE.
  - per_channel (bool or float, optional):
    Whether to use the same value for all channels (False) or to sample a new value for each channel (True). If this value is a float p, then for p percent of all images per_channel will be treated as True, otherwise as False.
e.g.
import beacon_aug as BA
aug = BA.AllChannelsCLAHE(library="imgaug")
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.AllChannelsHistogramEqualization(library=None, *args, **kwargs)¶
Bases: object
- library (str): flag for library. Should be one of: imgaug. Default: imgaug.
- Args:
  if library = imgaug: (see: AllChannelsHistogramEqualization) [Source]
e.g.
import beacon_aug as BA
aug = BA.AllChannelsHistogramEqualization(library="imgaug")
image_auged = aug(image=image)["image"]
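Histogram equalization applied independently to every channel (which is what this operator does) can be sketched on a single flat channel via the cumulative histogram. An illustrative pure-Python version, not the backend's code:

```python
def equalize(pixels):
    """Histogram-equalize a flat list of uint8 values by remapping each
    value through the cumulative distribution function."""
    n = len(pixels)
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)
    if n == cdf_min:            # constant image: nothing to stretch
        return list(pixels)
    return [round((cdf[p] - cdf_min) * 255 / (n - cdf_min)) for p in pixels]
```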
- class beacon_aug.operators.AssertLambda(library=None, *args, **kwargs)¶
Bases: object
- library (str): flag for library. Should be one of: imgaug. Default: imgaug.
- Args:
  if library = imgaug: (see: AssertLambda) [Source]
  - func_images (None or callable, optional):
    The function to call for each batch of images. It must follow the form:
  - func_heatmaps (None or callable, optional):
    The function to call for each batch of heatmaps. It must follow the form:
  - func_segmentation_maps (None or callable, optional):
    The function to call for each batch of segmentation maps. It must follow the form:
  - func_keypoints (None or callable, optional):
    The function to call for each batch of keypoints. It must follow the form:
  - func_bounding_boxes (None or callable, optional):
    The function to call for each batch of bounding boxes. It must follow the form:
  - func_polygons (None or callable, optional):
    The function to call for each batch of polygons. It must follow the form:
  - func_line_strings (None or callable, optional):
    The function to call for each batch of line strings. It must follow the form:
e.g.
import beacon_aug as BA
aug = BA.AssertLambda(library="imgaug")
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.AssertShape(library=None, *args, **kwargs)¶
Bases: object
- library (str): flag for library. Should be one of: imgaug. Default: imgaug.
- Args:
  if library = imgaug: (see: AssertShape) [Source]
  - shape (tuple):
    The expected shape, given as a tuple. The number of entries in the tuple must match the number of dimensions, i.e. it must contain four entries for (N, H, W, C). If only a single entity is augmented, e.g. via augment_image(), then N is 1 in the input to this augmenter. Images that don't have a channel axis will automatically have one assigned, i.e. C is at least 1. For each component of the tuple one of the following datatypes may be used:
  - check_images (bool, optional):
    Whether to validate input images via the given shape.
  - check_heatmaps (bool, optional):
    Whether to validate input heatmaps via the given shape. The number of heatmaps will be verified as N. For each HeatmapsOnImage instance its array's height and width will be verified as H and W, but not the channel count.
  - check_segmentation_maps (bool, optional):
    Whether to validate input segmentation maps via the given shape. The number of segmentation maps will be verified as N. For each SegmentationMapOnImage instance its array's height and width will be verified as H and W, but not the channel count.
  - check_keypoints (bool, optional):
    Whether to validate input keypoints via the given shape. This will check (a) the number of keypoints and (b) for each KeypointsOnImage instance the .shape attribute, i.e. the shape of the corresponding image.
  - check_bounding_boxes (bool, optional):
    Whether to validate input bounding boxes via the given shape. This will check (a) the number of bounding boxes and (b) for each BoundingBoxesOnImage instance the .shape attribute, i.e. the shape of the corresponding image.
  - check_polygons (bool, optional):
    Whether to validate input polygons via the given shape. This will check (a) the number of polygons and (b) for each PolygonsOnImage instance the .shape attribute, i.e. the shape of the corresponding image.
  - check_line_strings (bool, optional):
    Whether to validate input line strings via the given shape. This will check (a) the number of line strings and (b) for each LineStringsOnImage instance the .shape attribute, i.e. the shape of the corresponding image.
e.g.
import beacon_aug as BA
aug = BA.AssertShape(library="imgaug")
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.Augmenter(library=None, *args, **kwargs)¶
Bases: object
- library (str): flag for library. Should be one of: imgaug. Default: imgaug.
- Args:
  if library = imgaug: (see: Augmenter) [Source]
e.g.
import beacon_aug as BA
aug = BA.Augmenter(library="imgaug")
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.AutoAugment(library=None, *args, **kwargs)¶
Bases: object
- library (str): flag for library. Should be one of: torchvision. Default: torchvision.
- Args:
  if library = torchvision: (see: AutoAugment) [Source]
  - policy (AutoAugmentPolicy): Desired policy enum defined by torchvision.transforms.autoaugment.AutoAugmentPolicy. Default is AutoAugmentPolicy.IMAGENET.
  - interpolation (InterpolationMode): Desired interpolation enum defined by torchvision.transforms.InterpolationMode. Default is InterpolationMode.NEAREST. If input is Tensor, only InterpolationMode.NEAREST and InterpolationMode.BILINEAR are supported.
  - fill (sequence or number, optional): Pixel fill value for the area outside the transformed image. If given a number, the value is used for all bands respectively.
e.g.
import beacon_aug as BA
aug = BA.AutoAugment(library="torchvision")
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.AutoAugmentPolicy(library=None, *args, **kwargs)¶
Bases: object
- library (str): flag for library. Should be one of: torchvision. Default: torchvision.
- Args:
  if library = torchvision: (see: AutoAugmentPolicy) [Source]
  AutoAugment policies learned on different datasets. Available policies are IMAGENET, CIFAR10 and SVHN.
e.g.
import beacon_aug as BA
aug = BA.AutoAugmentPolicy(library="torchvision")
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.Autocontrast(library=None, *args, **kwargs)¶
Bases: object
- library (str): flag for library. Should be one of: imgaug, torchvision. Default: imgaug.
- Args:
  if library = imgaug: (see: Autocontrast) [Source]
  - cutoff (int or tuple of int or list of int or imgaug.parameters.StochasticParameter, optional):
    Percentage of values to cut off from the low and high end of each image's histogram, before stretching it to [0, 255].
  - per_channel (bool or float, optional):
    Whether to use the same value for all channels (False) or to sample a new value for each channel (True). If this value is a float p, then for p percent of all images per_channel will be treated as True, otherwise as False.
  if library = torchvision: (see: autocontrast) [Source]
  - img (PIL Image or Tensor): Image on which autocontrast is applied. If img is a torch Tensor, it is expected to be in [..., 1 or 3, H, W] format, where ... means it can have an arbitrary number of leading dimensions. If img is a PIL Image, it is expected to be in mode "L" or "RGB".
e.g.
import beacon_aug as BA
aug = BA.Autocontrast(p=1, cutoff=[10, 20], library="imgaug")
image_auged = aug(image=image)["image"]
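The cutoff parameter above discards that percentage of values from each end of the histogram before stretching the remainder to [0, 255]. A simplified sketch on a flat channel (illustrative only; PIL/imgaug operate on per-channel histograms):

```python
def autocontrast(pixels, cutoff=0):
    """Linearly stretch a flat list of uint8 values to [0, 255], ignoring
    the lowest/highest `cutoff` percent of values when picking the bounds."""
    ordered = sorted(pixels)
    k = int(len(ordered) * cutoff / 100)
    lo, hi = ordered[k], ordered[len(ordered) - 1 - k]
    if hi <= lo:                      # degenerate range: leave unchanged
        return list(pixels)
    return [min(255, max(0, round((p - lo) * 255 / (hi - lo)))) for p in pixels]
```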
- class beacon_aug.operators.AveragePooling(library=None, *args, **kwargs)¶
Bases: object
- library (str): flag for library. Should be one of: imgaug. Default: imgaug.
- Args:
  if library = imgaug: (see: AveragePooling) [Source]
  - kernel_size (int or tuple of int or list of int or imgaug.parameters.StochasticParameter or tuple of tuple of int or tuple of list of int or tuple of imgaug.parameters.StochasticParameter, optional):
    The kernel size of the pooling operation.
  - keep_size (bool, optional):
    After pooling, the result image will usually have a different height/width compared to the original input image. If this parameter is set to True, then the pooled image will be resized to the input image's size, i.e. the augmenter's output shape is always identical to the input shape.
e.g.
import beacon_aug as BA
aug = BA.AveragePooling(library="imgaug")
image_auged = aug(image=image)["image"]
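Average pooling with a k x k kernel, without the keep_size resize, can be sketched on a 2D list (an illustrative re-implementation, assuming dimensions divisible by k):

```python
def average_pool(image, k=2):
    """Average-pool a 2D list with a k x k kernel. Height and width are
    assumed divisible by k; without keep_size the output is smaller."""
    h, w = len(image), len(image[0])
    return [[sum(image[r + i][c + j] for i in range(k) for j in range(k)) / (k * k)
             for c in range(0, w, k)]
            for r in range(0, h, k)]
```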
- class beacon_aug.operators.BilateralBlur(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
. Default: imgaug.
- Args:
if library =
imgaug
: (see:BilateralBlur
) [Source]- d( int or tuple of int or list of int or imgaug.parameters.StochasticParameter, optional):
Diameter of each pixel neighborhood with value range
[1 .. inf)
. High values for d lead to significantly worse performance. Values equal or less than10
seem to be good. Use<5
for real-time applications.- sigma_color( number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional):
Filter sigma in the color space with value range
[1, inf)
. A large value of the parameter means that farther colors within the pixel neighborhood (see sigma_space) will be mixed together, resulting in larger areas of semi-equal color.- sigma_space( number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional):
Filter sigma in the coordinate space with value range
[1, inf)
. A large value of the parameter means that farther pixels will influence each other as long as their colors are close enough (see sigma_color).
e.g.
import beacon_aug as BA
aug = BA.BilateralBlur(p=1, library="imgaug")  # pass operator-specific arguments documented above as needed
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.BlendAlpha(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
. Default: imgaug.
- Args:
if library =
imgaug
: (see:BlendAlpha
) [Source]- factor( number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional):
Opacity of the results of the foreground branch. Values close to
0.0
mean that the results from the background branch (see parameter background) make up most of the final image.- foreground( None or imgaug.augmenters.meta.Augmenter or iterable of imgaug.augmenters.meta.Augmenter, optional):
Augmenter(s) that make up the foreground branch. High alpha values will show this branch’s results.
- background( None or imgaug.augmenters.meta.Augmenter or iterable of imgaug.augmenters.meta.Augmenter, optional):
Augmenter(s) that make up the background branch. Low alpha values will show this branch’s results.
- per_channel( bool or float or imgaug.parameters.StochasticParameter, optional):
Whether to use the same factor for all channels (
False
) or to sample a new value for each channel (True
). If this value is a floatp
, then forp
percent of all images per_channel will be treated as True, otherwise as False.
e.g.
import beacon_aug as BA
aug = BA.BlendAlpha(p=1, library="imgaug")  # pass operator-specific arguments documented above as needed
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.BlendAlphaBoundingBoxes(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
. Default: imgaug.
- Args:
if library =
imgaug
: (see:BlendAlphaBoundingBoxes
) [Source]- labels( None or str or list of str or imgaug.parameters.StochasticParameter):
See
BoundingBoxesMaskGen
.- foreground( None or imgaug.augmenters.meta.Augmenter or iterable of imgaug.augmenters.meta.Augmenter, optional):
Augmenter(s) that make up the foreground branch. High alpha values will show this branch’s results.
- background( None or imgaug.augmenters.meta.Augmenter or iterable of imgaug.augmenters.meta.Augmenter, optional):
Augmenter(s) that make up the background branch. Low alpha values will show this branch’s results.
- nb_sample_labels( None or tuple of int or list of int or imgaug.parameters.StochasticParameter, optional):
See
BoundingBoxesMaskGen
.
e.g.
import beacon_aug as BA
aug = BA.BlendAlphaBoundingBoxes(p=1, library="imgaug")  # pass operator-specific arguments documented above as needed
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.BlendAlphaCheckerboard(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
. Default: imgaug.
- Args:
if library =
imgaug
: (see:BlendAlphaCheckerboard
) [Source]- nb_rows( int or tuple of int or list of int or imgaug.parameters.StochasticParameter):
Number of rows of the checkerboard. See
CheckerboardMaskGen
for details.- nb_cols( int or tuple of int or list of int or imgaug.parameters.StochasticParameter):
Number of columns of the checkerboard. Analogous to nb_rows. See
CheckerboardMaskGen
for details.- foreground( None or imgaug.augmenters.meta.Augmenter or iterable of imgaug.augmenters.meta.Augmenter, optional):
Augmenter(s) that make up the foreground branch. High alpha values will show this branch’s results.
- background( None or imgaug.augmenters.meta.Augmenter or iterable of imgaug.augmenters.meta.Augmenter, optional):
Augmenter(s) that make up the background branch. Low alpha values will show this branch’s results.
e.g.
import beacon_aug as BA
aug = BA.BlendAlphaCheckerboard(p=1, library="imgaug")  # pass operator-specific arguments documented above as needed
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.BlendAlphaElementwise(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
. Default: imgaug.
- Args:
if library =
imgaug
: (see:BlendAlphaElementwise
) [Source]- factor( number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional):
Opacity of the results of the foreground branch. Values close to
0.0
mean that the results from the background branch (see parameter background) make up most of the final image.- foreground( None or imgaug.augmenters.meta.Augmenter or iterable of imgaug.augmenters.meta.Augmenter, optional):
Augmenter(s) that make up the foreground branch. High alpha values will show this branch’s results.
- background( None or imgaug.augmenters.meta.Augmenter or iterable of imgaug.augmenters.meta.Augmenter, optional):
Augmenter(s) that make up the background branch. Low alpha values will show this branch’s results.
- per_channel( bool or float, optional):
Whether to use the same factor for all channels (
False
) or to sample a new value for each channel (True
). If this value is a floatp
, then forp
percent of all images per_channel will be treated as True, otherwise as False.
e.g.
import beacon_aug as BA
aug = BA.BlendAlphaElementwise(p=1, library="imgaug")  # pass operator-specific arguments documented above as needed
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.BlendAlphaFrequencyNoise(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
. Default: imgaug.
- Args:
if library =
imgaug
: (see:BlendAlphaFrequencyNoise
) [Source]- exponent( number or tuple of number of list of number or imgaug.parameters.StochasticParameter, optional):
Exponent to use when scaling in the frequency domain. Sane values are in the range
-4
(large blobs) to4
(small patterns). To generate cloud-like structures, use roughly-2
.- foreground( None or imgaug.augmenters.meta.Augmenter or iterable of imgaug.augmenters.meta.Augmenter, optional):
Augmenter(s) that make up the foreground branch. High alpha values will show this branch’s results.
- background( None or imgaug.augmenters.meta.Augmenter or iterable of imgaug.augmenters.meta.Augmenter, optional):
Augmenter(s) that make up the background branch. Low alpha values will show this branch’s results.
- per_channel( bool or float, optional):
Whether to use the same factor for all channels (
False
) or to sample a new value for each channel (True
). If this value is a floatp
, then forp
percent of all images per_channel will be treated asTrue
, otherwise asFalse
.- size_px_max( int or tuple of int or list of int or imgaug.parameters.StochasticParameter, optional):
The noise is generated in a low resolution environment. This parameter defines the maximum size of that environment (in pixels). The environment is initialized at the same size as the input image and then downscaled, so that no side exceeds size_px_max (aspect ratio is kept).
- upscale_method( None or imgaug.ALL or str or list of str or imgaug.parameters.StochasticParameter, optional):
After generating the noise maps in low resolution environments, they have to be upscaled to the input image size. This parameter controls the upscaling method.
- iterations( int or tuple of int or list of int or imgaug.parameters.StochasticParameter, optional):
How often to repeat the simplex noise generation process per image.
- aggregation_method( imgaug.ALL or str or list of str or imgaug.parameters.StochasticParameter, optional):
The noise maps (from each iteration) are combined to one noise map using an aggregation process. This parameter defines the method used for that process. Valid methods are
min
,max
oravg
, where ‘min’ combines the noise maps by taking the (elementwise) minimum over all iteration’s results,max
the (elementwise) maximum andavg
the (elementwise) average.- sigmoid( bool or number, optional):
Whether to apply a sigmoid function to the final noise maps, resulting in maps that have more extreme values (close to
0.0
or1.0
).- sigmoid_thresh( None or number or tuple of number or imgaug.parameters.StochasticParameter, optional):
Threshold of the sigmoid, when applied. Thresholds above zero (e.g.
5.0
) will move the saddle point towards the right, leading to more values close to0.0
.
e.g.
import beacon_aug as BA
aug = BA.BlendAlphaFrequencyNoise(p=1, library="imgaug")  # pass operator-specific arguments documented above as needed
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.BlendAlphaHorizontalLinearGradient(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
. Default: imgaug.
- Args:
if library =
imgaug
: (see:BlendAlphaHorizontalLinearGradient
) [Source]- foreground( None or imgaug.augmenters.meta.Augmenter or iterable of imgaug.augmenters.meta.Augmenter, optional):
Augmenter(s) that make up the foreground branch. High alpha values will show this branch’s results.
- background( None or imgaug.augmenters.meta.Augmenter or iterable of imgaug.augmenters.meta.Augmenter, optional):
Augmenter(s) that make up the background branch. Low alpha values will show this branch’s results.
- min_value( number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional):
See
HorizontalLinearGradientMaskGen
.- max_value( number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional):
See
HorizontalLinearGradientMaskGen
.- start_at( number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional):
See
HorizontalLinearGradientMaskGen
.- end_at( number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional):
See
HorizontalLinearGradientMaskGen
.
e.g.
import beacon_aug as BA
aug = BA.BlendAlphaHorizontalLinearGradient(p=1, library="imgaug")  # pass operator-specific arguments documented above as needed
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.BlendAlphaMask(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
. Default: imgaug.
- Args:
if library =
imgaug
: (see:BlendAlphaMask
) [Source]- mask_generator( IBatchwiseMaskGenerator):
A generator that will be queried per image to generate a mask.
- foreground( None or imgaug.augmenters.meta.Augmenter or iterable of imgaug.augmenters.meta.Augmenter, optional):
Augmenter(s) that make up the foreground branch. High alpha values will show this branch’s results.
- background( None or imgaug.augmenters.meta.Augmenter or iterable of imgaug.augmenters.meta.Augmenter, optional):
Augmenter(s) that make up the background branch. Low alpha values will show this branch’s results.
e.g.
import beacon_aug as BA
aug = BA.BlendAlphaMask(p=1, library="imgaug")  # pass operator-specific arguments documented above as needed
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.BlendAlphaRegularGrid(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
. Default: imgaug.
- Args:
if library =
imgaug
: (see:BlendAlphaRegularGrid
) [Source]- nb_rows( int or tuple of int or list of int or imgaug.parameters.StochasticParameter):
Number of rows of the checkerboard. See
CheckerboardMaskGen
for details.- nb_cols( int or tuple of int or list of int or imgaug.parameters.StochasticParameter):
Number of columns of the checkerboard. Analogous to nb_rows. See
CheckerboardMaskGen
for details.- foreground( None or imgaug.augmenters.meta.Augmenter or iterable of imgaug.augmenters.meta.Augmenter, optional):
Augmenter(s) that make up the foreground branch. High alpha values will show this branch’s results.
- background( None or imgaug.augmenters.meta.Augmenter or iterable of imgaug.augmenters.meta.Augmenter, optional):
Augmenter(s) that make up the background branch. Low alpha values will show this branch’s results.
- alpha( number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional):
Alpha value of each cell.
e.g.
import beacon_aug as BA
aug = BA.BlendAlphaRegularGrid(p=1, library="imgaug")  # pass operator-specific arguments documented above as needed
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.BlendAlphaSegMapClassIds(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
. Default: imgaug.
- Args:
if library =
imgaug
: (see:BlendAlphaSegMapClassIds
) [Source]- class_ids( int or tuple of int or list of int or imgaug.parameters.StochasticParameter):
See
SegMapClassIdsMaskGen
.- foreground( None or imgaug.augmenters.meta.Augmenter or iterable of imgaug.augmenters.meta.Augmenter, optional):
Augmenter(s) that make up the foreground branch. High alpha values will show this branch’s results.
- background( None or imgaug.augmenters.meta.Augmenter or iterable of imgaug.augmenters.meta.Augmenter, optional):
Augmenter(s) that make up the background branch. Low alpha values will show this branch’s results.
- nb_sample_classes( None or tuple of int or list of int or imgaug.parameters.StochasticParameter, optional):
See
SegMapClassIdsMaskGen
.
e.g.
import beacon_aug as BA
aug = BA.BlendAlphaSegMapClassIds(p=1, library="imgaug")  # pass operator-specific arguments documented above as needed
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.BlendAlphaSimplexNoise(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
. Default: imgaug.
- Args:
if library =
imgaug
: (see:BlendAlphaSimplexNoise
) [Source]- foreground( None or imgaug.augmenters.meta.Augmenter or iterable of imgaug.augmenters.meta.Augmenter, optional):
Augmenter(s) that make up the foreground branch. High alpha values will show this branch’s results.
- background( None or imgaug.augmenters.meta.Augmenter or iterable of imgaug.augmenters.meta.Augmenter, optional):
Augmenter(s) that make up the background branch. Low alpha values will show this branch’s results.
- per_channel( bool or float, optional):
Whether to use the same factor for all channels (
False
) or to sample a new value for each channel (True
). If this value is a floatp
, then forp
percent of all images per_channel will be treated asTrue
, otherwise asFalse
.- size_px_max( int or tuple of int or list of int or imgaug.parameters.StochasticParameter, optional):
The simplex noise is always generated in a low resolution environment. This parameter defines the maximum size of that environment (in pixels). The environment is initialized at the same size as the input image and then downscaled, so that no side exceeds size_px_max (aspect ratio is kept).
- upscale_method( None or imgaug.ALL or str or list of str or imgaug.parameters.StochasticParameter, optional):
After generating the noise maps in low resolution environments, they have to be upscaled to the input image size. This parameter controls the upscaling method.
- iterations( int or tuple of int or list of int or imgaug.parameters.StochasticParameter, optional):
How often to repeat the simplex noise generation process per image.
- aggregation_method( imgaug.ALL or str or list of str or imgaug.parameters.StochasticParameter, optional):
The noise maps (from each iteration) are combined to one noise map using an aggregation process. This parameter defines the method used for that process. Valid methods are
min
,max
oravg
, wheremin
combines the noise maps by taking the (elementwise) minimum over all iteration’s results,max
the (elementwise) maximum andavg
the (elementwise) average.- sigmoid( bool or number, optional):
Whether to apply a sigmoid function to the final noise maps, resulting in maps that have more extreme values (close to 0.0 or 1.0).
- sigmoid_thresh( None or number or tuple of number or imgaug.parameters.StochasticParameter, optional):
Threshold of the sigmoid, when applied. Thresholds above zero (e.g.
5.0
) will move the saddle point towards the right, leading to more values close to 0.0.
e.g.
import beacon_aug as BA
aug = BA.BlendAlphaSimplexNoise(p=1, library="imgaug")  # pass operator-specific arguments documented above as needed
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.BlendAlphaSomeColors(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
. Default: imgaug.
- Args:
if library =
imgaug
: (see:BlendAlphaSomeColors
) [Source]- foreground( None or imgaug.augmenters.meta.Augmenter or iterable of imgaug.augmenters.meta.Augmenter, optional):
Augmenter(s) that make up the foreground branch. High alpha values will show this branch’s results.
- background( None or imgaug.augmenters.meta.Augmenter or iterable of imgaug.augmenters.meta.Augmenter, optional):
Augmenter(s) that make up the background branch. Low alpha values will show this branch’s results.
- nb_bins( int or tuple of int or list of int or imgaug.parameters.StochasticParameter, optional):
See
SomeColorsMaskGen
.- smoothness( number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional):
See
SomeColorsMaskGen
.- alpha( number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional):
See
SomeColorsMaskGen
.- rotation_deg( number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional):
See
SomeColorsMaskGen
.
e.g.
import beacon_aug as BA
aug = BA.BlendAlphaSomeColors(p=1, library="imgaug")  # pass operator-specific arguments documented above as needed
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.BlendAlphaVerticalLinearGradient(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
. Default: imgaug.
- Args:
if library =
imgaug
: (see:BlendAlphaVerticalLinearGradient
) [Source]- foreground( None or imgaug.augmenters.meta.Augmenter or iterable of imgaug.augmenters.meta.Augmenter, optional):
Augmenter(s) that make up the foreground branch. High alpha values will show this branch’s results.
- background( None or imgaug.augmenters.meta.Augmenter or iterable of imgaug.augmenters.meta.Augmenter, optional):
Augmenter(s) that make up the background branch. Low alpha values will show this branch’s results.
- min_value( number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional):
See
VerticalLinearGradientMaskGen
.- max_value( number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional):
See
VerticalLinearGradientMaskGen
.- start_at( number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional):
See
VerticalLinearGradientMaskGen
.- end_at( number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional):
See
VerticalLinearGradientMaskGen
.
e.g.
import beacon_aug as BA
aug = BA.BlendAlphaVerticalLinearGradient(p=1, library="imgaug")  # pass operator-specific arguments documented above as needed
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.Blur(library=None, *args, **kwargs)¶
Bases:
object
Blur the input image using a random-sized kernel.
- Args:
- library (str): flag for library. Should be one of:
albumentations
,imgaug
,augly
. Default:
albumentations
.- blur_limit (int, (int, int)): maximum kernel size for blurring the input image.
Should be in range [3, inf). Default: (3, 7).
p (float): probability of applying the transform. Default: 0.5.
if library =
imgaug
: (see:AverageBlur
) [Source]- k( int or tuple of int or tuple of tuple of int or imgaug.parameters.StochasticParameter or tuple of StochasticParameter, optional):
Kernel size to use.
if library =
augly
: (see:RandomBlur
)
- Targets:
image
- Image types:
uint8, float32
e.g.
import beacon_aug as BA
aug = BA.Blur(p=1, blur_limit=[3, 7], library="albumentations")
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.BoundingBoxesMaskGen(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
. Default: imgaug.
- Args:
if library =
imgaug
: (see:BoundingBoxesMaskGen
) [Source]- labels( None or str or list of str or imgaug.parameters.StochasticParameter):
Labels of bounding boxes to select for.
- nb_sample_labels( None or tuple of int or list of int or imgaug.parameters.StochasticParameter, optional):
Number of labels to sample (with replacement) per image. As sampling happens with replacement, fewer unique labels may be sampled.
e.g.
import beacon_aug as BA
aug = BA.BoundingBoxesMaskGen(p=1, library="imgaug")  # pass operator-specific arguments documented above as needed
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.Brightness(library=None, *args, **kwargs)¶
Bases:
object
Randomly change brightness of the input image.
- Args:
- library (str): flag for library. Should be one of:
albumentations
,imgaug
,torchvision
,keras
,augly
,imagenet_c
. Default:
albumentations
.- limit ((float, float) or float): factor range for changing brightness.
If limit is a single float, the range will be (-limit, limit). Default: (-0.2, 0.2).
p (float): probability of applying the transform. Default: 0.5.
if library =
imgaug
: (see:AddToBrightness
) [Source]- add( number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional):
See
Add
.- to_colorspace( imgaug.ALL or str or list of str or imgaug.parameters.StochasticParameter, optional):
See
WithBrightnessChannels
.
if library =
torchvision
: (see:ColorJitter
) [Source]- brightness (float or tuple of float (min, max)): How much to jitter brightness.
brightness_factor is chosen uniformly from [max(0, 1 - brightness), 1 + brightness] or the given [min, max]. Should be non-negative numbers.
- contrast (float or tuple of float (min, max)): How much to jitter contrast.
contrast_factor is chosen uniformly from [max(0, 1 - contrast), 1 + contrast] or the given [min, max]. Should be non-negative numbers.
- saturation (float or tuple of float (min, max)): How much to jitter saturation.
saturation_factor is chosen uniformly from [max(0, 1 - saturation), 1 + saturation] or the given [min, max]. Should be non-negative numbers.
- hue (float or tuple of float (min, max)): How much to jitter hue.
hue_factor is chosen uniformly from [-hue, hue] or the given [min, max]. Should have 0 <= hue <= 0.5 or -0.5 <= min <= max <= 0.5.
if library =
keras
: (see:apply_brightness_shift
) [Source]x: Input tensor. Must be 3D. brightness: Float. The new brightness value. channel_axis: Index of axis for channels in the input tensor.
if library =
augly
: (see:RandomBrightness
)
if library =
imagenet_c
: (see:imagenet_c
)
- Targets:
image
- Image types:
uint8, float32
e.g.
import beacon_aug as BA
aug = BA.Brightness(p=1, limit=[-0.2, 0.2], library="albumentations")
aug = BA.Brightness(p=1, corruption_name="brightness", severity=1, library="imagenet_c")
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.CLAHE(library=None, *args, **kwargs)¶
Bases:
object
Apply Contrast Limited Adaptive Histogram Equalization to the input image.
- Args:
- library (str): flag for library. Should be one of:
albumentations
,imgaug
,mmcv
. Default:
albumentations
.- clip_limit (float or (float, float)): upper threshold value for contrast limiting.
If clip_limit is a single float value, the range will be (1, clip_limit). Default: (1, 4).
tile_grid_size ((int, int)): size of grid for histogram equalization. Default: (8, 8).
p (float): probability of applying the transform. Default: 0.5.
if library =
imgaug
: (see:CLAHE
) [Source]- clip_limit( number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional):
Clipping limit. Higher values result in stronger contrast. OpenCV uses a default of
40
, though values around5
seem to already produce decent contrast.- tile_grid_size_px( int or tuple of int or list of int or imgaug.parameters.StochasticParameter or tuple of tuple of int or tuple of list of int or tuple of imgaug.parameters.StochasticParameter, optional):
Kernel size, i.e. size of each local neighbourhood in pixels.
- tile_grid_size_px_min( int, optional):
Minimum kernel size in px, per axis. If the sampling results in a value lower than this minimum, it will be clipped to this value.
- to_colorspace( {“Lab”, “HLS”, “HSV”}, optional):
Colorspace in which to perform CLAHE. For
Lab
, CLAHE will only be applied to the first channel (L
), forHLS
to the second (L
) and forHSV
to the third (V
). To apply CLAHE to all channels of an input image (without colorspace conversion), seeimgaug.augmenters.contrast.AllChannelsCLAHE
.
if library =
mmcv
: (see:CLAHE
) [Source]clip_limit (float): Threshold for contrast limiting. Default: 40.0. tile_grid_size (tuple[int]): Size of grid for histogram equalization.
Input image will be divided into equally sized rectangular tiles. It defines the number of tiles in row and column. Default: (8, 8).
- Targets:
image
- Image types:
uint8
e.g.
import beacon_aug as BA
aug = BA.CLAHE(p=1, library="albumentations")  # pass operator-specific arguments documented above as needed
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.Canny(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
. Default: imgaug.
- Args:
if library =
imgaug
: (see:Canny
) [Source]- alpha( number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional):
Blending factor to use in alpha blending. A value close to 1.0 means that only the edge image is visible. A value close to 0.0 means that only the original image is visible. A value close to 0.5 means that the images are merged according to 0.5*image + 0.5*edge_image. If a sample from this parameter is 0, no action will be performed for the corresponding image.
- hysteresis_thresholds( number or tuple of number or list of number or imgaug.parameters.StochasticParameter or tuple of tuple of number or tuple of list of number or tuple of imgaug.parameters.StochasticParameter, optional):
Min and max values to use in hysteresis thresholding. (This parameter seems to have little effect on the results.) Either a single parameter or a tuple of two parameters. If a single parameter is provided, the sampling happens once for all images with (N,2) samples being requested from the parameter, where each first value denotes the hysteresis minimum and each second the maximum. If a tuple of two parameters is provided, one sampling of (N,) values is independently performed per parameter (first parameter: hysteresis minimum, second: hysteresis maximum).
- sobel_kernel_size( int or tuple of int or list of int or imgaug.parameters.StochasticParameter, optional):
Kernel size of the sobel operator initially applied to each image. This corresponds to
apertureSize
incv2.Canny()
. If a sample from this parameter is<=1
, no action will be performed for the corresponding image. The maximum for this parameter is7
(inclusive). Higher values are not accepted by OpenCV. If an even valuev
is sampled, it is automatically changed tov-1
.- colorizer( None or imgaug.augmenters.edges.IBinaryImageColorizer, optional):
A strategy to convert binary edge images to color images. If this is
None
, an instance ofRandomColorBinaryImageColorizer
is created, which means that each edge image is converted into anuint8
image, where edge and non-edge pixels each have a different color that was uniformly randomly sampled from the space of alluint8
colors.
e.g.
import beacon_aug as BA
aug = BA.Canny(p=1, library="imgaug")  # pass operator-specific arguments documented above as needed
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.Cartoon(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
. Default: imgaug.
- Args:
if library =
imgaug
: (see:Cartoon
) [Source]- blur_ksize( number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional):
Median filter kernel size. See
stylize_cartoon()
for details.- segmentation_size( number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional):
Mean-Shift segmentation size multiplier. See
stylize_cartoon()
for details.- saturation( number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional):
Saturation multiplier. See
stylize_cartoon()
for details.- edge_prevalence( number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional):
Multiplier for the prevalence of edges. See
stylize_cartoon()
for details.
e.g.
import beacon_aug as BA
aug = BA.Cartoon(p=1, library="imgaug")  # pass operator-specific arguments documented above as needed
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.CenterCrop(library=None, *args, **kwargs)¶
Bases:
object
Crop the central part of the input.
- Args:
- library (str): flag for library. Should be one of:
albumentations
,torchvision
. Default:
albumentations
.
height (int): height of the crop.
width (int): width of the crop.
p (float): probability of applying the transform. Default: 1.
if library =
torchvision
: (see:CenterCrop
) [Source]- size (sequence or int): Desired output size of the crop. If size is an
int instead of sequence like (h, w), a square crop (size, size) is made. If provided a sequence of length 1, it will be interpreted as (size[0], size[0]).
- Targets:
image, mask, bboxes, keypoints
- Image types:
uint8, float32
- Note:
It is recommended to use uint8 images as input. Otherwise the operation will require internal conversion float32 -> uint8 -> float32 that causes worse performance.
e.g.
import beacon_aug as BA
aug = BA.CenterCrop(p=1, height=64, width=64, library="albumentations")
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.CenterCropToAspectRatio(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
. Default: imgaug.
- Args:
if library =
imgaug
: (see:CenterCropToAspectRatio
) [Source]- aspect_ratio( number):
See
CropToAspectRatio.__init__()
.
e.g.
import beacon_aug as BA
aug = BA.CenterCropToAspectRatio(p=1, library="imgaug")  # pass operator-specific arguments documented above as needed
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.CenterCropToFixedSize(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
. Default: imgaug.
- Args:
if library =
imgaug
: (see:CropToFixedSize
) [Source]- width( int or None):
Crop images down to this maximum width. If
None
, image widths will not be altered.- height( int or None):
Crop images down to this maximum height. If
None
, image heights will not be altered.- position( {‘uniform’, ‘normal’, ‘center’, ‘left-top’, ‘left-center’, ‘left-bottom’, ‘center-top’, ‘center-center’, ‘center-bottom’, ‘right-top’, ‘right-center’, ‘right-bottom’} or tuple of float or StochasticParameter or tuple of StochasticParameter, optional):
Sets the center point of the cropping, which determines how the required cropping amounts are distributed to each side. For a
tuple
(a, b)
, botha
andb
are expected to be in range[0.0, 1.0]
and describe the fraction of cropping applied to the left/right (low/high values fora
) and the fraction of cropping applied to the top/bottom (low/high values forb
). A cropping position at(0.5, 0.5)
would be the center of the image and distribute the cropping equally over all sides. A cropping position at(1.0, 0.0)
would be the right-top and would apply 100% of the required cropping to the right and top sides of the image.
e.g.
import beacon_aug as BA
aug = BA.CenterCropToFixedSize(p=1, library="imgaug")  # pass operator-specific arguments documented above as needed
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.CenterCropToMultiplesOf(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
. Default: imgaug.
- Args:
if library =
imgaug
: (see:CenterCropToMultiplesOf
) [Source]- width_multiple( int or None):
See
CropToMultiplesOf.__init__()
.- height_multiple( int or None):
See
CropToMultiplesOf.__init__()
.
e.g.
import beacon_aug as BA
aug = BA.CenterCropToMultiplesOf(p=1, width_multiple=32, height_multiple=32, library="imgaug")  # example values
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.CenterCropToPowersOf(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
. Default: imgaug.
- Args:
if library =
imgaug
: (see:CenterCropToPowersOf
) [Source]- width_base( int or None):
See
CropToPowersOf.__init__()
.- height_base( int or None):
See
CropToPowersOf.__init__()
.
e.g.
import beacon_aug as BA
aug = BA.CenterCropToPowersOf(p=1, width_base=2, height_base=2, library="imgaug")  # example values
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.CenterCropToSquare(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
. Default: imgaug.
- Args:
if library =
imgaug
: (see:CenterCropToSquare
) [Source]
e.g.
import beacon_aug as BA
aug = BA.CenterCropToSquare(p=1, library="imgaug")
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.CenterPadToAspectRatio(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
. Default: imgaug.
- Args:
if library =
imgaug
: (see:CenterPadToAspectRatio
) [Source]- aspect_ratio( number):
See
PadToAspectRatio.__init__()
.- pad_mode( imgaug.ALL or str or list of str or imgaug.parameters.StochasticParameter, optional):
See
__init__()
.- pad_cval( number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional):
See
__init__()
.
e.g.
import beacon_aug as BA
aug = BA.CenterPadToAspectRatio(p=1, aspect_ratio=1.0, library="imgaug")  # example values
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.CenterPadToFixedSize(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
. Default: imgaug.
- Args:
if library =
imgaug
: (see:PadToFixedSize
) [Source]- width( int or None):
Pad images up to this minimum width. If
None
, image widths will not be altered.- height( int or None):
Pad images up to this minimum height. If
None
, image heights will not be altered.- pad_mode( imgaug.ALL or str or list of str or imgaug.parameters.StochasticParameter, optional):
See
__init__()
.- pad_cval( number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional):
See
__init__()
.- position( {‘uniform’, ‘normal’, ‘center’, ‘left-top’, ‘left-center’, ‘left-bottom’, ‘center-top’, ‘center-center’, ‘center-bottom’, ‘right-top’, ‘right-center’, ‘right-bottom’} or tuple of float or StochasticParameter or tuple of StochasticParameter, optional):
Sets the center point of the padding, which determines how the required padding amounts are distributed to each side. For a
tuple
(a, b)
, botha
andb
are expected to be in range[0.0, 1.0]
and describe the fraction of padding applied to the left/right (low/high values fora
) and the fraction of padding applied to the top/bottom (low/high values forb
). A padding position at(0.5, 0.5)
would be the center of the image and distribute the padding equally to all sides. A padding position at(0.0, 1.0)
would be the left-bottom and would apply 100% of the required padding to the bottom and left sides of the image so that the bottom left corner becomes more and more the new image center (depending on how much is padded).
e.g.
import beacon_aug as BA
aug = BA.CenterPadToFixedSize(p=1, width=128, height=128, library="imgaug")  # example values
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.CenterPadToMultiplesOf(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
. Default: imgaug.
- Args:
if library =
imgaug
: (see:CenterPadToMultiplesOf
) [Source]- width_multiple( int or None):
See
PadToMultiplesOf.__init__()
.- height_multiple( int or None):
See
PadToMultiplesOf.__init__()
.- pad_mode( imgaug.ALL or str or list of str or imgaug.parameters.StochasticParameter, optional):
See
__init__()
.- pad_cval( number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional):
See
__init__()
.
e.g.
import beacon_aug as BA
aug = BA.CenterPadToMultiplesOf(p=1, width_multiple=32, height_multiple=32, library="imgaug")  # example values
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.CenterPadToPowersOf(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
. Default: imgaug.
- Args:
if library =
imgaug
: (see:CenterPadToPowersOf
) [Source]- width_base( int or None):
See
PadToPowersOf.__init__()
.- height_base( int or None):
See
PadToPowersOf.__init__()
.- pad_mode( imgaug.ALL or str or list of str or imgaug.parameters.StochasticParameter, optional):
See
__init__()
.- pad_cval( number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional):
See
__init__()
.
e.g.
import beacon_aug as BA
aug = BA.CenterPadToPowersOf(p=1, width_base=2, height_base=2, library="imgaug")  # example values
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.CenterPadToSquare(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
. Default: imgaug.
- Args:
if library =
imgaug
: (see:CenterPadToSquare
) [Source]- pad_mode( imgaug.ALL or str or list of str or imgaug.parameters.StochasticParameter, optional):
See
__init__()
.- pad_cval( number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional):
See
__init__()
.
e.g.
import beacon_aug as BA
aug = BA.CenterPadToSquare(p=1, library="imgaug")
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.ChangeAspectRatio(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
augly
. Default: augly.
- Args:
if library =
augly
: (see:ChangeAspectRatio
)
e.g.
import beacon_aug as BA
aug = BA.ChangeAspectRatio(p=1, library="augly")
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.ChangeColorTemperature(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
. Default: imgaug.
- Args:
if library =
imgaug
: (see:ChangeColorTemperature
) [Source]- kelvin( number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional):
Temperature in Kelvin. The temperatures of images will be modified to this value. Must be in the interval
[1000, 40000]
.
e.g.
import beacon_aug as BA
aug = BA.ChangeColorTemperature(p=1, kelvin=5000, library="imgaug")  # example value in [1000, 40000]
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.ChangeColorspace(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
. Default: imgaug.
- Args:
if library =
imgaug
: (see:ChangeColorspace
) [Source]- to_colorspace( str or list of str or imgaug.parameters.StochasticParameter):
The target colorspace. Allowed strings are:
RGB
,BGR
,GRAY
,CIE
,YCrCb
,HSV
,HLS
,Lab
,Luv
. These are also accessible viaimgaug.augmenters.color.CSPACE_<NAME>
, e.g.imgaug.augmenters.CSPACE_YCrCb
.- alpha( number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional):
The alpha value of the new colorspace when overlaid over the old one. A value close to 1.0 means that mostly the new colorspace is visible. A value close to 0.0 means that mostly the old image is visible.
e.g.
import beacon_aug as BA
aug = BA.ChangeColorspace(p=1, to_colorspace="HSV", library="imgaug")  # example value
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.ChannelDropout(library=None, *args, **kwargs)¶
Bases:
object
Randomly Drop Channels in the input Image.
- Args:
- library (str): flag for library. Should be one of:
albumentations
. Default:
albumentations
.
channel_drop_range ((int, int)): range from which we choose the number of channels to drop.
fill_value (int, float): pixel value for the dropped channel.
p (float): probability of applying the transform. Default: 0.5.
Targets: image
- Image types:
uint8, uint16, uint32, float32
e.g.
import beacon_aug as BA
aug = BA.ChannelDropout(p=1, channel_drop_range=(1, 1), fill_value=0, library="albumentations")  # example values
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.ChannelShuffle(library=None, *args, **kwargs)¶
Bases:
object
Randomly rearrange channels of the input RGB image.
- Args:
- library (str): flag for library. Should be one of:
albumentations
,imgaug
. Default:
albumentations
.
p (float): probability of applying the transform. Default: 0.5.
if library =
imgaug
: (see:ChannelShuffle
) [Source]- channels( None or imgaug.ALL or list of int, optional):
Which channels are allowed to be shuffled with each other. If this is
None
orimgaug.ALL
, then all channels may be shuffled. If it is alist
ofint
s, then only the channels with indices in that list may be shuffled. (Values start at0
. All channel indices in the list must exist in each image.)
- Targets:
image
- Image types:
uint8, float32
e.g.
import beacon_aug as BA
aug = BA.ChannelShuffle(p=1, library="albumentations")
image_auged = aug(image=image)["image"]
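As a rough numpy sketch of what channel shuffling amounts to (not beacon_aug's actual implementation), one permutation of the channel axis is sampled per image; restricting imgaug's `channels` argument would mean permuting only the listed indices.

```python
import numpy as np

rng = np.random.default_rng(0)
# A 4x4 RGB image whose channels are constant 10, 20 and 30
image = np.dstack([np.full((4, 4), v, dtype=np.uint8) for v in (10, 20, 30)])

perm = rng.permutation(image.shape[-1])  # one random channel order per image
shuffled = image[..., perm]

# The set of channel values at each pixel is unchanged;
# only the channel order may differ.
assert sorted(shuffled[0, 0].tolist()) == sorted(image[0, 0].tolist())
```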
- class beacon_aug.operators.CheckerboardMaskGen(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
. Default: imgaug.
- Args:
if library =
imgaug
: (see:CheckerboardMaskGen
) [Source]- nb_rows( int or tuple of int or list of int or imgaug.parameters.StochasticParameter, optional):
Number of rows of the checkerboard.
- nb_cols( int or tuple of int or list of int or imgaug.parameters.StochasticParameter, optional):
Number of columns of the checkerboard. Analogous to nb_rows.
e.g.
import beacon_aug as BA
aug = BA.CheckerboardMaskGen(p=1, nb_rows=4, nb_cols=4, library="imgaug")  # example values
image_auged = aug(image=image)["image"]
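A minimal numpy sketch (a hypothetical helper, not imgaug's implementation) of what a checkerboard mask with nb_rows x nb_cols cells looks like:

```python
import numpy as np

def checkerboard_mask(h, w, nb_rows, nb_cols):
    """Boolean checkerboard of shape (h, w) with nb_rows x nb_cols cells.

    Hypothetical sketch of the documented behaviour.
    """
    row_idx = np.arange(h) * nb_rows // h   # checkerboard row of each pixel
    col_idx = np.arange(w) * nb_cols // w   # checkerboard column of each pixel
    return (row_idx[:, None] + col_idx[None, :]) % 2 == 0

mask = checkerboard_mask(8, 8, 2, 2)
print(mask.astype(int))  # four 4x4 quadrants in alternating on/off pattern
```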
- class beacon_aug.operators.ClipCBAsToImagePlanes(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
. Default: imgaug.
- Args:
if library =
imgaug
: (see:ClipCBAsToImagePlanes
) [Source]
e.g.
import beacon_aug as BA
aug = BA.ClipCBAsToImagePlanes(p=1, library="imgaug")
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.CloudLayer(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
. Default: imgaug.
- Args:
if library =
imgaug
: (see:CloudLayer
) [Source]- intensity_mean( number or tuple of number or list of number or imgaug.parameters.StochasticParameter):
Mean intensity of the clouds (i.e. mean color). Recommended to be in the interval
[190, 255]
.- intensity_freq_exponent( number or tuple of number or list of number or imgaug.parameters.StochasticParameter):
Exponent of the frequency noise used to add fine intensity to the mean intensity. Recommended to be in the interval
[-2.5, -1.5]
. See__init__()
for details.- intensity_coarse_scale( number or tuple of number or list of number or imgaug.parameters.StochasticParameter):
Standard deviation of the gaussian distribution used to add more localized intensity to the mean intensity. Sampled in low resolution space, i.e. affects final intensity on a coarse level. Recommended to be in the interval
(0, 10]
.- alpha_min( number or tuple of number or list of number or imgaug.parameters.StochasticParameter):
Minimum alpha when blending cloud noise with the image. High values will lead to clouds being “everywhere”. Recommended to usually be at around
0.0
for clouds and>0
for fog.- alpha_multiplier( number or tuple of number or list of number or imgaug.parameters.StochasticParameter):
Multiplier for the sampled alpha values. High values will lead to denser clouds wherever they are visible. Recommended to be in the interval
[0.3, 1.0]
. Note that this parameter currently overlaps with density_multiplier, which is applied a bit later to the alpha mask.- alpha_size_px_max( number or tuple of number or list of number or imgaug.parameters.StochasticParameter):
Controls the image size at which the alpha mask is sampled. Lower values will lead to coarser alpha masks and hence larger clouds (and empty areas). See
__init__()
for details.- alpha_freq_exponent( number or tuple of number or list of number or imgaug.parameters.StochasticParameter):
Exponent of the frequency noise used to sample the alpha mask. Similarly to alpha_size_max_px, lower values will lead to coarser alpha patterns. Recommended to be in the interval
[-4.0, -1.5]
. See__init__()
for details.- sparsity( number or tuple of number or list of number or imgaug.parameters.StochasticParameter):
Exponent applied late to the alpha mask. Lower values will lead to coarser cloud patterns, higher values to finer patterns. Recommended to be somewhere around
1.0
. Do not deviate far from that value, otherwise the alpha mask might get weird patterns with sudden fall-offs to zero that look very unnatural.- density_multiplier( number or tuple of number or list of number or imgaug.parameters.StochasticParameter):
Late multiplier for the alpha mask, similar to alpha_multiplier. Set this higher to get “denser” clouds wherever they are visible. Recommended to be around
[0.5, 1.5]
.
e.g.
import beacon_aug as BA
# example values chosen within the recommended ranges above
aug = BA.CloudLayer(p=1, intensity_mean=220, intensity_freq_exponent=-2.0,
                    intensity_coarse_scale=5, alpha_min=0.0, alpha_multiplier=0.5,
                    alpha_size_px_max=4, alpha_freq_exponent=-2.0,
                    sparsity=1.0, density_multiplier=1.0, library="imgaug")
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.Clouds(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
. Default: imgaug.
- Args:
if library =
imgaug
: (see:Clouds
) [Source]
e.g.
import beacon_aug as BA
aug = BA.Clouds(p=1, library="imgaug")
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.CoarseDropout(library=None, *args, **kwargs)¶
Bases:
object
CoarseDropout of the rectangular regions in the image.
- Args:
- library (str): flag for library. Should be one of:
albumentations
,imgaug
. Default:
albumentations
.
max_holes (int): Maximum number of regions to zero out.
max_height (int): Maximum height of the hole.
max_width (int): Maximum width of the hole.
min_holes (int): Minimum number of regions to zero out. If None, min_holes is set to max_holes. Default: None.
min_height (int): Minimum height of the hole. If None, min_height is set to max_height. Default: None.
min_width (int): Minimum width of the hole. If None, min_width is set to max_width. Default: None.
fill_value (int, float, list of int, list of float): value for dropped pixels.
mask_fill_value (int, float, list of int, list of float): fill value for dropped pixels in mask. If None, the mask is not affected. Default: None.
if library =
imgaug
: (see:CoarseDropout
) [Source]- size_px( None or int or tuple of int or imgaug.parameters.StochasticParameter, optional):
The size of the lower resolution image from which to sample the dropout mask in absolute pixel dimensions. Note that this means that lower values of this parameter lead to larger areas being dropped (as any pixel in the lower resolution image will correspond to a larger area at the original resolution).
- size_percent( None or float or tuple of float or imgaug.parameters.StochasticParameter, optional):
The size of the lower resolution image from which to sample the dropout mask in percent of the input image. Note that this means that lower values of this parameter lead to larger areas being dropped (as any pixel in the lower resolution image will correspond to a larger area at the original resolution).
- per_channel( bool or float or imgaug.parameters.StochasticParameter, optional):
Whether to use (imagewise) the same sample(s) for all channels (
False
) or to sample value(s) for each channel (True
). Setting this toTrue
will therefore lead to different transformations per image and channel, otherwise only per image. If this value is a floatp
, then forp
percent of all images per_channel will be treated asTrue
. If it is aStochasticParameter
it is expected to produce samples with values between0.0
and1.0
, where values>0.5
will lead to per-channel behaviour (i.e. same asTrue
).- min_size( int, optional):
Minimum height and width of the low resolution mask. If size_percent or size_px leads to a lower value than this, min_size will be used instead. This should never have a value of less than
2
, otherwise one may end up with a1x1
low resolution mask, leading easily to the whole image being dropped.
- Targets:
image, mask
- Image types:
uint8, float32
Reference:
https://arxiv.org/abs/1708.04552
https://github.com/uoguelph-mlrg/Cutout/blob/master/util/cutout.py
https://github.com/aleju/imgaug/blob/master/imgaug/augmenters/arithmetic.py
e.g.
import beacon_aug as BA
aug = BA.CoarseDropout(p=1, max_holes=8, max_height=8, max_width=8, library="albumentations")  # example values
image_auged = aug(image=image)["image"]
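The size_px mechanics described above can be sketched with numpy (a rough illustration of the documented behaviour, not the library code): the dropout mask is sampled at low resolution and upscaled, so each low-resolution cell drops a whole block of pixels, which is why smaller size_px values drop larger areas.

```python
import numpy as np

rng = np.random.default_rng(0)
image = np.ones((32, 32), dtype=np.float32)

size_px = 4                  # the low-resolution mask is 4x4...
cell = 32 // size_px         # ...so each mask cell covers an 8x8 block
low_res = rng.random((size_px, size_px)) < 0.3   # ~30% of cells dropped
# Upscale the low-res mask to full resolution, block by block
mask = np.repeat(np.repeat(low_res, cell, axis=0), cell, axis=1)
dropped = np.where(mask, 0.0, image)

assert mask.shape == image.shape   # smaller size_px => larger dropped blocks
```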
- class beacon_aug.operators.CoarsePepper(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
. Default: imgaug.
- Args:
if library =
imgaug
: (see:CoarsePepper
) [Source]- size_px( int or tuple of int or imgaug.parameters.StochasticParameter, optional):
The size of the lower resolution image from which to sample the replacement mask in absolute pixel dimensions. Note that this means that lower values of this parameter lead to larger areas being replaced (as any pixel in the lower resolution image will correspond to a larger area at the original resolution).
- size_percent( float or tuple of float or imgaug.parameters.StochasticParameter, optional):
The size of the lower resolution image from which to sample the replacement mask in percent of the input image. Note that this means that lower values of this parameter lead to larger areas being replaced (as any pixel in the lower resolution image will correspond to a larger area at the original resolution).
- per_channel( bool or float or imgaug.parameters.StochasticParameter, optional):
Whether to use (imagewise) the same sample(s) for all channels (
False
) or to sample value(s) for each channel (True
). Setting this toTrue
will therefore lead to different transformations per image and channel, otherwise only per image. If this value is a floatp
, then forp
percent of all images per_channel will be treated asTrue
. If it is aStochasticParameter
it is expected to produce samples with values between0.0
and1.0
, where values>0.5
will lead to per-channel behaviour (i.e. same asTrue
).- min_size( int, optional):
Minimum size of the low resolution mask, both width and height. If size_percent or size_px leads to a lower value than this, min_size will be used instead. This should never have a value of less than 2, otherwise one may end up with a
1x1
low resolution mask, leading easily to the whole image being replaced.
e.g.
import beacon_aug as BA
aug = BA.CoarsePepper(p=1, size_percent=0.1, library="imgaug")  # example value
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.CoarseSalt(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
. Default: imgaug.
- Args:
if library =
imgaug
: (see:CoarseSalt
) [Source]- size_px( int or tuple of int or imgaug.parameters.StochasticParameter, optional):
The size of the lower resolution image from which to sample the replacement mask in absolute pixel dimensions. Note that this means that lower values of this parameter lead to larger areas being replaced (as any pixel in the lower resolution image will correspond to a larger area at the original resolution).
- size_percent( float or tuple of float or imgaug.parameters.StochasticParameter, optional):
The size of the lower resolution image from which to sample the replacement mask in percent of the input image. Note that this means that lower values of this parameter lead to larger areas being replaced (as any pixel in the lower resolution image will correspond to a larger area at the original resolution).
- per_channel( bool or float or imgaug.parameters.StochasticParameter, optional):
Whether to use (imagewise) the same sample(s) for all channels (
False
) or to sample value(s) for each channel (True
). Setting this toTrue
will therefore lead to different transformations per image and channel, otherwise only per image. If this value is a floatp
, then forp
percent of all images per_channel will be treated asTrue
. If it is aStochasticParameter
it is expected to produce samples with values between0.0
and1.0
, where values>0.5
will lead to per-channel behaviour (i.e. same asTrue
).- min_size( int, optional):
Minimum height and width of the low resolution mask. If size_percent or size_px leads to a lower value than this, min_size will be used instead. This should never have a value of less than
2
, otherwise one may end up with a1x1
low resolution mask, leading easily to the whole image being replaced.
e.g.
import beacon_aug as BA
aug = BA.CoarseSalt(p=1, size_percent=0.1, library="imgaug")  # example value
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.CoarseSaltAndPepper(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
. Default: imgaug.
- Args:
if library =
imgaug
: (see:CoarseSaltAndPepper
) [Source]- size_px( int or tuple of int or imgaug.parameters.StochasticParameter, optional):
The size of the lower resolution image from which to sample the replacement mask in absolute pixel dimensions. Note that this means that lower values of this parameter lead to larger areas being replaced (as any pixel in the lower resolution image will correspond to a larger area at the original resolution).
- size_percent( float or tuple of float or imgaug.parameters.StochasticParameter, optional):
The size of the lower resolution image from which to sample the replacement mask in percent of the input image. Note that this means that lower values of this parameter lead to larger areas being replaced (as any pixel in the lower resolution image will correspond to a larger area at the original resolution).
- per_channel( bool or float or imgaug.parameters.StochasticParameter, optional):
Whether to use (imagewise) the same sample(s) for all channels (
False
) or to sample value(s) for each channel (True
). Setting this toTrue
will therefore lead to different transformations per image and channel, otherwise only per image. If this value is a floatp
, then forp
percent of all images per_channel will be treated asTrue
. If it is aStochasticParameter
it is expected to produce samples with values between0.0
and1.0
, where values>0.5
will lead to per-channel behaviour (i.e. same asTrue
).- min_size( int, optional):
Minimum height and width of the low resolution mask. If size_percent or size_px leads to a lower value than this, min_size will be used instead. This should never have a value of less than
2
, otherwise one may end up with a1x1
low resolution mask, leading easily to the whole image being replaced.
e.g.
import beacon_aug as BA
aug = BA.CoarseSaltAndPepper(p=1, size_percent=0.1, library="imgaug")  # example value
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.ColorJitter(library=None, *args, **kwargs)¶
Bases:
object
Randomly changes the brightness, contrast, and saturation of an image. Compared to ColorJitter from torchvision, this transform gives slightly different results because Pillow (used in torchvision) and OpenCV (used in Albumentations) convert an image to HSV format with different formulas. Another difference: Pillow uses uint8 overflow, whereas Albumentations uses value saturation.
- Args:
- library (str): flag for library. Should be one of:
albumentations
,torchvision
,augly
. Default:
albumentations
.- brightness (float or tuple of float (min, max)): How much to jitter brightness.
brightness_factor is chosen uniformly from [max(0, 1 - brightness), 1 + brightness] or the given [min, max]. Should be non negative numbers.
- contrast (float or tuple of float (min, max)): How much to jitter contrast.
contrast_factor is chosen uniformly from [max(0, 1 - contrast), 1 + contrast] or the given [min, max]. Should be non negative numbers.
- saturation (float or tuple of float (min, max)): How much to jitter saturation.
saturation_factor is chosen uniformly from [max(0, 1 - saturation), 1 + saturation] or the given [min, max]. Should be non negative numbers.
- hue (float or tuple of float (min, max)): How much to jitter hue.
hue_factor is chosen uniformly from [-hue, hue] or the given [min, max]. Should have 0 <= hue <= 0.5 or -0.5 <= min <= max <= 0.5.
e.g.
import beacon_aug as BA
aug = BA.ColorJitter(p=1, brightness=0.2, contrast=0.2, saturation=0.2, hue=0.2, library="albumentations")
image_auged = aug(image=image)["image"]
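The factor-range rule above ("chosen uniformly from [max(0, 1 - brightness), 1 + brightness] or the given [min, max]") can be sketched with a small hypothetical helper (not part of any of the backend libraries):

```python
import random

def jitter_range(value, center=1.0, lower_bound=0.0):
    """Resolve a ColorJitter-style argument into a sampling range.

    A scalar v becomes [max(lower_bound, center - v), center + v];
    a (min, max) tuple is used as-is.  Hypothetical sketch of the
    documented parameter semantics.
    """
    if isinstance(value, (tuple, list)):
        return tuple(value)
    return (max(lower_bound, center - value), center + value)

print(jitter_range(0.2))   # brightness=0.2 -> (0.8, 1.2)
print(jitter_range(1.5))   # clipped at 0  -> (0.0, 2.5)

lo, hi = jitter_range(0.2)
factor = random.uniform(lo, hi)   # e.g. brightness_factor sampled uniformly
```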
- class beacon_aug.operators.Contrast(library=None, *args, **kwargs)¶
Bases:
object
Randomly change contrast of the input image.
- Args:
- library (str): flag for library. Should be one of:
albumentations
,imgaug
,torchvision
,augly
,imagenet_c
. Default:
albumentations
.- limit ((float, float) or float): factor range for changing contrast.
If limit is a single float, the range will be (-limit, limit). Default: (-0.2, 0.2).
p (float): probability of applying the transform. Default: 0.5.
if library =
imgaug
: (see:MultiplyBrightness
) [Source]- mul( number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional):
See
Multiply
.- to_colorspace( imgaug.ALL or str or list of str or imgaug.parameters.StochasticParameter, optional):
See
WithBrightnessChannels
.
if library =
torchvision
: (see:ColorJitter
) [Source]- brightness (float or tuple of float (min, max)): How much to jitter brightness.
brightness_factor is chosen uniformly from [max(0, 1 - brightness), 1 + brightness] or the given [min, max]. Should be non negative numbers.
- contrast (float or tuple of float (min, max)): How much to jitter contrast.
contrast_factor is chosen uniformly from [max(0, 1 - contrast), 1 + contrast] or the given [min, max]. Should be non negative numbers.
- saturation (float or tuple of float (min, max)): How much to jitter saturation.
saturation_factor is chosen uniformly from [max(0, 1 - saturation), 1 + saturation] or the given [min, max]. Should be non negative numbers.
- hue (float or tuple of float (min, max)): How much to jitter hue.
hue_factor is chosen uniformly from [-hue, hue] or the given [min, max]. Should have 0 <= hue <= 0.5 or -0.5 <= min <= max <= 0.5.
if library =
augly
: (see:Contrast
) if library =
imagenet_c
: (see:imagenet_c
)
Targets:
image
- Image types:
uint8, float32
e.g.
import beacon_aug as BA
aug = BA.Contrast(p=1, limit=[-0.2, 0.2], library="albumentations")
aug = BA.Contrast(p=1, corruption_name="contrast", severity=1, library="imagenet_c")
image_auged = aug(image=image)["image"]
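The limit semantics ("if limit is a single float, the range will be (-limit, limit)") and the usual mean-preserving contrast scaling can be sketched with numpy; apply_contrast below is a hypothetical helper, not beacon_aug's implementation:

```python
import numpy as np

def apply_contrast(image, limit=0.2, rng=None):
    """Hypothetical sketch: scale pixels around the mean by 1 + alpha,
    with alpha drawn from (-limit, limit) when limit is a scalar."""
    rng = rng or np.random.default_rng()
    lo, hi = (-limit, limit) if np.isscalar(limit) else limit
    alpha = 1.0 + rng.uniform(lo, hi)
    mean = image.mean()
    return np.clip(mean + alpha * (image - mean), 0, 255)

img = np.array([[50.0, 200.0]])
out = apply_contrast(img, limit=0.2, rng=np.random.default_rng(0))
# Scaling around the mean preserves it (when no clipping occurs)
assert abs(out.mean() - img.mean()) < 1e-6
```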
- class beacon_aug.operators.ConvertColor(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
augly
. Default: augly.
- Args:
if library =
augly
: (see:ConvertColor
)
e.g.
import beacon_aug as BA
aug = BA.ConvertColor(p=1, library="augly")
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.ConvertImageDtype(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
torchvision
. Default: torchvision.
- Args:
if library =
torchvision
: (see:ConvertImageDtype
) [Source]dtype (torch.dtype): Desired data type of the output
e.g.
import beacon_aug as BA
import torch
aug = BA.ConvertImageDtype(p=1, dtype=torch.float32, library="torchvision")  # example dtype
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.Convolve(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
. Default: imgaug.
- Args:
if library =
imgaug
: (see:Convolve
) [Source]- matrix( None or (H, W) ndarray or imgaug.parameters.StochasticParameter or callable, optional):
The weight matrix of the convolution kernel to apply.
e.g.
import beacon_aug as BA
import numpy as np
aug = BA.Convolve(p=1, matrix=np.ones((3, 3)) / 9.0, library="imgaug")  # example 3x3 box kernel
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.Crop(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
,augly
. Default: imgaug.
- Args:
if library =
imgaug
: (see:Crop
) [Source]- px( None or int or imgaug.parameters.StochasticParameter or tuple, optional):
The number of pixels to crop on each side of the image. Expected value range is
[0, inf)
. Either this or the parameter percent may be set, not both at the same time.- percent( None or int or float or imgaug.parameters.StochasticParameter or tuple, optional):
The number of pixels to crop on each side of the image given as a fraction of the image height/width. E.g. if this is set to
0.1
, the augmenter will always crop10%
of the image’s height at both the top and the bottom (both10%
each), as well as10%
of the width at the right and left. Expected value range is[0.0, 1.0)
. Either this or the parameter px may be set, not both at the same time.- keep_size( bool, optional):
After cropping, the result image will usually have a different height/width compared to the original input image. If this parameter is set to
True
, then the cropped image will be resized to the input image’s size, i.e. the augmenter’s output shape is always identical to the input shape.- sample_independently( bool, optional):
If
False
and the values for px/percent result in exactly one probability distribution for all image sides, only one single value will be sampled from that probability distribution and used for all sides. I.e. the crop amount then is the same for all sides. IfTrue
, four values will be sampled independently, one per side.
if library =
augly
: (see:Crop
)
e.g.
import beacon_aug as BA
aug = BA.Crop(p=1, percent=0.1, library="imgaug")  # example value
image_auged = aug(image=image)["image"]
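For instance, the percent behaviour above implies the following output shape before any keep_size resizing (cropped_shape is a hypothetical helper to illustrate the arithmetic, not part of the library):

```python
def cropped_shape(height, width, percent):
    """With a fixed percent, that fraction of the height is cropped at
    both top and bottom, and of the width at both left and right
    (hypothetical sketch of the documented behaviour)."""
    return (height - 2 * round(percent * height),
            width - 2 * round(percent * width))

# percent=0.1 on a 100x200 image crops 10 px top/bottom and 20 px left/right
print(cropped_shape(100, 200, 0.1))   # -> (80, 160)
```

With the default keep_size=True, the cropped image would then be resized back to the input shape.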
- class beacon_aug.operators.CropAndPad(library=None, *args, **kwargs)¶
Bases:
object
Crop and pad images by pixel amounts or fractions of image sizes. Cropping removes pixels at the sides (i.e. extracts a subimage from a given full image). Padding adds pixels to the sides (e.g. black pixels). This transformation will never crop images below a height or width of
1
.- Note:
This transformation automatically resizes images back to their original size. To deactivate this, add the parameter
keep_size=False
.- Args:
- library (str): flag for library. Should be one of:
albumentations
,imgaug
. Default:
albumentations
.- px (int or tuple):
The number of pixels to crop (negative values) or pad (positive values) on each side of the image. Either this or the parameter percent may be set, not both at the same time.
If
None
, then pixel-based cropping/padding will not be used.If
int
, then that exact number of pixels will always be cropped/padded.If a
tuple
of twoint
s with valuesa
andb
, then each side will be cropped/padded by a random amount sampled uniformly per image and side from the interval[a, b]
. If however sample_independently is set toFalse
, only one value will be sampled per image and used for all sides.If a
tuple
of four entries, then the entries represent top, right, bottom, left. Each entry may be a singleint
(always crop/pad by exactly that value), atuple
of twoint
sa
andb
(crop/pad by an amount within[a, b]
), alist
ofint
s (crop/pad by a random value that is contained in thelist
).
- percent (float or tuple):
The number of pixels to crop (negative values) or pad (positive values) on each side of the image given as a fraction of the image height/width. E.g. if this is set to
-0.1
, the transformation will always crop away10%
of the image’s height at both the top and the bottom (both10%
each), as well as10%
of the width at the right and left. Expected value range is(-1.0, inf)
. Either this or the parameter px may be set, not both at the same time.If
None
, then fraction-based cropping/padding will not be used.If
float
, then that fraction will always be cropped/padded.If a
tuple
of twofloat
s with valuesa
andb
, then each side will be cropped/padded by a random fraction sampled uniformly per image and side from the interval[a, b]
. If however sample_independently is set toFalse
, only one value will be sampled per image and used for all sides.If a
tuple
of four entries, then the entries represent top, right, bottom, left. Each entry may be a singlefloat
(always crop/pad by exactly that percent value), atuple
of twofloat
sa
andb
(crop/pad by a fraction from[a, b]
), alist
offloat
s (crop/pad by a random value that is contained in the list).
pad_mode (int): OpenCV border mode.
pad_cval (number, Sequence[number]):
- The constant value to use if the pad mode is
BORDER_CONSTANT
. If
number
, then that value will be used.If a
tuple
of twonumber
s and at least one of them is afloat
, then a random number will be uniformly sampled per image from the continuous interval[a, b]
and used as the value. If bothnumber
s areint
s, the interval is discrete.If a
list
ofnumber
, then a random value will be chosen from the elements of thelist
and used as the value.
pad_cval_mask (number, Sequence[number]): Same as pad_cval but only for masks.
keep_size (bool):
After cropping and padding, the result image will usually have a different height/width compared to the original input image. If this parameter is set to
True
, then the cropped/padded image will be resized to the input image’s size, i.e. the output shape is always identical to the input shape.- sample_independently (bool):
If
False
and the values for px/percent result in exactly one probability distribution for all image sides, only one single value will be sampled from that probability distribution and used for all sides. I.e. the crop/pad amount then is the same for all sides. IfTrue
, four values will be sampled independently, one per side.- interpolation (OpenCV flag): flag that is used to specify the interpolation algorithm. Should be one of:
cv2.INTER_NEAREST, cv2.INTER_LINEAR, cv2.INTER_CUBIC, cv2.INTER_AREA, cv2.INTER_LANCZOS4. Default: cv2.INTER_LINEAR.
if library =
imgaug
: (see:CropAndPad
) [Source]- px( None or int or imgaug.parameters.StochasticParameter or tuple, optional):
The number of pixels to crop (negative values) or pad (positive values) on each side of the image. Either this or the parameter percent may be set, not both at the same time.
- percent( None or number or imgaug.parameters.StochasticParameter or tuple, optional):
The number of pixels to crop (negative values) or pad (positive values) on each side of the image given as a fraction of the image height/width. E.g. if this is set to
-0.1
, the augmenter will always crop away10%
of the image’s height at both the top and the bottom (both10%
each), as well as10%
of the width at the right and left. Expected value range is(-1.0, inf)
. Either this or the parameter px may be set, not both at the same time.- pad_mode( imgaug.ALL or str or list of str or imgaug.parameters.StochasticParameter, optional):
Padding mode to use. The available modes match the numpy padding modes, i.e.
constant
,edge
,linear_ramp
,maximum
,median
,minimum
,reflect
,symmetric
,wrap
. The modesconstant
andlinear_ramp
use extra values, which are provided bypad_cval
when necessary. Seepad()
for more details.- pad_cval( number or tuple of number list of number or imgaug.parameters.StochasticParameter, optional):
The constant value to use if the pad mode is
constant
or the end value to use if the mode islinear_ramp
. Seepad()
for more details.- keep_size( bool, optional):
After cropping and padding, the result image will usually have a different height/width compared to the original input image. If this parameter is set to
True
, then the cropped/padded image will be resized to the input image’s size, i.e. the augmenter’s output shape is always identical to the input shape.- sample_independently( bool, optional):
If
False
and the values for px/percent result in exactly one probability distribution for all image sides, only one single value will be sampled from that probability distribution and used for all sides. I.e. the crop/pad amount then is the same for all sides. IfTrue
, four values will be sampled independently, one per side.
- library (str): flag for library. Should be one of:
- Targets:
image, mask, bboxes, keypoints
- Image types:
any
e.g.
import beacon_aug as BA
aug = BA.CropAndPad(p=1, px=(-16, 16), library="albumentations")
image_auged = aug(image=image)["image"]
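For intuition, the crop/pad behaviour described above can be sketched in plain NumPy, independent of beacon_aug. This is an illustrative helper, not the library's implementation: it applies one scalar px to all four sides, pads with a constant value, and does not resize back (i.e. it behaves like keep_size=False).

```python
import numpy as np

def crop_and_pad(image, px, cval=0):
    """Crop (px < 0) or pad (px > 0) every side by |px| pixels.

    Simplified sketch: one scalar for all four sides, constant-value
    padding, no resizing back to the original shape.
    """
    if px >= 0:
        return np.pad(image, ((px, px), (px, px), (0, 0)),
                      mode="constant", constant_values=cval)
    c = -px
    return image[c:image.shape[0] - c, c:image.shape[1] - c]

img = np.zeros((32, 32, 3), dtype=np.uint8)
padded = crop_and_pad(img, 4)    # 32x32 -> 40x40
cropped = crop_and_pad(img, -4)  # 32x32 -> 24x24
```

The real operator additionally samples per-image (and optionally per-side) amounts from px or percent ranges.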
- class beacon_aug.operators.CropNonEmptyMaskIfExists(library=None, *args, **kwargs)¶
Bases:
object
Crop area with mask if mask is non-empty, else make random crop.
- Args:
- library (str): flag for library. Should be one of:
albumentations
. Default:
albumentations
.
height (int): vertical size of crop in pixels
width (int): horizontal size of crop in pixels
ignore_values (list of int): values to ignore in mask, 0 values are always ignored
(e.g. if background value is 5 set ignore_values=[5] to ignore)
- ignore_channels (list of int): channels to ignore in mask
(e.g. if background is a first channel set ignore_channels=[0] to ignore)
p (float): probability of applying the transform. Default: 1.0.
Targets: image, mask, bboxes, keypoints
- library (str): flag for library. Should be one of:
- Image types:
uint8, float32
e.g.
import beacon_aug as BA
aug = BA.CropNonEmptyMaskIfExists(p=1, height=64, width=64, library="albumentations")
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.CropToAspectRatio(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
. Default: imgaug.- Args:
- Args:
if library =
imgaug
: (see:CropToAspectRatio
) [Source]- aspect_ratio( number):
The desired aspect ratio, given as
width/height
. E.g. a ratio of2.0
denotes an image that is twice as wide as it is high.- position( {‘uniform’, ‘normal’, ‘center’, ‘left-top’, ‘left-center’, ‘left-bottom’, ‘center-top’, ‘center-center’, ‘center-bottom’, ‘right-top’, ‘right-center’, ‘right-bottom’} or tuple of float or StochasticParameter or tuple of StochasticParameter, optional):
See
CropToFixedSize.__init__()
.
e.g.
import beacon_aug as BA
aug = BA.CropToAspectRatio(p=1, aspect_ratio=2.0, library="imgaug")
image_auged = aug(image=image)["image"]
- library (str): flag for library. Should be one of:
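The aspect-ratio computation above can be illustrated with a hypothetical NumPy helper that hard-codes the 'center' position (the real operator supports many position modes):

```python
import numpy as np

def crop_to_aspect_ratio(image, aspect_ratio):
    """Center-crop so that width/height == aspect_ratio (sketch only)."""
    h, w = image.shape[:2]
    if w / h > aspect_ratio:          # too wide -> trim the width
        new_w, new_h = int(round(h * aspect_ratio)), h
    else:                             # too tall -> trim the height
        new_w, new_h = w, int(round(w / aspect_ratio))
    top = (h - new_h) // 2
    left = (w - new_w) // 2
    return image[top:top + new_h, left:left + new_w]

img = np.zeros((100, 300, 3), dtype=np.uint8)
out = crop_to_aspect_ratio(img, 2.0)   # 100x300 -> 100x200
```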
- class beacon_aug.operators.CropToMultiplesOf(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
. Default: imgaug.- Args:
- Args:
if library =
imgaug
: (see:CropToMultiplesOf
) [Source]- width_multiple( int or None):
Multiple for the width. Images will be cropped down until their width is a multiple of this value. If
None
, image widths will not be altered.- height_multiple( int or None):
Multiple for the height. Images will be cropped down until their height is a multiple of this value. If
None
, image heights will not be altered.- position( {‘uniform’, ‘normal’, ‘center’, ‘left-top’, ‘left-center’, ‘left-bottom’, ‘center-top’, ‘center-center’, ‘center-bottom’, ‘right-top’, ‘right-center’, ‘right-bottom’} or tuple of float or StochasticParameter or tuple of StochasticParameter, optional):
See
CropToFixedSize.__init__()
.
e.g.
import beacon_aug as BA
aug = BA.CropToMultiplesOf(p=1, width_multiple=16, height_multiple=16, library="imgaug")
image_auged = aug(image=image)["image"]
- library (str): flag for library. Should be one of:
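Cropping down to a multiple reduces each dimension by its remainder. A minimal center-crop sketch (illustrative helper, not beacon_aug code):

```python
import numpy as np

def crop_to_multiples_of(image, width_multiple, height_multiple):
    """Center-crop so height and width become multiples of the given values."""
    h, w = image.shape[:2]
    new_h = h - (h % height_multiple) if height_multiple else h
    new_w = w - (w % width_multiple) if width_multiple else w
    top = (h - new_h) // 2
    left = (w - new_w) // 2
    return image[top:top + new_h, left:left + new_w]

img = np.zeros((37, 50, 3), dtype=np.uint8)
out = crop_to_multiples_of(img, width_multiple=16, height_multiple=16)  # -> 32x48
```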
- class beacon_aug.operators.CropToPowersOf(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
. Default: imgaug.- Args:
- Args:
if library =
imgaug
: (see:CropToPowersOf
) [Source]- width_base( int or None):
Base for the width. Images will be cropped down until their width fulfills
width' = width_base ^ E
withE
being any natural number. IfNone
, image widths will not be altered.- height_base( int or None):
Base for the height. Images will be cropped down until their height fulfills
height' = height_base ^ E
withE
being any natural number. IfNone
, image heights will not be altered.- position( {‘uniform’, ‘normal’, ‘center’, ‘left-top’, ‘left-center’, ‘left-bottom’, ‘center-top’, ‘center-center’, ‘center-bottom’, ‘right-top’, ‘right-center’, ‘right-bottom’} or tuple of float or StochasticParameter or tuple of StochasticParameter, optional):
See
CropToFixedSize.__init__()
.
e.g.
import beacon_aug as BA
aug = BA.CropToPowersOf(p=1, width_base=2, height_base=2, library="imgaug")
image_auged = aug(image=image)["image"]
- library (str): flag for library. Should be one of:
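The `width' = width_base ^ E` condition means each dimension is cropped to the largest power of the base that fits. A sketch (illustrative helper, assumes the dimension is at least the base, crops from the top-left rather than sampling a position):

```python
import numpy as np

def largest_power_at_most(base, value):
    """Largest base**E (E a natural number) that is <= value."""
    p = base
    while p * base <= value:
        p *= base
    return p

def crop_to_powers_of(image, width_base, height_base):
    h, w = image.shape[:2]
    new_h = largest_power_at_most(height_base, h) if height_base else h
    new_w = largest_power_at_most(width_base, w) if width_base else w
    return image[:new_h, :new_w]

img = np.zeros((300, 70, 3), dtype=np.uint8)
out = crop_to_powers_of(img, width_base=2, height_base=3)  # -> 243x64
```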
- class beacon_aug.operators.CropToSquare(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
. Default: imgaug.- Args:
- Args:
if library =
imgaug
: (see:CropToSquare
) [Source]- position( {‘uniform’, ‘normal’, ‘center’, ‘left-top’, ‘left-center’, ‘left-bottom’, ‘center-top’, ‘center-center’, ‘center-bottom’, ‘right-top’, ‘right-center’, ‘right-bottom’} or tuple of float or StochasticParameter or tuple of StochasticParameter, optional):
See
CropToFixedSize.__init__()
.
e.g.
import beacon_aug as BA
aug = BA.CropToSquare(p=1, library="imgaug")
image_auged = aug(image=image)["image"]
- library (str): flag for library. Should be one of:
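Cropping to a square keeps the shorter side. A center-position sketch in plain NumPy (illustrative, not the library code):

```python
import numpy as np

def crop_to_square(image):
    """Center-crop to the shorter side (position='center' behaviour)."""
    h, w = image.shape[:2]
    side = min(h, w)
    top = (h - side) // 2
    left = (w - side) // 2
    return image[top:top + side, left:left + side]

img = np.zeros((48, 64, 3), dtype=np.uint8)
out = crop_to_square(img)   # 48x64 -> 48x48
```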
- class beacon_aug.operators.Cutout(library=None, *args, **kwargs)¶
Bases:
object
CoarseDropout of the square regions in the image.
- Args:
- library (str): flag for library. Should be one of:
albumentations
,imgaug
,torchvision
. Default:
albumentations
.
num_holes (int): number of regions to zero out
max_h_size (int): maximum height of the hole
max_w_size (int): maximum width of the hole
fill_value (int, float, list of int, list of float): value for dropped pixels.
if library =
imgaug
: (see:Cutout
) [Source]- position( {‘uniform’, ‘normal’, ‘center’, ‘left-top’, ‘left-center’, ‘left-bottom’, ‘center-top’, ‘center-center’, ‘center-bottom’, ‘right-top’, ‘right-center’, ‘right-bottom’} or tuple of float or StochasticParameter or tuple of StochasticParameter, optional):
Defines the position of each area to fill. Analogous to the definition in e.g.
CropToFixedSize
. Usually,uniform
(anywhere in the image) ornormal
(anywhere in the image with preference around the center) are sane values.- size( number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional):
The size of the rectangle to fill as a fraction of the corresponding image size, i.e. with value range
[0.0, 1.0]
. The size is sampled independently per image axis.- squared( bool or float or imgaug.parameters.StochasticParameter, optional):
Whether to generate only squared cutout areas or allow rectangular ones. If this evaluates to a true-like value, the first value from size will be converted to absolute pixels and used for both axes.
- fill_mode( str or list of str or imgaug.parameters.StochasticParameter, optional):
Mode to use in order to fill areas. Corresponds to
mode
parameter in some other augmenters. Valid strings for the mode are:- cval( number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional):
The value to use (i.e. the color) to fill areas if fill_mode is
constant
.- fill_per_channel( bool or float or imgaug.parameters.StochasticParameter, optional):
Whether to fill each area in a channelwise fashion (
True
) or not (False
). The behaviour per fill mode is:
- library (str): flag for library. Should be one of:
if library =
torchvision
: (see:RandomErasing
) [Source]p: probability that the random erasing operation will be performed. scale: range of proportion of erased area against input image. ratio: range of aspect ratio of erased area. value: erasing value. Default is 0. If a single int, it is used to
erase all pixels. If a tuple of length 3, it is used to erase R, G, B channels respectively. If a str of ‘random’, erasing each pixel with random values.
inplace: boolean to make this transform inplace. Default set to False.
- Targets:
image
- Image types:
uint8, float32
Reference: | https://arxiv.org/abs/1708.04552 | https://github.com/uoguelph-mlrg/Cutout/blob/master/util/cutout.py | https://github.com/aleju/imgaug/blob/master/imgaug/augmenters/arithmetic.py
e.g.
import beacon_aug as BA
aug = BA.Cutout(p=1, num_holes=8, library="albumentations")
image_auged = aug(image=image)["image"]
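The num_holes/max_h_size/max_w_size parameters above can be demonstrated with a small NumPy sketch (illustrative helper; the real operator clips hole sizes and samples them per hole):

```python
import numpy as np

def cutout(image, num_holes=8, max_h_size=8, max_w_size=8,
           fill_value=0, rng=None):
    """Fill `num_holes` randomly placed rectangles with fill_value."""
    rng = rng or np.random.default_rng(0)
    out = image.copy()
    h, w = out.shape[:2]
    for _ in range(num_holes):
        y = int(rng.integers(0, h))
        x = int(rng.integers(0, w))
        y1, y2 = max(0, y - max_h_size // 2), min(h, y + max_h_size // 2)
        x1, x2 = max(0, x - max_w_size // 2), min(w, x + max_w_size // 2)
        out[y1:y2, x1:x2] = fill_value
    return out

img = np.full((32, 32, 3), 255, dtype=np.uint8)
out = cutout(img, num_holes=4)  # white image with 4 black holes
```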
- class beacon_aug.operators.DataFrameIterator(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
keras
. Default: keras.- Args:
- Args:
if library =
keras
: (see:DataFrameIterator
) [Source]- dataframe: Pandas dataframe containing the filepaths relative to
directory (or absolute paths if directory is None) of the images in a string column. It should include other column/s depending on the class_mode: - if class_mode is “categorical” (default value) it must
include the y_col column with the class/es of each image. Values in column can be string/list/tuple if a single class or list/tuple if multiple classes.
- if class_mode is “binary” or “sparse” it must include
the given y_col column with class values as strings.
- if class_mode is “raw” or “multi_output” it should contain
the columns specified in y_col.
if class_mode is “input” or None no extra column is needed.
- directory: string, path to the directory to read images from. If None,
data in x_col column should be absolute paths.
- image_data_generator: Instance of ImageDataGenerator to use for
random transformations and normalization. If None, no transformations and normalizations are made.
- x_col: string, column in dataframe that contains the filenames (or
absolute paths if directory is None).
y_col: string or list, column/s in dataframe that has the target data. weight_col: string, column in dataframe that contains the sample
weights. Default: None.
target_size: tuple of integers, dimensions to resize input images to. color_mode: One of “rgb”, “rgba”, “grayscale”.
Color mode to read images.
- classes: Optional list of strings, classes to use (e.g. [“dogs”, “cats”]).
If None, all classes in y_col will be used.
- class_mode: one of “binary”, “categorical”, “input”, “multi_output”,
“raw”, “sparse” or None. Default: “categorical”. Mode for yielding the targets: - “binary”: 1D numpy array of binary labels, - “categorical”: 2D numpy array of one-hot encoded labels.
Supports multi-label output.
- “input”: images identical to input images (mainly used to
work with autoencoders),
“multi_output”: list with the values of the different columns,
“raw”: numpy array of values in y_col column(s),
“sparse”: 1D numpy array of integer labels,
- None, no targets are returned (the generator will only yield
batches of image data, which is useful to use in model.predict_generator()).
batch_size: Integer, size of a batch. shuffle: Boolean, whether to shuffle the data between epochs. seed: Random seed for data shuffling. data_format: String, one of channels_first, channels_last. save_to_dir: Optional directory where to save the pictures
being yielded, in a viewable format. This is useful for visualizing the random transformations being applied, for debugging purposes.
- save_prefix: String prefix to use for saving sample
images (if save_to_dir is set).
- save_format: Format to use for saving sample images
(if save_to_dir is set).
- subset: Subset of data (“training” or “validation”) if
validation_split is set in ImageDataGenerator.
- interpolation: Interpolation method used to resample the image if the
target size is different from that of the loaded image. Supported methods are “nearest”, “bilinear”, and “bicubic”. If PIL version 1.1.3 or newer is installed, “lanczos” is also supported. If PIL version 3.4.0 or newer is installed, “box” and “hamming” are also supported. By default, “nearest” is used.
dtype: Dtype to use for the generated arrays. validate_filenames: Boolean, whether to validate image filenames in x_col. If True, invalid images will be ignored. Disabling this option can lead to speed-up in the instantiation of this class. Default: True.
e.g.
import beacon_aug as BA
image_auged = aug(image=image)["image"]
- library (str): flag for library. Should be one of:
- class beacon_aug.operators.DefocusBlur(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imagenet_c
. Default: imagenet_c.- Args:
- Args:
if library =
imagenet_c
: (see:imagenet_c
)
e.g.
import beacon_aug as BA
aug = BA.DefocusBlur(p=1, corruption_name="defocus_blur", severity=1, library="imagenet_c")
image_auged = aug(image=image)["image"]
- library (str): flag for library. Should be one of:
- class beacon_aug.operators.DirectedEdgeDetect(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
. Default: imgaug.- Args:
- Args:
if library =
imgaug
: (see:DirectedEdgeDetect
) [Source]- alpha( number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional):
Blending factor of the edge image. At
0.0
, only the original image is visible, at1.0
only the edge image is visible.- direction( number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional):
Angle (in degrees) of edges to pronounce, where
0
represents0
degrees and1.0
represents 360 degrees (both clockwise, starting at the top). Default value is(0.0, 1.0)
, i.e. pick a random angle per image.
e.g.
import beacon_aug as BA
aug = BA.DirectedEdgeDetect(p=1, alpha=1.0, library="imgaug")
image_auged = aug(image=image)["image"]
- library (str): flag for library. Should be one of:
- class beacon_aug.operators.DirectoryIterator(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
keras
. Default: keras.- Args:
- Args:
if library =
keras
: (see:DirectoryIterator
) [Source]- directory: string, path to the directory to read images from.
Each subdirectory in this directory will be considered to contain images from one class, or alternatively you could specify class subdirectories via the classes argument.
- image_data_generator: Instance of ImageDataGenerator
to use for random transformations and normalization.
target_size: tuple of integers, dimensions to resize input images to. color_mode: One of “rgb”, “rgba”, “grayscale”.
Color mode to read images.
- classes: Optional list of strings, names of subdirectories
containing images from each class (e.g. [“dogs”, “cats”]). It will be computed automatically if not set.
- class_mode: Mode for yielding the targets:
“binary”: binary targets (if there are only two classes), “categorical”: categorical targets, “sparse”: integer targets, “input”: targets are images identical to input images (mainly
used to work with autoencoders),
None: no targets get yielded (only input images are yielded).
batch_size: Integer, size of a batch. shuffle: Boolean, whether to shuffle the data between epochs.
If set to False, sorts the data in alphanumeric order.
seed: Random seed for data shuffling. data_format: String, one of channels_first, channels_last. save_to_dir: Optional directory where to save the pictures
being yielded, in a viewable format. This is useful for visualizing the random transformations being applied, for debugging purposes.
- save_prefix: String prefix to use for saving sample
images (if save_to_dir is set).
- save_format: Format to use for saving sample images
(if save_to_dir is set).
follow_links: boolean,follow symbolic links to subdirectories subset: Subset of data (“training” or “validation”) if
validation_split is set in ImageDataGenerator.
- interpolation: Interpolation method used to resample the image if the
target size is different from that of the loaded image. Supported methods are “nearest”, “bilinear”, and “bicubic”. If PIL version 1.1.3 or newer is installed, “lanczos” is also supported. If PIL version 3.4.0 or newer is installed, “box” and “hamming” are also supported. By default, “nearest” is used.
dtype: Dtype to use for generated arrays.
e.g.
import beacon_aug as BA
image_auged = aug(image=image)["image"]
- library (str): flag for library. Should be one of:
- class beacon_aug.operators.Downscale(library=None, *args, **kwargs)¶
Bases:
object
Decreases image quality by downscaling and upscaling back.
- Args:
- library (str): flag for library. Should be one of:
albumentations
. Default:
albumentations
.
scale_min (float): lower bound on the image scale. Should be < 1.
scale_max (float): upper bound on the image scale. Should be < 1.
interpolation: cv2 interpolation method. cv2.INTER_NEAREST by default
Targets: image
- library (str): flag for library. Should be one of:
- Image types:
uint8, float32
e.g.
import beacon_aug as BA
aug = BA.Downscale(p=1, scale_min=0.25, scale_max=0.25, library="albumentations")
image_auged = aug(image=image)["image"]
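The downscale-then-upscale degradation can be sketched with nearest-neighbour sampling in plain NumPy (illustrative; the real operator uses cv2 interpolation flags, and this sketch assumes dimensions divisible by the step):

```python
import numpy as np

def downscale(image, scale=0.25):
    """Degrade quality: nearest-neighbour downscale, then upscale back."""
    h, w = image.shape[:2]
    step = max(1, int(round(1 / scale)))
    small = image[::step, ::step]
    # upscale back by pixel repetition (nearest-neighbour)
    up = np.repeat(np.repeat(small, step, axis=0), step, axis=1)
    return up[:h, :w]

img = np.arange(64 * 64 * 3, dtype=np.uint8).reshape(64, 64, 3)
out = downscale(img, scale=0.25)  # same shape, blocky 4x4 texture
```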
- class beacon_aug.operators.Dropout(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
. Default: imgaug.- Args:
- Args:
if library =
imgaug
: (see:Dropout
) [Source]- per_channel( bool or float or imgaug.parameters.StochasticParameter, optional):
Whether to use (imagewise) the same sample(s) for all channels (
False
) or to sample value(s) for each channel (True
). Setting this toTrue
will therefore lead to different transformations per image and channel, otherwise only per image. If this value is a floatp
, then forp
percent of all images per_channel will be treated asTrue
. If it is aStochasticParameter
it is expected to produce samples with values between0.0
and1.0
, where values>0.5
will lead to per-channel behaviour (i.e. same asTrue
).
e.g.
import beacon_aug as BA
aug = BA.Dropout(p=1, library="imgaug")
image_auged = aug(image=image)["image"]
- library (str): flag for library. Should be one of:
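The per_channel flag described above decides whether one dropout mask is shared across channels or sampled per channel. A NumPy sketch (illustrative helper, not the imgaug implementation):

```python
import numpy as np

def pixel_dropout(image, p=0.1, per_channel=False, rng=None):
    """Set roughly a fraction `p` of pixels to zero.

    per_channel=False: one mask shared by all channels.
    per_channel=True: an independent mask per channel.
    """
    rng = rng or np.random.default_rng(0)
    if per_channel:
        mask = rng.random(image.shape) >= p
    else:
        mask = rng.random(image.shape[:2]) >= p
        mask = mask[..., None]          # broadcast over channels
    return image * mask.astype(image.dtype)

img = np.full((64, 64, 3), 200, dtype=np.uint8)
out = pixel_dropout(img, p=0.2)
```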
- class beacon_aug.operators.Dropout2d(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
. Default: imgaug.- Args:
- Args:
if library =
imgaug
: (see:Dropout2d
) [Source]- nb_keep_channels( int):
Minimum number of channels to keep unaltered in all images. E.g. a value of
1
means that at least one channel in every image will not be dropped, even ifp=1.0
. Set to0
to allow dropping all channels.
e.g.
import beacon_aug as BA
aug = BA.Dropout2d(p=1, library="imgaug")
image_auged = aug(image=image)["image"]
- library (str): flag for library. Should be one of:
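The nb_keep_channels guarantee above ("at least one channel will not be dropped, even if p=1.0") can be sketched as follows (illustrative helper):

```python
import numpy as np

def channel_dropout(image, p=0.5, nb_keep_channels=1, rng=None):
    """Drop whole channels with probability p, but always keep at
    least `nb_keep_channels` channels unaltered."""
    rng = rng or np.random.default_rng(0)
    c = image.shape[-1]
    drop = rng.random(c) < p
    # un-drop random channels until enough survivors remain
    while c - drop.sum() < nb_keep_channels:
        drop[int(rng.integers(0, c))] = False
    out = image.copy()
    out[..., drop] = 0
    return out

img = np.full((8, 8, 3), 7, dtype=np.uint8)
out = channel_dropout(img, p=1.0)  # exactly one channel survives
```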
- class beacon_aug.operators.DropoutPointsSampler(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
. Default: imgaug.- Args:
- Args:
if library =
imgaug
: (see:DropoutPointsSampler
) [Source]- other_points_sampler( IPointsSampler):
Another point sampler that is queried to generate a list of points. The dropout operation will be applied to that list.
- p_drop( number or tuple of number or imgaug.parameters.StochasticParameter):
The probability that a coordinate will be removed from the list of all sampled coordinates. A value of
1.0
would mean that (on average)100
percent of all coordinates will be dropped, while0.0
denotes0
percent. Note that this sampler will always ensure that at least one coordinate is left after the dropout operation, i.e. even1.0
will only drop all except one coordinate.
e.g.
import beacon_aug as BA
image_auged = aug(image=image)["image"]
- library (str): flag for library. Should be one of:
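The sampler's guarantee that "even 1.0 will only drop all except one coordinate" is easy to sketch (illustrative helper, not the imgaug implementation):

```python
import random

def dropout_points(points, p_drop, rng=None):
    """Remove each coordinate with probability p_drop, but always
    keep at least one point."""
    rng = rng or random.Random(0)
    kept = [pt for pt in points if rng.random() >= p_drop]
    if not kept:                      # even p_drop=1.0 leaves one point
        kept = [rng.choice(points)]
    return kept

pts = [(0, 0), (1, 2), (3, 4), (5, 6)]
assert len(dropout_points(pts, p_drop=1.0)) == 1
assert dropout_points(pts, p_drop=0.0) == pts
```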
- class beacon_aug.operators.EdgeDetect(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
. Default: imgaug.- Args:
- Args:
if library =
imgaug
: (see:EdgeDetect
) [Source]- alpha( number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional):
Blending factor of the edge image. At
0.0
, only the original image is visible, at1.0
only the edge image is visible.
e.g.
import beacon_aug as BA
aug = BA.EdgeDetect(p=1, alpha=1.0, library="imgaug")
image_auged = aug(image=image)["image"]
- library (str): flag for library. Should be one of:
- class beacon_aug.operators.ElasticTransform(library=None, *args, **kwargs)¶
Bases:
object
Elastic deformation of images as described in [Simard2003] (with modifications). Based on https://gist.github.com/erniejunior/601cdf56d2b424757de5
- Simard2003
Simard, Steinkraus and Platt, “Best Practices for Convolutional Neural Networks applied to Visual Document Analysis”, in Proc. of the International Conference on Document Analysis and Recognition, 2003.
- Args:
- library (str): flag for library. Should be one of:
albumentations
,imgaug
,imagenet_c
. Default:
albumentations
.
alpha (float):
sigma (float): Gaussian filter parameter.
alpha_affine (float): The range will be (-alpha_affine, alpha_affine)
interpolation (OpenCV flag): flag that is used to specify the interpolation algorithm. Should be one of:
cv2.INTER_NEAREST, cv2.INTER_LINEAR, cv2.INTER_CUBIC, cv2.INTER_AREA, cv2.INTER_LANCZOS4. Default: cv2.INTER_LINEAR.
- border_mode (OpenCV flag): flag that is used to specify the pixel extrapolation method. Should be one of:
cv2.BORDER_CONSTANT, cv2.BORDER_REPLICATE, cv2.BORDER_REFLECT, cv2.BORDER_WRAP, cv2.BORDER_REFLECT_101. Default: cv2.BORDER_REFLECT_101
value (int, float, list of ints, list of float): padding value if border_mode is cv2.BORDER_CONSTANT. mask_value (int, float,
list of ints, list of float): padding value if border_mode is cv2.BORDER_CONSTANT applied for masks.
- approximate (boolean): Whether to smooth displacement map with fixed kernel size.
Enabling this option gives ~2X speedup on large images.
if library =
imgaug
: (see:ElasticTransformation
) [Source]- alpha( number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional):
Strength of the distortion field. Higher values mean that pixels are moved further with respect to the distortion field’s direction. Set this to around 10 times the value of sigma for visible effects.
- sigma( number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional):
Standard deviation of the gaussian kernel used to smooth the distortion fields. Higher values (for
128x128
images around 5.0) lead to more water-like effects, while lower values (for128x128
images around1.0
and lower) lead to more noisy, pixelated images. Set this to around 1/10th of alpha for visible effects.- order( int or list of int or imaug.ALL or imgaug.parameters.StochasticParameter, optional):
Interpolation order to use. Same meaning as in
scipy.ndimage.map_coordinates()
and may take any integer value in the range0
to5
, where orders close to0
are faster.- cval( number or tuple of number or list of number or imgaug.ALL or imgaug.parameters.StochasticParameter, optional):
The constant intensity value used to fill in new pixels. This value is only used if mode is set to
constant
. For standarduint8
images (value range0
to255
), this value should also be in the range0
to255
. It may be afloat
value, even for images with integer dtypes.- mode( str or list of str or imgaug.ALL or imgaug.parameters.StochasticParameter, optional):
Parameter that defines the handling of newly created pixels. May take the same values as in
scipy.ndimage.map_coordinates()
, i.e.constant
,nearest
,reflect
orwrap
.- polygon_recoverer( ‘auto’ or None or imgaug.augmentables.polygons._ConcavePolygonRecoverer, optional):
The class to use to repair invalid polygons. If
"auto"
, a new instance of imgaug.augmentables.polygons._ConcavePolygonRecoverer will be created. IfNone
, no polygon recoverer will be used. If an object, then that object will be used and must provide arecover_from()
method, similar to_ConcavePolygonRecoverer
.
- library (str): flag for library. Should be one of:
if library =
imagenet_c
: (see:imagenet_c
) Targets:image, mask
- Image types:
uint8, float32
e.g.
import beacon_aug as BA
aug = BA.ElasticTransform(p=1, corruption_name="elastic_transform", severity=1, library="imagenet_c")
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.Emboss(library=None, *args, **kwargs)¶
Bases:
object
Emboss the input image and overlays the result with the original image.
- Args:
- library (str): flag for library. Should be one of:
albumentations
,imgaug
. Default:
albumentations
.- alpha ((float, float)): range to choose the visibility of the embossed image. At 0, only the original image is
visible,at 1.0 only its embossed version is visible. Default: (0.2, 0.5).
strength ((float, float)): strength range of the embossing. Default: (0.2, 0.7).
p (float): probability of applying the transform. Default: 0.5.
if library =
imgaug
: (see:Emboss
) [Source]- alpha( number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional):
Blending factor of the embossed image. At
0.0
, only the original image is visible, at1.0
only its embossed version is visible.- strength( number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional):
Parameter that controls the strength of the embossing. Sane values are somewhere in the interval
[0.0, 2.0]
with1.0
being the standard embossing effect. Default value is1.0
.
- library (str): flag for library. Should be one of:
- Targets:
image
e.g.
import beacon_aug as BA
aug = BA.Emboss(p=1, library="albumentations")
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.EncodingQuality(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
augly
. Default: augly.- Args:
- Args:
if library =
augly
: (see:EncodingQuality
)
e.g.
import beacon_aug as BA
aug = BA.EncodingQuality(p=1, library="augly")
image_auged = aug(image=image)["image"]
- library (str): flag for library. Should be one of:
- class beacon_aug.operators.EnhanceBrightness(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
. Default: imgaug.- Args:
- Args:
if library =
imgaug
: (see:EnhanceBrightness
) [Source]- factor( number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional):
Brightness of the image. Values below
1.0
decrease the brightness, leading to a black image around0.0
. Values above1.0
increase the brightness. Sane values are roughly in[0.5, 1.5]
.
e.g.
import beacon_aug as BA
aug = BA.EnhanceBrightness(p=1, library="imgaug")
image_auged = aug(image=image)["image"]
- library (str): flag for library. Should be one of:
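The factor semantics above (below 1.0 darkens toward black, above 1.0 brightens) follow the PIL-style blend with a black image. A NumPy sketch for uint8 inputs (illustrative helper):

```python
import numpy as np

def enhance_brightness(image, factor):
    """PIL-style brightness: 0.0 -> black, 1.0 -> original, >1.0 -> brighter."""
    out = image.astype(np.float32) * factor
    return np.clip(out, 0, 255).astype(np.uint8)

img = np.full((4, 4, 3), 100, dtype=np.uint8)
assert enhance_brightness(img, 0.0).max() == 0
assert (enhance_brightness(img, 1.0) == img).all()
assert enhance_brightness(img, 1.5).max() == 150
```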
- class beacon_aug.operators.EnhanceColor(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
. Default: imgaug.- Args:
- Args:
if library =
imgaug
: (see:EnhanceColor
) [Source]- factor( number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional):
Colorfulness of the output image. Values close to
0.0
lead to grayscale images, values above1.0
increase the strength of colors. Sane values are roughly in[0.0, 3.0]
.
e.g.
import beacon_aug as BA
aug = BA.EnhanceColor(p=1, library="imgaug")
image_auged = aug(image=image)["image"]
- library (str): flag for library. Should be one of:
- class beacon_aug.operators.EnhanceContrast(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
. Default: imgaug.- Args:
- Args:
if library =
imgaug
: (see:EnhanceContrast
) [Source]- factor( number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional):
Strength of contrast in the image. Values below
1.0
decrease the contrast, leading to a gray image around0.0
. Values above1.0
increase the contrast. Sane values are roughly in[0.5, 1.5]
.
e.g.
import beacon_aug as BA
aug = BA.EnhanceContrast(p=1, library="imgaug")
image_auged = aug(image=image)["image"]
- library (str): flag for library. Should be one of:
- class beacon_aug.operators.EnhanceSharpness(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
. Default: imgaug.- Args:
- Args:
if library =
imgaug
: (see:EnhanceSharpness
) [Source]- factor( number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional):
Sharpness of the image. Values below
1.0
decrease the sharpness, values above1.0
increase it. Sane values are roughly in[0.0, 2.0]
.
e.g.
import beacon_aug as BA
aug = BA.EnhanceSharpness(p=1, library="imgaug")
image_auged = aug(image=image)["image"]
- library (str): flag for library. Should be one of:
- class beacon_aug.operators.Enum(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
torchvision
. Default: torchvision.- Args:
- Args:
if library =
torchvision
: (see:Enum
) [Source]Generic enumeration.
Derive from this class to define new enumerations.
e.g.
import beacon_aug as BA
image_auged = aug(image=image)["image"]
- library (str): flag for library. Should be one of:
- class beacon_aug.operators.Equalize(library=None, *args, **kwargs)¶
Bases:
object
Equalize the image histogram.
- Args:
- library (str): flag for library. Should be one of:
albumentations
,imgaug
,torchvision
. Default:
albumentations
.
mode (str): {‘cv’, ‘pil’}. Use OpenCV or Pillow equalization method.
by_channels (bool): If True, use equalization by channels separately,
else convert image to YCbCr representation and use equalization by Y channel.
- mask (np.ndarray, callable): If given, only the pixels selected by
the mask are included in the analysis. May be a 1-channel or 3-channel array or callable. Function signature must include image argument.
mask_params (list of str): Params for mask function.
if library =
imgaug
: (see:Equalize
) [Source]
if library =
torchvision
: (see:RandomEqualize
) [Source]p (float): probability of the image being equalized. Default value is 0.5
- Targets:
image
- Image types:
uint8
e.g.
import beacon_aug as BA
aug = BA.Equalize(p=1)
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.FancyPCA(library=None, *args, **kwargs)¶
Bases:
object
Augment RGB image using FancyPCA from Krizhevsky’s paper “ImageNet Classification with Deep Convolutional Neural Networks”
- Args:
- library (str): flag for library. Should be one of:
albumentations
. Default:
albumentations
.- alpha (float): how much to perturb/scale the eigen vecs and vals.
scale is sampled from a Gaussian distribution (mu=0, sigma=alpha)
Targets: image
- Image types:
3-channel uint8 images only
- Credit:
http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf https://deshanadesai.github.io/notes/Fancy-PCA-with-Scikit-Image https://pixelatedbrian.github.io/2018-04-29-fancy_pca/
e.g.
import beacon_aug as BA
aug = BA.FancyPCA(p=1)
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.FastSnowyLandscape(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
. Default: imgaug.
- Args:
if library =
imgaug
: (see:FastSnowyLandscape
) [Source]- lightness_threshold( number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional):
All pixels with lightness in HLS colorspace that is below this value will have their lightness increased by lightness_multiplier.
- lightness_multiplier( number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional):
Multiplier for pixel’s lightness value in HLS colorspace. Affects all pixels selected via lightness_threshold.
e.g.
import beacon_aug as BA
aug = BA.FastSnowyLandscape(p=1)
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.FilterBlur(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
. Default: imgaug.
- Args:
if library =
imgaug
: (see:FilterBlur
) [Source]
e.g.
import beacon_aug as BA
aug = BA.FilterBlur(p=1)
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.FilterContour(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
. Default: imgaug.
- Args:
if library =
imgaug
: (see:FilterContour
) [Source]
e.g.
import beacon_aug as BA
aug = BA.FilterContour(p=1)
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.FilterDetail(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
. Default: imgaug.
- Args:
if library =
imgaug
: (see:FilterDetail
) [Source]
e.g.
import beacon_aug as BA
aug = BA.FilterDetail(p=1)
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.FilterEdgeEnhance(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
. Default: imgaug.
- Args:
if library =
imgaug
: (see:FilterEdgeEnhance
) [Source]
e.g.
import beacon_aug as BA
aug = BA.FilterEdgeEnhance(p=1)
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.FilterEdgeEnhanceMore(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
. Default: imgaug.
- Args:
if library =
imgaug
: (see:FilterEdgeEnhanceMore
) [Source]
e.g.
import beacon_aug as BA
aug = BA.FilterEdgeEnhanceMore(p=1)
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.FilterEmboss(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
. Default: imgaug.
- Args:
if library =
imgaug
: (see:FilterEmboss
) [Source]
e.g.
import beacon_aug as BA
aug = BA.FilterEmboss(p=1)
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.FilterFindEdges(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
. Default: imgaug.
- Args:
if library =
imgaug
: (see:FilterFindEdges
) [Source]
e.g.
import beacon_aug as BA
aug = BA.FilterFindEdges(p=1)
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.FilterSharpen(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
. Default: imgaug.
- Args:
if library =
imgaug
: (see:FilterSharpen
) [Source]
e.g.
import beacon_aug as BA
aug = BA.FilterSharpen(p=1)
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.FilterSmooth(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
. Default: imgaug.
- Args:
if library =
imgaug
: (see:FilterSmooth
) [Source]
e.g.
import beacon_aug as BA
aug = BA.FilterSmooth(p=1)
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.FilterSmoothMore(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
. Default: imgaug.
- Args:
if library =
imgaug
: (see:FilterSmoothMore
) [Source]
e.g.
import beacon_aug as BA
aug = BA.FilterSmoothMore(p=1)
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.FiveCrop(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
torchvision
. Default: torchvision.
- Args:
if library =
torchvision
: (see:FiveCrop
) [Source]- size (sequence or int): Desired output size of the crop. If size is an
int
instead of sequence like (h, w), a square crop of size (size, size) is made. If provided a sequence of length 1, it will be interpreted as (size[0], size[0]).
- size (sequence or int): Desired output size of the crop. If size is an
e.g.
import beacon_aug as BA
aug = BA.FiveCrop(p=1, size=128)
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.Fog(library=None, *args, **kwargs)¶
Bases:
object
Simulates fog for the image
From https://github.com/UjjwalSaxena/Automold--Road-Augmentation-Library
- Args:
- library (str): flag for library. Should be one of:
albumentations
,imgaug
,imagenet_c
. Default:
albumentations
.
fog_coef_lower (float): lower limit for fog intensity coefficient. Should be in [0, 1] range. fog_coef_upper (float): upper limit for fog intensity coefficient. Should be in [0, 1] range. alpha_coef (float): transparency of the fog circles. Should be in [0, 1] range.
if library =
imgaug
: (see:Fog
) [Source]
if library =
imagenet_c
: (see:imagenet_c
) Targets:image
- Image types:
uint8, float32
e.g.
import beacon_aug as BA
aug = BA.Fog(p=1, corruption_name="fog", severity=1, library="imagenet_c")
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.FromFloat(library=None, *args, **kwargs)¶
Bases:
object
Take an input array where all values should lie in the range [0, 1.0], multiply them by max_value and then cast the resulting value to a type specified by dtype. If max_value is None the transform will try to infer the maximum value for the data type from the dtype argument.
This is the inverse transform for
ToFloat
.- Args:
- library (str): flag for library. Should be one of:
albumentations
. Default:
albumentations
.
max_value (float): maximum possible input value. Default: None. dtype (string or numpy data type): data type of the output. See the ‘Data types’ page from the NumPy docs.
Default: ‘uint16’.
p (float): probability of applying the transform. Default: 1.0.
Targets: image
- Image types:
float32
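The transform's arithmetic is easy to sketch: multiply by max_value (inferring it from the dtype when None) and cast to an integer. The `from_float` helper below is a hypothetical pure-Python stand-in, not the albumentations implementation:

```python
def from_float(values, dtype="uint16", max_value=None):
    """Scale floats in [0, 1.0] by max_value and cast to integers.
    When max_value is None, infer it from the dtype's value range."""
    if max_value is None:
        # maxima for two common unsigned integer dtypes
        max_value = {"uint8": 255, "uint16": 65535}[dtype]
    return [int(v * max_value) for v in values]

print(from_float([0.0, 0.5, 1.0], dtype="uint8"))  # [0, 127, 255]
```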
e.g.
import beacon_aug as BA
aug = BA.FromFloat(p=1)
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.Frost(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imagenet_c
. Default: imagenet_c.
- Args:
if library =
imagenet_c
: (see:imagenet_c
)
e.g.
import beacon_aug as BA
aug = BA.Frost(p=1, corruption_name="frost", severity=1, library="imagenet_c")
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.GammaContrast(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
. Default: imgaug.
- Args:
if library =
imgaug
: (see:GammaContrast
) [Source]- gamma( number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional):
Exponent for the contrast adjustment. Higher values darken the image.
- per_channel( bool or float, optional):
Whether to use the same value for all channels (
False
) or to sample a new value for each channel (True
). If this value is a floatp
, then forp
percent of all images per_channel will be treated asTrue
, otherwise asFalse
.
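The standard gamma-adjustment formula behind this operator is I_out = 255 * (I / 255) ** gamma. A minimal pure-Python sketch (the `gamma_contrast` helper is hypothetical, for illustration only):

```python
def gamma_contrast(pixels, gamma):
    """Apply I_out = 255 * (I / 255) ** gamma to each pixel.
    gamma > 1 darkens mid-tones; gamma < 1 brightens them."""
    return [round(255 * (p / 255) ** gamma) for p in pixels]

print(gamma_contrast([0, 128, 255], 2.0))  # mid-tone 128 drops to 64
```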
e.g.
import beacon_aug as BA
aug = BA.GammaContrast(p=1)
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.GaussNoise(library=None, *args, **kwargs)¶
Bases:
object
Apply gaussian noise to the input image.
- Args:
- library (str): flag for library. Should be one of:
albumentations
. Default:
albumentations
.- var_limit ((float, float) or float): variance range for noise. If var_limit is a single float, the range
will be (0, var_limit). Default: (10.0, 50.0).
mean (float): mean of the noise. Default: 0 per_channel (bool): if set to True, noise will be sampled for each channel independently.
Otherwise, the noise will be sampled once for all channels. Default: True
p (float): probability of applying the transform. Default: 0.5.
Targets: image
- Image types:
uint8, float32
e.g.
import beacon_aug as BA
aug = BA.GaussNoise(p=1)
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.GaussianBlur(library=None, *args, **kwargs)¶
Bases:
object
Blur the input image using a Gaussian filter with a random kernel size.
- Args:
- library (str): flag for library. Should be one of:
albumentations
,imgaug
,torchvision
,imagenet_c
. Default:
albumentations
.- blur_limit (int, (int, int)): maximum Gaussian kernel size for blurring the input image.
Must be zero or odd and in range [0, inf). If set to 0 it will be computed from sigma as round(sigma * (3 if img.dtype == np.uint8 else 4) * 2 + 1) + 1. If set single value blur_limit will be in range (0, blur_limit). Default: (3, 7).
- sigma_limit (float, (float, float)): Gaussian kernel standard deviation. Must be greater in range [0, inf).
If set single value sigma_limit will be in range (0, sigma_limit). If set to 0 sigma will be computed as sigma = 0.3*((ksize-1)*0.5 - 1) + 0.8. Default: 0.
p (float): probability of applying the transform. Default: 0.5.
if library =
imgaug
: (see:GaussianBlur
) [Source]- sigma( number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional):
Standard deviation of the gaussian kernel. Values in the range
0.0
(no blur) to3.0
(strong blur) are common.
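What sigma controls can be illustrated by building the discrete 1-D Gaussian kernel that the blur convolves with. This is a conceptual sketch with a hypothetical `gaussian_kernel_1d` helper, not the imgaug or OpenCV implementation:

```python
import math

def gaussian_kernel_1d(sigma, radius=None):
    """Discrete 1-D Gaussian kernel normalized to sum to 1.
    Larger sigma spreads weight to neighboring taps, i.e. stronger blur."""
    if radius is None:
        radius = max(1, int(3 * sigma))  # cover roughly 3 standard deviations
    weights = [math.exp(-(x * x) / (2.0 * sigma * sigma))
               for x in range(-radius, radius + 1)]
    total = sum(weights)
    return [w / total for w in weights]

kernel = gaussian_kernel_1d(1.0)
print(len(kernel))  # 7 taps for radius 3
```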
if library =
torchvision
: (see:GaussianBlur
) [Source]kernel_size (int or sequence): Size of the Gaussian kernel. sigma (float or tuple of float (min, max)): Standard deviation to be used for
creating kernel to perform blurring. If float, sigma is fixed. If it is tuple of float (min, max), sigma is chosen uniformly at random to lie in the given range.
if library =
imagenet_c
: (see:imagenet_c
) Targets:image
- Image types:
uint8, float32
e.g.
import beacon_aug as BA
aug = BA.GaussianBlur(p=1, kernel_size=3, library="torchvision")
aug = BA.GaussianBlur(p=1, corruption_name="gaussian_blur", severity=1, library="imagenet_c")
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.GaussianNoise(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
,imagenet_c
. Default: imgaug.
- Args:
if library =
imgaug
: (see:AdditiveGaussianNoise
) [Source]- loc( number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional):
Mean of the normal distribution from which the noise is sampled.
- scale( number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional):
Standard deviation of the normal distribution that generates the noise. Must be
>=0
. If0
then loc will simply be added to all pixels.- per_channel( bool or float or imgaug.parameters.StochasticParameter, optional):
Whether to use (imagewise) the same sample(s) for all channels (
False
) or to sample value(s) for each channel (True
). Setting this toTrue
will therefore lead to different transformations per image and channel, otherwise only per image. If this value is a floatp
, then forp
percent of all images per_channel will be treated asTrue
. If it is aStochasticParameter
it is expected to produce samples with values between0.0
and1.0
, where values>0.5
will lead to per-channel behaviour (i.e. same asTrue
).
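The loc/scale semantics reduce to drawing one sample from N(loc, scale) per pixel and adding it. A minimal sketch using the standard library (the `add_gaussian_noise` helper is hypothetical, for illustration only):

```python
import random

def add_gaussian_noise(pixels, loc=0.0, scale=10.0, rng=None):
    """Add one sample from N(loc, scale) to every pixel, clipping to
    [0, 255]. With scale=0 this degenerates to a constant shift of loc."""
    rng = rng or random.Random(0)  # seeded here only for reproducibility
    return [min(255.0, max(0.0, p + rng.gauss(loc, scale))) for p in pixels]

print(add_gaussian_noise([10, 20, 30], loc=5, scale=0))  # [15.0, 25.0, 35.0]
```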
if library =
imagenet_c
: (see:imagenet_c
)e.g.
import beacon_aug as BA
aug = BA.GaussianNoise(p=1, corruption_name="gaussian_noise", severity=1, library="imagenet_c")
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.GlassBlur(library=None, *args, **kwargs)¶
Bases:
object
Apply glass noise to the input image.
- Args:
- library (str): flag for library. Should be one of:
albumentations
,imagenet_c
. Default:
albumentations
.
sigma (float): standard deviation for Gaussian kernel. max_delta (int): max distance between pixels which are swapped. iterations (int): number of repeats.
Should be in range [1, inf). Default: (2).
mode (str): mode of computation: fast or exact. Default: “fast”. p (float): probability of applying the transform. Default: 0.5.
if library =
imagenet_c
: (see:imagenet_c
)
- Targets:
image
- Image types:
uint8, float32
Reference: | https://arxiv.org/abs/1903.12261 | https://github.com/hendrycks/robustness/blob/master/ImageNet-C/create_c/make_imagenet_c.py
e.g.
import beacon_aug as BA
aug = BA.GlassBlur(p=1, corruption_name="glass_blur", severity=1, library="imagenet_c")
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.Grayscale(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
,torchvision
,augly
. Default: imgaug.
- Args:
if library =
imgaug
: (see:Grayscale
) [Source]- alpha( number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional):
The alpha value of the grayscale image when overlaid over the old image. A value close to 1.0 means that mostly the new grayscale image is visible. A value close to 0.0 means that mostly the old image is visible.
if library =
torchvision
: (see:Grayscale
) [Source]num_output_channels (int): (1 or 3) number of channels desired for output image
if library =
augly
: (see:Grayscale
)e.g.
import beacon_aug as BA
aug = BA.Grayscale(p=1)
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.GridDistortion(library=None, *args, **kwargs)¶
Bases:
object
- Args:
- library (str): flag for library. Should be one of:
albumentations
. Default:
albumentations
.
num_steps (int): count of grid cells on each side. distort_limit (float, (float, float)): If distort_limit is a single float, the range
will be (-distort_limit, distort_limit). Default: (-0.03, 0.03).
- interpolation (OpenCV flag): flag that is used to specify the interpolation algorithm. Should be one of:
cv2.INTER_NEAREST, cv2.INTER_LINEAR, cv2.INTER_CUBIC, cv2.INTER_AREA, cv2.INTER_LANCZOS4. Default: cv2.INTER_LINEAR.
- border_mode (OpenCV flag): flag that is used to specify the pixel extrapolation method. Should be one of:
cv2.BORDER_CONSTANT, cv2.BORDER_REPLICATE, cv2.BORDER_REFLECT, cv2.BORDER_WRAP, cv2.BORDER_REFLECT_101. Default: cv2.BORDER_REFLECT_101
value (int, float, list of ints, list of float): padding value if border_mode is cv2.BORDER_CONSTANT. mask_value (int, float,
list of ints, list of float): padding value if border_mode is cv2.BORDER_CONSTANT applied for masks.
Targets: image, mask
- Image types:
uint8, float32
e.g.
import beacon_aug as BA
aug = BA.GridDistortion(p=1)
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.GridDropout(library=None, *args, **kwargs)¶
Bases:
object
GridDropout, drops out rectangular regions of an image and the corresponding mask in a grid fashion.
- Args:
- library (str): flag for library. Should be one of:
albumentations
. Default:
albumentations
.- ratio (float): the ratio of the mask holes to the unit_size (same for horizontal and vertical directions).
Must be between 0 and 1. Default: 0.5.
- unit_size_min (int): minimum size of the grid unit. Must be between 2 and the image shorter edge.
If ‘None’, holes_number_x and holes_number_y are used to setup the grid. Default: None.
- unit_size_max (int): maximum size of the grid unit. Must be between 2 and the image shorter edge.
If ‘None’, holes_number_x and holes_number_y are used to setup the grid. Default: None.
- holes_number_x (int): the number of grid units in x direction. Must be between 1 and image width//2.
If ‘None’, grid unit width is set as image_width//10. Default: None.
- holes_number_y (int): the number of grid units in y direction. Must be between 1 and image height//2.
If None, grid unit height is set equal to the grid unit width or image height, whatever is smaller.
- shift_x (int): offsets of the grid start in x direction from (0,0) coordinate.
Clipped between 0 and grid unit_width - hole_width. Default: 0.
- shift_y (int): offsets of the grid start in y direction from (0,0) coordinate.
Clipped between 0 and grid unit height - hole_height. Default: 0.
- random_offset (boolean): whether to offset the grid randomly between 0 and grid unit size - hole size
If ‘True’, entered shift_x, shift_y are ignored and set randomly. Default: False.
fill_value (int): value for the dropped pixels. Default = 0 mask_fill_value (int): value for the dropped pixels in mask.
If None, transformation is not applied to the mask. Default: None.
Targets: image, mask
- Image types:
uint8, float32
- References:
e.g.
import beacon_aug as BA
aug = BA.GridDropout(p=1)
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.HidePatch(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
custom
. Default: custom.
- Args:
if library =
custom
: (see:HidePatch
) [Source]- grid_size: The size of the grid to hide
None (default): randomly choose a grid size from [0, 16, 32, 44, 56]; int: a fixed grid size; tuple/list: randomly choose a grid size from the input
e.g.
import beacon_aug as BA
aug = BA.HidePatch(p=1, library="custom")
image_auged = aug(image=image)["image"]
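The grid-based hiding idea (from the Hide-and-Seek augmentation) can be sketched in plain Python: partition the image into cells of grid_size and zero each cell with some probability. The `hide_patches` helper is a hypothetical illustration, not the beacon_aug custom implementation:

```python
import random

def hide_patches(image, grid_size, p_hide=0.5, rng=None):
    """Hide-and-Seek style dropout: split the image into grid_size x
    grid_size cells and zero out each cell independently with p_hide."""
    rng = rng or random.Random(0)
    h, w = len(image), len(image[0])
    for y0 in range(0, h, grid_size):
        for x0 in range(0, w, grid_size):
            if rng.random() < p_hide:
                for y in range(y0, min(y0 + grid_size, h)):
                    for x in range(x0, min(x0 + grid_size, w)):
                        image[y][x] = 0
    return image

out = hide_patches([[255] * 4 for _ in range(4)], grid_size=2, p_hide=1.0)
print(out[0])  # [0, 0, 0, 0]: every cell hidden when p_hide=1.0
```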
- class beacon_aug.operators.HistogramEqualization(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
. Default: imgaug.
- Args:
if library =
imgaug
: (see:HistogramEqualization
) [Source]- to_colorspace( {“Lab”, “HLS”, “HSV”}, optional):
Colorspace in which to perform Histogram Equalization. For
Lab
, the equalization will only be applied to the first channel (L
), forHLS
to the second (L
) and forHSV
to the third (V
). To apply histogram equalization to all channels of an input image (without colorspace conversion), seeimgaug.augmenters.contrast.AllChannelsHistogramEqualization
.
e.g.
import beacon_aug as BA
aug = BA.HistogramEqualization(p=1)
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.HorizontalFlip(library=None, *args, **kwargs)¶
Bases:
object
Flip the input horizontally around the y-axis.
- Args:
- library (str): flag for library. Should be one of:
albumentations
,imgaug
,torchvision
,keras
,augly
. Default:
albumentations
.
p (float): probability of applying the transform. Default: 0.5.
if library =
imgaug
: (see:Fliplr
) [Source]
if library =
torchvision
: (see:RandomHorizontalFlip
) [Source]p (float): probability of the image being flipped. Default value is 0.5
if library =
keras
: (see:flip_axis
) [Source]if library =
augly
: (see:HFlip
) Targets:image, mask, bboxes, keypoints
- Image types:
uint8, float32
e.g.
import beacon_aug as BA
aug = BA.HorizontalFlip(p=1, axis=1, library="keras")
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.HorizontalLinearGradientMaskGen(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
. Default: imgaug.
- Args:
if library =
imgaug
: (see:HorizontalLinearGradientMaskGen
) [Source]- min_value( number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional):
Minimum value that the mask will have up to the start point of the linear gradient. Note that min_value is allowed to be larger than max_value, in which case the gradient will start at the (higher) min_value and decrease towards the (lower) max_value.
- max_value( number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional):
Maximum value that the mask will have at the end of the linear gradient.
- start_at( number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional):
Position on the x-axis where the linear gradient starts, given as a fraction of the axis size. Interval is
[0.0, 1.0]
, where0.0
is at the left of the image. Ifend_at < start_at
the gradient will be inverted.- end_at( number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional):
Position on the x-axis where the linear gradient ends, given as a fraction of the axis size. Interval is
[0.0, 1.0]
, where0.0
is at the right of the image.
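The min_value/max_value/start_at/end_at interaction can be sketched as a per-column ramp. The `horizontal_gradient_mask` helper below is a hypothetical illustration assuming end_at > start_at (imgaug also supports the inverted case), not the imgaug implementation:

```python
def horizontal_gradient_mask(width, min_value=0.0, max_value=1.0,
                             start_at=0.0, end_at=1.0):
    """Per-column mask: min_value left of the gradient start, linear
    interpolation inside [start_at, end_at), max_value to the right."""
    x0, x1 = start_at * width, end_at * width
    mask = []
    for x in range(width):
        if x <= x0:
            mask.append(min_value)
        elif x >= x1:
            mask.append(max_value)
        else:
            t = (x - x0) / (x1 - x0)
            mask.append(min_value + t * (max_value - min_value))
    return mask

print(horizontal_gradient_mask(5))  # ramps upward from 0.0 across the columns
```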
e.g.
import beacon_aug as BA
aug = BA.HorizontalLinearGradientMaskGen(p=1)
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.HueSaturationValue(library=None, *args, **kwargs)¶
Bases:
object
Randomly change hue, saturation and value of the input image.
- Args:
- library (str): flag for library. Should be one of:
albumentations
. Default:
albumentations
.- hue_shift_limit ((int, int) or int): range for changing hue. If hue_shift_limit is a single int, the range
will be (-hue_shift_limit, hue_shift_limit). Default: (-20, 20).
- sat_shift_limit ((int, int) or int): range for changing saturation. If sat_shift_limit is a single int,
the range will be (-sat_shift_limit, sat_shift_limit). Default: (-30, 30).
- val_shift_limit ((int, int) or int): range for changing value. If val_shift_limit is a single int, the range
will be (-val_shift_limit, val_shift_limit). Default: (-20, 20).
p (float): probability of applying the transform. Default: 0.5.
Targets: image
- Image types:
uint8, float32
e.g.
import beacon_aug as BA
aug = BA.HueSaturationValue(p=1)
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.IBatchwiseMaskGenerator(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
. Default: imgaug.
- Args:
if library =
imgaug
: (see:IBatchwiseMaskGenerator
) [Source]
e.g.
import beacon_aug as BA
aug = BA.IBatchwiseMaskGenerator(p=1)
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.IBinaryImageColorizer(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
. Default: imgaug.
- Args:
if library =
imgaug
: (see:IBinaryImageColorizer
) [Source]
e.g.
import beacon_aug as BA
aug = BA.IBinaryImageColorizer(p=1)
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.IPointsSampler(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
. Default: imgaug.
- Args:
if library =
imgaug
: (see:IPointsSampler
) [Source]
e.g.
import beacon_aug as BA
aug = BA.IPointsSampler(p=1)
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.ISONoise(library=None, *args, **kwargs)¶
Bases:
object
Apply camera sensor noise.
- Args:
- library (str): flag for library. Should be one of:
albumentations
. Default:
albumentations
.- color_shift (float, float): variance range for color hue change.
Measured as a fraction of 360 degree Hue angle in HLS colorspace.
- intensity ((float, float)): Multiplicative factor that controls strength
of color and luminance noise.
p (float): probability of applying the transform. Default: 0.5.
Targets: image
- Image types:
uint8
e.g.
import beacon_aug as BA
aug = BA.ISONoise(p=1)
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.Identity(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
. Default: imgaug.
- Args:
if library =
imgaug
: (see:Identity
) [Source]
e.g.
import beacon_aug as BA
aug = BA.Identity(p=1)
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.ImageCompression(library=None, *args, **kwargs)¶
Bases:
object
Decrease image quality by applying Jpeg or WebP compression to an image.
- Args:
- library (str): flag for library. Should be one of:
albumentations
. Default:
albumentations
.- quality_lower (float): lower bound on the image quality.
Should be in [0, 100] range for jpeg and [1, 100] for webp.
- quality_upper (float): upper bound on the image quality.
Should be in [0, 100] range for jpeg and [1, 100] for webp.
- compression_type (ImageCompressionType): should be ImageCompressionType.JPEG or ImageCompressionType.WEBP.
Default: ImageCompressionType.JPEG
Targets: image
- Image types:
uint8, float32
e.g.
import beacon_aug as BA
aug = BA.ImageCompression(p=1)
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.ImageDataGenerator(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
keras
. Default: keras.
- Args:
if library =
keras
: (see:ImageDataGenerator
) [Source]- featurewise_center: Boolean.
Set input mean to 0 over the dataset, feature-wise.
samplewise_center: Boolean. Set each sample mean to 0. featurewise_std_normalization: Boolean.
Divide inputs by std of the dataset, feature-wise.
samplewise_std_normalization: Boolean. Divide each input by its std. zca_whitening: Boolean. Apply ZCA whitening. zca_epsilon: epsilon for ZCA whitening. Default is 1e-6. rotation_range: Int. Degree range for random rotations. width_shift_range: Float, 1-D array-like or int
float: fraction of total width, if < 1, or pixels if >= 1.
1-D array-like: random elements from the array.
- int: integer number of pixels from interval
(-width_shift_range, +width_shift_range)
- With width_shift_range=2 possible values
are integers [-1, 0, +1], same as with width_shift_range=[-1, 0, +1], while with width_shift_range=1.0 possible values are floats in the interval [-1.0, +1.0).
- height_shift_range: Float, 1-D array-like or int
float: fraction of total height, if < 1, or pixels if >= 1.
1-D array-like: random elements from the array.
- int: integer number of pixels from interval
(-height_shift_range, +height_shift_range)
- With height_shift_range=2 possible values
are integers [-1, 0, +1], same as with height_shift_range=[-1, 0, +1], while with height_shift_range=1.0 possible values are floats in the interval [-1.0, +1.0).
- brightness_range: Tuple or list of two floats. Range for picking
a brightness shift value from.
- shear_range: Float. Shear Intensity
(Shear angle in counter-clockwise direction in degrees)
- zoom_range: Float or [lower, upper]. Range for random zoom.
If a float, [lower, upper] = [1-zoom_range, 1+zoom_range].
channel_shift_range: Float. Range for random channel shifts. fill_mode: One of {“constant”, “nearest”, “reflect” or “wrap”}.
Default is ‘nearest’. Points outside the boundaries of the input are filled according to the given mode: - ‘constant’: kkkkkkkk|abcd|kkkkkkkk (cval=k) - ‘nearest’: aaaaaaaa|abcd|dddddddd - ‘reflect’: abcddcba|abcd|dcbaabcd - ‘wrap’: abcdabcd|abcd|abcdabcd
- cval: Float or Int.
Value used for points outside the boundaries when fill_mode = “constant”.
horizontal_flip: Boolean. Randomly flip inputs horizontally. vertical_flip: Boolean. Randomly flip inputs vertically. rescale: rescaling factor. Defaults to None.
If None or 0, no rescaling is applied, otherwise we multiply the data by the value provided (after applying all other transformations).
- preprocessing_function: function that will be applied on each input.
The function will run after the image is resized and augmented. The function should take one argument: one image (NumPy tensor with rank 3), and should output a NumPy tensor with the same shape.
- data_format: Image data format,
either “channels_first” or “channels_last”. “channels_last” mode means that the images should have shape (samples, height, width, channels), “channels_first” mode means that the images should have shape (samples, channels, height, width). It defaults to the image_data_format value found in your Keras config file at ~/.keras/keras.json. If you never set it, then it will be “channels_last”.
- validation_split: Float. Fraction of images reserved for validation
(strictly between 0 and 1).
- interpolation_order: int, order to use for
the spline interpolation. Higher is slower.
dtype: Dtype to use for the generated arrays.
# Examples Example of using .flow(x, y):
```python (x_train, y_train), (x_test, y_test) = cifar10.load_data() y_train = np_utils.to_categorical(y_train, num_classes) y_test = np_utils.to_categorical(y_test, num_classes)
- datagen = ImageDataGenerator(
featurewise_center=True, featurewise_std_normalization=True, rotation_range=20, width_shift_range=0.2, height_shift_range=0.2, horizontal_flip=True)
# compute quantities required for featurewise normalization # (std, mean, and principal components if ZCA whitening is applied) datagen.fit(x_train)
# fits the model on batches with real-time data augmentation: model.fit_generator(datagen.flow(x_train, y_train, batch_size=32),
steps_per_epoch=len(x_train) / 32, epochs=epochs)
# here's a more "manual" example for e in range(epochs):
print('Epoch', e) batches = 0 for x_batch, y_batch in datagen.flow(x_train, y_train, batch_size=32):
model.fit(x_batch, y_batch) batches += 1 if batches >= len(x_train) / 32:
# we need to break the loop by hand because # the generator loops indefinitely break
``` Example of using .flow_from_directory(directory):
```python train_datagen = ImageDataGenerator(
rescale=1./255, shear_range=0.2, zoom_range=0.2, horizontal_flip=True)
test_datagen = ImageDataGenerator(rescale=1./255)
- train_generator = train_datagen.flow_from_directory(
'data/train', target_size=(150, 150), batch_size=32, class_mode='binary')
- validation_generator = test_datagen.flow_from_directory(
'data/validation', target_size=(150, 150), batch_size=32, class_mode='binary')
- model.fit_generator(
train_generator, steps_per_epoch=2000, epochs=50, validation_data=validation_generator, validation_steps=800)
Example of transforming images and masks together.
```python # we create two instances with the same arguments data_gen_args = dict(featurewise_center=True,
featurewise_std_normalization=True, rotation_range=90, width_shift_range=0.1, height_shift_range=0.1, zoom_range=0.2)
image_datagen = ImageDataGenerator(**data_gen_args) mask_datagen = ImageDataGenerator(**data_gen_args)
# Provide the same seed and keyword arguments to the fit and flow methods seed = 1 image_datagen.fit(images, augment=True, seed=seed) mask_datagen.fit(masks, augment=True, seed=seed)
- image_generator = image_datagen.flow_from_directory(
‘data/images’, class_mode=None, seed=seed)
- mask_generator = mask_datagen.flow_from_directory(
‘data/masks’, class_mode=None, seed=seed)
# combine generators into one which yields image and masks train_generator = zip(image_generator, mask_generator)
- model.fit_generator(
train_generator, steps_per_epoch=2000, epochs=50)
train_df = pandas.read_csv(“./train.csv”) valid_df = pandas.read_csv(“./valid.csv”)
- train_datagen = ImageDataGenerator(
rescale=1./255, shear_range=0.2, zoom_range=0.2, horizontal_flip=True)
test_datagen = ImageDataGenerator(rescale=1./255)
- train_generator = train_datagen.flow_from_dataframe(
dataframe=train_df, directory=’data/train’, x_col=”filename”, y_col=”class”, target_size=(150, 150), batch_size=32, class_mode=’binary’)
- validation_generator = test_datagen.flow_from_dataframe(
dataframe=valid_df, directory=’data/validation’, x_col=”filename”, y_col=”class”, target_size=(150, 150), batch_size=32, class_mode=’binary’)
- model.fit_generator(
train_generator, steps_per_epoch=2000, epochs=50, validation_data=validation_generator, validation_steps=800)
e.g.
import beacon_aug as BA image_auged = aug(image=image)["image"]
- class beacon_aug.operators.ImpulseNoise(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
,imagenet_c
,. Default: imgaug.
- Args:
if library =
imgaug
: (see:ImpulseNoise
) [Source]
if library =
imagenet_c
: (see:imagenet_c
)e.g.
import beacon_aug as BA
aug = BA.ImpulseNoise(p=1, corruption_name="impulse_noise", severity=1, library="imagenet_c")
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.Invert(library=None, *args, **kwargs)¶
Bases:
object
Invert the input image by subtracting pixel values from 255.
- Args:
- library (str): flag for library. Should be one of:
albumentations
,imgaug
,torchvision
. Default:
albumentations
.
p (float): probability of applying the transform. Default: 0.5.
if library =
imgaug
: (see:Invert
) [Source]- per_channel( bool or float or imgaug.parameters.StochasticParameter, optional):
Whether to use (imagewise) the same sample(s) for all channels (
False
) or to sample value(s) for each channel (True
). Setting this toTrue
will therefore lead to different transformations per image and channel, otherwise only per image. If this value is a floatp
, then forp
percent of all images per_channel will be treated asTrue
. If it is aStochasticParameter
it is expected to produce samples with values between0.0
and1.0
, where values>0.5
will lead to per-channel behaviour (i.e. same asTrue
).- min_value( None or number, optional):
Minimum of the value range of input images, e.g.
0
foruint8
images. If set toNone
, the value will be automatically derived from the image’s dtype.- max_value( None or number, optional):
Maximum of the value range of input images, e.g.
255
foruint8
images. If set toNone
, the value will be automatically derived from the image’s dtype.- threshold( None or number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional):
A threshold to use in order to invert only numbers above or below the threshold. If
None
no thresholding will be used.- invert_above_threshold( bool or float or imgaug.parameters.StochasticParameter, optional):
If
True
, only values>=threshold
will be inverted. Otherwise, only values<threshold
will be inverted. If anumber
, then expected to be in the interval[0.0, 1.0]
and denoting an imagewise probability. If aStochasticParameter
then(N,)
values will be sampled from the parameter per batch of sizeN
and interpreted asTrue
if>0.5
. If threshold isNone
this parameter has no effect.
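As a rough illustration of the parameters above, the (thresholded) inversion can be sketched in plain NumPy; `invert` here is a hypothetical helper for illustration, not the imgaug implementation:

```python
import numpy as np

def invert(img, threshold=None, invert_above_threshold=True,
           min_value=0, max_value=255):
    # v' = max_value - v + min_value, optionally applied only on
    # one side of a threshold.
    inverted = max_value - img + min_value
    if threshold is None:
        return inverted
    mask = img >= threshold if invert_above_threshold else img < threshold
    out = img.copy()
    out[mask] = inverted[mask]
    return out

img = np.array([[0, 100, 200]], dtype=np.uint8)
print(invert(img))                 # full inversion: [[255 155  55]]
print(invert(img, threshold=128))  # only values >= 128 inverted: [[  0 100  55]]
```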
if library =
torchvision
: (see:RandomInvert
) [Source]p (float): probability of the image being color inverted. Default value is 0.5
- Targets:
image
- Image types:
uint8
e.g.
import beacon_aug as BA
aug = BA.Invert(p=1)
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.InvertMaskGen(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
,. Default: imgaug.
- Args:
if library =
imgaug
: (see:InvertMaskGen
) [Source]- child( IBatchwiseMaskGenerator):
The other mask generator to invert.
e.g.
import beacon_aug as BA image_auged = aug(image=image)["image"]
- class beacon_aug.operators.Iterator(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
keras
,. Default: keras.
- Args:
if library =
keras
: (see:Iterator
) [Source]n: Integer, total number of samples in the dataset to loop over.
batch_size: Integer, size of a batch.
shuffle: Boolean, whether to shuffle the data between epochs.
seed: Random seed for data shuffling.
e.g.
import beacon_aug as BA image_auged = aug(image=image)["image"]
- class beacon_aug.operators.Jigsaw(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
,. Default: imgaug.
- Args:
if library =
imgaug
: (see:Jigsaw
) [Source]- nb_rows( int or list of int or tuple of int or imgaug.parameters.StochasticParameter, optional):
How many rows the jigsaw pattern should have.
- nb_cols( int or list of int or tuple of int or imgaug.parameters.StochasticParameter, optional):
How many cols the jigsaw pattern should have.
- max_steps( int or list of int or tuple of int or imgaug.parameters.StochasticParameter, optional):
How many steps each jigsaw cell may be moved.
- allow_pad( bool, optional):
Whether to allow automatically padding images until they are evenly divisible by
nb_rows
andnb_cols
.
e.g.
import beacon_aug as BA
aug = BA.Jigsaw(p=1, nb_rows=4, nb_cols=4, library="imgaug")
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.JpegCompression(library=None, *args, **kwargs)¶
Bases:
object
Degrade image quality by applying JPEG compression.
- Args:
- library (str): flag for library. Should be one of:
albumentations
,imgaug
,custom
,imagenet_c
. Default:
albumentations
.
quality_lower (float): lower bound on the JPEG quality. Should be in the [0, 100] range.
quality_upper (float): upper bound on the JPEG quality. Should be in the [0, 100] range.
if library =
imgaug
: (see:JpegCompression
) [Source]- compression( number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional):
Degree of compression used during JPEG compression within value range
[0, 100]
. Higher values denote stronger compression and will cause low-frequency components to disappear. Note that JPEG’s compression strength is also often set as a quality, which is the inverse of this parameter. Common choices for the quality setting are around 80 to 95, depending on the image. This translates here to a compression parameter of around 20 to 5.
if library =
custom
: (see:JpegCompression
) [Source]text (str): overlay text x (int or list): value or range of the x_coordinate of text,
Default: None. (random select in range of image)
- y (int or list): value or range of the y_coordinate of text,
Default: None. (random select in range of image)
- size (int or list): value or range of the size of text,
Default: 25
- color ( RGB value): value of text color
Default: (0,255,0)
if library =
imagenet_c
: (see:imagenet_c
) Targets:image
- Image types:
uint8, float32
e.g.
import beacon_aug as BA
aug = BA.JpegCompression(p=1, quality_lower=99, quality_upper=100, library="albumentations")
aug = BA.JpegCompression(p=1, corruption_name="jpeg_compression", severity=1, library="imagenet_c")
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.KMeansColorQuantization(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
,. Default: imgaug.
- Args:
if library =
imgaug
: (see:KMeansColorQuantization
) [Source]- n_colors( int or tuple of int or list of int or imgaug.parameters.StochasticParameter, optional):
Target number of colors in the generated output image. This corresponds to the number of clusters in k-Means, i.e.
k
. Sampled values below2
will always be clipped to2
.- to_colorspace( None or str or list of str or imgaug.parameters.StochasticParameter):
The colorspace in which to perform the quantization. See
change_colorspace_()
for valid values. This will be ignored for grayscale input images.- max_size( int or None, optional):
Maximum image size at which to perform the augmentation. If the width or height of an image exceeds this value, it will be downscaled before running the augmentation so that the longest side matches max_size. This is done to speed up the augmentation. The final output image has the same size as the input image. Use
None
to apply no downscaling.- interpolation( int or str, optional):
Interpolation method to use during downscaling when max_size is exceeded. Valid methods are the same as in
imresize_single_image()
.
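A toy version of the quantization idea described above, assuming a tiny hand-rolled k-means with deterministic initialisation (for illustration only; imgaug's implementation differs):

```python
import numpy as np

def kmeans_quantize(img, n_colors=2, iters=10):
    # Flatten pixels, run a tiny k-means, then map every pixel to its
    # cluster mean so the image contains at most n_colors colors.
    pixels = img.reshape(-1, img.shape[-1]).astype(np.float64)
    idx = np.linspace(0, len(pixels) - 1, n_colors).astype(int)
    centers = pixels[idx]
    for _ in range(iters):
        d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=-1)
        labels = d.argmin(axis=1)
        for i in range(n_colors):
            if (labels == i).any():
                centers[i] = pixels[labels == i].mean(axis=0)
    return centers[labels].reshape(img.shape).astype(img.dtype)

img = np.array([[[0, 0, 0], [10, 10, 10],
                 [250, 250, 250], [240, 240, 240]]], dtype=np.uint8)
print(np.unique(kmeans_quantize(img, n_colors=2)))  # [  5 245]
```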
e.g.
import beacon_aug as BA
aug = BA.KMeansColorQuantization(p=1, n_colors=8, library="imgaug")
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.KeepSizeByResize(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
,. Default: imgaug.
- Args:
if library =
imgaug
: (see:KeepSizeByResize
) [Source]- children( Augmenter or list of imgaug.augmenters.meta.Augmenter or None, optional):
One or more augmenters to apply to images. These augmenters may change the image size.
- interpolation( KeepSizeByResize.NO_RESIZE or {‘nearest’, ‘linear’, ‘area’, ‘cubic’} or {cv2.INTER_NEAREST, cv2.INTER_LINEAR, cv2.INTER_AREA, cv2.INTER_CUBIC} or list of str or list of int or StochasticParameter, optional):
The interpolation mode to use when resizing images. Can take any value that
imresize_single_image()
accepts, e.g.cubic
.- interpolation_heatmaps( KeepSizeByResize.SAME_AS_IMAGES or KeepSizeByResize.NO_RESIZE or {‘nearest’, ‘linear’, ‘area’, ‘cubic’} or {cv2.INTER_NEAREST, cv2.INTER_LINEAR, cv2.INTER_AREA, cv2.INTER_CUBIC} or list of str or list of int or StochasticParameter, optional):
The interpolation mode to use when resizing heatmaps. Meaning and valid values are similar to interpolation. This parameter may also take the value
KeepSizeByResize.SAME_AS_IMAGES
, which will lead to copying the interpolation modes used for the corresponding images. The value may also be returned on a per-image basis if interpolation_heatmaps is provided as aStochasticParameter
or may be one possible value if it is provided as alist
ofstr
.- interpolation_segmaps( KeepSizeByResize.SAME_AS_IMAGES or KeepSizeByResize.NO_RESIZE or {‘nearest’, ‘linear’, ‘area’, ‘cubic’} or {cv2.INTER_NEAREST, cv2.INTER_LINEAR, cv2.INTER_AREA, cv2.INTER_CUBIC} or list of str or list of int or StochasticParameter, optional):
The interpolation mode to use when resizing segmentation maps. Similar to interpolation_heatmaps. Note: For segmentation maps, only
NO_RESIZE
or nearest neighbour interpolation (i.e.nearest
) make sense in the vast majority of all cases.
e.g.
import beacon_aug as BA image_auged = aug(image=image)["image"]
- class beacon_aug.operators.KeepSizeCrop(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
custom
,. Default: custom.
- Args:
if library =
custom
: (see:KeepSizeCrop
) [Source]scale ((float, float)): range of size of the origin size cropped.
ratio ((float, float)): range of aspect ratio of the origin aspect ratio cropped.
interpolation (OpenCV flag): flag that is used to specify the interpolation algorithm. Should be one of:
cv2.INTER_NEAREST, cv2.INTER_LINEAR, cv2.INTER_CUBIC, cv2.INTER_AREA, cv2.INTER_LANCZOS4. Default: cv2.INTER_LINEAR.
e.g.
import beacon_aug as BA
aug = BA.KeepSizeCrop(p=1, scale=(0.8, 1.0), ratio=(0.75, 1.33), library="custom")
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.Lambda(library=None, *args, **kwargs)¶
Bases:
object
A flexible transformation class for using user-defined transformation functions per target. Function signatures must include **kwargs to accept optional arguments like interpolation method, image size, etc.:
- Args:
- library (str): flag for library. Should be one of:
albumentations
,imgaug
,torchvision
,augly
. Default:
albumentations
.
image (callable): Image transformation function.
mask (callable): Mask transformation function.
keypoint (callable): Keypoint transformation function.
bbox (callable): BBox transformation function.
always_apply (bool): Indicates whether this transformation should always be applied.
p (float): probability of applying the transform. Default: 1.0.
if library =
imgaug
: (see:Lambda
) [Source]- func_images( None or callable, optional):
The function to call for each batch of images. It must follow the form:
- func_heatmaps( None or callable, optional):
The function to call for each batch of heatmaps. It must follow the form:
- func_segmentation_maps( None or callable, optional):
The function to call for each batch of segmentation maps. It must follow the form:
- func_keypoints( None or callable, optional):
The function to call for each batch of keypoints. It must follow the form:
- func_bounding_boxes( “keypoints” or None or callable, optional):
The function to call for each batch of bounding boxes. It must follow the form:
- func_polygons( “keypoints” or None or callable, optional):
The function to call for each batch of polygons. It must follow the form:
- func_line_strings( “keypoints” or None or callable, optional):
The function to call for each batch of line strings. It must follow the form:
if library =
torchvision
: (see:Lambda
) [Source]lambd (function): Lambda/function to be used for transform.
if library =
augly
: (see:ApplyLambda
) Targets:image, mask, bboxes, keypoints
- Image types:
Any
e.g.
import numpy as np
import beacon_aug as BA
aug = BA.Lambda(p=1, lambd=lambda x: np.abs(x), library="torchvision")
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.LinearContrast(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
,. Default: imgaug.
- Args:
if library =
imgaug
: (see:LinearContrast
) [Source]- alpha( number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional):
Multiplier to linearly pronounce (
>1.0
), dampen (0.0
to1.0
) or invert (<0.0
) the difference between each pixel value and the dtype’s center value, e.g.127
foruint8
.- per_channel( bool or float, optional):
Whether to use the same value for all channels (
False
) or to sample a new value for each channel (True
). If this value is a floatp
, then forp
percent of all images per_channel will be treated asTrue
, otherwise asFalse
.
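The alpha parameter above corresponds to the mapping v' = center + alpha * (v - center); a plain NumPy sketch for uint8 images (a hypothetical illustration, not the imgaug implementation):

```python
import numpy as np

def linear_contrast(img, alpha, center=127):
    # Pronounce (alpha > 1), dampen (0 < alpha < 1) or invert (alpha < 0)
    # each pixel's distance from the dtype's center value, then clip.
    out = center + alpha * (img.astype(np.float64) - center)
    return np.clip(out, 0, 255).astype(np.uint8)

img = np.array([27, 127, 227], dtype=np.uint8)
print(linear_contrast(img, 2.0))  # [  0 127 255]
```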
e.g.
import beacon_aug as BA
aug = BA.LinearContrast(p=1, alpha=1.5, library="imgaug")
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.LinearTransformation(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
torchvision
,. Default: torchvision.
- Args:
if library =
torchvision
: (see:LinearTransformation
) [Source]transformation_matrix (Tensor): tensor [D x D], D = C x H x W.
mean_vector (Tensor): tensor [D], D = C x H x W.
e.g.
import beacon_aug as BA image_auged = aug(image=image)["image"]
- class beacon_aug.operators.LogContrast(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
,. Default: imgaug.
- Args:
if library =
imgaug
: (see:LogContrast
) [Source]- gain( number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional):
Multiplier for the logarithm result. Values around
1.0
lead to a contrast-adjusted images. Values above1.0
quickly lead to partially broken images due to exceeding the datatype’s value range.- per_channel( bool or float, optional):
Whether to use the same value for all channels (
False
) or to sample a new value for each channel (True
). If this value is a floatp
, then forp
percent of all images per_channel will be treated asTrue
, otherwise asFalse
.
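The gain parameter scales a logarithmic mapping (per the imgaug documentation, 255 * gain * log2(1 + v/255)); a minimal NumPy sketch, not the library implementation:

```python
import numpy as np

def log_contrast(img, gain=1.0):
    # v' = 255 * gain * log2(1 + v/255), clipped back to the uint8 range.
    v = img.astype(np.float64) / 255.0
    return np.clip(255.0 * gain * np.log2(1.0 + v), 0, 255).astype(np.uint8)

img = np.array([0, 255], dtype=np.uint8)
print(log_contrast(img))  # [  0 255]
```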
e.g.
import beacon_aug as BA
aug = BA.LogContrast(p=1, gain=1.0, library="imgaug")
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.LongestMaxSize(library=None, *args, **kwargs)¶
Bases:
object
Rescale an image so that the maximum side is equal to max_size, keeping the aspect ratio of the initial image.
- Args:
- library (str): flag for library. Should be one of:
albumentations
. Default:
albumentations
.
max_size (int): maximum size of the image after the transformation. interpolation (OpenCV flag): interpolation method. Default: cv2.INTER_LINEAR. p (float): probability of applying the transform. Default: 1.
Targets: image, mask, bboxes, keypoints
- Image types:
uint8, float32
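The rescaling rule can be sketched as a small helper (`longest_max_size` is a hypothetical illustration of the output-shape computation, not the albumentations implementation):

```python
def longest_max_size(h, w, max_size):
    # Scale so the longer side equals max_size, preserving aspect ratio.
    scale = max_size / max(h, w)
    return int(round(h * scale)), int(round(w * scale))

print(longest_max_size(300, 150, 100))  # (100, 50)
print(longest_max_size(150, 300, 100))  # (50, 100)
```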
e.g.
import beacon_aug as BA
aug = BA.LongestMaxSize(p=1, max_size=512)
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.MaskDropout(library=None, *args, **kwargs)¶
Bases:
object
Image & mask augmentation that zeros out the mask and image regions corresponding to a randomly chosen object instance in the mask.
The mask must be a single-channel image; zero values are treated as background. The image can have any number of channels.
Inspired by https://www.kaggle.com/c/severstal-steel-defect-detection/discussion/114254
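A rough sketch of the idea, assuming a single-channel instance mask with 0 as background (`mask_dropout` is a hypothetical helper for illustration, not the actual implementation):

```python
import numpy as np

def mask_dropout(img, mask, rng=None):
    # Pick one non-background instance id from the mask and zero it out
    # in both the image and the mask.
    rng = rng or np.random.default_rng(0)
    ids = np.unique(mask)
    ids = ids[ids != 0]
    chosen = rng.choice(ids)
    out_img, out_mask = img.copy(), mask.copy()
    out_img[mask == chosen] = 0
    out_mask[mask == chosen] = 0
    return out_img, out_mask

mask = np.array([[0, 1], [1, 0]], dtype=np.uint8)
img = np.full((2, 2), 200, dtype=np.uint8)
out_img, out_mask = mask_dropout(img, mask)
print(out_img)  # [[200   0] [  0 200]]
```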
e.g.
import beacon_aug as BA
aug = BA.MaskDropout(p=1)
image_auged = aug(image=image, mask=mask)["image"]
- class beacon_aug.operators.MaskedComposite(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
augly
,. Default: augly.
- Args:
if library =
augly
: (see:MaskedComposite
)
e.g.
import beacon_aug as BA image_auged = aug(image=image)["image"]
- class beacon_aug.operators.MaxPooling(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
,. Default: imgaug.
- Args:
if library =
imgaug
: (see:MaxPooling
) [Source]- kernel_size( int or tuple of int or list of int or imgaug.parameters.StochasticParameter or tuple of tuple of int or tuple of list of int or tuple of imgaug.parameters.StochasticParameter, optional):
The kernel size of the pooling operation.
- keep_size( bool, optional):
After pooling, the result image will usually have a different height/width compared to the original input image. If this parameter is set to True, then the pooled image will be resized to the input image’s size, i.e. the augmenter’s output shape is always identical to the input shape.
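The pooling described above can be sketched with a NumPy reshape trick (`max_pool` is a hypothetical illustration; the keep_size resize is approximated here by nearest-neighbour repetition rather than proper interpolation):

```python
import numpy as np

def max_pool(img, k, keep_size=False):
    # Crop to a multiple of k, take the max over each k x k block,
    # and optionally repeat values back up to the input size.
    h, w = img.shape[0] // k * k, img.shape[1] // k * k
    pooled = img[:h, :w].reshape(h // k, k, w // k, k).max(axis=(1, 3))
    if keep_size:
        return np.repeat(np.repeat(pooled, k, axis=0), k, axis=1)
    return pooled

img = np.arange(16).reshape(4, 4)
print(max_pool(img, 2))  # [[ 5  7] [13 15]]
```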
e.g.
import beacon_aug as BA
aug = BA.MaxPooling(p=1, kernel_size=2, library="imgaug")
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.MeanShiftBlur(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
,. Default: imgaug.
- Args:
if library =
imgaug
: (see:MeanShiftBlur
) [Source]- spatial_radius( number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional):
Spatial radius for pixels that are assumed to be similar.
- color_radius( number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional):
Color radius for pixels that are assumed to be similar.
e.g.
import beacon_aug as BA
aug = BA.MeanShiftBlur(p=1, library="imgaug")
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.MedianBlur(library=None, *args, **kwargs)¶
Bases:
object
Blur the input image using a median filter with a random aperture linear size.
- Args:
- library (str): flag for library. Should be one of:
albumentations
,imgaug
. Default:
albumentations
.- blur_limit (int): maximum aperture linear size for blurring the input image.
Must be odd and in range [3, inf). Default: (3, 7).
p (float): probability of applying the transform. Default: 0.5.
if library =
imgaug
: (see:MedianBlur
) [Source]- k( int or tuple of int or list of int or imgaug.parameters.StochasticParameter, optional):
Kernel size.
- Targets:
image
- Image types:
uint8, float32
e.g.
import beacon_aug as BA
aug = BA.MedianBlur(p=1, blur_limit=7)
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.MedianPooling(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
,. Default: imgaug.
- Args:
if library =
imgaug
: (see:MedianPooling
) [Source]- kernel_size( int or tuple of int or list of int or imgaug.parameters.StochasticParameter or tuple of tuple of int or tuple of list of int or tuple of imgaug.parameters.StochasticParameter, optional):
The kernel size of the pooling operation.
- keep_size( bool, optional):
After pooling, the result image will usually have a different height/width compared to the original input image. If this parameter is set to True, then the pooled image will be resized to the input image’s size, i.e. the augmenter’s output shape is always identical to the input shape.
e.g.
import beacon_aug as BA
aug = BA.MedianPooling(p=1, kernel_size=2, library="imgaug")
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.MemeFormat(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
augly
,. Default: augly.
- Args:
if library =
augly
: (see:MemeFormat
)
e.g.
import beacon_aug as BA
aug = BA.MemeFormat(p=1, library="augly")
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.MinPooling(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
,. Default: imgaug.
- Args:
if library =
imgaug
: (see:MinPooling
) [Source]- kernel_size( int or tuple of int or list of int or imgaug.parameters.StochasticParameter or tuple of tuple of int or tuple of list of int or tuple of imgaug.parameters.StochasticParameter, optional):
The kernel size of the pooling operation.
- keep_size( bool, optional):
After pooling, the result image will usually have a different height/width compared to the original input image. If this parameter is set to True, then the pooled image will be resized to the input image’s size, i.e. the augmenter’s output shape is always identical to the input shape.
e.g.
import beacon_aug as BA
aug = BA.MinPooling(p=1, kernel_size=2, library="imgaug")
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.MotionBlur(library=None, *args, **kwargs)¶
Bases:
object
Apply motion blur to the input image using a random-sized kernel.
- Args:
- library (str): flag for library. Should be one of:
albumentations
,imgaug
,imagenet_c
. Default:
albumentations
.- blur_limit (int): maximum kernel size for blurring the input image.
Should be in range [3, inf). Default: (3, 7).
p (float): probability of applying the transform. Default: 0.5.
if library =
imgaug
: (see:MotionBlur
) [Source]- k( int or tuple of int or list of int or imgaug.parameters.StochasticParameter, optional):
Kernel size to use.
- angle( number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional):
Angle of the motion blur in degrees (clockwise, relative to top center direction).
- direction( number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional):
Forward/backward direction of the motion blur. Lower values towards
-1.0
will point the motion blur towards the back (with angle provided via angle). Higher values towards1.0
will point the motion blur forward. A value of0.0
leads to a uniformly (but still angled) motion blur.- order( int or iterable of int or imgaug.ALL or imgaug.parameters.StochasticParameter, optional):
Interpolation order to use when rotating the kernel according to angle. See
__init__()
. Recommended to be0
or1
, with0
being faster, but less continuous/smooth as angle is changed, particularly around multiple of45
degrees.
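A minimal sketch of how such an angled motion-blur kernel can be built (a hypothetical construction for illustration; imgaug's actual kernel generation differs in detail):

```python
import numpy as np

def motion_blur_kernel(k, angle_deg=0.0):
    # Rasterise a centred line at the given angle into a k x k kernel,
    # then normalise so the kernel sums to 1.
    kernel = np.zeros((k, k))
    c = k // 2
    theta = np.deg2rad(angle_deg)
    for t in np.linspace(-c, c, 2 * k):
        x = int(round(c + t * np.cos(theta)))
        y = int(round(c - t * np.sin(theta)))
        if 0 <= x < k and 0 <= y < k:
            kernel[y, x] = 1.0
    return kernel / kernel.sum()

km = motion_blur_kernel(5, 0)
print(km[2])  # horizontal line: [0.2 0.2 0.2 0.2 0.2]
```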
if library =
imagenet_c
: (see:imagenet_c
) Targets:image
- Image types:
uint8, float32
e.g.
import beacon_aug as BA
aug = BA.MotionBlur(p=1, corruption_name="motion_blur", severity=1, library="imagenet_c")
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.MultiplicativeNoise(library=None, *args, **kwargs)¶
Bases:
object
Multiply the image by a random number or array of numbers.
- Args:
- library (str): flag for library. Should be one of:
albumentations
. Default:
albumentations
- multiplier (float or tuple of floats): If a single float, the image will be multiplied by this number.
If a tuple of floats, the multiplier will be sampled from the range [multiplier[0], multiplier[1]). Default: (0.9, 1.1).
- per_channel (bool): If False, the same values will be used for all channels.
If True, sampled values are used for each channel. Default: False.
- elementwise (bool): If False, multiply all pixels in the image by a single random value sampled once.
If True, multiply image pixels by values that are randomly sampled per pixel. Default: False.
Targets: image
- Image types:
Any
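The elementwise flag can be sketched as follows (a hypothetical NumPy illustration, not the albumentations implementation):

```python
import numpy as np

def multiplicative_noise(img, multiplier=(0.9, 1.1), elementwise=False, seed=0):
    rng = np.random.default_rng(seed)
    # elementwise=False: one multiplier for the whole image;
    # elementwise=True: an independent multiplier per pixel.
    shape = img.shape if elementwise else ()
    m = rng.uniform(multiplier[0], multiplier[1], size=shape)
    return np.clip(img * m, 0, 255).astype(img.dtype)

out = multiplicative_noise(np.full((2, 2), 100, dtype=np.uint8), elementwise=True)
print(out.shape)  # (2, 2)
```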
e.g.
import beacon_aug as BA
aug = BA.MultiplicativeNoise(p=1, multiplier=(0.9, 1.1))
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.Multiply(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
,. Default: imgaug.
- Args:
if library =
imgaug
: (see:Multiply
) [Source]- mul( number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional):
The value with which to multiply the pixel values in each image.
- per_channel( bool or float or imgaug.parameters.StochasticParameter, optional):
Whether to use (imagewise) the same sample(s) for all channels (
False
) or to sample value(s) for each channel (True
). Setting this toTrue
will therefore lead to different transformations per image and channel, otherwise only per image. If this value is a floatp
, then forp
percent of all images per_channel will be treated asTrue
. If it is aStochasticParameter
it is expected to produce samples with values between0.0
and1.0
, where values>0.5
will lead to per-channel behaviour (i.e. same asTrue
).
e.g.
import beacon_aug as BA
aug = BA.Multiply(p=1, mul=1.2, library="imgaug")
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.MultiplyAndAddToBrightness(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
,. Default: imgaug.
- Args:
if library =
imgaug
: (see:MultiplyAndAddToBrightness
) [Source]- mul( number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional):
See
Multiply
.- add( number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional):
See
Add
.- to_colorspace( imgaug.ALL or str or list of str or imgaug.parameters.StochasticParameter, optional):
See
WithBrightnessChannels
.- random_order( bool, optional):
Whether to apply the add and multiply operations in random order (
True
). IfFalse
, this augmenter will always first multiply and then add.
e.g.
import beacon_aug as BA
aug = BA.MultiplyAndAddToBrightness(p=1, mul=1.2, add=10, library="imgaug")
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.MultiplyElementwise(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
,. Default: imgaug.
- Args:
if library =
imgaug
: (see:MultiplyElementwise
) [Source]- mul( number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional):
The value with which to multiply pixel values in the image.
- per_channel( bool or float or imgaug.parameters.StochasticParameter, optional):
Whether to use (imagewise) the same sample(s) for all channels (
False
) or to sample value(s) for each channel (True
). Setting this toTrue
will therefore lead to different transformations per image and channel, otherwise only per image. If this value is a floatp
, then forp
percent of all images per_channel will be treated asTrue
. If it is aStochasticParameter
it is expected to produce samples with values between0.0
and1.0
, where values>0.5
will lead to per-channel behaviour (i.e. same asTrue
).
e.g.
import beacon_aug as BA
aug = BA.MultiplyElementwise(p=1, mul=1.2, library="imgaug")
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.MultiplyHue(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
,. Default: imgaug.
- Args:
if library =
imgaug
: (see:MultiplyHue
) [Source]- mul( number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional):
Multiplier with which to multiply all hue values. This is expected to be in the range
-10.0
to+10.0
and will automatically be projected to an angular representation using(hue/255) * (360/2)
(OpenCV’s hue representation is in the range[0, 180]
instead of[0, 360]
). Only this or mul may be set, not both.
e.g.
import beacon_aug as BA
aug = BA.MultiplyHue(p=1, mul=1.5, library="imgaug")
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.MultiplyHueAndSaturation(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
,. Default: imgaug.
- Args:
if library =
imgaug
: (see:MultiplyHueAndSaturation
) [Source]- mul( None or number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional):
Multiplier with which to multiply all hue and saturation values of all pixels. It is expected to be in the range
-10.0
to+10.0
. Note that values of0.0
or lower will remove all saturation.- mul_hue( None or number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional):
Multiplier with which to multiply all hue values. This is expected to be in the range
-10.0
to+10.0
and will automatically be projected to an angular representation using(hue/255) * (360/2)
(OpenCV’s hue representation is in the range[0, 180]
instead of[0, 360]
). Only this or mul may be set, not both.- mul_saturation( None or number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional):
Multiplier with which to multiply all saturation values. It is expected to be in the range
0.0
to+10.0
. Only this or mul may be set, not both.- per_channel( bool or float, optional):
Whether to sample per image only one value from mul and use it for both hue and saturation (
False
) or to sample independently one value for hue and one for saturation (True
). If this value is a floatp
, then forp
percent of all images per_channel will be treated asTrue
, otherwise asFalse
.
e.g.
import beacon_aug as BA
aug = BA.MultiplyHueAndSaturation(p=1, mul=1.5, library="imgaug")
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.Normalize(library=None, *args, **kwargs)¶
Bases:
object
Normalization is applied by the formula: img = (img - mean * max_pixel_value) / (std * max_pixel_value)
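The formula above can be checked with a quick NumPy sketch (the mean/std values here are arbitrary illustration values, not the library defaults, which are the ImageNet statistics):

```python
import numpy as np

mean, std, max_pixel_value = 0.5, 0.25, 255.0

img = np.array([0.0, 127.5, 255.0], dtype=np.float32)
normalized = (img - mean * max_pixel_value) / (std * max_pixel_value)
print(normalized)  # [-2.  0.  2.]
```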
- Args:
- library (str): flag for library. Should be one of:
albumentations
,mmcv
. Default:
albumentations
.
mean (float, list of float): mean values.
std (float, list of float): std values.
max_pixel_value (float): maximum possible pixel value.
if library =
mmcv
: (see:Normalize
) [Source]mean (sequence): Mean values of 3 channels.
std (sequence): Std values of 3 channels.
to_rgb (bool): Whether to convert the image from BGR to RGB. Default: True.
- Targets:
image
- Image types:
uint8, float32
e.g.
import beacon_aug as BA
aug = BA.Normalize(p=1, mean=[125, 125, 125], std=[10, 10, 10], library="mmcv")
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.NumpyArrayIterator(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
keras
. Default: keras.
- Args:
if library =
keras
: (see:NumpyArrayIterator
) [Source]
- x: Numpy array of input data or tuple. If tuple, the second element is either another numpy array or a list of numpy arrays, each of which gets passed through as an output without any modifications.
- y: Numpy array of targets data.
- image_data_generator: Instance of ImageDataGenerator to use for random transformations and normalization.
- batch_size: Integer, size of a batch.
- shuffle: Boolean, whether to shuffle the data between epochs.
- sample_weight: Numpy array of sample weights.
- seed: Random seed for data shuffling.
- data_format: String, one of channels_first, channels_last.
- save_to_dir: Optional directory where to save the pictures being yielded, in a viewable format. This is useful for visualizing the random transformations being applied, for debugging purposes.
- save_prefix: String prefix to use for saving sample images (if save_to_dir is set).
- save_format: Format to use for saving sample images (if save_to_dir is set).
- subset: Subset of data (“training” or “validation”) if validation_split is set in ImageDataGenerator.
- dtype: Dtype to use for the generated arrays.
e.g.
import beacon_aug as BA
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.OneOf(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
. Default: imgaug.
- Args:
if library =
imgaug
: (see:OneOf
) [Source]
- children( imgaug.augmenters.meta.Augmenter or list of imgaug.augmenters.meta.Augmenter):
The choices of augmenters to apply.
e.g.
import beacon_aug as BA
image_auged = aug(image=image)["image"]
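OneOf applies exactly one augmenter sampled from children per call. The selection logic can be sketched in plain Python (a toy `one_of` helper for illustration, not the beacon_aug API):

```python
import random

def one_of(children, image):
    """Apply exactly one randomly chosen augmenter from `children`."""
    chosen = random.choice(children)
    return chosen(image)

# Two toy "augmenters": identity and reversal of a pixel row.
children = [lambda row: row, lambda row: row[::-1]]
result = one_of(children, [1, 2, 3])  # either [1, 2, 3] or [3, 2, 1]
```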
- class beacon_aug.operators.Opacity(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
augly
. Default: augly.
- Args:
if library =
augly
: (see:Opacity
)
e.g.
import beacon_aug as BA
aug = BA.Opacity(p=1)
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.OpticalDistortion(library=None, *args, **kwargs)¶
Bases:
object
- Args:
- library (str): flag for library. Should be one of:
albumentations
. Default:
albumentations
.- distort_limit (float, (float, float)): If distort_limit is a single float, the range
will be (-distort_limit, distort_limit). Default: (-0.05, 0.05).
- shift_limit (float, (float, float))): If shift_limit is a single float, the range
will be (-shift_limit, shift_limit). Default: (-0.05, 0.05).
- interpolation (OpenCV flag): flag that is used to specify the interpolation algorithm. Should be one of:
cv2.INTER_NEAREST, cv2.INTER_LINEAR, cv2.INTER_CUBIC, cv2.INTER_AREA, cv2.INTER_LANCZOS4. Default: cv2.INTER_LINEAR.
- border_mode (OpenCV flag): flag that is used to specify the pixel extrapolation method. Should be one of:
cv2.BORDER_CONSTANT, cv2.BORDER_REPLICATE, cv2.BORDER_REFLECT, cv2.BORDER_WRAP, cv2.BORDER_REFLECT_101. Default: cv2.BORDER_REFLECT_101
value (int, float, list of ints, list of float): padding value if border_mode is cv2.BORDER_CONSTANT.
mask_value (int, float, list of ints, list of float): padding value if border_mode is cv2.BORDER_CONSTANT applied for masks.
Targets: image, mask
- Image types:
uint8, float32
e.g.
import beacon_aug as BA
aug = BA.OpticalDistortion(p=1)
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.OverlayEmoji(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
augly
. Default: augly.
- Args:
if library =
augly
: (see:OverlayEmoji
)
e.g.
import beacon_aug as BA
aug = BA.OverlayEmoji(p=1)
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.OverlayImage(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
augly
. Default: augly.
- Args:
if library =
augly
: (see:OverlayImage
)
e.g.
import beacon_aug as BA
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.OverlayOntoScreenshot(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
augly
. Default: augly.
- Args:
if library =
augly
: (see:OverlayOntoScreenshot
)
e.g.
import beacon_aug as BA
aug = BA.OverlayOntoScreenshot(p=1)
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.OverlayStripes(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
augly
. Default: augly.
- Args:
if library =
augly
: (see:OverlayStripes
)
e.g.
import beacon_aug as BA
aug = BA.OverlayStripes(p=1)
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.OverlayText(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
custom
,augly
. Default: custom.
- Args:
if library =
custom
: (see:OverlayText
) [Source]
text (str): overlay text
x (int or list): value or range of the x_coordinate of text, Default: None. (random select in range of image)
y (int or list): value or range of the y_coordinate of text, Default: None. (random select in range of image)
size (int or list): value or range of the size of text, Default: 25
color (RGB value): value of text color, Default: (0,255,0)
if library =
augly
: (see:OverlayText
)
e.g.
import beacon_aug as BA
aug = BA.OverlayText(p=1, text="hello")
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.PILToTensor(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
torchvision
. Default: torchvision.
- Args:
if library =
torchvision
: (see:PILToTensor
) [Source]
Convert a
PIL Image
to a tensor of the same type. This transform does not support torchscript.
Converts a PIL Image (H x W x C) to a Tensor of shape (C x H x W).
e.g.
import beacon_aug as BA
aug = BA.PILToTensor(p=1)
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.Pad(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
,torchvision
,augly
,mmcv
. Default: imgaug.
- Args:
if library =
imgaug
: (see:Pad
) [Source]
- px( None or int or imgaug.parameters.StochasticParameter or tuple, optional):
The number of pixels to pad on each side of the image. Expected value range is [0, inf). Either this or the parameter percent may be set, not both at the same time.
- percent( None or int or float or imgaug.parameters.StochasticParameter or tuple, optional):
The number of pixels to pad on each side of the image given as a fraction of the image height/width. E.g. if this is set to 0.1, the augmenter will always pad 10% of the image’s height at both the top and the bottom (both 10% each), as well as 10% of the width at the right and left. Expected value range is [0.0, inf). Either this or the parameter px may be set, not both at the same time.
- pad_mode( imgaug.ALL or str or list of str or imgaug.parameters.StochasticParameter, optional):
Padding mode to use. The available modes match the numpy padding modes, i.e. constant, edge, linear_ramp, maximum, median, minimum, reflect, symmetric, wrap. The modes constant and linear_ramp use extra values, which are provided by pad_cval when necessary. See pad() for more details.
- pad_cval( number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional):
The constant value to use if the pad mode is constant or the end value to use if the mode is linear_ramp. See pad() for more details.
- keep_size( bool, optional):
After padding, the result image will usually have a different height/width compared to the original input image. If this parameter is set to True, then the padded image will be resized to the input image’s size, i.e. the augmenter’s output shape is always identical to the input shape.
- sample_independently( bool, optional):
If False and the values for px/percent result in exactly one probability distribution for all image sides, only one single value will be sampled from that probability distribution and used for all sides. I.e. the pad amount then is the same for all sides. If True, four values will be sampled independently, one per side.
if library =
torchvision
: (see:Pad
) [Source]- padding (int or sequence): Padding on each border. If a single int is provided this
is used to pad all borders. If sequence of length 2 is provided this is the padding on left/right and top/bottom respectively. If a sequence of length 4 is provided this is the padding for the left, top, right and bottom borders respectively.
if library =
augly
: (see:Pad
)
if library = mmcv : (see: Pad ) [Source]
size (tuple, optional): Fixed padding size.
size_divisor (int, optional): The divisor of padded size.
pad_val (float, optional): Padding value. Default: 0.
seg_pad_val (float, optional): Padding value of segmentation map. Default: 255.
e.g.
import beacon_aug as BA
aug = BA.Pad(p=1, padding=30, library="torchvision")
aug = BA.Pad(p=1, size_divisor=30, library="mmcv")
image_auged = aug(image=image)["image"]
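For the imgaug backend, the percent parameter pads each side by a fraction of the image height/width; the resulting pixel amounts can be sketched as follows (an illustrative helper, not library code):

```python
def pad_pixels_from_percent(height, width, percent):
    """Pixels padded on top/bottom and on left/right for a given fraction."""
    return int(height * percent), int(width * percent)

# percent=0.1 on a 200x100 image pads 20 rows at the top and bottom
# and 10 columns at the left and right.
print(pad_pixels_from_percent(200, 100, 0.1))  # → (20, 10)
```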
- class beacon_aug.operators.PadIfNeeded(library=None, *args, **kwargs)¶
Bases:
object
Pad side of the image / max if side is less than desired number.
- Args:
- library (str): flag for library. Should be one of:
albumentations
. Default:
albumentations
.
min_height (int): minimal result image height.
min_width (int): minimal result image width.
pad_height_divisor (int): if not None, ensures image height is dividable by value of this argument.
pad_width_divisor (int): if not None, ensures image width is dividable by value of this argument.
position (Union[str, PositionType]): Position of the image. Should be PositionType.CENTER or PositionType.TOP_LEFT or PositionType.TOP_RIGHT or PositionType.BOTTOM_LEFT or PositionType.BOTTOM_RIGHT. Default: PositionType.CENTER.
border_mode (OpenCV flag): OpenCV border mode.
value (int, float, list of int, list of float): padding value if border_mode is cv2.BORDER_CONSTANT.
mask_value (int, float, list of int, list of float): padding value for mask if border_mode is cv2.BORDER_CONSTANT.
p (float): probability of applying the transform. Default: 1.0.
Targets: image, mask, bbox, keypoints
- Image types:
uint8, float32
e.g.
import beacon_aug as BA
aug = BA.PadIfNeeded(p=1)
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.PadSquare(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
augly
. Default: augly.
- Args:
if library =
augly
: (see:PadSquare
)
e.g.
import beacon_aug as BA
aug = BA.PadSquare(p=1)
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.PadToAspectRatio(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
. Default: imgaug.
- Args:
if library =
imgaug
: (see:PadToAspectRatio
) [Source]
- aspect_ratio( number):
The desired aspect ratio, given as width/height. E.g. a ratio of 2.0 denotes an image that is twice as wide as it is high.
- pad_mode( imgaug.ALL or str or list of str or imgaug.parameters.StochasticParameter, optional):
See __init__().
- pad_cval( number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional):
See __init__().
- position( {‘uniform’, ‘normal’, ‘center’, ‘left-top’, ‘left-center’, ‘left-bottom’, ‘center-top’, ‘center-center’, ‘center-bottom’, ‘right-top’, ‘right-center’, ‘right-bottom’} or tuple of float or StochasticParameter or tuple of StochasticParameter, optional):
See PadToFixedSize.__init__().
e.g.
import beacon_aug as BA
aug = BA.PadToAspectRatio(p=1, aspect_ratio=2.0)
image_auged = aug(image=image)["image"]
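The padding required to reach a target width/height ratio can be sketched in plain Python (a hypothetical helper for illustration, not library code):

```python
import math

def pad_amounts_for_aspect_ratio(height, width, aspect_ratio):
    """Return (pad_rows, pad_cols) so the padded image reaches the ratio.

    Only one dimension needs padding: too-narrow images gain columns,
    too-wide images gain rows.
    """
    if width / height < aspect_ratio:   # too narrow: pad the width
        return 0, math.ceil(height * aspect_ratio - width)
    return math.ceil(width / aspect_ratio - height), 0

# A 100x100 image padded to a 2.0 (width/height) ratio gains 100 columns.
print(pad_amounts_for_aspect_ratio(100, 100, 2.0))  # → (0, 100)
```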
- class beacon_aug.operators.PadToFixedSize(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
. Default: imgaug.
- Args:
if library =
imgaug
: (see:PadToFixedSize
) [Source]
- width( int or None):
Pad images up to this minimum width. If None, image widths will not be altered.
- height( int or None):
Pad images up to this minimum height. If None, image heights will not be altered.
- pad_mode( imgaug.ALL or str or list of str or imgaug.parameters.StochasticParameter, optional):
See __init__().
- pad_cval( number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional):
See __init__().
- position( {‘uniform’, ‘normal’, ‘center’, ‘left-top’, ‘left-center’, ‘left-bottom’, ‘center-top’, ‘center-center’, ‘center-bottom’, ‘right-top’, ‘right-center’, ‘right-bottom’} or tuple of float or StochasticParameter or tuple of StochasticParameter, optional):
Sets the center point of the padding, which determines how the required padding amounts are distributed to each side. For a tuple (a, b), both a and b are expected to be in range [0.0, 1.0] and describe the fraction of padding applied to the left/right (low/high values for a) and the fraction of padding applied to the top/bottom (low/high values for b). A padding position at (0.5, 0.5) would be the center of the image and distribute the padding equally to all sides. A padding position at (0.0, 1.0) would be the left-bottom and would apply 100% of the required padding to the bottom and left sides of the image so that the bottom left corner becomes more and more the new image center (depending on how much is padded).
e.g.
import beacon_aug as BA
aug = BA.PadToFixedSize(p=1, width=256, height=256)
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.PadToMultiplesOf(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
. Default: imgaug.
- Args:
if library =
imgaug
: (see:PadToMultiplesOf
) [Source]
- width_multiple( int or None):
Multiple for the width. Images will be padded until their width is a multiple of this value. If None, image widths will not be altered.
- height_multiple( int or None):
Multiple for the height. Images will be padded until their height is a multiple of this value. If None, image heights will not be altered.
- pad_mode( imgaug.ALL or str or list of str or imgaug.parameters.StochasticParameter, optional):
See __init__().
- pad_cval( number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional):
See __init__().
- position( {‘uniform’, ‘normal’, ‘center’, ‘left-top’, ‘left-center’, ‘left-bottom’, ‘center-top’, ‘center-center’, ‘center-bottom’, ‘right-top’, ‘right-center’, ‘right-bottom’} or tuple of float or StochasticParameter or tuple of StochasticParameter, optional):
See PadToFixedSize.__init__().
e.g.
import beacon_aug as BA
aug = BA.PadToMultiplesOf(p=1, width_multiple=32, height_multiple=32)
image_auged = aug(image=image)["image"]
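The number of padding pixels needed per dimension follows directly from modular arithmetic; a plain-Python sketch (illustrative helper, not library code):

```python
def pad_to_multiple(size, multiple):
    """Pixels of padding needed so `size` becomes a multiple of `multiple`."""
    if multiple is None:   # None means: leave this dimension unaltered
        return 0
    return (-size) % multiple

# A 333-pixel-high image padded to multiples of 32 gains 19 rows (333 + 19 = 352).
print(pad_to_multiple(333, 32))  # → 19
```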
- class beacon_aug.operators.PadToPowersOf(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
. Default: imgaug.
- Args:
if library =
imgaug
: (see:PadToPowersOf
) [Source]
- width_base( int or None):
Base for the width. Images will be padded until their width fulfills width' = width_base ^ E with E being any natural number. If None, image widths will not be altered.
- height_base( int or None):
Base for the height. Images will be padded until their height fulfills height' = height_base ^ E with E being any natural number. If None, image heights will not be altered.
- pad_mode( imgaug.ALL or str or list of str or imgaug.parameters.StochasticParameter, optional):
See __init__().
- pad_cval( number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional):
See __init__().
- position( {‘uniform’, ‘normal’, ‘center’, ‘left-top’, ‘left-center’, ‘left-bottom’, ‘center-top’, ‘center-center’, ‘center-bottom’, ‘right-top’, ‘right-center’, ‘right-bottom’} or tuple of float or StochasticParameter or tuple of StochasticParameter, optional):
See PadToFixedSize.__init__().
e.g.
import beacon_aug as BA
aug = BA.PadToPowersOf(p=1, width_base=2, height_base=2)
image_auged = aug(image=image)["image"]
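The target side length after padding is the smallest power of the base that is at least the current size; a plain-Python sketch (illustrative helper, not library code):

```python
def padded_size_for_power(size, base):
    """Smallest base ** E >= size, with E a natural number (E >= 1)."""
    if base is None:   # None means: leave this dimension unaltered
        return size
    target = base
    while target < size:
        target *= base
    return target

# A 100-pixel side with base 2 is padded up to 128 (2 ** 7).
print(padded_size_for_power(100, 2))  # → 128
```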
- class beacon_aug.operators.PadToSquare(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
. Default: imgaug.
- Args:
if library =
imgaug
: (see:PadToSquare
) [Source]
- pad_mode( imgaug.ALL or str or list of str or imgaug.parameters.StochasticParameter, optional):
See __init__().
- pad_cval( number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional):
See __init__().
- position( {‘uniform’, ‘normal’, ‘center’, ‘left-top’, ‘left-center’, ‘left-bottom’, ‘center-top’, ‘center-center’, ‘center-bottom’, ‘right-top’, ‘right-center’, ‘right-bottom’} or tuple of float or StochasticParameter or tuple of StochasticParameter, optional):
See PadToFixedSize.__init__().
e.g.
import beacon_aug as BA
aug = BA.PadToSquare(p=1)
image_auged = aug(image=image)["image"]
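Squaring an image only ever pads the shorter dimension up to the longer one; a plain-Python sketch (illustrative helper, not library code):

```python
def pad_to_square(height, width):
    """Return (pad_rows, pad_cols) that make the image square."""
    side = max(height, width)
    return side - height, side - width

# A 100x60 image needs 40 extra columns to become 100x100.
print(pad_to_square(100, 60))  # → (0, 40)
```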
- class beacon_aug.operators.PaletteRecolor(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
custom
. Default: custom.
- Args:
if library =
custom
: (see:PaletteRecolor
) [Source]
image (numpy array): input image
delta_limit (int or list): value or range of recolor shift range
e.g.
import beacon_aug as BA
aug = BA.PaletteRecolor(p=1)
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.Pepper(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
. Default: imgaug.
- Args:
if library =
imgaug
: (see:Pepper
) [Source]
- per_channel( bool or float or imgaug.parameters.StochasticParameter, optional):
Whether to use (imagewise) the same sample(s) for all channels (False) or to sample value(s) for each channel (True). Setting this to True will therefore lead to different transformations per image and channel, otherwise only per image. If this value is a float p, then for p percent of all images per_channel will be treated as True. If it is a StochasticParameter it is expected to produce samples with values between 0.0 and 1.0, where values >0.5 will lead to per-channel behaviour (i.e. same as True).
e.g.
import beacon_aug as BA
aug = BA.Pepper(p=1)
image_auged = aug(image=image)["image"]
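Pepper noise replaces a random fraction of pixels with dark values; the core idea can be sketched in plain Python (a toy sketch with a fixed seed, not imgaug's implementation):

```python
import random

def pepper(pixels, p, seed=0):
    """Set roughly a fraction `p` of the values to 0 ("pepper" noise)."""
    rng = random.Random(seed)
    return [0 if rng.random() < p else v for v in pixels]

print(pepper([255, 255, 255, 255], 1.0))  # → [0, 0, 0, 0]
```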
- class beacon_aug.operators.Perspective(library=None, *args, **kwargs)¶
Bases:
object
Perform a random four point perspective transform of the input.
- Args:
- library (str): flag for library. Should be one of:
albumentations
. Default:
albumentations
.- scale (float or (float, float)): standard deviation of the normal distributions. These are used to sample
the random distances of the subimage’s corners from the full image’s corners. If scale is a single float value, the range will be (0, scale). Default: (0.05, 0.1).
- keep_size (bool): Whether to resize images back to their original size after applying the perspective
transform. If set to False, the resulting images may end up having different shapes and will always be a list, never an array. Default: True
pad_mode (OpenCV flag): OpenCV border mode.
pad_val (int, float, list of int, list of float): padding value if border_mode is cv2.BORDER_CONSTANT. Default: 0
- mask_pad_val (int, float, list of int, list of float): padding value for mask
if border_mode is cv2.BORDER_CONSTANT. Default: 0
- fit_output (bool): If True, the image plane size and position will be adjusted to still capture
the whole image after perspective transformation. (Followed by image resizing if keep_size is set to True.) Otherwise, parts of the transformed image may be outside of the image plane. This setting should not be set to True when using large scale values as it could lead to very large images. Default: False
p (float): probability of applying the transform. Default: 0.5.
Targets: image, mask, keypoints, bboxes
- Image types:
uint8, float32
e.g.
import beacon_aug as BA
aug = BA.Perspective(p=1)
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.PerspectiveTransform(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
,augly
. Default: imgaug.
- Args:
if library =
imgaug
: (see:PerspectiveTransform
) [Source]
- scale( number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional):
Standard deviation of the normal distributions. These are used to sample the random distances of the subimage’s corners from the full image’s corners. The sampled values reflect percentage values (with respect to image height/width). Recommended values are in the range 0.0 to 0.1.
- cval( number or tuple of number or list of number or imgaug.ALL or imgaug.parameters.StochasticParameter, optional):
The constant value used to fill up pixels in the result image that didn’t exist in the input image (e.g. when translating to the left, some new pixels are created at the right). Such a fill-up with a constant value only happens, when mode is constant. The expected value range is [0, 255] for uint8 images. It may be a float value.
- mode( int or str or list of str or list of int or imgaug.ALL or imgaug.parameters.StochasticParameter, optional):
Parameter that defines the handling of newly created pixels. Same meaning as in OpenCV’s border mode. Let abcdefgh be an image’s content and | be an image boundary, then:
- keep_size( bool, optional):
Whether to resize images back to their original size after applying the perspective transform. If set to False, the resulting images may end up having different shapes and will always be a list, never an array.
- fit_output( bool, optional):
If True, the image plane size and position will be adjusted to still capture the whole image after perspective transformation. (Followed by image resizing if keep_size is set to True.) Otherwise, parts of the transformed image may be outside of the image plane. This setting should not be set to True when using large scale values as it could lead to very large images.
- polygon_recoverer( ‘auto’ or None or imgaug.augmentables.polygons._ConcavePolygonRecoverer, optional):
The class to use to repair invalid polygons. If "auto", a new instance of imgaug.augmentables.polygons._ConcavePolygonRecoverer will be created. If None, no polygon recoverer will be used. If an object, then that object will be used and must provide a recover_from() method, similar to _ConcavePolygonRecoverer.
if library =
augly
: (see:PerspectiveTransform
)
e.g.
import beacon_aug as BA
aug = BA.PerspectiveTransform(p=1)
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.PhotoMetricDistortion(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
mmcv
. Default: mmcv.
- Args:
if library =
mmcv
: (see:PhotoMetricDistortion
) [Source]
brightness_delta (int): delta of brightness.
contrast_range (tuple): range of contrast.
saturation_range (tuple): range of saturation.
hue_delta (int): delta of hue.
e.g.
import beacon_aug as BA
aug = BA.PhotoMetricDistortion(p=1)
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.PiecewiseAffine(library=None, *args, **kwargs)¶
Bases:
object
Apply affine transformations that differ between local neighbourhoods. This augmentation places a regular grid of points on an image and randomly moves the neighbourhood of these point around via affine transformations. This leads to local distortions.
This is mostly a wrapper around scikit-image’s
PiecewiseAffine
. See alsoAffine
for a similar technique.- Note:
This augmenter is very slow. Try to use
ElasticTransformation
instead, which is at least 10x faster.- Note:
For coordinate-based inputs (keypoints, bounding boxes, polygons, …), this augmenter still has to perform an image-based augmentation, which will make it significantly slower and not fully correct for such inputs than other transforms.
- Args:
- library (str): flag for library. Should be one of:
albumentations
,imgaug
. Default:
albumentations
.- scale (float, tuple of float): Each point on the regular grid is moved around via a normal distribution.
This scale factor is equivalent to the normal distribution’s sigma. Note that the jitter (how far each point is moved in which direction) is multiplied by the height/width of the image if absolute_scale=False (default), so this scale can be the same for different sized images. Recommended values are in the range 0.01 to 0.05 (weak to strong augmentations).
If a single float, then that value will always be used as the scale.
If a tuple (a, b) of floats, then a random value will be uniformly sampled per image from the interval [a, b].
- nb_rows (int, tuple of int): Number of rows of points that the regular grid should have.
Must be at least 2. For large images, you might want to pick a higher value than 4. You might have to then adjust scale to lower values.
If a single int, then that value will always be used as the number of rows.
If a tuple (a, b), then a value from the discrete interval [a..b] will be uniformly sampled per image.
nb_cols (int, tuple of int): Number of columns. Analogous to nb_rows.
interpolation (int): The order of interpolation. The order has to be in the range 0-5:
0: Nearest-neighbor
1: Bi-linear (default)
2: Bi-quadratic
3: Bi-cubic
4: Bi-quartic
5: Bi-quintic
mask_interpolation (int): same as interpolation but for mask.
cval (number): The constant value to use when filling in newly created pixels.
cval_mask (number): Same as cval but only for masks.
mode (str): {‘constant’, ‘edge’, ‘symmetric’, ‘reflect’, ‘wrap’}, optional. Points outside the boundaries of the input are filled according to the given mode. Modes match the behaviour of numpy.pad.
absolute_scale (bool): Take scale as an absolute value rather than a relative value.
keypoints_threshold (float): Used as threshold in conversion from distance maps to keypoints. The search for keypoints works by searching for the argmin (non-inverted) or argmax (inverted) in each channel. This parameter contains the maximum (non-inverted) or minimum (inverted) value to accept in order to view a hit as a keypoint. Use None to use no min/max. Default: 0.01
if library =
imgaug
: (see:PiecewiseAffine
) [Source]
- scale( float or tuple of float or imgaug.parameters.StochasticParameter, optional):
Each point on the regular grid is moved around via a normal distribution. This scale factor is equivalent to the normal distribution’s sigma. Note that the jitter (how far each point is moved in which direction) is multiplied by the height/width of the image if absolute_scale=False (default), so this scale can be the same for different sized images. Recommended values are in the range 0.01 to 0.05 (weak to strong augmentations).
- nb_rows( int or tuple of int or imgaug.parameters.StochasticParameter, optional):
Number of rows of points that the regular grid should have. Must be at least 2. For large images, you might want to pick a higher value than 4. You might have to then adjust scale to lower values.
- nb_cols( int or tuple of int or imgaug.parameters.StochasticParameter, optional):
Number of columns. Analogous to nb_rows.
- order( int or list of int or imgaug.ALL or imgaug.parameters.StochasticParameter, optional):
See __init__().
- cval( int or float or tuple of float or imgaug.ALL or imgaug.parameters.StochasticParameter, optional):
See __init__().
- mode( str or list of str or imgaug.ALL or imgaug.parameters.StochasticParameter, optional):
See __init__().
- absolute_scale( bool, optional):
Take scale as an absolute value rather than a relative value.
- polygon_recoverer( ‘auto’ or None or imgaug.augmentables.polygons._ConcavePolygonRecoverer, optional):
The class to use to repair invalid polygons. If "auto", a new instance of imgaug.augmentables.polygons._ConcavePolygonRecoverer will be created. If None, no polygon recoverer will be used. If an object, then that object will be used and must provide a recover_from() method, similar to _ConcavePolygonRecoverer.
- Targets:
image, mask, keypoints, bboxes
- Image types:
uint8, float32
e.g.
import beacon_aug as BA
aug = BA.PiecewiseAffine(p=1)
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.Pixelization(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
augly
,imagenet_c
. Default: augly.
- Args:
if library =
augly
: (see:RandomPixelization
)
if library =
imagenet_c
: (see:imagenet_c
)e.g.
import beacon_aug as BA
aug = BA.Pixelization(p=1, corruption_name="pixelate", severity=1, library="imagenet_c")
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.Posterize(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
. Default: imgaug.
- Args:
if library =
imgaug
: (see:UniformColorQuantizationToNBits
) [Source]
- nb_bits( int or tuple of int or list of int or imgaug.parameters.StochasticParameter, optional):
Number of bits to keep in each image’s array component.
- to_colorspace( None or str or list of str or imgaug.parameters.StochasticParameter):
The colorspace in which to perform the quantization. See change_colorspace_() for valid values. This will be ignored for grayscale input images.
- max_size( None or int, optional):
Maximum image size at which to perform the augmentation. If the width or height of an image exceeds this value, it will be downscaled before running the augmentation so that the longest side matches max_size. This is done to speed up the augmentation. The final output image has the same size as the input image. Use None to apply no downscaling.
- interpolation( int or str, optional):
Interpolation method to use during downscaling when max_size is exceeded. Valid methods are the same as in imresize_single_image().
e.g.
import beacon_aug as BA
aug = BA.Posterize(p=1)
image_auged = aug(image=image)["image"]
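Keeping only the top nb_bits of each 8-bit component can be sketched in plain Python (an illustrative helper, not the imgaug implementation):

```python
def posterize_value(value, nb_bits):
    """Keep only the top `nb_bits` bits of an 8-bit component."""
    shift = 8 - nb_bits
    return (value >> shift) << shift

# With nb_bits=2 only four levels remain: 0, 64, 128, 192.
print(posterize_value(200, 2))  # → 192
```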
- class beacon_aug.operators.RGB2Gray(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
mmcv
. Default: mmcv.
- Args:
if library =
mmcv
: (see:RGB2Gray
) [Source]
- out_channels (int): Expected number of output channels after
transforming. Default: None.
- weights (tuple[float]): The weights to calculate the weighted mean.
Default: (0.299, 0.587, 0.114).
e.g.
import beacon_aug as BA
aug = BA.RGB2Gray(p=1)
image_auged = aug(image=image)["image"]
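The weighted-mean conversion with the documented default weights can be sketched in plain Python (an illustrative helper, not mmcv code):

```python
def rgb_to_gray(r, g, b, weights=(0.299, 0.587, 0.114)):
    """Weighted mean of the three channels, using the documented defaults."""
    wr, wg, wb = weights
    return wr * r + wg * g + wb * b

# Pure green appears much brighter than pure blue.
print(rgb_to_gray(0, 255, 0))  # ≈ 149.7
```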
- class beacon_aug.operators.RGBShift(library=None, *args, **kwargs)¶
Bases:
object
Randomly shift values for each channel of the input RGB image.
- Args:
- library (str): flag for library. Should be one of:
albumentations
. Default:
albumentations
.- r_shift_limit ((int, int) or int): range for changing values for the red channel. If r_shift_limit is a single
int, the range will be (-r_shift_limit, r_shift_limit). Default: (-20, 20).
- g_shift_limit ((int, int) or int): range for changing values for the green channel. If g_shift_limit is a
single int, the range will be (-g_shift_limit, g_shift_limit). Default: (-20, 20).
- b_shift_limit ((int, int) or int): range for changing values for the blue channel. If b_shift_limit is a single
int, the range will be (-b_shift_limit, b_shift_limit). Default: (-20, 20).
p (float): probability of applying the transform. Default: 0.5.
Targets: image
- Image types:
uint8, float32
e.g.
import beacon_aug as BA
aug = BA.RGBShift(p=1)
image_auged = aug(image=image)["image"]
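Shifting a channel value and clipping it back into the valid 8-bit range can be sketched in plain Python (an illustrative helper, not albumentations code):

```python
def shift_channel(value, shift):
    """Shift an 8-bit channel value and clip to the valid [0, 255] range."""
    return max(0, min(255, value + shift))

print(shift_channel(250, 20))  # → 255 (clipped)
```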
- class beacon_aug.operators.Rain(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
. Default: imgaug.
- Args:
if library =
imgaug
: (see:Rain
) [Source]- drop_size( number or tuple of number or list of number or imgaug.parameters.StochasticParameter):
See
RainLayer
.- speed( number or tuple of number or list of number or imgaug.parameters.StochasticParameter):
See
RainLayer
.
e.g.
import beacon_aug as BA
aug = BA.Rain(library="imgaug")
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.RainLayer(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
. Default: imgaug.
- Args:
if library =
imgaug
: (see:RainLayer
) [Source]- density( number or tuple of number or list of number or imgaug.parameters.StochasticParameter):
Same as in
SnowflakesLayer
.- density_uniformity( number or tuple of number or list of number or imgaug.parameters.StochasticParameter):
Same as in
SnowflakesLayer
.- drop_size( number or tuple of number or list of number or imgaug.parameters.StochasticParameter):
Same as flake_size in
SnowflakesLayer
.- drop_size_uniformity( number or tuple of number or list of number or imgaug.parameters.StochasticParameter):
Same as flake_size_uniformity in
SnowflakesLayer
.- angle( number or tuple of number or list of number or imgaug.parameters.StochasticParameter):
Same as in
SnowflakesLayer
.- speed( number or tuple of number or list of number or imgaug.parameters.StochasticParameter):
Same as in
SnowflakesLayer
.- blur_sigma_fraction( number or tuple of number or list of number or imgaug.parameters.StochasticParameter):
Same as in
SnowflakesLayer
.- blur_sigma_limits( tuple of float, optional):
Same as in
SnowflakesLayer
.
e.g.
import beacon_aug as BA
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.RandAugment(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
. Default: imgaug.
- Args:
if library =
imgaug
: (see:RandAugment
) [Source]- n( int or tuple of int or list of int or imgaug.parameters.StochasticParameter or None, optional):
Parameter
N
in the paper, i.e. number of transformations to apply. The paper suggests N=2 for ImageNet.
- m( int or tuple of int or list of int or imgaug.parameters.StochasticParameter or None, optional):
Parameter
M
in the paper, i.e. magnitude/severity/strength of the applied transformations in interval[0 .. 30]
withM=0
being the weakest. The paper suggests for ImageNetM=9
in case of ResNet-50 andM=28
in case of EfficientNet-B7. This implementation uses a default value of(6, 12)
, i.e. the value is uniformly sampled per image from the interval[6 .. 12]
. This ensures greater diversity of transformations than using a single fixed value.- cval( number or tuple of number or list of number or imgaug.ALL or imgaug.parameters.StochasticParameter, optional):
The constant value to use when filling in newly created pixels. See parameter fillcolor in
Affine
for details.
e.g.
import beacon_aug as BA
aug = BA.RandAugment(library="imgaug")
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.RandomAdjustSharpness(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
torchvision
. Default: torchvision.
- Args:
if library =
torchvision
: (see:RandomAdjustSharpness
) [Source]- sharpness_factor (float): How much to adjust the sharpness. Can be
any non negative number. 0 gives a blurred image, 1 gives the original image while 2 increases the sharpness by a factor of 2.
p (float): probability of the image being sharpness adjusted. Default value is 0.5
e.g.
import beacon_aug as BA
aug = BA.RandomAdjustSharpness(sharpness_factor=2, library="torchvision")
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.RandomApply(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
torchvision
. Default: torchvision.
- Args:
if library =
torchvision
: (see:RandomApply
) [Source]transforms (sequence or torch.nn.Module): list of transformations p (float): probability
e.g.
import beacon_aug as BA
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.RandomAspectRatio(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
augly
. Default: augly.
- Args:
if library =
augly
: (see:RandomAspectRatio
)
e.g.
import beacon_aug as BA
aug = BA.RandomAspectRatio(library="augly")
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.RandomAutocontrast(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
torchvision
. Default: torchvision.
- Args:
if library =
torchvision
: (see:RandomAutocontrast
) [Source]p (float): probability of the image being autocontrasted. Default value is 0.5
e.g.
import beacon_aug as BA
aug = BA.RandomAutocontrast(library="torchvision")
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.RandomBrightnessContrast(library=None, *args, **kwargs)¶
Bases:
object
Randomly change brightness and contrast of the input image.
- Args:
- library (str): flag for library. Should be one of:
albumentations
. Default:
albumentations
.- brightness_limit ((float, float) or float): factor range for changing brightness.
If limit is a single float, the range will be (-limit, limit). Default: (-0.2, 0.2).
- contrast_limit ((float, float) or float): factor range for changing contrast.
If limit is a single float, the range will be (-limit, limit). Default: (-0.2, 0.2).
- brightness_by_max (Boolean): If True adjust contrast by image dtype maximum,
else adjust contrast by image mean.
p (float): probability of applying the transform. Default: 0.5.
Targets: image
- Image types:
uint8, float32
e.g.
import beacon_aug as BA
aug = BA.RandomBrightnessContrast(p=1, library="albumentations")
image_auged = aug(image=image)["image"]
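A minimal single-value sketch of how the limits above are typically used: a contrast gain is drawn from 1 ± contrast_limit and a brightness offset from ± brightness_limit (scaled to the uint8 range), then the result is clipped. The exact formula here is an assumption for illustration, not albumentations' source; `brightness_contrast` is a hypothetical helper:

```python
import random

# Draw a contrast gain and a brightness offset from the documented limits,
# apply them to one uint8 value, and clip to [0, 255]. Illustration only.
def brightness_contrast(v, brightness_limit=0.2, contrast_limit=0.2, rng=random):
    alpha = 1.0 + rng.uniform(-contrast_limit, contrast_limit)   # contrast gain
    beta = rng.uniform(-brightness_limit, brightness_limit) * 255  # offset
    return int(min(255, max(0, alpha * v + beta)))
```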
- class beacon_aug.operators.RandomChoice(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
torchvision
. Default: torchvision.
- Args:
if library =
torchvision
: (see:RandomChoice
) [Source]Apply single transformation randomly picked from a list. This transform does not support torchscript.
e.g.
import beacon_aug as BA
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.RandomColorsBinaryImageColorizer(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
. Default: imgaug.
- Args:
if library =
imgaug
: (see:RandomColorsBinaryImageColorizer
) [Source]- color_true( int or tuple of int or list of int or imgaug.parameters.StochasticParameter, optional):
Color of the foreground, i.e. all pixels in binary images that are
True
. This parameter will be queried once per image to generate(3,)
samples denoting the color. (Note that even for grayscale images three values will be sampled and converted to grayscale according to0.299*R + 0.587*G + 0.114*B
. This is the same equation that is also used by OpenCV.)- color_false( int or tuple of int or list of int or imgaug.parameters.StochasticParameter, optional):
Analogous to color_true, but denotes the color for all pixels that are
False
in the binary input image.
e.g.
import beacon_aug as BA
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.RandomCrop(library=None, *args, **kwargs)¶
Bases:
object
Crop a random part of the input.
- Args:
- library (str): flag for library. Should be one of:
albumentations
,imgaug
,torchvision
,mmcv
. Default:
albumentations
.
height (int): height of the crop.
width (int): width of the crop.
p (float): probability of applying the transform. Default: 1.
if library =
imgaug
: (see:CropToFixedSize
) [Source]- width( int or None):
Crop images down to this maximum width. If
None
, image widths will not be altered.- height( int or None):
Crop images down to this maximum height. If
None
, image heights will not be altered.- position( {‘uniform’, ‘normal’, ‘center’, ‘left-top’, ‘left-center’, ‘left-bottom’, ‘center-top’, ‘center-center’, ‘center-bottom’, ‘right-top’, ‘right-center’, ‘right-bottom’} or tuple of float or StochasticParameter or tuple of StochasticParameter, optional):
Sets the center point of the cropping, which determines how the required cropping amounts are distributed to each side. For a
tuple
(a, b)
, botha
andb
are expected to be in range[0.0, 1.0]
and describe the fraction of cropping applied to the left/right (low/high values fora
) and the fraction of cropping applied to the top/bottom (low/high values forb
). A cropping position at(0.5, 0.5)
would be the center of the image and distribute the cropping equally over all sides. A cropping position at(1.0, 0.0)
would be the right-top and would apply 100% of the required cropping to the right and top sides of the image.
if library =
torchvision
: (see:RandomCrop
) [Source]- size (sequence or int): Desired output size of the crop. If size is an
int instead of sequence like (h, w), a square crop (size, size) is made. If provided a sequence of length 1, it will be interpreted as (size[0], size[0]).
- padding (int or sequence, optional): Optional padding on each border
of the image. Default is None. If a single int is provided this is used to pad all borders. If sequence of length 2 is provided this is the padding on left/right and top/bottom respectively. If a sequence of length 4 is provided this is the padding for the left, top, right and bottom borders respectively.
if library =
mmcv
: (see:RandomCrop
) [Source]crop_size (tuple): Expected size after cropping, (h, w). cat_max_ratio (float): The maximum ratio that single category could
occupy.
- Targets:
image, mask, bboxes, keypoints
- Image types:
uint8, float32
e.g.
import beacon_aug as BA
aug = BA.RandomCrop(p=1, height=64, width=64, library="albumentations")
image_auged = aug(image=image)["image"]
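The core of a uniform random crop, independent of any backend, is just picking a top-left corner so that a height × width window fits inside the image. A sketch on an image represented as a list of rows (`random_crop` is an illustrative helper, not library code):

```python
import random

# Choose a top-left corner uniformly so a height x width window fits,
# then slice the rows.
def random_crop(img, height, width, rng=random):
    top = rng.randint(0, len(img) - height)
    left = rng.randint(0, len(img[0]) - width)
    return [row[left:left + width] for row in img[top:top + height]]
```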
- class beacon_aug.operators.RandomCropNearBBox(library=None, *args, **kwargs)¶
Bases:
object
Crop bbox from image with random shift by x,y coordinates
- Args:
- library (str): flag for library. Should be one of:
albumentations
. Default:
albumentations
.
max_part_shift (float): float value in (0.0, 1.0) range. Default 0.3
p (float): probability of applying the transform. Default: 1.
Targets: image, mask, bboxes, keypoints
- Image types:
uint8, float32
e.g.
import beacon_aug as BA
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.RandomEmojiOverlay(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
augly
. Default: augly.
- Args:
if library =
augly
: (see:RandomEmojiOverlay
)
e.g.
import beacon_aug as BA
aug = BA.RandomEmojiOverlay(library="augly")
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.RandomFlip(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
mmcv
. Default: mmcv.
- Args:
if library =
mmcv
: (see:RandomFlip
) [Source]prob (float, optional): The flipping probability. Default: None. direction(str, optional): The flipping direction. Options are
‘horizontal’ and ‘vertical’. Default: ‘horizontal’.
e.g.
import beacon_aug as BA
aug = BA.RandomFlip(p=1, prob=1, library="mmcv")
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.RandomGamma(library=None, *args, **kwargs)¶
Bases:
object
- Args:
- library (str): flag for library. Should be one of:
albumentations
. Default:
albumentations
.- gamma_limit (float or (float, float)): If gamma_limit is a single float value,
the range will be (-gamma_limit, gamma_limit). Default: (80, 120).
eps: Deprecated.
Targets: image
- Image types:
uint8, float32
e.g.
import beacon_aug as BA
aug = BA.RandomGamma(p=1, library="albumentations")
image_auged = aug(image=image)["image"]
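A hedged sketch of the gamma mapping: gamma_limit is commonly interpreted as a percentage, so the default (80, 120) corresponds to gamma in [0.8, 1.2], with each value mapped through 255 * (v / 255) ** gamma. This interpretation is an assumption for illustration; `adjust_gamma` is a hypothetical helper:

```python
# Gamma correction of a single uint8 value; gamma given in percent.
def adjust_gamma(v, gamma_percent=100):
    gamma = gamma_percent / 100.0
    return round(255 * (v / 255) ** gamma)

print(adjust_gamma(128, 100))  # gamma 1.0 is the identity: 128
```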
- class beacon_aug.operators.RandomGrayscale(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
torchvision
. Default: torchvision.
- Args:
if library =
torchvision
: (see:RandomGrayscale
) [Source]p (float): probability that image should be converted to grayscale.
e.g.
import beacon_aug as BA
aug = BA.RandomGrayscale(library="torchvision")
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.RandomGridShuffle(library=None, *args, **kwargs)¶
Bases:
object
Random shuffle grid’s cells on image.
- Args:
- library (str): flag for library. Should be one of:
albumentations
. Default:
albumentations
.
grid ((int, int)): size of grid for splitting image.
Targets: image, mask
- Image types:
uint8, float32
e.g.
import beacon_aug as BA
aug = BA.RandomGridShuffle(p=1, library="albumentations")
image_auged = aug(image=image)["image"]
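The grid-shuffle idea reduces to permuting the order of the grid cells while leaving each cell's contents untouched. A sketch on a flat list of cells (`grid_shuffle` is an illustrative helper, not the albumentations implementation):

```python
import random

# Shuffle the order of grid cells; cell contents are unchanged.
def grid_shuffle(cells, rng=random):
    order = list(range(len(cells)))
    rng.shuffle(order)
    return [cells[i] for i in order]
```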
- class beacon_aug.operators.RandomNoise(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
augly
. Default: augly.
- Args:
if library =
augly
: (see:RandomNoise
)
e.g.
import beacon_aug as BA
aug = BA.RandomNoise(library="augly")
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.RandomOrder(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
torchvision
. Default: torchvision.
- Args:
if library =
torchvision
: (see:RandomOrder
) [Source]Apply a list of transformations in a random order. This transform does not support torchscript.
e.g.
import beacon_aug as BA
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.RandomPerspective(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
torchvision
. Default: torchvision.
- Args:
if library =
torchvision
: (see:RandomPerspective
) [Source]- distortion_scale (float): argument to control the degree of distortion and ranges from 0 to 1.
Default is 0.5.
p (float): probability of the image being transformed. Default is 0.5.
interpolation (InterpolationMode): Desired interpolation enum defined by
torchvision.transforms.InterpolationMode
. Default isInterpolationMode.BILINEAR
. If input is Tensor, onlyInterpolationMode.NEAREST
,InterpolationMode.BILINEAR
are supported. For backward compatibility integer values (e.g.PIL.Image.NEAREST
) are still acceptable.- fill (sequence or number): Pixel fill value for the area outside the transformed
image. Default is
0
. If given a number, the value is used for all bands respectively.
e.g.
import beacon_aug as BA
aug = BA.RandomPerspective(library="torchvision")
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.RandomRain(library=None, *args, **kwargs)¶
Bases:
object
Adds rain effects.
From https://github.com/UjjwalSaxena/Automold--Road-Augmentation-Library
- Args:
- library (str): flag for library. Should be one of:
albumentations
. Default:
albumentations
.
slant_lower: should be in range [-20, 20].
slant_upper: should be in range [-20, 20].
drop_length: should be in range [0, 100].
drop_width: should be in range [1, 5].
drop_color (list of (r, g, b)): rain lines color.
blur_value (int): rainy views are blurry.
brightness_coefficient (float): rainy days are usually shady. Should be in range [0, 1].
rain_type: One of [None, "drizzle", "heavy", "torrential"]
Targets: image
- Image types:
uint8, float32
e.g.
import beacon_aug as BA
aug = BA.RandomRain(p=1, library="albumentations")
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.RandomResizedCrop(library=None, *args, **kwargs)¶
Bases:
object
Torchvision’s variant of crop a random part of the input and rescale it to some size.
- Args:
- library (str): flag for library. Should be one of:
albumentations
,torchvision
. Default:
albumentations
.
height (int): height after crop and resize.
width (int): width after crop and resize.
scale ((float, float)): range of size of the origin size cropped
ratio ((float, float)): range of aspect ratio of the origin aspect ratio cropped
interpolation (OpenCV flag): flag that is used to specify the interpolation algorithm. Should be one of:
cv2.INTER_NEAREST, cv2.INTER_LINEAR, cv2.INTER_CUBIC, cv2.INTER_AREA, cv2.INTER_LANCZOS4. Default: cv2.INTER_LINEAR.
p (float): probability of applying the transform. Default: 1.
if library =
torchvision
: (see:RandomResizedCrop
) [Source]- size (int or sequence): expected output size of the crop, for each edge. If size is an
int instead of sequence like (h, w), a square output size
(size, size)
is made. If provided a sequence of length 1, it will be interpreted as (size[0], size[0]).
- Targets:
image, mask, bboxes, keypoints
- Image types:
uint8, float32
e.g.
import beacon_aug as BA
aug = BA.RandomResizedCrop(p=1, size=[64, 64], library="torchvision")
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.RandomRotate90(library=None, *args, **kwargs)¶
Bases:
object
Randomly rotate the input by 90 degrees zero or more times.
- Args:
- library (str): flag for library. Should be one of:
albumentations
. Default:
albumentations
.
p (float): probability of applying the transform. Default: 0.5.
Targets: image, mask, bboxes, keypoints
- Image types:
uint8, float32
e.g.
import beacon_aug as BA
aug = BA.RandomRotate90(p=1, library="albumentations")
image_auged = aug(image=image)["image"]
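"Rotate by 90 degrees zero or more times" amounts to applying a single 90-degree rotation k times with k drawn from {0, 1, 2, 3}. A sketch on a row-major matrix, using the reverse-then-transpose trick for one clockwise rotation (`rot90_cw` and `random_rotate90` are illustrative helpers):

```python
import random

# One clockwise 90-degree rotation: reverse the rows, then transpose.
def rot90_cw(m):
    return [list(row) for row in zip(*m[::-1])]

# Apply it a random number of times (0-3).
def random_rotate90(m, rng=random):
    for _ in range(rng.randint(0, 3)):
        m = rot90_cw(m)
    return m
```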
- class beacon_aug.operators.RandomScale(library=None, *args, **kwargs)¶
Bases:
object
Randomly resize the input. Output image size is different from the input image size.
- Args:
- library (str): flag for library. Should be one of:
albumentations
. Default:
albumentations
.- scale_limit ((float, float) or float): scaling factor range. If scale_limit is a single float value, the
range will be (1 - scale_limit, 1 + scale_limit). Default: (0.9, 1.1).
- interpolation (OpenCV flag): flag that is used to specify the interpolation algorithm. Should be one of:
cv2.INTER_NEAREST, cv2.INTER_LINEAR, cv2.INTER_CUBIC, cv2.INTER_AREA, cv2.INTER_LANCZOS4. Default: cv2.INTER_LINEAR.
p (float): probability of applying the transform. Default: 0.5.
Targets: image, mask, bboxes, keypoints
- Image types:
uint8, float32
e.g.
import beacon_aug as BA
aug = BA.RandomScale(p=1, library="albumentations")
image_auged = aug(image=image)["image"]
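How scale_limit maps to an output size can be sketched directly from the documented range: a single factor is drawn from (1 - scale_limit, 1 + scale_limit) and applied to both sides (`random_scaled_size` is an illustrative helper, not library code):

```python
import random

# Draw one scale factor from the documented range and rescale both sides.
def random_scaled_size(h, w, scale_limit=0.1, rng=random):
    factor = rng.uniform(1.0 - scale_limit, 1.0 + scale_limit)
    return round(h * factor), round(w * factor)
```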
- class beacon_aug.operators.RandomShadow(library=None, *args, **kwargs)¶
Bases:
object
Simulates shadows for the image
From https://github.com/UjjwalSaxena/Automold--Road-Augmentation-Library
- Args:
- library (str): flag for library. Should be one of:
albumentations
. Default:
albumentations
.- shadow_roi (float, float, float, float): region of the image where shadows
will appear (x_min, y_min, x_max, y_max). All values should be in range [0, 1].
- num_shadows_lower (int): Lower limit for the possible number of shadows.
Should be in range [0, num_shadows_upper].
- num_shadows_upper (int): Upper limit for the possible number of shadows.
Should be in range [num_shadows_lower, inf].
shadow_dimension (int): number of edges in the shadow polygons
Targets: image
- Image types:
uint8, float32
e.g.
import beacon_aug as BA
aug = BA.RandomShadow(p=1, library="albumentations")
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.RandomSizedBBoxSafeCrop(library=None, *args, **kwargs)¶
Bases:
object
Crop a random part of the input and rescale it to some size without loss of bboxes.
- Args:
- library (str): flag for library. Should be one of:
albumentations
. Default:
albumentations
.
height (int): height after crop and resize.
width (int): width after crop and resize.
erosion_rate (float): erosion rate applied on input image height before crop.
interpolation (OpenCV flag): flag that is used to specify the interpolation algorithm. Should be one of:
cv2.INTER_NEAREST, cv2.INTER_LINEAR, cv2.INTER_CUBIC, cv2.INTER_AREA, cv2.INTER_LANCZOS4. Default: cv2.INTER_LINEAR.
p (float): probability of applying the transform. Default: 1.
Targets: image, mask, bboxes
- Image types:
uint8, float32
e.g.
import beacon_aug as BA
aug = BA.RandomSizedBBoxSafeCrop(p=1, height=64, width=64, library="albumentations")
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.RandomSizedCrop(library=None, *args, **kwargs)¶
Bases:
object
Crop a random part of the input and rescale it to some size.
- Args:
- library (str): flag for library. Should be one of:
albumentations
,torchvision
. Default:
albumentations
.
min_max_height ((int, int)): crop size limits.
height (int): height after crop and resize.
width (int): width after crop and resize.
w2h_ratio (float): aspect ratio of crop.
interpolation (OpenCV flag): flag that is used to specify the interpolation algorithm. Should be one of:
cv2.INTER_NEAREST, cv2.INTER_LINEAR, cv2.INTER_CUBIC, cv2.INTER_AREA, cv2.INTER_LANCZOS4. Default: cv2.INTER_LINEAR.
p (float): probability of applying the transform. Default: 1.
if library =
torchvision
: (see:RandomResizedCrop
) [Source]- size (int or sequence): expected output size of the crop, for each edge. If size is an
int instead of sequence like (h, w), a square output size
(size, size)
is made. If provided a sequence of length 1, it will be interpreted as (size[0], size[0]).
- Targets:
image, mask, bboxes, keypoints
- Image types:
uint8, float32
e.g.
import beacon_aug as BA
aug = BA.RandomSizedCrop(p=1, height=64, width=64, library="albumentations")
aug = BA.RandomSizedCrop(p=1, size=[64, 64], library="torchvision")
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.RandomSunFlare(library=None, *args, **kwargs)¶
Bases:
object
Simulates Sun Flare for the image
From https://github.com/UjjwalSaxena/Automold--Road-Augmentation-Library
- Args:
- library (str): flag for library. Should be one of:
albumentations
. Default:
albumentations
.- flare_roi (float, float, float, float): region of the image where flare will
appear (x_min, y_min, x_max, y_max). All values should be in range [0, 1].
angle_lower (float): should be in range [0, angle_upper].
angle_upper (float): should be in range [angle_lower, 1].
num_flare_circles_lower (int): lower limit for the number of flare circles.
Should be in range [0, num_flare_circles_upper].
- num_flare_circles_upper (int): upper limit for the number of flare circles.
Should be in range [num_flare_circles_lower, inf].
src_radius (int):
src_color ((int, int, int)): color of the flare
Targets: image
- Image types:
uint8, float32
e.g.
import beacon_aug as BA
aug = BA.RandomSunFlare(p=1, library="albumentations")
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.RandomToneCurve(library=None, *args, **kwargs)¶
Bases:
object
Randomly change the relationship between bright and dark areas of the image by manipulating its tone curve.
- Args:
- library (str): flag for library. Should be one of:
albumentations
. Default:
albumentations
.- scale (float): standard deviation of the normal distribution.
Used to sample random distances to move two control points that modify the image’s curve. Values should be in range [0, 1]. Default: 0.1
Targets: image
- Image types:
uint8
e.g.
import beacon_aug as BA
aug = BA.RandomToneCurve(p=1, library="albumentations")
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.RegularGridMaskGen(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
. Default: imgaug.
- Args:
if library =
imgaug
: (see:RegularGridMaskGen
) [Source]- nb_rows( int or tuple of int or list of int or imgaug.parameters.StochasticParameter):
Number of rows of the regular grid.
- nb_cols( int or tuple of int or list of int or imgaug.parameters.StochasticParameter):
Number of columns of the checkerboard. Analogous to nb_rows.
- alpha( number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional):
Alpha value of each cell.
e.g.
import beacon_aug as BA
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.RegularGridPointsSampler(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
. Default: imgaug.
- Args:
if library =
imgaug
: (see:RegularGridPointsSampler
) [Source]- n_rows( int or tuple of int or list of int or imgaug.parameters.StochasticParameter, optional):
Number of rows of coordinates to place on each image, i.e. the number of coordinates on the y-axis. Note that for each image, the sampled value is clipped to the interval
[1..H]
, whereH
is the image height.- n_cols( int or tuple of int or list of int or imgaug.parameters.StochasticParameter, optional):
Number of columns of coordinates to place on each image, i.e. the number of coordinates on the x-axis. Note that for each image, the sampled value is clipped to the interval
[1..W]
, whereW
is the image width.
e.g.
import beacon_aug as BA
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.RegularGridVoronoi(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
. Default: imgaug.
- Args:
if library =
imgaug
: (see:RegularGridVoronoi
) [Source]- n_rows( int or tuple of int or list of int or imgaug.parameters.StochasticParameter, optional):
Number of rows of coordinates to place on each image, i.e. the number of coordinates on the y-axis. Note that for each image, the sampled value is clipped to the interval
[1..H]
, whereH
is the image height.- n_cols( int or tuple of int or list of int or imgaug.parameters.StochasticParameter, optional):
Number of columns of coordinates to place on each image, i.e. the number of coordinates on the x-axis. Note that for each image, the sampled value is clipped to the interval
[1..W]
, whereW
is the image width.- p_drop_points( number or tuple of number or imgaug.parameters.StochasticParameter, optional):
The probability that a coordinate will be removed from the list of all sampled coordinates. A value of
1.0
would mean that (on average)100
percent of all coordinates will be dropped, while0.0
denotes0
percent. Note that this sampler will always ensure that at least one coordinate is left after the dropout operation, i.e. even1.0
will only drop all except one coordinate.- p_replace( number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional):
Defines for any segment the probability that the pixels within that segment are replaced by their average color (otherwise, the pixels are not changed). Examples:
- max_size( int or None, optional):
Maximum image size at which the augmentation is performed. If the width or height of an image exceeds this value, it will be downscaled before the augmentation so that the longest side matches max_size. This is done to speed up the process. The final output image has the same size as the input image. Note that in case p_replace is below
1.0
, the down-/upscaling will affect the not-replaced pixels too. UseNone
to apply no down-/upscaling.- interpolation( int or str, optional):
Interpolation method to use during downscaling when max_size is exceeded. Valid methods are the same as in
imresize_single_image()
.
e.g.
import beacon_aug as BA
aug = BA.RegularGridVoronoi(library="imgaug")
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.RelativeRegularGridPointsSampler(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
. Default: imgaug.
- Args:
if library =
imgaug
: (see:RelativeRegularGridPointsSampler
) [Source]- n_rows_frac( number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional):
Relative number of coordinates to place on the y-axis. For a value
y
and image heightH
the number of actually placed coordinates (i.e. computed rows) is given byint(round(y*H))
. Note that for each image, the number of coordinates is clipped to the interval[1,H]
, whereH
is the image height.- n_cols_frac( number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional):
Relative number of coordinates to place on the x-axis. For a value
x
and image heightW
the number of actually placed coordinates (i.e. computed columns) is given byint(round(x*W))
. Note that for each image, the number of coordinates is clipped to the interval[1,W]
, whereW
is the image width.
e.g.
import beacon_aug as BA
image_auged = aug(image=image)["image"]
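The row/column counts follow directly from the documented formula: int(round(frac * size)), clipped to [1, size]. As a sketch (`n_points` is an illustrative helper name):

```python
# Documented count of grid coordinates along one axis:
# int(round(frac * size)), clipped to the interval [1, size].
def n_points(frac, size):
    return min(size, max(1, int(round(frac * size))))

print(n_points(0.1, 100))  # -> 10
```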
- class beacon_aug.operators.RelativeRegularGridVoronoi(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
. Default: imgaug.
- Args:
if library =
imgaug
: (see:RelativeRegularGridVoronoi
) [Source]- n_rows_frac( number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional):
Relative number of coordinates to place on the y-axis. For a value
y
and image heightH
the number of actually placed coordinates (i.e. computed rows) is given byint(round(y*H))
. Note that for each image, the number of coordinates is clipped to the interval[1,H]
, whereH
is the image height.- n_cols_frac( number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional):
Relative number of coordinates to place on the x-axis. For a value
x
and image heightW
the number of actually placed coordinates (i.e. computed columns) is given byint(round(x*W))
. Note that for each image, the number of coordinates is clipped to the interval[1,W]
, whereW
is the image width.- p_drop_points( number or tuple of number or imgaug.parameters.StochasticParameter, optional):
The probability that a coordinate will be removed from the list of all sampled coordinates. A value of
1.0
would mean that (on average)100
percent of all coordinates will be dropped, while0.0
denotes0
percent. Note that this sampler will always ensure that at least one coordinate is left after the dropout operation, i.e. even1.0
will only drop all except one coordinate.- p_replace( number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional):
Defines for any segment the probability that the pixels within that segment are replaced by their average color (otherwise, the pixels are not changed). Examples:
- max_size( int or None, optional):
Maximum image size at which the augmentation is performed. If the width or height of an image exceeds this value, it will be downscaled before the augmentation so that the longest side matches max_size. This is done to speed up the process. The final output image has the same size as the input image. Note that in case p_replace is below
1.0
, the down-/upscaling will affect the not-replaced pixels too. UseNone
to apply no down-/upscaling.- interpolation( int or str, optional):
Interpolation method to use during downscaling when max_size is exceeded. Valid methods are the same as in
imresize_single_image()
.
e.g.
import beacon_aug as BA
aug = BA.RelativeRegularGridVoronoi(library="imgaug")
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.RemoveCBAsByOutOfImageFraction(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
. Default: imgaug.
- Args:
if library =
imgaug
: (see:RemoveCBAsByOutOfImageFraction
) [Source]- fraction( number):
Remove any augmentable for which
fraction_{actual} >= fraction
, wherefraction_{actual}
denotes the estimated out of image fraction.
e.g.
import beacon_aug as BA
aug = BA.RemoveCBAsByOutOfImageFraction(fraction=0.5, library="imgaug")
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.RemoveSaturation(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
,. Default: imgaug.
- Args:
if library =
imgaug
: (see:RemoveSaturation
) [Source]- mul( number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional):
Inverse multiplier to use for the saturation values. High values denote stronger color removal. E.g.
1.0
will remove all saturation,0.0
will remove nothing. Expected value range is[0.0, 1.0]
.
e.g.
import beacon_aug as BA
aug = BA.RemoveSaturation(p=1, library="imgaug")
image_auged = aug(image=image)["image"]
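The `mul` semantics above can be illustrated with the standard library's `colorsys`. Treating the operation as scaling HSV saturation by `(1 - mul)` is one reading of the documented behaviour, not imgaug's actual code:

```python
import colorsys

def remove_saturation(rgb, mul):
    """Scale HSV saturation by (1 - mul): mul=1.0 removes all color,
    mul=0.0 leaves the pixel unchanged (assumed semantics)."""
    h, s, v = colorsys.rgb_to_hsv(*rgb)
    return colorsys.hsv_to_rgb(h, s * (1.0 - mul), v)

print(remove_saturation((1.0, 0.0, 0.0), 1.0))  # (1.0, 1.0, 1.0): pure red turns gray
```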
- library (str): flag for library. Should be one of:
- class beacon_aug.operators.ReplaceElementwise(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
,. Default: imgaug.
- Args:
if library =
imgaug
: (see:ReplaceElementwise
) [Source]- mask( float or tuple of float or list of float or imgaug.parameters.StochasticParameter):
Mask that indicates the pixels that are supposed to be replaced. The mask will be binarized using a threshold of
0.5
. A value of1
then indicates a pixel that is supposed to be replaced.- replacement( number or tuple of number or list of number or imgaug.parameters.StochasticParameter):
The replacement to use at all locations that are marked as
1
in the mask.- per_channel( bool or float or imgaug.parameters.StochasticParameter, optional):
Whether to use (imagewise) the same sample(s) for all channels (
False
) or to sample value(s) for each channel (True
). Setting this toTrue
will therefore lead to different transformations per image and channel, otherwise only per image. If this value is a floatp
, then forp
percent of all images per_channel will be treated asTrue
. If it is aStochasticParameter
it is expected to produce samples with values between0.0
and1.0
, where values>0.5
will lead to per-channel behaviour (i.e. same asTrue
).
e.g.
import beacon_aug as BA
aug = BA.ReplaceElementwise(p=1, mask=0.05, replacement=0, library="imgaug")
image_auged = aug(image=image)["image"]
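The mask semantics documented above (binarize at 0.5, replace where the mask is 1) can be rendered directly in NumPy; this is an illustrative sketch, not imgaug's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(4, 4), dtype=np.uint8)

# A soft mask is binarized at 0.5; entries above the threshold mark pixels to replace.
mask = rng.random((4, 4))
replacement = 0
out = np.where(mask > 0.5, replacement, image)
```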
- library (str): flag for library. Should be one of:
- class beacon_aug.operators.Rerange(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
mmcv
,. Default: mmcv.
- Args:
if library =
mmcv
: (see:Rerange
) [Source]- min_value (float or int): Minimum value of the reranged image.
Default: 0.
- max_value (float or int): Maximum value of the reranged image.
Default: 255.
e.g.
import beacon_aug as BA
aug = BA.Rerange(p=1, min_value=0, max_value=255, library="mmcv")
image_auged = aug(image=image)["image"]
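Reranging amounts to a linear remap of the image's own value range onto [min_value, max_value]; the formula below is one reading of the documented behaviour, not mmcv's source:

```python
import numpy as np

def rerange(img, min_value=0.0, max_value=255.0):
    """Linearly remap the image from its own [min, max] onto
    [min_value, max_value] (assumed behaviour, not mmcv code)."""
    img = img.astype(np.float32)
    lo, hi = img.min(), img.max()
    assert hi > lo, "a constant image cannot be reranged"
    return (img - lo) / (hi - lo) * (max_value - min_value) + min_value

y = rerange(np.array([[10.0, 20.0], [30.0, 50.0]]), 0, 255)
print(y.min(), y.max())  # 0.0 255.0
```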
- library (str): flag for library. Should be one of:
- class beacon_aug.operators.Resize(library=None, *args, **kwargs)¶
Bases:
object
Resize the input to the given height and width.
- Args:
- library (str): flag for library. Should be one of:
albumentations
,imgaug
,torchvision
,augly
,mmcv
. Default:
albumentations
.
height (int): desired height of the output. width (int): desired width of the output. interpolation (OpenCV flag): flag that is used to specify the interpolation algorithm. Should be one of:
cv2.INTER_NEAREST, cv2.INTER_LINEAR, cv2.INTER_CUBIC, cv2.INTER_AREA, cv2.INTER_LANCZOS4. Default: cv2.INTER_LINEAR.
p (float): probability of applying the transform. Default: 1.
if library =
imgaug
: (see:Resize
) [Source]- size( ‘keep’ or int or float or tuple of int or tuple of float or list of int or list of float or imgaug.parameters.StochasticParameter or dict):
The new size of the images.
- interpolation( imgaug.ALL or int or str or list of int or list of str or imgaug.parameters.StochasticParameter, optional):
Interpolation to use.
- library (str): flag for library. Should be one of:
if library =
torchvision
: (see:Resize
) [Source]- size (sequence or int): Desired output size. If size is a sequence like
(h, w), output size will be matched to this. If size is an int, the smaller edge of the image will be matched to this number. i.e., if height > width, then image will be rescaled to (size * height / width, size).
if library =
augly
: (see:Resize
) if library =mmcv
: (see:Resize
) [Source]- img_scale (tuple or list[tuple]): Images scales for resizing.
Default: None.
- multiscale_mode (str): Either “range” or “value”.
Default: ‘range’
- ratio_range (tuple[float]): (min_ratio, max_ratio).
Default: None
- keep_ratio (bool): Whether to keep the aspect ratio when resizing the
image. Default: True
- Targets:
image, mask, bboxes, keypoints
- Image types:
uint8, float32
e.g.
import beacon_aug as BA
import cv2
aug = BA.Resize(p=1, height=64, width=64, interpolation=cv2.INTER_AREA, library="albumentations")
aug = BA.Resize(p=1, interpolation="area", library="imgaug")
image_auged = aug(image=image)["image"]
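For the torchvision backend, the int-size rule quoted above (smaller edge matched, aspect ratio preserved) works out as follows; rounding to the nearest integer is an assumption for illustration, and `torchvision_resize_shape` is a hypothetical helper:

```python
def torchvision_resize_shape(h, w, size):
    """Output (h, w) for an int `size`: the smaller edge becomes `size`
    and the other edge scales to preserve the aspect ratio."""
    if h > w:
        return round(size * h / w), size
    return size, round(size * w / h)

print(torchvision_resize_shape(480, 640, 256))  # (256, 341)
```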
- class beacon_aug.operators.Rot90(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
,. Default: imgaug.
- Args:
if library =
imgaug
: (see:Rot90
) [Source]- k( int or list of int or tuple of int or imgaug.ALL or imgaug.parameters.StochasticParameter, optional):
How often to rotate clockwise by 90 degrees.
- keep_size( bool, optional):
After rotation by an odd-valued k (e.g. 1 or 3), the resulting image may have a different height/width than the original image. If this parameter is set to
True
, then the rotated image will be resized to the input image’s size. Note that this might also cause the augmented image to look distorted.
e.g.
import beacon_aug as BA
aug = BA.Rot90(p=1, k=1, library="imgaug")
image_auged = aug(image=image)["image"]
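The keep_size note above follows from the shape change that an odd number of 90-degree turns causes, which NumPy shows directly:

```python
import numpy as np

image = np.zeros((32, 64), dtype=np.uint8)  # H=32, W=64
rotated = np.rot90(image)  # any odd number of 90-degree turns swaps H and W
print(rotated.shape)  # (64, 32)
# With keep_size=True, imgaug resizes this back to the original (32, 64),
# which is why the result can look distorted.
```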
- library (str): flag for library. Should be one of:
- class beacon_aug.operators.Rotate(library=None, *args, **kwargs)¶
Bases:
object
Rotate the input by an angle selected randomly from the uniform distribution.
- Args:
- library (str): flag for library. Should be one of:
albumentations
,imgaug
,torchvision
,keras
,augly
,mmcv
. Default:
albumentations
.- limit ((int, int) or int): range from which a random angle is picked. If limit is a single int
an angle is picked from (-limit, limit). Default: (-90, 90)
- interpolation (OpenCV flag): flag that is used to specify the interpolation algorithm. Should be one of:
cv2.INTER_NEAREST, cv2.INTER_LINEAR, cv2.INTER_CUBIC, cv2.INTER_AREA, cv2.INTER_LANCZOS4. Default: cv2.INTER_LINEAR.
- border_mode (OpenCV flag): flag that is used to specify the pixel extrapolation method. Should be one of:
cv2.BORDER_CONSTANT, cv2.BORDER_REPLICATE, cv2.BORDER_REFLECT, cv2.BORDER_WRAP, cv2.BORDER_REFLECT_101. Default: cv2.BORDER_REFLECT_101
value (int, float, list of ints, list of float): padding value if border_mode is cv2.BORDER_CONSTANT. mask_value (int, float,
list of ints, list of float): padding value if border_mode is cv2.BORDER_CONSTANT applied for masks.
p (float): probability of applying the transform. Default: 0.5.
if library =
imgaug
: (see:Rotate
) [Source]- rotate( number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional):
Rotation in degrees (NOT radians), i.e. expected value range is around
[-360, 360]
. Rotation happens around the center of the image, not the top left corner as in some other frameworks.- order( int or iterable of int or imgaug.ALL or imgaug.parameters.StochasticParameter, optional):
Interpolation order to use. Same meaning as in
skimage
:- cval( number or tuple of number or list of number or imgaug.ALL or imgaug.parameters.StochasticParameter, optional):
The constant value to use when filling in newly created pixels. (E.g. translating by 1px to the right will create a new 1px-wide column of pixels on the left of the image). The value is only used when mode=constant. The expected value range is
[0, 255]
foruint8
images. It may be a float value.- mode( str or list of str or imgaug.ALL or imgaug.parameters.StochasticParameter, optional):
Method to use when filling in newly created pixels. Same meaning as in
skimage
(andnumpy.pad()
):- fit_output( bool, optional):
Whether to modify the affine transformation so that the whole output image is always contained in the image plane (
True
) or accept parts of the image being outside the image plane (False
). This can be thought of as first applying the affine transformation and then applying a second transformation to “zoom in” on the new image so that it fits the image plane. This is useful to avoid corners of the image being outside of the image plane after applying rotations. It will, however, negate translation and scaling. Note also that activating this may lead to image sizes differing from the input image sizes. To avoid this, wrapAffine
inKeepSizeByResize
, e.g.KeepSizeByResize(Affine(...))
.- backend( str, optional):
Framework to use as a backend. Valid values are
auto
,skimage
(scikit-image’s warp) andcv2
(OpenCV’s warp). Ifauto
is used, the augmenter will automatically try to usecv2
whenever possible (order must be in[0, 1, 3]
). It will silently fall back to skimage if order/dtype is not supported by cv2. cv2 is generally faster than skimage. It also supports RGB cvals, while skimage will resort to intensity cvals (i.e. 3x the same value as RGB). Ifcv2
is chosen and order is2
or4
, it will automatically fall back to order3
.
- library (str): flag for library. Should be one of:
if library =
torchvision
: (see:RandomRotation
) [Source]- degrees (sequence or number): Range of degrees to select from.
If degrees is a number instead of sequence like (min, max), the range of degrees will be (-degrees, +degrees).
- interpolation (InterpolationMode): Desired interpolation enum defined by
torchvision.transforms.InterpolationMode
. Default isInterpolationMode.NEAREST
. If input is Tensor, onlyInterpolationMode.NEAREST
,InterpolationMode.BILINEAR
are supported. For backward compatibility integer values (e.g.PIL.Image.NEAREST
) are still acceptable.- expand (bool, optional): Optional expansion flag.
If true, expands the output to make it large enough to hold the entire rotated image. If false or omitted, make the output image the same size as the input image. Note that the expand flag assumes rotation around the center and no translation.
- center (sequence, optional): Optional center of rotation, (x, y). Origin is the upper left corner.
Default is the center of the image.
- fill (sequence or number): Pixel fill value for the area outside the rotated
image. Default is
0
. If given a number, the value is used for all bands respectively.- resample (int, optional): deprecated argument and will be removed since v0.10.0.
Please use the
interpolation
parameter instead.
if library =
keras
: (see:apply_affine_transform
) [Source]x: 2D numpy array, single image. theta: Rotation angle in degrees. tx: Width shift. ty: Heigh shift. shear: Shear angle in degrees. zx: Zoom in x direction. zy: Zoom in y direction row_axis: Index of axis for rows in the input image. col_axis: Index of axis for columns in the input image. channel_axis: Index of axis for channels in the input image. fill_mode: Points outside the boundaries of the input
are filled according to the given mode (one of {‘constant’, ‘nearest’, ‘reflect’, ‘wrap’}).
- cval: Value used for points outside the boundaries
of the input if mode=’constant’.
order: int, order of interpolation
if library =
augly
: (see:RandomRotation
) if library =mmcv
: (see:RandomRotate
) [Source]prob (float): The rotation probability. degree (float, tuple[float]): Range of degrees to select from. If
degree is a number instead of tuple like (min, max), the range of degree will be (
-degree
,+degree
)pad_val (float, optional): Padding value of image. Default: 0. seg_pad_val (float, optional): Padding value of segmentation map.
Default: 255.
- center (tuple[float], optional): Center point (w, h) of the rotation in
the source image. If not specified, the center of the image will be used. Default: None.
- auto_bound (bool): Whether to adjust the image size to cover the whole
rotated image. Default: False
- Targets:
image, mask, bboxes, keypoints
- Image types:
uint8, float32
e.g.
import beacon_aug as BA
aug = BA.Rotate(p=1, limit=[-90, 90], library="albumentations")
aug = BA.Rotate(p=1, channel_axis=2, library="keras")
aug = BA.Rotate(p=1, prob=1, library="mmcv")
image_auged = aug(image=image)["image"]
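The `limit` convention above (a single number expands to a symmetric range) can be sketched as follows; `sample_angle` is a hypothetical helper, not beacon_aug API:

```python
import random

def sample_angle(limit, rng=None):
    """A single number n means the range (-n, n); a (min, max) pair is used as-is."""
    rng = rng or random.Random(0)
    lo, hi = (-limit, limit) if isinstance(limit, (int, float)) else limit
    return rng.uniform(lo, hi)

angles = [sample_angle(90, random.Random(i)) for i in range(5)]
print(all(-90 <= a <= 90 for a in angles))  # True
```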
- class beacon_aug.operators.SafeRotate(library=None, *args, **kwargs)¶
Bases:
object
Rotate the input inside the input’s frame by an angle selected randomly from the uniform distribution.
The resulting image may have artifacts in it. After rotation, the image may have a different aspect ratio, and after resizing, it returns to its original shape with the original aspect ratio of the image. For these reasons we may see some artifacts.
- Args:
- library (str): flag for library. Should be one of:
albumentations
. Default:
albumentations
.- limit ((int, int) or int): range from which a random angle is picked. If limit is a single int
an angle is picked from (-limit, limit). Default: (-90, 90)
- interpolation (OpenCV flag): flag that is used to specify the interpolation algorithm. Should be one of:
cv2.INTER_NEAREST, cv2.INTER_LINEAR, cv2.INTER_CUBIC, cv2.INTER_AREA, cv2.INTER_LANCZOS4. Default: cv2.INTER_LINEAR.
- border_mode (OpenCV flag): flag that is used to specify the pixel extrapolation method. Should be one of:
cv2.BORDER_CONSTANT, cv2.BORDER_REPLICATE, cv2.BORDER_REFLECT, cv2.BORDER_WRAP, cv2.BORDER_REFLECT_101. Default: cv2.BORDER_REFLECT_101
value (int, float, list of ints, list of float): padding value if border_mode is cv2.BORDER_CONSTANT. mask_value (int, float,
list of ints, list of float): padding value if border_mode is cv2.BORDER_CONSTANT applied for masks.
p (float): probability of applying the transform. Default: 0.5.
Targets: image, mask, bboxes, keypoints
- library (str): flag for library. Should be one of:
- Image types:
uint8, float32
e.g.
import beacon_aug as BA
aug = BA.SafeRotate(p=1, limit=[-90, 90], library="albumentations")
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.Salt(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
,. Default: imgaug.
- Args:
if library =
imgaug
: (see:Salt
) [Source]- per_channel( bool or float or imgaug.parameters.StochasticParameter, optional):
Whether to use (imagewise) the same sample(s) for all channels (
False
) or to sample value(s) for each channel (True
). Setting this toTrue
will therefore lead to different transformations per image and channel, otherwise only per image. If this value is a floatp
, then forp
percent of all images per_channel will be treated asTrue
. If it is aStochasticParameter
it is expected to produce samples with values between0.0
and1.0
, where values>0.5
will lead to per-channel behaviour (i.e. same asTrue
).
e.g.
import beacon_aug as BA
aug = BA.Salt(p=1, library="imgaug")
image_auged = aug(image=image)["image"]
- library (str): flag for library. Should be one of:
- class beacon_aug.operators.SaltAndPepper(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
,. Default: imgaug.
- Args:
if library =
imgaug
: (see:SaltAndPepper
) [Source]- per_channel( bool or float or imgaug.parameters.StochasticParameter, optional):
Whether to use (imagewise) the same sample(s) for all channels (
False
) or to sample value(s) for each channel (True
). Setting this toTrue
will therefore lead to different transformations per image and channel, otherwise only per image. If this value is a floatp
, then forp
percent of all images per_channel will be treated asTrue
. If it is aStochasticParameter
it is expected to produce samples with values between0.0
and1.0
, where values>0.5
will lead to per-channel behaviour (i.e. same asTrue
).
e.g.
import beacon_aug as BA
aug = BA.SaltAndPepper(p=1, library="imgaug")
image_auged = aug(image=image)["image"]
- library (str): flag for library. Should be one of:
- class beacon_aug.operators.Saturation(library=None, *args, **kwargs)¶
Bases:
object
Randomly changes the brightness, contrast, and saturation of an image. Compared to ColorJitter from torchvision, this transform gives slightly different results because Pillow (used in torchvision) and OpenCV (used in Albumentations) convert an image to HSV format with different formulas. Another difference: Pillow uses uint8 overflow, while Albumentations uses value saturation.
- Args:
- library (str): flag for library. Should be one of:
albumentations
,imgaug
,torchvision
,augly
,imagenet_c
. Default:
albumentations
.- brightness (float or tuple of float (min, max)): How much to jitter brightness.
brightness_factor is chosen uniformly from [max(0, 1 - brightness), 1 + brightness] or the given [min, max]. Should be non negative numbers.
- contrast (float or tuple of float (min, max)): How much to jitter contrast.
contrast_factor is chosen uniformly from [max(0, 1 - contrast), 1 + contrast] or the given [min, max]. Should be non negative numbers.
- saturation (float or tuple of float (min, max)): How much to jitter saturation.
saturation_factor is chosen uniformly from [max(0, 1 - saturation), 1 + saturation] or the given [min, max]. Should be non negative numbers.
- hue (float or tuple of float (min, max)): How much to jitter hue.
hue_factor is chosen uniformly from [-hue, hue] or the given [min, max]. Should have 0 <= hue <= 0.5 or -0.5 <= min <= max <= 0.5.
- library (str): flag for library. Should be one of:
e.g.
import beacon_aug as BA
aug = BA.Saturation(p=1, saturation=0.2, library="albumentations")
aug = BA.Saturation(p=1, corruption_name="saturate", severity=1, library="imagenet_c")
image_auged = aug(image=image)["image"]
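The factor ranges documented above all follow the same ColorJitter convention; `jitter_range` below is a hypothetical helper showing how a single float expands to a range:

```python
def jitter_range(value):
    """ColorJitter-style factor range: a single float v expands to
    [max(0, 1 - v), 1 + v]; an explicit (min, max) pair is kept as-is."""
    if isinstance(value, (int, float)):
        return max(0.0, 1.0 - value), 1.0 + value
    return tuple(value)

print(jitter_range(0.5))  # (0.5, 1.5)
print(jitter_range(1.5))  # (0.0, 2.5)
```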
- class beacon_aug.operators.SaveDebugImageEveryNBatches(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
,. Default: imgaug.
- Args:
if library =
imgaug
: (see:SaveDebugImageEveryNBatches
) [Source]- destination( str or _IImageDestination):
Path to a folder. The saved images will follow a filename pattern of
batch_<batch_id>.png
. The latest image will additionally be saved tolatest.png
.- interval( int):
Interval in batches. If set to
N
, everyN
th batch an image will be generated and saved, starting with the first observed batch. Note that the augmenter only counts batches that it sees. If it is executed conditionally or re-instantiated, it may not see all batches or the counter may be wrong in other ways.
e.g.
import beacon_aug as BA
image_auged = aug(image=image)["image"]
- library (str): flag for library. Should be one of:
- class beacon_aug.operators.Scale(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
torchvision
,augly
,. Default: torchvision.
- Args:
if library =
torchvision
: (see:Resize
) [Source]- size (sequence or int): Desired output size. If size is a sequence like
(h, w), output size will be matched to this. If size is an int, the smaller edge of the image will be matched to this number. i.e., if height > width, then image will be rescaled to (size * height / width, size).
if library =
augly
: (see:Scale
)
e.g.
import beacon_aug as BA
aug = BA.Scale(p=1, size=[64, 64], library="torchvision")
image_auged = aug(image=image)["image"]
- library (str): flag for library. Should be one of:
- class beacon_aug.operators.ScaleX(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
,. Default: imgaug.
- Args:
if library =
imgaug
: (see:ScaleX
) [Source]- scale( number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional):
Analogous to
scale
inAffine
, except that this scale value only affects the x-axis. No dictionary input is allowed.- order( int or iterable of int or imgaug.ALL or imgaug.parameters.StochasticParameter, optional):
Interpolation order to use. Same meaning as in
skimage
:- cval( number or tuple of number or list of number or imgaug.ALL or imgaug.parameters.StochasticParameter, optional):
The constant value to use when filling in newly created pixels. (E.g. translating by 1px to the right will create a new 1px-wide column of pixels on the left of the image). The value is only used when mode=constant. The expected value range is
[0, 255]
foruint8
images. It may be a float value.- mode( str or list of str or imgaug.ALL or imgaug.parameters.StochasticParameter, optional):
Method to use when filling in newly created pixels. Same meaning as in
skimage
(andnumpy.pad()
):- fit_output( bool, optional):
Whether to modify the affine transformation so that the whole output image is always contained in the image plane (
True
) or accept parts of the image being outside the image plane (False
). This can be thought of as first applying the affine transformation and then applying a second transformation to “zoom in” on the new image so that it fits the image plane. This is useful to avoid corners of the image being outside of the image plane after applying rotations. It will, however, negate translation and scaling. Note also that activating this may lead to image sizes differing from the input image sizes. To avoid this, wrapAffine
inKeepSizeByResize
, e.g.KeepSizeByResize(Affine(...))
.- backend( str, optional):
Framework to use as a backend. Valid values are
auto
,skimage
(scikit-image’s warp) andcv2
(OpenCV’s warp). Ifauto
is used, the augmenter will automatically try to usecv2
whenever possible (order must be in[0, 1, 3]
). It will silently fall back to skimage if order/dtype is not supported by cv2. cv2 is generally faster than skimage. It also supports RGB cvals, while skimage will resort to intensity cvals (i.e. 3x the same value as RGB). Ifcv2
is chosen and order is2
or4
, it will automatically fall back to order3
.
e.g.
import beacon_aug as BA
aug = BA.ScaleX(p=1, scale=[0.5, 1.5], library="imgaug")
image_auged = aug(image=image)["image"]
- library (str): flag for library. Should be one of:
- class beacon_aug.operators.ScaleY(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
,. Default: imgaug.
- Args:
if library =
imgaug
: (see:ScaleY
) [Source]- scale( number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional):
Analogous to
scale
inAffine
, except that this scale value only affects the y-axis. No dictionary input is allowed.- order( int or iterable of int or imgaug.ALL or imgaug.parameters.StochasticParameter, optional):
Interpolation order to use. Same meaning as in
skimage
:- cval( number or tuple of number or list of number or imgaug.ALL or imgaug.parameters.StochasticParameter, optional):
The constant value to use when filling in newly created pixels. (E.g. translating by 1px to the right will create a new 1px-wide column of pixels on the left of the image). The value is only used when mode=constant. The expected value range is
[0, 255]
foruint8
images. It may be a float value.- mode( str or list of str or imgaug.ALL or imgaug.parameters.StochasticParameter, optional):
Method to use when filling in newly created pixels. Same meaning as in
skimage
(andnumpy.pad()
):- fit_output( bool, optional):
Whether to modify the affine transformation so that the whole output image is always contained in the image plane (
True
) or accept parts of the image being outside the image plane (False
). This can be thought of as first applying the affine transformation and then applying a second transformation to “zoom in” on the new image so that it fits the image plane. This is useful to avoid corners of the image being outside of the image plane after applying rotations. It will, however, negate translation and scaling. Note also that activating this may lead to image sizes differing from the input image sizes. To avoid this, wrapAffine
inKeepSizeByResize
, e.g.KeepSizeByResize(Affine(...))
.- backend( str, optional):
Framework to use as a backend. Valid values are
auto
,skimage
(scikit-image’s warp) andcv2
(OpenCV’s warp). Ifauto
is used, the augmenter will automatically try to usecv2
whenever possible (order must be in[0, 1, 3]
). It will silently fall back to skimage if order/dtype is not supported by cv2. cv2 is generally faster than skimage. It also supports RGB cvals, while skimage will resort to intensity cvals (i.e. 3x the same value as RGB). Ifcv2
is chosen and order is2
or4
, it will automatically fall back to order3
.
e.g.
import beacon_aug as BA
aug = BA.ScaleY(p=1, scale=[0.5, 1.5], library="imgaug")
image_auged = aug(image=image)["image"]
- library (str): flag for library. Should be one of:
- class beacon_aug.operators.SegMapClassIdsMaskGen(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
,. Default: imgaug.
- Args:
if library =
imgaug
: (see:SegMapClassIdsMaskGen
) [Source]- class_ids( int or tuple of int or list of int or imgaug.parameters.StochasticParameter):
Segmentation map classes to mark in the produced mask.
- nb_sample_classes( None or tuple of int or list of int or imgaug.parameters.StochasticParameter, optional):
Number of class ids to sample (with replacement) per segmentation map. As sampling happens with replacement, fewer unique class ids may be sampled.
e.g.
import beacon_aug as BA
image_auged = aug(image=image)["image"]
- library (str): flag for library. Should be one of:
- class beacon_aug.operators.SegRescale(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
mmcv
,. Default: mmcv.
- Args:
if library =
mmcv
: (see:SegRescale
) [Source]scale_factor (float): The scale factor of the final output.
e.g.
import beacon_aug as BA
aug = BA.SegRescale(p=1, scale_factor=0.5, library="mmcv")
image_auged = aug(image=image)["image"]
- library (str): flag for library. Should be one of:
- class beacon_aug.operators.Sequential(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
,. Default: imgaug.
- Args:
if library =
imgaug
: (see:Sequential
) [Source]- children( imgaug.augmenters.meta.Augmenter or list of imgaug.augmenters.meta.Augmenter or None, optional):
The augmenters to apply to images.
- random_order( bool, optional):
Whether to apply the child augmenters in random order. If
True
, the order will be randomly sampled once per batch.
e.g.
import beacon_aug as BA
image_auged = aug(image=image)["image"]
- library (str): flag for library. Should be one of:
- class beacon_aug.operators.Sharpen(library=None, *args, **kwargs)¶
Bases:
object
Sharpen the input image and overlay the result with the original image.
- Args:
- library (str): flag for library. Should be one of:
albumentations
,imgaug
,torchvision
,augly
. Default:
albumentations
.- alpha ((float, float)): range to choose the visibility of the sharpened image. At 0, only the original image is
visible, at 1.0 only its sharpened version is visible. Default: (0.2, 0.5).
lightness ((float, float)): range to choose the lightness of the sharpened image. Default: (0.5, 1.0). p (float): probability of applying the transform. Default: 0.5.
if library =
imgaug
: (see:Sharpen
) [Source]- alpha( number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional):
Blending factor of the sharpened image. At
0.0
, only the original image is visible, at1.0
only its sharpened version is visible.- lightness( number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional):
Lightness/brightness of the sharpened image. Sane values are somewhere in the interval
[0.5, 2.0]
. The value0.0
results in an edge map. Values higher than1.0
create bright images. Default value is1.0
.
- library (str): flag for library. Should be one of:
if library =
torchvision
: (see:adjust_sharpness
) [Source]- img (PIL Image or Tensor): Image to be adjusted.
If img is torch Tensor, it is expected to be in […, 1 or 3, H, W] format, where … means it can have an arbitrary number of leading dimensions.
- sharpness_factor (float): How much to adjust the sharpness. Can be
any non negative number. 0 gives a blurred image, 1 gives the original image while 2 increases the sharpness by a factor of 2.
if library =
augly
: (see:Sharpen
) Targets: image
e.g.
import beacon_aug as BA
aug = BA.Sharpen(p=1, alpha=[0.2, 0.5], lightness=[0.5, 1.0], library="albumentations")
image_auged = aug(image=image)["image"]
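The `alpha` parameter above describes simple visibility blending between the original and the sharpened image, which NumPy can sketch directly; `blend` is an illustrative helper, not library code:

```python
import numpy as np

def blend(original, sharpened, alpha):
    """Visibility blending as documented for `alpha`: 0.0 keeps the
    original image, 1.0 shows only the sharpened version."""
    return (1.0 - alpha) * original + alpha * sharpened

orig = np.full((2, 2), 100.0)
sharp = np.full((2, 2), 160.0)
print(blend(orig, sharp, 0.5))  # every pixel 130.0
```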
- class beacon_aug.operators.ShearX(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
,. Default: imgaug.
- Args:
if library =
imgaug
: (see:ShearX
) [Source]- shear( number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional):
Analogous to
shear
inAffine
, except that this shear value only affects the x-axis. No dictionary input is allowed.- order( int or iterable of int or imgaug.ALL or imgaug.parameters.StochasticParameter, optional):
Interpolation order to use. Same meaning as in
skimage
:- cval( number or tuple of number or list of number or imgaug.ALL or imgaug.parameters.StochasticParameter, optional):
The constant value to use when filling in newly created pixels. (E.g. translating by 1px to the right will create a new 1px-wide column of pixels on the left of the image). The value is only used when mode=constant. The expected value range is
[0, 255]
foruint8
images. It may be a float value.- mode( str or list of str or imgaug.ALL or imgaug.parameters.StochasticParameter, optional):
Method to use when filling in newly created pixels. Same meaning as in
skimage
(andnumpy.pad()
):- fit_output( bool, optional):
Whether to modify the affine transformation so that the whole output image is always contained in the image plane (
True
) or accept parts of the image being outside the image plane (False
). This can be thought of as first applying the affine transformation and then applying a second transformation to “zoom in” on the new image so that it fits the image plane. This is useful to avoid corners of the image being outside of the image plane after applying rotations. It will, however, negate translation and scaling. Note also that activating this may lead to image sizes differing from the input image sizes. To avoid this, wrapAffine
inKeepSizeByResize
, e.g.KeepSizeByResize(Affine(...))
.- backend( str, optional):
Framework to use as a backend. Valid values are
auto
,skimage
(scikit-image’s warp) andcv2
(OpenCV’s warp). Ifauto
is used, the augmenter will automatically try to usecv2
whenever possible (order must be in[0, 1, 3]
). It will silently fall back to skimage if order/dtype is not supported by cv2. cv2 is generally faster than skimage. It also supports RGB cvals, while skimage will resort to intensity cvals (i.e. 3x the same value as RGB). Ifcv2
is chosen and order is2
or4
, it will automatically fall back to order3
.
e.g.
import beacon_aug as BA
aug = BA.ShearX(p=1, library="imgaug")
image_auged = aug(image=image)["image"]
- library (str): flag for library. Should be one of:
- class beacon_aug.operators.ShearY(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
,. Default: imgaug.
- Args:
if library =
imgaug
: (see:ShearY
) [Source]- shear( number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional):
Analogous to
shear
inAffine
, except that this shear value only affects the y-axis. No dictionary input is allowed.- order( int or iterable of int or imgaug.ALL or imgaug.parameters.StochasticParameter, optional):
Interpolation order to use. Same meaning as in
skimage
:- cval( number or tuple of number or list of number or imgaug.ALL or imgaug.parameters.StochasticParameter, optional):
The constant value to use when filling in newly created pixels. (E.g. translating by 1px to the right will create a new 1px-wide column of pixels on the left of the image). The value is only used when mode=constant. The expected value range is
[0, 255]
foruint8
images. It may be a float value.- mode( str or list of str or imgaug.ALL or imgaug.parameters.StochasticParameter, optional):
Method to use when filling in newly created pixels. Same meaning as in
skimage
(andnumpy.pad()
):- fit_output( bool, optional):
Whether to modify the affine transformation so that the whole output image is always contained in the image plane (
True
) or accept parts of the image being outside the image plane (False
). This can be thought of as first applying the affine transformation and then applying a second transformation to “zoom in” on the new image so that it fits the image plane. This is useful to avoid corners of the image being outside of the image plane after applying rotations. It will, however, negate translation and scaling. Note also that activating this may lead to image sizes differing from the input image sizes. To avoid this, wrapAffine
inKeepSizeByResize
, e.g.KeepSizeByResize(Affine(...))
.- backend( str, optional):
Framework to use as a backend. Valid values are
auto
,skimage
(scikit-image’s warp) andcv2
(OpenCV’s warp). Ifauto
is used, the augmenter will automatically try to usecv2
whenever possible (order must be in[0, 1, 3]
). It will silently fall back to skimage if order/dtype is not supported by cv2. cv2 is generally faster than skimage. It also supports RGB cvals, while skimage will resort to intensity cvals (i.e. 3x the same value as RGB). Ifcv2
is chosen and order is2
or4
, it will automatically fall back to order 3
.
e.g.
import beacon_aug as BA
aug = BA.ShearY(library="imgaug")
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.ShiftScaleRotate(library=None, *args, **kwargs)¶
Bases:
object
Randomly apply affine transforms: translate, scale and rotate the input.
- Args:
- library (str): flag for library. Should be one of:
albumentations
. Default:
albumentations
.- shift_limit ((float, float) or float): shift factor range for both height and width. If shift_limit
is a single float value, the range will be (-shift_limit, shift_limit). Absolute values for lower and upper bounds should lie in range [0, 1]. Default: (-0.0625, 0.0625).
- scale_limit ((float, float) or float): scaling factor range. If scale_limit is a single float value, the
range will be (-scale_limit, scale_limit). Default: (-0.1, 0.1).
- rotate_limit ((int, int) or int): rotation range. If rotate_limit is a single int value, the
range will be (-rotate_limit, rotate_limit). Default: (-45, 45).
- interpolation (OpenCV flag): flag that is used to specify the interpolation algorithm. Should be one of:
cv2.INTER_NEAREST, cv2.INTER_LINEAR, cv2.INTER_CUBIC, cv2.INTER_AREA, cv2.INTER_LANCZOS4. Default: cv2.INTER_LINEAR.
- border_mode (OpenCV flag): flag that is used to specify the pixel extrapolation method. Should be one of:
cv2.BORDER_CONSTANT, cv2.BORDER_REPLICATE, cv2.BORDER_REFLECT, cv2.BORDER_WRAP, cv2.BORDER_REFLECT_101. Default: cv2.BORDER_REFLECT_101
- value (int, float, list of int, list of float): padding value if border_mode is cv2.BORDER_CONSTANT.
- mask_value (int, float,
list of int, list of float): padding value if border_mode is cv2.BORDER_CONSTANT applied for masks.
- shift_limit_x ((float, float) or float): shift factor range for width. If it is set then this value
instead of shift_limit will be used for shifting width. If shift_limit_x is a single float value, the range will be (-shift_limit_x, shift_limit_x). Absolute values for lower and upper bounds should lie in the range [0, 1]. Default: None.
- shift_limit_y ((float, float) or float): shift factor range for height. If it is set then this value
instead of shift_limit will be used for shifting height. If shift_limit_y is a single float value, the range will be (-shift_limit_y, shift_limit_y). Absolute values for lower and upper bounds should lie in the range [0, 1]. Default: None.
p (float): probability of applying the transform. Default: 0.5.
Targets: image, mask, keypoints
- Image types:
uint8, float32
e.g.
import beacon_aug as BA
aug = BA.ShiftScaleRotate(library="albumentations")
image_auged = aug(image=image)["image"]
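The single-value-to-range convention used by shift_limit, scale_limit and rotate_limit above can be sketched with a small hypothetical helper (illustrative only, not part of beacon_aug):

```python
def to_symmetric_range(limit):
    # The docstring convention: a single number x becomes the symmetric
    # range (-x, x); an explicit (low, high) pair is passed through.
    if isinstance(limit, (int, float)):
        return (-limit, limit)
    return tuple(limit)

shift_range = to_symmetric_range(0.0625)    # (-0.0625, 0.0625)
rotate_range = to_symmetric_range((-10, 30))  # (-10, 30)
```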
- class beacon_aug.operators.ShotNoise(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imagenet_c
. Default: imagenet_c.
- Args:
if library =
imagenet_c
: (see:imagenet_c
)
e.g.
import beacon_aug as BA
aug = BA.ShotNoise(p=1, corruption_name="shot_noise", severity=1, library="imagenet_c")
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.ShufflePixels(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
augly
. Default: augly.
- Args:
if library =
augly
: (see:ShufflePixels
)
e.g.
import beacon_aug as BA
aug = BA.ShufflePixels(library="augly")
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.SigmoidContrast(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
. Default: imgaug.
- Args:
if library =
imgaug
: (see:SigmoidContrast
) [Source]- gain( number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional):
Multiplier for the sigmoid function’s output. Higher values lead to quicker changes from dark to light pixels.
- cutoff( number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional):
Cutoff that shifts the sigmoid function in horizontal direction. Higher values mean that the switch from dark to light pixels happens later, i.e. the pixels will remain darker.
- per_channel( bool or float, optional):
Whether to use the same value for all channels (
False
) or to sample a new value for each channel (True
). If this value is a floatp
, then forp
percent of all images per_channel will be treated asTrue
, otherwise asFalse
.
e.g.
import beacon_aug as BA
aug = BA.SigmoidContrast(library="imgaug")
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.SkinTone(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
custom
. Default: custom.
- Args:
if library =
custom
: (see:SkinTone
) [Source]- grid_size: The size of the grid to hide.
None (default): randomly choose a grid size from [0, 16, 32, 44, 56]; int: a fixed grid size; tuple/list: randomly choose a grid size from the input
e.g.
import beacon_aug as BA
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.SmallestMaxSize(library=None, *args, **kwargs)¶
Bases:
object
Rescale an image so that minimum side is equal to max_size, keeping the aspect ratio of the initial image.
- Args:
- library (str): flag for library. Should be one of:
albumentations
. Default:
albumentations
.
max_size (int): maximum size of smallest side of the image after the transformation.
interpolation (OpenCV flag): interpolation method. Default: cv2.INTER_LINEAR.
p (float): probability of applying the transform. Default: 1.
Targets: image, mask, bboxes, keypoints
- Image types:
uint8, float32
e.g.
import beacon_aug as BA
aug = BA.SmallestMaxSize(library="albumentations")
image_auged = aug(image=image)["image"]
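The output size implied by the description above (“minimum side is equal to max_size, keeping the aspect ratio”) works out as in this small sketch; the helper name is hypothetical, not a beacon_aug function:

```python
def smallest_max_size_dims(h, w, max_size):
    # Rescale so the smaller side equals max_size, preserving aspect ratio.
    scale = max_size / min(h, w)
    return round(h * scale), round(w * scale)

dims = smallest_max_size_dims(480, 640, 256)  # (256, 341)
```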
- class beacon_aug.operators.Snow(library=None, *args, **kwargs)¶
Bases:
object
Bleach out some pixel values simulating snow.
From https://github.com/UjjwalSaxena/Automold--Road-Augmentation-Library
- Args:
- library (str): flag for library. Should be one of:
albumentations
,imagenet_c
. Default:
albumentations
.
snow_point_lower (float): lower bound of the amount of snow. Should be in [0, 1] range
snow_point_upper (float): upper bound of the amount of snow. Should be in [0, 1] range
brightness_coeff (float): larger number will lead to more snow on the image. Should be >= 0
if library =
imagenet_c
: (see:imagenet_c
)
- Targets:
image
- Image types:
uint8, float32
e.g.
import beacon_aug as BA
aug = BA.Snow(p=1, corruption_name="snow", severity=1, library="imagenet_c")
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.Snowflakes(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
. Default: imgaug.
- Args:
if library =
imgaug
: (see:Snowflakes
) [Source]- density( number or tuple of number or list of number or imgaug.parameters.StochasticParameter):
Density of the snowflake layer, as a probability of each pixel in low resolution space to be a snowflake. Valid values are in the interval
[0.0, 1.0]
. Recommended to be in the interval[0.01, 0.075]
.- density_uniformity( number or tuple of number or list of number or imgaug.parameters.StochasticParameter):
Size uniformity of the snowflakes. Higher values denote more similarly sized snowflakes. Valid values are in the interval
[0.0, 1.0]
. Recommended to be around0.5
.- flake_size( number or tuple of number or list of number or imgaug.parameters.StochasticParameter):
Size of the snowflakes. This parameter controls the resolution at which snowflakes are sampled. Higher values mean that the resolution is closer to the input image’s resolution and hence each sampled snowflake will be smaller (because of the smaller pixel size).
- flake_size_uniformity( number or tuple of number or list of number or imgaug.parameters.StochasticParameter):
Controls the size uniformity of the snowflakes. Higher values mean that the snowflakes are more similarly sized. Valid values are in the interval
[0.0, 1.0]
. Recommended to be around0.5
.- angle( number or tuple of number or list of number or imgaug.parameters.StochasticParameter):
Angle in degrees of motion blur applied to the snowflakes, where
0.0
is motion blur that points straight upwards. Recommended to be in the interval[-30, 30]
. See also__init__()
.- speed( number or tuple of number or list of number or imgaug.parameters.StochasticParameter):
Perceived falling speed of the snowflakes. This parameter controls the motion blur’s kernel size. It follows roughly the form
kernel_size = image_size * speed
. Hence, values around1.0
denote that the motion blur should “stretch” each snowflake over the whole image.
e.g.
import beacon_aug as BA
aug = BA.Snowflakes(library="imgaug")
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.SnowflakesLayer(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
. Default: imgaug.
- Args:
if library =
imgaug
: (see:SnowflakesLayer
) [Source]- density( number or tuple of number or list of number or imgaug.parameters.StochasticParameter):
Density of the snowflake layer, as a probability of each pixel in low resolution space to be a snowflake. Valid values are in the interval
[0.0, 1.0]
. Recommended to be in the interval[0.01, 0.075]
.- density_uniformity( number or tuple of number or list of number or imgaug.parameters.StochasticParameter):
Size uniformity of the snowflakes. Higher values denote more similarly sized snowflakes. Valid values are in the interval
[0.0, 1.0]
. Recommended to be around0.5
.- flake_size( number or tuple of number or list of number or imgaug.parameters.StochasticParameter):
Size of the snowflakes. This parameter controls the resolution at which snowflakes are sampled. Higher values mean that the resolution is closer to the input image’s resolution and hence each sampled snowflake will be smaller (because of the smaller pixel size).
- flake_size_uniformity( number or tuple of number or list of number or imgaug.parameters.StochasticParameter):
Controls the size uniformity of the snowflakes. Higher values mean that the snowflakes are more similarly sized. Valid values are in the interval
[0.0, 1.0]
. Recommended to be around0.5
.- angle( number or tuple of number or list of number or imgaug.parameters.StochasticParameter):
Angle in degrees of motion blur applied to the snowflakes, where
0.0
is motion blur that points straight upwards. Recommended to be in the interval[-30, 30]
. See also__init__()
.- speed( number or tuple of number or list of number or imgaug.parameters.StochasticParameter):
Perceived falling speed of the snowflakes. This parameter controls the motion blur’s kernel size. It follows roughly the form
kernel_size = image_size * speed
. Hence, values around1.0
denote that the motion blur should “stretch” each snowflake over the whole image.- blur_sigma_fraction( number or tuple of number or list of number or imgaug.parameters.StochasticParameter):
Standard deviation (as a fraction of the image size) of gaussian blur applied to the snowflakes. Valid values are in the interval
[0.0, 1.0]
. Recommended to be in the interval[0.0001, 0.001]
. May still require tinkering based on image size.- blur_sigma_limits( tuple of float, optional):
Controls allowed min and max values of blur_sigma_fraction after(!) multiplication with the image size. First value is the minimum, second value is the maximum. Values outside of that range will be clipped to be within that range. This prevents extreme values for very small or large images.
e.g.
import beacon_aug as BA
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.Solarize(library=None, *args, **kwargs)¶
Bases:
object
Invert all pixel values above a threshold.
- Args:
- library (str): flag for library. Should be one of:
albumentations
,imgaug
,torchvision
. Default:
albumentations
.
threshold ((int, int) or int, or (float, float) or float): range for solarizing threshold. If threshold is a single value, the range will be [threshold, threshold]. Default: 128.
p (float): probability of applying the transform. Default: 0.5.
if library =
imgaug
: (see:Solarize
) [Source]- per_channel( bool or float or imgaug.parameters.StochasticParameter, optional):
See
Invert
.- min_value( None or number, optional):
See
Invert
.- max_value( None or number, optional):
See
Invert
.- threshold( None or number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional):
See
Invert
.- invert_above_threshold( bool or float or imgaug.parameters.StochasticParameter, optional):
See
Invert
.
if library =
torchvision
: (see:RandomSolarize
) [Source]- threshold (float): all pixels equal or above this value are inverted.
- p (float): probability of the image being color inverted. Default value is 0.5
- Targets:
image
- Image types:
any
e.g.
import beacon_aug as BA
aug = BA.Solarize(p=1, threshold=128, library="albumentations")
image_auged = aug(image=image)["image"]
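The solarize operation itself can be sketched in NumPy. This is an illustrative stand-in for the library call, not its implementation; it follows the “equal or above” convention quoted for torchvision:

```python
import numpy as np

def solarize(img, threshold=128):
    # Invert all uint8 pixel values at or above the threshold.
    img = np.asarray(img, dtype=np.uint8)
    return np.where(img >= threshold, 255 - img, img).astype(np.uint8)

out = solarize(np.array([0, 127, 128, 255], dtype=np.uint8))  # [0, 127, 127, 0]
```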
- class beacon_aug.operators.SomeColorsMaskGen(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
. Default: imgaug.
- Args:
if library =
imgaug
: (see:SomeColorsMaskGen
) [Source]- nb_bins( int or tuple of int or list of int or imgaug.parameters.StochasticParameter, optional):
Number of bins. For
B
bins, each bin denotes roughly360/B
degrees of colors in the hue channel. Lower values lead to a coarser selection of colors. Expected value range is[2, 256]
.- smoothness( number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional):
Strength of the 1D gaussian kernel applied to the sampled binwise alpha values. Larger values will lead to more similar grayscaling of neighbouring colors. Expected value range is
[0.0, 1.0]
.- alpha( number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional):
Parameter to sample binwise alpha blending factors from. Expected value range is
[0.0, 1.0]
. Note that the alpha values will be smoothed between neighbouring bins. Hence, it is usually a good idea to set this so that the probability distribution peaks are around0.0
and1.0
, e.g. via a list[0.0, 1.0]
or aBeta
distribution. It is not recommended to set this to a deterministic value, otherwise all bins and hence all pixels in the generated mask will have the same value.- rotation_deg( number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional):
Rotational shift of each bin as a fraction of
360
degrees. E.g.0.0
will not shift any bins, while a value of0.5
will shift by around180
degrees. This shift is mainly used so that the0th
bin does not always start at0deg
. Expected value range is[-360, 360]
. This parameter can usually be kept at the default value.
e.g.
import beacon_aug as BA
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.SomeOf(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
. Default: imgaug.
- Args:
if library =
imgaug
: (see:SomeOf
) [Source]- n( int or tuple of int or list of int or imgaug.parameters.StochasticParameter or None, optional):
Count of augmenters to apply.
- children( imgaug.augmenters.meta.Augmenter or list of imgaug.augmenters.meta.Augmenter or None, optional):
The augmenters to apply to images. If this is a list of augmenters, it will be converted to a
Sequential
.- random_order( boolean, optional):
Whether to apply the child augmenters in random order. If
True
, the order will be randomly sampled once per batch.
e.g.
import beacon_aug as BA
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.Sometimes(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
. Default: imgaug.
- Args:
if library =
imgaug
: (see:Sometimes
) [Source]- then_list( None or imgaug.augmenters.meta.Augmenter or list of imgaug.augmenters.meta.Augmenter, optional):
Augmenter(s) to apply to p% percent of all images. If this is a list of augmenters, it will be converted to a
Sequential
.- else_list( None or imgaug.augmenters.meta.Augmenter or list of imgaug.augmenters.meta.Augmenter, optional):
Augmenter(s) to apply to
(1-p)
percent of all images. These augmenters will be applied only when the ones in then_list are not applied (either-or-relationship). If this is a list of augmenters, it will be converted to aSequential
.
e.g.
import beacon_aug as BA
image_auged = aug(image=image)["image"]
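The either-or relationship between then_list and else_list can be sketched in plain Python; the function is a hypothetical stand-in, not the imgaug implementation:

```python
import random

def sometimes(p, then_fn, else_fn, x, rng=None):
    # With probability p apply then_fn, otherwise else_fn
    # (stand-ins for then_list / else_list above).
    rng = rng or random.Random(0)
    return then_fn(x) if rng.random() < p else else_fn(x)

always = sometimes(1.0, lambda v: v + 1, lambda v: v - 1, 10)  # 11
never = sometimes(0.0, lambda v: v + 1, lambda v: v - 1, 10)   # 9
```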
- class beacon_aug.operators.Spatter(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imagenet_c
. Default: imagenet_c.
- Args:
if library =
imagenet_c
: (see:imagenet_c
)
e.g.
import beacon_aug as BA
aug = BA.Spatter(p=1, corruption_name="spatter", severity=1, library="imagenet_c")
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.SpeckleNoise(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imagenet_c
. Default: imagenet_c.
- Args:
if library =
imagenet_c
: (see:imagenet_c
)
e.g.
import beacon_aug as BA
aug = BA.SpeckleNoise(p=1, corruption_name="speckle_noise", severity=1, library="imagenet_c")
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.StochasticParameterMaskGen(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
. Default: imgaug.
- Args:
if library =
imgaug
: (see:StochasticParameterMaskGen
) [Source]- parameter( imgaug.parameters.StochasticParameter):
Stochastic parameter to draw mask samples from. Expected to return values in interval
[0.0, 1.0]
(not all stochastic parameters do that) and must be able to handle sampling shapes(H, W)
and(H, W, C)
(all stochastic parameters should do that).- per_channel( bool or float or imgaug.parameters.StochasticParameter, optional):
Whether to use the same mask for all channels (
False
) or to sample a new mask for each channel (True
). If this value is a floatp
, then forp
percent of all rows (i.e. images) per_channel will be treated asTrue
, otherwise asFalse
.
e.g.
import beacon_aug as BA
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.SubsamplingPointsSampler(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
. Default: imgaug.
- Args:
if library =
imgaug
: (see:SubsamplingPointsSampler
) [Source]- other_points_sampler( IPointsSampler):
Another point sampler that is queried to generate a
list
of points. The dropout operation will be applied to thatlist
.- n_points_max( int):
Maximum number of allowed points. If other_points_sampler generates more points than this maximum, a random subset of size n_points_max will be selected.
e.g.
import beacon_aug as BA
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.Superpixels(library=None, *args, **kwargs)¶
Bases:
object
Transform images partially/completely to their superpixel representation. This implementation uses skimage’s version of the SLIC algorithm.
- Args:
- library (str): flag for library. Should be one of:
albumentations
,imgaug
. Default:
albumentations
.- p_replace (float or tuple of float): Defines for any segment the probability that the pixels within that
segment are replaced by their average color (otherwise, the pixels are not changed). Examples:
A probability of
0.0
would mean that the pixels in no segment are replaced by their average color (image is not changed at all). A probability of
0.5
would mean that around half of all segments are replaced by their average color. A probability of
1.0
would mean that all segments are replaced by their average color (resulting in a Voronoi image).
- Behaviour based on chosen data types for this parameter:
If a
float
, then that float value
will always be used. If a
tuple
(a, b)
, then a random probability will be sampled from the interval[a, b]
per image.
- n_segments (int, or tuple of int): Rough target number of how many superpixels to generate (the algorithm
may deviate from this number). Lower values will lead to coarser superpixels. Higher values are computationally more intensive and will hence lead to a slowdown. If a single
int
, then that value will always be used as the number of segments.
If a
tuple
(a, b)
, then a value from the discrete interval[a..b]
will be sampled per image.
- max_size (int or None): Maximum image size at which the augmentation is performed.
If the width or height of an image exceeds this value, it will be downscaled before the augmentation so that the longest side matches max_size. This is done to speed up the process. The final output image has the same size as the input image. Note that in case p_replace is below
1.0
, the down-/upscaling will affect the not-replaced pixels too. UseNone
to apply no down-/upscaling.- interpolation (OpenCV flag): flag that is used to specify the interpolation algorithm. Should be one of:
cv2.INTER_NEAREST, cv2.INTER_LINEAR, cv2.INTER_CUBIC, cv2.INTER_AREA, cv2.INTER_LANCZOS4. Default: cv2.INTER_LINEAR.
p (float): probability of applying the transform. Default: 0.5.
if library =
imgaug
: (see:Superpixels
) [Source]- p_replace( number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional):
Defines for any segment the probability that the pixels within that segment are replaced by their average color (otherwise, the pixels are not changed). Examples:
- n_segments( int or tuple of int or list of int or imgaug.parameters.StochasticParameter, optional):
Rough target number of how many superpixels to generate (the algorithm may deviate from this number). Lower value will lead to coarser superpixels. Higher values are computationally more intensive and will hence lead to a slowdown.
- max_size( int or None, optional):
Maximum image size at which the augmentation is performed. If the width or height of an image exceeds this value, it will be downscaled before the augmentation so that the longest side matches max_size. This is done to speed up the process. The final output image has the same size as the input image. Note that in case p_replace is below
1.0
, the down-/upscaling will affect the not-replaced pixels too. UseNone
to apply no down-/upscaling.- interpolation( int or str, optional):
Interpolation method to use during downscaling when max_size is exceeded. Valid methods are the same as in
imresize_single_image()
.
- library (str): flag for library. Should be one of:
- Targets:
image
e.g.
import beacon_aug as BA
aug = BA.Superpixels(library="albumentations")
image_auged = aug(image=image)["image"]
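The p_replace behaviour (replace each segment's pixels by their average color with some probability) can be sketched in NumPy. The label map stands in for the SLIC output and the helper name is hypothetical, not part of any of the wrapped libraries:

```python
import numpy as np

def replace_segments_with_mean(img, labels, p_replace, rng=None):
    # For each segment in the label map, replace its pixels by their
    # average color with probability p_replace.
    rng = rng or np.random.default_rng(0)
    out = img.astype(np.float64)
    for seg in np.unique(labels):
        if rng.random() < p_replace:
            mask = labels == seg
            out[mask] = out[mask].mean(axis=0)
    return out.astype(img.dtype)

img = np.array([[10, 20], [30, 40]], dtype=np.uint8)[..., None].repeat(3, axis=-1)
labels = np.array([[0, 0], [1, 1]])
out = replace_segments_with_mean(img, labels, p_replace=1.0)
# out[0, 0, 0] == 15 and out[1, 0, 0] == 35 (segment means)
```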
- class beacon_aug.operators.TenCrop(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
torchvision
. Default: torchvision.
- Args:
if library =
torchvision
: (see:TenCrop
) [Source]- size (sequence or int): Desired output size of the crop. If size is an
int instead of sequence like (h, w), a square crop (size, size) is made. If provided a sequence of length 1, it will be interpreted as (size[0], size[0]).
- vertical_flip (bool): Use vertical flipping instead of horizontal
e.g.
import beacon_aug as BA
image_auged = aug(image=image)["image"]
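The ten crops themselves (four corners, center, and the same five on a horizontally flipped copy) can be sketched in NumPy, assuming an H x W x C array; this is illustrative only, not the torchvision implementation:

```python
import numpy as np

def ten_crops(img, c):
    # Four corner crops plus a center crop, then the same five on a
    # horizontally flipped copy -- ten crops in total.
    h, w = img.shape[:2]
    i, j = (h - c) // 2, (w - c) // 2
    five = [img[:c, :c], img[:c, w - c:], img[h - c:, :c],
            img[h - c:, w - c:], img[i:i + c, j:j + c]]
    return five + [np.fliplr(cr) for cr in five]

crops = ten_crops(np.arange(8 * 8 * 3, dtype=np.uint8).reshape(8, 8, 3), 4)
# len(crops) == 10, each crop of shape (4, 4, 3)
```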
- class beacon_aug.operators.Tensor(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
torchvision
. Default: torchvision.
- Args:
if library =
torchvision
: (see:Tensor
) [Source]
e.g.
import beacon_aug as BA
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.TextFlow(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
custom
. Default: custom.
- Args:
if library =
custom
: (see:TextFlow
) [Source]- text (str): overlay text
- x (int or list): value or range of the x_coordinate of text,
Default: None. (random select in range of image)
- y (int or list): value or range of the y_coordinate of text,
Default: None. (random select in range of image)
e.g.
import beacon_aug as BA
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.ToFloat(library=None, *args, **kwargs)¶
Bases:
object
Divide pixel values by max_value to get a float32 output array where all values lie in the range [0, 1.0]. If max_value is None the transform will try to infer the maximum value by inspecting the data type of the input image.
- See Also:
FromFloat
- Args:
- library (str): flag for library. Should be one of:
albumentations
. Default:
albumentations
.
max_value (float): maximum possible input value. Default: None.
p (float): probability of applying the transform. Default: 1.0.
Targets: image
- Image types:
any type
e.g.
import beacon_aug as BA
aug = BA.ToFloat(library="albumentations")
image_auged = aug(image=image)["image"]
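The ToFloat behaviour described above (divide by max_value, inferring it from the dtype when None) reduces to a small NumPy sketch; the helper is hypothetical, not the albumentations code:

```python
import numpy as np

def to_float(img, max_value=None):
    # Divide by max_value to get float32 in [0, 1]; infer it from the
    # integer dtype when max_value is None.
    if max_value is None:
        max_value = np.iinfo(img.dtype).max  # 255 for uint8
    return (img / max_value).astype(np.float32)

out = to_float(np.array([0, 127, 255], dtype=np.uint8))  # float32, max 1.0
```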
- class beacon_aug.operators.ToGray(library=None, *args, **kwargs)¶
Bases:
object
Convert the input RGB image to grayscale. If the mean pixel value for the resulting image is greater than 127, invert the resulting grayscale image.
- Args:
- library (str): flag for library. Should be one of:
albumentations
. Default:
albumentations
.
p (float): probability of applying the transform. Default: 0.5.
Targets: image
- Image types:
uint8, float32
e.g.
import beacon_aug as BA
aug = BA.ToGray(library="albumentations")
image_auged = aug(image=image)["image"]
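The ToGray rule above (grayscale, then invert if the mean exceeds 127) can be sketched in NumPy. The BT.601 luma weights are an assumption about the RGB-to-gray conversion, and the helper is hypothetical:

```python
import numpy as np

def to_gray_maybe_invert(img):
    # Luma-weighted grayscale (BT.601 weights, an assumption), then invert
    # when the mean pixel value is greater than 127, per the text above.
    gray = (img @ np.array([0.299, 0.587, 0.114])).round().astype(np.uint8)
    if gray.mean() > 127:
        gray = 255 - gray
    return gray

bright = np.full((2, 2, 3), 200, dtype=np.uint8)
g = to_gray_maybe_invert(bright)  # bright input gets inverted: all 55
```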
- class beacon_aug.operators.ToPILImage(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
torchvision
. Default: torchvision.
- Args:
if library =
torchvision
: (see:ToPILImage
) [Source]- mode (`PIL.Image mode`_): color space and pixel depth of input data (optional).
If
mode
isNone
(default) there are some assumptions made about the input data: - If the input has 4 channels, themode
is assumed to beRGBA
. - If the input has 3 channels, themode
is assumed to beRGB
. - If the input has 2 channels, themode
is assumed to beLA
. - If the input has 1 channel, themode
is determined by the data type (i.eint
,float
,short
).
e.g.
import beacon_aug as BA
aug = BA.ToPILImage(library="torchvision")
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.ToSepia(library=None, *args, **kwargs)¶
Bases:
object
Applies sepia filter to the input RGB image
- Args:
- library (str): flag for library. Should be one of:
albumentations
. Default:
albumentations
.
p (float): probability of applying the transform. Default: 0.5.
Targets: image
- Image types:
uint8, float32
e.g.
import beacon_aug as BA
aug = BA.ToSepia(library="albumentations")
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.ToTensor(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
torchvision
. Default: torchvision.
- Args:
if library =
torchvision
: (see:ToTensor
) [Source]Convert a
PIL Image
ornumpy.ndarray
to tensor. This transform does not support torchscript.
Converts a PIL Image or numpy.ndarray (H x W x C) in the range [0, 255] to a torch.FloatTensor of shape (C x H x W) in the range [0.0, 1.0] if the PIL Image belongs to one of the modes (L, LA, P, I, F, RGB, YCbCr, RGBA, CMYK, 1) or if the numpy.ndarray has dtype = np.uint8
In the other cases, tensors are returned without scaling.
Note
Because the input image is scaled to [0.0, 1.0], this transformation should not be used when transforming target image masks. See the references for implementing the transforms for image masks.
e.g.
import beacon_aug as BA
aug = BA.ToTensor(library="torchvision")
image_auged = aug(image=image)["image"]
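The ToTensor conversion described above (uint8 H x W x C in [0, 255] to float C x H x W in [0.0, 1.0]) can be sketched with NumPy standing in for torch; illustrative only:

```python
import numpy as np

def to_tensor_like(img):
    # uint8 H x W x C in [0, 255] -> float32 C x H x W in [0.0, 1.0].
    return np.transpose(img, (2, 0, 1)).astype(np.float32) / 255.0

t = to_tensor_like(np.zeros((4, 6, 3), dtype=np.uint8))  # shape (3, 4, 6)
```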
- class beacon_aug.operators.TotalDropout(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
. Default: imgaug.
- Args:
if library =
imgaug
: (see:TotalDropout
) [Source]
e.g.
import beacon_aug as BA
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.TranslateX(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
. Default: imgaug.
- Args:
if library =
imgaug
: (see:TranslateX
) [Source]- percent( None or number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional):
Analogous to
translate_percent
inAffine
, except that this translation value only affects the x-axis. No dictionary input is allowed.- px( None or int or tuple of int or list of int or imgaug.parameters.StochasticParameter or dict {“x”: int/tuple/list/StochasticParameter, “y”: int/tuple/list/StochasticParameter}, optional):
Analogous to
translate_px
inAffine
, except that this translation value only affects the x-axis. No dictionary input is allowed.- order( int or iterable of int or imgaug.ALL or imgaug.parameters.StochasticParameter, optional):
Interpolation order to use. Same meaning as in
skimage
:- cval( number or tuple of number or list of number or imgaug.ALL or imgaug.parameters.StochasticParameter, optional):
The constant value to use when filling in newly created pixels. (E.g. translating by 1px to the right will create a new 1px-wide column of pixels on the left of the image). The value is only used when mode=constant. The expected value range is
[0, 255]
foruint8
images. It may be a float value.- mode( str or list of str or imgaug.ALL or imgaug.parameters.StochasticParameter, optional):
Method to use when filling in newly created pixels. Same meaning as in
skimage
(andnumpy.pad()
):- fit_output( bool, optional):
Whether to modify the affine transformation so that the whole output image is always contained in the image plane (
True
) or accept parts of the image being outside the image plane (False
). This can be thought of as first applying the affine transformation and then applying a second transformation to “zoom in” on the new image so that it fits the image plane. This is useful to avoid corners of the image being outside of the image plane after applying rotations. It will however negate translation and scaling. Note also that activating this may lead to image sizes differing from the input image sizes. To avoid this, wrapAffine
inKeepSizeByResize
, e.g.KeepSizeByResize(Affine(...))
.- backend( str, optional):
Framework to use as a backend. Valid values are
auto
,skimage
(scikit-image’s warp) andcv2
(OpenCV’s warp). Ifauto
is used, the augmenter will automatically try to usecv2
whenever possible (order must be in[0, 1, 3]
). It will silently fall back to skimage if order/dtype is not supported by cv2. cv2 is generally faster than skimage. It also supports RGB cvals, while skimage will resort to intensity cvals (i.e. 3x the same value as RGB). Ifcv2
is chosen and order is2
or4
, it will automatically fall back to order 3
.
e.g.
import beacon_aug as BA
aug = BA.TranslateX(p=1, library="imgaug")
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.TranslateY(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
. Default: imgaug.
- Args:
if library =
imgaug
: (see:TranslateY
) [Source]- percent( None or number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional):
Analogous to
translate_percent
inAffine
, except that this translation value only affects the y-axis. No dictionary input is allowed.- px( None or int or tuple of int or list of int or imgaug.parameters.StochasticParameter or dict {“x”: int/tuple/list/StochasticParameter, “y”: int/tuple/list/StochasticParameter}, optional):
Analogous to
translate_px
inAffine
, except that this translation value only affects the y-axis. No dictionary input is allowed.- order( int or iterable of int or imgaug.ALL or imgaug.parameters.StochasticParameter, optional):
Interpolation order to use. Same meaning as in
skimage
:- cval( number or tuple of number or list of number or imgaug.ALL or imgaug.parameters.StochasticParameter, optional):
The constant value to use when filling in newly created pixels. (E.g. translating by 1px to the right will create a new 1px-wide column of pixels on the left of the image). The value is only used when mode=constant. The expected value range is
[0, 255]
foruint8
images. It may be a float value.- mode( str or list of str or imgaug.ALL or imgaug.parameters.StochasticParameter, optional):
Method to use when filling in newly created pixels. Same meaning as in
skimage
(andnumpy.pad()
):- fit_output( bool, optional):
Whether to modify the affine transformation so that the whole output image is always contained in the image plane (
True
) or accept parts of the image being outside the image plane (False
). This can be thought of as first applying the affine transformation and then applying a second transformation to “zoom in” on the new image so that it fits the image plane. This is useful to avoid corners of the image being outside of the image plane after applying rotations. It will however negate translation and scaling. Note also that activating this may lead to image sizes differing from the input image sizes. To avoid this, wrapAffine
inKeepSizeByResize
, e.g.KeepSizeByResize(Affine(...))
.- backend( str, optional):
Framework to use as a backend. Valid values are
auto
,skimage
(scikit-image’s warp) andcv2
(OpenCV’s warp). Ifauto
is used, the augmenter will automatically try to usecv2
whenever possible (order must be in[0, 1, 3]
). It will silently fall back to skimage if order/dtype is not supported by cv2. cv2 is generally faster than skimage. It also supports RGB cvals, while skimage will resort to intensity cvals (i.e. 3x the same value as RGB). Ifcv2
is chosen and order is2
or4
, it will automatically fall back to order3
.
e.g.
import beacon_aug as BA
aug = BA.TranslateY(p=1, library="imgaug")
image_auged = aug(image=image)["image"]
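For intuition, a y-axis translation in pixels with constant fill (what mode=constant does in the backends) can be sketched in plain NumPy. This is an illustration of the effect, not the library's implementation; `translate_y` is a hypothetical helper name.

```python
import numpy as np

def translate_y(image, px, cval=0):
    # Shift rows down by px (up if px is negative), filling the newly
    # created rows with the constant value cval.
    out = np.full_like(image, cval)
    if px > 0:
        out[px:] = image[:-px]
    elif px < 0:
        out[:px] = image[-px:]
    else:
        out[:] = image
    return out

image = np.array([[1, 1], [2, 2], [3, 3]], dtype=np.uint8)
shifted = translate_y(image, px=1, cval=0)  # rows move down by one
```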
- class beacon_aug.operators.Transpose(library=None, *args, **kwargs)¶
Bases:
object
Transpose the input by swapping rows and columns.
- Args:
- library (str): flag for library. Should be one of:
albumentations
. Default:
albumentations
.
p (float): probability of applying the transform. Default: 0.5.
Targets: image, mask, bboxes, keypoints
- Image types:
uint8, float32
e.g.
import beacon_aug as BA
aug = BA.Transpose(p=1, library="albumentations")
image_auged = aug(image=image)["image"]
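The row/column swap itself is easy to picture with plain NumPy (an illustration of the operation, not the library's implementation):

```python
import numpy as np

# A 2x3 RGB image: transposing swaps rows and columns, giving 3x2.
image = np.arange(2 * 3 * 3, dtype=np.uint8).reshape(2, 3, 3)

# Swap the row and column axes; the channel axis (last) stays in place.
transposed = image.transpose(1, 0, 2)
```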
- class beacon_aug.operators.UniformColorQuantization(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
. Default: imgaug.
- Args:
if library =
imgaug
: (see:UniformColorQuantization
) [Source]- n_colors( int or tuple of int or list of int or imgaug.parameters.StochasticParameter, optional):
Target number of colors to use in the generated output image.
- to_colorspace( None or str or list of str or imgaug.parameters.StochasticParameter):
The colorspace in which to perform the quantization. See
change_colorspace_()
for valid values. This will be ignored for grayscale input images.- max_size( None or int, optional):
Maximum image size at which to perform the augmentation. If the width or height of an image exceeds this value, it will be downscaled before running the augmentation so that the longest side matches max_size. This is done to speed up the augmentation. The final output image has the same size as the input image. Use
None
to apply no downscaling.- interpolation( int or str, optional):
Interpolation method to use during downscaling when max_size is exceeded. Valid methods are the same as in
imresize_single_image()
.
e.g.
import beacon_aug as BA
aug = BA.UniformColorQuantization(p=1, library="imgaug")
image_auged = aug(image=image)["image"]
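A rough NumPy sketch of per-channel uniform quantization, assuming each uint8 component is binned into n_colors evenly spaced bins and mapped to the bin center (this approximates the idea; imgaug's exact mapping may differ):

```python
import numpy as np

def uniform_quantize(image, n_colors):
    # Map each uint8 value into one of n_colors evenly spaced bins,
    # then replace it with the bin's representative (center) value.
    bin_size = 256 / n_colors
    bins = image.astype(np.float32) // bin_size        # bin index per pixel
    return (bins * bin_size + bin_size / 2).astype(np.uint8)

image = np.array([[0, 64, 128, 255]], dtype=np.uint8)
quantized = uniform_quantize(image, n_colors=2)  # only two distinct values remain
```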
- class beacon_aug.operators.UniformColorQuantizationToNBits(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
. Default: imgaug.
- Args:
if library =
imgaug
: (see:UniformColorQuantizationToNBits
) [Source]- nb_bits( int or tuple of int or list of int or imgaug.parameters.StochasticParameter, optional):
Number of bits to keep in each image’s array component.
- to_colorspace( None or str or list of str or imgaug.parameters.StochasticParameter):
The colorspace in which to perform the quantization. See
change_colorspace_()
for valid values. This will be ignored for grayscale input images.- max_size( None or int, optional):
Maximum image size at which to perform the augmentation. If the width or height of an image exceeds this value, it will be downscaled before running the augmentation so that the longest side matches max_size. This is done to speed up the augmentation. The final output image has the same size as the input image. Use
None
to apply no downscaling.- interpolation( int or str, optional):
Interpolation method to use during downscaling when max_size is exceeded. Valid methods are the same as in
imresize_single_image()
.
e.g.
import beacon_aug as BA
aug = BA.UniformColorQuantizationToNBits(p=1, library="imgaug")
image_auged = aug(image=image)["image"]
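Keeping nb_bits per component amounts to masking off the low-order bits of each uint8 value. A minimal NumPy sketch of that idea (not the library's implementation):

```python
import numpy as np

def quantize_to_n_bits(image, nb_bits):
    # Keep only the top nb_bits of each uint8 component by shifting
    # away the lower (8 - nb_bits) bits and shifting back.
    shift = 8 - nb_bits
    return ((image >> shift) << shift).astype(np.uint8)

image = np.array([[7, 200, 255]], dtype=np.uint8)
quantized = quantize_to_n_bits(image, nb_bits=2)  # at most 4 values per channel
```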
- class beacon_aug.operators.UniformPointsSampler(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
. Default: imgaug.
- Args:
if library =
imgaug
: (see:UniformPointsSampler
) [Source]- n_points( int or tuple of int or list of int or imgaug.parameters.StochasticParameter, optional):
Number of points to sample on each image.
e.g.
import beacon_aug as BA
aug = BA.UniformPointsSampler(p=1, library="imgaug")
image_auged = aug(image=image)["image"]
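Conceptually, the sampler just draws n_points coordinates uniformly over the image plane. A sketch with NumPy's random API (`uniform_points` is a hypothetical helper, not the library's API):

```python
import numpy as np

def uniform_points(height, width, n_points, rng=None):
    # Sample n_points (x, y) coordinates uniformly over a height x width image.
    rng = np.random.default_rng(0) if rng is None else rng
    xs = rng.uniform(0, width, size=n_points)
    ys = rng.uniform(0, height, size=n_points)
    return np.stack([xs, ys], axis=1)

points = uniform_points(64, 128, n_points=10)  # ten (x, y) pairs
```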
- class beacon_aug.operators.UniformVoronoi(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
. Default: imgaug.
- Args:
if library =
imgaug
: (see:UniformVoronoi
) [Source]- n_points( int or tuple of int or list of int or imgaug.parameters.StochasticParameter, optional):
Number of points to sample on each image.
- p_replace( number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional):
Defines for any segment the probability that the pixels within that segment are replaced by their average color (otherwise, the pixels are not changed). Examples:
- max_size( int or None, optional):
Maximum image size at which the augmentation is performed. If the width or height of an image exceeds this value, it will be downscaled before the augmentation so that the longest side matches max_size. This is done to speed up the process. The final output image has the same size as the input image. Note that in case p_replace is below
1.0
, the down-/upscaling will affect the not-replaced pixels too. UseNone
to apply no down-/upscaling.- interpolation( int or str, optional):
Interpolation method to use during downscaling when max_size is exceeded. Valid methods are the same as in
imresize_single_image()
.
e.g.
import beacon_aug as BA
aug = BA.UniformVoronoi(p=1, library="imgaug")
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.VerticalFlip(library=None, *args, **kwargs)¶
Bases:
object
Flip the input vertically around the x-axis.
- Args:
- library (str): flag for library. Should be one of:
albumentations
,imgaug
,torchvision
,keras
,augly
. Default:
albumentations
.
p (float): probability of applying the transform. Default: 0.5.
if library =
imgaug
: (see:Flipud
) [Source]
if library =
torchvision
: (see:RandomVerticalFlip
) [Source]p (float): probability of the image being flipped. Default value is 0.5
if library =
keras
: (see:flip_axis
) [Source]if library =
augly
: (see:VFlip
) Targets:image, mask, bboxes, keypoints
- Image types:
uint8, float32
e.g.
import beacon_aug as BA
aug = BA.VerticalFlip(p=1, axis=0, library="keras")
image_auged = aug(image=image)["image"]
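Whatever the backend, a vertical flip simply reverses the row order of the image; NumPy's flipud shows the equivalent array operation:

```python
import numpy as np

image = np.array([[1, 2],
                  [3, 4]], dtype=np.uint8)

# Flipping vertically (around the x-axis) reverses the row order;
# columns and channels are untouched.
flipped = np.flipud(image)
```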
- class beacon_aug.operators.VerticalLinearGradientMaskGen(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
. Default: imgaug.
- Args:
if library =
imgaug
: (see:VerticalLinearGradientMaskGen
) [Source]- min_value( number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional):
Minimum value that the mask will have up to the start point of the linear gradient. Note that min_value is allowed to be larger than max_value, in which case the gradient will start at the (higher) min_value and decrease towards the (lower) max_value.
- max_value( number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional):
Maximum value that the mask will have at the end of the linear gradient.
- start_at( number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional):
Position on the y-axis where the linear gradient starts, given as a fraction of the axis size. Interval is
[0.0, 1.0]
, where0.0
is at the top of the image. Ifend_at < start_at
the gradient will be inverted.- end_at( number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional):
Position on the y-axis where the linear gradient ends, given as a fraction of the axis size. Interval is
[0.0, 1.0]
, where1.0
is at the bottom of the image.
e.g.
import beacon_aug as BA
aug = BA.VerticalLinearGradientMaskGen(p=1, library="imgaug")
image_auged = aug(image=image)["image"]
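The mask described above (min_value before start_at, max_value after end_at, linear in between, as fractions of image height) can be sketched in NumPy. This is an illustration of the parameters' meaning, not imgaug's implementation; `vertical_gradient_mask` is a hypothetical helper.

```python
import numpy as np

def vertical_gradient_mask(height, width, min_value, max_value, start_at, end_at):
    # y runs from 0.0 (top) to 1.0 (bottom); t interpolates linearly
    # between start_at and end_at and is clipped flat outside that range.
    y = np.linspace(0.0, 1.0, height)
    t = np.clip((y - start_at) / (end_at - start_at), 0.0, 1.0)
    column = min_value + t * (max_value - min_value)
    return np.tile(column[:, None], (1, width))

mask = vertical_gradient_mask(5, 3, min_value=0.0, max_value=1.0,
                              start_at=0.0, end_at=1.0)
```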
- class beacon_aug.operators.Voronoi(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
. Default: imgaug.
- Args:
if library =
imgaug
: (see:Voronoi
) [Source]- points_sampler( IPointsSampler):
A points sampler which will be queried per image to generate the coordinates of the centers of voronoi cells.
- p_replace( number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional):
Defines for any segment the probability that the pixels within that segment are replaced by their average color (otherwise, the pixels are not changed). Examples:
- max_size( int or None, optional):
Maximum image size at which the augmentation is performed. If the width or height of an image exceeds this value, it will be downscaled before the augmentation so that the longest side matches max_size. This is done to speed up the process. The final output image has the same size as the input image. Note that in case p_replace is below
1.0
, the down-/upscaling will affect the not-replaced pixels too. UseNone
to apply no down-/upscaling.- interpolation( int or str, optional):
Interpolation method to use during downscaling when max_size is exceeded. Valid methods are the same as in
imresize_single_image()
.
e.g.
import beacon_aug as BA
aug = BA.Voronoi(p=1, library="imgaug")
image_auged = aug(image=image)["image"]
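The core idea (assign each pixel to its nearest sampled point, then replace each cell with its average color with probability p_replace) can be sketched in NumPy. This is a simplified illustration, not imgaug's implementation; `voronoi_average` is a hypothetical helper and the point sampling is fixed rather than drawn from a points sampler.

```python
import numpy as np

def voronoi_average(image, points, p_replace=1.0, rng=None):
    # Assign every pixel to its nearest point (its Voronoi cell), then
    # replace each cell by its mean color with probability p_replace.
    rng = np.random.default_rng(0) if rng is None else rng
    h, w = image.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w]
    coords = np.stack([yy.ravel(), xx.ravel()], axis=1).astype(np.float32)
    dists = ((coords[:, None, :] - points[None, :, :]) ** 2).sum(axis=-1)
    labels = dists.argmin(axis=1).reshape(h, w)
    out = image.astype(np.float32)
    for k in range(len(points)):
        if rng.random() < p_replace:
            cell = labels == k
            out[cell] = out[cell].mean(axis=0)
    return out.astype(image.dtype)

image = np.arange(4 * 4 * 3, dtype=np.uint8).reshape(4, 4, 3)
points = np.array([[0.0, 0.0], [3.0, 3.0]], dtype=np.float32)  # (y, x) centers
segmented = voronoi_average(image, points, p_replace=1.0)
```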
- class beacon_aug.operators.WithBrightnessChannels(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
. Default: imgaug.
- Args:
if library =
imgaug
: (see:WithBrightnessChannels
) [Source]- children( imgaug.augmenters.meta.Augmenter or list of imgaug.augmenters.meta.Augmenter or None, optional):
One or more augmenters to apply to the brightness channels. They receive images with a single channel and have to modify these.
- to_colorspace( imgaug.ALL or str or list of str or imgaug.parameters.StochasticParameter, optional):
Colorspace in which to extract the brightness-related channels. Currently,
imgaug.augmenters.color.CSPACE_YCrCb
,CSPACE_HSV
,CSPACE_HLS
,CSPACE_Lab
,CSPACE_Luv
,CSPACE_YUV
,CSPACE_CIE
are supported.
e.g.
import beacon_aug as BA
aug = BA.WithBrightnessChannels(p=1, library="imgaug")
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.WithChannels(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
. Default: imgaug.
- Args:
if library =
imgaug
: (see:WithChannels
) [Source]- channels( None or int or list of int, optional):
Sets the channels to be extracted from each image. If
None
, all channels will be used. Note that this is not stochastic - the extracted channels are always the same ones.- children( imgaug.augmenters.meta.Augmenter or list of imgaug.augmenters.meta.Augmenter or None, optional):
One or more augmenters to apply to images, after the channels are extracted.
e.g.
import beacon_aug as BA
aug = BA.WithChannels(p=1, library="imgaug")
image_auged = aug(image=image)["image"]
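The channel-extraction pattern is simple to sketch in NumPy: slice out the selected channels, transform them, and write them back. An illustration of the concept only (`with_channels` is a hypothetical helper, not the library's API):

```python
import numpy as np

def with_channels(image, channels, fn):
    # Apply fn only to the selected channels, leaving the rest untouched.
    out = image.copy()
    out[..., channels] = fn(image[..., channels])
    return out

image = np.zeros((2, 2, 3), dtype=np.uint8)
# Brighten only the first channel (e.g. red in an RGB image).
result = with_channels(image, [0], lambda x: x + 50)
```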
- class beacon_aug.operators.WithColorspace(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
. Default: imgaug.
- Args:
if library =
imgaug
: (see:WithColorspace
) [Source]- to_colorspace( str):
See
change_colorspace_()
.- children( imgaug.augmenters.meta.Augmenter or list of imgaug.augmenters.meta.Augmenter or None, optional):
One or more augmenters to apply to converted images.
e.g.
import beacon_aug as BA
aug = BA.WithColorspace(p=1, library="imgaug")
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.WithHueAndSaturation(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
. Default: imgaug.
- Args:
if library =
imgaug
: (see:WithHueAndSaturation
) [Source]- children( imgaug.augmenters.meta.Augmenter or list of imgaug.augmenters.meta.Augmenter or None, optional):
One or more augmenters to apply to converted images. They receive
int16
images with two channels (hue, saturation) and have to modify these.
e.g.
import beacon_aug as BA
aug = BA.WithHueAndSaturation(p=1, library="imgaug")
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.WithPolarWarping(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imgaug
. Default: imgaug.
- Args:
if library =
imgaug
: (see:WithPolarWarping
) [Source]- children( imgaug.augmenters.meta.Augmenter or list of imgaug.augmenters.meta.Augmenter or None, optional):
One or more augmenters to apply to images after they were transformed to polar representation.
e.g.
import beacon_aug as BA
aug = BA.WithPolarWarping(p=1, library="imgaug")
image_auged = aug(image=image)["image"]
- class beacon_aug.operators.ZoomBlur(library=None, *args, **kwargs)¶
Bases:
object
- library (str): flag for library. Should be one of:
imagenet_c
. Default: imagenet_c.
- Args:
if library =
imagenet_c
: (see:imagenet_c
)
e.g.
import beacon_aug as BA
aug = BA.ZoomBlur(p=1, corruption_name="zoom_blur", severity=1, library="imagenet_c")
image_auged = aug(image=image)["image"]