Preprocessing¶

Image Processing¶

brainlit.preprocessing.center(data)[source]

Centers data by subtracting the mean

Parameters

data (array-like) -- data to be centered

Returns

data_centered -- centered data

Return type

array-like
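Conceptually, centering amounts to subtracting the global mean. A minimal NumPy sketch (`center_sketch` is a hypothetical stand-in; the actual implementation may differ in details such as dtype handling):

```python
import numpy as np

def center_sketch(data):
    # subtract the global mean so the result has zero mean
    data = np.asarray(data, dtype=float)
    return data - data.mean()

print(center_sketch([1.0, 2.0, 3.0]))  # [-1.  0.  1.]
```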

brainlit.preprocessing.contrast_normalize(data, centered=False)[source]

Normalizes image data to have variance of 1

Parameters
• data (array-like) -- data to be normalized

• centered (boolean) -- when False (the default), the data is centered first

Returns

data -- normalized data

Return type

array-like
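A sketch of the normalization step (`contrast_normalize_sketch` is a hypothetical name; it assumes "variance of 1" means dividing by the standard deviation after optional centering):

```python
import numpy as np

def contrast_normalize_sketch(data, centered=False):
    # scale the data to unit variance, centering first unless told otherwise
    data = np.asarray(data, dtype=float)
    if not centered:
        data = data - data.mean()
    return data / data.std()

out = contrast_normalize_sketch([1.0, 2.0, 3.0, 4.0])
print(out.std())  # close to 1.0
```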

brainlit.preprocessing.whiten(img, window_size, step_size, centered=False, epsilon=1e-05, type='PCA')[source]

Performs PCA or ZCA whitening on an array. This preprocessing step is described in [1].

Parameters
• img (array-like) -- image to be vectorized

• window_size (array-like) -- window size dictating the neighborhood to be vectorized, same number of dimensions as img, based on the top-left corner

• step_size (array-like) -- step size in each of direction of window, same number of dimensions as img

• centered (boolean) -- when False (the default), the data is centered first

• epsilon (float) -- small constant added during whitening for numerical stability

• type (string) -- Determines the type of whitening. Can be either 'PCA' (default) or 'ZCA'

Returns

• data-whitened (array-like) -- whitened data

• S (2D array) -- Singular value array of covariance of vectorized image

References

[1] http://ufldl.stanford.edu/tutorial/unsupervised/PCAWhitening/
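The core of PCA/ZCA whitening can be sketched on an already-vectorized, centered data matrix (a hypothetical `whiten_sketch`; the real `whiten` also handles the window vectorization, and selects the mode via its `type` string rather than a `zca` flag):

```python
import numpy as np

def whiten_sketch(X, epsilon=1e-5, zca=False):
    # X: (n_samples, n_features), assumed centered
    cov = X.T @ X / X.shape[0]
    U, S, _ = np.linalg.svd(cov)
    W = U @ np.diag(1.0 / np.sqrt(S + epsilon))  # PCA whitening matrix
    if zca:
        W = W @ U.T  # rotate back into the original basis for ZCA
    return X @ W, S

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
X -= X.mean(axis=0)
Xw, S = whiten_sketch(X)
# the covariance of the whitened data is approximately the identity
```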

brainlit.preprocessing.window_pad(img, window_size, step_size)[source]

Pad image at edges so the window can convolve evenly. Padding will be a copy of the edges.

Parameters
• img (array-like) -- image to be padded

• window_size (array-like) -- window size that will be convolved, same number of dimensions as img

• step_size (array-like) -- step size in each of direction of window convolution, same number of dimensions as img

Returns

• padded image (array-like) -- image padded at its edges

• pad_size (array-like) -- amount of padding in every direction of the image

brainlit.preprocessing.undo_pad(img, pad_size)[source]

Removes padding from the edges of an image.

Parameters
• img (array-like) -- padded image

• pad_size (array-like) -- amount of padding in every direction of the image

Returns

image with padding removed

Return type

array-like
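One plausible sketch of the pad/unpad round trip, assuming edge-value padding and a pad amount that lets the window step evenly (`window_pad_sketch` and `undo_pad_sketch` are hypothetical names; the real functions may pad differently):

```python
import numpy as np

def window_pad_sketch(img, window_size, step_size):
    # pad each axis with copies of the edge so the window steps evenly across it
    pad = []
    for dim, w, s in zip(img.shape, window_size, step_size):
        remainder = (dim - w) % s
        pad.append((0, 0 if remainder == 0 else s - remainder))
    return np.pad(img, pad, mode="edge"), pad

def undo_pad_sketch(img, pad_size):
    # slice away the padding recorded by window_pad_sketch
    slices = tuple(slice(before, dim - after)
                   for (before, after), dim in zip(pad_size, img.shape))
    return img[slices]

img = np.arange(25).reshape(5, 5)
padded, pad_size = window_pad_sketch(img, (2, 2), (2, 2))
restored = undo_pad_sketch(padded, pad_size)
# a 2x2 window with step 2 needs one extra row and column on a 5x5 image
```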

brainlit.preprocessing.vectorize_img(img, window_size, step_size)[source]

Reshapes an image by vectorizing different neighborhoods of the image.

Parameters
• img (array-like) -- image to be vectorized

• window_size (array-like) -- window size dictating the neighborhood to be vectorized, same number of dimensions as img, based on the top-left corner

• step_size (array-like) -- step size in each of direction of window, same number of dimensions as img

Returns

vectorized -- vectorized image

Return type

array-like
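The windowed vectorization can be sketched as flattening each window into a row (`vectorize_img_sketch` is a hypothetical 2D-only helper; the real function is n-dimensional):

```python
import numpy as np

def vectorize_img_sketch(img, window_size, step_size):
    # flatten each window (anchored at its top-left corner) into one row
    (wy, wx), (sy, sx) = window_size, step_size
    rows = [img[i:i + wy, j:j + wx].ravel()
            for i in range(0, img.shape[0] - wy + 1, sy)
            for j in range(0, img.shape[1] - wx + 1, sx)]
    return np.stack(rows)

img = np.arange(16).reshape(4, 4)
vec = vectorize_img_sketch(img, (2, 2), (2, 2))
print(vec.shape)  # (4, 4): four 2x2 windows, each flattened to length 4
```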

brainlit.preprocessing.imagize_vector(img, orig_shape, window_size, step_size)[source]

Reshapes a vectorized image back to its original shape.

Parameters
• img (array-like) -- vectorized image

• orig_shape (tuple) -- dimensions of original image

• window_size (array-like) -- window size dictating the neighborhood to be vectorized, same number of dimensions as img, based on the top-left corner

• step_size (array-like) -- step size in each of direction of window, same number of dimensions as img

Returns

imagized -- original image

Return type

array-like

Image Filters¶

brainlit.preprocessing.gabor_filter(input: np.ndarray, sigma: Union[float, List[float]], phi: Union[float, List[float]], frequency: float, offset: float = 0.0, output: Optional[Union[np.ndarray, np.dtype, None]] = None, mode: str = 'reflect', cval: float = 0.0, truncate: float = 4.0)[source]

Multidimensional Gabor filter. A Gabor filter is an elementwise product between a Gaussian and a complex exponential.

Parameters
• input (array_like) -- The input array.

• sigma (scalar or sequence of scalars) -- Standard deviation for Gaussian kernel. The standard deviations of the Gaussian filter are given for each axis as a sequence, or as a single number, in which case it is equal for all axes.

• phi (scalar or sequence of scalars) -- Angles specifying orientation of the periodic complex exponential. If the input is n-dimensional, then phi is a sequence of length n-1. Convention follows https://en.wikipedia.org/wiki/N-sphere#Spherical_coordinates.

• frequency (scalar) -- Frequency of the complex exponential. Units are revolutions/voxels.

• offset (scalar) -- Phase shift of the complex exponential. Units are radians.

• output (array or dtype, optional) -- The array in which to place the output, or the dtype of the returned array. By default an array of the same dtype as input will be created. Only the real component will be saved if output is an array.

• mode ({‘reflect’, ‘constant’, ‘nearest’, ‘mirror’, ‘wrap’}, optional) -- The mode parameter determines how the input array is extended beyond its boundaries. Default is ‘reflect’.

• cval (scalar, optional) -- Value to fill past edges of input if mode is ‘constant’. Default is 0.0.

• truncate (float) -- Truncate the filter at this many standard deviations. Default is 4.0.

Returns

real, imaginary -- Returns real and imaginary responses, arrays of same shape as input.

Return type

arrays

Notes

The multidimensional filter is implemented by creating a gabor filter array, then using the convolve method. Also, sigma specifies the standard deviations of the Gaussian along the coordinate axes, and the Gaussian is not rotated. This is unlike skimage.filters.gabor, whose Gaussian is rotated with the complex exponential. The reasoning behind this design choice is that sigma can be more easily designed to deal with anisotropic voxels.

Examples

>>> from brainlit.preprocessing import gabor_filter
>>> a = np.arange(50, step=2).reshape((5,5))
>>> a
array([[ 0,  2,  4,  6,  8],
       [10, 12, 14, 16, 18],
       [20, 22, 24, 26, 28],
       [30, 32, 34, 36, 38],
       [40, 42, 44, 46, 48]])
>>> gabor_filter(a, sigma=1, phi=[0.0], frequency=0.1)
(array([[ 3,  5,  6,  8,  9],
        [ 9, 10, 12, 13, 14],
        [16, 18, 19, 21, 22],
        [24, 25, 27, 28, 30],
        [29, 30, 32, 34, 35]]),
 array([[ 0,  0, -1,  0,  0],
        [ 0,  0, -1,  0,  0],
        [ 0,  0, -1,  0,  0],
        [ 0,  0, -1,  0,  0],
        [ 0,  0, -1,  0,  0]]))

>>> from scipy import misc
>>> import matplotlib.pyplot as plt
>>> fig = plt.figure()
>>> plt.gray()  # show the filtered result in grayscale
>>> ax1 = fig.add_subplot(121)  # left side
>>> ax2 = fig.add_subplot(122)  # right side
>>> ascent = misc.ascent()
>>> result = gabor_filter(ascent, sigma=5, phi=[0.0], frequency=0.1)
>>> ax1.imshow(ascent)
>>> ax2.imshow(result[0])
>>> plt.show()


Segmentation Processing¶

brainlit.preprocessing.getLargestCC(segmentation: np.ndarray)[source]

Returns the largest connected component of an image.

Parameters

segmentation (np.ndarray) -- segmentation data of image or volume

Returns

largeCC -- segmentation with only the largest connected component

Return type

np.ndarray
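A sketch of the idea using scipy.ndimage (`largest_cc_sketch` is a hypothetical helper; the actual implementation and connectivity may differ):

```python
import numpy as np
from scipy import ndimage

def largest_cc_sketch(segmentation):
    # label connected components, then keep only the most populous nonzero label
    labels, n = ndimage.label(segmentation)
    sizes = np.bincount(labels.ravel())
    sizes[0] = 0  # ignore background
    return labels == sizes.argmax()

seg = np.array([[1, 1, 0, 0],
                [1, 0, 0, 1],
                [0, 0, 0, 0]])
print(largest_cc_sketch(seg).sum())  # 3: only the 3-voxel component survives
```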

brainlit.preprocessing.removeSmallCCs(segmentation: np.ndarray, size: Union[int, float])[source]

Removes small connected components from an image.

Parameters
• segmentation (np.ndarray) -- segmentation data of image or volume

• size (int or float) -- size threshold; connected components smaller than this are removed

Returns

largeCCs -- segmentation with small connected components removed

Return type

np.ndarray
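The same labeling idea extends to a size threshold (`remove_small_ccs_sketch` is a hypothetical helper under the assumption that components below `size` are dropped):

```python
import numpy as np
from scipy import ndimage

def remove_small_ccs_sketch(segmentation, size):
    # keep only connected components whose voxel count reaches the threshold
    labels, n = ndimage.label(segmentation)
    mask = np.zeros(segmentation.shape, dtype=bool)
    for lab in range(1, n + 1):
        component = labels == lab
        if component.sum() >= size:
            mask |= component
    return mask

seg = np.array([[1, 1, 0, 0],
                [1, 0, 0, 1],
                [0, 0, 0, 0]])
print(remove_small_ccs_sketch(seg, 2).sum())  # 3: the single-voxel component is dropped
```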

brainlit.preprocessing.label_points(labels, points, res)[source]

Adjust points so they fall on a foreground component of labels.

Parameters
• labels (array) -- labeled components, such as output from measure.label

• points (list) -- points to be adjusted

• res (list) -- voxel size

Returns

points adjusted to fall on foreground components

Return type

list
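A sketch of the adjustment, assuming each point is snapped to the nearest foreground voxel with distances measured in physical units (`label_points_sketch` is a hypothetical name):

```python
import numpy as np

def label_points_sketch(labels, points, res):
    # move each point to the nearest foreground voxel, with distances
    # scaled by the voxel size so anisotropic volumes are handled
    fg = np.argwhere(labels > 0)
    res = np.asarray(res, dtype=float)
    adjusted = []
    for p in points:
        d = np.linalg.norm((fg - np.asarray(p)) * res, axis=1)
        adjusted.append(tuple(int(c) for c in fg[d.argmin()]))
    return adjusted

labels = np.zeros((5, 5), dtype=int)
labels[1, 1] = 1
print(label_points_sketch(labels, [(4, 4)], res=[1.0, 1.0]))  # [(1, 1)]
```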

brainlit.preprocessing.split_frags(soma_coords, labels, im_processed, threshold, res)[source]

Preprocesses a neuron image segmentation by splitting up non-soma components into 5 micron segments

Parameters
• soma_coords (list) -- list of voxel coordinates of somas

• labels (np.array) -- image segmentation

• im_processed (np.array) -- voxel-wise probability predictions for foreground

• threshold (float) -- threshold used to segment probability predictions into mask

• res (list) -- voxel size in image

Returns

new image segmentation - different numbers indicate different fragments, 0 is background

Return type

np.array

brainlit.preprocessing.remove_somas(soma_coords, labels, im_processed, res)[source]

Helper function of split_frags. Removes area around somas.

Parameters
• soma_coords (list) -- list of voxel coordinates of somas

• labels (np.array) -- image segmentation

• im_processed (np.array) -- voxel-wise probability predictions for foreground

• res (list) -- voxel size in image

Returns

probability predictions, with the soma regions masked list: coordinates of the points dictionary: map from component in labels, to set of points that were placed there list: masks of the different somas

Return type

np.array

brainlit.preprocessing.split_frags_place_points(image_iterative, labels, radius_states, res, threshold, states, comp_to_states)[source]

Helper function of split_frags. Places points on high probability voxels while keeping the points a certain distance apart from each other.

Parameters
• image_iterative (np.array) -- probability predictions, with the soma regions masked

• labels (np.array) -- image segmentation

• radius_states (float) -- distance constraint between points

• res (list) -- voxel size in image

• threshold (float) -- threshold used to segment probability predictions into mask

• states (list) -- coordinates of the points

• comp_to_states (dictionary) -- map from component in labels, to set of points that were placed there

Returns

• list -- coordinates of the points

• dictionary -- map from component in labels to the set of points that were placed there

brainlit.preprocessing.split_frags_split_comps(labels, new_soma_masks, states, comp_to_states)[source]

Helper function of split_frags. Splits the components according to the points that were placed by split_frags_place_points.

Parameters
• labels (np.array) -- image segmentation

• new_soma_masks (list) -- masks of the different somas

• states (list) -- coordinates of the points

• comp_to_states (dictionary) -- map from component in labels, to set of points that were placed there

Returns

new image segmentation - different numbers indicate different fragments, 0 is background

Return type

np.array

brainlit.preprocessing.split_frags_split_fractured_components(new_labels)[source]

Helper function of split_frags. Some fragments from split_frags_split_comps may not be connected, so this function separates them.

Parameters

new_labels (np.array) -- new image segmentation - different numbers indicate different fragments, 0 is background

Returns

new image segmentation - different numbers indicate different fragments, 0 is background

Return type

np.array

brainlit.preprocessing.rename_states_consecutively(new_labels)[source]

Helper function of split_frags. Relabels components in the image segmentation so the unique values are consecutive.

Parameters

new_labels (np.array) -- new image segmentation - different numbers indicate different fragments, 0 is background

Returns

new image segmentation - different numbers indicate different fragments, 0 is background

Return type

np.array
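The relabeling step can be sketched with np.unique (`rename_states_sketch` is a hypothetical helper; it assumes 0, the background, appears in the array so it sorts first and stays 0):

```python
import numpy as np

def rename_states_sketch(new_labels):
    # map the sorted unique values onto 0..k so fragment labels are consecutive
    vals, renamed = np.unique(new_labels, return_inverse=True)
    return renamed.reshape(new_labels.shape)

frags = np.array([[0, 5], [5, 9]])
print(rename_states_sketch(frags))
# [[0 1]
#  [1 2]]
```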