suite2p.detection package

Submodules

suite2p.detection.anatomical module

suite2p.detection.chan2detect module

suite2p.detection.chan2detect.cellpose_overlap(stats, mimg2)[source]
suite2p.detection.chan2detect.correct_bleedthrough(Ly, Lx, nblks, mimg, mimg2)[source]
suite2p.detection.chan2detect.detect(ops, stats)[source]
suite2p.detection.chan2detect.intensity_ratio(ops, stats)[source]

Computes the pixels in each cell and in the area around each cell (including overlaps), excluding pixels that belong to other cells.

suite2p.detection.chan2detect.quadrant_mask(Ly, Lx, ny, nx, sT)[source]

suite2p.detection.denoise module

suite2p.detection.detect module

suite2p.detection.metrics module

suite2p.detection.sourcery module

suite2p.detection.sourcery.circleMask(d0)[source]

creates an array whose entries give the radius of each (x, y) point from the patch center

Parameters:

d0 – patch of (-d0, d0+1) over which the radius is computed

Returns:

  • rs – array (2*d0+1,2*d0+1) of radii

  • dx – indices in rs where the radius is less than d0

  • dy – indices in rs where the radius is less than d0
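
The radius array and index outputs described above can be pictured with a short, independent numpy sketch; the patch half-width d0 below is an illustrative assumption, and this is not the suite2p implementation itself.

    import numpy as np

    # Minimal sketch (not suite2p code): build the (2*d0+1, 2*d0+1) array of
    # radii around the patch center that circleMask's "rs" output describes.
    d0 = 4                                      # illustrative patch half-width
    ys, xs = np.meshgrid(np.arange(-d0, d0 + 1),
                         np.arange(-d0, d0 + 1), indexing="ij")
    rs = np.sqrt(ys**2 + xs**2)                 # radius of each (y, x) offset
    dy, dx = np.nonzero(rs < d0)                # indices where the radius < d0
    print(rs.shape)                             # (9, 9)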

suite2p.detection.sourcery.connected_region(stat, ops)[source]
suite2p.detection.sourcery.create_neuropil_basis(ops, Ly, Lx)[source]

computes neuropil basis functions

Parameters:
  • ops – dictionary containing ratio_neuropil, tile_factor, diameter, neuropil_type

  • Ly (int) –

  • Lx (int) –

Returns:

basis functions (pixels x nbasis functions)

Return type:

S
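
A hedged usage sketch: the ops values below are placeholder assumptions (not recommended settings), and the exact keys or value types expected may differ between suite2p versions.

    import numpy as np
    from suite2p.detection import sourcery

    Ly, Lx = 512, 512
    # Placeholder ops; the keys follow the parameter list above, but the values
    # are illustrative assumptions (diameter may also need to be a (dy, dx) pair).
    ops = {"ratio_neuropil": 6, "tile_factor": 1.0,
           "diameter": 10, "neuropil_type": 1}
    S = sourcery.create_neuropil_basis(ops, Ly, Lx)
    print(S.shape)    # expected (Ly * Lx, nbasis), i.e. pixels x basis functions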

suite2p.detection.sourcery.drawClusters(stat, ops)[source]
suite2p.detection.sourcery.extendROI(ypix, xpix, Ly, Lx, niter=1)[source]
suite2p.detection.sourcery.getSVDdata(mov, ops)[source]
suite2p.detection.sourcery.getSVDproj(mov, ops, u)[source]
suite2p.detection.sourcery.getStU(ops, U)[source]
suite2p.detection.sourcery.getVmap(Ucell, sig)[source]
suite2p.detection.sourcery.get_connected(Ly, Lx, stat)[source]

grow i0 until it cannot grow any more

suite2p.detection.sourcery.get_stat(ops, stats, Ucell, codes, frac=0.5)[source]

computes statistics of cells found using sourcery

Parameters:
  • Ly

  • Lx

  • d0

  • mPix ((pixels,ncells)) –

  • mLam ((weights,ncells)) –

  • codes ((ncells,nsvd)) –

  • Ucell ((nsvd,Ly,Lx)) –

Returns:

assigned to stat: ipix, ypix, xpix, med, npix, lam, footprint, compact, aspect_ratio, ellipse

Return type:

stat

suite2p.detection.sourcery.iter_extend(ypix, xpix, Ucell, code, refine=-1, change_codes=False)[source]
suite2p.detection.sourcery.localMax(V, footprint, thres)[source]

find local maxima of V (correlation map) using a filter with a (usually circular) footprint

Parameters:
  • V

  • footprint

  • thres

Returns:

i,j

Return type:

indices of local max greater than thres
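
The footprint-based peak search described here can be sketched independently with scipy; this shows the standard technique, not the suite2p source, and the map, footprint radius, and threshold below are made up for illustration.

    import numpy as np
    from scipy.ndimage import maximum_filter

    V = np.random.rand(128, 128)                 # stand-in correlation map
    yy, xx = np.ogrid[-3:4, -3:4]
    footprint = (yy**2 + xx**2) <= 3**2          # circular footprint, radius 3
    thres = 0.99

    # A pixel is a local max if it equals the maximum over its footprint
    # neighborhood and also exceeds the threshold.
    is_peak = (V == maximum_filter(V, footprint=footprint)) & (V > thres)
    i, j = np.nonzero(is_peak)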

suite2p.detection.sourcery.localRegion(i, j, dy, dx, Ly, Lx)[source]

returns valid indices of local region surrounding (i,j) of size (dy.size, dx.size)

suite2p.detection.sourcery.minDistance(inputs)[source]
suite2p.detection.sourcery.morphOpen(V, footprint)[source]

computes the morphological opening of V (correlation map) with circular footprint
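
Grey-scale morphological opening with a circular footprint is available directly in scipy; a sketch of the operation described, not suite2p's own code:

    import numpy as np
    from scipy.ndimage import grey_opening

    V = np.random.rand(128, 128)                   # stand-in correlation map
    yy, xx = np.ogrid[-3:4, -3:4]
    footprint = (yy**2 + xx**2) <= 3**2            # circular structuring element
    V_open = grey_opening(V, footprint=footprint)  # suppresses narrow bright peaks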

suite2p.detection.sourcery.pairwiseDistance(y, x)[source]
suite2p.detection.sourcery.postprocess(ops, stat, Ucell, codes)[source]
suite2p.detection.sourcery.r_squared(yp, xp, ypix, xpix, diam_y, diam_x, estimator=<function median>)[source]
suite2p.detection.sourcery.sourcery(mov, ops)[source]
suite2p.detection.sourcery.sub2ind(array_shape, rows, cols)[source]

suite2p.detection.sparsedetect module

class suite2p.detection.sparsedetect.EstimateMode(value)[source]

Bases: Enum

An enumeration.

Estimated = 'estimated'
Forced = 'FORCED'
suite2p.detection.sparsedetect.add_square(yi, xi, lx, Ly, Lx)[source]

returns a square of pixels around the peak with weights of norm 1

Parameters:
  • yi (int) – y-center

  • xi (int) – x-center

  • lx (int) – x-width

  • Ly (int) – full y frame

  • Lx (int) – full x frame

Returns:

  • y0 (array) – pixels in y

  • x0 (array) – pixels in x

  • mask (array) – pixel weightings
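
A short numpy sketch of the returned quantities; it assumes a unit L2 norm for the mask weights and an odd square width, both of which are illustrative readings of "norm 1" rather than details taken from the suite2p source.

    import numpy as np

    Ly, Lx = 256, 256                  # full frame size (illustrative)
    yi, xi, lx = 100, 120, 5           # peak center and square width (assumed odd)
    half = lx // 2
    y0, x0 = np.meshgrid(np.arange(yi - half, yi + half + 1),
                         np.arange(xi - half, xi + half + 1), indexing="ij")
    # Keep only pixels inside the frame, then weight them uniformly so the
    # weight vector has unit L2 norm (assumption).
    valid = (y0 >= 0) & (y0 < Ly) & (x0 >= 0) & (x0 < Lx)
    y0, x0 = y0[valid], x0[valid]
    mask = np.ones(y0.size) / np.sqrt(y0.size)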

suite2p.detection.sparsedetect.estimate_spatial_scale(I)[source]
Return type:

int

suite2p.detection.sparsedetect.extendROI(ypix, xpix, Ly, Lx, niter=1)[source]

extend ypix and xpix by niter pixel(s) on each side

suite2p.detection.sparsedetect.extend_mask(ypix, xpix, lam, Ly, Lx)[source]

extend mask into the 8 surrounding pixels

suite2p.detection.sparsedetect.find_best_scale(I, spatial_scale)[source]

Returns the best scale and the estimation mode: Forced if the spatial scale was given (positive value), Estimated if it was determined from the top peaks.

Return type:

Tuple[int, EstimateMode]
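
A hedged usage sketch showing how the returned EstimateMode might be inspected; the map I is left abstract here because its construction is internal to the detection pipeline.

    from suite2p.detection.sparsedetect import EstimateMode, find_best_scale

    def report_scale(I, spatial_scale):
        # find_best_scale returns (scale, mode) per the annotation above.
        scale, mode = find_best_scale(I, spatial_scale)
        origin = "forced" if mode == EstimateMode.Forced else "estimated"
        print(f"spatial scale {scale} ({origin})")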

suite2p.detection.sparsedetect.iter_extend(ypix, xpix, mov, Lyc, Lxc, active_frames)[source]

extend mask based on the activity of pixels in active frames; active frames are determined by a threshold

Parameters:
  • ypix (array) – pixels in y

  • xpix (array) – pixels in x

  • mov (2D array) – binned residual movie [nbinned x Lyc*Lxc]

  • active_frames (1D array) – list of active frames

Returns:

  • ypix (array) – extended pixels in y

  • xpix (array) – extended pixels in x

  • lam (array) – pixel weighting
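
A usage sketch with made-up shapes; the seed pixels, the binned movie, the row-major pixel indexing, and the activity threshold used to pick active_frames are all illustrative assumptions.

    import numpy as np
    from suite2p.detection.sparsedetect import iter_extend

    Lyc, Lxc, nbinned = 64, 64, 200
    mov = np.random.randn(nbinned, Lyc * Lxc).astype(np.float32)  # binned residual movie
    ypix = np.array([30, 30, 31])                                 # seed pixels in y
    xpix = np.array([30, 31, 30])                                 # seed pixels in x
    # Frames where the seed's mean activity crosses an arbitrary threshold;
    # flat indices assume a row-major Lyc*Lxc layout (assumption).
    active_frames = np.nonzero(mov[:, ypix * Lxc + xpix].mean(axis=1) > 0.5)[0]

    ypix, xpix, lam = iter_extend(ypix, xpix, mov, Lyc, Lxc, active_frames)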

suite2p.detection.sparsedetect.multiscale_mask(ypix0, xpix0, lam0, Lyp, Lxp)[source]
suite2p.detection.sparsedetect.neuropil_subtraction(mov, filter_size)[source]

Returns the movie with a low-pass filtered version of itself subtracted, to help ignore neuropil.

Return type:

None
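
An independent sketch of the technique described (subtracting a spatial low-pass to suppress the smooth neuropil component), not the suite2p implementation; note the Return type above suggests the real function may operate in place.

    import numpy as np
    from scipy.ndimage import uniform_filter

    mov = np.random.randn(100, 128, 128).astype(np.float32)   # placeholder frames
    filter_size = 25
    # Low-pass each frame spatially (size 1 along time), then subtract it.
    lowpass = uniform_filter(mov, size=(1, filter_size, filter_size))
    mov_hp = mov - lowpass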

suite2p.detection.sparsedetect.sparsery(mov, high_pass, neuropil_high_pass, batch_size, spatial_scale, threshold_scaling, max_iterations, percentile=0)[source]

Returns stats and ops from “mov” using correlations in time.

Return type:

Tuple[Dict[str, Any], List[Dict[str, Any]]]
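
A hedged usage sketch. The parameter values are placeholders, not recommended defaults, and the unpacking order follows the Return type annotation above (a dict first, then a list of per-ROI stat dicts).

    import numpy as np
    from suite2p.detection.sparsedetect import sparsery

    mov = np.random.randn(500, 256, 256).astype(np.float32)   # registered, binned movie (placeholder)
    new_ops, stats = sparsery(
        mov,
        high_pass=100,
        neuropil_high_pass=25,
        batch_size=500,
        spatial_scale=0,          # 0: let the scale be estimated (see find_best_scale)
        threshold_scaling=1.0,
        max_iterations=250,
        percentile=0,
    )
    print(len(stats), "ROIs found")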

suite2p.detection.sparsedetect.square_convolution_2d(mov, filter_size)[source]

Returns movie convolved by uniform kernel with width “filter_size”.

Return type:

ndarray

suite2p.detection.sparsedetect.two_comps(mpix0, lam, Th2)[source]

check if splitting ROI increases variance explained

Parameters:
  • mpix0 (2D array) – binned movie for pixels in ROI [nbinned x npix]

  • lam (array) – pixel weighting

  • Th2 (float) – intensity threshold

Returns:

  • vrat (array) – variance-explained ratio used to decide whether to split the ROI

  • ipick (tuple) – new ROI

suite2p.detection.stats module

class suite2p.detection.stats.EllipseData(mu, cov, radii, ellipse, dy, dx)[source]

Bases: tuple

property area
property aspect_ratio: float
cov: float

Alias for field number 1

dx: int

Alias for field number 5

dy: int

Alias for field number 4

ellipse: ndarray

Alias for field number 3

mu: float

Alias for field number 0

radii: Tuple[float, float]

Alias for field number 2

property radius: float
class suite2p.detection.stats.ROI(ypix, xpix, lam, med, do_crop, rsort=array([0., 1., 1., ..., 42.42640687, 42.42640687, 42.42640687]))[source]

Bases: object

ROI(ypix: ‘np.ndarray’, xpix: ‘np.ndarray’, lam: ‘np.ndarray’, med: ‘np.ndarray’, do_crop: ‘bool’, rsort: ‘np.ndarray’ = array([0., 1., 1., …, 42.42640687, 42.42640687, 42.42640687]))

do_crop: bool
classmethod filter_overlappers(rois, overlap_image, max_overlap)[source]

returns a boolean list indicating which ROIs remain after removing those that overlap more than fraction max_overlap, based on overlap_image.

Return type:

List[bool]

fit_ellipse(dy, dx)[source]
Return type:

EllipseData

classmethod from_stat_dict(stat, do_crop=True)[source]
Return type:

ROI

classmethod get_mean_r_squared_normed_all(rois, first_n=100)[source]
Return type:

ndarray

classmethod get_n_pixels_normed_all(rois, first_n=100)[source]
Return type:

ndarray

classmethod get_overlap_count_image(rois, Ly, Lx)[source]
Return type:

ndarray

get_overlap_image(overlap_count_image)[source]
Return type:

ndarray

lam: ndarray
property mean_r_squared: float
property mean_r_squared0: float
property mean_r_squared_compact: float
med: ndarray
property n_pixels: int
property npix_soma: int
ravel_indices(Ly, Lx)[source]

Returns a 1-dimensional array of indices from the ypix and xpix coordinates, assuming an image shape Ly x Lx.

Return type:

ndarray

rsort: ndarray = array([0., 1., 1., ..., 42.42640687, 42.42640687, 42.42640687])
property solidity: float
property soma_crop: ndarray
classmethod stats_dicts_to_3d_array(stats, Ly, Lx, label_id=False)[source]

Outputs a (roi x Ly x Lx) float array from a sequence of stat dicts. Convenience function that repeatedly calls ROI.from_stat_dict() and ROI.to_array() for all rois.

Parameters:
  • stats (List of dictionary "ypix", "xpix", "lam") –

  • Ly (y size of frame) –

  • Lx (x size of frame) –

  • label_id (whether the array values should be the integer ROI id, or just 1 indicating presence of an ROI) –
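
A hedged usage sketch with two tiny hand-made stat dicts; a "med" entry is added via median_pix in case the ROI constructor expects it, since the parameter list above only names ypix, xpix, and lam.

    import numpy as np
    from suite2p.detection.stats import ROI, median_pix

    stats = []
    for ypix, xpix in (([1, 1, 2], [1, 2, 1]), ([5, 6], [5, 5])):
        ypix, xpix = np.array(ypix), np.array(xpix)
        stats.append({"ypix": ypix, "xpix": xpix,
                      "lam": np.ones(ypix.size, np.float32),
                      "med": median_pix(ypix, xpix)})

    arr = ROI.stats_dicts_to_3d_array(stats, Ly=10, Lx=10, label_id=True)
    print(arr.shape)    # expected (2, 10, 10), one plane per ROI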

to_array(Ly, Lx)[source]

Returns a 2D boolean array of shape (Ly x Lx) indicating where the roi is located.

Return type:

ndarray

xpix: ndarray
ypix: ndarray
suite2p.detection.stats.aspect_ratio(width, height, offset=0.01)[source]
Return type:

float

suite2p.detection.stats.count_overlaps(Ly, Lx, ypixs, xpixs)[source]
Return type:

ndarray

suite2p.detection.stats.distance_kernel(radius)[source]

Returns 2D array containing geometric distance from center, with radius “radius”

Return type:

ndarray

suite2p.detection.stats.filter_overlappers(ypixs, xpixs, overlap_image, max_overlap)[source]

returns a boolean list indicating which ROIs remain after removing those that overlap more than fraction max_overlap, based on overlap_image.

Return type:

List[bool]

suite2p.detection.stats.fitMVGaus(y, x, lam0, dy, dx, thres=2.5, npts=100)[source]

computes 2D gaussian fit to data and returns ellipse of radius thres standard deviations.

Parameters:
  • y (float, array) – pixel locations in y

  • x (float, array) – pixel locations in x

  • lam0 (float, array) – weights of each pixel

Return type:

EllipseData
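
A usage sketch on a made-up weighted pixel cloud; the dy/dx arguments are passed as 1.0 on the assumption that they scale the two pixel dimensions, which is not stated above.

    import numpy as np
    from suite2p.detection.stats import fitMVGaus

    rng = np.random.default_rng(0)
    y = rng.normal(50, 3, size=200)      # pixel locations in y
    x = rng.normal(80, 6, size=200)      # pixel locations in x
    lam0 = np.ones(200)                  # uniform pixel weights

    ellipse = fitMVGaus(y, x, lam0, dy=1.0, dx=1.0, thres=2.5)
    print(ellipse.mu, ellipse.radii)     # EllipseData fields: center and radii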

suite2p.detection.stats.mean_r_squared(y, x, estimator=<function median>)[source]
Return type:

float

suite2p.detection.stats.median_pix(ypix, xpix)[source]
suite2p.detection.stats.norm_by_average(values, estimator=<function mean>, first_n=100, offset=0.0)[source]

Returns array divided by the (average of the “first_n” values + offset), calculating the average with “estimator”.

Return type:

ndarray
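
The described computation is simple enough to state as a short numpy equivalent (shown with estimator=mean):

    import numpy as np

    values = np.array([2.0, 4.0, 6.0, 8.0])
    first_n, offset = 2, 0.0
    # Divide by (mean of the first "first_n" values + offset).
    normed = values / (np.mean(values[:first_n]) + offset)   # [0.667, 1.333, 2.0, 2.667]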

suite2p.detection.stats.roi_stats(stat, Ly, Lx, aspect=None, diameter=None, max_overlap=None, do_crop=True)[source]

computes statistics of ROIs

Parameters:
  • stat (dictionary) – “ypix”, “xpix”, “lam”

  • FOV size – (Ly, Lx)

  • aspect – aspect ratio of recording

  • diameter – (dy, dx)

Returns:

stat – adds “npix”, “npix_norm”, “med”, “footprint”, “compact”, “radius”, “aspect_ratio”

Return type:

dictionary
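
A hedged usage sketch on two tiny hand-made ROIs; stat is assumed here to be a sequence of per-ROI dicts (the parameter list above calls it a dictionary), and only the documented defaults are used for aspect, diameter, and max_overlap.

    import numpy as np
    from suite2p.detection.stats import roi_stats

    stat = [
        {"ypix": np.array([10, 10, 11]), "xpix": np.array([10, 11, 10]),
         "lam": np.array([1.0, 0.5, 0.5], dtype=np.float32)},
        {"ypix": np.array([40, 41]), "xpix": np.array([40, 40]),
         "lam": np.array([1.0, 1.0], dtype=np.float32)},
    ]
    stat = roi_stats(stat, Ly=64, Lx=64)
    print(sorted(stat[0].keys()))   # should now include "npix", "med", "compact", ...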

suite2p.detection.utils module

suite2p.detection.utils.downsample(mov, taper_edge=True)[source]

Returns a pixel-downsampled movie from “mov”, tapering the edges if “taper_edge” is True.

Parameters:
  • mov (nImg x Ly x Lx) – The frames to downsample

  • taper_edge (bool) – Whether to taper the edges

Returns:

The downsampled frames

Return type:

filtered_mov

suite2p.detection.utils.hp_gaussian_filter(mov, width)[source]

Returns a high-pass-filtered copy of the 3D array “mov” using a gaussian kernel.

Parameters:
  • mov (nImg x Ly x Lx) – The frames to filter

  • width (int) – The kernel width

Returns:

filtered_mov – The filtered video

Return type:

nImg x Ly x Lx

suite2p.detection.utils.hp_rolling_mean_filter(mov, width)[source]

Returns a high-pass-filtered copy of the 3D array “mov” using a non-overlapping rolling mean kernel over time.

Parameters:
  • mov (nImg x Ly x Lx) – The frames to filter

  • width (int) – The filter width

Returns:

filtered_mov – The filtered frames

Return type:

nImg x Ly x Lx
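
The non-overlapping rolling-mean high-pass can be sketched independently: subtract each block's mean from the frames in that block. This illustrates the technique named above, not the suite2p source.

    import numpy as np

    mov = np.random.randn(100, 64, 64).astype(np.float32)   # placeholder frames
    width = 10
    mov_hp = mov.copy()
    for start in range(0, mov.shape[0], width):
        block = slice(start, start + width)
        mov_hp[block] -= mov_hp[block].mean(axis=0)          # remove block mean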

suite2p.detection.utils.mask_ious(masks_true, masks_pred)[source]

return best-matched masks

Parameters:
  • masks_true (ND-array (int)) – where 0=NO masks; 1,2… are mask labels

  • masks_pred (ND-array (int)) – where 0=NO masks; 1,2… are mask labels

Returns:

  • iou (float, ND-array) – array of IOU pairs

  • preds (int, ND-array) – array of matched indices

  • iou_all (float, ND-array) – full IOU matrix across all pairs
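
The full IOU matrix between two label images can be illustrated with a short, independent numpy function (the matched-pairs outputs would then be derived from it); this is a generic sketch, not the suite2p implementation.

    import numpy as np

    def iou_matrix(masks_true, masks_pred):
        """Full IOU matrix between two integer label images (0 = background)."""
        n_true, n_pred = int(masks_true.max()), int(masks_pred.max())
        # Joint histogram of label pairs gives the intersection sizes.
        hist = np.histogram2d(masks_true.ravel(), masks_pred.ravel(),
                              bins=(np.arange(n_true + 2), np.arange(n_pred + 2)))[0]
        inter = hist[1:, 1:]                                  # label-vs-label overlaps
        area_true = hist[1:, :].sum(axis=1, keepdims=True)    # size of each true mask
        area_pred = hist[:, 1:].sum(axis=0, keepdims=True)    # size of each predicted mask
        union = area_true + area_pred - inter
        return np.divide(inter, union, out=np.zeros_like(inter), where=union > 0)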

suite2p.detection.utils.mask_stats(mask)[source]

median and diameter of mask

suite2p.detection.utils.match_masks(iou)[source]
suite2p.detection.utils.square_mask(mask, ly, yi, xi)[source]

crop from mask a square of size ly at position yi,xi

suite2p.detection.utils.standard_deviation_over_time(mov, batch_size)[source]

Returns standard deviation of difference between pixels across time, computed in batches of batch_size.

Parameters:
  • mov (nImg x Ly x Lx) – The frames to filter

  • batch_size (int) – The batch size

Returns:

filtered_mov – The statistics for each pixel

Return type:

Ly x Lx
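
An independent sketch of the statistic described (per-pixel standard deviation of frame-to-frame differences, accumulated in batches to limit memory); treating the mean difference as zero is a simplification made here, not a detail from the suite2p source.

    import numpy as np

    mov = np.random.randn(1000, 64, 64).astype(np.float32)   # placeholder frames
    batch_size = 250
    sumsq = np.zeros(mov.shape[1:], np.float64)
    for start in range(0, mov.shape[0] - 1, batch_size):
        diff = np.diff(mov[start:start + batch_size + 1], axis=0)
        sumsq += (diff ** 2).sum(axis=0)
    sd = np.sqrt(sumsq / (mov.shape[0] - 1))                  # Ly x Lx variability map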

suite2p.detection.utils.temporal_high_pass_filter(mov, width)[source]

Returns hp-filtered mov over time, selecting an algorithm for computational performance based on the kernel width.

Parameters:
  • mov (nImg x Ly x Lx) – The frames to filter

  • width (int) – The filter width

Returns:

filtered_mov – The filtered frames

Return type:

nImg x Ly x Lx

suite2p.detection.utils.threshold_reduce(mov, intensity_threshold)[source]

Returns standard deviation of pixels, thresholded by “intensity_threshold”. Run in a loop to reduce memory footprint.

Parameters:
  • mov (nImg x Ly x Lx) – The frames to downsample

  • intensity_threshold (float) – The threshold to use

Returns:

Vt – The standard deviation of the non-thresholded pixels

Return type:

Ly x Lx

Module contents