CN111583293B - Self-adaptive image segmentation method for multicolor double-photon image sequence - Google Patents
- Publication number: CN111583293B (application CN202010393183.3A)
- Authority: CN (China)
- Prior art keywords: channel, image, background model, bimodal, value
- Prior art date: 2020-05-11
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Classifications
- G06T7/194: Image analysis; segmentation and edge detection involving foreground-background segmentation
- G06T2207/10016: Image acquisition modality: video; image sequence
- G06T2207/10064: Image acquisition modality: fluorescence image
- G06T2207/20081: Special algorithmic details: training; learning
Description
Technical Field
The present invention relates to an image processing method in the technical field of image data mining, and in particular to an adaptive image segmentation method for multicolor two-photon image sequences.
Background Art
Because multicolor two-photon imaging can simultaneously acquire high-contrast two-photon fluorescence images of a sample containing multiple fluorophores, the data it produces are typically multi-channel, high-resolution image sequences; multicolor two-photon imaging data therefore have, in theory, higher information density and complexity than monochrome two-photon imaging data. These same characteristics, however, make the analysis, processing, and mining of multicolor two-photon imaging data difficult. Moreover, as data are generated and accumulated ever faster, the traditional, manually driven mode of data analysis is clearly no longer sustainable. There is thus an urgent need for efficient, automated data-mining schemes tailored to the characteristics of multicolor two-photon imaging data, so as to raise the utilization rate of the data and mine its value more deeply.
Adaptive image segmentation is one such technique for automated data mining of multicolor two-photon image sequences. By learning from data samples of the sequence, a background model of the image sequence is first constructed; by then comparing a given image frame against the background model, the target components to be monitored in that frame can be segmented quickly and accurately, enabling fully automatic monitoring and detection of various biochemical indicators as well as dynamic tracking of physiological or pathological processes.
However, few adaptive image segmentation methods have been designed specifically for the data characteristics of multicolor two-photon image sequences. Existing methods fall mainly into two categories.
One category derives from traditional segmentation methods for single static images. The problem with such methods is that they treat a temporally coherent image sequence as isolated, unrelated single images: they exploit only the spatial information within each image and discard the temporal dimension of the sequence entirely, so they cannot fully mine the implicit dynamic information of the monitored targets in a multicolor two-photon image sequence.
The other category derives from segmentation methods in traditional intelligent video surveillance. The problem with these methods is that they neither adapt to nor exploit the characteristics of multicolor two-photon image sequence data. For example, they typically reduce color images to grayscale in a preprocessing stage to improve computational performance. That strategy is reasonable in everyday surveillance scenes, but this complexity-reducing dimensionality reduction inevitably discards important color information. For multicolor two-photon image sequences, whose color information is precisely their advantageous feature, this greatly reduces the value of the data and can even compromise the accuracy of the analysis results.
In short, blindly transplanting a mismatched adaptive image segmentation method not only fails to mine multicolor two-photon image sequences effectively, but in serious cases can lead to misjudgment of experimental results.
In the field of automated data mining for multicolor two-photon image sequences there is therefore an urgent need for effective, efficient adaptive image segmentation methods designed specifically for the characteristics of such data.
Summary of the Invention
To address the problems of the prior art, the present invention provides an adaptive image segmentation method for multicolor two-photon image sequences. The method is designed specifically around the data characteristics of such sequences: the background-model framework is optimized for the background characteristics of these images, enabling accurate segmentation, and the designed online update scheme fully meets the accuracy, real-time, and precision requirements of processing such multi-channel, high-resolution image sequence data.
The technical solution of the method of the present invention comprises the following steps:
S1: select the k-th through n-th images of the multicolor two-photon image sequence as training samples;
S2: generate an initialized multi-channel bimodal background model from the training samples;
S3: continuously update the above multi-channel bimodal background model in real time;
S4: use the continuously updated multi-channel bimodal background model to segment and detect images input in real time.
The multicolor two-photon image sequence may be acquired from brain neurons by two-photon fluorescence microscopy.
Step S1 comprises the following steps:
S11: from the multicolor two-photon image sequence, select the consecutive images from the k-th through the n-th as training samples;
S12: if the value range of the original pixel values of the training-sample images is not [0, 255], then:
preprocess the training samples so that the value range of the pixels on every color channel of every image is mapped into [0, 255], specifically by the following formula:

$$J(x,y) = \frac{255}{U}\, I(x,y)$$

where U is the upper limit of the value range of the original pixel values of the training-sample images, I(x, y) is the value of pixel (x, y) in the image before preprocessing, and J(x, y) is the value of pixel (x, y) in the image after preprocessing;
If the value range of the original pixel values of the training-sample images is already [0, 255], the training samples are not preprocessed.
After S12, the pixel values of every image are confined to the range [0, 255].
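For illustration, a minimal sketch of the S12 preprocessing in Python/NumPy; the function name and the 16-bit upper limit U = 65535 are assumptions taken from the embodiment, not part of the claimed method:

```python
import numpy as np

def normalize_to_8bit(image: np.ndarray, upper: int = 65535) -> np.ndarray:
    """Step S12: map per-channel pixel values from [0, upper] to [0, 255].

    `image` is an H x W x 3 (R, G, B) array. If the data are already
    8-bit (upper == 255), the method prescribes no preprocessing.
    """
    if upper == 255:
        return image.astype(np.float64)
    return image.astype(np.float64) * 255.0 / upper
```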
Step S2 comprises the following steps:
S21: construct the initialized multi-channel bimodal background model of the image sequence on the R channel of RGB:
S211: on the R channel, for every pixel (x, y) in the image, compute the two center values of the initialized multi-channel bimodal background model at position (x, y), as follows:
① If pixel (x, y) lies on the border of the image, compute the median and the mode of all its pixel values J(x,y)_k, J(x,y)_{k+1}, ..., J(x,y)_n across all images of the training samples, the mode being the most frequent value and J(x,y)_k denoting the value of pixel (x, y) in the k-th image; take the median and the mode, respectively, as the first and second center values of the initialized multi-channel bimodal background model at position (x, y);
② If pixel (x, y) does not lie on the border of the image, compute the median and the mode of all pixel values within its 3×3 neighborhood across all images of the training samples. Each image contributes nine pixels from the 3×3 neighborhood and the training samples contain n−k+1 images, for a total of 9×(n−k+1) pixel values; take the median and the mode, respectively, as the first and second center values of the initialized multi-channel bimodal background model at position (x, y).
This yields the first and second center values of the multi-channel bimodal background model at position (x, y), denoted $m^{R}_{1,n}(x,y)$ and $m^{R}_{2,n}(x,y)$ respectively.
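A sketch of the S211 center-value initialization, assuming the training frames of one channel are stacked as a (n−k+1)×H×W array; computing the mode through a 256-bin count is an illustrative choice, since the text only specifies "the most frequent value":

```python
import numpy as np

def init_center_values(frames: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Step S211 on one channel: per-pixel (median, mode) center values.

    frames: (T, H, W) uint8 stack of the training images J_k .. J_n.
    Border pixels use only their own temporal samples; interior pixels
    use the temporal samples of their whole 3x3 spatial neighborhood.
    """
    T, H, W = frames.shape
    m1 = np.empty((H, W))  # first center value: median
    m2 = np.empty((H, W))  # second center value: mode
    for y in range(H):
        for x in range(W):
            if y in (0, H - 1) or x in (0, W - 1):   # image border
                samples = frames[:, y, x]
            else:                                     # 3x3 neighborhood
                samples = frames[:, y - 1:y + 2, x - 1:x + 2].ravel()
            m1[y, x] = np.median(samples)
            m2[y, x] = np.bincount(samples, minlength=256).argmax()
    return m1, m2
```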
S212: on the R channel, compute from all images k through n of the training samples the radius of the initialized multi-channel bimodal background model shared by the pixels, where "shared" means that all pixels of the same image have the same background-model radius. The computation is as follows:
① For each image of the training samples, apply an image edge-detection algorithm to find the non-edge pixels, and collect all non-edge pixels of each image into a set; the set of non-edge pixels of the z-th image is denoted $\Omega^{R}_{z}$.
② Across all images k through n of the training samples, compute from the sets of non-edge pixels the radius of the initialized multi-channel bimodal background model shared by the pixels, according to the following formula:

$$r^{R}_{n} = \min\!\left(V,\ \frac{1}{n-k}\sum_{z=k}^{n-1}\frac{1}{\left|\Omega^{R}_{z}\right|}\sum_{(x,y)\in\Omega^{R}_{z}}\left|J^{R}_{z+1}(x,y)-J^{R}_{z}(x,y)\right|\right)$$

where $\left|\Omega^{R}_{z}\right|$ is the total number of non-edge pixels of the z-th image on the R channel, z = k, ..., n−1 is the image index, $r^{R}_{n}$ is the radius of the initialized multi-channel bimodal background model at every pixel position on the R channel ($r^{R}_{n}$ is independent of pixel position), $J^{R}_{z}(x,y)$ is the value of pixel (x, y) in the z-th image on the R channel, and V is the upper limit of pixel values in the image, i.e. 255.
The subscript n of $r^{R}_{n}$ denotes the radius value at the n-th image; this value is accumulated from the data of images k through n, so the radius of the initialized model is the value reached upon (accumulating up to) the n-th image. Once $r^{R}_{n}$ has been obtained, it is used at frame n+1 to iteratively update the background-model radius to $r^{R}_{n+1}$. That is, within frames k through n the background-model radius is not computed iteratively; the iterative computation begins only from frame n+1.
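Because the radius formula above is itself a reconstruction (the original expression is rendered as an image in the published text), the following sketch is only one consistent interpretation: non-edge pixels are selected by a simple gradient-magnitude test, since the patent requires an edge-detection algorithm without naming one, and the shared radius is the mean absolute frame-to-frame difference over those pixels, clipped to [0, V]:

```python
import numpy as np

def non_edge_mask(frame: np.ndarray, thresh: float = 20.0) -> np.ndarray:
    """Stand-in edge test for step S212-①: True where the gradient
    magnitude is small, i.e. the pixel is a non-edge pixel."""
    gy, gx = np.gradient(frame.astype(np.float64))
    return np.hypot(gx, gy) < thresh

def init_shared_radius(frames: np.ndarray, V: int = 255) -> float:
    """Shared radius r_n for one channel from the (T, H, W) training stack,
    per the reconstructed S212-② formula."""
    diffs = []
    for z in range(len(frames) - 1):
        mask = non_edge_mask(frames[z])              # Omega_z
        d = np.abs(frames[z + 1].astype(np.float64)
                   - frames[z].astype(np.float64))
        diffs.append(d[mask].mean())                 # mean over Omega_z
    return min(float(V), float(np.mean(diffs)))
```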
S213: the initialized multi-channel bimodal background model of the R channel at pixel position (x, y) is constituted as follows: it is the combination of two value ranges whose center values are $m^{R}_{1,n}(x,y)$ and $m^{R}_{2,n}(x,y)$ and whose radius is in both cases $r^{R}_{n}$, i.e. the first range is $[m^{R}_{1,n}(x,y)-r^{R}_{n},\ m^{R}_{1,n}(x,y)+r^{R}_{n}]$ and the second range is $[m^{R}_{2,n}(x,y)-r^{R}_{n},\ m^{R}_{2,n}(x,y)+r^{R}_{n}]$.
S22: on the R channel, compute the learning rate of the initialized multi-channel bimodal background model, as follows:
Within all images of the training samples, compute on the R channel the probability that the value of a pixel transitions from gray level θ1 to gray level θ2, over all pixels in the image, generating the learning rate $A^{R}_{n}(\theta_1\to\theta_2)$ of the multi-channel bimodal background model at the n-th image, shared by all pixels in the image, where θ1 is the gray level before the transition, θ2 is the gray level after the transition, and θ1, θ2 ∈ [0, 255];
S23: following the same method as steps S21~S22, compute the initialized background model of the image sequence on the G channel of RGB together with its learning rate, i.e. the initialized multi-channel bimodal background model of the G channel, whose two value ranges have center values $m^{G}_{1,n}(x,y)$ and $m^{G}_{2,n}(x,y)$ and the same radius $r^{G}_{n}$, and whose learning rate is $A^{G}_{n}(\theta_1\to\theta_2)$ with θ1, θ2 ∈ [0, 255];
S24: following the same method as steps S21~S22, compute the initialized background model of the image sequence on the B channel of RGB together with its learning rate, i.e. the initialized multi-channel bimodal background model of the B channel, whose two value ranges have center values $m^{B}_{1,n}(x,y)$ and $m^{B}_{2,n}(x,y)$ and the same radius $r^{B}_{n}$, and whose learning rate is $A^{B}_{n}(\theta_1\to\theta_2)$ with θ1, θ2 ∈ [0, 255].
Step S3 comprises the following steps:
S31: on the R channel, continuously update the center values of the multi-channel bimodal background model, as follows:
When the (n+1)-th image is newly read, for every pixel (x, y) in the image, update the first and second center values of the multi-channel bimodal background model at that position by the following formulas:

$$m^{R}_{1,n+1}(x,y) = \bigl(1-A^{R}_{n}(\theta_1\to\theta_2)\bigr)\, m^{R}_{1,n}(x,y) + A^{R}_{n}(\theta_1\to\theta_2)\, J^{R}_{n+1}(x,y) \quad (1)$$

$$m^{R}_{2,n+1}(x,y) = \bigl(1-A^{R}_{n}(\theta_1\to\theta_2)\bigr)\, m^{R}_{2,n}(x,y) + A^{R}_{n}(\theta_1\to\theta_2)\, J^{R}_{n+1}(x,y) \quad (2)$$

where $m^{R}_{1,n+1}(x,y)$ and $m^{R}_{2,n+1}(x,y)$ are the two center values of the multi-channel bimodal background model of pixel (x, y) at the (n+1)-th image, $m^{R}_{1,n}(x,y)$ and $m^{R}_{2,n}(x,y)$ are the two center values at the n-th image, $A^{R}_{n}$ is the background-model learning rate at the n-th image, and $J^{R}_{n+1}(x,y)$ is the value of pixel (x, y) in the (n+1)-th image. In formula (1) θ1 takes the value $m^{R}_{1,n}(x,y)$; in formula (2) θ1 takes the value $m^{R}_{2,n}(x,y)$; in both formulas θ2 takes the value $J^{R}_{n+1}(x,y)$.
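A sketch of one S31 update on one channel, following formulas (1) and (2) as reconstructed above; rounding the real-valued center to the nearest gray level before indexing the learning-rate matrix is an implementation detail the published text does not spell out:

```python
import numpy as np

def update_centers(m1: np.ndarray, m2: np.ndarray,
                   frame: np.ndarray, A: np.ndarray):
    """One S31 step on one channel (formulas (1) and (2)).

    m1, m2 : (H, W) float center values at frame n.
    frame  : (H, W) uint8 pixel values J_{n+1}.
    A      : (256, 256) learning-rate matrix A_n(theta1 -> theta2).
    """
    t2 = frame.astype(np.intp)                    # theta2 = J_{n+1}(x, y)
    a1 = A[np.clip(np.rint(m1), 0, 255).astype(np.intp), t2]
    a2 = A[np.clip(np.rint(m2), 0, 255).astype(np.intp), t2]
    new_m1 = (1.0 - a1) * m1 + a1 * frame         # formula (1)
    new_m2 = (1.0 - a2) * m2 + a2 * frame         # formula (2)
    return new_m1, new_m2
```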
S32: on the R channel, continuously update the radius of the multi-channel bimodal background model, as follows:
When the (n+1)-th image is newly read, the background-model radius shared by every pixel position (x, y) in the video field of view is updated:

$$d^{R}_{n+1} = \frac{1}{\left|\Omega^{R}_{n}\right|}\sum_{(x,y)\in\Omega^{R}_{n}}\left|J^{R}_{n+1}(x,y)-J^{R}_{n}(x,y)\right| \qquad\text{and}\qquad r^{R}_{n+1} = \min\!\left(V,\ \frac{(n-k)\, r^{R}_{n} + d^{R}_{n+1}}{n-k+1}\right)$$
where $r^{R}_{n+1}$ is the radius of the multi-channel bimodal background model, at any pixel position, at frame n+1;
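Under the same reconstruction caveat as in step S212, the iterative S32 update can be sketched as folding the frame-(n+1) difference statistic into a running mean; `non_edge_mask` is the helper from the S212 sketch above:

```python
import numpy as np

def update_radius(r_prev: float, pairs_seen: int,
                  prev_frame: np.ndarray, new_frame: np.ndarray,
                  V: int = 255) -> tuple[float, int]:
    """One S32 step (a sketch; the exact recursion of the published text
    is not recoverable): incremental mean of the difference statistic d."""
    mask = non_edge_mask(prev_frame)              # S212 sketch helper
    d = np.abs(new_frame.astype(np.float64)
               - prev_frame.astype(np.float64))[mask].mean()
    r_new = min(float(V), (pairs_seen * r_prev + d) / (pairs_seen + 1))
    return r_new, pairs_seen + 1
```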
S33: on the R channel, when frame n+1 is newly read, the multi-channel bimodal background model at every pixel position (x, y) in the video field of view is updated: the background model is the combination of two value ranges whose center values are $m^{R}_{1,n+1}(x,y)$ and $m^{R}_{2,n+1}(x,y)$ and whose common radius is $r^{R}_{n+1}$, i.e. $[m^{R}_{1,n+1}(x,y)-r^{R}_{n+1},\ m^{R}_{1,n+1}(x,y)+r^{R}_{n+1}]$ and $[m^{R}_{2,n+1}(x,y)-r^{R}_{n+1},\ m^{R}_{2,n+1}(x,y)+r^{R}_{n+1}]$.
S34: on the R channel, continuously update the learning rate of the multi-channel bimodal background model, as follows:
When the (n+1)-th image is newly read, compute on the R channel the probability that the values of all pixels located in odd rows and odd columns transition from gray level θ1 to gray level θ2 within images k+1 through n+1, generating the learning rate $A^{R}_{n+1}(\theta_1\to\theta_2)$ of the multi-channel bimodal background model at frame n+1, shared by the pixels in the image. (Computing over only the odd rows and odd columns subsamples the image and reduces the cost of the update.)
By analogy, when the (n+i)-th image is newly read, the same method as in steps S31~S34 above is used to continuously update the multi-channel bimodal background model at frame n+i; that model can again be represented as two value ranges with center values $m^{R}_{1,n+i}(x,y)$ and $m^{R}_{2,n+i}(x,y)$ and shared radius $r^{R}_{n+i}$.
S35: following the method of steps S31~S34 above, continuously update the multi-channel bimodal background model of the image sequence on the G channel and its learning rate, obtaining $m^{G}_{1}(x,y)$, $m^{G}_{2}(x,y)$, $r^{G}$, and $A^{G}(\theta_1\to\theta_2)$ at each frame.
S36: following the method of steps S31~S34 above, continuously update the multi-channel bimodal background model of the image sequence on the B channel and its learning rate, obtaining $m^{B}_{1}(x,y)$, $m^{B}_{2}(x,y)$, $r^{B}$, and $A^{B}(\theta_1\to\theta_2)$ at each frame.
The above steps are repeated, iterating continuously to update the multi-channel bimodal background model of every image of the sequence on the three RGB channels.
Step S4 specifically uses the value ranges of the multi-channel bimodal background model to process and judge each pixel of the image: if the pixel's value lies within either of the two value ranges of the multi-channel bimodal background model, the pixel is taken as background; if the pixel's value lies within neither of the two value ranges, the pixel is taken as foreground.
In a specific implementation, real-time detection on two-photon images of brain neurons can distinguish active from inactive neurons: if the pixel value of a pixel lies within either of the two value ranges of the multi-channel bimodal background model, the pixel is taken as an inactive neuron; if it lies within neither range, the pixel is taken as an active neuron.
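The S4 decision is a per-pixel membership test against the two value ranges. The sketch below handles one channel; how the three channel-wise decisions are combined into a single label is not spelled out in the published text, so the OR-combination in `segment_rgb` is an assumption:

```python
import numpy as np

def segment_channel(frame: np.ndarray, m1: np.ndarray, m2: np.ndarray,
                    r: float) -> np.ndarray:
    """Step S4 on one channel: True = foreground (e.g. active neuron),
    False = background (e.g. inactive neuron)."""
    J = frame.astype(np.float64)
    in_mode1 = np.abs(J - m1) <= r        # inside the first value range
    in_mode2 = np.abs(J - m2) <= r        # inside the second value range
    return ~(in_mode1 | in_mode2)         # in neither range -> foreground

# One plausible multi-channel combination (an assumption, not the patent's
# stated rule): a pixel is foreground if it is foreground on any channel.
def segment_rgb(frames_rgb, models):
    fg = [segment_channel(f, *m) for f, m in zip(frames_rgb, models)]
    return fg[0] | fg[1] | fg[2]
```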
The substantial beneficial effects of the present invention are:
The method of the present invention alleviates the field's lack of adaptive image segmentation designed specifically for the data characteristics of multicolor two-photon image sequences. At the same time, it overcomes the inability of some existing methods to adapt to and exploit those characteristics:
(1) The method is dedicated to mining multicolor two-photon image sequence data and makes full use of the temporal dimension of the image sequence, thereby effectively mining the implicit dynamic information of the monitored targets in the sequence;
(2) The method effectively mines the implicit feature information of the different color channels, without discarding color information and thereby devaluing the data or degrading the accuracy of the results;
(3) The method designs a bimodal background-model framework and an online update mechanism specifically for the inherent characteristics of the background in multicolor two-photon image sequence data, effectively guaranteeing the accuracy and computational efficiency of the background-model computation and thereby improving the accuracy of image segmentation.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a schematic flowchart of the method of the present invention.
FIG. 2 is a schematic flowchart of the method of the present invention.
FIG. 3 is a schematic flowchart of the method of the present invention.
FIG. 4 shows examples of the training samples used in the method of the present invention.
FIG. 5 shows example results obtained by the method of the present invention in the embodiment.
FIG. 6 shows example results obtained in the embodiment by a general-purpose image segmentation method from the intelligent video surveillance field.
FIG. 7 shows example results obtained in the embodiment by a general-purpose static segmentation method for single images.
FIG. 8 is a schematic diagram of how the background-model learning rate is obtained in the method of the present invention.
Table 1 is a qualitative comparison of the image segmentation results of the method of the present invention with other general-purpose methods.
DETAILED DESCRIPTION
The technical solution of the present invention is further described below through an embodiment, with reference to the accompanying drawings.
As shown in FIG. 1, an embodiment of the present invention is as follows:
The embodiment takes a multicolor two-photon image sequence of a living zebrafish embryo as an example. The images have three color channels (RGB), each with 16-bit depth, so the pixel values on each channel range over 0~65535; the image resolution is 794×624. FIG. 4 shows the three images of the sample sequence corresponding to minute 0, minute 68, and minute 168.
The specific procedure of this embodiment, shown in FIG. 1, FIG. 2, and FIG. 3, comprises the following steps:
S1: select images k = 1 through n = 100 of the multicolor two-photon image sequence as training samples:
S11: from the multicolor two-photon image sequence, select the consecutive images from the 1st through the 100th as training samples;
S12: since the value range of the original pixel values of the training-sample images is not [0, 255], preprocess the training samples so that the value range of the pixels on every color channel of every image is mapped into [0, 255].
S2: generate the initialized multi-channel bimodal background model from the training samples:
S21: construct the initialized multi-channel bimodal background model of the image sequence on the R channel of RGB:
S211: on the R channel, for every pixel (x, y) in the image, compute the two center values of the initialized multi-channel bimodal background model at position (x, y), as follows:
① If pixel (x, y) lies on the border of the image, compute the median and the mode of all 100 of its pixel values J(x,y)_1, J(x,y)_2, ..., J(x,y)_100 across the 100 images of the training samples, the mode being the most frequent value; take the median and the mode, respectively, as the first center value $m^{R}_{1,100}(x,y)$ and the second center value $m^{R}_{2,100}(x,y)$ of the initialized multi-channel bimodal background model at position (x, y);
② If pixel (x, y) does not lie on the border of the image, compute the median and the mode of all pixel values within its 3×3 neighborhood across all images of the training samples. Each image contributes nine pixels from the 3×3 neighborhood and the training samples contain 100 images, for a total of 900 pixel values; take the median and the mode, respectively, as the first and second center values of the initialized multi-channel bimodal background model at position (x, y).
This yields the first and second center values $m^{R}_{1,100}(x,y)$ and $m^{R}_{2,100}(x,y)$ of the multi-channel bimodal background model at position (x, y).
S212: on the R channel, compute from all images 1~100 of the training samples the radius of the initialized multi-channel bimodal background model shared by the pixels, as follows:
① For each image of the training samples, apply an image edge-detection algorithm to find the non-edge pixels, and collect all non-edge pixels of each image into a set, denoted $\Omega^{R}_{z}$ for the z-th image;
② Within all images 1~100 of the training samples, compute from the sets of non-edge pixels of each image the radius $r^{R}_{100}$ of the initialized multi-channel bimodal background model shared by the pixels, following the formula of step S212.
S213: the initialized multi-channel bimodal background model of the R channel at pixel position (x, y) is constituted as follows: it is the combination of two value ranges whose center values are $m^{R}_{1,100}(x,y)$ and $m^{R}_{2,100}(x,y)$ and whose radius is in both cases $r^{R}_{100}$; the first range is $[m^{R}_{1,100}(x,y)-r^{R}_{100},\ m^{R}_{1,100}(x,y)+r^{R}_{100}]$ and the second range is $[m^{R}_{2,100}(x,y)-r^{R}_{100},\ m^{R}_{2,100}(x,y)+r^{R}_{100}]$.
S22: on the R channel, compute the learning rate of the initialized multi-channel bimodal background model, as follows:
Within all images of the training samples, compute on the R channel the probability that the value of any pixel transitions from gray level θ1 to gray level θ2, generating the learning rate $A^{R}_{100}(\theta_1\to\theta_2)$ of the multi-channel bimodal background model at the 100th image, shared by the pixels in the image. A schematic of the background-model learning rate in the method of the present invention is shown in FIG. 8.
Preferably, the background-model learning rate can be computed with the following iterative algorithm:
E(θ1→θ2) = 1;

where $J^{R}_{k}(x,y)$ and $J^{R}_{k+1}(x,y)$ denote the values of an arbitrary pixel (x, y) of the image in the k-th and (k+1)-th frames, abbreviated θ1 and θ2 respectively. Since the three RGB channels of the example images each have 8-bit depth after preprocessing, i.e. 256 gray levels per channel, θ1 ∈ [0, 255] and θ2 ∈ [0, 255]. E(θ1→θ2) = 1 records one detection of the following event: the value of pixel (x, y) jumps from gray level θ1 in frame k to gray level θ2 in frame k+1. ΣE(θ1→θ2) counts, over all pixels in the image, how many times a pixel value jumps from θ1 in frame k to θ2 in frame k+1, and this count is recorded in the corresponding cell of the square matrix $H_{k}$. The square matrix $H = \sum_{k=1}^{99} H_{k}$ accumulates these values over images 1~100 of the training samples, so H records the total number of detected θ1-to-θ2 gray-level jumps within the training-sample images. Normalizing the values of H to probability values in [0, 1] yields the background-model learning rate $A^{R}_{100}(\theta_1\to\theta_2)$, a square matrix of size 256×256.
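A sketch of the learning-rate computation, tallying transition counts with NumPy's scatter-add; per-row normalization (so that each row of the matrix sums to 1 over θ2) is one reading of "normalized to probability values", and calling the counting routine with step=2 covers the odd-row/odd-column subsampling of the update steps. An online version would add the newest frame pair's counts and drop the oldest pair's instead of recomputing:

```python
import numpy as np

def transition_counts(prev: np.ndarray, cur: np.ndarray,
                      step: int = 1) -> np.ndarray:
    """Count matrix H_k: cell (t1, t2) counts pixels jumping from gray
    level t1 in frame k to t2 in frame k+1 (step=2 keeps rows/columns
    1, 3, 5, ... in 1-based indexing, i.e. the odd rows and columns)."""
    p = prev[::step, ::step].ravel().astype(np.intp)
    c = cur[::step, ::step].ravel().astype(np.intp)
    H = np.zeros((256, 256), dtype=np.int64)
    np.add.at(H, (p, c), 1)               # scatter-add one event per pixel
    return H

def learning_rate(frames: np.ndarray, step: int = 1) -> np.ndarray:
    """Accumulate H over consecutive frame pairs and normalize each row
    to transition probabilities in [0, 1] (rows with no events stay 0)."""
    H = np.zeros((256, 256), dtype=np.int64)
    for k in range(len(frames) - 1):
        H += transition_counts(frames[k], frames[k + 1], step)
    totals = H.sum(axis=1, keepdims=True)
    return np.divide(H, totals, out=np.zeros_like(H, dtype=np.float64),
                     where=totals > 0)
```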
S23: following the same method as steps S21~S22, compute the initialized background model of the image sequence on the G channel of RGB together with its learning rate, i.e. the initialized multi-channel bimodal background model of the G channel, with center values $m^{G}_{1,100}(x,y)$ and $m^{G}_{2,100}(x,y)$, the same radius $r^{G}_{100}$ for both ranges, and learning rate $A^{G}_{100}(\theta_1\to\theta_2)$, θ1, θ2 ∈ [0, 255];
S24: following the same method as steps S21~S22, compute the initialized background model of the image sequence on the B channel of RGB together with its learning rate, i.e. the initialized multi-channel bimodal background model of the B channel, with center values $m^{B}_{1,100}(x,y)$ and $m^{B}_{2,100}(x,y)$, the same radius $r^{B}_{100}$ for both ranges, and learning rate $A^{B}_{100}(\theta_1\to\theta_2)$, θ1, θ2 ∈ [0, 255].
S3: continuously update the above multi-channel bimodal background model in real time:
S31: on the R channel, continuously update the center values of the multi-channel bimodal background model, as follows:
When frame 101 is newly read, for every pixel (x, y) in the video field of view, update the first and second center values of the multi-channel bimodal background model at its position according to formulas (1) and (2) of step S31, with n = 100, obtaining $m^{R}_{1,101}(x,y)$ and $m^{R}_{2,101}(x,y)$.
S32: on the R channel, continuously update the radius of the multi-channel bimodal background model, as follows:
When frame 101 is newly read, update the shared background-model radius at every pixel position (x, y) in the video field of view following the method of step S32, with n = 100, obtaining $r^{R}_{101}$.
On the R channel, when frame 101 is newly read, the multi-channel bimodal background model at every pixel position (x, y) in the video field of view is updated as follows: the background model is the combination of two value ranges whose center values are $m^{R}_{1,101}(x,y)$ and $m^{R}_{2,101}(x,y)$ and whose radius is in both cases $r^{R}_{101}$.
S33: on the R channel, continuously update the learning rate of the multi-channel bimodal background model, as follows:
When frame 101 is newly read, compute on the R channel the probability that the values of all pixels in odd rows and odd columns transition from gray level θ1 to gray level θ2 within images 2 through 101, generating the learning rate $A^{R}_{101}(\theta_1\to\theta_2)$ of the multi-channel bimodal background model at frame 101, shared by the pixels in the image.
By analogy, when frame 100+i is newly read, the same method as in steps S31~S33 above is used to continuously update the multi-channel bimodal background model at frame 100+i; that model can again be represented as two value ranges with center values $m^{R}_{1,100+i}(x,y)$ and $m^{R}_{2,100+i}(x,y)$ and shared radius $r^{R}_{100+i}$.
S34: following the method of steps S31~S33 above, continuously update the multi-channel bimodal background model of the image sequence on the G channel and its learning rate, which are, respectively, $m^{G}_{1}(x,y)$, $m^{G}_{2}(x,y)$, $r^{G}$, and $A^{G}(\theta_1\to\theta_2)$.
S35: following the method of steps S31~S33 above, continuously update the multi-channel bimodal background model of the image sequence on the B channel and its learning rate, which are, respectively, $m^{B}_{1}(x,y)$, $m^{B}_{2}(x,y)$, $r^{B}$, and $A^{B}(\theta_1\to\theta_2)$.
As stated above, $A^{R}(\theta_1\to\theta_2)$ is a square matrix of size 256×256. Since θ1 and θ2 are respectively the row and column coordinates of this matrix, substituting concrete values of θ1 and θ2 retrieves the background-model learning rate stored at the cell in row θ1, column θ2. In the example of FIG. 8, the value of $A^{R}(160\to200)$ is the learning rate stored at row 160, column 200 of the matrix, namely 0.5.
S4: use the continuously updated multi-channel bimodal background model to segment and detect images input in real time.
In a specific implementation, real-time detection on two-photon images of brain neurons can distinguish active from inactive neurons: if the pixel value of a pixel lies within either of the two value ranges of the multi-channel bimodal background model, the pixel is taken as an inactive neuron; if it lies within neither range, the pixel is taken as an active neuron.
The results obtained by the method of the present invention in the embodiment are shown in FIG. 5. Because the method is designed for, and specially optimized to, the data characteristics of multicolor two-photon image sequences, the segmented foreground (the white pixel regions) agrees well overall with the target objects to be detected, with few missed detections (foreground pixels that should be marked white but are marked black, the background color) and few false detections (background pixels that should be marked black but are marked white, the foreground color).
For comparison, a general-purpose image segmentation method from the intelligent video surveillance field was selected; its results in the embodiment are shown in FIG. 6. Because that method is not designed for the data characteristics of multicolor two-photon image sequences, its segmented foreground agrees less well with the target objects, with many missed-detection regions.
In addition, a general-purpose static segmentation method for single images was selected for comparison; its results in the embodiment are shown in FIG. 7. Its segmented foreground agrees even less well with the target objects, with still more missed-detection regions.
In summary, the qualitative comparison of the method of the present invention with the two general-purpose segmentation methods above is given in Table 1.
Table 1. Qualitative comparison of image segmentation results.
These results show that the present invention addresses the lack of adaptive image segmentation designed specifically for the data characteristics of multicolor two-photon image sequences, overcomes the inability of some existing methods to adapt to and exploit those characteristics, improves the accuracy of image segmentation, and achieves a prominent, significant technical effect.
Claims (3)
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202010393183.3A (CN111583293B) | 2020-05-11 | 2020-05-11 | Self-adaptive image segmentation method for multicolor double-photon image sequence |
Publications (2)

| Publication Number | Publication Date |
|---|---|
| CN111583293A | 2020-08-25 |
| CN111583293B | 2023-04-11 |
Family
ID=72126475
Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202010393183.3A | Self-adaptive image segmentation method for multicolor double-photon image sequence | 2020-05-11 | 2020-05-11 |

Country Status (1)

| Country | Link |
|---|---|
| CN (1) | CN111583293B (en) |
Citations (5)

| Publication Number | Priority Date | Publication Date | Assignee | Title |
|---|---|---|---|---|
| WO2018112137A1 | 2016-12-15 | 2018-06-21 | General Electric Company | System and method for image segmentation using a joint deep learning model |
| KR101913952B1 | 2017-07-11 | 2018-10-31 | 경북대학교 산학협력단 | Automatic recognition method of iPSC colony through V-CNN approach |
| CN108765463A | 2018-05-30 | 2018-11-06 | 河海大学常州校区 | Moving target detection method combining region extraction and improved texture features |
| KR20190134933A | 2018-05-18 | 2019-12-05 | 오드컨셉 주식회사 | Method, apparatus and computer program for extracting representative feature of object in image |
| AU2019201787A1 | 2018-05-22 | 2019-12-12 | Adobe Inc. | Compositing aware image search |
Family Cites Families (1)

| Publication Number | Priority Date | Publication Date | Assignee | Title |
|---|---|---|---|---|
| US10380741B2 | 2016-12-07 | 2019-08-13 | Samsung Electronics Co., Ltd | System and method for a deep learning machine for object detection |

- 2020-05-11: application CN202010393183.3A filed in China (granted as CN111583293B, status: active)
Non-Patent Citations (1)

| Title |
|---|
| He Liangshi. Improved mixture-of-Gaussians background model for dynamic scenes. Computer Engineering, 2012, 38(8). |
Also Published As

| Publication Number | Publication Date |
|---|---|
| CN111583293A | 2020-08-25 |
Legal Events

| Code | Title |
|---|---|
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |
| GR01 | Patent grant |