CN100545640C - A Method for Automatic Tracking of Cells in Video Microscopic Image - Google Patents


Publication number
CN100545640C
CN100545640C CNB2007100710759A CN200710071075A
Authority
CN
China
Prior art keywords
image
cell
energy
prime
max
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CNB2007100710759A
Other languages
Chinese (zh)
Other versions
CN101144784A (en)
Inventor
彭冬亮
林岳松
金朝阳
薛安克
陈华杰
朱胜利
郭云飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Electronic Science and Technology University
Original Assignee
Hangzhou Electronic Science and Technology University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Electronic Science and Technology University filed Critical Hangzhou Electronic Science and Technology University
Priority to CNB2007100710759A
Publication of CN101144784A
Application granted
Publication of CN100545640C
Expired - Fee Related
Anticipated expiration

Landscapes

  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a method for automatically tracking cells in video microscopic images. Existing cell tracking methods have a low degree of automation and cannot cope with the complexity and variability of cell movement or with multi-cell tracking. The steps of the invention include: enhancing the acquired video microscopic images of cell movement frame by frame; extracting target cells from the enhanced cell movement images; establishing a cell movement dynamics model; and tracking the moving target cells. Based on an analysis of cell motion characteristics and of the noise and interference in the images, the invention adopts dynamic-system and stochastic modeling methods and tracks the cell motion trajectories through recursive Bayesian filtering and data association, achieving a high degree of automation and strong processing capability.

Description

A Method for Automatic Tracking of Cells in Video Microscopic Images

Technical Field

The invention belongs to the field of cell biology and biopharmaceuticals, and relates to a method for the automatic tracking of cells in video microscopic images.

Background

Cell tracking refers to the qualitative or quantitative analysis of attributes such as the trajectory, speed, color, and shape of specific cells over the observation period. It is an effective and indispensable means for cell-biology and biopharmaceutical research into cell activity, cell migration, and cell tropism, and has important research significance and practical value in biology, pharmacology, and pathology.

At present, domestic cell tracking is implemented mainly by manual, periodic detection and recording with the aid of microscopic equipment, so its degree of automation is low. On the one hand this demands laborious, high-intensity work from the staff; on the other hand, the complexity and variability of cell movement (such as cell division, fusion, aggregation, and crossing) and the noise and interference of the video images themselves make the manual observation process more difficult and error-prone.

In recent years, foreign research institutions have achieved some results in automatic cell tracking and, with the help of image analysis software, have developed a number of automatic cell tracking systems (for example at the University of Virginia, the University of Aberdeen, and the European Molecular Biology Laboratory), but commercialization is still immature. Although these systems can compute the positions of cells in the image through image processing and analysis software, they have difficulty handling complex phenomena such as cell division and fusion, cell aggregation and the separation of cell clusters, and the crossing of multi-cell motion trajectories.

Summary of the Invention

The purpose of the present invention is to address the deficiencies of existing cell tracking methods and to provide, for research in cell biology, biopharmaceuticals, and related fields, a method for tracking cells in video microscopic images with a high degree of automation and strong processing capability.

The present invention comprises the following steps: 1. acquire real-time cell motion images through a phase-contrast microscope and enhance the acquired cell motion video microscopic images frame by frame; 2. extract target cells from the enhanced cell motion images; 3. establish a cell movement dynamics model; 4. track the moving target cells.

The enhancement of the video microscopic images in step 1 adopts the generalized fuzzy enhancement method, with the following specific steps:

1) Initialization: input the image f to be processed and set the iteration counter to r = 1;

2) Smooth with a median filter: taking each pixel f(i, j) of image f as the center of a window (a unit covering part of the image), use the average gray value of the pixels of the image covered by the window as the new gray value $f_{ij}$ of pixel f(i, j);

3) Perform the r-th iteration on the new gray values $f_{ij}$ and determine the fuzzy feature plane $\{\mu_{ij}(r)\}$ of the median-filtered image;

4) Apply the following nonlinear transformation to $\mu_{ij}(r)$, the result being denoted $\mu'_{ij}(r)$:

$$\mu'_{ij}(r) = T(\mu_{ij}(r)) = \begin{cases} 2\,(\mu_{ij}(r))^2, & 0 \le \mu_{ij}(r) \le 0.5 \\ 1 - 2\,(1 - \mu_{ij}(r))^2, & 0.5 < \mu_{ij}(r) \le 1 \end{cases}$$

5) Apply the inverse transformation to $\mu'_{ij}(r)$ to obtain a new grayscale image $\{f'_{ij}\}$;

6) Gray-level transformation: set appropriate $f_{\min}^e$ and $f_{\max}^e$, and compute the gray levels $\{f_{ij}^e\}$ of the image after generalized fuzzy enhancement;

7) Compare the image quality index of the r-th and (r-1)-th enhanced images. If $\sigma_w(r)$ is smaller than $\sigma_w(r-1)$, set $\mu_{ij}(r) \triangleq \mu'_{ij}(r)$, let $r+1 \Rightarrow r$, and return to 4) to iterate on $\mu_{ij}(r)$; otherwise output the (r-1)-th enhanced image.

The computational complexity of the above algorithm can be analyzed as follows: for an image with m×n pixels, the computational complexity of the generalized fuzzy enhancement algorithm is O(m×n).
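To make steps 2)-7) concrete, here is a minimal numpy sketch of the enhancement loop. The fuzzy parameter Fc, the 256-bin histogram, and the iteration cap max_iter are illustrative assumptions not fixed by the text, and the median pre-filter and the gray-level transformation of step 6) are omitted for brevity.

```python
import numpy as np

def membership(f, Fc=2.0):
    """Step 3: fuzzy feature plane mu = H(f) = ((f - fmin)/(fmax - fmin))^Fc (Fc assumed)."""
    fmin, fmax = f.min(), f.max()
    return ((f - fmin) / (fmax - fmin)) ** Fc

def T(mu):
    """Step 4: piecewise-quadratic contrast intensification."""
    return np.where(mu <= 0.5, 2.0 * mu**2, 1.0 - 2.0 * (1.0 - mu)**2)

def inverse(mu, fmin, fmax, Fc=2.0):
    """Step 5: inverse transform H^{-1} back to gray values."""
    return fmax - (1.0 - mu ** (1.0 / Fc)) * (fmax - fmin)

def sigma_w(f, bins=256):
    """Step 7's quality index: gray-range-weighted std of the histogram percentages."""
    p, _ = np.histogram(f, bins=bins)
    p = p / f.size
    delta_f = max(float(f.max() - f.min()), 1.0)
    return np.sqrt(np.mean((p - p.mean()) ** 2)) / delta_f

def fuzzy_enhance(f, Fc=2.0, max_iter=20):
    """Iterate steps 4)-7) until sigma_w stops decreasing (median filter omitted here)."""
    f = f.astype(float)
    fmin, fmax = f.min(), f.max()
    mu = membership(f, Fc)
    best, prev = f, sigma_w(f)
    for _ in range(max_iter):
        mu = T(mu)                        # nonlinear transform of the feature plane
        g = inverse(mu, fmin, fmax, Fc)   # back to gray levels
        s = sigma_w(g)
        if s >= prev:                     # quality index no longer decreases: stop
            break
        best, prev = g, s
    return best
```

Each pass touches every pixel a constant number of times, matching the stated O(m×n) complexity per iteration.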

The method for extracting target cells in step 2 adopts the active contour model (Snake) method; the specific implementation includes the following steps:

1) Construct the energy model

Let the curve be v(s) = [x(s) y(s)], s ∈ [0, 1]; the total energy defined on it is expressed as:

$$E_{total}(v(s)) = \int_s \left(E_{int}(v(s)) + E_{image}(v(s)) + E_{con}(v(s))\right) ds \qquad (1)$$

where:

$$E_{int}(v(s)) = \alpha(s)\,|v_s(s)|^2 + \beta(s)\,|v_{ss}(s)|^2 \qquad (2)$$

$$E_{image}(v(s)) = w_{line}E_{line}(v(s)) + w_{edge}E_{edge}(v(s)) + w_{term}E_{term}(v(s)) \qquad (3)$$

$$E_{con}(v(s)) = k\,(x_1 - x_2)^2 \qquad (4)$$

$E_{int}(v(s))$ is the internal energy, expressing the force that drives the curve to be smoother: its first-order term expresses the tension that pulls adjacent points closer together, and its second-order term expresses the rigidity that resists bending; $\alpha(s)$ and $\beta(s)$ are the respective weights. $E_{image}(v(s))$ is the image energy, a weighted sum of three energy terms obtained from the image: the line energy $E_{line} = I(x, y)$, which guides the snake toward low-gray or high-gray positions; the edge energy $E_{edge} = -|\nabla I(v(s))|$; and the energy $E_{term}$ of the influence that line terminations and corners in the image exert on the course of the contour. $w_{line}$, $w_{edge}$ and $w_{term}$ are the weights of the image energy components. $E_{con}(v(s))$ expresses the elastic force that attracts the contour to an image position; $x_1$ and $x_2$ denote specified points on the contour and at the image position, respectively. If the external energy is defined as

$$E_{ext}(v(s)) = E_{image}(v(s)) + E_{con}(v(s)) \qquad (5)$$

then the total energy is

$$E_{total}(v(s)) = \int_s \left(E_{int}(v(s)) + E_{ext}(v(s))\right) ds \qquad (6)$$

2) Minimize the total energy by the variational method, so that the contour satisfies

$$F_v - \frac{\partial}{\partial s}F_{v_s} + \frac{\partial^2}{\partial s^2}F_{v_{ss}} = 0 \qquad (7)$$

3) Determine the position of the cell target in the image as the center of the region enclosed by the curve; this position serves as the position measurement of that cell's image in the current frame.
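A rough greedy discretization can illustrate how the energy terms interact. It keeps only the internal energy and an edge term (a squared gradient magnitude standing in for $-|\nabla I|$), with the weights alpha, beta and w_edge chosen arbitrarily; the patent itself minimizes the energy via the Euler-Lagrange equation (7) rather than greedily.

```python
import numpy as np

def snake_step(pts, img, alpha=0.1, beta=0.1, w_edge=1.0):
    """One greedy pass: move each contour point (row, col) to the 8-neighbor
    (or itself) that minimizes internal energy + edge energy."""
    gy, gx = np.gradient(img.astype(float))
    edge = -(gx**2 + gy**2)              # stand-in for the edge energy term
    n, new = len(pts), pts.copy()
    for i in range(n):
        best, best_e = pts[i], np.inf
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                y, x = int(pts[i][0]) + dy, int(pts[i][1]) + dx
                if not (0 <= y < img.shape[0] and 0 <= x < img.shape[1]):
                    continue
                cand = np.array([y, x], float)
                prev_p, next_p = new[(i - 1) % n], pts[(i + 1) % n]
                e_int = (alpha * np.sum((cand - prev_p) ** 2)                 # tension
                         + beta * np.sum((next_p - 2 * cand + prev_p) ** 2))  # rigidity
                e = e_int + w_edge * edge[y, x]
                if e < best_e:
                    best_e, best = e, cand
        new[i] = best
    return new

def cell_position(pts):
    """Step 3: the cell's position measurement is the center of the contour."""
    return pts.mean(axis=0)
```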

Establishing the cell movement dynamics model in step 3 includes establishing a target motion model and a measurement model. The target motion model is

x(k+1) = F(k)x(k) + G(k)u(k) + v(k)    (8)

The measurement model is

z(k) = H(k)x(k) + w(k)    (9)

Here x(k) denotes the motion state (position and velocity) of the target cell at time k, z(k) denotes the image measurement at time k, F(k), G(k) and H(k) denote the state transition matrix, the control input matrix and the measurement matrix at time k, respectively, and v(k) and w(k) describe the random system noise and the measurement noise, respectively.
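For illustration, model (8)-(9) can be instantiated as a 2-D constant-velocity system; the specific F and H below and the omission of the control input G(k)u(k) are assumptions, since the patent leaves the matrices unspecified.

```python
import numpy as np

def cv_model(dt=1.0):
    """Constant-velocity instance of (8)-(9): state x = [px, py, vx, vy]."""
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], float)   # state transition matrix F(k)
    H = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0]], float)   # measurement matrix H(k): position only
    return F, H

def simulate(x0, n_steps, q_std=0.0, r_std=0.0, dt=1.0, seed=0):
    """Generate states x(k) and position measurements z(k); v(k), w(k) Gaussian."""
    rng = np.random.default_rng(seed)
    F, H = cv_model(dt)
    x, xs, zs = np.asarray(x0, float), [], []
    for _ in range(n_steps):
        x = F @ x + q_std * rng.standard_normal(4)         # x(k+1) = F x(k) + v(k)
        zs.append(H @ x + r_std * rng.standard_normal(2))  # z(k) = H x(k) + w(k)
        xs.append(x.copy())
    return np.array(xs), np.array(zs)
```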

The tracking of moving target cells in step 4 updates each target with a recursive Bayesian filtering method, obtaining each target's current state and estimation accuracy; multi-cell tracking, cell division and cell aggregation are handled through data association.

Both the recursive Bayesian filtering and the data association use mature existing techniques: the recursive Bayesian filtering may adopt α-β filtering, Kalman filtering, particle filtering (PF) or similar methods, and the data association adopts the joint probabilistic data association method (proposed by Bar-Shalom Y).

According to the characteristics of video microscopic images, namely strong noise and disturbance and poor contrast, the present invention processes the original image data frame by frame with a generalized fuzzy enhancement method composed of median filtering, fuzzy enhancement, gray-level transformation and image quality evaluation, which considerably improves the image processing effect. Compared with common image enhancement methods, the proposed generalized fuzzy enhancement algorithm performs better on low-contrast, strongly noisy images.

Starting from the acquired original video microscopic images, the invention uses image processing and analysis methods to improve image quality, extract cell contours and determine cell positions, thereby obtaining time-series observation data. On the basis of an analysis of cell motion characteristics and of the noise and interference in the images, dynamic-system and stochastic modeling methods are used to establish the cell motion system model, and the cell motion trajectories are tracked automatically through recursive Bayesian filtering and data association, with tracking results given for complex cell motion. The invention is systematically complete and highly practical.

Detailed Description

The method for automatic tracking of cells in video microscopic images comprises the following steps: 1. acquire real-time cell motion images through a phase-contrast microscope and enhance the acquired cell motion video microscopic images frame by frame; 2. extract target cells from the enhanced cell motion images; 3. establish a cell movement dynamics model; 4. track the moving target cells.

The enhancement of the video microscopic images in step 1 adopts the generalized fuzzy enhancement method, with the following specific steps:

(1) To weaken the granular noise in the image, neighborhood averaging can generally be used in the spatial domain; in the frequency domain, because the noise spectrum lies mostly in the high-frequency band, various forms of low-pass filtering can be used to reduce the noise. To remove impulse noise and so-called salt-and-pepper noise while avoiding blurring of the image edges as far as possible, a median filter is used for smoothing;

(2) Enhance the image with the following transformation

$$\mu_{ij} = H(f_{ij}) = \left(1 + \frac{f_{ij} - f_{\max}}{f_{\max} - f_{\min}}\right)^{F_c}$$

where $F_c$ is the fuzzy characteristic parameter; $f_{ij}$, $f_{\max}$ and $f_{\min}$ denote the gray value of pixel (i, j) and the maximum and minimum gray values of the image, respectively; and $\mu_{ij}$ is the fuzzy membership degree of pixel (i, j). The fuzzy feature plane $\{\mu_{ij}\}$ of the image is enhanced by iterative operations to obtain a new fuzzy feature plane $\{\mu'_{ij}\}$, on the basis of which the following inverse transformation yields the gray values of the fuzzy-enhanced image:

$$f'_{ij} = H^{-1}(\mu'_{ij}) = f_{\max} - \left(1 - (\mu'_{ij})^{1/F_c}\right)(f_{\max} - f_{\min})$$

(3) Apply the following gray-level transformation to the fuzzy-enhanced image

$$f_{ij}^e = t(f'_{ij}) = \frac{f_{\max}^e - f_{\min}^e}{f'_{\max} - f'_{\min}}\,(f'_{ij} - f'_{\min}) + f_{\min}^e$$

where $f_{ij}^e$ is the image gray value after the gray-level transformation $t(\cdot)$; $f_{\min}^e$ and $f_{\max}^e$ are the set minimum and maximum gray values of the transformed image; $f'_{\min}$ and $f'_{\max}$ are the minimum and maximum gray values of the fuzzy-enhanced image; and $f'_{\min} \ge f_{\min}^e$, $f'_{\max} \le f_{\max}^e$.

(4) Weighting the standard deviation of the gray histogram by the image gray range gives the following image quality index

$$\sigma_w = \Delta f^{-1}\sqrt{\sum_{j=1}^{N}(p_j - \bar{p})^2 / N}$$

where $\sigma_w$ is the weighted standard deviation of the image gray histogram, $p_j$ is the percentage of the pixels at the j-th gray level in the total number N of image pixels, $\bar{p}$ is the mean of the $p_j$, and $\Delta f$ is the gray range of the image.

(5) Repeat steps (2)-(4) until the image quality index $\sigma_w$ no longer decreases.

The method for extracting target cells in step 2 adopts the active contour model (Snake) method, a top-down mechanism for locating image features: an initial contour (the "snake") is first set, the contour is then pushed toward the image features by the constraint forces acting on the "snake points", and the target structure is finally locked on by minimizing an integral measure of the total energy of the dynamic contour. The specific implementation includes the following steps:

(1) Construction of the energy model

Let the curve be v(s) = [x(s) y(s)], s ∈ [0, 1]; the total energy defined on it can be expressed as:

$$E_{total}(v(s)) = \int_s \left(E_{int}(v(s)) + E_{image}(v(s)) + E_{con}(v(s))\right) ds \qquad (1)$$

where:

$$E_{int}(v(s)) = \alpha(s)\,|v_s(s)|^2 + \beta(s)\,|v_{ss}(s)|^2 \qquad (2)$$

$$E_{image}(v(s)) = w_{line}E_{line}(v(s)) + w_{edge}E_{edge}(v(s)) + w_{term}E_{term}(v(s)) \qquad (3)$$

$$E_{con}(v(s)) = k\,(x_1 - x_2)^2 \qquad (4)$$

The internal energy $E_{int}(v(s))$ expresses the force that drives the curve to be smoother: its first-order term expresses the tension that pulls adjacent points closer together, and its second-order term expresses the rigidity that resists bending; $\alpha(s)$ and $\beta(s)$ are the respective weights. The image energy $E_{image}(v(s))$ is a weighted sum of three energy terms obtained from the image: the line energy $E_{line} = I(x, y)$, which guides the snake toward low-gray or high-gray positions; the edge energy $E_{edge} = -|\nabla I(v(s))|$, which attracts the contour to image edge points with a high gradient magnitude; and $E_{term}$, which expresses the influence of line terminations and corners in the image on the course of the contour. $w_{line}$, $w_{edge}$ and $w_{term}$ are the weights of the image energy components. $E_{con}(v(s))$ expresses the elastic force that attracts the contour to a specified position; $x_1$ and $x_2$ denote specified points on the contour and at the image position, respectively. If we write

$$E_{ext}(v(s)) = E_{image}(v(s)) + E_{con}(v(s)) \qquad (5)$$

then

$$E_{total}(v(s)) = \int_s \left(E_{int}(v(s)) + E_{ext}(v(s))\right) ds \qquad (6)$$

(2) Energy minimization based on the variational method

The final position of the contour can be obtained by a variational method. Replacing the integrand in (6) by $F(s, v_s, v_{ss})$, the derived curve equation should satisfy the following Euler-Lagrange equation:

$$F_v - \frac{\partial}{\partial s}F_{v_s} + \frac{\partial^2}{\partial s^2}F_{v_{ss}} = 0 \qquad (7)$$

(3) From the result of the active contour model processing, the position of the cell target in the image can be determined by computing the center of the region enclosed by the curve. This position serves as the position measurement of that cell in the current frame image.

Since cell movement has a certain randomness, the cell movement dynamics model established in step 3 uses a stochastic modeling approach to describe the motion, including a target motion model and a measurement model. The target motion model is

x(k+1) = F(k)x(k) + G(k)u(k) + v(k)    (8)

The measurement model is

z(k) = H(k)x(k) + w(k)    (9)

Here x(k) denotes the motion state (position and velocity) of the target cell at time k, z(k) denotes the image measurement at time k, F(k), G(k) and H(k) denote the state transition matrix, the control input matrix and the measurement matrix at time k, respectively, and v(k) and w(k) describe the random system noise and the measurement noise, respectively.

The tracking of moving target cells in step 4 updates each target with a recursive Bayesian filtering method, obtaining each target's current state and estimation accuracy; multi-cell tracking, cell division and cell aggregation are handled through data association.

There are often multiple cells in the same image frame; their movement is irregular, and biological phenomena such as cell division and merging may also occur. Owing to the performance of the image acquisition equipment and interference from the actual environment, the noise and clutter levels in the images are high. These factors mean that image-based cell tracking must solve a key technical problem: data association, that is, correctly judging the association between multiple measurements and multiple tracked target tracks. Once the uncertainty about the origin of the measurements is eliminated, the multi-target tracking problem can be transformed into multiple single-target tracking problems. Clutter can be handled with tracking-gate techniques and stochastic modeling, and cell division and merging are equivalent to the generation and merging of target tracks.
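The tracking-gate technique mentioned here is commonly realized as an ellipsoidal (Mahalanobis) gate; a sketch follows, where the chi-square threshold 9.21 (roughly the 99% point for a 2-D measurement) is an assumed choice rather than a value from the patent.

```python
import numpy as np

def in_gate(z, z_pred, S, gamma=9.21):
    """Accept measurement z for a track if the normalized innovation squared
    d^2 = nu^T S^{-1} nu falls below the gate threshold gamma."""
    nu = np.asarray(z, float) - np.asarray(z_pred, float)
    d2 = float(nu @ np.linalg.inv(S) @ nu)
    return d2 <= gamma
```

Only measurements passing this test are considered candidates for a track, which is what bounds the size of the clusters formed in the JPDA step below.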

JPDA (joint probabilistic data association) is a suboptimal algorithm for solving the data association problem. When tracking multiple targets that lie close together in the measurement space, a measurement may fall within the tracking gates of several targets simultaneously. Targets whose tracking gates intersect form a cluster; let the number of targets in the cluster be $n_t$ and the number of measurements falling into their tracking gates be $n_m$, and represent the cluster by a binary association logic matrix $\Omega = [\Omega_{ij}]_{n_m \times (n_t + 1)}$. $\Omega_{ij} = 1$ means that the i-th measurement may originate from the j-th target (j = 0 meaning the measurement is clutter); conversely, $\Omega_{ij} = 0$ means that the i-th measurement cannot originate from the j-th target. A possible pairing of measurements and targets satisfying the following three constraints is called a feasible event $\chi$:

Each target produces at most one measurement;

Each measurement originates from at most one target;

A candidate measurement falling within a target's tracking gate originates either from that target, from clutter, or from another target.
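The three constraints can be checked mechanically by enumerating the feasible events of a validation matrix Ω; a small recursive sketch (the list-of-lists representation is an implementation choice, not from the patent):

```python
def feasible_events(omega):
    """Enumerate all feasible events for a validation matrix omega of shape
    n_m x (n_t + 1): assign each measurement one validated origin (column 0 =
    clutter), using every real target at most once."""
    n_m = len(omega)
    events = []

    def extend(i, used, assign):
        if i == n_m:
            events.append(tuple(assign))
            return
        for j, gated in enumerate(omega[i]):
            if not gated:
                continue                  # measurement i is not gated for origin j
            if j != 0 and j in used:
                continue                  # target j already explains a measurement
            extend(i + 1, used | ({j} if j else set()), assign + [j])

    extend(0, set(), [])
    return events
```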

A feasible event is represented by a binary association logic matrix $\Phi$. From the constraints on feasible events, the sum of the elements in each row of $\Phi$ equals 1, and the sum of the elements in each column equals 1 or 0 (except for column 0). Feasible events can be regarded as the combinations, among all mathematical combinations of the measurement set and the candidate target set, that are selected by the three constraints. The posterior probability of each feasible event is computed, and the posterior probabilities of all feasible events with $\Phi_{ij} = 1$ are summed to give the posterior probability that the i-th measurement belongs to the j-th target; this is used as the weight of that measurement when updating that target. Write $\tau_i(\chi) = \sum_{j=1}^{n_t} \Phi_{ij}$ for the indicator function of the i-th measurement in feasible event $\chi$, $\delta_j(\chi) = \sum_{i=1}^{n_m} \Phi_{ij}$ for the indicator function of the j-th target in feasible event $\chi$, and $\varphi(\chi) = \sum_{i=1}^{n_m} \left(1 - \tau_i(\chi)\right)$ for the number of clutter measurements in feasible event $\chi$. Under the premises that measurements originating from targets are normally distributed, clutter is uniformly distributed, and the number of clutter measurements is Poisson distributed, the posterior probability of the occurrence of feasible event $\chi$ is

$$P\{\chi \mid Z^{k+1}\} = \frac{\lambda^{\varphi(\chi)}}{c} \prod_{i:\,\tau_i(\chi)=1} N\!\left[z_i(k+1)\right] \prod_{j:\,\delta_j(\chi)=1} P_D^{\,j} \prod_{j:\,\delta_j(\chi)=0} \left(1 - P_D^{\,j}\right) \qquad (10)$$

where the first product expresses the normal-distribution probability that the measurements belong to actual targets, the second product the probability that targets are detected, the third product the probability that targets are not detected, λ is the clutter density, and c is the normalization factor. The posterior probability that the i-th measurement originates from the j-th target is

$$\beta_{k+1}^{(i)(j)} = \sum_{\chi:\,\Phi_{ij} = 1} P\{\chi \mid Z^{k+1}\} \qquad (11)$$

The posterior probability that the j-th target produced no measurement is

$$\beta_{k+1}^{(0)(j)} = 1 - \sum_{i=1}^{n_m} \beta_{k+1}^{(i)(j)} \qquad (12)$$

Weighting all validated measurements gives the fused measurement:

$$Z_{k+1} = \sum_{j=1}^{N_t} \beta_{k+1}^{(j)}\, z_{k+1}^{(j)} \qquad (13)$$

Each target is then updated according to a recursive Bayesian filtering method (such as α-β filtering, Kalman filtering or particle filtering), yielding each target's current state (the estimate) and estimation accuracy (the target state covariance matrix).
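The recursive Bayesian update referred to above can be sketched, for the linear-Gaussian model (8)-(9), as a standard Kalman predict and update cycle; the noise covariances Q and R and the omitted control input are assumptions.

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict + update cycle: returns the new state estimate (the estimator)
    and covariance (the estimation accuracy) for a single target."""
    x_pred = F @ x                       # state prediction
    P_pred = F @ P @ F.T + Q             # covariance prediction
    S = H @ P_pred @ H.T + R             # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```

In a JPDA setting, z would be the fused measurement of (13) for the target in question.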

Claims (1)

1. A method for automatic tracking of cells in video microscopic images, characterized in that the method comprises the following steps: (1) acquiring real-time cell motion images through a phase-contrast microscope and enhancing the acquired video microscopic images frame by frame; (2) extracting the target cells from the enhanced cell motion images; (3) establishing a cell motion dynamics model; (4) tracking the moving target cells.

The enhancement of the video microscopic images in step (1) adopts a generalized fuzzy enhancement method, which comprises the following steps:

① Initialization: input the image f to be processed and set the iteration counter to r = 1.

② Apply a median filter for smoothing: with each pixel f(i, j) of the image f as the center of the window, the average gray value of the pixels covered by the window is taken as the new gray value f_{ij} of f(i, j).

③ Enhance each new gray value f_{ij} by the transformation

\mu_{ij} = H(f_{ij}) = \left(1 + \frac{f_{ij} - f_{\max}}{f_{\max} - f_{\min}}\right)^{F_c}

where F_c is the fuzzy characteristic parameter; f_{ij}, f_{\max} and f_{\min} denote the gray value of pixel f(i, j) and the maximum and minimum gray values of the image, respectively; \mu_{ij} is the fuzzy membership degree of pixel f(i, j). Performing the r-th iteration over all new gray values f_{ij} determines the fuzzy feature plane \{\mu_{ij}(r)\} of the median-filtered image.

④ Apply the following nonlinear transformation to \mu_{ij}(r), and denote the result \mu'_{ij}(r):

\mu'_{ij}(r) = T(\mu_{ij}(r)) =
\begin{cases}
2\,(\mu_{ij}(r))^2, & 0 \le \mu_{ij}(r) \le 0.5 \\
1 - 2\,(1 - \mu_{ij}(r))^2, & 0.5 < \mu_{ij}(r) \le 1
\end{cases}
\quad (1)

⑤ Apply the inverse transformation to \mu'_{ij}(r) to obtain a new grayscale image \{f'_{ij}\}.

⑥ Apply the following grayscale transformation to the fuzzy-enhanced image:

f^{e}_{ij} = t(f'_{ij}) = \frac{f^{e}_{\max} - f^{e}_{\min}}{f'_{\max} - f'_{\min}}\,(f'_{ij} - f'_{\min}) + f^{e}_{\min} \quad (2)

where f^{e}_{ij} is the gray value after the transformation t(\cdot); f^{e}_{\min} and f^{e}_{\max} are the preset minimum and maximum gray values after the transformation; f'_{\min} and f'_{\max} are the minimum and maximum gray values of the fuzzy-enhanced image, with f'_{\min} \ge f^{e}_{\min} and f'_{\max} \le f^{e}_{\max}.

⑦ Compare the image quality indices of the r-th and the (r−1)-th enhanced images: if \sigma_w(r) < \sigma_w(r-1), take \mu'_{ij}(r) as the new \mu_{ij}, set r + 1 \Rightarrow r, and return to ④ to continue the iteration on \mu_{ij}(r); otherwise output the (r−1)-th enhanced image.
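The generalized fuzzy enhancement loop of steps ①–⑦ can be sketched in Python. This is an illustrative reconstruction, not the patented implementation: following the claim's wording, the window smoothing averages the pixels under the window; the choices F_c = 2, a 3×3 window, an output range of [0, 255], the 256-bin histogram used for the quality index σ_w, and the cap of 10 iterations are all assumptions.

```python
import numpy as np

def mean_filter(f, win=3):
    """Step 2: window smoothing (the claim averages the pixels under the window)."""
    pad = win // 2
    g = np.pad(f, pad, mode="edge")
    out = np.zeros_like(f, dtype=float)
    for dy in range(win):
        for dx in range(win):
            out += g[dy:dy + f.shape[0], dx:dx + f.shape[1]]
    return out / (win * win)

def sigma_w(img):
    """Quality index: histogram standard deviation weighted by the gray range."""
    img8 = np.clip(img, 0, 255).astype(np.uint8)
    p = np.bincount(img8.ravel(), minlength=256) / img8.size   # p_j fractions
    df = float(img8.max() - img8.min()) or 1.0                 # gray range Δf
    return np.sqrt(np.mean((p - p.mean()) ** 2)) / df

def fuzzy_enhance(f, Fc=2.0, win=3, max_iter=10):
    """Steps 1-7 of the generalized fuzzy enhancement (non-constant input assumed)."""
    f = mean_filter(np.asarray(f, dtype=float), win)
    fmin, fmax = f.min(), f.max()
    # membership H(f); note 1 + (f - fmax)/(fmax - fmin) == (f - fmin)/(fmax - fmin)
    mu = ((f - fmin) / (fmax - fmin)) ** Fc
    best, best_q = f, sigma_w(f)
    for _ in range(max_iter):
        mu = np.where(mu <= 0.5, 2 * mu**2, 1 - 2 * (1 - mu)**2)  # eq. (1)
        g = fmin + (fmax - fmin) * mu ** (1.0 / Fc)               # inverse of H
        ge = 255.0 * (g - g.min()) / (g.max() - g.min())          # eq. (2), target [0, 255]
        q = sigma_w(ge)
        if q >= best_q:   # quality no longer improving: output the previous image (step 7)
            break
        best, best_q = ge, q
    return best
```

The stopping rule mirrors step ⑦: iteration continues only while the weighted histogram standard deviation σ_w keeps decreasing, and the previous image is returned once it stops improving.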
The image quality index is obtained by weighting the standard deviation of the image's gray-level histogram with the image gray range:

\sigma_w \triangleq \frac{1}{\Delta f}\sqrt{\frac{1}{N}\sum_{j=1}^{N}\left(p_j - \bar{p}\right)^2}

where \sigma_w is the weighted standard deviation of the image gray-level histogram, p_j is the percentage of the pixels at the j-th gray level among the N total pixels of the image, \bar{p} is the mean of the p_j, and \Delta f is the gray range of the image.

The extraction of target cells in step (2) adopts the active contour (snake) model, implemented through the following steps:

① Construct the energy model. For a curve v(s) = [x(s)\; y(s)], s \in [0, 1], the total energy along the curve is defined as

E_{total}(v(s)) = \int_s \left(E_{int}(v(s)) + E_{image}(v(s)) + E_{con}(v(s))\right) ds \quad (3)

where

E_{int}(v(s)) = \alpha(s)\,|v_s(s)|^2 + \beta(s)\,|v_{ss}(s)|^2 \quad (4)

E_{image}(v(s)) = w_{line} E_{line}(v(s)) + w_{edge} E_{edge}(v(s)) + w_{term} E_{term}(v(s)) \quad (5)

E_{con}(v(s)) = k\,(x_1 - x_2)^2 \quad (6)

E_{int}(v(s)) is the internal energy, the force driving the curve toward smoothness: the first-order term is the tension pulling adjacent points closer together, and the second-order term is the rigidity resisting bending; \alpha(s) and \beta(s) are their respective weights. E_{image}(v(s)) is the image energy, the weighted sum of three terms derived from the image: the line energy E_{line} = I(x, y), which guides the snake toward low- or high-intensity regions; the edge energy E_{edge} = -|\nabla I(v(s))|; and the energy E_{term}, which accounts for the influence of line terminations and corners on the direction of the contour. w_{line}, w_{edge} and w_{term} are the weights of the image-energy components. E_{con}(v(s)) is the elastic force attracting the contour to an image position, with x_1 and x_2 denoting specified points on the contour and in the image, respectively.

② Minimize the total energy by the variational method, so that the contour satisfies

F_v - \frac{\partial}{\partial s} F_{v_s} + \frac{\partial^2}{\partial s^2} F_{v_{ss}} = 0 \quad (7)

③ Determine the position of the cell target in the image from the center of the region enclosed by the curve; this position serves as the position measurement of that cell's image in the current frame.

Establishing the cell motion dynamics model in step (3) comprises building a target motion model and a measurement model, where the target motion model is

x(k+1) = F(k)x(k) + G(k)u(k) + v(k) \quad (8)

and the measurement model is

z(k) = H(k)x(k) + w(k) \quad (9)

Here x(k) denotes the motion state of the target cell at time k and z(k) the image measurement at time k; F(k), G(k) and H(k) are the state transition matrix, the control input matrix and the measurement matrix at time k, respectively; v(k) and w(k) describe the random system noise and the measurement noise.

The tracking of moving target cells in step (4) updates each target with a recursive Bayesian filter, obtaining each target's current state and estimation accuracy; multi-cell tracking, cell division and cell aggregation are handled through data association.
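For the linear models (8)–(9) with Gaussian noise, the recursive Bayesian filter of step (4) reduces to the standard Kalman filter. The sketch below is illustrative only: the constant-velocity state [x, y, vx, vy], the sampling interval dt, and the covariances Q and R are assumed values, and the control input term G(k)u(k) is omitted.

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One recursive Bayesian update: predict with eq. (8), correct with eq. (9)."""
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # correct
    S = H @ P @ H.T + R             # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)  # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# assumed constant-velocity model for a cell centroid, state [x, y, vx, vy]
dt = 1.0
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], float)
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], float)  # only the centroid position is measured
Q = 0.01 * np.eye(4)                 # assumed process-noise covariance
R = 1.0 * np.eye(2)                  # assumed measurement-noise covariance

x = np.array([0.0, 0.0, 1.0, 0.5])   # initial state estimate
P = np.eye(4)
for k in range(1, 6):                # centroid measurements from the contour step
    z = np.array([1.0 * k, 0.5 * k])
    x, P = kalman_step(x, P, z, F, H, Q, R)
```

In the patent's pipeline, z(k) would be the centroid reported by the active contour step, and the data association mentioned in step (4) would decide which measurement feeds which cell's filter when cells divide or aggregate.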
CNB2007100710759A 2007-09-04 2007-09-04 A Method for Automatic Tracking of Cells in Video Microscopic Image Expired - Fee Related CN100545640C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNB2007100710759A CN100545640C (en) 2007-09-04 2007-09-04 A Method for Automatic Tracking of Cells in Video Microscopic Image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CNB2007100710759A CN100545640C (en) 2007-09-04 2007-09-04 A Method for Automatic Tracking of Cells in Video Microscopic Image

Publications (2)

Publication Number Publication Date
CN101144784A CN101144784A (en) 2008-03-19
CN100545640C true CN100545640C (en) 2009-09-30

Family

ID=39207418

Family Applications (1)

Application Number Title Priority Date Filing Date
CNB2007100710759A Expired - Fee Related CN100545640C (en) 2007-09-04 2007-09-04 A Method for Automatic Tracking of Cells in Video Microscopic Image

Country Status (1)

Country Link
CN (1) CN100545640C (en)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101719278B (en) * 2009-12-21 2012-01-04 西安电子科技大学 Automatic tracking method for video microimage cells based on KHM algorithm
CN102156988B (en) * 2011-05-27 2012-10-10 天津大学 Cell division sequence detection method
CN102999922B (en) * 2012-11-19 2015-04-15 常熟理工学院 Multi-cell automatic tracking method and system based on plurality of task ant systems
CN103268617B (en) * 2013-05-22 2016-02-17 常熟理工学院 A kind of Combined estimator of the many cells multiparameter based on Ant ColonySystem and accurate tracking system
CN104122204B (en) * 2014-07-25 2017-03-08 华中科技大学 A kind of Piezoelectric Driving measurement tracking, system and using method
CN105403989B (en) * 2015-10-28 2018-03-27 清华大学 Nematode identifying system and nematode recognition methods
CN106407887B (en) * 2016-08-24 2020-07-31 重庆大学 Method and device for obtaining step size of candidate frame search
CN106846296A (en) * 2016-12-19 2017-06-13 深圳大学 A kind of cell image tracks intelligent algorithm
CN109035269B (en) * 2018-07-03 2021-05-11 怀光智能科技(武汉)有限公司 Cervical cell pathological section pathological cell segmentation method and system
CN109142356A (en) * 2018-08-06 2019-01-04 王鲁生 A kind of leukorrhea micro-image mycelia automatic identification equipment and method
CN109523577B (en) * 2018-10-29 2020-09-01 浙江大学 A method for determining the motion trajectory of subcellular structures based on microscopic images
CN109932290B (en) * 2019-01-16 2020-10-20 中国科学院水生生物研究所 Particle counting method based on stream image moving target tracking
CN111830278B (en) * 2020-07-29 2021-09-14 南开大学 Growth domain-based method for detecting velocity field of increment type cytoplasm in microtubule

Non-Patent Citations (7)

* Cited by examiner, † Cited by third party
Title
Degraded image enhancement with applications in robot vision. Peng Dong-liang, Xue An-ke. The International Conference on Systems, Man and Cybernetics, Vol.2. 2005 *
An active contour method for automatic target extraction. Li Xiying et al. Acta Photonica Sinica, Vol.31, No.5. 2002 *
The influence of salt-and-pepper noise on the snake in active contour models. Yuan Weiqi et al. Computer Engineering, Vol.29, No.21. 2003 *
An image-based center localization method. Yao Zhiwen et al. Computer Measurement & Control, Vol.12, No.1. 2004 *
Research on passive tracking and data association algorithms for multiple moving targets. Lin Yuesong. China Doctoral Dissertations Full-text Database, Information Science and Technology, No.03. 2004 *
Research on degraded-image processing methods and their application in robot vision systems. Peng Dongliang. China Doctoral Dissertations Full-text Database, Information Science and Technology, No.03. 2003 *

Also Published As

Publication number Publication date
CN101144784A (en) 2008-03-19

Similar Documents

Publication Publication Date Title
CN100545640C (en) A Method for Automatic Tracking of Cells in Video Microscopic Image
Gill et al. Fruit Image Classification Using Deep Learning.
CN101800890B (en) Multiple vehicle video tracking method in expressway monitoring scene
CN103632382B (en) A kind of real-time multiscale target tracking based on compressed sensing
CN101975575B (en) Passive Sensor Multi-Target Tracking Method Based on Particle Filter
CN101719278B (en) Automatic tracking method for video microimage cells based on KHM algorithm
Wang et al. Knowledge transfer for structural damage detection through re-weighted adversarial domain adaptation
CN101968886A (en) Centroid tracking framework based particle filter and mean shift cell tracking method
CN107705321A (en) Moving object detection and tracking method based on embedded system
Ravanfar et al. Low contrast sperm detection and tracking by watershed algorithm and particle filter
CN103761726B (en) Block adaptive image partition method based on FCM
CN111505705B (en) Microseism P wave first arrival pickup method and system based on capsule neural network
CN106780552A (en) Anti-shelter target tracking based on regional area joint tracing detection study
CN104751185A (en) SAR image change detection method based on mean shift genetic clustering
Altabey et al. Research in image processing for pipeline crack detection applications
CN105354860A (en) Box particle filtering based extension target CBMeMBer tracking method
CN106023254A (en) Multi-target video tracking method based on box particle PHD (Probability Hypothesis Density) filtering
Vaghefi et al. A comparison among data mining algorithms for outlier detection using flow pattern experiments
CN116303786B (en) Block chain financial big data management system based on multidimensional data fusion algorithm
Li et al. Gadet: A geometry-aware x-ray prohibited items detector
Tanskanen et al. Non-linearities in Gaussian processes with integral observations
CN104091352A (en) Visual tracking method based on structural similarity
CN102509289A (en) Characteristic matching cell division method based on Kalman frame
CN112991394A (en) KCF target tracking method based on cubic spline interpolation and Markov chain
Liu et al. An end-to-end steel strip surface defects detection framework: Considering complex background interference

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
C17 Cessation of patent right
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20090930

Termination date: 20120904