CN111583292B - Self-adaptive image segmentation method for two-photon calcium imaging video data - Google Patents


Info

Publication number
CN111583292B
CN111583292B (application CN202010393173.XA)
Authority
CN
China
Prior art keywords
pixel
background model
value
image
mode background
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010393173.XA
Other languages
Chinese (zh)
Other versions
CN111583292A (en)
Inventor
Gong Wei (龚薇)
Si Ke (斯科)
Zhang Rui (张睿)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN202010393173.XA
Publication of CN111583292A
Application granted
Publication of CN111583292B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/194 Segmentation; Edge detection involving foreground-background segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30016 Brain

Abstract

The invention discloses a self-adaptive image segmentation method for two-photon calcium imaging video data. The k-th through n-th frames are selected from the two-photon calcium imaging video to construct a training sample; an initialized single-mode background model is constructed from the training sample; the single-mode background model is then continuously updated in real time; and the images input in real time are segmented and detected using the continuously updated single-mode background model. The method addresses the lack of background modeling and image segmentation methods designed specifically for the data characteristics of two-photon calcium imaging video, overcomes the inability of some existing methods to adapt to and exploit those characteristics, and effectively ensures both processing accuracy and computational efficiency.

Description

Self-adaptive image segmentation method for two-photon calcium imaging video data
Technical Field
The invention relates to an image processing method in the technical field of video data mining, and in particular to a self-adaptive image segmentation method for two-photon calcium imaging video data.
Background
Two-photon calcium imaging video typically features high resolution, high frame count, and high bit depth. Taking an open-source mouse brain two-photon calcium imaging video provided by the Allen Institute for Brain Science in the United States as an example, a single video contains more than 100,000 frames, each frame has a resolution of 512 × 512 pixels, and the video exceeds 56 GB of data. Given the enormous data volume, the large number of brain neurons recorded in the video, and their complex activity patterns, manual data mining is entirely infeasible, so an efficient automated data-mining scheme must be researched and designed.
Self-adaptive image segmentation is a technique that can be used for automatic data mining of two-photon calcium imaging video. By learning from two-photon calcium imaging video data samples, a background model of the video is built; comparing a given image frame against the background model then allows all active neurons in that frame to be segmented quickly and accurately, enabling fully automatic real-time monitoring of neuronal activity and, in particular, detection of abnormal neuronal states.
However, self-adaptive image segmentation methods designed specifically for the data characteristics of two-photon calcium imaging video are currently scarce. Existing methods fall mainly into two categories.
The first category comprises traditional segmentation methods for single static images. Their problem is that a temporally continuous video image sequence is treated as a set of isolated, unrelated single images: only the spatial information within each image is used, and the time-dimension information of the sequence is lost entirely, so the implicit dynamic information of brain neurons in the two-photon calcium imaging video cannot be fully mined and exploited.
The second category comprises segmentation methods borrowed from the traditional intelligent video surveillance field. Their problem is a lack of adaptation to, and exploitation of, the characteristics of two-photon calcium imaging video data. Taking the Allen Institute mouse brain two-photon calcium imaging video as an example, its data have 16-bit depth (pixel values range from 0 to 65535), whereas video data in the intelligent video surveillance field have only 8-bit depth (pixel values range from 0 to 255). Most segmentation methods built for 8-bit data either cannot process 16-bit data at all or suffer a collapse in computational performance. Moreover, intelligent-video segmentation methods generally use a multimodal background model framework, while the background of two-photon calcium imaging video is unimodal; forcing a multimodal framework onto such data not only sacrifices significant computational efficiency but also reduces detection sensitivity to active neurons.
In summary, blindly transplanting a mismatched self-adaptive image segmentation method cannot truly and effectively achieve data mining of two-photon calcium imaging video, and in serious cases leads to misjudgment of experimental results.
Therefore, in the field of automatic data mining for two-photon calcium imaging video, an effective and efficient self-adaptive image segmentation method needs to be designed specifically for the characteristics of such data.
Disclosure of Invention
To address the problems of the prior art, the invention provides a self-adaptive image segmentation method for two-photon calcium imaging video data. The method is designed specifically for the characteristics of such data: the background model framework is optimized for the background characteristics of the video, and the designed online update scheme fully meets the accuracy, real-time, and precision requirements of processing 16-bit-depth video data.
The method of the invention comprises the following steps:
S1: selecting the k-th through n-th frames from the two-photon calcium imaging video to construct a training sample;
S2: constructing an initialized single-mode background model from the training sample;
S3: continuously updating the single-mode background model in real time;
S4: performing segmentation detection on images input in real time using the continuously updated single-mode background model.
The two-photon calcium imaging video can be acquired from brain neurons by two-photon fluorescence calcium imaging.
The step S1 includes the steps of:
s11: selecting the kth frame to the nth frame in the two-photon calcium imaging video as a training sample;
s12: carrying out image enhancement processing on the training sample, and specifically carrying out image enhancement transformation of the following formula on the values of all pixel points in the training sample:
J(x,y) = V × (I(x,y) − MIN) / (MAX − MIN)
where V is the upper limit of the pixel-value range in the two-photon calcium imaging video, I(x,y) is the value of pixel (x, y) in the original image, and J(x,y) is the value of pixel (x, y) in the enhanced image; MAX is the global maximum pixel value in the training sample and MIN is the global minimum pixel value in the training sample.
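For illustration, a minimal Python/NumPy sketch of this step (not part of the patent text; the function name and the (frames, height, width) array layout are assumptions):

```python
import numpy as np

def enhance(sample: np.ndarray, V: int = 65535) -> np.ndarray:
    """Linear min-max stretch of a training sample (frames, H, W) onto [0, V],
    using the global extrema over all frames, as described in step S12."""
    MIN, MAX = sample.min(), sample.max()
    return (sample.astype(np.float64) - MIN) / (MAX - MIN) * V
```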
The step S2 includes the steps of:
s21: for each pixel point (x, y) in the video, calculating and generating a central value of an initialized single-mode background model at the position of the pixel point (x, y), wherein the method comprises the following steps:
(1) For each pixel point (x, y) on the peripheral edge of the video, calculating all pixel values J (x, y) of the pixel points in all frame images of the training sample k ,J(x,y) k+1 ,...,J(x,y) n Is J (x, y) k Pixel value representing pixel point (x, y) of kth image, and the median is used as initialized single-mode background model central value C (x, y) at pixel point (x, y) position n
(2) For each pixel (x, y) at a non-peripheral edge position in the video, computing a median and a mode of all pixel values in a 3×3 neighborhood of all images in the training sample centered on the pixel, nine pixels in total in the 3×3 neighborhood of each image, n-k+1 images in total in the training sample, 9× (n-k+1) pixel values, and using the median as an initialized unimodal background model value C (x, y) at the pixel (x, y) position n
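A sketch of this initialization under the same assumptions (Python/NumPy, illustrative names; the plain loops favor clarity over speed):

```python
import numpy as np

def init_center(J: np.ndarray) -> np.ndarray:
    """Initialize the single-mode background-model center C(x, y)_n (step S21).
    J holds the enhanced training sample with shape (T, M, N), T = n - k + 1."""
    T, M, N = J.shape
    # Border pixels: median of the pixel's own values over all T frames.
    C = np.median(J, axis=0)
    # Interior pixels: median over the 3x3 neighborhood in every frame,
    # i.e. 9 * T values per pixel, overwriting the temporal median above.
    for x in range(1, M - 1):
        for y in range(1, N - 1):
            C[x, y] = np.median(J[:, x - 1:x + 2, y - 1:y + 2])
    return C
```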
S22: for each pixel point (x, y) in the video, calculating and generating an initialized radius value of the single-mode background model at the position of the pixel point (x, y), wherein the calculating method comprises the following steps:
R_n = (1 / ((n − k) × M × N)) × Σ_{z=k}^{n−1} Σ_{x=1}^{M} Σ_{y=1}^{N} |J(x,y)_{z+1} − J(x,y)_z|
where M and N are the height and width of a video frame, and R_n is the radius value of the initialized single-mode background model at each pixel position; R_n is independent of pixel position, i.e., all pixels share the same R_n; z is the ordinal index of an image, z = k, ..., n − 1;
In specific embodiments, the subscript n of R_n indicates that the radius corresponds to the n-th image, accumulated from the data of images k through n. In other words, the initialization radius seen (accumulated) at the n-th image is R_n; once R_n is obtained, it is used at frame n + 1 to iteratively update the background model radius to R_{n+1}. That is, within frames k through n the background model radius is not computed iteratively; from frame n + 1 onward, it is computed by the iterative method.
S23: the initialized unimodal background model structure at each pixel point (x, y) location in the video is as follows: the initialized single-mode background model is a model with C (x, y) n Is the central value, R n The range of values for the radius is denoted as [ C (x, y) n -R n ,C(x,y) n +R n ];
S24: the learning rate of the single-mode background model is calculated and generated, and the method is as follows:
within the training sample range, the pixel values of all pixel points in the video are calculated from theta 1 The gray level transition is theta 2 The probability of gray scale is calculated, and a single-mode background model learning rate F (theta) is generated when the nth frame shared by all pixel points in the video is generated 12 ) n Sharing means that the learning rate of a single-mode background model of all pixel points in the same video frame is the same, wherein theta 12 ∈[0,V]Wherein θ is 1 Representing gray before a pixel value transitionOrder of level, theta 2 And the gray level after the transition of the pixel value is represented, and V represents the upper limit value of the value range of the pixel value in the two-photon calcium imaging video and is as shown in the formula 1.
The step S3 includes the steps of:
s31: continuously updating the central value of the single-mode background model, wherein the method comprises the following steps of:
when an n+1 frame image is read in newly, updating a single-mode background model central value at the position of each pixel point (x, y) in the image:
C(x,y) n+1 =C(x,y) n +[J(x,y) n+1 -C(x,y) n ]×F(θ 12 ) n
wherein C (x, y) n+1 Is the center value of a single-mode background model when the pixel point (x, y) is in an n+1 frame image, and C (x, y) n And F (theta) 12 ) n The center value of a single-mode background model and the learning rate of the background model when pixel points (x, y) are in n frames of images are respectively J (x, y) n+1 Then it is the pixel value of (x, y) at the time of n+1 frame image, θ 1 Is C (x, y) n ,θ 2 Is of value I (x, y) n+1
S32: the radius value of the single-mode background model is continuously updated, and the method comprises the following steps:
when n+1 frames of images are read in newly, for each pixel point (x, y) in the images, the radius value of the single-mode background model at the position is updated
Figure BDA0002486381830000041
Wherein R is n+1 Is a single-mode background model radius value at the time of n+1 frames on any pixel point;
s33: when an n+1 frame image is newly read in, the single-mode background model at the (x, y) position of each pixel point in the image is updated as follows: the background model is a model with C (x, y) n+1 Is the central value, R n+1 Is the range of values for the radius, i.e. [ C (x, y) ] n+1 -R n+1 ,C(x,y) n+1 +R n+1 ];
S34: the learning rate of the single-mode background model is continuously updated, and the method comprises the following steps:
when the n+1 frame image is read in newly, calculating the pixel values of all pixel points positioned in even lines and even columns in the image from theta in the k+1 to n+1 frame images 1 The gray level transition is theta 2 Probability of gray scale, learning rate F (theta) of single-mode background model when n+1st frame image shared by all pixel points in video is generated 12 ) n+1
By analogy, when newly reading in an n+i frame, continuously updating a single-mode background model at the moment of the n+i frame by adopting the same method as that in the steps S31-S34, wherein the background model is a C (x, y) n+i Is the central value, R n+i Is the range of values for the radius, i.e. [ C (x, y) ] n+i -R n+i ,C(x,y) n+i +R n+i ]Simultaneously updating the learning rate F (theta) of the single-mode background model 12 ) n+i
Step S4 specifically comprises processing and judging each pixel of the image using the value range of the single-mode background model: if the pixel value lies within the value range of the single-mode background model, the pixel is treated as background; if the pixel value lies outside the value range, the pixel is treated as foreground.
In implementation, real-time detection on the two-photon images of brain neurons distinguishes active from inactive neurons: if a pixel's value lies within the value range of the single-mode background model, the pixel is treated as part of an inactive neuron; if its value lies outside the value range, the pixel is treated as part of an active neuron.
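The decision rule of step S4 in the same sketch form (foreground corresponds to an active neuron, background to an inactive one):

```python
import numpy as np

def segment(frame, C, R):
    """Step S4: a pixel is foreground (active neuron) iff its value lies
    outside the model's value range [C - R, C + R]."""
    return np.abs(frame - C) > R   # True = active neuron, False = background
```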
The substantial beneficial effects of the invention are as follows:
the method provided by the invention alleviates the problem that the field lacks of specially designing the self-adaptive image segmentation aiming at the two-photon calcium imaging video data characteristics. Meanwhile, the method provided by the invention solves the problem that some existing methods can not adapt to and utilize the characteristics of two-photon calcium imaging video data:
(1) The method is specially used for mining two-photon calcium imaging video data, and can fully utilize time dimension information of a video image sequence, so that implicit dynamic information of brain neurons in the two-photon calcium imaging video can be effectively mined;
(2) The method is specially used for mining the two-photon calcium imaging video data, is truly suitable for the two-photon calcium imaging video data with 16-bit depth, and can not cause the problem of collapse of the operation performance;
(2) The method designs a single-mode background model frame and an online updating mechanism aiming at the inherent characteristics of the background in the two-photon calcium imaging video data, and effectively ensures the accuracy and the operation efficiency of background model calculation, thereby improving the accuracy of image segmentation.
Drawings
FIG. 1 is a schematic flow chart of the method of the present invention.
Fig. 2 is an example of a training sample used in the method of the present invention.
Fig. 3 is an exemplary graph of the results of image enhancement processing of a training sample in the method of the present invention.
Fig. 4 is an example of the results obtained by the method of the present invention in the embodiment.
Fig. 5 is an example of the results obtained by a generic image segmentation method from the intelligent video surveillance field in the embodiment.
Fig. 6 is an example of the results obtained by a generic still-image segmentation method for single images in the embodiment.
Fig. 7 is a schematic diagram of a method for obtaining a learning rate of a background model in the method of the present invention.
Table 1 is a qualitative comparison of the image segmentation results of the method of the present invention with other general methods.
Detailed Description
The technical scheme of the invention is further described below through an embodiment, with reference to the accompanying drawings.
As shown in fig. 1, an embodiment of the present invention is as follows:
Taking a mouse brain two-photon calcium imaging video provided by the Allen Institute for Brain Science in the United States as an example: the video has a single channel (pixel values carry only gray-scale information) and contains 115,461 images; each frame has a resolution of 512 × 512, a data depth of 16 bits, and a pixel-value range of 0–65535. Fig. 2 shows an example training-sample image.
The specific process of this embodiment is shown in fig. 1, and includes the following steps:
s1: selecting the kth=1 frame to the nth=100 frame from the two-photon calcium imaging video to construct a training sample:
s11: selecting the 1 st frame to the 100 th frame in the two-photon calcium imaging video as a training sample;
s12: carrying out image enhancement processing on the training sample, and specifically carrying out image enhancement transformation of the following formula on the values of all pixel points in the training sample:
Figure BDA0002486381830000061
the result of the image enhancement processing of the training sample image shown in fig. 2 according to the method of the present invention is shown in fig. 3.
S2: constructing an initialized single-mode background model according to the training sample:
s21: for each pixel point (x, y) in each frame of image of the training sample, calculating and generating a central value of an initialized single-mode background model at the position of the pixel point (x, y), wherein the method comprises the following steps:
(1) For each pixel point (x, y) on the peripheral edge of the image, calculating all pixel values J (x, y) of the pixel points in all frame images of the training sample 1 ,J(x,y) 2 ,...,J(x,y) 100 The median is taken as the initialized monomodal background model central value C (x, y) at the pixel point (x, y) position 100
(2) For each pixel point (x, y) on the non-peripheral edge position of the video frame image, calculating the median and mode of all 100 pixel values in the 3 x 3 neighborhood of all 100 images of the training sample centered on the pixel point, 3 of each imageX 3 total nine pixels in the neighborhood, 100 images in total in the training sample, 9 x 100 pixels in total, and the median as initialized unimodal background model frontal center value C (x, y) at the pixel (x, y) position 100
S22: for each pixel point (x, y) in the video image, calculating and generating the radius value of the initialized single-mode background model at the position of the pixel point (x, y) in the frame image of the training sample, wherein the sharing means that the radius values of the single-mode background models of all the pixel points in the same video frame are the same, and the calculation method comprises the following formula:
Figure BDA0002486381830000062
s23: the initialized unimodal background model structure at each pixel point (x, y) location within the image is as follows: the initialized single-mode background model is a model with C (x, y) 100 Is the central value, R 100 The range of values for the radius is denoted as [ C (x, y) 100 -R 100 ,C(x,y) 100 +R 100 ];
S24: the learning rate of the single-mode background model is calculated and generated, and the method is as follows:
in all frame images of training samples, the pixel values of all pixel points in the images are calculated from theta 1 The gray level transition is theta 2 The probability of gray scale is calculated, and a single-mode background model learning rate F (theta) is generated when the nth frame shared by all pixel points in the video is generated 12 ) n Sharing means that the learning rate of a pixel point at a fixed position in a single-mode background model of each image is the same, wherein theta 12 ∈[0,V],θ 12 ∈[0,65535]. Background model learning rate F (theta) in the method of the invention 12 ) n A schematic diagram of (2) is shown in figure 7.
Preferably, the single-mode background model learning rate F(θ_1, θ_2)_100 can be computed by the following iterative algorithm:
θ_1 = I(x,y)_k, θ_2 = I(x,y)_{k+1};
E(θ_1 → θ_2) = 1;
H(θ_1, θ_2)_{k+1} = Σ E(θ_1 → θ_2);
Z(θ_1, θ_2)_100 = Σ_{k=1}^{99} H(θ_1, θ_2)_{k+1};
F(θ_1, θ_2)_100 = Z(θ_1, θ_2)_100 / Σ_{θ'} Z(θ_1, θ')_100;
where I(x,y)_k and I(x,y)_{k+1} are the pixel values of any pixel (x, y) in frames k and k + 1, abbreviated θ_1 and θ_2; since pixel values in the video lie in [0, 65535], θ_1 ∈ [0, 65535] and θ_2 ∈ [0, 65535]. E(θ_1 → θ_2) = 1 records one detection of the event that the pixel value of (x, y) jumps from gray level θ_1 in frame k to gray level θ_2 in frame k + 1. Σ E(θ_1 → θ_2) counts, over all pixels in the video, the number of such jumps, and the count is recorded in the corresponding cell H(θ_1, θ_2)_{k+1}. The square matrix Z(θ_1, θ_2)_100 accumulates the H(θ_1, θ_2)_{k+1} values over frames 1–100 of the video training sample, i.e., it records the total number of detected jumps from gray level θ_1 to gray level θ_2 in the training sample. Normalizing Z(θ_1, θ_2)_100 to probability values in [0, 1] yields the learning rate F(θ_1, θ_2)_100, a square matrix of size 65536 × 65536 (one row and one column per gray level in [0, 65535]).
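A sketch of this counting scheme (illustrative, not the patent's implementation). A dense 65536 × 65536 float matrix would occupy roughly 16 GB, so the counts are kept sparse here; normalizing per starting gray level θ_1, so that each row sums to 1, is an assumed reading of "normalized to [0, 1]":

```python
from collections import defaultdict
import numpy as np

def learning_rate_table(frames):
    """Accumulate gray-level transition counts Z(θ1, θ2) over consecutive
    frames of the training sample and normalize them into F(θ1, θ2).
    frames: sequence of (M, N) integer arrays (frames 1..n)."""
    Z = defaultdict(float)        # sparse Z[(θ1, θ2)]: total jump counts
    row = defaultdict(float)      # Σ_θ2 Z(θ1, θ2), per starting level θ1
    for prev, cur in zip(frames[:-1], frames[1:]):
        pairs = np.stack([prev.ravel(), cur.ravel()], axis=1)
        vals, counts = np.unique(pairs, axis=0, return_counts=True)
        for (a, b), c in zip(vals, counts):   # c jumps from level a to level b
            Z[(int(a), int(b))] += float(c)
            row[int(a)] += float(c)
    return {key: v / row[key[0]] for key, v in Z.items()}
```

With such a table, a lookup like F.get((3000, 2000), 0.0) plays the role of reading F(3000, 2000)_100 from the square matrix in the example below.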
S3: continuous real-time updating of the single-mode background model:
s31: continuously updating the central value of the single-mode background model, wherein the method comprises the following steps of:
when a 101 frame is read in newly, updating a single-mode background model central value at the position of each pixel point (x, y) in the video image:
C(x,y) 101 =C(x,y) 100 +[I(x,y) 101 -C(x,y) 100 ]×F(θ 12 ) 100
wherein C (x, y) 101 Is the center value of a single-mode background model when the pixel point (x, y) is 101 frames, C (x, y) 100 And F (theta) 12 ) 100 The center value of a single-mode background model and the learning rate of the background model when the pixel point (x, y) is 100 frames are respectively, and I (x, y) 101 Then it is the pixel value of (x, y) at 101 frames, θ 1 Is C (x, y) 100 ,θ 2 Is of value I (x, y) 101
S32: the radius value of the single-mode background model is continuously updated, and the method comprises the following steps:
when a 101 frame is read in newly, updating a single-mode background model radius value at the position of each pixel point (x, y) in the video image:
Figure BDA0002486381830000081
wherein R is 101 Is a single-mode background model radius value at 101 frames on any pixel point;
s33: when a 101 frame is newly read in, the single mode background model at each pixel point (x, y) position in the video image is updated as follows: the background model is a model with C (x, y) 101 Is the central value, R 101 Is the range of values for the radius, i.e. [ C (x, y) ] 101 -R 101 ,C(x,y) 101 +R 101 ];
S34: the learning rate of the single-mode background model is continuously updated, and the method comprises the following steps:
in newWhen 101 frames are read in, calculating the pixel values of all pixel points positioned in even lines and even columns in the video from theta in 2 to 101 frames 1 The gray level transition is theta 2 Probability of gray scale, and single mode background model learning rate F (theta) at 101 st frame shared by all pixel points in video 12 ) 101
By analogy, when a 100+i frame is newly read in, the single-mode background model at the moment of 100+i frame is continuously updated by adopting the same method as that in the steps S31-S34, and the background model is a model with C (x, y) 100+i Is the central value, R 100+i Is the range of values for the radius, i.e. [ C (x, y) ] 100+i -R 100+i ,C(x,y) 100+i +R 100+i ]Simultaneously updating the learning rate F (theta) of the single-mode background model 12 ) n+i
As described above, F(θ_1, θ_2)_100 is a square matrix of size 65536 × 65536. Since θ_1 and θ_2 are the row and column coordinates of the matrix, substituting specific values of θ_1 and θ_2 into F(θ_1, θ_2)_100 yields the background-model learning rate stored at row θ_1, column θ_2. In the example of fig. 7, the value of F(3000, 2000)_100 is the learning rate at row 3000, column 2000 of the matrix, namely 0.5.
S4: and carrying out segmentation detection on the image input in real time by utilizing the single-mode background model updated in real time.
In implementation, real-time detection on the two-photon images of brain neurons distinguishes active from inactive neurons: if a pixel's value lies within the value range of the single-mode background model, the pixel is treated as part of an inactive neuron; if its value lies outside the value range, the pixel is treated as part of an active neuron.
The results obtained by the method of the invention in this embodiment are shown in fig. 4. Because the method is designed and specially optimized for the characteristics of two-photon calcium imaging video data, the segmented foreground (the white pixel regions) is overall consistent with the targets to be detected (the active neurons), with few missed detections (foreground pixels that should be marked white marked black as background) and few false detections (background pixels that should be marked black marked white as foreground).
For comparison, a generic image segmentation method from the intelligent video surveillance field was also applied; its result for this embodiment is shown in fig. 5. Because that method is not designed for the characteristics of two-photon calcium imaging video data, the segmented foreground agrees poorly with the targets to be detected, with many false detections and some missed detections.
In addition, a generic still-image segmentation method for single images was selected for comparison; its result for this embodiment is shown in fig. 6. The foreground it segments also agrees poorly with the targets to be detected, again with many false detections and some missed detections.
In summary, the qualitative comparison results of the method of the present invention and the two general image segmentation methods are shown in table 1.
TABLE 1
Image segmentation method: Qualitative comparison of segmentation results
Method of the invention: Very good
A generic segmentation method from the intelligent video surveillance field: Poor
A generic still-image segmentation method for single images: Poor
The results show that the invention addresses the lack of self-adaptive image segmentation designed specifically for the data characteristics of two-photon calcium imaging video, overcomes the inability of existing methods to adapt to and exploit those characteristics, improves the accuracy of image segmentation, and achieves a notable technical effect.

Claims (4)

1. A self-adaptive image segmentation method for two-photon calcium imaging video data, characterized by comprising the following steps:
S1: selecting the k-th through n-th frames from the two-photon calcium imaging video to construct a training sample;
S2: constructing an initialized single-mode background model from the training sample;
S3: continuously updating the single-mode background model in real time;
S4: performing segmentation detection on images input in real time using the continuously updated single-mode background model;
the step S1 includes the steps of:
s11: selecting the kth frame to the nth frame in the two-photon calcium imaging video as a training sample;
s12: carrying out image enhancement processing on the training sample, and specifically carrying out image enhancement transformation of the following formula on the values of all pixel points in the training sample:
Figure FDA0004212590190000011
wherein V represents the upper limit value of the value range of the pixel value in the two-photon calcium imaging video, I (x, y) is the pixel value of the pixel point (x, y) in the original image, J (x, y) is the pixel value of the pixel point (x, y) in the image after image enhancement; MAX represents the global maximum image pixel value in the training sample and MIN represents the global minimum image pixel value in the training sample.
2. The self-adaptive image segmentation method for two-photon calcium imaging video data according to claim 1, characterized in that step S2 comprises the steps of:
s21: for each pixel point (x, y) in the video, calculating and generating a central value of an initialized single-mode background model at the position of the pixel point (x, y), wherein the method comprises the following steps:
(1) For each pixel point (x, y) on the peripheral edge of the video, calculating all pixel values J (x, y) of the pixel points in all frame images of the training sample k ,J(x,y) k+1 ,...,J(x,y) n Is J (x, y) k Pixel value representing pixel point (x, y) of kth image, and the median is used as initialized single-mode background model central value C (x, y) at pixel point (x, y) position n
(2) For each pixel (x, y) at a non-peripheral edge position in the video, computing the median and mode of all pixel values in a 3 x 3 neighborhood of all images in the training sample centered on the pixel, and using the median as an initialized unimodal background model frontal center value C (x, y) at the pixel (x, y) position n
S22: for each pixel point (x, y) in the video, calculating and generating an initialized radius value of the single-mode background model at the position of the pixel point (x, y), wherein the calculating method comprises the following steps:
Figure FDA0004212590190000021
wherein M and N are respectively the height and width of a video frame image, R n The radius value of the initialized single-mode background model at the pixel point position is z, and the z represents the ordinal number of the image;
s23: each image in the videoThe initialized unimodal background model structure at the pixel (x, y) position is as follows: the initialized single-mode background model is a model with C (x, y) n Is the central value, R n The range of values for the radius is denoted as [ C (x, y) n -R n ,C(x,y) n +R n ];
S24: the learning rate of the single-mode background model is calculated and generated, and the method is as follows:
within the training sample range, the pixel values of all pixel points in the video are calculated from theta 1 The gray level transition is theta 2 The probability of gray scale is calculated, and a single-mode background model learning rate F (theta) is generated when the nth frame shared by all pixel points in the video is generated 12 ) n Wherein θ is 12 ∈[0,V]Wherein θ is 1 Represents the gray scale level, θ, before the pixel value transitions 2 And (3) representing the gray level after the transition of the pixel value, wherein V represents the upper limit value of the value range of the pixel value in the two-photon calcium imaging video.
3. The self-adaptive image segmentation method for two-photon calcium imaging video data according to claim 1, characterized in that step S3 comprises the steps of:
s31: continuously updating the central value of the single-mode background model, wherein the method comprises the following steps of:
when an n+1 frame image is read in newly, updating a single-mode background model central value at the position of each pixel point (x, y) in the image:
C(x,y) n+1 =C(x,y) n +[J(x,y) n+1 -C(x,y) n ]×F(θ 12 ) n
wherein C (x, y) n+1 Is the center value of a single-mode background model when the pixel point (x, y) is in an n+1 frame image, and C (x, y) n And F (theta) 12 ) n The center value of a single-mode background model and the learning rate of the background model when pixel points (x, y) are in n frames of images are respectively J (x, y) n+1 Then it is the pixel value of (x, y) at the time of n+1 frame image, θ 1 Is C (x, y) n ,θ 2 Is of value I (x, y) n+1
S32: the radius value of the single-mode background model is continuously updated, and the method comprises the following steps:
when n+1 frames of images are read in newly, for each pixel point (x, y) in the images, the radius value of the single-mode background model at the position is updated
Figure FDA0004212590190000022
Wherein R is n+1 Is a single-mode background model radius value at the time of n+1 frames on any pixel point;
s33: when an n+1 frame image is newly read in, the single-mode background model at the (x, y) position of each pixel point in the image is updated as follows: the background model is a model with C (x, y) n+1 Is the central value, R n+1 Is the range of values for the radius, i.e. [ C (x, y) ] n+1 -R n+1 ,C(x,y) n+1 +R n+1 ];
S34: the learning rate of the single-mode background model is continuously updated, and the method comprises the following steps:
when the n+1 frame image is read in newly, calculating the pixel values of all pixel points positioned in even lines and even columns in the image from theta in the k+1 to n+1 frame images 1 The gray level transition is theta 2 Probability of gray scale, learning rate F (theta) of single-mode background model when n+1st frame image shared by all pixel points in video is generated 12 ) n+1
4. The self-adaptive image segmentation method for two-photon calcium imaging video data according to claim 1, characterized in that step S4 specifically comprises processing and judging each pixel of the image using the value range of the single-mode background model: if the pixel value lies within the value range of the single-mode background model, the pixel is treated as background; if the pixel value lies outside the value range, the pixel is treated as foreground.
CN202010393173.XA 2020-05-11 2020-05-11 Self-adaptive image segmentation method for two-photon calcium imaging video data Active CN111583292B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010393173.XA CN111583292B (en) 2020-05-11 2020-05-11 Self-adaptive image segmentation method for two-photon calcium imaging video data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010393173.XA CN111583292B (en) 2020-05-11 2020-05-11 Self-adaptive image segmentation method for two-photon calcium imaging video data

Publications (2)

Publication Number Publication Date
CN111583292A CN111583292A (en) 2020-08-25
CN111583292B true CN111583292B (en) 2023-07-07

Family

ID=72124836

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010393173.XA Active CN111583292B (en) 2020-05-11 2020-05-11 Self-adaptive image segmentation method for two-photon calcium imaging video data

Country Status (1)

Country Link
CN (1) CN111583292B (en)

Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102033043A (en) * 2010-10-19 2011-04-27 浙江大学 Grain moisture content detecting method based on hyperspectral image technology
CN102332162A (en) * 2011-09-19 2012-01-25 西安百利信息科技有限公司 Method for automatic recognition and stage compression of medical image regions of interest based on artificial neural network
US8313437B1 (en) * 2010-06-07 2012-11-20 Suri Jasjit S Vascular ultrasound intima-media thickness (IMT) measurement system
WO2012162981A1 (en) * 2011-09-16 2012-12-06 华为技术有限公司 Video character separation method and device
CN104616290A (en) * 2015-01-14 2015-05-13 合肥工业大学 Target detection algorithm in combination of statistical matrix model and adaptive threshold
CN105574896A (en) * 2016-02-01 2016-05-11 衢州学院 High-efficiency background modeling method for high-resolution video
CN108154513A (en) * 2017-11-21 2018-06-12 中国人民解放军第三军医大学 Cell based on two photon imaging data detects automatically and dividing method
CN109472801A (en) * 2018-11-22 2019-03-15 廖祥 It is a kind of for multiple dimensioned neuromorphic detection and dividing method
JP2019148801A (en) * 2019-03-20 2019-09-05 ザ ボード オブ トラスティーズ オブ ザ レランド スタンフォード ジュニア ユニバーシティー Method for using epi-illumination fluorescence microscope, method for using imaging device, and epi-illumination fluorescence microscope
CN110403576A (en) * 2019-08-01 2019-11-05 中国医学科学院北京协和医院 Application of the three-dimensional photoacoustic imaging in tumor of breast points-scoring system
CN110473166A (en) * 2019-07-09 2019-11-19 哈尔滨工程大学 A kind of urinary formed element recognition methods based on improvement Alexnet model
CN110866906A (en) * 2019-11-12 2020-03-06 安徽师范大学 Three-dimensional culture human myocardial cell pulsation detection method based on image edge extraction
CN111033351A (en) * 2017-05-19 2020-04-17 洛克菲勒大学 Imaging signal extraction device and method of using the same
CN111028245A (en) * 2019-12-06 2020-04-17 衢州学院 Multi-mode composite high-definition high-speed video background modeling method
CN111047654A (en) * 2019-12-06 2020-04-21 衢州学院 High-definition high-speed video background modeling method based on color information

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2002003327A2 (en) * 2000-06-29 2002-01-10 The Trustees Of Columbia University In The City Of New York Method and system for analyzing multi-dimensional data
WO2005073911A1 (en) * 2004-01-30 2005-08-11 Baumer Optronic Gmbh Image processing system
US20180267284A1 (en) * 2015-01-31 2018-09-20 Board Of Regents, The University Of Texas System High-speed laser scanning microscopy platform for high-throughput automated 3d imaging and functional volumetric imaging
US10568516B2 (en) * 2015-06-22 2020-02-25 The Board Of Trustees Of The Leland Stanford Junior University Methods and devices for imaging and/or optogenetic control of light-responsive neurons


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
An Adaptive Learning Rate Method for Improving Adaptability of Background Models; Rui Zhang, Weiguo Gong, Victor Grzeda, Andrew Yaworski, and Michael Greenspan; IEEE Signal Processing Letters, Vol. 20, No. 12, December 2013. *
Segmentation of Cortical Spreading Depression Wavefronts Through Local Similarity Metric; M. Filip Sluzewski, Petr Tvrdik, and Scott T. Acton; ICIP 2019. *
Background model construction algorithm with adaptive capability; Ding Dezhi, Hou Dewen; Computer Engineering and Design, No. 01. *
Large-field-of-view deep-penetration optical microscopy imaging based on a parallel wavefront correction algorithm; Zhao Qi, Shi Xin, Gong Wei, Hu Lejia, Zheng Yao, Zhu Xinpei, Si Ke; Chinese Journal of Lasers, Vol. 45, No. 12. *

Also Published As

Publication number Publication date
CN111583292A (en) 2020-08-25

Similar Documents

Publication Publication Date Title
KR102516360B1 (en) A method and apparatus for detecting a target
WO2017020723A1 (en) Character segmentation method and device and electronic device
US10769485B2 (en) Framebuffer-less system and method of convolutional neural network
CN107038416B (en) Pedestrian detection method based on binary image improved HOG characteristics
CN110428450B (en) Scale-adaptive target tracking method applied to mine tunnel mobile inspection image
CN111986126B (en) Multi-target detection method based on improved VGG16 network
CN112883795B (en) Rapid and automatic table extraction method based on deep neural network
CN109255799B (en) Target tracking method and system based on spatial adaptive correlation filter
CN107194414A (en) A kind of SVM fast Incremental Learning Algorithms based on local sensitivity Hash
EP3822858A2 (en) Method and apparatus for identifying key point locations in an image, and computer readable medium
CN116030237A (en) Industrial defect detection method and device, electronic equipment and storage medium
CN107798329B (en) CNN-based adaptive particle filter target tracking method
CN113963333B (en) Traffic sign board detection method based on improved YOLOF model
CN111027564A (en) Low-illumination imaging license plate recognition method and device based on deep learning integration
CN111832497B (en) Text detection post-processing method based on geometric features
CN107301652B (en) Robust target tracking method based on local sparse representation and particle swarm optimization
CN111583292B (en) Self-adaptive image segmentation method for two-photon calcium imaging video data
CN117314940A (en) Laser cutting part contour rapid segmentation method based on artificial intelligence
CN110136164B (en) Method for removing dynamic background based on online transmission transformation and low-rank sparse matrix decomposition
CN113780301B (en) Self-adaptive denoising machine learning application method for defending against attack
CN112532938A (en) Video monitoring system based on big data technology
CN110826564A (en) Small target semantic segmentation method and system in complex scene image
CN111028245A (en) Multi-mode composite high-definition high-speed video background modeling method
CN111583293B (en) Self-adaptive image segmentation method for multicolor double-photon image sequence
CN114897782B (en) Gastric cancer pathological section image segmentation prediction method based on generation type countermeasure network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant