CN111583292A - Self-adaptive image segmentation method for two-photon calcium imaging video data - Google Patents


Info

Publication number
CN111583292A
Authority
CN
China
Prior art keywords
background model
pixel
image
value
pixel point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010393173.XA
Other languages
Chinese (zh)
Other versions
CN111583292B (en)
Inventor
龚薇
斯科
张睿
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN202010393173.XA priority Critical patent/CN111583292B/en
Publication of CN111583292A publication Critical patent/CN111583292A/en
Application granted granted Critical
Publication of CN111583292B publication Critical patent/CN111583292B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G06T7/194: Image analysis; Segmentation; Edge detection involving foreground-background segmentation
    • G06T7/0012: Image analysis; Inspection of images; Biomedical image inspection
    • G06T2207/10016: Image acquisition modality; Video; Image sequence
    • G06T2207/20081: Special algorithmic details; Training; Learning
    • G06T2207/30016: Subject of image; Biomedical image processing; Brain


Abstract

The invention discloses an adaptive image segmentation method for two-photon calcium imaging video data. Frames k through n are selected from a two-photon calcium imaging video to construct a training sample; an initialized single-mode background model is built from the training sample; the single-mode background model is then continuously updated in real time; and the real-time-updated model is used to perform segmentation detection on images as they are input. The method addresses the lack of a background-modeling and image-segmentation approach designed specifically for the characteristics of two-photon calcium imaging video data, overcomes the inability of some existing methods to adapt to and exploit those characteristics, and effectively ensures both the accuracy and the computational efficiency of processing.

Description

Self-adaptive image segmentation method for two-photon calcium imaging video data
Technical Field
The invention relates to an image processing method in the technical field of video data mining, and in particular to an adaptive image segmentation method oriented to two-photon calcium imaging video data.
Background
Two-photon calcium imaging video generally has high resolution, a high frame count, and high bit depth. Taking the open-source mouse cranial-nerve two-photon calcium imaging video provided by the Allen Institute for Brain Science (USA) as an example, a single video exceeds 100,000 frames, each frame has a resolution of 512 × 512 pixels, and the video's data volume exceeds 56 GB. Given the huge amount of data, the large number of cerebral neurons recorded in the video, and their complex activity patterns, manual data mining is entirely infeasible, so an efficient automated data mining scheme must be researched and designed.
Adaptive image segmentation is a technique that can be used for automated data mining of two-photon calcium imaging video. By learning from samples of the video data, a background model of the video is constructed; by then comparing a given image frame against the background model, all active neurons in that frame can be segmented quickly and accurately, enabling fully automatic real-time monitoring of neuronal activity states, and in particular detection of abnormal neuronal states.
However, few adaptive image segmentation methods are currently designed specifically for the data characteristics of two-photon calcium imaging video. Existing methods fall broadly into two categories.
The first category derives from conventional segmentation methods for single static images. Their problem is that a temporally coherent video image sequence is treated as isolated, unrelated single images: only the spatial information within each image is used, and the time-dimension information of the sequence is lost entirely, so the implicit dynamic information of brain neurons in the two-photon calcium imaging video cannot be fully mined and exploited.
The second category derives from image segmentation methods in the traditional intelligent video surveillance field. Their problem is a lack of adaptation to, and exploitation of, the characteristics of two-photon calcium imaging video data. Again taking the Allen Institute's mouse cranial-nerve two-photon calcium imaging video as an example, the video data has 16-bit depth (pixel values range over 0-65535), whereas current video data in the intelligent video surveillance field has only 8-bit depth (pixel values range over 0-255). Most segmentation methods built for 8-bit data either cannot process 16-bit data at all or suffer a collapse in computational performance. In addition, intelligent-surveillance segmentation methods generally use a multi-modal background-model framework, while the background of two-photon calcium imaging video is single-mode; forcing a multi-modal framework onto it not only causes a significant loss of computational efficiency but also reduces the detection sensitivity for active neurons.
In summary, blindly transplanting an ill-matched adaptive image segmentation method cannot truly achieve effective data mining of two-photon calcium imaging video, and in serious cases leads to misjudgment of experimental results.
Therefore, in the field of automated data mining for two-photon calcium imaging video, an effective and efficient adaptive image segmentation method specially designed for the data characteristics of the two-photon calcium imaging video is urgently needed.
Disclosure of Invention
To address the problems in the prior art, the invention provides an adaptive image segmentation method oriented to two-photon calcium imaging video data. The method is designed specifically around the characteristics of such data: the background-model framework is optimized for the background characteristics of these videos, and the designed online updating scheme fully meets the accuracy, real-time, and precision requirements of processing 16-bit-depth video data.
The method comprises the following steps:
s1: selecting a kth frame to an nth frame from a two-photon calcium imaging video to construct a training sample;
s2: establishing an initialized single-mode background model according to the training sample;
s3: continuously updating the single-mode background model in real time;
s4: and carrying out segmentation detection on the image input in real time by using the real-time updated single-mode background model.
The two-photon calcium imaging video can be acquired from brain neurons by two-photon fluorescence calcium imaging.
The step S1 includes the steps of:
s11: selecting the kth frame to the nth frame in the two-photon calcium imaging video as training samples;
s12: carrying out image enhancement processing on the training sample, and specifically carrying out image enhancement transformation of the following formula on values of all pixel points in the training sample:
J(x,y) = (I(x,y) - MIN) / (MAX - MIN) × V        (1)
where V is the upper limit of the pixel-value range in the two-photon calcium imaging video, I(x,y) is the value of pixel (x,y) in the original image, and J(x,y) is the value of pixel (x,y) after enhancement; MAX is the global maximum pixel value in the training sample and MIN is the global minimum pixel value in the training sample.
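The enhancement transform itself is rendered as an image in the source; assuming it is the standard min-max stretch implied by the V, MAX, and MIN definitions above, step S12 can be sketched as follows (the function name and array layout are illustrative):

```python
import numpy as np

def enhance(frames, V=65535):
    """Min-max stretch of a (T, H, W) frame stack to the full [0, V] range.

    MIN and MAX are the global minimum and maximum pixel values over the
    whole training sample (step S12); the exact stretch form is an
    assumption, since the source renders the formula as an image."""
    frames = frames.astype(np.float64)
    mn, mx = frames.min(), frames.max()
    return ((frames - mn) / (mx - mn) * V).astype(np.uint16)
```

After this transform, the darkest pixel in the training sample maps to 0 and the brightest to V, which spreads the 16-bit dynamic range over the full scale.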
The step S2 includes the steps of:
s21: for each pixel point (x, y) in the video, calculating and generating a central value of the initialized single-mode background model at the position of the pixel point (x, y), wherein the method comprises the following steps:
(1) for each pixel point (x, y) on the peripheral edge of the video frame, compute the median of its pixel values J(x,y)_k, J(x,y)_{k+1}, ..., J(x,y)_n over all frames of the training sample, where J(x,y)_k denotes the value of pixel (x, y) in the k-th image; this median is taken as the center value C(x,y)_n of the initialized single-mode background model at pixel (x, y);
(2) for each pixel point (x, y) at a non-peripheral position in the video, compute the median of all pixel values in the 3 × 3 neighborhood centered on that pixel over all images of the training sample; each 3 × 3 neighborhood contributes nine pixel values per image and the training sample has n - k + 1 images, for 9 × (n - k + 1) values in total. This median is taken as the center value C(x,y)_n of the initialized single-mode background model at pixel (x, y).
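The two cases of step S21 can be sketched as follows; the function name and the (T, H, W) array layout are illustrative, and the loop over interior pixels favors clarity over speed:

```python
import numpy as np

def init_center(frames):
    """Initial background-model center C (step S21) for a (T, H, W)
    stack of enhanced training frames J.

    Border pixels: median of that pixel's values over all frames.
    Interior pixels: median over the 3x3 spatial neighborhood of every
    frame, i.e. 9 * (n - k + 1) values per pixel."""
    T, H, W = frames.shape
    # The per-pixel temporal median covers the border-pixel case.
    C = np.median(frames, axis=0).astype(np.float64)
    # Interior pixels are overwritten with the 3x3-neighborhood median.
    for y in range(1, H - 1):
        for x in range(1, W - 1):
            C[y, x] = np.median(frames[:, y - 1:y + 2, x - 1:x + 2])
    return C
```

Using the median rather than the mean makes the initial center robust to transient fluorescence bursts inside the training window.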
S22: for each pixel point (x, y) in the video, calculating and generating a radius value of the initialized monomodal background model at the position of the pixel point (x, y), wherein the calculating method comprises the following steps:
[Formula omitted: rendered as an image (Figure BDA0002486381830000031) in the source]
where M and N are the height and width of a video frame image, and R_n is the radius value of the initialized single-mode background model at the pixel position. R_n is independent of pixel position, i.e. R_n is the same for all pixel points; z denotes the image index, z = k, ..., n.
In a specific embodiment, the subscript n of R_n indicates the radius value computed cumulatively from the image data of frames k through n: once the n-th image has been accumulated, the radius of the initialized model is R_n. After R_n is obtained, it can be used at frame n+1 to iteratively update the background-model radius R_{n+1}. That is, within frames k through n the radius is not computed by an iterative method; from frame n+1 onward, the radius is updated iteratively.
S23: the structure of the initialized single-mode background model at each pixel (x, y) position in the video is as follows: the initialized single-mode background model is C (x, y)nIs a central value, RnIs the range of values for the radius, denoted as [ C (x, y)n-Rn,C(x,y)n+Rn];
S24: calculating the learning rate of generating the monomodal background model, wherein the method comprises the following steps:
Within the training sample, the probability that the pixel value of any pixel point in the video transitions from gray level θ1 to gray level θ2 is computed, generating the single-mode background-model learning rate F(θ1,θ2)_n at frame n, shared by all pixel points in the video ("shared" means that all pixel points in the same video frame use the same single-mode background-model learning rate). Here θ1, θ2 ∈ [0, V], where θ1 is the gray level before the pixel-value transition, θ2 is the gray level after the transition, and V is the upper limit of the pixel-value range in the two-photon calcium imaging video, as in formula (1).
The step S3 includes the steps of:
s31: continuously updating the central value of the single-mode background model, wherein the method comprises the following steps:
when frame n+1 is newly read in, the center value of the single-mode background model at each pixel point (x, y) of the image is updated:

C(x,y)_{n+1} = C(x,y)_n + [J(x,y)_{n+1} - C(x,y)_n] × F(θ1,θ2)_n

where C(x,y)_{n+1} is the center value of the single-mode background model at pixel (x, y) for frame n+1; C(x,y)_n and F(θ1,θ2)_n are, respectively, the background-model center value and learning rate at pixel (x, y) for frame n; J(x,y)_{n+1} is the value of pixel (x, y) in frame n+1; θ1 is C(x,y)_n and θ2 is J(x,y)_{n+1}.
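The center-update rule of step S31 is a per-pixel exponential-style update whose step size is the looked-up learning rate; a minimal sketch (function name illustrative):

```python
def update_center(c_prev, j_new, f):
    """One online update of the background-model center at a pixel
    (step S31): C_{n+1} = C_n + (J_{n+1} - C_n) * F(theta1, theta2)_n.

    f in [0, 1] acts as the learning rate: f = 0 leaves the center
    unchanged, f = 1 snaps it to the newly observed pixel value."""
    return c_prev + (j_new - c_prev) * f
```

Because f depends on the observed gray-level transition (θ1, θ2), frequently seen transitions pull the center faster than rare ones.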
S32: continuously updating the radius value of the single-mode background model, wherein the method comprises the following steps:
when frame n+1 is newly read in, the radius value of the single-mode background model at each pixel point (x, y) of the image is updated:

[Formula omitted: rendered as an image (Figure BDA0002486381830000041) in the source]
where R_{n+1} is the radius value of the single-mode background model at any pixel point for frame n+1;
S33: when frame n+1 is newly read in, the single-mode background model at each pixel point (x, y) of the image is updated to the range with center C(x,y)_{n+1} and radius R_{n+1}, i.e. [C(x,y)_{n+1} - R_{n+1}, C(x,y)_{n+1} + R_{n+1}];
S34: the learning rate of the single-mode background model is continuously updated, and the method comprises the following steps:
when frame n+1 is newly read in, compute, for all pixel points located on even rows and even columns of the image, the probability that the pixel value transitions from gray level θ1 to gray level θ2 over frames k+1 through n+1, generating the single-mode background-model learning rate F(θ1,θ2)_{n+1} for frame n+1, shared by all pixel points in the video.
By analogy, when frame n+i is newly read in, the single-mode background model at frame n+i is continuously updated by the same method as steps S31-S34: the background model is the range with center C(x,y)_{n+i} and radius R_{n+i}, i.e. [C(x,y)_{n+i} - R_{n+i}, C(x,y)_{n+i} + R_{n+i}], and the single-mode background-model learning rate F(θ1,θ2)_{n+i} is updated at the same time.
Step S4 specifically processes and classifies each pixel point of the image using the value range of the single-mode background model: if the pixel value lies within the value range of the single-mode background model, the pixel point is taken as background; if the pixel value does not lie within that range, the pixel point is taken as foreground.
In a specific implementation, a two-photon image of brain neurons is detected in real time, and active and inactive neurons can be distinguished: if the pixel value of a pixel point lies within the value range of the single-mode background model, the pixel point belongs to an inactive neuron; if it does not, the pixel point belongs to an active neuron.
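The decision rule of step S4 is a per-pixel interval test against the model range [C - R, C + R]; vectorized over a whole frame it can be sketched as (function name illustrative):

```python
import numpy as np

def segment(frame, C, R):
    """Step S4: a pixel is background when its value lies within
    [C - R, C + R]; otherwise it is foreground (an active neuron in
    the two-photon setting). Returns a boolean (H, W) foreground mask."""
    return np.abs(frame.astype(np.float64) - C) > R
```

The mask can be rendered as the white-foreground / black-background images shown in the figures by mapping True to white.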
The invention has the following substantial beneficial effects:
the method of the invention solves the problem that the field lacks special design of self-adaptive image segmentation aiming at the characteristics of two-photon calcium imaging video data. Meanwhile, the method provided by the invention overcomes the problems that the existing methods cannot adapt to and utilize the characteristics of two-photon calcium imaging video data:
(1) the method is dedicated to mining two-photon calcium imaging video data and can fully exploit the time-dimension information of the video image sequence, so the implicit dynamic information of brain neurons in the video is effectively mined;
(2) the method is genuinely suited to 16-bit-depth two-photon calcium imaging video data and does not suffer a collapse in computational performance;
(3) the method is designed for the inherent characteristics of the background in two-photon calcium imaging video data: the single-mode background-model framework and online updating mechanism effectively guarantee the accuracy and efficiency of the background-model computation, thereby improving image segmentation accuracy.
Drawings
FIG. 1 is a schematic flow diagram of the process of the present invention.
Fig. 2 is an example of a training sample used in the method of the present invention.
FIG. 3 is a diagram illustrating an example of the result of image enhancement processing on a training sample in the method of the present invention.
Fig. 4 is an example of the results achieved by the method according to an embodiment of the invention.
Fig. 5 is an example of results obtained by an image segmentation method in the field of general intelligent video surveillance according to an embodiment.
FIG. 6 is an example of the results achieved by a general single-image-oriented static image segmentation method according to an embodiment.
Fig. 7 is a schematic diagram of a background model learning rate obtaining method in the method of the present invention.
Table 1 is a qualitative comparison of the image segmentation results of the method of the present invention with other general methods.
Detailed Description
The technical scheme of the invention is further specifically described by the following embodiments and the accompanying drawings.
As shown in fig. 1, an embodiment of the present invention is as follows:
Taking the mouse cranial-nerve two-photon calcium imaging video provided by the Allen Institute for Brain Science (USA) as an example: the video has a single channel (pixel values carry only gray information) and comprises 115,461 images; each image has a resolution of 512 × 512, a data depth of 16 bits, and pixel values in the range 0-65535. Fig. 2 illustrates an example training sample image.
The specific process of this embodiment is shown in fig. 1, and includes the following steps:
S1: select frames k = 1 through n = 100 of the two-photon calcium imaging video to construct the training sample:
s11: selecting the 1 st frame to the 100 th frame in the two-photon calcium imaging video as training samples;
S12: carry out image enhancement processing on the training sample, specifically applying the following image enhancement transformation to the values of all pixel points in the training sample:

J(x,y) = (I(x,y) - MIN) / (MAX - MIN) × 65535
the results of the method of the present invention after performing image enhancement processing on the training sample image shown in fig. 2 according to the embodiment are shown in fig. 3.
S2: and (3) constructing an initialized single-mode background model according to the training sample:
s21: for each pixel point (x, y) in each frame of image of the training sample, calculating and generating a central value of the initialized single-mode background model at the position of the pixel point (x, y), wherein the method comprises the following steps:
(1) for each pixel point (x, y) on the peripheral edge of the image, compute the median of its pixel values J(x,y)_1, J(x,y)_2, ..., J(x,y)_100 over all frames of the training sample; this median is taken as the center value C(x,y)_100 of the initialized single-mode background model at pixel (x, y);
(2) for each pixel point (x, y) at a non-peripheral position of the video frame image, compute the median of all pixel values in the 3 × 3 neighborhood centered on that pixel over all 100 images of the training sample; each 3 × 3 neighborhood contributes nine pixel values per image and there are 100 images, for 9 × 100 = 900 values in total. This median is taken as the center value C(x,y)_100 of the initialized single-mode background model at pixel (x, y).
S22: for each pixel point (x, y) in the video image, compute the radius value of the initialized single-mode background model in the first frame image of the training sample. The radius is shared: all pixel points in the same video frame have the same single-mode background-model radius value. The calculation uses the following formula:
[Formula omitted: rendered as an image (Figure BDA0002486381830000062) in the source]
S23: the initialized single-mode background model at each pixel (x, y) of the image is structured as follows: it is the range of values with center C(x,y)_100 and radius R_100, denoted [C(x,y)_100 - R_100, C(x,y)_100 + R_100];
S24: calculating the learning rate of generating the monomodal background model, wherein the method comprises the following steps:
In all frame images of the training sample, the probability that the pixel value of any pixel point transitions from gray level θ1 to gray level θ2 is computed, generating the learning rate F(θ1,θ2)_n of the single-mode background model at frame n, shared by all pixel points in the video ("shared" means that the learning rate at a fixed position in the table is the same for the background model of every image), where θ1, θ2 ∈ [0, V] = [0, 65535]. The background-model learning-rate acquisition scheme of the method is illustrated in fig. 7.
Preferably, the single-mode background-model learning rate F(θ1,θ2)_100 may be computed by the following iterative algorithm:
θ1 = I(x,y)_k, θ2 = I(x,y)_{k+1};

E(θ1→θ2) = 1;

H(θ1,θ2)_{k+1} = Σ E(θ1→θ2);

Z(θ1,θ2)_100 = Σ_{k=1..99} H(θ1,θ2)_{k+1};

F(θ1,θ2)_100 = Z(θ1,θ2)_100 / Σ_{θ2} Z(θ1,θ2)_100
where I(x,y)_k and I(x,y)_{k+1} are the pixel values of any pixel point (x, y) in frames k and k+1 of the video, abbreviated θ1 and θ2 respectively; since pixel values in the video are natural numbers in [0, 65535], θ1 ∈ [0, 65535] and θ2 ∈ [0, 65535]. E(θ1→θ2) = 1 records one detection of the following event: the value of pixel (x, y) jumps from gray level θ1 in frame k to gray level θ2 in frame k+1. Σ E(θ1→θ2) counts, over all pixel points in the video, the number of jumps from gray level θ1 in frame k to gray level θ2 in frame k+1; this count is recorded in the corresponding cell H(θ1,θ2)_{k+1} of the square matrix H. The matrix Z(θ1,θ2)_100 is the accumulation of the H(θ1,θ2)_{k+1} values over frames 1-100 of the video training sample; it records the total number of detected jumps from gray level θ1 to gray level θ2 within the training sample. Normalizing Z(θ1,θ2)_100 to probability values in [0, 1] yields the single-mode background-model learning rate F(θ1,θ2)_100, a square matrix of size 65536 × 65536 (one row and one column per gray level 0-65535).
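The E/H/Z/F accumulation above can be sketched as follows. A dense 65536 × 65536 float matrix would need roughly 17 GB, so this sketch keeps the transition counts in a dict-of-dicts instead; normalizing each row of Z so that F(θ1, ·) sums to 1 is an assumption about the source's "normalized to [0, 1] probability" wording, and the function name is illustrative:

```python
from collections import defaultdict
import numpy as np

def learning_rate_table(frames):
    """Sparse estimate of the learning-rate table F(theta1, theta2)
    from a (T, H, W) training stack (the H/Z/F accumulation of S24).

    counts[t1][t2] plays the role of Z(t1, t2): the total number of
    pixel-value jumps from gray level t1 to t2 between consecutive
    frames anywhere in the stack."""
    counts = defaultdict(lambda: defaultdict(int))
    for k in range(frames.shape[0] - 1):
        # Each matching pixel pair is one E(theta1 -> theta2) event.
        for t1, t2 in zip(frames[k].ravel(), frames[k + 1].ravel()):
            counts[int(t1)][int(t2)] += 1
    # Row-normalize Z into probabilities to obtain F.
    F = {}
    for t1, row in counts.items():
        total = sum(row.values())
        F[t1] = {t2: c / total for t2, c in row.items()}
    return F
```

Looking up F[θ1][θ2] (defaulting to 0 for unseen transitions) then gives the learning rate used by the center update of step S31.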
S3: continuously updating the single-mode background model in real time:
s31: continuously updating the central value of the single-mode background model, wherein the method comprises the following steps:
when frame 101 is newly read in, the center value of the single-mode background model at each pixel point (x, y) of the video image is updated:

C(x,y)_101 = C(x,y)_100 + [I(x,y)_101 - C(x,y)_100] × F(θ1,θ2)_100

where C(x,y)_101 is the center value of the single-mode background model at pixel (x, y) for frame 101; C(x,y)_100 and F(θ1,θ2)_100 are the background-model center value and learning rate at pixel (x, y) for frame 100; I(x,y)_101 is the value of pixel (x, y) in frame 101; θ1 is C(x,y)_100 and θ2 is I(x,y)_101.
S32: continuously updating the radius value of the single-mode background model, wherein the method comprises the following steps:
when frame 101 is newly read in, the radius value of the single-mode background model at each pixel point (x, y) of the video image is updated:

[Formula omitted: rendered as an image (Figure BDA0002486381830000081) in the source]

where R_101 is the radius value of the single-mode background model at any pixel point for frame 101;
S33: when frame 101 is newly read in, the single-mode background model at each pixel point (x, y) of the video image is updated to the range with center C(x,y)_101 and radius R_101, i.e. [C(x,y)_101 - R_101, C(x,y)_101 + R_101];
S34: the learning rate of the single-mode background model is continuously updated, and the method comprises the following steps:
when frame 101 is newly read in, compute, for all pixel points located on even rows and even columns of the video, the probability that the pixel value transitions from gray level θ1 to gray level θ2 over frames 2-101, generating the single-mode background-model learning rate F(θ1,θ2)_101 for frame 101, shared by all pixel points in the video.
By analogy, when frame 100+i is newly read in, the single-mode background model at frame 100+i is continuously updated by the same method as steps S31-S34: the background model is the range with center C(x,y)_{100+i} and radius R_{100+i}, i.e. [C(x,y)_{100+i} - R_{100+i}, C(x,y)_{100+i} + R_{100+i}], and the single-mode background-model learning rate F(θ1,θ2)_{100+i} is updated at the same time.
As described above, F(θ1,θ2)_100 is a square matrix of size 65536 × 65536, with θ1 and θ2 as its row and column coordinates; substituting specific values of θ1 and θ2 into F(θ1,θ2)_100 retrieves the background-model learning rate stored in the cell at row θ1, column θ2. In the example of fig. 7, the value of F(3000,2000)_100 is the learning rate stored at row 3000, column 2000 of the matrix, i.e. 0.5.
S4: and carrying out segmentation detection on the image input in real time by using the real-time updated single-mode background model.
In a specific implementation, a two-photon image of brain neurons is detected in real time, and active and inactive neurons can be distinguished: if the pixel value of a pixel point lies within the value range of the single-mode background model, the pixel point belongs to an inactive neuron; if it does not, the pixel point belongs to an active neuron.
The results obtained by the embodiment of the method of the invention are shown in fig. 4. Because the method is designed for the data characteristics of two-photon calcium imaging video and specially optimized, the segmented foreground as a whole (the white pixel regions) is consistent with the target objects to be detected (the active neurons), with few missed detections (foreground pixels that should be marked white marked instead as background black) and few false detections (background pixels that should be marked black marked instead as foreground white).
Meanwhile, a general image segmentation method from the intelligent video surveillance field was selected for comparison; its results on the embodiment are shown in fig. 5. Because that method is not designed for the data characteristics of two-photon calcium imaging video, the segmented foreground is inconsistent with the target objects to be detected, with a large number of false detections and a small number of missed detections.
In addition, a general single-image-oriented static image segmentation method was selected for comparison; its results on the embodiment are shown in fig. 6. The foreground it segments agrees poorly with the target objects to be detected, again with a large number of false detections and a small number of missed detections.
In summary, the qualitative comparison results between the method of the present invention and the two general image segmentation methods are shown in table 1.
TABLE 1

Image segmentation method                                          Qualitative segmentation result
The method of the present invention                                Very good
A general method from the intelligent video surveillance field     Poor
A general single-image-oriented static image segmentation method   Poor
The results show that the method fills the lack of adaptive image segmentation specially designed for the data characteristics of two-photon calcium imaging video, overcomes the inability of existing methods to adapt to and exploit those characteristics, improves image segmentation accuracy, and achieves a significant technical effect.

Claims (5)

1. A two-photon calcium imaging video data-oriented self-adaptive image segmentation method is characterized by comprising the following steps:
S1: selecting the kth frame to the nth frame from a two-photon calcium imaging video to construct a training sample;
S2: establishing an initialized single-mode background model from the training sample;
S3: continuously updating the single-mode background model in real time;
S4: carrying out segmentation detection on images input in real time by using the continuously updated single-mode background model.
2. The adaptive image segmentation method for two-photon calcium imaging video data according to claim 1, wherein: the step S1 includes the steps of:
S11: selecting the kth frame to the nth frame of the two-photon calcium imaging video as the training sample;
S12: carrying out image enhancement processing on the training sample, specifically applying the following image enhancement transformation to the values of all pixel points in the training sample:
J(x, y) = V × (I(x, y) − MIN) / (MAX − MIN)
wherein V denotes the upper limit of the pixel value range in the two-photon calcium imaging video, I(x, y) is the value of pixel point (x, y) in the original image, and J(x, y) is the value of pixel point (x, y) after image enhancement; MAX and MIN denote the global maximum and minimum image pixel values over the training sample, respectively.
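The enhancement step S12 can be sketched as a global min-max contrast stretch. The exact transform is a reconstruction from the symbol definitions above (in the published claim the formula appears only as an image), so treat its precise form as an assumption:

```python
import numpy as np

def enhance(frames, V=255):
    """Global min-max contrast stretch over the whole training sample (S12).

    frames: array of shape (n_frames, H, W) holding raw pixel values I(x, y);
    V is the upper limit of the pixel value range. MAX and MIN are global
    over all training frames, as defined in the claim.
    """
    frames = np.asarray(frames, dtype=np.float64)
    MIN, MAX = frames.min(), frames.max()
    return V * (frames - MIN) / (MAX - MIN)
```

This maps the darkest pixel in the training sample to 0 and the brightest to V, which matches the stated purpose of enhancing the typically dim two-photon frames.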
3. The adaptive image segmentation method for two-photon calcium imaging video data according to claim 1, wherein: the step S2 includes the steps of:
S21: for each pixel point (x, y) in the video, calculating the center value of the initialized single-mode background model at the position of pixel point (x, y), as follows:
(1) for each pixel point (x, y) on the peripheral edge of the video frame, calculating the median of that pixel's values J(x, y)_k, J(x, y)_{k+1}, ..., J(x, y)_n over all frame images of the training sample, where J(x, y)_k denotes the value of pixel point (x, y) in the kth image, and taking this median as the center value C(x, y)_n of the initialized single-mode background model at position (x, y);
(2) for each pixel point (x, y) not on the peripheral edge of the video frame, calculating the median of all pixel values in the 3 × 3 neighborhood centered on that pixel across all images of the training sample, and taking this median as the center value C(x, y)_n of the initialized single-mode background model at position (x, y);
S22: for each pixel point (x, y) in the video, calculating the radius value of the initialized single-mode background model at the position of pixel point (x, y), as follows:
[radius formula: shown in the published claim only as image FDA0002486381820000021]
wherein M and N are the height and width of one frame of the video image, R_n is the radius value of the initialized single-mode background model at the pixel position, and z denotes the ordinal number of the image;
S23: the initialized single-mode background model at each pixel point (x, y) in the video is structured as: a model with C(x, y)_n as the center value and R_n as the radius, whose value range is [C(x, y)_n − R_n, C(x, y)_n + R_n];
S24: calculating the learning rate for generating the single-mode background model, as follows:
within the range of the training sample, calculating the probability that the value of a pixel point in the video transitions from gray level θ1 to gray level θ2, thereby generating the learning rate F(θ1, θ2)_n of the single-mode background model at the nth frame, shared by all pixel points in the video, where θ1, θ2 ∈ [0, V]; θ1 denotes the gray level before the pixel value transition, θ2 denotes the gray level after the transition, and V denotes the upper limit of the pixel value range in the two-photon calcium imaging video.
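The center-value initialization of S21 can be sketched with temporal medians. The 3 × 3 spatio-temporal median for interior pixels and the per-pixel temporal median at the border follow the claim; the function name and array layout are our own, and this is an illustrative sketch rather than the patented implementation:

```python
import numpy as np

def init_center(frames):
    """Initialize the background-model center values C(x, y)_n (S21).

    frames: (n_frames, H, W) enhanced training frames J. Border pixels use
    the median of their own values across all frames; interior pixels use
    the median over their 3x3 neighborhood across all frames.
    """
    frames = np.asarray(frames, dtype=np.float64)
    n, H, W = frames.shape
    C = np.median(frames, axis=0)  # temporal median; kept for border pixels
    for y in range(1, H - 1):
        for x in range(1, W - 1):
            # median over the 3x3 spatial neighborhood in every training frame
            C[y, x] = np.median(frames[:, y - 1:y + 2, x - 1:x + 2])
    return C
```

The median makes the initial model robust to the brief fluorescence transients that dominate calcium imaging, which is presumably why the claim prefers it over a mean.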
4. The adaptive image segmentation method for two-photon calcium imaging video data according to claim 1, wherein: the step S3 includes the steps of:
S31: continuously updating the center value of the single-mode background model, as follows:
when the (n+1)th frame image is newly read in, the center value of the single-mode background model at each pixel point (x, y) in the image is updated by:
C(x, y)_{n+1} = C(x, y)_n + [J(x, y)_{n+1} − C(x, y)_n] × F(θ1, θ2)_n
wherein C(x, y)_{n+1} is the center value of the single-mode background model at pixel point (x, y) in the (n+1)th frame image; C(x, y)_n and F(θ1, θ2)_n are, respectively, the center value and the learning rate of the background model at pixel point (x, y) in the nth frame image; J(x, y)_{n+1} is the value of pixel point (x, y) in the (n+1)th frame image; θ1 is C(x, y)_n and θ2 is J(x, y)_{n+1};
S32: continuously updating the radius value of the single-mode background model, as follows:
when the (n+1)th frame image is newly read in, the radius value of the single-mode background model at each pixel point (x, y) in the image is updated by:
[radius update formula: shown in the published claim only as image FDA0002486381820000022]
wherein R_{n+1} is the radius value of the single-mode background model at any pixel point in the (n+1)th frame;
S33: when the (n+1)th frame image is newly read in, the single-mode background model at each pixel point (x, y) in the image is updated to: a model with C(x, y)_{n+1} as the center value and R_{n+1} as the radius, whose value range is [C(x, y)_{n+1} − R_{n+1}, C(x, y)_{n+1} + R_{n+1}];
S34: continuously updating the learning rate of the single-mode background model, as follows:
when the (n+1)th frame image is newly read in, calculating, over the (k+1)th to (n+1)th frame images, the probability that the values of pixel points located in even rows and even columns transition from gray level θ1 to gray level θ2, thereby generating the learning rate F(θ1, θ2)_{n+1} of the single-mode background model at the (n+1)th frame image, shared by all pixel points in the video.
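One update step of S31–S33 can be sketched as follows. The center update implements the claimed formula C_{n+1} = C_n + [J_{n+1} − C_n] × F(θ1, θ2)_n exactly; the radius update is published only as an image, so the pull toward the observed deviation used here is an assumed illustrative form, not the patented one:

```python
import numpy as np

def update_model(C, R, J_new, F):
    """One real-time update of the single-mode background model (S31-S33).

    C, R: current center and radius values (scalars or arrays);
    J_new: newly read enhanced frame; F: learning-rate lookup
    F(theta1, theta2), evaluated with theta1 = C and theta2 = J_new.
    """
    rate = F(C, J_new)
    # center update, as stated in the claim
    C_new = C + (J_new - C) * rate
    # radius update: assumed form (the claim shows this formula only as an image)
    R_new = R + (np.abs(J_new - C) - R) * rate
    return C_new, R_new
```

With the learning rate driven by the gray-level transition probability, pixels whose values change in statistically common ways adapt faster than pixels showing rare (likely foreground) transitions.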
5. The adaptive image segmentation method for two-photon calcium imaging video data according to claim 1, wherein step S4 specifically uses the value range of the single-mode background model to judge each pixel point of the input image: if the value of a pixel point lies within the value range of the single-mode background model, the pixel point is taken as background; if it lies outside that value range, the pixel point is taken as foreground.
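The decision rule of claim 5 reduces to an interval test per pixel; a direct, minimal sketch (names are ours):

```python
import numpy as np

def segment(J, C, R):
    """Foreground/background decision from the model's value range (S4).

    A pixel is background when its value lies inside [C - R, C + R];
    otherwise it is foreground. Returns a boolean foreground mask.
    """
    J, C, R = (np.asarray(a, dtype=np.float64) for a in (J, C, R))
    return (J < C - R) | (J > C + R)
```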
CN202010393173.XA 2020-05-11 2020-05-11 Self-adaptive image segmentation method for two-photon calcium imaging video data Active CN111583292B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010393173.XA CN111583292B (en) 2020-05-11 2020-05-11 Self-adaptive image segmentation method for two-photon calcium imaging video data


Publications (2)

Publication Number Publication Date
CN111583292A true CN111583292A (en) 2020-08-25
CN111583292B CN111583292B (en) 2023-07-07

Family

ID=72124836

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010393173.XA Active CN111583292B (en) 2020-05-11 2020-05-11 Self-adaptive image segmentation method for two-photon calcium imaging video data

Country Status (1)

Country Link
CN (1) CN111583292B (en)

Citations (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040015310A1 (en) * 2000-06-29 2004-01-22 Rafael Yuste Method and system for analyzing multi-dimensional data
US20090129699A1 (en) * 2004-01-30 2009-05-21 Baumer Optronic Gmbh Image processing system
CN102033043A (en) * 2010-10-19 2011-04-27 浙江大学 Grain moisture content detecting method based on hyperspectral image technology
CN102332162A (en) * 2011-09-19 2012-01-25 西安百利信息科技有限公司 Method for automatic recognition and stage compression of medical image regions of interest based on artificial neural network
US8313437B1 (en) * 2010-06-07 2012-11-20 Suri Jasjit S Vascular ultrasound intima-media thickness (IMT) measurement system
WO2012162981A1 (en) * 2011-09-16 2012-12-06 华为技术有限公司 Video character separation method and device
CN104616290A (en) * 2015-01-14 2015-05-13 合肥工业大学 Target detection algorithm in combination of statistical matrix model and adaptive threshold
CN105574896A (en) * 2016-02-01 2016-05-11 衢州学院 High-efficiency background modeling method for high-resolution video
CN108154513A (en) * 2017-11-21 2018-06-12 中国人民解放军第三军医大学 Cell based on two photon imaging data detects automatically and dividing method
US20180177401A1 (en) * 2015-06-22 2018-06-28 The Board Of Trustees Of The Leland Stanford Junior University Methods and Devices for Imaging and/or Optogenetic Control of Light-Responsive Neurons
US20180267284A1 (en) * 2015-01-31 2018-09-20 Board Of Regents, The University Of Texas System High-speed laser scanning microscopy platform for high-throughput automated 3d imaging and functional volumetric imaging
CN109472801A (en) * 2018-11-22 2019-03-15 廖祥 It is a kind of for multiple dimensioned neuromorphic detection and dividing method
JP2019148801A (en) * 2019-03-20 2019-09-05 ザ ボード オブ トラスティーズ オブ ザ レランド スタンフォード ジュニア ユニバーシティー Method for using epi-illumination fluorescence microscope, method for using imaging device, and epi-illumination fluorescence microscope
CN110403576A (en) * 2019-08-01 2019-11-05 中国医学科学院北京协和医院 Application of the three-dimensional photoacoustic imaging in tumor of breast points-scoring system
CN110473166A (en) * 2019-07-09 2019-11-19 哈尔滨工程大学 A kind of urinary formed element recognition methods based on improvement Alexnet model
CN110866906A (en) * 2019-11-12 2020-03-06 安徽师范大学 Three-dimensional culture human myocardial cell pulsation detection method based on image edge extraction
CN111028245A (en) * 2019-12-06 2020-04-17 衢州学院 Multi-mode composite high-definition high-speed video background modeling method
CN111033351A (en) * 2017-05-19 2020-04-17 洛克菲勒大学 Imaging signal extraction device and method of using the same
CN111047654A (en) * 2019-12-06 2020-04-21 衢州学院 High-definition high-speed video background modeling method based on color information


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
M. Filip Sluzewski, Petr Tvrdik and Scott T. Acton: "Segmentation of Cortical Spreading Depression Wavefronts through Local Similarity Metric", ICIP 2019 *
Rui Zhang, Weiguo Gong, Victor Grzeda, Andrew Yaworski, and Michael Greenspan: "An Adaptive Learning Rate Method for Improving Adaptability of Background Models", IEEE Signal Processing Letters, vol. 20, no. 12, December 2013 *
Ding Dezhi; Hou Dewen: "Background model construction algorithm with self-adaptive capability", Computer Engineering and Design, no. 01 *
Zhao Qi, Shi Xin, Gong Wei, Hu Lejia, Zheng Yao, Zhu Xinpei, Si Ke: "Large field-of-view deep-penetration optical microscopy imaging based on a parallel wavefront correction algorithm", Chinese Journal of Lasers, vol. 45, no. 12 *

Also Published As

Publication number Publication date
CN111583292B (en) 2023-07-07

Similar Documents

Publication Publication Date Title
CN111160533B (en) Neural network acceleration method based on cross-resolution knowledge distillation
CN111524137B (en) Cell identification counting method and device based on image identification and computer equipment
CN104992447B (en) A kind of image automatic testing method of sewage motion microorganism
CN112183501B (en) Depth counterfeit image detection method and device
CN107038416B (en) Pedestrian detection method based on binary image improved HOG characteristics
CN110728294A (en) Cross-domain image classification model construction method and device based on transfer learning
CN111242026B (en) Remote sensing image target detection method based on spatial hierarchy perception module and metric learning
CN106815576B (en) Target tracking method based on continuous space-time confidence map and semi-supervised extreme learning machine
US20200082213A1 (en) Sample processing method and device
CN107194414A (en) A kind of SVM fast Incremental Learning Algorithms based on local sensitivity Hash
CN110992365A (en) Loss function based on image semantic segmentation and design method thereof
CN109255799B (en) Target tracking method and system based on spatial adaptive correlation filter
CN114897782B (en) Gastric cancer pathological section image segmentation prediction method based on generation type countermeasure network
CN116071560A (en) Fruit identification method based on convolutional neural network
CN113963333B (en) Traffic sign board detection method based on improved YOLOF model
CN107798329A (en) Adaptive particle filter method for tracking target based on CNN
CN110136164B (en) Method for removing dynamic background based on online transmission transformation and low-rank sparse matrix decomposition
CN111583292B (en) Self-adaptive image segmentation method for two-photon calcium imaging video data
CN111047654A (en) High-definition high-speed video background modeling method based on color information
CN111028245B (en) Multi-mode composite high-definition high-speed video background modeling method
CN110991361B (en) Multi-channel multi-modal background modeling method for high-definition high-speed video
CN111583293B (en) Self-adaptive image segmentation method for multicolor double-photon image sequence
CN112532938A (en) Video monitoring system based on big data technology
CN111797732A (en) Video motion identification anti-attack method insensitive to sampling
CN111008995B (en) Single-channel multi-mode background modeling method for high-definition high-speed video

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant