CN117291941A - Cell nucleus segmentation method based on boundary and central point feature assistance - Google Patents
- Publication number
- CN117291941A (Application number CN202311329962.7A)
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06T7/12 — Edge-based segmentation (Image analysis; Segmentation; Edge detection)
- G06T7/13 — Edge detection
- G06T7/60 — Analysis of geometric attributes
- G06N3/0464 — Convolutional networks [CNN, ConvNet]
- G06N3/048 — Activation functions
- G06N3/084 — Backpropagation, e.g. using gradient descent
- G06T2207/10056 — Microscopic image
- G06T2207/20081 — Training; Learning
- G06T2207/20084 — Artificial neural networks [ANN]
Abstract
The invention provides a cell nucleus segmentation method based on boundary and center point feature assistance, relating to the field of medical image processing. Training is carried out through encoder–decoder networks, so that the trained network can find the boundary, the center point, and the appearance features of cell nuclei in pathological images, realizing feature constraints from point to line to surface. By designing a specific center point loss function and a boundary weighting module (BWM), the integrity of the input image information is preserved and the accuracy of nucleus segmentation is improved. Moreover, by means of computer vision and image processing technology, automatic nucleus segmentation can be achieved, greatly improving segmentation accuracy and efficiency. This not only saves human resources but also accelerates the research and diagnosis process.
Description
Technical Field
The invention relates to the field of medical image processing, in particular to a cell nucleus segmentation method based on boundary and central point feature assistance.
Background
Pathological sections are regarded as the "gold standard" for cancer diagnosis because they provide a large amount of tumor information, and digital pathological images are now widely used in clinical prediction. In clinical applications, cell nucleus segmentation plays a vital role in analyzing the morphological features of nuclei in pathological images. Nucleus segmentation can extract useful features of many cells in a pathological image and is of great value for nuclear morphometry, digital pathological analysis, and so on. Accurate nucleus segmentation can help doctors diagnose and treat diseases at an early stage, and features such as the shape, size, and distribution of nuclei can provide important pathological information, for example the abnormal proliferation of cancer cells. Through nucleus segmentation, the morphological characteristics of nuclei can be quantitatively analyzed, which assists doctors in diagnosing malignant tumors, leukemia, and other diseases and in determining an optimal treatment scheme. However, nucleus segmentation is often performed manually by doctors, which is time-consuming and error-prone, for two main reasons: first, large numbers of nuclei cluster together, resulting in crowded and overlapping nuclei; second, nucleus boundaries in defocused images are often blurred, which affects segmentation accuracy. Therefore, by means of computer vision and image processing technology, automatic nucleus segmentation can be realized, greatly improving the accuracy and efficiency of segmentation. This not only saves human resources but also accelerates research and diagnosis.
Disclosure of Invention
To overcome the shortcomings of the above technology, the invention provides a cell nucleus segmentation method based on boundary and center point feature assistance, which improves the accuracy of nucleus segmentation.
The technical solution adopted to solve this problem is as follows:
a cell nucleus segmentation method based on boundary and center point feature assistance comprises the following steps:
a) Acquire n cell nucleus images to obtain a nucleus image set Y and a label set G, where Y = {Y_1, Y_2, ..., Y_i, ..., Y_n}, Y_i is the i-th nucleus image, G = {G_1, G_2, ..., G_i, ..., G_n}, and G_i is the label corresponding to the i-th nucleus image Y_i, i ∈ {1, ..., n};
b) Divide the nucleus image set Y into a training set P and a test set T, where P = {P_1, P_2, ..., P_i, ..., P_o}, P_i is the i-th nucleus image in the training set P, i ∈ {1, ..., o}, o is the total number of nucleus images in the training set P; T = {T_1, T_2, ..., T_i, ..., T_q}, T_i is the i-th nucleus image in the test set T, i ∈ {1, ..., q}, and q is the total number of nucleus images in the test set T;
c) Convert the label set corresponding to the training set P to obtain a cell edge contour pseudo-label set G^E = {G_1^E, G_2^E, ..., G_i^E, ..., G_o^E} and a cell center point pseudo-label set G^C = {G_1^C, G_2^C, ..., G_i^C, ..., G_o^C}, where G_i^E is the i-th cell edge contour pseudo-label and G_i^C is the i-th cell center point pseudo-label;
d) Constructing a segmentation network consisting of a PC module, a PE module and a main module segmentation framework;
e) Input the i-th nucleus image P_i of the training set P into the PC module and train the PC module; input the i-th nucleus image P_i of the training set P into the PE module and train the PE module;
f) Input the i-th nucleus image P_i of the training set P into the main-module segmentation framework and train the main-module segmentation framework;
g) Training the segmentation network to obtain an optimized segmentation network;
h) Input the i-th nucleus image T_i of the test set T into the PC module of the optimized segmentation network and output a segmentation map of cell center points; input the i-th nucleus image T_i of the test set T into the PE module of the optimized segmentation network and output a segmentation map of cell boundaries; input the i-th nucleus image T_i of the test set T into the main-module segmentation framework of the optimized segmentation network and output a segmentation map of cell nuclei.
Further, in step a) the n nucleus images are acquired from the NuInsSeg dataset.
Preferably, in step b) the nucleus image set Y is divided into a training set P and a test set T at a ratio of 7:3.
Further, step c) comprises the following steps:
c-1) Scale the i-th nucleus image P_i in the training set P to 512 × 512 to obtain the image P_i′, P_i′ ∈ R^(C×H×W), where R is the real number space, C is the number of image channels, H is the image height, and W is the image width;
c-2) From the label G_i corresponding to the i-th nucleus image P_i in the training set P, establish a square box X around a selected cell whose nucleus boundary is Y, and compute, by the formula M(x) = min_{y∈Ω} ‖x − y‖_1, the minimum L1 distance M(x) from a point x on the square box X to a point y on the nucleus boundary Y, where Ω is the cell boundary. The center point of the square box X for which M(x) is maximal over the points of the box is selected as the center of the cell, and its center point pseudo-label is G_i^C. If the maximal L1 distance M(x) is attained N times, N being a positive integer greater than or equal to 2, the center point of the square box X corresponding to the first maximal M(x) is selected as the cell center. All the center point pseudo-labels in the training set P form the cell center point pseudo-label set G^C;
c-3) Using the Sobel algorithm, compute the horizontal gradient image I_X of the label G_i corresponding to the i-th nucleus image P_i in the training set P by the formula I_X = P_i * G_X, where * is the convolution operation and G_X is the horizontal Sobel operator, G_X = [[−1, 0, 1], [−2, 0, 2], [−1, 0, 1]]; using the Sobel algorithm, compute the vertical gradient image I_Y by the formula I_Y = P_i * G_Y, where G_Y is the vertical Sobel operator, G_Y = [[−1, −2, −1], [0, 0, 0], [1, 2, 1]]. The i-th cell edge contour pseudo-label is then obtained by the formula G_i^E = sqrt(I_X² + I_Y²). All the edge contour pseudo-labels in the training set P form the cell edge contour pseudo-label set G^E;
Further, in step d) the PC module, the PE module, and the main-module segmentation framework are each composed of a U-net network; the PC module is used to segment cell center points, the PE module is used to segment cell boundaries, and the main-module segmentation framework is used to segment cell nuclei.
Further, step e) comprises the steps of:
e-1) Input the i-th nucleus image P_i in the training set P into the PC module and output the feature map P_i^PC, P_i^PC ∈ R^(1×H×W);
e-2) Compute the center point distance loss function L1 by the formula L1 = (1/(H×W)) Σ_i Σ_j (u_ij − m_ij)², where u_ij is the pixel value of the pixel in row i and column j of the feature map P_i^PC and m_ij is the pixel value of the pixel in row i and column j of the center point pseudo-label G_i^C;
e-3) Compute a loss value between the feature map P_i^PC and the center point pseudo-label G_i^C using a Euclidean-distance-based loss function, and adjust the encoder and decoder parameter weights of the PC module by back propagation according to the loss value;
e-4) Input the i-th nucleus image P_i in the training set P into the PE module and output the feature map P_i^PE, P_i^PE ∈ R^(1×H×W);
e-5) Compute a loss value between the feature map P_i^PE and the edge contour pseudo-label G_i^E using the edge loss function Loss_edges, and adjust the encoder and decoder parameter weights of the PE module by back propagation according to the loss value;
e-6) Set up a weighting module composed, in order, of a convolution layer and a Tanh activation function; input the feature map P_i^PE into the weighting module and output the feature map P_i^BW, P_i^BW ∈ R^(1×H×W).
The convolution kernel size of the convolution layer in step e-6) is 5 × 5.
Further, step f) comprises the steps of:
f-1) Input the i-th nucleus image P_i in the training set P into the main-module segmentation framework and output the last image feature of the decoder, P_i^MA, P_i^MA ∈ R^(1×H×W);
f-2) Add the feature map P_i^BW and the feature map P_i^PC to obtain the feature map P_i^ME, P_i^ME ∈ R^(1×H×W);
f-3) Input the feature map P_i^ME sequentially into a first convolution layer and a second convolution layer and output the feature map P_i^ED, P_i^ED ∈ R^(1×H×W);
f-4) Compute the loss function L3 between the feature map P_i^ED and the corresponding label, and adjust the encoder and decoder parameter weights of the main-module segmentation framework by back propagation according to the loss value of L3, where L3 = 0.5·BCE + L_Dice, BCE is the binary cross-entropy loss, and L_Dice is the Dice loss.
Further, in step g) an Adam optimizer is used to train the PC module according to the loss function L1 to obtain the optimized PC module, to train the PE module according to the loss function Loss_edges to obtain the optimized PE module, and to train the main-module segmentation framework according to the loss function L3 to obtain the optimized main-module segmentation framework; the optimized PC module, the optimized PE module, and the optimized main-module segmentation framework form the optimized segmentation network.
Further, step h) comprises the steps of:
h-1) Input the i-th nucleus image T_i in the test set T into the PC module of the optimized segmentation network and output the cell center point segmentation map P_i^PC′;
h-2) Input the i-th nucleus image T_i in the test set T into the PE module of the optimized segmentation network and output the cell boundary segmentation map P_i^BW′;
h-3) Input the i-th nucleus image T_i in the test set T into the main-module segmentation framework of the optimized segmentation network and output the cell nucleus segmentation map P_i^ED′.
The beneficial effects of the invention are as follows: from the nucleus labels provided by the dataset, the invention generates a cell edge contour pseudo-label and a cell center point pseudo-label, which serve as significant input labels associated with the original image in the subsequent segmentation framework. The three labels each participate in network training, so that the trained network can obtain the center point position and the edge contour of the nucleus from the image, which greatly improves the segmentation of overlapping nuclei and constrains the main-module network. A boundary weighting module (BWM) is added after the PE module, making the network pay more attention to the boundary part of the nucleus and enhancing its segmentation capability near nucleus boundaries; the feature map output by the BWM is fused with the segmentation result of the PC module for training, which greatly improves segmentation precision. The invention designs a specific center point loss function that achieves a more accurate constraint for the point segmentation task, and designs the boundary weighting module BWM, which filters out unimportant noise in the edge feature map and refines the edge information of foreground nuclei, facilitating the subsequent fusion.
Drawings
FIG. 1 is a flow chart of the method of the present invention;
FIG. 2 is a diagram of a nuclear division network according to the present invention;
FIG. 3 is a block diagram of a boundary weighting module according to the present invention.
Detailed Description
The invention is further described with reference to fig. 1 to 3.
A cell nucleus segmentation method based on boundary and center point feature assistance comprises the following steps:
a) Acquire n cell nucleus images to obtain a nucleus image set Y and a label set G, where Y = {Y_1, Y_2, ..., Y_i, ..., Y_n}, Y_i is the i-th nucleus image, G = {G_1, G_2, ..., G_i, ..., G_n}, and G_i is the label corresponding to the i-th nucleus image Y_i, i ∈ {1, ..., n}.
b) Divide the nucleus image set Y into a training set P and a test set T, where P = {P_1, P_2, ..., P_i, ..., P_o}, P_i is the i-th nucleus image in the training set P, i ∈ {1, ..., o}, o is the total number of nucleus images in the training set P; T = {T_1, T_2, ..., T_i, ..., T_q}, T_i is the i-th nucleus image in the test set T, i ∈ {1, ..., q}, and q is the total number of nucleus images in the test set T.
c) Convert the label set corresponding to the training set P to obtain a cell edge contour pseudo-label set G^E = {G_1^E, G_2^E, ..., G_i^E, ..., G_o^E} and a cell center point pseudo-label set G^C = {G_1^C, G_2^C, ..., G_i^C, ..., G_o^C}, where G_i^E is the i-th cell edge contour pseudo-label and G_i^C is the i-th cell center point pseudo-label.
d) Construct a segmentation network consisting of a PC module, a PE module, and a main-module segmentation framework.
e) Input the i-th nucleus image P_i of the training set P into the PC module and train the PC module; input the i-th nucleus image P_i of the training set P into the PE module and train the PE module.
f) Input the i-th nucleus image P_i of the training set P into the main-module segmentation framework and train the main-module segmentation framework.
g) Training the segmentation network to obtain an optimized segmentation network.
h) Input the i-th nucleus image T_i of the test set T into the PC module of the optimized segmentation network and output a segmentation map of cell center points; input the i-th nucleus image T_i of the test set T into the PE module of the optimized segmentation network and output a segmentation map of cell boundaries; input the i-th nucleus image T_i of the test set T into the main-module segmentation framework of the optimized segmentation network and output a segmentation map of cell nuclei.
Training is carried out through encoder–decoder networks, so that the trained network can find the boundary, the center point, and the appearance features of cell nuclei in pathological images, realizing feature constraints from point to line to surface. By designing a specific center point loss function and a boundary weighting module (BWM), the integrity of the input image information is ensured and the accuracy of nucleus segmentation is improved. Meanwhile, by means of computer vision and image processing technology, automatic nucleus segmentation can be achieved, greatly improving segmentation accuracy and efficiency. This not only saves human resources but also accelerates the research and diagnosis process.
In one embodiment of the invention, the n nucleus images in step a) are acquired from the NuInsSeg dataset. In one embodiment of the invention, the nucleus image set Y in step b) is divided into a training set P and a test set T at a ratio of 7:3.
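As a minimal sketch of the 7:3 split in step b) — assuming the images and labels are already loaded as paired lists; the function name and the fixed seed are illustrative, not part of the patent:

```python
import random

def split_dataset(images, labels, train_ratio=0.7, seed=42):
    """Shuffle paired (image, label) samples and split them 7:3
    into a training set P and a test set T, as in step b)."""
    assert len(images) == len(labels)
    indices = list(range(len(images)))
    random.Random(seed).shuffle(indices)       # reproducible shuffle
    o = int(len(indices) * train_ratio)        # o = size of training set P
    train_idx, test_idx = indices[:o], indices[o:]
    P = [(images[i], labels[i]) for i in train_idx]
    T = [(images[i], labels[i]) for i in test_idx]
    return P, T
```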
In one embodiment of the invention, step c) comprises the steps of:
c-1) Scale the i-th nucleus image P_i in the training set P to 512 × 512 to obtain the image P_i′, P_i′ ∈ R^(C×H×W), where R is the real number space, C is the number of image channels, H is the image height, and W is the image width.
c-2) From the label G_i corresponding to the i-th nucleus image P_i in the training set P, establish a square box X around a selected cell whose nucleus boundary is Y, and compute, by the formula M(x) = min_{y∈Ω} ‖x − y‖_1, the minimum L1 distance M(x) from a point x on the square box X to a point y on the nucleus boundary Y, where Ω is the cell boundary. The center point of the square box X for which M(x) is maximal over the points of the box is selected as the center of the cell, and its center point pseudo-label is G_i^C. If the maximal L1 distance M(x) is attained N times, N being a positive integer greater than or equal to 2, the center point of the square box X corresponding to the first maximal M(x) is selected as the cell center. All the center point pseudo-labels in the training set P form the cell center point pseudo-label set G^C.
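The distance computation of step c-2) can be sketched as follows, assuming the box points and the nucleus boundary are given as lists of integer pixel coordinates (the construction of the box from the label is omitted; both function names are illustrative):

```python
def min_l1_distance(x, boundary):
    """M(x): the minimum L1 distance from a point x on the box
    to any point y on the nucleus boundary."""
    return min(abs(x[0] - y[0]) + abs(x[1] - y[1]) for y in boundary)

def select_center(box_points, boundary):
    """Select the box point whose M(x) is maximal as the cell center.
    Python's max() returns the first maximal element, matching the
    tie-breaking rule of step c-2) (first maximal M(x) wins)."""
    return max(box_points, key=lambda x: min_l1_distance(x, boundary))
```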
c-3) Using the Sobel algorithm, compute the horizontal gradient image I_X of the label G_i corresponding to the i-th nucleus image P_i in the training set P by the formula I_X = P_i * G_X, where * is the convolution operation and G_X is the horizontal Sobel operator, G_X = [[−1, 0, 1], [−2, 0, 2], [−1, 0, 1]]; using the Sobel algorithm, compute the vertical gradient image I_Y by the formula I_Y = P_i * G_Y, where G_Y is the vertical Sobel operator, G_Y = [[−1, −2, −1], [0, 0, 0], [1, 2, 1]]. The i-th cell edge contour pseudo-label is then obtained by the formula G_i^E = sqrt(I_X² + I_Y²). All the edge contour pseudo-labels in the training set P form the cell edge contour pseudo-label set G^E.
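A numpy-only sketch of the edge pseudo-label generation in step c-3), using the standard 3 × 3 Sobel kernels (assumed here, since the kernels are not reproduced in the text) and a plain-loop "same" convolution to avoid external dependencies:

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], dtype=float)

def conv2d(img, kernel):
    """'Same' 2-D convolution with zero padding (kernel flipped, true convolution)."""
    k = kernel[::-1, ::-1]
    padded = np.pad(img, 1)
    out = np.zeros(img.shape, dtype=float)
    for r in range(img.shape[0]):
        for c in range(img.shape[1]):
            out[r, c] = np.sum(padded[r:r + 3, c:c + 3] * k)
    return out

def edge_pseudo_label(mask):
    """Step c-3): gradient images I_X, I_Y and the edge contour
    pseudo-label as the gradient magnitude sqrt(I_X^2 + I_Y^2)."""
    ix = conv2d(mask, SOBEL_X)
    iy = conv2d(mask, SOBEL_Y)
    return np.sqrt(ix ** 2 + iy ** 2)
```

Applied to a binary nucleus mask, the magnitude is zero in flat regions and nonzero only along the contour, which is exactly the pseudo-label behavior the step describes.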
In one embodiment of the invention, in step d) the PC module, the PE module, and the main-module segmentation framework are each composed of a U-net network; the PC module is used to segment cell center points, the PE module is used to segment cell boundaries, and the main-module segmentation framework is used to segment cell nuclei.
In one embodiment of the invention, step e) comprises the steps of:
e-1) Input the i-th nucleus image P_i in the training set P into the PC module and output the feature map P_i^PC, P_i^PC ∈ R^(1×H×W).
e-2) Compute the center point distance loss function L1 by the formula L1 = (1/(H×W)) Σ_i Σ_j (u_ij − m_ij)², where u_ij is the pixel value of the pixel in row i and column j of the feature map P_i^PC and m_ij is the pixel value of the pixel in row i and column j of the center point pseudo-label G_i^C. The term (u_ij − m_ij)² squares the difference between the feature map P_i^PC and the center point pseudo-label at each position, emphasizing larger differences; dividing the total difference by the total number of image pixels gives the average difference at each pixel location, yielding the final center point distance loss function L1.
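The center point distance loss of step e-2) is a per-pixel mean squared difference, which can be written in a few lines (the function name is illustrative):

```python
import numpy as np

def center_point_loss(pred, target):
    """L1 of step e-2): (1/(H*W)) * sum_ij (u_ij - m_ij)^2, i.e. the
    squared difference between the PC feature map (u_ij) and the center
    point pseudo-label (m_ij), averaged over all H*W pixels."""
    pred = np.asarray(pred, dtype=float)
    target = np.asarray(target, dtype=float)
    return float(np.mean((pred - target) ** 2))
```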
e-3) Compute a loss value between the feature map P_i^PC and the center point pseudo-label G_i^C using a Euclidean-distance-based loss function, and adjust the encoder and decoder parameter weights of the PC module by back propagation according to the loss value.
e-4) Input the i-th nucleus image P_i in the training set P into the PE module and output the feature map P_i^PE, P_i^PE ∈ R^(1×H×W).
e-5) Compute a loss value between the feature map P_i^PE and the edge contour pseudo-label G_i^E using the edge loss function Loss_edges, and adjust the encoder and decoder parameter weights of the PE module by back propagation according to the loss value.
e-6) In order to attend to the edge information, suppress the background information, and better fuse the features with the PC module and the main module, the feature map P_i^PE output by the PE module is boundary-weighted by a boundary weighting module. The weighting module consists, in order, of a convolution layer and a Tanh activation function; the feature map P_i^PE is input into the weighting module and the feature map P_i^BW, P_i^BW ∈ R^(1×H×W), is output. A feature map with clear boundaries can be obtained through the weighting module: after boundary weighting, positions close to the boundary are given higher weights and positions far from the boundary are given lower weights, i.e. the pixel values at positions outside the boundary decrease monotonically with distance.
In this embodiment, the convolution kernel size of the convolution layer in step e-6) is 5 × 5.
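A minimal numpy sketch of the boundary weighting module (BWM) of step e-6): a single 5 × 5 "same" convolution followed by Tanh. In the actual network the kernel weights are learned; the plain-loop convolution and any kernel passed in here merely stand in for the trained layer:

```python
import numpy as np

def bwm(feature, kernel):
    """Boundary weighting module: 5x5 'same' convolution (zero padding)
    followed by Tanh, mapping the PE feature map P_i^PE to P_i^BW."""
    assert kernel.shape == (5, 5)
    k = kernel[::-1, ::-1]                 # flip for true convolution
    padded = np.pad(feature, 2)
    out = np.zeros(feature.shape, dtype=float)
    for r in range(feature.shape[0]):
        for c in range(feature.shape[1]):
            out[r, c] = np.sum(padded[r:r + 5, c:c + 5] * k)
    return np.tanh(out)                    # squashes responses into (-1, 1)
```

The Tanh keeps the weighted map bounded, so strong edge responses saturate toward ±1 while flat background regions stay near 0 — consistent with the "emphasize boundary, suppress background" role described above.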
In one embodiment of the invention, step f) comprises the steps of:
f-1) Input the i-th nucleus image P_i in the training set P into the main-module segmentation framework and output the last image feature of the decoder, P_i^MA, P_i^MA ∈ R^(1×H×W).
f-2) Add the feature map P_i^BW and the feature map P_i^PC to obtain the feature map P_i^ME, P_i^ME ∈ R^(1×H×W).
f-3) Input the feature map P_i^ME sequentially into a first convolution layer and a second convolution layer and output the feature map P_i^ED, P_i^ED ∈ R^(1×H×W). The two convolution layers eliminate the overlapping effects introduced by the fusion process.
f-4) Compute the loss function L3 between the feature map P_i^ED and the corresponding label, and adjust the encoder and decoder parameter weights of the main-module segmentation framework by back propagation according to the loss value of L3. From a pixel-wise perspective, the nucleus segmentation task is a binary classification problem, with the region outside the nuclei being the background. Binary cross entropy (BCE) is therefore usually used as the loss function; however, in nucleus segmentation the background is much larger than the nuclei, which causes a pixel class imbalance problem — the network learns mostly background features, which hampers the learning of nucleus features, so using the cross-entropy loss alone does not work well. The Dice loss function handles class imbalance better, so the invention combines the strengths of both into an improved joint loss function L3 = 0.5·BCE + L_Dice, where BCE is the binary cross-entropy loss and L_Dice is the Dice loss. The improved loss function makes the training process more efficient and stable and alleviates the class imbalance problem. In one embodiment of the invention, in step g) an Adam optimizer is used to train the PC module according to the loss function L1 to obtain the optimized PC module, to train the PE module according to the loss function Loss_edges to obtain the optimized PE module, and to train the main-module segmentation framework according to the loss function L3 to obtain the optimized main-module segmentation framework; the optimized PC module, the optimized PE module, and the optimized main-module segmentation framework form the optimized segmentation network.
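The joint loss L3 = 0.5·BCE + L_Dice of step f-4) can be sketched in numpy as follows (the Dice formulation with a smoothing epsilon is a common convention, assumed here since the patent gives only the combination rule):

```python
import numpy as np

def bce_loss(pred, target, eps=1e-7):
    """Binary cross-entropy, averaged over pixels; predictions are
    clipped away from 0 and 1 for numerical stability."""
    p = np.clip(pred, eps, 1 - eps)
    return float(-np.mean(target * np.log(p) + (1 - target) * np.log(1 - p)))

def dice_loss(pred, target, eps=1e-7):
    """Dice loss: 1 - 2|X ∩ Y| / (|X| + |Y|), with eps smoothing."""
    inter = np.sum(pred * target)
    return float(1 - (2 * inter + eps) / (np.sum(pred) + np.sum(target) + eps))

def joint_loss(pred, target):
    """Improved joint loss of step f-4): L3 = 0.5 * BCE + L_Dice."""
    return 0.5 * bce_loss(pred, target) + dice_loss(pred, target)
```

Because the Dice term is computed on region overlap rather than per pixel, it is insensitive to the large background area, which is exactly why it is mixed with BCE here.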
In one embodiment of the invention, step h) comprises the steps of:
h-1) inputting the ith cell nucleus image T_i in the test set T into the PC module of the optimized segmentation network, and outputting the cell center point segmentation map P_i^PC′.
h-2) inputting the ith cell nucleus image T_i in the test set T into the PE module of the optimized segmentation network, and outputting the cell boundary segmentation map P_i^BW′.
h-3) inputting the ith cell nucleus image T_i in the test set T into the main-module segmentation framework of the optimized segmentation network, and outputting the cell nucleus segmentation map P_i^ED′.
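The three-branch inference of this embodiment can be sketched as below; `pc_module`, `pe_module` and `main_module` are assumed to be callables wrapping the trained networks (hypothetical names, for illustration only):

```python
def segment_nuclei(image, pc_module, pe_module, main_module):
    # Run one test image through the three optimized branches:
    # PC -> center point map, PE -> boundary map, main -> nucleus map.
    center_map = pc_module(image)
    boundary_map = pe_module(image)
    nucleus_map = main_module(image)
    return center_map, boundary_map, nucleus_map
```

The three branches are independent at test time, so they could also be evaluated in parallel on the same input image.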
Finally, it should be noted that the foregoing description is only a preferred embodiment of the present invention, and the present invention is not limited thereto. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art may still modify the technical solutions described in the foregoing embodiments or make equivalent replacements of some of the technical features. Any modification, equivalent replacement or improvement made within the spirit and principles of the present invention shall be included in the protection scope of the present invention.
Claims (10)
1. A cell nucleus segmentation method based on boundary and center point feature assistance, characterized by comprising the following steps:
a) acquiring n cell nucleus images to obtain a cell nucleus image set Y and a label set G, Y = {Y_1, Y_2, ..., Y_i, ..., Y_n}, where Y_i is the ith cell nucleus image, and G = {G_1, G_2, ..., G_i, ..., G_n}, where G_i is the label corresponding to the ith cell nucleus image Y_i, i ∈ {1, ..., n};
b) dividing the cell nucleus image set Y into a training set P and a test set T, P = {P_1, P_2, ..., P_i, ..., P_o}, where P_i is the ith cell nucleus image in the training set P, i ∈ {1, ..., o}, and o is the total number of cell nucleus images in the training set P; T = {T_1, T_2, ..., T_i, ..., T_q}, where T_i is the ith cell nucleus image in the test set T, i ∈ {1, ..., q}, and q is the total number of cell nucleus images in the test set T;
c) converting the labels corresponding to the training set P to obtain a cell edge contour pseudo-label set and a cell center point pseudo-label set, whose ith elements are the pseudo label of the ith cell edge contour and the pseudo label of the ith cell center point, respectively;
d) constructing a segmentation network consisting of a PC module, a PE module and a main-module segmentation framework;
e) inputting the ith cell nucleus image P_i in the training set P into the PC module and training the PC module, and inputting the ith cell nucleus image P_i in the training set P into the PE module and training the PE module;
f) inputting the ith cell nucleus image P_i in the training set P into the main-module segmentation framework and training the main-module segmentation framework;
g) training the segmentation network to obtain an optimized segmentation network;
h) inputting the ith cell nucleus image T_i in the test set T into the PC module of the optimized segmentation network and outputting a segmentation map of the cell center points, inputting the ith cell nucleus image T_i in the test set T into the PE module of the optimized segmentation network and outputting a segmentation map of the cell boundaries, and inputting the ith cell nucleus image T_i in the test set T into the main-module segmentation framework of the optimized segmentation network and outputting a segmentation map of the cell nuclei.
2. The boundary and center point feature based assisted nuclear segmentation method of claim 1, wherein: in step a), the n cell nucleus images are acquired from the NuInsSeg dataset.
3. The boundary and center point feature based assisted nuclear segmentation method of claim 1, wherein: in step b), the cell nucleus image set Y is divided into the training set P and the test set T in a ratio of 7:3.
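The 7:3 split of claim 3 can be illustrated as follows; the patent does not specify shuffling or a random seed, so those details are assumptions of this sketch:

```python
import random

def split_dataset(images, ratio=0.7, seed=0):
    # Shuffle indices reproducibly, then cut at the 7:3 boundary to
    # obtain training set P and test set T.
    idx = list(range(len(images)))
    random.Random(seed).shuffle(idx)
    cut = int(len(images) * ratio)
    train = [images[i] for i in idx[:cut]]
    test = [images[i] for i in idx[cut:]]
    return train, test
```

Fixing the seed makes the split reproducible across runs, which matters when comparing segmentation results between training configurations.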
4. The boundary and center point feature based assisted nuclear segmentation method of claim 1 wherein step c) comprises the steps of:
c-1) scaling the ith cell nucleus image P_i in the training set P to 512×512 to obtain the image P_i′, P_i′ ∈ R^(C×H×W), where R is the real number space, C is the number of image channels, H is the image height, and W is the image width;
c-2) in the label G_i corresponding to the ith cell nucleus image P_i in the training set P, establishing a square box X around the selected cell, whose nucleus boundary is Y, and calculating by the formula M(x) = min_{y∈Ω} ‖x − y‖₁ the minimum L1 distance M(x) from a point x on the square box X to a point y on the cell nucleus boundary Y, where Ω is the boundary of the cell; when the maximum L1 distance M(x) from the points on the square box X to the points on the cell nucleus boundary Y is unique, selecting the center point of the square box X as the cell center, whose center point pseudo label is the ith cell center point pseudo label; if there are N maximum L1 distances M(x) from the points on the square box X to the points on the cell nucleus boundary Y, N being a positive integer greater than or equal to 2, sequentially selecting the center point of the square box X corresponding to the first maximum L1 distance M(x) as the cell center; all center point pseudo labels in the training set P form the cell center point pseudo-label set;
c-3) using the Sobel algorithm, calculating by the formula I_X = P_i * G_X the horizontal gradient image I_X of the label G_i corresponding to the ith cell nucleus image P_i in the training set P, where * is the convolution operation and G_X is the horizontal Sobel operator; using the Sobel algorithm, calculating by the formula I_Y = P_i * G_Y the vertical gradient image I_Y of the label G_i corresponding to the ith cell nucleus image P_i in the training set P, where G_Y is the vertical Sobel operator; calculating by the formula √(I_X² + I_Y²) the ith cell edge contour pseudo label; all edge contour pseudo labels in the training set P form the cell edge contour pseudo-label set;
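The Sobel-based edge pseudo-label construction of step c-3) can be sketched as below; the standard 3×3 Sobel kernels and the zero-padded convolution are assumptions of this sketch, since the claim's kernel matrices are not reproduced in the text:

```python
import numpy as np

# Standard 3x3 Sobel kernels (the usual definitions of G_X and G_Y).
G_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
G_Y = G_X.T

def conv2d(img, kernel):
    # Plain 'same' 2-D convolution with zero padding (kernel flipped,
    # per the mathematical definition of convolution).
    k = np.flipud(np.fliplr(kernel))
    pad = k.shape[0] // 2
    padded = np.pad(img, pad)
    out = np.zeros_like(img, dtype=float)
    for r in range(img.shape[0]):
        for c in range(img.shape[1]):
            out[r, c] = np.sum(padded[r:r + 3, c:c + 3] * k)
    return out

def edge_pseudo_label(mask):
    # Gradient images I_X, I_Y of a binary nucleus label mask, then the
    # Sobel magnitude, binarized into the edge contour pseudo label.
    i_x = conv2d(mask.astype(float), G_X)
    i_y = conv2d(mask.astype(float), G_Y)
    edges = np.sqrt(i_x ** 2 + i_y ** 2)
    return (edges > 0).astype(np.uint8)
```

On a filled mask the gradient vanishes in the interior and is nonzero only along the boundary, which is exactly the contour supervision the PE module needs.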
5. The boundary and center point feature based assisted nuclear segmentation method of claim 4, wherein: in step d), the PC module, the PE module and the main-module segmentation framework each consist of a U-net network; the PC module is used for segmenting cell center points, the PE module is used for segmenting cell boundaries, and the main-module segmentation framework is used for segmenting cell nuclei.
6. The boundary and center point feature based assisted nuclear segmentation method of claim 5 wherein step e) comprises the steps of:
e-1) inputting the ith cell nucleus image P_i in the training set P into the PC module, and outputting the feature map P_i^PC, P_i^PC ∈ R^(1×H×W);
e-2) calculating the center point distance loss function L1 by the formula L1 = √(Σ_{i,j} (u_ij − m_ij)²), where u_ij is the pixel value of the pixel in row i, column j of the feature map P_i^PC, and m_ij is the pixel value of the pixel in row i, column j of the center point pseudo label;
e-3) calculating a loss value from the feature map P_i^PC and the center point pseudo label using the Euclidean-distance-based loss function, and adjusting the encoder and decoder parameter weights of the PC module by the back propagation method according to the loss value;
e-4) inputting the ith cell nucleus image P_i in the training set P into the PE module, and outputting the feature map P_i^PE, P_i^PE ∈ R^(1×H×W);
e-5) calculating a loss value from the feature map P_i^PE and the corresponding pseudo label using the edge loss function Loss_edges, and adjusting the encoder and decoder parameter weights of the PE module by the back propagation method according to the loss value;
e-6) providing a weighting module consisting of a convolution layer and a Tanh activation function in sequence, inputting the feature map P_i^PE into the weighting module, and outputting the feature map P_i^BW, P_i^BW ∈ R^(1×H×W).
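Steps e-2) and e-6) can be illustrated as follows; the Euclidean reading of the loss L1 and the toy 'valid' convolution are assumptions made for this sketch, not the patent's implementation:

```python
import numpy as np

def center_point_loss(pred, pseudo_label):
    # One plausible Euclidean-distance reading of the L1 loss of step e-2):
    # the root of the summed squared pixel differences u_ij - m_ij.
    return float(np.sqrt(np.sum((pred - pseudo_label) ** 2)))

def weighting_module(feature_map, kernel):
    # Step e-6): one convolution followed by Tanh; here a toy 'valid'
    # convolution stands in for the 5x5 conv layer of claim 7.
    kh, kw = kernel.shape
    h = feature_map.shape[0] - kh + 1
    w = feature_map.shape[1] - kw + 1
    out = np.zeros((h, w))
    for r in range(h):
        for c in range(w):
            out[r, c] = np.sum(feature_map[r:r + kh, c:c + kw] * kernel)
    return np.tanh(out)  # Tanh bounds the boundary weights in (-1, 1)
```

The Tanh keeps the boundary-weight map bounded, so adding it to the center point map in step f-2) cannot be dominated by unbounded activations.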
7. The boundary and center point feature based assisted nuclear segmentation method of claim 6, wherein: the convolution kernel size of the convolution layer in step e-6) is 5×5.
8. The boundary and center point feature based assisted nuclear segmentation method of claim 6 wherein step f) comprises the steps of:
f-1) inputting the ith cell nucleus image P_i in the training set P into the main-module segmentation framework, and outputting the last decoder image feature P_i^MA, P_i^MA ∈ R^(1×H×W);
f-2) adding the feature map P_i^BW and the feature map P_i^PC element-wise to obtain the feature map P_i^ME, P_i^ME ∈ R^(1×H×W);
f-3) inputting the feature map P_i^ME into a first convolution layer and a second convolution layer in sequence, and outputting the feature map P_i^ED, P_i^ED ∈ R^(1×H×W);
f-4) calculating the loss function L3 from the feature map P_i^ED and the corresponding pseudo label, and adjusting the encoder and decoder parameter weights of the main-module segmentation framework by the back propagation method according to the loss value of the loss function L3, where L3 = 0.5·BCE + L_Dice, BCE is the binary cross-entropy loss, and L_Dice is the Dice loss.
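The fusion of steps f-2) and f-3) can be sketched as below; the two convolution layers are represented by caller-supplied placeholder callables, since their kernels are not specified in the claim:

```python
import numpy as np

def fuse_maps(p_bw, p_pc):
    # Step f-2): element-wise addition of the boundary-weighted map
    # P_i^BW and the center point map P_i^PC, both of shape (1, H, W).
    assert p_bw.shape == p_pc.shape
    return p_bw + p_pc

def refine(p_me, conv1, conv2):
    # Step f-3): pass P_i^ME through two convolution layers in sequence;
    # conv1 and conv2 are assumed callables standing in for the layers.
    return conv2(conv1(p_me))
```

Element-wise addition keeps the fused map the same shape as each branch, so the two refinement convolutions can produce P_i^ED at the full 1×H×W resolution.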
9. The boundary and center point feature based assisted nuclear segmentation method of claim 8, wherein: in step g), the PC module is trained according to the loss function L1 using an Adam optimizer to obtain an optimized PC module; the PE module is trained according to the loss function Loss_edges using an Adam optimizer to obtain an optimized PE module; the main-module segmentation framework is trained according to the loss function L3 using an Adam optimizer to obtain an optimized main-module segmentation framework; and the optimized PC module, the optimized PE module and the optimized main-module segmentation framework form the optimized segmentation network.
10. The boundary and center point feature based assisted nuclear segmentation method of claim 8 wherein step h) comprises the steps of:
h-1) inputting the ith cell nucleus image T_i in the test set T into the PC module of the optimized segmentation network, and outputting the cell center point segmentation map P_i^PC′;
h-2) inputting the ith cell nucleus image T_i in the test set T into the PE module of the optimized segmentation network, and outputting the cell boundary segmentation map P_i^BW′;
h-3) inputting the ith cell nucleus image T_i in the test set T into the main-module segmentation framework of the optimized segmentation network, and outputting the cell nucleus segmentation map P_i^ED′.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311329962.7A CN117291941A (en) | 2023-10-16 | 2023-10-16 | Cell nucleus segmentation method based on boundary and central point feature assistance |
Publications (1)
Publication Number | Publication Date |
---|---|
CN117291941A true CN117291941A (en) | 2023-12-26 |
Family
ID=89256998
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311329962.7A Pending CN117291941A (en) | 2023-10-16 | 2023-10-16 | Cell nucleus segmentation method based on boundary and central point feature assistance |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117291941A (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20190065817A1 (en) * | 2017-08-29 | 2019-02-28 | Konica Minolta Laboratory U.S.A., Inc. | Method and system for detection and classification of cells using convolutional neural networks |
CN114612664A (en) * | 2022-03-14 | 2022-06-10 | 哈尔滨理工大学 | Cell nucleus segmentation method based on bilateral segmentation network |
CN115035295A (en) * | 2022-06-15 | 2022-09-09 | 湖北工业大学 | Remote sensing image semantic segmentation method based on shared convolution kernel and boundary loss function |
CN116433704A (en) * | 2022-12-29 | 2023-07-14 | 鹏城实验室 | Cell nucleus segmentation method based on central point and related equipment |
Non-Patent Citations (1)
Title |
---|
壹抹尘埃: "Using OpenCV to identify irregular shape contours and find their center point and angle", Retrieved from the Internet <URL:https://blog.csdn.net/weixin_44789544/article/details/104636477> *
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||