CN114119525A - Method and system for segmenting cell medical image - Google Patents

Method and system for segmenting cell medical image Download PDF

Info

Publication number
CN114119525A
Authority
CN
China
Prior art keywords
cell
feature
medical image
network
map
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111373245.5A
Other languages
Chinese (zh)
Inventor
文静
杨妍
王翊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing University
Original Assignee
Chongqing University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing University filed Critical Chongqing University
Priority to CN202111373245.5A
Publication of CN114119525A
Pending legal-status Critical Current

Classifications

    • G PHYSICS
      • G06 COMPUTING; CALCULATING OR COUNTING
        • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
          • G06T 7/00 Image analysis
            • G06T 7/0002 Inspection of images, e.g. flaw detection
              • G06T 7/0012 Biomedical image inspection
            • G06T 7/10 Segmentation; Edge detection
              • G06T 7/11 Region-based segmentation
          • G06T 2207/00 Indexing scheme for image analysis or image enhancement
            • G06T 2207/20 Special algorithmic details
              • G06T 2207/20081 Training; Learning
            • G06T 2207/30 Subject of image; Context of image processing
              • G06T 2207/30004 Biomedical image processing
                • G06T 2207/30024 Cell structures in vitro; Tissue sections in vitro
        • G06F ELECTRIC DIGITAL DATA PROCESSING
          • G06F 18/00 Pattern recognition
            • G06F 18/20 Analysing
              • G06F 18/24 Classification techniques
                • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
        • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
          • G06N 3/00 Computing arrangements based on biological models
            • G06N 3/02 Neural networks
              • G06N 3/04 Architecture, e.g. interconnection topology
                • G06N 3/045 Combinations of networks
                • G06N 3/048 Activation functions
              • G06N 3/08 Learning methods

Abstract

The invention discloses a method for segmenting cell medical images, which belongs to the technical field of medical image analysis and specifically comprises the following steps: the original cell image is taken as the input of a pre-trained cell medical image segmentation neural network model, which performs feature enhancement and feature reuse on the cell nuclei of the image to obtain a salient multi-semantic feature map; pooling-pyramid down-sampling is performed on the feature map to obtain multi-semantic feature maps, which are up-sampled back to the original image size and spliced to obtain a segmentation result map; the segmentation result map is superimposed on the original cell image and output by the model. The invention can segment cell images while making the features of the segmented regions prominent.

Description

Method and system for segmenting cell medical image
Technical Field
The invention relates to a method and a system for segmenting a cell medical image, and belongs to the technical field of medical image analysis.
Background
Nuclear segmentation is an important and challenging step in computer-aided diagnosis of various diseased cells. The reasons are: 1) factors such as the irregular shape of cells in a smear and the uneven distribution of chromatin make it difficult to segment cell nuclei accurately; 2) hard samples with small nuclei exist, and small nuclei are difficult to segment accurately; 3) because semantic segmentation classifies pixel by pixel, pixels inside the nuclear region may be classified as cytoplasm or background in the segmentation result. Classical neural networks with similar encoder-decoder architectures include the Fully Convolutional Network (FCN), U-Net, UNet++, the DeepLab series, and the like. In recent years many methods have been proposed for nuclear segmentation, such as traditional methods based on level sets and watersheds, and machine-learning methods based on clustering, unsupervised classification and shape modeling. Most algorithms segment using only the spatial-domain information of the cell image, and segmentation accuracy is poor in regions where the transition between nucleus and cytoplasm is not obvious; a few algorithms use simple prior knowledge such as nuclear shape, but the segmentation is still not accurate enough. Deep convolutional neural networks have achieved great success in biomedical image segmentation, such as segmenting organs or lesions in magnetic resonance (MR) images, or cells and tumors in pathological images. Owing to its strong feature-extraction capability, deep learning has great advantages in medical image processing tasks and has become the mainstream approach. Current deep learning methods often combine high-level and low-level feature information, where low-level features contain more positional information and less semantic information, while high-level features contain more semantic and less positional information. However, current methods do not pay attention to the context dependence of global information and do not aggregate context information over different regions to mine global context; as a result, a large number of pixels may be misclassified, confusable categories are hard to distinguish, information of small-nucleus samples is hard to mine, the features of segmented regions are hard to highlight, and part of the segmentation noise is hard to suppress.
Disclosure of Invention
The present invention aims to overcome the above deficiencies in the prior art and provides a method and a system for segmenting cell medical images that can segment cell images while making the features of the segmented regions prominent.
In order to achieve the purpose, the technical scheme adopted by the invention is as follows:
In a first aspect, an embodiment of the present invention provides a method for segmenting a cell medical image, comprising the following steps: the original cell image is used as the input of a pre-trained cell medical image segmentation neural network model, which sequentially performs feature enhancement and feature reuse on the background, cytoplasm, large nuclei and small nuclei of the image to obtain a feature map containing multiple semantics; the cell medical image segmentation neural network model performs feature enhancement on the original cell image based on the importance of neurons; the feature map containing multiple semantics is down-sampled and up-sampled, each up-sampled feature map is segmented to obtain a segmentation result map, and the segmentation result map is superimposed on the original cell image and output by the model; the training samples of the pre-trained cell medical image segmentation neural network model are single-cell images annotated with background, cytoplasm and nucleus.
Further, the importance of a neuron is calculated by formula (1):

E = 1 / e_t*,   with   e_t* = 4(σ̂² + λ) / ((t - μ̂)² + 2σ̂² + 2λ)    (1)

where μ̂ = (1/(M-1)) Σ_{i=1}^{M-1} x_i and σ̂² = (1/(M-1)) Σ_{i=1}^{M-1} (x_i - μ̂)²; E is the importance of the neuron, e_t* is the minimal energy function, μ̂ and σ̂² are respectively the mean and variance of all neurons in a channel except the target neuron, t is the target neuron of each channel, λ is the only hyper-parameter, M is the number of energy functions (one per neuron) of each channel, and x_i denotes the neurons of each channel other than the target neuron.
Further, the feature enhancement processing is performed using formula (2):

X̃ = sigmoid(E) ⊙ X    (2)

where X̃ is the feature-enhanced feature map, sigmoid(·) is an activation function, X is the input feature map, and ⊙ denotes element-wise multiplication.
Further, feature reuse is applied to the feature-enhanced feature map using formula (3):

x_l = H_l([x_0, x_1, …, x_{l-1}])    (3)

where x_l is the feature map of the l-th layer, H_l(·) denotes the network transformation, [·] denotes concatenation of feature maps, and x_0, …, x_{l-1} are the feature maps of the preceding layers up to layer l-1.
Further, the training of the cell medical image segmentation neural network model comprises the following steps: using single-cell images from the Herlev Pap smear dataset as the training set; computing the channel-wise mean over the whole training set and subtracting it from each individual training image; inputting the training set and training with a weighted cross-entropy loss function over the four classes background, cytoplasm, large nucleus and small nucleus.
Further, the weighted cross-entropy loss function is calculated by formula (4):

L_total = L_main + β · L_aux,   L_main = -Σ_{j=1}^{c} α_j y_j log(ŷ_j),   L_aux = -Σ_{j=1}^{c} α_j y_j log(ŷ′_j)    (4)

where L_main and L_aux respectively denote the main and auxiliary loss functions, β is the loss-weighting balance parameter, y is the labeling result, ŷ is the final prediction result, ŷ′ is the prediction result of the auxiliary branch, c = 4, and α_j is the weighting factor of class j.
In a second aspect, an embodiment of the present invention provides a system for segmenting a cell medical image, comprising a backbone network, a pyramid pooling module and a loss layer. The backbone network performs 3D feature enhancement, extraction and reuse on the sample feature map and provides rich semantic feature information to the pyramid pooling module; the pyramid pooling module fuses the contextual semantic feature information of the feature maps, and after encoding and decoding at each scale the feature maps are superimposed on the original image to obtain the final cell segmentation map; the loss layer is used for learning and optimization of the backbone network and the pyramid pooling module. The backbone network consists of densely connected feature extraction sub-network blocks alternating with transition layers, and each feature extraction sub-network block contains an attention mechanism layer.
Preferably, the loss layers include an auxiliary loss layer and a main loss layer, and weighted cross-entropy loss functions over the four classes background, cytoplasm, large nucleus and small nucleus are used as the loss functions.
Preferably, the backbone network consists of four densely connected network blocks, with transition layers composed of convolutional layers between the densely connected network blocks to control the number of channels; each densely connected network block is formed by connecting six feature extraction sub-network blocks, and the input of each feature extraction sub-network block is the output of all feature extraction sub-network blocks preceding it; the pyramid pooling module comprises three scales.
Preferably, the first transition layer of the backbone network uses a 2×2 convolutional layer to aggregate features instead of downsampling directly, so that the network learns suitable convolution kernel parameters through back propagation.
In the method, the importance of neurons is introduced to perform feature enhancement on the input feature map, which improves the saliency of the target region. Meanwhile, the invention performs down- and up-sampling after the nuclear features of the image have been enhanced and reused, effectively avoiding the problem of pixels inside the nuclear region being classified as cytoplasm or background, and thereby obtaining a cell segmentation map with a clear and prominent target region.
The backbone network is formed by densely connecting the feature extraction sub-network blocks, and each feature extraction sub-network block contains an attention mechanism layer, which makes the nuclear features of the image more prominent. Meanwhile, a three-level pyramid pooling module is used to aggregate context information, which reduces parameters and accelerates convergence and inference without hurting accuracy, and effectively alleviates the problem of pixels inside the nuclear region being classified as cytoplasm or background.
Drawings
Fig. 1 is a flowchart of the method for segmenting a cell medical image according to Embodiment 1 of the present invention;
Fig. 2 is a schematic structural diagram of the system for cell medical image segmentation according to Embodiment 2 of the present invention;
Fig. 3 is a graph of the loss during network training and the IoU trend during testing of the present invention;
Fig. 4 is a diagram illustrating segmentation results of the present invention;
Fig. 5 shows attention heat maps of the present invention.
Detailed Description
For a better understanding of the nature of the invention, its description is further set forth below in connection with the specific embodiments and the drawings.
Example 1
The invention provides a method for segmenting a cell medical image, which comprises the following steps:
s1, sample pretreatment.
The invention uses the publicly available Herlev cervical Pap smear dataset, which contains 917 single-cell images annotated with four classes: background (red), cytoplasm edge (grey), cytoplasm (dark blue) and nucleus (light blue); in this scheme the cytoplasm edge is merged into the background class, so only three classes are finally segmented. The sample size in this embodiment is set to 256 × 256 × 3, and all samples are resized to 256 × 256 × 3 by nearest-neighbour sampling. To eliminate differences between samples and improve the convergence speed of the model, the channel-wise mean of the whole training set is first computed and subtracted from each image during preprocessing. Because the irrelevant regions are roughly uniformly distributed across the training set, this subtraction suppresses them, highlights the relevant regions and provides a preliminary segmentation, accelerating model convergence and feature extraction; the invention therefore subtracts the channel-wise mean from each image element by element as a preprocessing operation.
The training and test sets are split in the proportion 8:2, with 733 images randomly selected for training and 184 for testing.
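The preprocessing just described (nearest-neighbour resizing to 256 × 256 and subtraction of the channel-wise mean of the whole training set) can be sketched as follows. This is an illustrative sketch only, not the authors' code; the function name and the use of OpenCV for resizing are assumptions.

```python
import numpy as np
import cv2  # OpenCV, assumed here for nearest-neighbour resizing

def preprocess(images):
    """Resize samples to 256x256x3 and subtract the per-channel training-set mean (sketch)."""
    resized = np.stack([
        cv2.resize(img, (256, 256), interpolation=cv2.INTER_NEAREST)
        for img in images
    ]).astype(np.float32)
    channel_mean = resized.mean(axis=(0, 1, 2))   # one mean per colour channel over the whole set
    return resized - channel_mean, channel_mean   # keep the mean so it can be reused at test time
```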
S2. Construct the network model for cell medical image segmentation.
The invention uses the PyTorch deep learning library to build the CNN network.
S21. Obtain the neuron parameters with the attention mechanism applied.
An energy function with a closed-form solution is solved to obtain the weights of the attention mechanism module, and the neuron parameters are multiplied by these weights to obtain the attention-weighted neuron parameters. The minimal energy function e_t* obtained from the solution is:

e_t* = 4(σ̂² + λ) / ((t - μ̂)² + 2σ̂² + 2λ)

where μ̂ = (1/(M-1)) Σ_{i=1}^{M-1} x_i and σ̂² = (1/(M-1)) Σ_{i=1}^{M-1} (x_i - μ̂)² are respectively the mean and variance of all neurons in a channel except the target neuron, t is the target neuron of each channel, λ is the only hyper-parameter, M is the number of energy functions (one per neuron) of each channel, and x_i denotes the neurons of each channel other than the target neuron.
The significance E of the neuron is then obtained according to equation (1):

E = 1 / e_t*    (1)
s22, performing feature enhancement processing on the input feature graph, and extracting channels, spaces and pixel-by-pixel 3D enhancement features:
Figure BDA0003363074110000076
wherein the content of the first and second substances,
Figure BDA0003363074110000077
for the feature map subjected to feature enhancement, sigmoid () is an activation function to prevent an E value from being too large, and X is an input feature map.
The characteristic diagram is obtained by extracting the characteristics of the sample image after pretreatment.
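A minimal PyTorch sketch of the parameter-free attention used in S21 and S22 (the energy of the preceding formulas and the enhancement of equation (2), i.e. SimAM) is given below. The module name, the (N, C, H, W) tensor layout and the constant 0.5 term of the standard SimAM formulation are assumptions for illustration.

```python
import torch
import torch.nn as nn

class SimAM(nn.Module):
    """Parameter-free 3D attention: scale each neuron by sigmoid(E), with E derived from e_t* (sketch)."""
    def __init__(self, lam: float = 1e-4):
        super().__init__()
        self.lam = lam  # the only hyper-parameter, lambda

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n = x.shape[2] * x.shape[3] - 1                     # number of neurons per channel excluding the target
        d = (x - x.mean(dim=(2, 3), keepdim=True)).pow(2)   # (t - mu)^2 at every position
        v = d.sum(dim=(2, 3), keepdim=True) / n             # channel variance estimate
        e_inv = d / (4 * (v + self.lam)) + 0.5              # importance E = 1/e_t* (standard SimAM form)
        return x * torch.sigmoid(e_inv)                     # feature enhancement of equation (2)
```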
S23. Reuse and further extract features from the feature-enhanced feature map X̃ of S22 to obtain a semantically rich feature map:

x_l = H_l([x_0, x_1, …, x_{l-1}])    (3)

where x_l is the feature map of the l-th layer, H_l(·) denotes the network transformation, [·] denotes concatenation of feature maps, and x_0, …, x_{l-1} are the feature maps of the preceding layers up to layer l-1.
S3. The semantically rich feature map obtained in S23 is down-sampled at different scales and then up-sampled to obtain multi-scale feature maps; each up-sampled feature map is segmented, and each segmentation result map is overlaid on the original cell image using edges extracted with the Canny operator, giving the output.
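The overlay step of S3 (extracting edges from a segmentation result with the Canny operator and drawing them on the original cell image) might look like the following OpenCV sketch; the Canny thresholds and the contour colour are assumptions.

```python
import cv2
import numpy as np

def overlay_segmentation(original_bgr: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Draw Canny edges of a binary segmentation mask onto the original image (sketch)."""
    edges = cv2.Canny(mask.astype(np.uint8) * 255, 100, 200)  # edge map of the predicted region
    overlay = original_bgr.copy()
    overlay[edges > 0] = (0, 0, 255)                           # mark contour pixels (assumed colour)
    return overlay
```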
S4. Model training.
Training is performed with the training set using a mini-batch scheme. The batch size is set to 16 and λ = 1e-4; the constructed cell medical image segmentation neural network model is loaded and the network parameters are initialized. The Adam optimizer is used, the initial learning rate is set to 3e-3, and an exponential learning-rate decay strategy with a decay rate of 0.95 is applied during training.
To address the difficulty of accurately segmenting small nuclei, the method of the invention: 1) uses a weighted cross-entropy loss function to strengthen the training of the network on small nuclei; the labels of small nuclei are distinguished from those of large nuclei during training, i.e. small and large nuclei are segmented as different classes, giving four classes in total (background, cytoplasm, large nucleus and small nucleus) with weights α1, α2, α3, α4; 2) in each mini-batch, the number of training iterations on small-nucleus samples is doubled relative to the other classes, which increases the contribution of small nuclei to the optimization of the network and strengthens the mining of hard samples.
The main loss function L_main is obtained from equation (4):

L_main = -Σ_{j=1}^{c} α_j y_j log(ŷ_j)    (4)

where y is the labeling result, ŷ is the final prediction result, c = 4, and α_j is the weighting factor of class j.
The auxiliary loss function L_aux is obtained from equation (5):

L_aux = -Σ_{j=1}^{c} α_j y_j log(ŷ′_j)    (5)

where ŷ′ is the prediction result of the auxiliary branch.
The weighted cross-entropy loss function L_total is then obtained from equation (6):

L_total = L_main + β · L_aux    (6)
the four classifications of background, cytoplasm, large nucleus and small nucleus are weighted by alpha1=0.1,α2=0.1,α3=0.5,α4The optimum is 1.
In the present embodiment, β is set to 0.4.
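The training configuration of S4 (mini-batch size 16, Adam with exponential learning-rate decay of 0.95, and the weighted cross-entropy of equations (4) to (6) with class weights α = (0.1, 0.1, 0.5, 1) and β = 0.4) can be sketched as follows. The model interface returning a main and an auxiliary output, and the exact initial learning rate, are assumptions.

```python
import torch
import torch.nn as nn

# Class weights for background, cytoplasm, large nucleus, small nucleus (values from the text)
alpha = torch.tensor([0.1, 0.1, 0.5, 1.0])
beta = 0.4

main_ce = nn.CrossEntropyLoss(weight=alpha)
aux_ce = nn.CrossEntropyLoss(weight=alpha)

def total_loss(main_logits, aux_logits, target):
    """L_total = L_main + beta * L_aux, equations (4)-(6) (sketch)."""
    return main_ce(main_logits, target) + beta * aux_ce(aux_logits, target)

# Assumed training skeleton (a model with main and auxiliary heads is hypothetical):
# model = CellSegNet()
# optimizer = torch.optim.Adam(model.parameters(), lr=3e-3)  # initial learning rate partly garbled in the source
# scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.95)
# for images, labels in train_loader:
#     main_out, aux_out = model(images)
#     loss = total_loss(main_out, aux_out, labels)
#     optimizer.zero_grad(); loss.backward(); optimizer.step()
# scheduler.step()  # once per epoch
```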
The segmentation performance of the method is evaluated with the mIoU metric; the results are given in Table 1. With identical preprocessing of the dataset samples, the proposed method achieves a better segmentation result than the baseline U-Net model, exceeding U-Net in mIoU by nearly 4 percentage points. The loss and mIoU trends for each round of training and testing are shown in Fig. 3. The segmentation results of the model are shown in Fig. 4: three sample images were selected for testing, the Canny operator was used in post-processing to extract edges from the segmentation results, and the contour lines in the rightmost segmentation overlays essentially coincide with the predictions of the network model. The attention heat maps before and after introducing the attention mechanism are shown in Fig. 5, which visualizes the features before and after adding SimAM using Grad-CAM: A is the original cell image, B is the heat map before adding the attention mechanism, and C is the heat map after adding it. The features extracted with SimAM focus better on the main target, verifying that the attention mechanism strengthens the segmented regions and suppresses noise. The segmentation results are shown in Table 1:
TABLE 1 Segmentation results
Method | mIoU | Cytoplasm IoU | Nucleus IoU
U-Net control experiment | 75.207% | 70.367% | 79.701%
Segmentation result | 79.262% | 74.021% | 87.290%
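For reference, a per-class IoU and mIoU computation of the kind summarised in Table 1 could be sketched as follows; this is illustrative and not the authors' evaluation code.

```python
import numpy as np

def iou_per_class(pred: np.ndarray, gt: np.ndarray, num_classes: int = 3):
    """Intersection-over-Union per class and the mean IoU (sketch)."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        ious.append(inter / union if union > 0 else float("nan"))
    return ious, np.nanmean(ious)
```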
Example 2
The network model for cell medical image segmentation is suitable for segmenting various kinds of cells and pathological cell nuclei. To demonstrate the generalization of the invention to other pathological cell datasets, the breast cancer pathology dataset TNBC is introduced and the established network model for cell medical image segmentation is retrained. The construction of the network model and the training method are the same as in Embodiment 1.
Verification of the segmentation results of Embodiment 2: 50 WSI pathology images with a resolution of 512×512 from 11 patients were preprocessed with a sliding window to obtain 256×256 images, which were used as the input of the trained network model for cell medical image segmentation; a single sampled image contains multiple cells. The segmentation results obtained are shown in Table 2:
TABLE 2
Method | Cell IoU
Segmentation result | 80.975%
As can be seen from Table 2, the network model of the present invention is also suitable for the segmentation of pathological cells of breast cancer.
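The sliding-window preprocessing used in this embodiment (cutting 512 × 512 pathology images into 256 × 256 model inputs) might look like the sketch below; the stride and the non-overlapping windows are assumptions.

```python
import numpy as np

def sliding_window(image: np.ndarray, size: int = 256, stride: int = 256):
    """Cut a large pathology image into size x size patches (sketch; stride is assumed)."""
    patches = []
    h, w = image.shape[:2]
    for y in range(0, h - size + 1, stride):
        for x in range(0, w - size + 1, stride):
            patches.append(image[y:y + size, x:x + size])
    return patches
```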
Example 3
The invention also provides a system for segmenting the cell medical image, which comprises a backbone network and a pyramid pooling module as shown in fig. 2.
The backbone network is used for feature enhancement of the feature map and performs feature extraction and reuse on the enhanced feature map, providing rich feature information to the pyramid pooling module. The backbone network consists of four densely connected network blocks (Dense CSS Blocks), each composed of six feature extraction sub-network blocks (CSS Blocks). Dense connection means that the input of each feature extraction sub-network block is the output of all feature extraction sub-network blocks preceding it. Between the four densely connected network blocks, 1×1 convolutions reduce the dimensionality. In particular, unlike the average pooling used by the PSPNet model, a 2×2 convolutional layer is used between the first two densely connected network blocks to aggregate features instead of downsampling directly, so that the network can learn suitable convolution kernel parameters through back propagation; this modification improves the final segmentation to a certain extent (a sketch of such a transition layer is given below).
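A hedged sketch of such a transition layer follows, with a 1 × 1 convolution to control the channel count followed either by the learnable 2 × 2 strided convolution described above or by plain average pooling; the exact channel counts are assumptions.

```python
import torch.nn as nn

def transition(in_ch: int, out_ch: int, learnable_downsample: bool = True) -> nn.Sequential:
    """Transition between dense blocks: 1x1 conv for channels, then 2x downsampling (sketch)."""
    down = (nn.Conv2d(out_ch, out_ch, kernel_size=2, stride=2)   # learnable 2x2 aggregation
            if learnable_downsample
            else nn.AvgPool2d(kernel_size=2, stride=2))          # plain pooling alternative
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
        down,
    )
```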
The feature extraction sub-network block is used for mining and extracting features of the salient nuclear regions of the cell medical image. It comprises two convolutional layers and an attention mechanism layer, with a Batch Norm layer and a ReLU layer following each convolutional layer by default; it introduces channel-wise, spatial and pixel-wise 3D context information without introducing extra network parameters, strengthening the segmented regions and suppressing noise. The convolution kernels of the two convolutional layers of the feature extraction sub-network block are 1×1 and 3×3 respectively.
The system for segmenting the cell medical image further comprises an auxiliary loss layer and a main loss layer, wherein the auxiliary loss function and the main loss function are respectively adopted. The main penalty is the primary optimization direction and the auxiliary penalty helps to optimize the learning process. The model is propagated backwards after calculating the loss function, thereby optimizing the model parameters. And the optimal model parameters are calculated through forward propagation to obtain the optimal segmentation result.
The pyramid pooling module is used to fuse the contextual semantic feature information of the feature maps; the feature maps produced by the backbone network are encoded and decoded at each scale, then spliced and segmented to obtain the final cell segmentation map. For the cell segmentation task, unlike the prior-art pyramid pooling module with four scales (1, 2, 3 and 6), the invention uses only three scales (2, 3 and 6): the feature map is down-sampled to the three sizes 2×2×1, 3×3×1 and 6×6×1 and channel-spliced with the encoder feature map, finally obtaining a large receptive field, contextual semantic information and global semantic information.
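A three-scale pyramid pooling module consistent with this description (pooled sizes 2, 3 and 6, each branch reduced by a 1 × 1 convolution, up-sampled and channel-spliced with the input feature map) can be sketched as follows; the branch channel count is an assumption.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PyramidPooling(nn.Module):
    """Three-scale pyramid pooling (2x2, 3x3, 6x6) with channel concatenation (sketch)."""
    def __init__(self, in_ch: int, branch_ch: int = 128, scales=(2, 3, 6)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.AdaptiveAvgPool2d(s),                      # pool the feature map to s x s
                nn.Conv2d(in_ch, branch_ch, 1, bias=False),
                nn.BatchNorm2d(branch_ch),
                nn.ReLU(inplace=True),
            )
            for s in scales
        ])

    def forward(self, x):
        h, w = x.shape[2:]
        outs = [x] + [
            F.interpolate(b(x), size=(h, w), mode="bilinear", align_corners=False)
            for b in self.branches
        ]
        return torch.cat(outs, dim=1)  # splice context features with the original feature map
```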
The pyramid pooling module adopted by the invention reduces network parameters and accelerates computation without affecting the model's accuracy. Acquiring rich context reduces the possibility of cytoplasm or background appearing inside the nuclear region of the segmentation result; the fusion of multi-scale semantic information yields features rich in high-level semantic and low-level positional information and containing rich contextual semantics, which reduces pixel misclassification, helps distinguish confusable categories, mines small-nucleus sample information and improves segmentation accuracy.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The present invention is not limited to the above embodiments, and any modifications, equivalent replacements, improvements, etc. made within the spirit and principle of the present invention are included in the scope of the claims of the present invention which are filed as the application.

Claims (10)

1. A method of cellular medical image segmentation, comprising the steps of:
taking the original cell image as the input of a pre-trained cell medical image segmentation neural network model, and performing, by the cell medical image segmentation neural network model, feature enhancement and feature reuse on the cell nuclei of the image to obtain a feature map containing multiple semantics; wherein the cell medical image segmentation neural network model performs feature enhancement on the original cell image based on the importance of neurons; after the feature map containing multiple semantics is down-sampled and up-sampled, each up-sampled feature map is segmented to obtain a segmentation result map, and the segmentation result map is superimposed on the original cell image and then output by the model;
the pre-trained cell medical image segmentation neural network model training sample is a single cell image marked with background, cytoplasm and cell nucleus.
2. The method of cytological medical image segmentation according to claim 1, wherein: the importance of the neuron is calculated by formula (1):

E = 1 / e_t*,   with   e_t* = 4(σ̂² + λ) / ((t - μ̂)² + 2σ̂² + 2λ)    (1)

where E is the importance of the neuron, e_t* is the minimal energy function, μ̂ and σ̂² are respectively the mean and variance of all neurons in a channel except the target neuron, t is the target neuron of each channel, λ is the only hyper-parameter, M is the number of energy functions of each channel, and x_i denotes the neurons of each channel other than the target neuron.
3. The method of cytological medical image segmentation according to claim 1, wherein: the feature enhancement processing is performed using formula (2):

X̃ = sigmoid(E) ⊙ X    (2)

where X̃ is the feature-enhanced feature map, sigmoid(·) is an activation function, and X is the input feature map.
4. The method of cytological medical image segmentation according to claim 1, wherein: feature reuse is performed using formula (3):

x_l = H_l([x_0, x_1, …, x_{l-1}])    (3)

where x_l is the feature map of the l-th layer, H_l(·) denotes the network transformation, [·] denotes concatenation of feature maps, and x_0, …, x_{l-1} are the feature maps of the preceding layers.
5. The method of cytological medical image segmentation according to claim 1, wherein: the training of the cell medical image segmentation neural network model comprises the following steps:
using single-cell images from the Herlev Pap smear dataset as the training set;
calculating the channel-by-channel mean value of the whole training set image, and performing mean value subtraction operation on a single image in the training set;
inputting a training set, and training by adopting four types of weighted cross entropy loss functions of background, cytoplasm, large cell nucleus and small cell nucleus.
6. The method of cytological medical image segmentation according to claim 5, wherein: the weighted cross-entropy loss function is calculated by formula (4):

L_total = L_main + β · L_aux,   L_main = -Σ_{j=1}^{c} α_j y_j log(ŷ_j),   L_aux = -Σ_{j=1}^{c} α_j y_j log(ŷ′_j)    (4)

where L_main and L_aux respectively denote the main and auxiliary loss functions, β is the loss-weighting balance parameter, y is the labeling result, ŷ is the final prediction result, ŷ′ is the prediction result of the auxiliary branch, c = 4, and α_j is the weighting factor of the class.
7. A system for cellular medical image segmentation, characterized by: the system comprises a backbone network and a pyramid pooling module; the backbone network is used for enhancing, extracting and reusing the 3D enhanced features of the sample feature map and providing rich semantic feature information for the pyramid pooling module;
the pyramid pooling module is used for fusing context semantic feature information of each feature map, and each feature map is encoded and decoded in each scale and then superposed with the original map to obtain a final cell segmentation map;
the loss layer is used for learning optimization of the backbone network and the pyramid pooling module;
the backbone network is composed of dense connection of feature extraction sub-network blocks and alternate transition layers, and the feature extraction sub-network blocks comprise attention mechanism layers.
8. The system for cytological medical image segmentation according to claim 7, wherein: the system further comprises a loss layer, the loss layer includes an auxiliary loss layer and a main loss layer, and weighted cross-entropy loss functions over the four classes background, cytoplasm, large nucleus and small nucleus are used as the loss functions.
9. The system for cytological medical image segmentation according to claim 7, wherein: the backbone network consists of four densely connected network blocks, and transition layers among the densely connected network blocks are composed of convolutional layers to control the number of channels; each dense connection network block is formed by connecting six feature extraction sub-network blocks, and the input of each feature extraction sub-network block is the output of all feature extraction sub-network blocks in front of the feature extraction sub-network block; the pyramid pooling module includes three dimensions.
10. The system for cytological medical image segmentation according to claim 9, wherein: the first transition layer of the backbone network uses a 2×2 convolutional layer to aggregate features instead of downsampling directly, so that the network learns suitable convolution kernel parameters through back propagation.
CN202111373245.5A 2021-11-19 2021-11-19 Method and system for segmenting cell medical image Pending CN114119525A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111373245.5A CN114119525A (en) 2021-11-19 2021-11-19 Method and system for segmenting cell medical image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111373245.5A CN114119525A (en) 2021-11-19 2021-11-19 Method and system for segmenting cell medical image

Publications (1)

Publication Number Publication Date
CN114119525A true CN114119525A (en) 2022-03-01

Family

ID=80396396

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111373245.5A Pending CN114119525A (en) 2021-11-19 2021-11-19 Method and system for segmenting cell medical image

Country Status (1)

Country Link
CN (1) CN114119525A (en)


Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115409844A (en) * 2022-11-02 2022-11-29 杭州华得森生物技术有限公司 Circulating tumor cell detection device and method thereof
CN115409844B (en) * 2022-11-02 2023-02-03 杭州华得森生物技术有限公司 Circulating tumor cell detection device and method thereof
CN117523205A (en) * 2024-01-03 2024-02-06 广州锟元方青医疗科技有限公司 Segmentation and identification method for few-sample ki67 multi-category cell nuclei
CN117523205B (en) * 2024-01-03 2024-03-29 广州锟元方青医疗科技有限公司 Segmentation and identification method for few-sample ki67 multi-category cell nuclei
CN117789207A (en) * 2024-02-28 2024-03-29 吉林大学 Intelligent analysis method and system for pathological images of cell tissues based on graph neural network
CN117789207B (en) * 2024-02-28 2024-04-30 吉林大学 Intelligent analysis method and system for pathological images of cell tissues based on graph neural network


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination