WO2021003821A1 - Cell detection method, device and equipment for glomerular pathological slice images - Google Patents
Cell detection method, device and equipment for glomerular pathological slice images
- Publication number
- WO2021003821A1, international application PCT/CN2019/103522 (CN2019103522W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- layer
- image
- neural network
- glomerular
- encoder
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30242—Counting objects in image
Description
- This application relates to the field of biometric identification technology, and in particular to a cell detection method, device and equipment for glomerular pathological slice images.
- The glomerulus is a tuft of capillaries in the nephron that filters blood to produce primary urine.
- After the blood has been filtered by the endothelial cells, mesangial cells and podocytes, the three types of intrinsic cells inside the glomerulus, it passes into the lumen of Bowman's capsule, so the structure and function of the intrinsic cells of the glomerulus are directly related to its filtration effect.
- Different etiologies can damage different intrinsic cells, which manifests clinically as different types of glomerular disease.
- Podocyte damage typically presents as large amounts of proteinuria; endothelial cell damage can significantly affect the progression and repair of glomerular disease, and in severe cases sclerosis replaces the damaged endothelial area; mesangial cell damage can promote the occurrence of glomerulosclerosis. Therefore, being able to quantitatively count the intrinsic cells inside the glomerulus has very important clinical application significance.
- At present, the number of intrinsic cells inside the glomerulus in a glomerular pathological slice image is roughly estimated manually, and the resulting cell count is not accurate enough, which affects the treatment effect.
- In view of this, this application provides a cell detection method, device and equipment for glomerular pathological slice images.
- The main purpose is to solve the technical problem that the number of intrinsic cells inside the glomerulus in a glomerular pathological slice image is currently estimated roughly by hand, so that the resulting cell count is not accurate enough and the treatment effect is affected.
- According to the first aspect of this application, a cell detection method for glomerular pathological slice images is provided, the method comprising:
- acquiring a glomerular pathological slice image to be detected;
- inputting the glomerular pathological slice image to be detected into a preset neural network model for recognition and detection, where the preset neural network model is obtained by training a neural network with a predetermined number of glomerular pathological slice images in which mesangial cells, endothelial cells and podocytes have been identified and labeled;
- outputting probability maps of the mesangial cells, endothelial cells and podocytes in the glomerular pathological slice image to be detected from the three output channels of the preset neural network model;
- calculating the numbers of mesangial cells, endothelial cells and podocytes in the glomerular pathological slice image to be detected according to the probability maps of the mesangial cells, endothelial cells and podocytes.
- a cell detection device for glomerular pathological slice images comprising:
- the acquisition module is used to acquire the glomerular pathological slice image to be detected;
- the detection module is used to input the glomerular pathological slice image to be detected into a preset neural network model for recognition and detection, where the preset neural network model is obtained by training a neural network with a predetermined number of glomerular pathological slice images in which mesangial cells, endothelial cells and podocytes have been identified and labeled;
- the output module is configured to output the probability maps of mesangial cells, endothelial cells and podocytes in the glomerular pathological slice image to be detected from the three output channels of the preset neural network model;
- the calculation module is used to calculate the number of mesangial cells, endothelial cells and podocytes in the pathological slice image of the glomerulus to be detected according to the probability map of the mesangial cells, endothelial cells and podocytes.
- According to the third aspect of this application, a computer device is provided, including a memory and a processor, where the memory stores a computer program, and the processor implements the steps of the cell detection method for glomerular pathological slice images of the first aspect when executing the computer program.
- According to the fourth aspect of this application, a computer storage medium is provided, on which a computer program is stored, and when the computer program is executed by a processor, the steps of cell detection for glomerular pathological slice images of the first aspect are implemented.
- With the above technical solution, the cell detection method, device and equipment for glomerular pathological slice images provided by this application can use a preset neural network model to detect the glomerular pathological slice image to be detected and determine the numbers of mesangial cells, endothelial cells and podocytes in it, so that no manual estimation is required.
- According to the numbers of mesangial cells, endothelial cells and podocytes, the etiology and condition of the patient can be determined better and a treatment plan can be formulated more accurately, so that the patient's kidneys recover as soon as possible; moreover, the method is simple and fast to use.
- FIG. 1 is a flowchart of an embodiment of the cell detection method for glomerular pathological slice images of this application;
- FIG. 2 is an architecture diagram of the U-net neural network of this application;
- FIG. 3 is a structural block diagram of an embodiment of the cell detection device for glomerular pathological slice images of this application;
- FIG. 4 is a schematic diagram of the structure of the computer device of this application.
- The embodiment of this application provides a cell detection method for glomerular pathological slice images, which can automatically detect a glomerular pathological slice image using a preset neural network model obtained through learning and training, and detect the numbers of mesangial cells, endothelial cells and podocytes in the image; a doctor can then judge the patient's condition based on the detection results, and the operation is simple and convenient.
- an embodiment of the present application provides a cell detection method for glomerular pathological slice images, which includes the following steps:
- Step 101 Obtain a pathological section image of the glomerulus to be detected.
- In this step, the glomerular pathological slice image of the patient to be tested can be obtained through ultrasound, X-ray or magnetic resonance imaging.
- The glomerular pathological slice image to be detected may be a color photo, a black-and-white photo or an X-ray film.
- Step 102 Input the glomerular pathological slice image to be detected into a preset neural network model for recognition and detection, where the preset neural network model is obtained by training a neural network with a predetermined number of glomerular pathological slice images in which mesangial cells, endothelial cells and podocytes have been identified and labeled.
- In this step, the glomerular pathological slice image to be detected is first subjected to elastic deformation, and/or color transformation, and/or random flipping, and/or size transformation, and is then input into the preset neural network model for detection.
- Elastic deformation is obtained by displacing and rotating the other pixels of the glomerular pathological slice image to be detected around a certain pixel taken as the center;
- Color transformation refers to converting the glomerular pathological slice image to be detected from the red-green-blue (RGB) representation into a representation based on transparency, hue and saturation;
- Random flipping refers to flipping the glomerular pathological slice image to be detected by an arbitrary angle (for example, 10°, 45°, etc.);
- Size transformation refers to operations such as enlarging, reducing or cropping the glomerular pathological slice image to be detected.
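- As an illustration of this preprocessing, the sketch below builds such an augmentation pipeline with the albumentations library; the library choice and all parameter values (probabilities, shift limits, rotation limit, the 512x512 output size) are assumptions made for the example and are not specified in this application.

```python
import albumentations as A

# Augmentations named in the application: elastic deformation, color transformation,
# random flipping and size transformation (parameter values are illustrative only).
augment = A.Compose([
    A.ElasticTransform(p=0.5),                                      # elastic deformation
    A.HueSaturationValue(hue_shift_limit=10, sat_shift_limit=15,
                         val_shift_limit=10, p=0.5),                # color transformation
    A.HorizontalFlip(p=0.5),
    A.VerticalFlip(p=0.5),
    A.Rotate(limit=45, p=0.5),                                      # random flipping / rotation
    A.RandomScale(scale_limit=0.2, p=0.5),                          # size transformation
    A.Resize(512, 512),                                             # back to the network input size
])

# For training data the cell annotation mask is transformed together with the image:
# out = augment(image=image, mask=mask); image_aug, mask_aug = out["image"], out["mask"]
```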
- Step 103 Output the probability maps of mesangial cells, endothelial cells and podocytes in the glomerular pathological slice image to be detected from the three output channels of the preset neural network model.
- In this step, the detection result output from each of the three output channels of the preset neural network model is an image of the same size as the glomerular pathological slice image to be detected.
- The image includes, for each pixel, the probability that the pixel belongs to the cell type corresponding to the channel outputting the image.
- For example, the image output by the channel used for outputting the mesangial cell detection result includes the probability that each pixel belongs to a mesangial cell.
- Step 104 Calculate the number of mesangial cells, endothelial cells and podocytes in the pathological section image of the glomerulus to be detected according to the probability maps of mesangial cells, endothelial cells and podocytes.
- In this step, the number of mesangial cells can be calculated as the mesangial cell probability indicated in the mesangial cell probability map × the total number of cells in the glomerular pathological slice image to be detected; the number of endothelial cells as the endothelial cell probability indicated in the endothelial cell probability map × the total number of cells in the image; and the number of podocytes as the podocyte probability indicated in the podocyte probability map × the total number of cells in the image.
- Through the above technical solution, the preset neural network model can be used to detect the glomerular pathological slice image to be detected and determine the numbers of mesangial cells, endothelial cells and podocytes in it, so that no manual estimation is required.
- According to the numbers of mesangial cells, endothelial cells and podocytes, the cause and condition of the patient can be determined better and a more accurate treatment plan can be formulated, so that the patient's kidney recovers as soon as possible, and the method is simple and fast to use.
- In a specific embodiment, before step 102, the method further includes:
- Step 1021 Obtain a predetermined number of glomerular pathological slice images as a training set.
- In this step, the glomerular pathological slice images in the training set are collected glomerular pathological slice images of various diagnosed patients and healthy people.
- The predetermined number can be, for example, 500 or 100, and is chosen according to the learning and training needs of the neural network; the larger the predetermined number, the higher the accuracy of the preset neural network model obtained by training.
- Step 1022 Label the mesangial cells, endothelial cells and podocytes in each glomerular pathological slice image in the training set respectively.
- In this step, the contours of the mesangial cells, endothelial cells and podocytes in each glomerular pathological slice image are delineated, each image is divided into several regions, and each region is labeled with its attribute.
- The attribute can be mesangial cell, endothelial cell or podocyte.
- Step 1023 Perform elastic deformation, and/or color transformation, and/or random flipping, and/or size transformation on the labeled glomerular pathological slice images.
- Elastic deformation is obtained by displacing and rotating the other pixels of the glomerular pathological slice image around a certain pixel taken as the center;
- Color transformation refers to converting the glomerular pathological slice image from the red-green-blue (RGB) representation into a representation based on transparency, hue and saturation;
- Random flipping refers to flipping the glomerular pathological slice image by an arbitrary angle (for example, 10°, 45°, etc.);
- Size transformation refers to operations such as enlarging, reducing or cropping the glomerular pathological slice image.
- Step 1024 Input the processed glomerular pathological slice image into the U-net neural network for learning and training to obtain a preset neural network model.
- In this step, the processed glomerular pathological slice images are fed into the input port of the U-net neural network, and the U-net neural network then performs learning and training on them.
- Normalized transmission neurons are formed in the neural network, and three output channels are formed to output the probability maps of the mesangial cells, endothelial cells and podocytes corresponding to the glomerular pathological slice image.
- Through the above technical solution, the U-net neural network is trained on the glomerular pathological slice images in the training set, and a preset neural network model that accurately detects glomerular pathological slice images is obtained.
- In addition, a certain number b of glomerular pathological slice images are collected as a detection set, and each glomerular pathological slice image in the detection set is associated with the probability maps of its mesangial cells, endothelial cells and podocytes. The preset neural network model is then tested on this set: it is judged whether the probability maps of mesangial cells, endothelial cells and podocytes output by the model are the same as the probability maps associated with the images in the detection set, the number of images with identical probability maps is counted as a, and the accuracy of the model is calculated as (a/b)×100%. If the accuracy is below a set threshold (for example, 95%), a predetermined number of training images is selected again and the preset neural network model is trained further; if the accuracy is greater than or equal to the threshold, the model is taken as the final preset neural network model.
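- A minimal sketch of this check, assuming the model and the detection set are available as Python objects, could look as follows; the function and variable names (run_model, detection_set, the cell-type keys) are illustrative and not taken from the application.

```python
import numpy as np

CELL_TYPES = ("mesangial", "endothelial", "podocyte")

def detection_set_accuracy(run_model, detection_set, match_threshold=0.95):
    """run_model(image) -> dict of probability maps; detection_set yields
    (image, associated_maps) pairs. Counts images whose output maps match the
    associated maps and checks whether the accuracy a/b reaches the threshold."""
    a, b = 0, len(detection_set)
    for image, associated_maps in detection_set:
        predicted = run_model(image)
        same = all(np.allclose(predicted[c], associated_maps[c], atol=1e-3)
                   for c in CELL_TYPES)
        a += int(same)
    accuracy = a / b
    return accuracy >= match_threshold   # if False, continue training on a new training set
```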
- the U-net neural network includes a U-net module, a feature pyramid network, and a variational autoencoder.
- Step 1023 specifically includes:
- Step 10231 Input the processed glomerular pathological slice image into the U-net module of the U-net neural network for feature image extraction.
- Step 10232 Input the feature image into the feature pyramid network of the U-net neural network to perform segmentation processing and output a probability map of segmentation results of glomerular edges, mesangial cells, endothelial cells, and podocytes.
- Step 10233 Input the feature image into the variational autoencoder for image reconstruction, calculate a loss function according to the difference between the reconstructed image and the original glomerular pathological slice image, and adjust the U-net neural network according to the loss function to complete the learning and training of the glomerular pathological slice image.
- Step 10234 Repeat the learning and training process until all the glomerular pathological slice images in the training set have been learned and trained, and remove the variational autoencoder from the trained U-net neural network to obtain the preset neural network model.
- In the above technical solution, the U-net module, the feature pyramid network and the variational autoencoder are constructed in advance for the U-net neural network, and the U-net neural network is then trained with the training set through the above scheme, so that the resulting preset neural network can detect and recognize glomerular pathological slice images better and the detection accuracy is greatly improved.
- During detection with the preset neural network model, the variational autoencoder branch can be removed: it is only used in the training process and has no effect on the segmentation result of the glomerular pathological slice image during detection.
- In a specific embodiment, step 10231 specifically includes:
- Step 102311 Input the processed glomerular pathological slice image into the encoder of the U-net module; after convolution processing in the first layer of the encoder, one ResBlock and one down-sampling operation, the first-layer output feature map of the encoder is generated.
- Step 102312 Input the first layer output feature map of the encoder into the second layer of the encoder, and generate the second layer output feature map of the encoder after two ResBlocks and one downsampling process.
- Step 102313 Input the second layer output feature map of the encoder into the third layer of the encoder, and generate the third layer output feature map of the encoder after two ResBlocks and one downsampling process.
- Step 102314 Input the third layer output feature map of the encoder into the fourth layer of the encoder, and generate the fourth layer output feature map of the encoder after four ResBlock processing.
- Step 102315 Input the fourth-layer output feature map of the encoder into the third layer of the decoder; after one up-sampling operation and concatenation with the third-layer output feature map of the encoder through a skip connection, the third-layer feature map output of the decoder is formed.
- Step 102316 Input the third-layer feature map of the decoder into the second layer of the decoder; after one up-sampling operation and concatenation with the second-layer output feature map of the encoder through a skip connection, the second-layer feature map output of the decoder is formed.
- Step 102317 Input the second-layer feature map of the decoder into the first layer of the decoder; after one up-sampling operation and concatenation with the first-layer output feature map of the encoder through a skip connection, the first-layer feature map output of the decoder is formed.
- Through the above scheme, the encoder of the U-net module is used to extract cell feature images from the glomerular pathological slice image, including color, shape, size, position, texture and so on.
- Through hierarchical convolution and down-sampling operations, the encoder can extract lower-resolution global information of the entire image, which provides the context of the cells in the whole glomerular pathological slice image; this can be understood as features of the dependence between the cells and their surrounding environment, and such features help to judge the cell type.
- The skip connections at the same level of the encoder and the decoder can fuse the fine position information of the encoder with the richly expressed semantic information of the decoder, thereby extracting more expressive features for the segmentation task.
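- The following PyTorch sketch is one possible reading of steps 102311-102317 and of the channel sizes given in the detailed embodiment below (3*512*512 input, 32/64/128/256 channels). The internal layout of a ResBlock, the use of strided convolutions for down-sampling and of transposed convolutions for up-sampling are assumptions; the application does not fix these details.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResBlock(nn.Module):
    """Minimal residual block; the application names ResBlocks without giving their internals."""
    def __init__(self, ch):
        super().__init__()
        self.conv1 = nn.Conv2d(ch, ch, 3, padding=1)
        self.conv2 = nn.Conv2d(ch, ch, 3, padding=1)

    def forward(self, x):
        return x + self.conv2(F.relu(self.conv1(F.relu(x))))

def down(cin, cout):
    # strided convolution as the down-sampling operation (an assumption)
    return nn.Conv2d(cin, cout, 3, stride=2, padding=1)

class UNetModule(nn.Module):
    """Encoder/decoder with skip connections, following steps 102311-102317."""
    def __init__(self):
        super().__init__()
        self.init_conv = nn.Conv2d(3, 32, 3, padding=1)                 # 32*512*512
        self.enc1, self.down1 = ResBlock(32), down(32, 64)              # -> 64*256*256
        self.enc2, self.down2 = nn.Sequential(ResBlock(64), ResBlock(64)), down(64, 128)
        self.enc3, self.down3 = nn.Sequential(ResBlock(128), ResBlock(128)), down(128, 256)
        self.enc4 = nn.Sequential(*[ResBlock(256) for _ in range(4)])   # 256*64*64
        self.up3, self.dec3 = nn.ConvTranspose2d(256, 128, 2, stride=2), ResBlock(256)
        self.up2, self.dec2 = nn.ConvTranspose2d(256, 64, 2, stride=2), ResBlock(128)
        self.up1, self.dec1 = nn.ConvTranspose2d(128, 32, 2, stride=2), ResBlock(64)

    def forward(self, x):                                  # x: 1x3x512x512
        e1 = self.enc1(self.init_conv(x))                  # 32*512*512
        e2 = self.enc2(self.down1(e1))                     # 64*256*256
        e3 = self.enc3(self.down2(e2))                     # 128*128*128
        e4 = self.enc4(self.down3(e3))                     # 256*64*64 (to decoder, FPN and VAE)
        d3 = self.dec3(torch.cat([self.up3(e4), e3], 1))   # 256*128*128 via skip connection
        d2 = self.dec2(torch.cat([self.up2(d3), e2], 1))   # 128*256*256 via skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], 1))   # 64*512*512 via skip connection
        return e4, d3, d2, d1
```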
- In a specific embodiment, step 10232 specifically includes:
- Step 102321 Input the fourth-layer output feature map of the encoder into the fourth layer of the feature pyramid network, perform a convolution to reduce the dimensionality of this feature map, and then perform up-sampling to output the fourth-layer feature image of the pyramid.
- Step 102322 Input the third-layer output feature map of the decoder into the third layer of the feature pyramid network, perform a convolution to reduce the dimensionality of this decoder feature map, and superimpose it with the fourth-layer feature image of the pyramid to output the third-layer feature image of the pyramid.
- Step 102323 Input the second-layer output feature map of the decoder into the second layer of the feature pyramid network, perform a convolution to reduce the dimensionality of this decoder feature map, and superimpose it with the third-layer feature image of the pyramid to output the second-layer feature image of the pyramid.
- Step 102324 Input the first-layer output feature map of the decoder into the first layer of the feature pyramid network, perform a convolution to reduce the dimensionality of this decoder feature map, and superimpose it with the second-layer feature image of the pyramid to output the first-layer feature image of the pyramid.
- Step 102325 After performing 3*3 convolution processing on the first-layer feature image of the pyramid, perform a 1*1 convolution and a Sigmoid (S-shaped growth curve function) operation to generate the probability maps of the segmentation results of glomerular edges, mesangial cells, endothelial cells and podocytes.
- Through the above scheme, the four-layer structure of the feature pyramid network fuses feature images from different levels of the decoder, so that the feature pyramid network has good detection and segmentation capabilities for glomerular pathological slice images at different scales, and the probability maps of the segmentation results of glomerular edges, mesangial cells, endothelial cells and podocytes are thus obtained quickly and accurately.
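- Continuing the sketch above, a minimal feature-pyramid head for steps 102321-102325 could look as follows; the 64-channel lateral width, the nearest-neighbour up-sampling and the element-wise addition used for "superimposing" are assumptions that are consistent with the sizes given in the detailed embodiment below.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FPNHead(nn.Module):
    """Feature pyramid of steps 102321-102325: 1*1 convolutions reduce each input to 64
    channels, levels are up-sampled and added, and the first level is finished with a 3*3
    convolution, a 1*1 convolution and a Sigmoid, giving four probability maps
    (glomerular edge, mesangial cells, endothelial cells, podocytes)."""
    def __init__(self):
        super().__init__()
        self.lat4 = nn.Conv2d(256, 64, 1)   # encoder layer-4 feature map, 256*64*64
        self.lat3 = nn.Conv2d(256, 64, 1)   # decoder layer-3 feature map, 256*128*128
        self.lat2 = nn.Conv2d(128, 64, 1)   # decoder layer-2 feature map, 128*256*256
        self.lat1 = nn.Conv2d(64, 64, 1)    # decoder layer-1 feature map, 64*512*512
        self.smooth = nn.Conv2d(64, 64, 3, padding=1)   # 3*3 conv removes up-sampling aliasing
        self.head = nn.Conv2d(64, 4, 1)                 # 1*1 conv to the 4 output channels

    def forward(self, e4, d3, d2, d1):
        up = lambda t: F.interpolate(t, scale_factor=2, mode="nearest")
        p4 = up(self.lat4(e4))            # 64*128*128
        p3 = self.lat3(d3) + p4           # 64*128*128
        p2 = self.lat2(d2) + up(p3)       # 64*256*256
        p1 = self.lat1(d1) + up(p2)       # 64*512*512
        return torch.sigmoid(self.head(self.smooth(p1)))   # 4*512*512 probability maps
```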
- In a specific embodiment, step 10233 specifically includes:
- Step 102331 Input the fourth layer output feature map of the encoder into the variational autoencoder, and generate N sets of mean values and variances after down-sampling, down-convolution, and full-connection processing.
- Step 102332 Subject the N groups of means and variances to sampling, full-connection, Reshape and up-convolution processing, and then to ResBlock and up-sampling processing, to form a reconstructed image.
- Step 102333 Calculate a loss function according to the difference between the reconstructed image and the original glomerular pathological slice image, and adjust the U-net neural network according to the loss function to complete the learning and training of the glomerular pathological slice image.
- Through the above scheme, the image can be reconstructed from the features extracted by the U-Net encoder, and more guidance and constraints are added to the network training process, so that the U-Net encoder can learn features with better generalization and stronger expressive ability.
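- A corresponding sketch of the variational-autoencoder branch of steps 102331-102333 is given below. The 256 groups of means and variances and the 512*32*32 intermediate feature map follow the detailed embodiment; the concrete layer arrangement, the reparameterisation-style sampling and the transposed convolutions are assumptions.

```python
import torch
import torch.nn as nn

class VAEBranch(nn.Module):
    """Training-only branch: encoder layer-4 feature map -> means/variances -> reconstructed image."""
    def __init__(self, n_latent=256):
        super().__init__()
        self.reduce = nn.Conv2d(256, 16, 3, stride=2, padding=1)       # 256*64*64 -> 16*32*32
        self.fc_mu = nn.Linear(16 * 32 * 32, n_latent)                 # N groups of means
        self.fc_logvar = nn.Linear(16 * 32 * 32, n_latent)             # N groups of (log) variances
        self.fc_up = nn.Linear(n_latent, 256 * 16 * 16)
        self.upconv = nn.ConvTranspose2d(256, 512, 2, stride=2)        # Reshape + up-conv -> 512*32*32
        self.decode = nn.Sequential(                                   # ResBlock/up-sampling chain
            nn.ConvTranspose2d(512, 128, 2, stride=2), nn.ReLU(),      # 128*64*64
            nn.ConvTranspose2d(128, 32, 2, stride=2), nn.ReLU(),       # 32*128*128
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),        # 16*256*256
            nn.ConvTranspose2d(16, 3, 2, stride=2),                    # 3*512*512 reconstruction
        )

    def forward(self, e4):
        h = self.reduce(e4).flatten(1)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)        # sampling
        recon = self.decode(self.upconv(self.fc_up(z).view(-1, 256, 16, 16)))
        return recon, mu, logvar
```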
- In a specific embodiment, step 104 specifically includes:
- Step 1041 Determine the foreground/background binary map of mesangial cells, endothelial cells and podocytes according to the probability maps of mesangial cells, endothelial cells and podocytes.
- In this step, the probability maps respectively include the probability that each pixel is a mesangial cell, an endothelial cell or a podocyte.
- When the probability is greater than a preset threshold, the pixel corresponding to that probability is determined to be foreground; otherwise, the pixel is determined to be background, so that foreground/background binary maps of mesangial cells, endothelial cells and podocytes can be generated.
- Step 1042 Extract the contours of mesangial cells, endothelial cells and podocytes in the foreground/background binary image.
- Step 1043 Count the numbers of mesangial cells, endothelial cells, and podocytes in the glomerular pathological section image to be detected according to the extracted contour numbers of mesangial cells, endothelial cells, and podocytes.
- Through the above scheme, the numbers of mesangial cells, endothelial cells and podocytes in the glomerular pathological slice image to be detected can be determined from the obtained probability maps, so that the doctor can judge the patient's condition based on the results and formulate a treatment plan.
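- As an illustration of steps 1041-1043, the sketch below thresholds each probability map into a foreground/background binary map and counts cells as the number of extracted contours, using OpenCV; the 0.5 threshold and the dictionary keys are assumed values, since the application only speaks of a preset threshold.

```python
import cv2
import numpy as np

def count_cells(prob_maps, threshold=0.5):
    """prob_maps: {"mesangial": HxW array, "endothelial": ..., "podocyte": ...} with values in [0, 1]."""
    counts = {}
    for cell_type, prob in prob_maps.items():
        binary = (prob > threshold).astype(np.uint8)       # foreground/background binary map
        contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        counts[cell_type] = len(contours)                  # one extracted contour per detected cell
    return counts
```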
- Through the cell detection method for glomerular pathological slice images of the above embodiment, the preset neural network model can be used to detect the glomerular pathological slice image to be detected and determine the numbers of mesangial cells, endothelial cells and podocytes in it, so that no manual estimation is required.
- According to the numbers of mesangial cells, endothelial cells and podocytes, the cause and condition of the patient can be determined better and the treatment plan can be formulated more accurately, so that the patient's kidney recovers as soon as possible, and the method is simple and fast to use.
- In another embodiment of this application, the cell detection method for glomerular pathological slice images includes the following steps:
- First, the glomerular pathological slice image to be detected is extracted from the pathological image of the kidney tissue slice; the stained mesangial cells, endothelial cells and podocytes are visible in the image. According to the contour of the glomerulus, the glomerular area in the pathological image of the kidney tissue slice can be cut out to obtain the initial glomerular pathological slice image.
- Next, the preset neural network model is constructed. The data used to construct the preset neural network model are annotated, that is, the mesangial cells, endothelial cells and podocytes in each glomerular pathological slice image are annotated to form a training set, and data enhancement methods such as elastic deformation, color transformation, random flipping and size transformation are applied to the glomerular pathological slice images to improve the generalization performance of the neural network training.
- The U-net neural network is trained with the augmented training set. The input of the U-net neural network is a glomerular pathological slice image, and the outputs are a mesangial cell recognition result map, an endothelial cell recognition result map and a podocyte recognition result map.
- The mesangial cell recognition result map shows the probability that each pixel belongs to a mesangial cell;
- the endothelial cell recognition result map shows the probability that each pixel belongs to an endothelial cell;
- the podocyte recognition result map shows the probability that each pixel belongs to a podocyte.
- After all the glomerular pathological slice images in the training set have been trained, the preset neural network model is obtained.
- The preset neural network model is an optimized U-Net neural network model, which mainly includes a U-Net module, a feature pyramid network FPN (Feature Pyramid Networks for Object Detection) and a variational autoencoder VAE (Variational AutoEncoder); the model architecture is shown in Figure 2.
- the U-Net module is composed of an encoder and a decoder.
- The input of the encoder is a glomerular pathological slice image.
- The encoder outputs the 256*64*64 feature map produced by its fourth layer to the decoder, the variational autoencoder and the feature pyramid, and outputs the feature maps produced by its first, second and third layers to the first, second and third layers of the decoder, respectively.
- The encoder is used to extract cell feature information such as color, shape, size, position and texture. It consists of 4 levels, and each level contains several ResBlocks and a down-sampling operation.
- The input image of the encoder in this neural network is a 3*512*512 RGB color image. The first level first generates a 32*512*512 feature map through a 3x3 initial convolution operation, and then generates a 64*256*256 feature map through 1 ResBlock and 1 down-sampling.
- The second level generates a 128*128*128 feature map through 2 ResBlocks and 1 down-sampling.
- The third level generates a 256*64*64 feature map through 2 ResBlocks and 1 down-sampling.
- The fourth level generates, through 4 ResBlocks, the final 256*64*64 feature map that is passed to the decoder. In this way, through layer-by-layer convolution, down-sampling and similar operations, the encoder can extract lower-resolution global information of the whole image, providing the context of the cells in the entire image; this can be understood as features of the dependence between the cells and their surrounding environment, and these features help to judge the cell types.
- the decoder is composed of 3 levels.
- the first, second and third levels output feature maps to the first, second and third levels of the feature pyramid respectively.
- The third level first up-samples the 256*64*64 feature map of the encoder's fourth level to generate a 128*128*128 feature map, then concatenates it through a skip connection with the feature map of the same size from the third level of the encoder, and finally generates a 256*128*128 feature map after 1 ResBlock. Similarly, the feature maps corresponding to the second and first levels of the decoder have sizes of 128*256*256 and 64*512*512, respectively.
- The skip connections at the same level of the encoder and decoder can fuse the fine position information of the encoder with the richly expressed semantic information of the decoder, thereby extracting more expressive features for the segmentation task.
- the feature pyramid network FPN consists of 4 levels.
- The inputs of the first, second and third layers are the outputs of the first, second and third layers of the decoder, respectively, and the input of the fourth layer is the output of the fourth layer of the encoder.
- The fourth level reduces the dimensionality of the 256*64*64 encoder feature map to 64*64*64 through a 1x1 convolution operation, and then generates a 64*128*128 feature map through an up-sampling operation.
- The third level first reduces the dimensionality of the decoder's third-level feature map from 256*128*128 to 64*128*128, and then adds it to the feature map passed up from the fourth level below.
- Similarly, the feature maps corresponding to the second and first levels of the FPN have sizes of 64*256*256 and 64*512*512, so that the features of different levels of the decoder can be fused and the network has good detection and segmentation capabilities for targets at different scales.
- Since the shape and size of glomerular cells vary greatly, the FPN is particularly suitable for the cell segmentation task of this application.
- After the 64*512*512 feature map of the first FPN level, a 3*3 convolution operation is applied to the fusion result.
- The purpose is to eliminate the aliasing effect of up-sampling; the final result is then generated through a 1*1 convolution and a Sigmoid operation.
- The four-channel segmentation result corresponds to the probability maps of the segmentation results of glomerular edges, mesangial cells, endothelial cells and podocytes.
- The variational autoencoder is applied at the end of the U-Net encoder; its input is the output of the above encoder, and its output is used in the loss function to reconstruct the image from the features extracted by the U-Net encoder, which adds more guidance and constraints to the network training process so that the U-Net encoder can learn more generalized and more expressive features.
- This branch first goes through down-sampling, down-convolution (convolution kernel size 16, stride 2) and fully connected operations to generate 256 groups of means and variances; these 256 groups of means and variances then go through sampling, full-connection, Reshape and up-convolution (convolution kernel size 16, stride 2) operations to generate a 512*32*32 feature map, after which a series of ResBlocks and up-sampling operations at different levels reconstruct the original image.
- This branch is only used while training the neural network and has no effect on the segmentation result in the actual detection process; therefore, the variational autoencoder branch can be removed during actual detection.
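- Tying the sketches above together, detection without the variational-autoencoder branch could look as follows; the function is built on the assumed UNetModule and FPNHead classes from the earlier sketches, not on an implementation given in the application.

```python
import torch

def detect(unet, fpn, image):
    """image: 1x3x512x512 tensor; returns the mesangial, endothelial and podocyte probability maps."""
    unet.eval(); fpn.eval()
    with torch.no_grad():
        e4, d3, d2, d1 = unet(image)        # the VAE branch is simply not called at detection time
        probs = fpn(e4, d3, d2, d1)         # 1x4x512x512: edge, mesangial, endothelial, podocyte
    return probs[0, 1], probs[0, 2], probs[0, 3]
```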
- The U-net neural network training process contains three loss functions.
- The loss function used by the decoder and the FPN (feature pyramid network) branch in the U-Net module is the Dice function L_DICE, which measures the gap between the segmentation result and the gold standard. The loss function of the variational autoencoder part includes two terms: the first is the mean square error between the reconstructed image and the input image, that is, the pixel error L_MSE between the original image and the reconstructed image; the second is the KL divergence loss function L_KL, which measures the difference between the distribution of the latent variables inside the variational autoencoder and the unit Gaussian distribution.
- In the formulas, p_true is the correct probability of each cell in the image, p_pred is the probability of each cell output by the preset neural network model, and δ is the gold standard.
- L_MSE = (I_input − I_recon)², where I_input is an input pixel and I_recon is the corresponding reconstructed pixel.
- For L_KL, the position parameter of the latent distribution is μ and the scale parameter is σ.
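- Only the symbol definitions of these losses appear in the text above, so the sketch below uses the usual forms of the Dice, mean-squared-error and KL-divergence losses as an assumption consistent with those definitions; the weights combining the three terms are also assumed.

```python
import torch

def training_loss(pred, gold, recon, image, mu, logvar, eps=1e-5, w_mse=0.1, w_kl=0.1):
    """pred/gold: 4-channel segmentation probabilities and gold standard; recon/image: VAE
    reconstruction and input image; mu/logvar: latent means and log-variances."""
    inter = (pred * gold).sum()
    l_dice = 1 - 2 * inter / (pred.pow(2).sum() + gold.pow(2).sum() + eps)   # L_DICE
    l_mse = (image - recon).pow(2).mean()                                    # L_MSE
    var = logvar.exp()
    l_kl = (mu.pow(2) + var - logvar - 1).mean()                             # L_KL vs unit Gaussian
    return l_dice + w_mse * l_mse + w_kl * l_kl
```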
- The mesangial cells, endothelial cells and podocytes in the glomerular pathological slice image are then detected through the preset neural network model, which is obtained by training on the glomerular pathological slice images in the training set.
- the glomerular pathological slice images in the training set are marked with mesangial cells, endothelial cells and podocytes in the glomerulus.
- the preset neural network model may include three output channels.
- the three output channels are respectively used to output the detection results of mesangial cells, endothelial cells, and podocytes.
- The detection result may be an image of the same size as the glomerular pathological slice image to be detected, which includes, for each pixel, the probability that the pixel belongs to the cell type corresponding to the channel outputting the image; for example, the image output by the channel used for outputting the mesangial cell detection result includes the probability that each pixel belongs to a mesangial cell.
- Each of the probability maps output by the output channels of the preset neural network model includes the probability that each pixel is a mesangial cell, an endothelial cell or a podocyte, and foreground/background binary maps of mesangial cells, endothelial cells and podocytes are then generated according to a preset threshold.
- When the probability is greater than the preset threshold, the pixel corresponding to that probability is determined to be foreground; otherwise, the pixel is determined to be background.
- Based on the foreground/background binary maps of mesangial cells, endothelial cells and podocytes, the contours of the mesangial cells, endothelial cells and podocytes are extracted by image post-processing methods, and the numbers of mesangial cells, endothelial cells and podocytes in the glomerular pathological slice are counted according to the numbers of extracted contours, respectively.
- This method can accurately locate and count the three intrinsic cell types inside the glomerulus, namely podocytes, mesangial cells and endothelial cells, providing a reliable auxiliary diagnostic tool for clinical pathologists; the test speed is fast and the detection accuracy is high: processing a glomerulus of about 800*800 pixels takes only about 0.2 s, and the cell counting error remains at about ±8.
- Further, as a specific implementation of the method in FIG. 1, an embodiment of this application provides a cell detection device for glomerular pathological slice images; as shown in FIG. 3, the device includes an acquisition module 31, a detection module 32, an output module 33 and a calculation module 34 that are connected in sequence.
- the obtaining module 31 is used to obtain a glomerular pathological slice image to be detected
- the detection module 32 is used to input the glomerular pathological slice image to be detected into a preset neural network model for recognition and detection, where the preset neural network model is obtained by training a neural network with a predetermined number of glomerular pathological slice images in which mesangial cells, endothelial cells and podocytes have been identified and labeled;
- the output module 33 is configured to output the probability maps of mesangial cells, endothelial cells and podocytes in the glomerular pathological slice image to be detected from the three output channels of the preset neural network model;
- the calculation module 34 is used for calculating the number of mesangial cells, endothelial cells and podocytes in the pathological slice image of the glomerulus to be detected according to the probability map of mesangial cells, endothelial cells and podocytes.
- the acquiring module 31 is also used to acquire a predetermined number of glomerular pathological slice images as a training set;
- the device also includes: a labeling module for respectively labeling mesangial cells, endothelial cells and podocytes in each glomerular pathological slice image in the training set;
- the processing module is used to perform elastic deformation, and/or color transformation, and/or random inversion, and/or size transformation on the marked glomerular pathological slice image;
- the training module is used to input the processed glomerular pathological slice image into the U-net neural network for learning and training to obtain a preset neural network model.
- the U-net neural network includes a U-net module, a feature pyramid network and a variational autoencoder;
- the training module specifically includes:
- the extraction unit is used to input the processed glomerular pathological slice image into the U-net module of the U-net neural network for feature image extraction;
- the segmentation unit is used to input the feature image into the feature pyramid network of the U-net neural network for segmentation processing and output the probability map of the segmentation result of glomerular edge, mesangial cells, endothelial cells, and podocytes;
- the reconstruction unit is used to input the characteristic image into the variational autoencoder for image reconstruction, calculate the loss function according to the difference between the reconstructed image and the original glomerular pathological slice image, and adjust the U-net neural network according to the loss function, Complete the learning and training of glomerular pathological slice images;
- the repetition unit is used to repeat the learning and training process until all the glomerular pathological slice images in the training set are all learned and trained, and the variational autoencoder in the trained U-net neural network is removed to obtain the preset neural network model.
- the extraction unit is specifically used for:
- input the processed glomerular pathological slice image into the encoder of the U-net module and generate the first-, second-, third- and fourth-layer output feature maps of the encoder through convolution, ResBlocks and down-sampling; and form the third-, second- and first-layer feature map outputs of the decoder by up-sampling and concatenating them, through skip connections, with the third-, second- and first-layer output feature maps of the encoder, respectively, as described in steps 102311 to 102317.
- the segmentation unit is specifically used for:
- inputting the encoder fourth-layer and decoder third-, second- and first-layer output feature maps into the corresponding layers of the feature pyramid network, reducing their dimensionality by convolution and superimposing them level by level with up-sampling to obtain the first-layer feature image of the pyramid, and performing a 3*3 convolution followed by a 1*1 convolution and a Sigmoid operation on the first-layer feature image to generate the probability maps of the segmentation results of glomerular edges, mesangial cells, endothelial cells and podocytes, as described in steps 102321 to 102325.
- the reconstruction unit is specifically used for:
- input the fourth-layer output feature map of the encoder into the variational autoencoder and generate N groups of means and variances through down-sampling, down-convolution and full-connection processing; form a reconstructed image through sampling, full-connection, Reshape, up-convolution, ResBlock and up-sampling processing; and calculate the loss function according to the difference between the reconstructed image and the original glomerular pathological slice image, adjusting the U-net neural network according to the loss function to complete the learning and training of the glomerular pathological slice image.
- the calculation module 34 specifically includes:
- the determination unit is used to determine the foreground/background binary map of mesangial cells, endothelial cells and podocytes according to the probability maps of mesangial cells, endothelial cells and podocytes;
- the contour extraction unit is used to extract the contours of mesangial cells, endothelial cells and podocytes in the foreground/background binary maps;
- the statistical unit is used to count the number of mesangial cells, endothelial cells, and podocytes in the glomerular pathological section image to be detected according to the extracted contour numbers of mesangial cells, endothelial cells, and podocytes.
- Based on the embodiments of the method shown in FIG. 1 and the device shown in FIG. 3, an embodiment of this application also provides a computer device, as shown in FIG. 4, including a memory 42 and a processor 41, where the memory 42 and the processor 41 are both arranged on a bus 43 and the memory 42 stores a computer program.
- When the processor 41 executes the computer program, the cell detection method for glomerular pathological slice images shown in FIG. 1 is implemented.
- the technical solution of the present application can be embodied in the form of a software product, which can be stored in a non-volatile memory (which can be a CD-ROM, U disk, mobile hard disk, etc.), including several instructions It is used to enable a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the methods described in each implementation scenario of this application.
- the device can also be connected to a user interface, a network interface, a camera, a radio frequency (RF) circuit, a sensor, an audio circuit, a Wi-Fi module, and so on.
- the user interface may include a display screen (Display), an input unit such as a keyboard (Keyboard), etc., and the optional user interface may also include a USB interface, a card reader interface, and the like.
- the network interface can optionally include a standard wired interface, a wireless interface (such as a Bluetooth interface, a WI-FI interface), etc.
- Those skilled in the art can understand that the structure of the computer device provided in this embodiment does not constitute a limitation on the physical device, which may include more or fewer components, combine certain components, or use a different arrangement of components.
- an embodiment of the present application also provides a storage medium on which a computer program is stored.
- When the program is executed by a processor, the cell detection method for glomerular pathological slice images shown in FIG. 1 above is implemented.
- the storage medium may also include an operating system and a network communication module.
- the operating system is a program that manages the hardware and software resources of computer equipment, and supports the operation of information processing programs and other software and/or programs.
- the network communication module is used to realize the communication between the components in the storage medium and the communication with other hardware and software in the computer equipment.
- By applying the technical solution of this application, a preset neural network model can be used to detect the glomerular pathological slice image to be detected, and the numbers of mesangial cells, endothelial cells and podocytes in it can be determined, so that no manual estimation is required. According to the numbers of mesangial cells, endothelial cells and podocytes, the cause and condition of the patient can be determined better and the treatment plan can be formulated more accurately, so that the patient's kidney recovers as soon as possible, and the method is simple and fast to use.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Health & Medical Sciences (AREA)
- Health & Medical Sciences (AREA)
- General Physics & Mathematics (AREA)
- Computing Systems (AREA)
- Mathematical Physics (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- Biophysics (AREA)
- Molecular Biology (AREA)
- Biomedical Technology (AREA)
- General Engineering & Computer Science (AREA)
- Artificial Intelligence (AREA)
- Computational Linguistics (AREA)
- Software Systems (AREA)
- Life Sciences & Earth Sciences (AREA)
- Medical Informatics (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Radiology & Medical Imaging (AREA)
- Quality & Reliability (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Analysis (AREA)
- Investigating Or Analysing Biological Materials (AREA)
Abstract
A cell detection method, device and equipment for glomerular pathological slice images, the method comprising: acquiring a glomerular pathological slice image to be detected (101); inputting the glomerular pathological slice image to be detected into a preset neural network model for recognition and detection (102), where the preset neural network model is obtained by training a neural network with a predetermined number of glomerular pathological slice images in which mesangial cells, endothelial cells and podocytes have been identified and labeled; outputting probability maps of the mesangial cells, endothelial cells and podocytes in the glomerular pathological slice image to be detected from three output channels of the preset neural network model (103); and calculating the numbers of mesangial cells, endothelial cells and podocytes in the glomerular pathological slice image to be detected according to the probability maps of the mesangial cells, endothelial cells and podocytes (104). The cause and condition of the patient are determined according to the numbers of mesangial cells, endothelial cells and podocytes, and a treatment plan is formulated accurately.
Claims (20)
- A cell detection method for glomerular pathological slice images, the method comprising: acquiring a glomerular pathological slice image to be detected; inputting the glomerular pathological slice image to be detected into a preset neural network model for recognition and detection, wherein the preset neural network model is obtained by training a neural network with a predetermined number of glomerular pathological slice images in which mesangial cells, endothelial cells and podocytes have been identified and labeled; outputting probability maps of the mesangial cells, endothelial cells and podocytes in the glomerular pathological slice image to be detected from three output channels of the preset neural network model, respectively; and calculating the numbers of mesangial cells, endothelial cells and podocytes in the glomerular pathological slice image to be detected according to the probability maps of the mesangial cells, endothelial cells and podocytes.
- The method according to claim 1, before inputting the glomerular pathological slice image to be detected into the preset neural network model for recognition and detection, further comprising: acquiring a predetermined number of glomerular pathological slice images as a training set; labeling the mesangial cells, endothelial cells and podocytes in each glomerular pathological slice image in the training set respectively; performing elastic deformation, and/or color transformation, and/or random flipping, and/or size transformation on the labeled glomerular pathological slice images; and inputting the processed glomerular pathological slice images into a U-net neural network for learning and training to obtain the preset neural network model.
- The method according to claim 2, wherein the U-net neural network comprises a U-net module, a feature pyramid network and a variational autoencoder; and inputting the processed glomerular pathological slice images into the U-net neural network for learning and training to obtain the preset neural network model specifically comprises: inputting the processed glomerular pathological slice image into the U-net module of the U-net neural network for feature image extraction; inputting the feature image into the feature pyramid network of the U-net neural network for segmentation processing and outputting probability maps of segmentation results of glomerular edges, mesangial cells, endothelial cells and podocytes; inputting the feature image into the variational autoencoder for image reconstruction, calculating a loss function according to the difference between the reconstructed image and the original glomerular pathological slice image, and adjusting the U-net neural network according to the loss function to complete the learning and training of the glomerular pathological slice image; and repeating the learning and training process until all glomerular pathological slice images in the training set have been learned and trained, and removing the variational autoencoder from the trained U-net neural network to obtain the preset neural network model.
- The method according to claim 3, wherein inputting the processed glomerular pathological slice image into the U-net module of the U-net neural network for feature image extraction specifically comprises: inputting the processed glomerular pathological slice image into an encoder of the U-net module, and generating a first-layer output feature map of the encoder after convolution processing in the first layer of the encoder, one ResBlock and one down-sampling operation; inputting the first-layer output feature map of the encoder into the second layer of the encoder, and generating a second-layer output feature map of the encoder after two ResBlocks and one down-sampling operation; inputting the second-layer output feature map of the encoder into the third layer of the encoder, and generating a third-layer output feature map of the encoder after two ResBlocks and one down-sampling operation; inputting the third-layer output feature map of the encoder into the fourth layer of the encoder, and generating a fourth-layer output feature map of the encoder after four ResBlocks; inputting the fourth-layer output feature map of the encoder into the third layer of a decoder, and forming a third-layer feature map output of the decoder after one up-sampling operation and concatenation with the third-layer output feature map of the encoder through a skip connection; inputting the third-layer feature map of the decoder into the second layer of the decoder, and forming a second-layer feature map output of the decoder after one up-sampling operation and concatenation with the second-layer output feature map of the encoder through a skip connection; and inputting the second-layer feature map of the decoder into the first layer of the decoder, and forming a first-layer feature map output of the decoder after one up-sampling operation and concatenation with the first-layer output feature map of the encoder through a skip connection.
- The method according to claim 4, wherein inputting the feature image into the feature pyramid network of the U-net neural network for segmentation processing to output the segmentation probability maps of the glomerular edge, mesangial cells, endothelial cells, and podocytes specifically comprises: inputting the fourth-layer encoder output feature map into the fourth layer of the feature pyramid network, applying a convolution to reduce the dimensionality of this feature map, and then upsampling it to output a fourth-layer pyramid feature image; inputting the third-layer decoder output feature map into the third layer of the feature pyramid network, applying a convolution to reduce the dimensionality of this feature map, and adding it to the fourth-layer pyramid feature image to output a third-layer pyramid feature image; inputting the second-layer decoder output feature map into the second layer of the feature pyramid network, applying a convolution to reduce the dimensionality of this feature map, and adding it to the third-layer pyramid feature image to output a second-layer pyramid feature image; inputting the first-layer decoder output feature map into the first layer of the feature pyramid network, applying a convolution to reduce the dimensionality of this feature map, and adding it to the second-layer pyramid feature image to output a first-layer pyramid feature image; and applying a 3×3 convolution to the first-layer pyramid feature image, followed by a 1×1 convolution and a Sigmoid operation, to generate the segmentation probability maps of the glomerular edge, mesangial cells, endothelial cells, and podocytes.
- The method according to claim 4, wherein inputting the feature image into the variational autoencoder for image reconstruction, calculating a loss function from the difference between the reconstructed image and the original glomerular pathological slice image, and adjusting the U-net neural network according to the loss function to complete learning and training on the glomerular pathological slice image specifically comprises: inputting the fourth-layer encoder output feature map into the variational autoencoder, where downsampling, down-convolution, and fully connected processing generate N sets of means and variances; subjecting the N sets of means and variances to sampling, fully connected, Reshape, and up-convolution processing, followed by ResBlock and upsampling processing, to form the reconstructed image; and calculating the loss function from the difference between the reconstructed image and the original glomerular pathological slice image, and adjusting the U-net neural network according to the loss function to complete learning and training on the glomerular pathological slice image.
- The method according to any one of claims 2 to 6, wherein calculating the numbers of mesangial cells, endothelial cells, and podocytes in the glomerular pathological slice image to be detected according to the probability maps of the mesangial cells, endothelial cells, and podocytes specifically comprises: determining foreground/background binary maps of the mesangial cells, endothelial cells, and podocytes from their probability maps; extracting the contours of the mesangial cells, endothelial cells, and podocytes in the foreground/background binary maps; and counting the numbers of mesangial cells, endothelial cells, and podocytes in the glomerular pathological slice image to be detected according to the numbers of contours extracted for each cell type.
- A cell detection apparatus for glomerular pathological slice images, the apparatus comprising: an acquisition module, configured to acquire a glomerular pathological slice image to be detected; a detection module, configured to input the glomerular pathological slice image to be detected into a preset neural network model for recognition and detection, wherein the preset neural network model is obtained by training a neural network with a predetermined number of glomerular pathological slice images in which mesangial cells, endothelial cells, and podocytes have been recognized and annotated; an output module, configured to output, from three output channels of the preset neural network model, probability maps of the mesangial cells, endothelial cells, and podocytes in the glomerular pathological slice image to be detected, respectively; and a calculation module, configured to calculate the numbers of mesangial cells, endothelial cells, and podocytes in the glomerular pathological slice image to be detected according to the probability maps of the mesangial cells, endothelial cells, and podocytes.
- The apparatus according to claim 8, wherein before the glomerular pathological slice image to be detected is input into the preset neural network model for recognition and detection, the apparatus is further configured to: acquire a predetermined number of glomerular pathological slice images as a training set; annotate the mesangial cells, endothelial cells, and podocytes in each glomerular pathological slice image in the training set; apply elastic deformation, and/or color transformation, and/or random flipping, and/or size transformation to the annotated glomerular pathological slice images; and input the processed glomerular pathological slice images into a U-net neural network for learning and training to obtain the preset neural network model.
- The apparatus according to claim 9, wherein the U-net neural network comprises a U-net module, a feature pyramid network, and a variational autoencoder; and the apparatus inputting the processed glomerular pathological slice images into the U-net neural network for learning and training to obtain the preset neural network model specifically comprises: inputting a processed glomerular pathological slice image into the U-net module of the U-net neural network for feature image extraction; inputting the feature image into the feature pyramid network of the U-net neural network for segmentation processing to output segmentation probability maps of the glomerular edge, mesangial cells, endothelial cells, and podocytes; inputting the feature image into the variational autoencoder for image reconstruction, calculating a loss function from the difference between the reconstructed image and the original glomerular pathological slice image, and adjusting the U-net neural network according to the loss function to complete learning and training on the glomerular pathological slice image; and repeating the learning and training process until all glomerular pathological slice images in the training set have been learned, and removing the variational autoencoder from the trained U-net neural network to obtain the preset neural network model.
- The apparatus according to claim 10, wherein the apparatus inputting the processed glomerular pathological slice image into the U-net module of the U-net neural network for feature image extraction specifically comprises: inputting the processed glomerular pathological slice image into the encoder of the U-net module, where it passes through a convolution in the first encoder layer and then through one ResBlock and one downsampling operation to generate a first-layer encoder output feature map; inputting the first-layer encoder output feature map into the second encoder layer, where it passes through two ResBlocks and one downsampling operation to generate a second-layer encoder output feature map; inputting the second-layer encoder output feature map into the third encoder layer, where it passes through two ResBlocks and one downsampling operation to generate a third-layer encoder output feature map; inputting the third-layer encoder output feature map into the fourth encoder layer, where it passes through four ResBlocks to generate a fourth-layer encoder output feature map; inputting the fourth-layer encoder output feature map into the third decoder layer, where it undergoes one upsampling operation and is concatenated, via a skip connection, with the third-layer encoder output feature map to form a third-layer decoder feature map output; inputting the third-layer decoder feature map into the second decoder layer, where it undergoes one upsampling operation and is concatenated, via a skip connection, with the second-layer encoder output feature map to form a second-layer decoder feature map output; and inputting the second-layer decoder feature map into the first decoder layer, where it undergoes one upsampling operation and is concatenated, via a skip connection, with the first-layer encoder output feature map to form a first-layer decoder feature map output.
- The apparatus according to claim 11, wherein the apparatus inputting the feature image into the feature pyramid network of the U-net neural network for segmentation processing to output the segmentation probability maps of the glomerular edge, mesangial cells, endothelial cells, and podocytes specifically comprises: inputting the fourth-layer encoder output feature map into the fourth layer of the feature pyramid network, applying a convolution to reduce the dimensionality of this feature map, and then upsampling it to output a fourth-layer pyramid feature image; inputting the third-layer decoder output feature map into the third layer of the feature pyramid network, applying a convolution to reduce the dimensionality of this feature map, and adding it to the fourth-layer pyramid feature image to output a third-layer pyramid feature image; inputting the second-layer decoder output feature map into the second layer of the feature pyramid network, applying a convolution to reduce the dimensionality of this feature map, and adding it to the third-layer pyramid feature image to output a second-layer pyramid feature image; inputting the first-layer decoder output feature map into the first layer of the feature pyramid network, applying a convolution to reduce the dimensionality of this feature map, and adding it to the second-layer pyramid feature image to output a first-layer pyramid feature image; and applying a 3×3 convolution to the first-layer pyramid feature image, followed by a 1×1 convolution and a Sigmoid operation, to generate the segmentation probability maps of the glomerular edge, mesangial cells, endothelial cells, and podocytes.
- The apparatus according to claim 11, wherein the apparatus inputting the feature image into the variational autoencoder for image reconstruction, calculating a loss function from the difference between the reconstructed image and the original glomerular pathological slice image, and adjusting the U-net neural network according to the loss function to complete learning and training on the glomerular pathological slice image specifically comprises: inputting the fourth-layer encoder output feature map into the variational autoencoder, where downsampling, down-convolution, and fully connected processing generate N sets of means and variances; subjecting the N sets of means and variances to sampling, fully connected, Reshape, and up-convolution processing, followed by ResBlock and upsampling processing, to form the reconstructed image; and calculating the loss function from the difference between the reconstructed image and the original glomerular pathological slice image, and adjusting the U-net neural network according to the loss function to complete learning and training on the glomerular pathological slice image.
- The apparatus according to any one of claims 9 to 13, wherein the apparatus calculating the numbers of mesangial cells, endothelial cells, and podocytes in the glomerular pathological slice image to be detected according to the probability maps of the mesangial cells, endothelial cells, and podocytes specifically comprises: determining foreground/background binary maps of the mesangial cells, endothelial cells, and podocytes from their probability maps; extracting the contours of the mesangial cells, endothelial cells, and podocytes in the foreground/background binary maps; and counting the numbers of mesangial cells, endothelial cells, and podocytes in the glomerular pathological slice image to be detected according to the numbers of contours extracted for each cell type.
- A computer device, comprising a memory and a processor, wherein the memory stores a computer program, and the processor, when executing the computer program, implements the steps of a cell detection method for glomerular pathological slice images, comprising: acquiring a glomerular pathological slice image to be detected; inputting the glomerular pathological slice image to be detected into a preset neural network model for recognition and detection, wherein the preset neural network model is obtained by training a neural network with a predetermined number of glomerular pathological slice images in which mesangial cells, endothelial cells, and podocytes have been recognized and annotated; outputting, from three output channels of the preset neural network model, probability maps of the mesangial cells, endothelial cells, and podocytes in the glomerular pathological slice image to be detected, respectively; and calculating the numbers of mesangial cells, endothelial cells, and podocytes in the glomerular pathological slice image to be detected according to the probability maps of the mesangial cells, endothelial cells, and podocytes.
- The computer device according to claim 15, wherein before the glomerular pathological slice image to be detected is input into the preset neural network model for recognition and detection, the steps further comprise: acquiring a predetermined number of glomerular pathological slice images as a training set; annotating the mesangial cells, endothelial cells, and podocytes in each glomerular pathological slice image in the training set; applying elastic deformation, and/or color transformation, and/or random flipping, and/or size transformation to the annotated glomerular pathological slice images; and inputting the processed glomerular pathological slice images into a U-net neural network for learning and training to obtain the preset neural network model.
- The computer device according to claim 16, wherein the U-net neural network comprises a U-net module, a feature pyramid network, and a variational autoencoder; and inputting the processed glomerular pathological slice images into the U-net neural network for learning and training to obtain the preset neural network model specifically comprises: inputting a processed glomerular pathological slice image into the U-net module of the U-net neural network for feature image extraction; inputting the feature image into the feature pyramid network of the U-net neural network for segmentation processing to output segmentation probability maps of the glomerular edge, mesangial cells, endothelial cells, and podocytes; inputting the feature image into the variational autoencoder for image reconstruction, calculating a loss function from the difference between the reconstructed image and the original glomerular pathological slice image, and adjusting the U-net neural network according to the loss function to complete learning and training on the glomerular pathological slice image; and repeating the learning and training process until all glomerular pathological slice images in the training set have been learned, and removing the variational autoencoder from the trained U-net neural network to obtain the preset neural network model.
- A computer storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the steps of a cell detection method for glomerular pathological slice images, comprising: acquiring a glomerular pathological slice image to be detected; inputting the glomerular pathological slice image to be detected into a preset neural network model for recognition and detection, wherein the preset neural network model is obtained by training a neural network with a predetermined number of glomerular pathological slice images in which mesangial cells, endothelial cells, and podocytes have been recognized and annotated; outputting, from three output channels of the preset neural network model, probability maps of the mesangial cells, endothelial cells, and podocytes in the glomerular pathological slice image to be detected, respectively; and calculating the numbers of mesangial cells, endothelial cells, and podocytes in the glomerular pathological slice image to be detected according to the probability maps of the mesangial cells, endothelial cells, and podocytes.
- The computer storage medium according to claim 18, wherein before the glomerular pathological slice image to be detected is input into the preset neural network model for recognition and detection, the steps further comprise: acquiring a predetermined number of glomerular pathological slice images as a training set; annotating the mesangial cells, endothelial cells, and podocytes in each glomerular pathological slice image in the training set; applying elastic deformation, and/or color transformation, and/or random flipping, and/or size transformation to the annotated glomerular pathological slice images; and inputting the processed glomerular pathological slice images into a U-net neural network for learning and training to obtain the preset neural network model.
- The computer storage medium according to claim 19, wherein the U-net neural network comprises a U-net module, a feature pyramid network, and a variational autoencoder; and inputting the processed glomerular pathological slice images into the U-net neural network for learning and training to obtain the preset neural network model specifically comprises: inputting a processed glomerular pathological slice image into the U-net module of the U-net neural network for feature image extraction; inputting the feature image into the feature pyramid network of the U-net neural network for segmentation processing to output segmentation probability maps of the glomerular edge, mesangial cells, endothelial cells, and podocytes; inputting the feature image into the variational autoencoder for image reconstruction, calculating a loss function from the difference between the reconstructed image and the original glomerular pathological slice image, and adjusting the U-net neural network according to the loss function to complete learning and training on the glomerular pathological slice image; and repeating the learning and training process until all glomerular pathological slice images in the training set have been learned, and removing the variational autoencoder from the trained U-net neural network to obtain the preset neural network model.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910624969.9A CN110490840B (zh) | 2019-07-11 | 2019-07-11 | Cell detection method, apparatus and device for glomerular pathological slice images |
CN201910624969.9 | 2019-07-11 | | |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021003821A1 (zh) | 2021-01-14 |
Family
ID=68545988
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2019/103522 WO2021003821A1 (zh) | 2019-07-11 | 2019-08-30 | Cell detection method, apparatus and device for glomerular pathological slice images |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN110490840B (zh) |
WO (1) | WO2021003821A1 (zh) |
Families Citing this family (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111260633B (zh) * | 2020-01-16 | 2024-05-10 | 平安科技(深圳)有限公司 | Global-context-based glomerulus typing method, device, storage medium and apparatus |
CN111260666B (zh) * | 2020-01-19 | 2022-05-24 | 上海商汤临港智能科技有限公司 | Image processing method and apparatus, electronic device, and computer-readable storage medium |
CN111291716B (zh) * | 2020-02-28 | 2024-01-05 | 深圳市瑞图生物技术有限公司 | Sperm cell recognition method, apparatus, computer device and storage medium |
CN111353442A (zh) * | 2020-03-03 | 2020-06-30 | Oppo广东移动通信有限公司 | Image processing method, apparatus, device and storage medium |
CN111951221B (zh) * | 2020-07-13 | 2023-10-31 | 清影医疗科技(深圳)有限公司 | Glomerular cell image recognition method based on a deep neural network |
CN114638776A (zh) * | 2020-12-15 | 2022-06-17 | 通用电气精准医疗有限责任公司 | Early stroke assessment method and system, and brain region segmentation method |
CN113990521A (zh) * | 2021-10-22 | 2022-01-28 | 北京大学人民医院 | IgA nephropathy pathological analysis, prognosis prediction and pathological index mining system |
CN114240836B (zh) * | 2021-11-12 | 2024-06-25 | 杭州迪英加科技有限公司 | Nasal polyp pathological slice analysis method, system and readable storage medium |
CN114663383B (zh) * | 2022-03-18 | 2024-08-20 | 清华大学 | Blood cell segmentation and recognition method, apparatus, electronic device and storage medium |
CN114943723B (zh) * | 2022-06-08 | 2024-05-28 | 北京大学口腔医学院 | Method for segmenting and counting irregular cells and related device |
CN115406815B (zh) * | 2022-11-02 | 2023-02-03 | 杭州华得森生物技术有限公司 | Tumor cell detection device and method based on multi-source data fusion |
CN115760858B (zh) * | 2023-01-10 | 2023-05-02 | 东南大学附属中大医院 | Deep-learning-based kidney pathological slice cell recognition method and system |
CN117974528B (zh) * | 2024-04-02 | 2024-06-18 | 北京易优联科技有限公司 | Kidney biopsy slice image optimization and enhancement method |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2017165801A1 (en) * | 2016-03-24 | 2017-09-28 | The Regents Of The University Of California | Deep-learning-based cancer classification using a hierarchical classification framework |
CN108345871A (zh) * | 2018-03-20 | 2018-07-31 | 宁波江丰生物信息技术有限公司 | Cervical cancer slice recognition method |
CN108717554A (zh) * | 2018-05-22 | 2018-10-30 | 复旦大学附属肿瘤医院 | Thyroid tumor pathological tissue slice image classification method and apparatus |
CN109191476A (zh) * | 2018-09-10 | 2019-01-11 | 重庆邮电大学 | New automatic segmentation method for biomedical images based on the U-net network structure |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20190139216A1 (en) * | 2017-11-03 | 2019-05-09 | Siemens Healthcare Gmbh | Medical Image Object Detection with Dense Feature Pyramid Network Architecture in Machine Learning |
CN108197606A (zh) * | 2018-01-31 | 2018-06-22 | 浙江大学 | Method for recognizing abnormal cells in pathological slices based on multi-scale dilated convolution |
CN109063710B (zh) * | 2018-08-09 | 2022-08-16 | 成都信息工程大学 | 3D CNN nasopharyngeal carcinoma segmentation method based on a multi-scale feature pyramid |
2019
- 2019-07-11: CN application CN201910624969.9A, patent CN110490840B (zh), status: Active
- 2019-08-30: WO application PCT/CN2019/103522, patent WO2021003821A1 (zh), status: Application Filing
Cited By (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112991263A (zh) * | 2021-02-06 | 2021-06-18 | 杭州迪英加科技有限公司 | Method and device for improving the accuracy of TPS calculation for PD-L1 immunohistochemical pathological slices |
CN112991263B (zh) * | 2021-02-06 | 2022-07-22 | 杭州迪英加科技有限公司 | Method and device for improving the accuracy of TPS calculation for PD-L1 immunohistochemical pathological slices |
US11816181B2 (en) * | 2021-03-02 | 2023-11-14 | Adobe, Inc. | Blur classification and blur map estimation |
US20220284236A1 (en) * | 2021-03-02 | 2022-09-08 | Adobe Inc. | Blur classification and blur map estimation |
CN112967272A (zh) * | 2021-03-25 | 2021-06-15 | 郑州大学 | Welding defect detection method, apparatus and terminal device based on improved U-net |
CN112967272B (zh) * | 2021-03-25 | 2023-08-22 | 郑州大学 | Welding defect detection method, apparatus and terminal device based on improved U-net |
CN113222944B (zh) * | 2021-05-18 | 2022-10-14 | 湖南医药学院 | Cell nucleus segmentation method and pathology-image-based cancer auxiliary analysis system and apparatus |
CN113222944A (zh) * | 2021-05-18 | 2021-08-06 | 湖南医药学院 | Cell nucleus segmentation method, system and apparatus, and pathology-image-based cancer auxiliary analysis system and apparatus |
CN114612482A (zh) * | 2022-03-08 | 2022-06-10 | 福州大学 | Method and system for locating and classifying gastric cancer neural invasion in digital pathological slice images |
CN114612482B (zh) * | 2022-03-08 | 2024-06-07 | 福州大学 | Method and system for locating and classifying gastric cancer neural invasion in digital pathological slice images |
CN114549520A (zh) * | 2022-04-08 | 2022-05-27 | 北京端点医药研究开发有限公司 | Retinal pathological slice analysis system based on a fully convolutional attention-enhanced network |
CN114549520B (zh) * | 2022-04-08 | 2024-05-07 | 北京端点医药研究开发有限公司 | Retinal pathological slice analysis system based on a fully convolutional attention-enhanced network |
CN114972760A (zh) * | 2022-06-17 | 2022-08-30 | 湘潭大学 | Automatic ionogram tracing method based on multi-scale attention-enhanced U-Net |
CN114972760B (zh) * | 2022-06-17 | 2024-04-16 | 湘潭大学 | Automatic ionogram tracing method based on multi-scale attention-enhanced U-Net |
CN116843997A (zh) * | 2023-08-24 | 2023-10-03 | 摩尔线程智能科技(北京)有限责任公司 | Model training and cell image annotation method, apparatus, device and storage medium |
CN116843997B (zh) * | 2023-08-24 | 2024-03-19 | 摩尔线程智能科技(北京)有限责任公司 | Model training and cell image annotation method, apparatus, device and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN110490840B (zh) | 2024-09-24 |
CN110490840A (zh) | 2019-11-22 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
 | 121 | Ep: the EPO has been informed by WIPO that EP was designated in this application | Ref document number: 19936694; Country of ref document: EP; Kind code of ref document: A1 |
 | 122 | Ep: PCT application non-entry in European phase | Ref document number: 19936694; Country of ref document: EP; Kind code of ref document: A1 |