CN116543160A - Automatic segmentation method for leucocytes in bone marrow cells - Google Patents
- Publication number
- CN116543160A CN116543160A CN202310527841.7A CN202310527841A CN116543160A CN 116543160 A CN116543160 A CN 116543160A CN 202310527841 A CN202310527841 A CN 202310527841A CN 116543160 A CN116543160 A CN 116543160A
- Authority
- CN
- China
- Prior art keywords
- image
- bone marrow
- cell
- segmentation
- leukocytes
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/26—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/0464—Convolutional networks [CNN, ConvNet]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/03—Recognition of patterns in medical or anatomical images
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02A—TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
- Y02A90/00—Technologies having an indirect contribution to adaptation to climate change
- Y02A90/10—Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation
Abstract
The invention provides an automatic segmentation method for white blood cells in bone marrow cells, which comprises the following steps: S1, extracting the salient cells CellGray from an input bone marrow cell RGB image through a convolutional neural network; S2, binarizing the bone marrow cell image CellGray to obtain a leukocyte nucleus binary image; S3, performing image morphology opening and closing operations on the leukocyte nucleus binary image to obtain WhiteCellMorph, and performing a watershed algorithm operation on WhiteCellMorph to obtain the leukocyte nucleus segmentation map LeukocyteNucleus; S4, masking the corresponding values in the CellGray data obtained in step S1 with the LeukocyteNucleus obtained in step S3, and obtaining the leukocyte membrane LeukocyteMembrane by the maximum inter-class variance method and image morphology opening and closing operations; S5, combining the data obtained in steps S3 and S4 to obtain a segmented complete leukocyte image, extracting the accurate positions of the leukocytes, and generating a standard data set. The invention can effectively improve the accuracy of the overall positioning and segmentation of the leukocyte image and generate a data set, facilitating the development of artificial intelligence for subsequent leukocyte type identification.
Description
Technical Field
The invention relates to the technical field of medical pathological analysis, in particular to an automatic segmentation method of leucocytes in bone marrow cells.
Background
Leukemia is a malignant blood disease in which hematopoietic stem cells undergo abnormal epigenetic and genetic changes under the action of various pathogenic factors inside and outside the body, resulting in malignant transformation of hematopoiesis. Bone marrow puncture is an indispensable examination for leukemia. A typical bone marrow smear image consists of white blood cells, red blood cells, platelets and background; a pathologist observes the types and numbers of the white blood cells in the bone marrow smear under a microscope to provide a diagnostic basis for the various leukemias. This is a very complex, tedious and time-consuming task that is easily influenced by subjective factors.
Today, with the rapid development of computer-aided methods and the iteration of the related computing hardware, computer-aided diagnosis has become possible: it can not only simulate the diagnostic process of a pathologist to extract and locate the white blood cells in a complex scene, but also identify the located white blood cells.
Currently, cell segmentation generally adopts an instance segmentation network, such as the Mask R-CNN segmentation model, whose cell edge segmentation accuracy is insufficient, making it difficult to cope with the segmentation task when cells are dense. To solve the problem of inaccurate edge segmentation, some works add an edge segmentation loss on top of the original loss function: boundaries are extracted from the prediction result and the labeling result, the difference between the predicted boundary and the labeled boundary is calculated, and the edge segmentation accuracy is improved by reducing this loss. However, this approach suffers from the performance of the boundary extraction algorithm: the worse the boundary extraction, the greater the calculation error of the edge segmentation loss. Especially in dense cell scenes, cells squeeze each other and deform, cell boundaries become irregular, the boundary extraction effect deteriorates, and the training of the segmentation model is negatively affected. In addition, when computing the loss, boundary extraction is performed on the prediction result and the segmentation loss of each pixel on the boundary is strengthened; on the one hand, this mode is also disturbed by the performance of the boundary extraction algorithm and is difficult to adapt to dense cell segmentation scenes; on the other hand, strengthening only the segmentation loss of the pixels on the boundary cannot solve the classification errors of pixels just outside and inside the boundary, and these classification errors may decrease the cell segmentation effect and adversely affect downstream tasks.
In the prior art, the Chinese patent application with publication number CN110060229A discloses an automatic cell location and segmentation method for bone marrow leukocytes, which includes extracting a bone marrow leukocyte image WhiteCellGray from an input bone marrow RGB image and binarizing WhiteCellGray by the maximum inter-class variance method (Otsu) to obtain a leukocyte binary image WhiteCellBW, among other steps. The method extracts the bone marrow leukocyte channel image from the specimen image through color deconvolution, then realizes the segmentation and positioning of bone marrow leukocytes through binarization, hole filling, morphological smoothing, watershed and other operations. The color deconvolution fundamentally eliminates the adverse effect of mature red blood cells on the later segmentation and identification of bone marrow leukocytes, improving the segmentation and positioning accuracy; the watershed segmentation separates adherent cells well and reduces the cell miss rate. However, the method is computationally complex, and a single binarization cannot effectively improve the accuracy of positioning and segmentation, so the positioning and segmentation accuracy in the cell segmentation process is low, the operation efficiency is low, and only one type of white blood cell (white blood cells without cell membranes) can be handled; improvement is therefore needed.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides an automatic segmentation method for leukocytes in bone marrow cells. Feature extraction of bone marrow cells is performed on the image through a convolutional neural network, so that bone marrow cell data acquired under different operating conditions can be processed on a large scale; after the leukocyte nucleus image and the leukocyte membrane image are processed separately with multiple binarizations, the complete leukocyte image is obtained by superposition. This effectively improves the accuracy of the overall positioning and segmentation of the leukocyte image and generates a data set, facilitating the development of artificial intelligence for subsequent leukocyte type identification.
The invention provides an automatic segmentation method of white blood cells in bone marrow cells, which comprises the following steps:
S1, extracting the salient cells CellGray from an input bone marrow cell RGB image through a convolutional neural network;
S2, binarizing the bone marrow cell image CellGray to obtain a leukocyte nucleus binary image;
S3, performing image morphology opening and closing operations on the leukocyte nucleus binary image to obtain WhiteCellMorph, and performing a watershed algorithm operation on WhiteCellMorph to obtain the complete leukocyte nucleus segmentation map LeukocyteNucleus;
S4, masking the corresponding values in the CellGray data obtained in step S1 with the LeukocyteNucleus obtained in step S3, and obtaining the leukocyte membrane LeukocyteMembrane by the maximum inter-class variance method and image morphology opening and closing operations;
S5, combining the data obtained in steps S3 and S4 to obtain a segmented complete leukocyte image, extracting the accurate positions of the leukocytes, and generating a standard data set.
Preferably, in step S1, the salient cells CellGray are extracted from the bone marrow cell RGB image as follows: a deep neural network VGG16 model is adopted, the model is fine-tuned with data and its output analyzed so that the various cells that obviously interfere with the output are suppressed, and the salient cells CellGray are extracted, where CellGray = VGG16['block1_conv2'], and VGG16 is a network structure model composed of the multi-layer neuron functions z^l_p(i,j):

z^l_p(i,j) = Σ_{q=1}^{n_{l-1}} (k^l_{p,q} * a^{l-1}_q)(i,j) + b^l_p,    a^l_p = f(z^l_p),

where:
n_l is the number of convolution kernels of layer l;
k^l_{p,q} is the convolution kernel connecting channel p of layer l with channel q of layer l-1;
b^l_p is the bias corresponding to the p-th convolution kernel of layer l;
a^l_q is the output of channel q of layer l after the activation function f;
z^l_p(i,j) is the output of channel p of layer l at position (i,j) before the activation function.
Preferably, in step S2, the maximum inter-class variance method is used: the image is first divided into background and foreground according to the gray-level characteristics of the bone marrow cell image CellGray, and the image is then binarized and segmented according to the threshold obtained by the Otsu method, which maximizes the inter-class variance between the foreground and background images.
Preferably, the WhiteCellMorph in step S3 is obtained by the following calculation (an opening of the leukocyte nucleus binary image BW followed by a closing):

WhiteCellMorph = (BW ∘ b) • b,    where A ∘ b = (A ⊖ b) ⊕ b and A • b = (A ⊕ b) ⊖ b,

in which b denotes a structuring element, ⊖ denotes the morphological erosion operation, and ⊕ denotes the morphological dilation operation.
Preferably, performing the watershed algorithm operation on WhiteCellMorph in step S3 specifically comprises the following steps:
S31, classifying all pixels in the gradient image according to their gray values, and setting a geodesic distance threshold;
S32, finding the pixel points with the minimum gray value, from which the water level begins to rise; these points are the starting points;
S33, as the water level rises, it reaches the surrounding neighborhood pixels; the geodesic distance from each such pixel to the starting point is measured, and if it is smaller than the set threshold, the pixel is submerged, otherwise a dam is set at the pixel, thereby classifying the neighborhood pixels;
S34, as the water level rises further, more and higher dams are set, until the maximum gray value is reached and all regions meet on the watershed lines; these dams partition the whole image into pixel regions.
Preferably, in step S4, the leukocyte membrane is obtained as LeukocyteMembrane = MembraneBW ⊕ b, where MembraneBW is the binary image obtained from the masked CellGray data by the maximum inter-class variance method, b denotes a structuring element, and ⊕ denotes the morphological dilation operation.
Preferably, in step S5, the leukocyte nucleus obtained in step S3 and the cell membrane obtained in step S4 are added to obtain the complete leukocytes, a morphological operation is performed on the leukocytes, the contours of the processed leukocytes are then extracted to obtain their positions, and a data set is generated from the extracted coordinates in the data set format of the PASCAL VOC.
The automatic segmentation method for leukocytes in bone marrow cells has the following beneficial effects: feature extraction of bone marrow cells is performed on the image through the convolutional neural network, so that bone marrow cell data acquired under different operating conditions can be processed over a large range; the bone marrow leukocytes are segmented by combined operations such as binarization, hole filling, morphological smoothing and watershed; the convolutional neural network eliminates the adverse effects on the target cells of the differences between bone marrow cell images caused by factors such as staining agents, freezing and manual operation, thereby improving the segmentation accuracy of the bone marrow leukocytes; the watershed segmentation separates adherent cells well, reducing the cell miss rate; and after the leukocyte nuclei and leukocyte membranes are processed separately with multiple binarizations, superposition effectively improves the accuracy of the overall positioning and segmentation of the leukocyte image and generates a data set, facilitating the development of artificial intelligence for subsequent leukocyte type identification.
Drawings
Fig. 1 is a flow chart of the present invention.
Fig. 2 is an original image of bone marrow cells.
Fig. 3 is an image after feature extraction by a convolutional neural network.
Fig. 4 is a binary image of the white blood cell nucleus after treatment using the maximum inter-class variance method.
Fig. 5 is an image of a white blood cell nucleus after processing by a watershed algorithm.
FIG. 6 is an image of cell membranes after morphological operations.
Fig. 7 is an image of segmented white blood cells obtained by data combination.
Fig. 8 is a data set generated in a data set format of a PASCAL VOC from the extracted coordinates.
Detailed Description
The embodiments of the present invention will be described below clearly and completely with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, embodiments of the present invention. All other embodiments obtained by those skilled in the art without inventive effort fall within the scope of the present invention.
Examples: an automatic segmentation method of leucocytes in bone marrow cells.
Referring to fig. 1 to 8, an automatic segmentation method for leukocytes in bone marrow cells comprises the following steps:
(I) Bone marrow cell image neural network processing: the salient cells CellGray, including leukocyte nuclei, leukocyte membranes and erythrocytes, are extracted from the imported bone marrow RGB image. The method adopts the processing mode of the deep neural network VGG, which has strong generalization: with a small amount of data to fine-tune the model and an analysis of its output, the shallow layers of the VGG deep neural network suppress the various cells that obviously interfere with the output and extract the salient cells CellGray, where CellGray = VGG16['block1_conv2'], and VGG16 is a network structure model composed of the multi-layer neuron functions z^l_p(i,j):

z^l_p(i,j) = Σ_{q=1}^{n_{l-1}} (k^l_{p,q} * a^{l-1}_q)(i,j) + b^l_p,    a^l_p = f(z^l_p),

where:
n_l is the number of convolution kernels of layer l;
k^l_{p,q} is the convolution kernel connecting channel p of layer l with channel q of layer l-1;
b^l_p is the bias corresponding to the p-th convolution kernel of layer l;
a^l_q is the output of channel q of layer l after the activation function f;
z^l_p(i,j) is the output of channel p of layer l at position (i,j) before the activation function.
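The layer computation above can be sketched in plain NumPy. This is a hypothetical minimal re-implementation for illustration only (not the actual VGG16 weights); the function name `conv_layer`, stride 1, 'same' zero padding and the ReLU activation are assumptions:

```python
import numpy as np

def conv_layer(a_prev, kernels, biases):
    """Compute z^l_p(i,j) = sum_q (k^l_{p,q} applied to a^{l-1}_q)(i,j) + b^l_p,
    then apply the ReLU activation a^l_p = max(z^l_p, 0).

    a_prev  : (n_{l-1}, H, W) activations of the previous layer
    kernels : (n_l, n_{l-1}, kh, kw) convolution kernels k^l_{p,q}
    biases  : (n_l,) biases b^l_p
    """
    n_prev, H, W = a_prev.shape
    n_l, _, kh, kw = kernels.shape
    ph, pw = kh // 2, kw // 2                      # 'same' zero padding
    padded = np.pad(a_prev, ((0, 0), (ph, ph), (pw, pw)))
    z = np.empty((n_l, H, W))
    for p in range(n_l):                           # output channel p
        for i in range(H):
            for j in range(W):
                patch = padded[:, i:i + kh, j:j + kw]
                # sum over input channels q and kernel positions (u, v)
                z[p, i, j] = np.sum(kernels[p] * patch) + biases[p]
    return np.maximum(z, 0.0)                      # ReLU activation
```

As in most CNN frameworks, this actually computes a cross-correlation rather than a flipped-kernel convolution; the formula above is the same up to that convention.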
As can be seen by comparing fig. 2 and fig. 3, the neural network processing eliminates the adverse effects on the target cells of the differences between bone marrow cell images caused by factors such as staining agents, freezing and manual operation, and highlights the bone marrow cells, thereby improving the segmentation accuracy of the bone marrow leukocytes.
(II) The bone marrow cell image CellGray is binarized by the maximum inter-class variance method (Otsu) to obtain the leukocyte nucleus binary image.
Otsu's method is an algorithm for determining the threshold for binary image segmentation; it is called the maximum inter-class variance method because, after the image is binarized with the threshold it produces, the inter-class variance between the foreground and background images is maximal. The maximum inter-class variance method (Otsu) divides an image into background and foreground according to its gray-level characteristics. Since variance is a measure of the uniformity of the gray-level distribution, the larger the inter-class variance between background and foreground, the larger the difference between the two parts constituting the image; when foreground pixels are misassigned to the background or background pixels to the foreground, this difference becomes smaller. A segmentation that maximizes the inter-class variance therefore minimizes the probability of misclassification.
Otsu's method is sensitive to image noise, and when the size ratio of target to background differs greatly the inter-class variance function may be bimodal or multimodal, so it reliably reduces the probability of misclassification only for single-target segmentation. Referring to fig. 4, the leukocyte nucleus binary image processed by the maximum inter-class variance method is clearly displayed, so the probability of misclassification is greatly reduced.
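The threshold selection described above can be sketched in pure NumPy. This is an illustrative toy (the image and its two intensity modes are made up); a real pipeline would more likely call an existing Otsu implementation such as OpenCV's:

```python
import numpy as np

def otsu_threshold(gray):
    """Return the threshold t that maximizes the inter-class variance
    between background (<= t) and foreground (> t) of an 8-bit image."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    prob = hist / hist.sum()
    omega = np.cumsum(prob)                  # background class probability
    mu = np.cumsum(prob * np.arange(256))    # cumulative mean
    mu_t = mu[-1]                            # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        # inter-class variance sigma_b^2(t) for every candidate threshold t
        sigma_b2 = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b2 = np.nan_to_num(sigma_b2)       # empty classes contribute nothing
    return int(np.argmax(sigma_b2))

# Bimodal toy image: dark "background" around 40, bright "nuclei" around 200.
rng = np.random.default_rng(1)
img = np.clip(rng.normal(40, 10, (64, 64)), 0, 255).astype(np.uint8)
img[16:48, 16:48] = np.clip(rng.normal(200, 10, (32, 32)), 0, 255).astype(np.uint8)
t = otsu_threshold(img)
binary = img > t
```

The returned threshold falls between the two intensity modes, so the bright patch is separated from the dark surround exactly as the text describes.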
(III) Voids exist in the leukocyte nucleus binary image data, and WhiteCellMorph is obtained by performing image morphology opening and closing operations on it, by the following calculation (an opening of the binary image BW followed by a closing):

WhiteCellMorph = (BW ∘ b) • b,    where A ∘ b = (A ⊖ b) ⊕ b and A • b = (A ⊕ b) ⊖ b,

in which b denotes a structuring element, ⊖ denotes the morphological erosion operation, and ⊕ denotes the morphological dilation operation.
A watershed operation is then performed on WhiteCellMorph in the following way to obtain the leukocyte nucleus segmentation map LeukocyteNucleus:
a. classify all pixels in the gradient image according to their gray values, and set a geodesic distance threshold;
b. find the pixel points with the minimum gray value, from which the water level begins to rise; these points are the starting points;
c. as the water level rises, it reaches the surrounding neighborhood pixels; the geodesic distance from each such pixel to the starting point is measured, and if it is smaller than the set threshold, the pixel is submerged, otherwise a dam is set at the pixel, thereby classifying the neighborhood pixels;
d. as the water level rises further, more and higher dams are set, until the maximum gray value is reached and all regions meet on the watershed lines; these dams partition the whole image into pixel regions.
Referring to fig. 5, the watershed segmentation separates adherent cells well and reduces the cell miss rate.
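Steps a–d can be illustrated with SciPy's image-foresting-transform watershed (`scipy.ndimage.watershed_ift`). The two touching disks, the manually placed markers and the distance-based cost surface are all assumptions for this toy example; the patent's pipeline would derive markers from the morphology result instead:

```python
import numpy as np
from scipy import ndimage as ndi

# Binary image: two overlapping "nuclei" (disks) that touch each other.
h, w = 40, 60
yy, xx = np.mgrid[0:h, 0:w]
disk1 = (yy - 20) ** 2 + (xx - 22) ** 2 <= 100
disk2 = (yy - 20) ** 2 + (xx - 38) ** 2 <= 100
binary = disk1 | disk2

# Distance transform: pixels far from the background form "deep" basins.
dist = ndi.distance_transform_edt(binary)

# Markers: one labelled seed per nucleus (placed at the disk centres here;
# in practice they would come from local maxima of the distance map),
# plus a background label.
markers = np.zeros((h, w), dtype=np.int16)
markers[20, 22] = 1
markers[20, 38] = 2
markers[~binary] = 3

# watershed_ift floods from the markers over an 8-bit cost surface;
# inverting the distance makes the basin centres the lowest-cost points,
# so the "dams" end up on the ridge between the two disks.
cost = (255 - dist / dist.max() * 255).astype(np.uint8)
labels = ndi.watershed_ift(cost, markers)
labels[~binary] = 0   # keep only the foreground segmentation
```

The two adherent disks come out as two separately labelled regions, mirroring how the watershed separates touching nuclei in fig. 5.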
(IV) In the CellGray data obtained in step (I), the corresponding values are masked with the LeukocyteNucleus obtained in step (III), and morphological opening and closing operations are then performed on the image binarized by the maximum inter-class variance method (Otsu) to obtain the leukocyte membrane LeukocyteMembrane = MembraneBW ⊕ b, where MembraneBW is the binarized masked image, b denotes a structuring element, and ⊕ denotes the morphological dilation operation. Referring to fig. 6, the operation of step (IV) yields a clear cell membrane image, strengthening the cell membrane image and improving the accuracy of cell membrane positioning and segmentation.
(V) The data obtained in steps (III) and (IV) are combined to obtain the segmented leukocyte image, the accurate positions of the leukocytes are extracted, and a standard data set is generated. The leukocyte nucleus image obtained in step (III) and the cell membrane image obtained in step (IV) together form the complete leukocyte image (refer to fig. 7); adding the two yields a clear and complete leukocyte image, and since only simple superposition is required, the calculation is simple and the response is fast. A morphological operation is then performed on the leukocyte image, and the contours of the processed leukocytes are extracted to obtain their positions, which effectively improves the accuracy of positioning and segmentation. A data set (see fig. 8) is generated from the extracted coordinates in the data set format of the PASCAL VOC to facilitate subsequent system calls.
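Writing the extracted coordinates out in PASCAL-VOC-style XML might look like the sketch below; the file name, image size, class name and box coordinates are made-up examples, not values from the patent:

```python
import xml.etree.ElementTree as ET

def make_voc_annotation(filename, width, height, boxes):
    """Build a PASCAL-VOC-style XML annotation string.

    boxes: list of (name, xmin, ymin, xmax, ymax) tuples, one per leukocyte.
    """
    root = ET.Element("annotation")
    ET.SubElement(root, "filename").text = filename
    size = ET.SubElement(root, "size")
    ET.SubElement(size, "width").text = str(width)
    ET.SubElement(size, "height").text = str(height)
    ET.SubElement(size, "depth").text = "3"          # RGB image
    for name, xmin, ymin, xmax, ymax in boxes:
        obj = ET.SubElement(root, "object")
        ET.SubElement(obj, "name").text = name
        bnd = ET.SubElement(obj, "bndbox")
        ET.SubElement(bnd, "xmin").text = str(xmin)
        ET.SubElement(bnd, "ymin").text = str(ymin)
        ET.SubElement(bnd, "xmax").text = str(xmax)
        ET.SubElement(bnd, "ymax").text = str(ymax)
    return ET.tostring(root, encoding="unicode")

# One hypothetical leukocyte bounding box extracted from the contours.
xml_str = make_voc_annotation("marrow_001.jpg", 1024, 768,
                              [("leukocyte", 120, 80, 260, 230)])
```

One XML file per image, named after the image, is the usual VOC layout, which is what makes the generated data set directly consumable by common detection training code.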
According to the invention, feature extraction of bone marrow cells is performed on the image through the convolutional neural network, so that bone marrow cell data acquired under different operating conditions can be processed over a large range; the bone marrow leukocytes are segmented by combined operations such as binarization, hole filling, morphological smoothing and watershed; the convolutional neural network eliminates the adverse effects on the target cells of the differences between bone marrow cell images caused by factors such as staining agents, freezing and manual operation, thereby improving the segmentation accuracy of the bone marrow leukocytes; the watershed segmentation separates adherent cells well, reducing the cell miss rate; and after the leukocyte nucleus image and the leukocyte membrane image are processed separately with multiple binarizations, the complete leukocyte image is obtained by superposition, which effectively improves the accuracy of the overall positioning and segmentation of the leukocyte image and generates a data set, facilitating the development of artificial intelligence for subsequent leukocyte type identification.
The foregoing is a preferred embodiment of the present invention, but the invention is not limited to this embodiment or to the disclosure of the drawings; all equivalents and modifications made without departing from the spirit of the disclosure fall within the scope of the present invention.
Claims (7)
1. An automatic segmentation method for white blood cells in bone marrow cells, characterized by comprising the following steps:
S1, extracting the salient cells CellGray from an input bone marrow cell RGB image through a convolutional neural network;
S2, binarizing the bone marrow cell image CellGray to obtain a leukocyte nucleus binary image;
S3, performing image morphology opening and closing operations on the leukocyte nucleus binary image to obtain WhiteCellMorph, and performing a watershed algorithm operation on WhiteCellMorph to obtain the complete leukocyte nucleus segmentation map LeukocyteNucleus;
S4, masking the corresponding values in the CellGray data obtained in step S1 with the LeukocyteNucleus obtained in step S3, and obtaining the leukocyte membrane LeukocyteMembrane by the maximum inter-class variance method and image morphology opening and closing operations;
S5, combining the data obtained in steps S3 and S4 to obtain a segmented complete leukocyte image, extracting the accurate positions of the leukocytes, and generating a standard data set.
2. The method for automatic segmentation of leukocytes in bone marrow cells according to claim 1, wherein in step S1 the salient cells CellGray are extracted from the bone marrow cell RGB image as follows: a deep neural network VGG16 model is adopted, the model is fine-tuned with data and its output analyzed so that the various cells that obviously interfere with the output are suppressed, and the salient cells CellGray are extracted, where CellGray = VGG16['block1_conv2'], and VGG16 is a network structure model composed of the multi-layer neuron functions z^l_p(i,j):

z^l_p(i,j) = Σ_{q=1}^{n_{l-1}} (k^l_{p,q} * a^{l-1}_q)(i,j) + b^l_p,    a^l_p = f(z^l_p),

where:
n_l is the number of convolution kernels of layer l;
k^l_{p,q} is the convolution kernel connecting channel p of layer l with channel q of layer l-1;
b^l_p is the bias corresponding to the p-th convolution kernel of layer l;
a^l_q is the output of channel q of layer l after the activation function f;
z^l_p(i,j) is the output of channel p of layer l at position (i,j) before the activation function.
3. The automatic segmentation method for white blood cells in bone marrow cells according to claim 1, wherein in step S2 the maximum inter-class variance method is used: the image is first divided into background and foreground according to the gray-level characteristics of the bone marrow cell image CellGray, and the image is then binarized and segmented according to the threshold obtained by the Otsu method, which maximizes the inter-class variance between the foreground and background images.
4. The method for automatic segmentation of leukocytes in bone marrow cells according to claim 1, wherein WhiteCellMorph in step S3 is obtained by means of the following calculation formula:
WhiteCellMorph = (((A ⊖ b) ⊕ b) ⊕ b) ⊖ b
in the above formula, A is the binary nucleus image of step S2, b represents a structuring element, ⊖ denotes the morphological erosion operation, and ⊕ denotes the morphological dilation operation; (A ⊖ b) ⊕ b is the morphological opening of A by b, and (A ⊕ b) ⊖ b its morphological closing.
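The open-close operation of claim 4 can be sketched on a binary grid (a minimal sketch assuming a 3x3 square structuring element b; opening removes small bright specks, closing fills small holes):

```python
# Morphological erosion (A ⊖ b) and dilation (A ⊕ b) with a 3x3 square
# structuring element, composed into opening then closing.

def _neighborhood_op(img, agg):
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            vals = [img[i + di][j + dj]
                    for di in (-1, 0, 1) for dj in (-1, 0, 1)
                    if 0 <= i + di < h and 0 <= j + dj < w]
            out[i][j] = agg(vals)
    return out

def erode(img):   # A ⊖ b: minimum over the 3x3 neighborhood
    return _neighborhood_op(img, min)

def dilate(img):  # A ⊕ b: maximum over the 3x3 neighborhood
    return _neighborhood_op(img, max)

def open_close(img):
    opened = dilate(erode(img))       # opening: erode then dilate
    return erode(dilate(opened))      # closing: dilate then erode
```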
5. The method for automatic segmentation of leukocytes in bone marrow cells according to claim 1, wherein said step S3 comprises the steps of:
S31, classifying all pixels in the gradient image according to their gray values, and setting a geodesic distance threshold;
S32, finding the pixel points with the minimum gray value as starting points, and letting the water level rise from this minimum;
S33, during the rise of the water level, when surrounding neighborhood pixels are reached, measuring the geodesic distance from these pixels to the starting point; if it is smaller than the set threshold, the pixels are submerged, otherwise a dam is built at these pixels, thereby classifying the neighborhood pixels;
S34, as the water level rises, more and higher dams are built until the maximum gray value is reached and all regions meet on the watershed lines; the dams then partition the whole image.
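The flooding process of steps S31 to S34 can be sketched compactly (a simplified sketch, not a full Vincent-Soille watershed and without the geodesic distance threshold): pixels are processed in increasing gray order; each pixel either starts a new basin at a minimum, joins the single neighbouring basin, or becomes a dam pixel when two basins meet.

```python
# Simplified watershed flooding: labels > 0 are basins, -1 marks a dam.

def watershed(gray):
    h, w = len(gray), len(gray[0])
    labels = [[0] * w for _ in range(h)]   # 0 = not yet flooded
    # Rising water level: visit pixels from darkest to brightest.
    order = sorted((gray[i][j], i, j) for i in range(h) for j in range(w))
    next_label = 1
    for _, i, j in order:
        neigh = set()
        for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < h and 0 <= nj < w and labels[ni][nj] > 0:
                neigh.add(labels[ni][nj])
        if not neigh:                # new regional minimum: start a basin
            labels[i][j] = next_label
            next_label += 1
        elif len(neigh) == 1:        # submerged into the single basin
            labels[i][j] = neigh.pop()
        else:                        # two basins meet: build a dam
            labels[i][j] = -1
    return labels
```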
6. The method for automatic segmentation of leukocytes in bone marrow cells according to claim 1, wherein in step S4 the leukocyte membrane is obtained as LeukocyteMembrane = CellMembrane ⊕ b, wherein CellMembrane is the membrane mask obtained by the Otsu thresholding of step S4, b represents a structuring element, and ⊕ denotes the morphological dilation operation.
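One plausible realization of the membrane extraction in step S4 (an assumption about the operation, not the patent's exact formula: the membrane region is what remains of the whole-cell mask after the nucleus mask is removed):

```python
# Assumed membrane extraction: subtract the nucleus mask from the
# whole-cell mask, leaving the surrounding ring of membrane pixels.

def membrane_ring(cell_mask, nucleus_mask):
    """Both masks: equal-size 2-D lists of 0/1 integers."""
    return [[c & (1 - n) for c, n in zip(cr, nr)]
            for cr, nr in zip(cell_mask, nucleus_mask)]
```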
7. The method for automatic segmentation of leukocytes in bone marrow cells according to claim 1, wherein: in the step S5, the leukocyte nuclei obtained in step S3 and the cell membranes obtained in step S4 are added to obtain the complete leukocytes, a morphological operation is performed on the leukocytes, contours are extracted from the processed leukocytes to acquire their positions, and a data set is generated from the extracted coordinates in the data set format of PASCAL VOC.
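Writing an extracted leukocyte bounding box into the PASCAL VOC annotation format, as in claim 7's final step, can be sketched with the standard library (field values here are illustrative; a minimal annotation, not a complete VOC record):

```python
# Build one PASCAL VOC annotation XML for a list of bounding boxes.
import xml.etree.ElementTree as ET

def voc_annotation(filename, width, height, boxes):
    """boxes: list of (name, xmin, ymin, xmax, ymax) in pixel coordinates."""
    root = ET.Element("annotation")
    ET.SubElement(root, "filename").text = filename
    size = ET.SubElement(root, "size")
    for tag, val in (("width", width), ("height", height), ("depth", 3)):
        ET.SubElement(size, tag).text = str(val)
    for name, xmin, ymin, xmax, ymax in boxes:
        obj = ET.SubElement(root, "object")
        ET.SubElement(obj, "name").text = name
        bnd = ET.SubElement(obj, "bndbox")
        for tag, val in (("xmin", xmin), ("ymin", ymin),
                         ("xmax", xmax), ("ymax", ymax)):
            ET.SubElement(bnd, tag).text = str(val)
    return ET.tostring(root, encoding="unicode")
```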
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310527841.7A CN116543160A (en) | 2023-05-11 | 2023-05-11 | Automatic segmentation method for leucocytes in bone marrow cells |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116543160A true CN116543160A (en) | 2023-08-04 |
Family
ID=87451937
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310527841.7A Pending CN116543160A (en) | 2023-05-11 | 2023-05-11 | Automatic segmentation method for leucocytes in bone marrow cells |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116543160A (en) |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||