CN114926398B - Adhesion peripheral blood cell segmentation method based on deep learning image processing - Google Patents
Abstract
The invention belongs to the technical field of computer image processing and relates to a method for segmenting adherent peripheral blood cells based on deep-learning image processing. U-net semantic segmentation is used to identify all connected cell-nucleus regions in a peripheral blood smear, thereby locating the centre point of each white blood cell. A Euclidean-distance-based distance transform is applied to large-area connected regions to judge whether the nuclei of several cells are adhered, and thus to segment cells even when their nuclei adhere. A weighted undirected graph is introduced over the connected regions produced by the U-net segmentation; from the attributes of two nodes and the weight of the edge between them, the method judges whether two connected nucleus regions belong to the same cell and adjusts the cell centre position and the size of the segmentation frame. The method effectively solves the problems of difficult segmentation, incomplete cell segmentation and deviation of the cell centre-point location when cells aggregate in a blood sample.
Description
Technical Field
The invention belongs to the technical field of computer image processing and relates to a method for segmenting adherent peripheral blood cells based on deep-learning image processing.
Background
With the development of modern technology, computer processing of cell images plays an important role in medical diagnosis and medical image processing. Cell segmentation is the basis of cell feature extraction and cell recognition, and accurately segmenting cells from medical images remains a very challenging topic. Research on automatic cell identification therefore requires effective segmentation algorithms, and different algorithms have been proposed for different image characteristics: traditional thresholding, the watershed algorithm, edge detection (spatial differential operators, surface fitting, wavelet multi-scale edge detection), deep-learning-based object detection (Faster R-CNN, YOLO, etc.), and segmentation networks (FCN, DeepLab, PSPNet, the U-net family, etc.).
The main problems of traditional segmentation methods are as follows. 1. These algorithms perform well on dilute samples with clearly bounded cells, but human blood cells often overlap, so the existing algorithms cannot be applied directly to white-blood-cell segmentation. 2. They have many hyper-parameters and poor robustness: a method may perform well on one data set, or on data sets with little variability, but often performs poorly on blood-cell segmentation tasks with diverse cell morphologies. 3. The cell-nucleus region is located inaccurately and incompletely, which affects the size and position of the segmentation frame. 4. The edge of a segmentation result often contains part of the nuclear region of adjacent cells.
Deep-learning-based object detection suffers from poor robustness because data sets are scarce; owing to the polymorphism of leukocytes, object-detection algorithms often perform poorly on new data. The main problems of deep-learning segmentation networks are as follows. 1. Effective data sets are few: medical data involve privacy protection and are costly to label, so training sets are limited in scale and it is often difficult to train an excellent model. 2. Model parameters are numerous and expensive: deep models perform well partly because of their huge parameter counts and strong feature extraction, but training and prediction place high demands on hardware and take a long time, making direct deployment inconvenient. 3. Pixel-level segmentation identifies boundary information accurately, but increases the probability of mis-splitting one cell into two, which affects the accuracy of later white-blood-cell classification and counting.
Given the characteristics of peripheral blood smears, such as severe cell adhesion, the existing segmentation methods are not suitable for direct application to peripheral blood cell segmentation, and most face challenges of robustness, accuracy, efficiency and real-time operation. Combining deep learning with traditional morphological processing to effectively segment white blood cells of complex morphology therefore has important theoretical significance and practical application value.
Disclosure of Invention
The invention overcomes the defects of the prior art and provides a method for segmenting adherent peripheral blood cells based on deep-learning image processing; it solves the problems of difficult segmentation and incomplete or inaccurate cell-region identification when cells aggregate in a blood sample.
In order to achieve the above purpose, the present invention is realized by the following technical scheme.
The method for segmenting adherent peripheral blood cells based on deep-learning image processing comprises the following steps:
1) Identify cell-nucleus regions with a deep-learning U-net network; identify the connected regions in the U-net output, and calculate the area of each connected region and the average area S_avg = (∑S_i)/n, where S_i is the area of the i-th connected region and n is the number of connected regions.
2) Construct a weighted undirected graph G⟨V,E⟩ on the connected regions, where V is the set of all connected regions and E is the set of weights between any two connected regions; the value of each weight is calculated from the attributes of the two nodes.
3) Denote the set of connected regions whose area is greater than S_avg as V_DT = {V_DT_i | S(V_DT_i) > S_avg, V_DT_i ∈ V}, and loop over each V_DT_i in V_DT: apply a Euclidean-distance-based distance transform to V_DT_i and convert the transformed image into a binary image; identify the connected regions of the binary image and denote the set as V'_DT_i = {V_DT_i1, V_DT_i2, V_DT_i3, …, V_DT_in}; record len(V'_DT_i) as the number of cells contained in V_DT_i. If len(V'_DT_i) > 1, perform step 4); if len(V'_DT_i) = 1, perform step 5).
4) Update the undirected graph G⟨V,E⟩: add the nodes V_DT_i1, V_DT_i2, V_DT_i3, …, V_DT_in and delete the node V_DT_i.
5) Set a threshold S_overlap. Traverse all edges in the undirected graph, select the maximum edge weight E_m-n = max{E_i-j | V_i, V_j ∈ V}, and compare E_m-n with S_overlap. If E_m-n ≥ S_overlap, perform step 6) until E_m-n < S_overlap; otherwise, perform step 7).
6) Merge the two connected regions V_m and V_n and update the weighted undirected graph G⟨V,E⟩.
7) Traverse the adjusted connected-region set V and delete free, immature, small-area leukocyte regions according to the V.S_region attribute.
8) Traverse the adjusted connected-region set V, draw a segmentation window according to the V.Gx, V.Gy and V.S_cut attributes, and perform the segmentation.
Further, after step 8), the non-target cell-nucleus regions at the edge of each segmentation window are cleared and filled with a background colour.
Further, in step 1), the peripheral blood smear is input into a trained U-net network, and the output is a binary image in which 1 represents a cell-nucleus region and 0 represents a non-nucleus region. The nucleus connected regions are searched, the maximum connected area is recorded as S_max, and a small threshold S_threshold = 0.01*S_max is selected to filter out small-area noise regions.
Further, in step 2), the attribute values of all connected regions are first calculated; the attributes include the barycentre coordinates G = (Gx, Gy), the width and height of the circumscribed rectangle (W, H), the nucleus connected area S_region, and the cell cutting-window size S_cut, where S_cut = [1.5*max(W, H)]^2. A weighted undirected graph G⟨V,E⟩ is then constructed, where V is the set of all connected regions, the node attributes include V.Gx, V.Gy, V.W, V.H, V.S_region and V.S_cut, and E is the set of weights between any two connected regions, each weight calculated from the attributes of the two nodes. Let V_i and V_j denote two connected regions and E_i-j the weight of the edge between them (E_i-j = E_j-i); then E_i-j = (V_i.S_cut ∩ V_j.S_cut)/min{V_i.S_cut, V_j.S_cut}, i.e. the overlap area of the two cutting windows normalised by the smaller one.
Further, the Euclidean distance is the distance between a pixel and its nearest background pixel.
Further, in step 3), the maximum pixel value of the transformed image is recorded as max_distance, and the transformed image is converted into a binary image with 0.5*max_distance as the threshold.
Further, in step 4), V_DT_ij.S_cut = 1.5*V_DT_ij.S_region/len(V'_DT_i), V_DT_ij.S_region = V_DT_ij.S_region/len(V'_DT_i), V_DT_ij.W = sqrt(V_DT_ij.S_cut), V_DT_ij.H = sqrt(V_DT_ij.S_cut), and the weights of the edges between the newly added nodes and the remaining connected regions are calculated.
Further, in step 6):
S1: denote the newly added connected region as V_new, whose node attributes are calculated as follows:
V_new.G = (V_m.G*V_m.S_region + V_n.G*V_n.S_region)/(V_m.S_region + V_n.S_region),
V_new.S_region = V_m.S_region + V_n.S_region,
array_x = [V_m.Gx ± 0.5*V_m.W, V_n.Gx ± 0.5*V_n.W],
array_y = [V_m.Gy ± 0.5*V_m.H, V_n.Gy ± 0.5*V_n.H],
V_new.W = max(array_x) - min(array_x), V_new.H = max(array_y) - min(array_y),
V_new.S_cut = [1.5*max(V_new.W, V_new.H)]^2.
s2: increase V new Relation weight of node and other connected region (except m and n), E new-i =max(E i-m ,E i-n ) Wherein i is V and
s3: delete V m V (V) n Two connected region nodes and node-related edges.
S4: and (5) re-executing the step (5).
Compared with the prior art, the invention has the following beneficial effects:
peripheral blood leukocyte identification (peripheral blood smear analysis) is an important means of clinical testing. The accuracy of the blood cell classification count results is directly affected by the quality of the cell segmentation effect. The application of the computer image processing and deep learning technology can greatly improve the efficiency of peripheral blood smear analysis.
Besides white blood cells, leukocyte images also contain many other visible components, such as red blood cells and the stained background. Based on the white-blood-cell staining principle and the analysis of a large number of stained sample images, the invention provides an accurate leukocyte region detection method that combines a U-net semantic segmentation network with morphological processing, solving the problems of difficult segmentation and incomplete or inaccurate cell-region identification when cells aggregate in a blood sample.
1. To address the easy adhesion and overlap of leukocyte cytoplasm in peripheral blood smears, and the unclear boundaries, fuzzy centre-point location and poor robustness of traditional morphological segmentation, the invention replaces the traditional segmentation method with U-net semantic segmentation, accurately and efficiently identifying all connected nucleus regions in the smear so as to locate the centre point of each white blood cell. Pixel-level semantic segmentation locates the nucleus regions precisely, so all nucleus regions are identified effectively and quickly. The method greatly improves accuracy and also supports parallel segmentation of multiple images, enabling real-time segmentation.
The U-net network can identify small regions and avoids missing small nucleus connected regions within a cell, so the cell centre is located more accurately. Compared with other deep-learning networks the segmentation speed is greatly improved: the parameter count of the U-net network is far lower, so segmentation results can be obtained in real time. This step therefore ensures both the accuracy and the real-time performance of the segmentation process.
2. The invention applies a Euclidean-distance-based distance transform to large-area connected regions to judge whether the nuclei of several cells are adhered. If so, the centre position of each cell can be located from the distance-transform result, realising cell segmentation under nuclear adhesion. Using the distance transform to further judge whether a large connected region belongs to several cells effectively solves cell segmentation in the case of nuclear adhesion.
3. The invention introduces a weighted undirected graph over the connected regions produced by the U-net segmentation. The connected regions and their attributes serve as the nodes and node attributes of the graph, and the overlap area of the circumscribed rectangular frames of two connected regions serves as the weight of the edge between their nodes. From the attributes of two nodes and the weight of the edge between them, the method judges whether two connected nucleus regions belong to the same cell and adjusts the cell centre position and the size of the segmentation frame. This effectively solves incomplete cell segmentation and deviation of the centre-point location, in particular for cells whose nucleus consists of several separate lobes. The centre position and size of the cell-segmentation window are adjusted according to the attributes of the nucleus connected regions, ensuring the centrality and integrity of the target white blood cell in the segmentation window.
4. To reduce the influence of nuclei outside the region of interest (nuclei of non-primary cells) on later classification, the invention eliminates border objects, replacing the nucleus regions at the window edge with the background colour and thereby providing high-quality input for classification. Clearing the non-target nucleus regions at the edge of the segmentation window greatly improves the quality of the segmentation result and ensures the uniqueness of the white blood cell in each window.
The proposed algorithm effectively guarantees the centrality, integrity and uniqueness of the target cell in the segmentation window while balancing robustness, accuracy, efficiency and real-time performance. Even when cells aggregate, the algorithm remains stable at a reasonable computational cost, assisting the final goal of clinical examination: blood-cell classification and counting.
Drawings
FIG. 1 is a flow chart of a method for segmentation of adherent peripheral blood cells based on deep learning and digital image processing techniques according to an embodiment;
FIG. 2 is a first set of effect-comparison images of white-blood-cell segmentation by the method of the embodiment; (a) is the original image and (b) is the segmentation result, in which each box is a segmentation frame.
FIG. 3 is a second set of effect-comparison images of white-blood-cell segmentation by the method of the embodiment; (a) is the original image and (b) is the segmentation result, in which overlapping boxes show the segmentation of cells with adhered nuclei.
FIG. 4 is a third set of effect-comparison images of white-blood-cell segmentation by the method of the embodiment; (a) is the original image and (b) is the segmentation result, in which overlapping boxes show the segmentation of cells with adhered nuclei.
FIG. 5 is a fourth set of effect-comparison images of white-blood-cell segmentation by the method of the embodiment; (a) is the original image and (b) is the segmentation result, in which each box is a segmentation frame.
FIG. 6 compares the result before and after clearing the edge nuclei of a segmentation window; (a) is the segmented window and (b) is the result after the nucleus regions in the edge area are deleted and filled with the background colour.
Detailed Description
To make the technical problems to be solved, the technical scheme and the beneficial effects clearer, the invention is described in further detail below with reference to the embodiments and the drawings. It should be understood that the specific embodiments described here are for illustration only and are not intended to limit the scope of the invention, nor is the scope of protection limited to them.
This embodiment provides a method for segmenting adherent peripheral blood cells based on deep learning and digital image processing; FIG. 1 is the flowchart of the method.
In clinic, manual microscopy is the "gold standard" of white-blood-cell examination, and most hospitals still screen for blood diseases by manual microscopy; an image-analysis system that can replace manual microscopy and automatically identify white blood cells is therefore an urgent need of clinical haematology departments.
Because leukocyte cytoplasm overlaps very easily, white-blood-cell segmentation has always been a difficulty in developing automatic leukocyte classification and counting systems, and accurate cell segmentation is an essential step towards accurate leukocyte classification. By carefully studying the characteristics of peripheral blood smear colour images, this embodiment provides a leukocyte region detection scheme: a deep-learning U-net network first identifies the nucleus regions accurately, and morphological processing then locates the cells precisely from the nucleus connected regions, realising accurate cell segmentation. The method comprises the following steps.
First step: identify the cell-nucleus regions with the U-net network. The peripheral blood smear is input into a trained U-net network, and the output is a binary image in which 1 represents a cell-nucleus region and 0 represents a non-nucleus region.
Second step: search for nucleus connected regions. Identify the connected regions in the U-net output and calculate the area of each; record the largest connected area as S_max and select a small threshold S_threshold = 0.01*S_max to filter out small-area noise regions; then calculate the average area S_avg = (∑S_i)/n, where S_i is the area of the i-th connected region and n is the number of connected regions.
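As an illustration of this step, the region search, noise filtering and average-area calculation can be sketched in Python with `scipy.ndimage`. The function name and the use of a binary NumPy mask to stand in for the U-net output are assumptions of this example, not the patent's code:

```python
import numpy as np
from scipy import ndimage

def find_nucleus_regions(mask):
    """Label nucleus connected regions, drop small noise (< 0.01*S_max), return labels, areas, S_avg."""
    labels, n = ndimage.label(mask)
    if n == 0:
        return labels, {}, 0.0
    # area S_i of each connected region i
    areas = {i: int((labels == i).sum()) for i in range(1, n + 1)}
    s_max = max(areas.values())
    s_threshold = 0.01 * s_max              # S_threshold = 0.01 * S_max
    for i, a in list(areas.items()):
        if a < s_threshold:                 # filter out small-area noise regions
            labels[labels == i] = 0
            del areas[i]
    s_avg = sum(areas.values()) / len(areas)  # S_avg = (sum of S_i) / n
    return labels, areas, s_avg
```

The remaining labels and `s_avg` feed the later steps (graph construction and the large-region test against S_avg).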
Third step: construct a weighted undirected graph G⟨V,E⟩ on the connected regions. First calculate the attribute values of all connected regions: the barycentre coordinates G = (Gx, Gy), the width and height of the circumscribed rectangle (W, H), the nucleus connected area S_region, and the cell cutting-window size S_cut = [1.5*max(W, H)]^2. Then construct the graph G⟨V,E⟩, where V is the set of all connected regions, the node attributes include V.Gx, V.Gy, V.W, V.H, V.S_region and V.S_cut, and E is the set of weights between any two connected regions, each weight calculated from the attributes of the two nodes. Let V_i and V_j denote two connected regions and E_i-j the weight of the edge between them (E_i-j = E_j-i); then E_i-j = (V_i.S_cut ∩ V_j.S_cut)/min{V_i.S_cut, V_j.S_cut}, i.e. the overlap area of the two cutting windows normalised by the smaller one.
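The node attributes and the edge weight of this step (overlap of the two square cutting windows divided by the smaller window) can be sketched as follows. Representing a node as a plain dictionary and the function names are assumptions of this example:

```python
def node_attrs(gx, gy, w, h, s_region):
    """Attributes of one connected region: centre, bbox, S_region, S_cut."""
    side = 1.5 * max(w, h)                  # side of the square cutting window
    return {"Gx": gx, "Gy": gy, "W": w, "H": h,
            "S_region": s_region, "S_cut": side * side, "side": side}

def edge_weight(a, b):
    """Overlap area of the two cutting windows, normalised by the smaller window."""
    def interval_overlap(c1, s1, c2, s2):
        lo = max(c1 - s1 / 2, c2 - s2 / 2)
        hi = min(c1 + s1 / 2, c2 + s2 / 2)
        return max(0.0, hi - lo)
    ox = interval_overlap(a["Gx"], a["side"], b["Gx"], b["side"])
    oy = interval_overlap(a["Gy"], a["side"], b["Gy"], b["side"])
    return ox * oy / min(a["S_cut"], b["S_cut"])
```

With this normalisation the weight is 1 when the smaller window lies entirely inside the larger one and 0 when the windows do not touch, which is what the later S_overlap threshold is compared against.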
Fourth step: denote the set of connected regions whose area is greater than S_avg as V_DT = {V_DT_i | S(V_DT_i) > S_avg, V_DT_i ∈ V}, and loop over each V_DT_i in V_DT. Apply to V_DT_i a distance transform based on the Euclidean distance (the distance between a pixel and its nearest background pixel), record the maximum pixel value of the transformed image as max_distance, and convert the transformed image into a binary image with 0.5*max_distance as the threshold. Identify the connected regions of the binary image, denote the set as V'_DT_i = {V_DT_i1, V_DT_i2, V_DT_i3, …, V_DT_in}, and record len(V'_DT_i) as the number of cells contained in V_DT_i. If len(V'_DT_i) > 1, perform the fifth step; if len(V'_DT_i) = 1, perform the sixth step.
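The distance-transform test of this step can be sketched with `scipy.ndimage.distance_transform_edt`; the function name is an assumption, and the input is assumed to be the binary mask of one large connected region:

```python
import numpy as np
from scipy import ndimage

def count_adherent_nuclei(region_mask):
    """Euclidean distance transform, threshold at 0.5*max_distance, count the remaining components."""
    dist = ndimage.distance_transform_edt(region_mask)
    max_distance = dist.max()
    binary = dist > 0.5 * max_distance     # keep only the deep cores of the region
    _, n = ndimage.label(binary)
    return n                               # len(V'_DT_i): number of cells in the region
```

Two nuclei joined by a thin bridge keep two separate cores after thresholding, so the count exceeds 1 exactly when several cells adhere.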
Fifth step: update the undirected graph G⟨V,E⟩. Add the nodes V_DT_i1, V_DT_i2, V_DT_i3, …, V_DT_in and delete the node V_DT_i, where V_DT_ij.S_cut = 1.5*V_DT_ij.S_region/len(V'_DT_i), V_DT_ij.S_region = V_DT_ij.S_region/len(V'_DT_i), V_DT_ij.W = sqrt(V_DT_ij.S_cut) and V_DT_ij.H = sqrt(V_DT_ij.S_cut); then calculate the weights of the edges between the newly added nodes and the remaining connected regions.
Sixth step: set a threshold S_overlap. Traverse all edges in the undirected graph, select the maximum edge weight E_m-n = max{E_i-j | V_i, V_j ∈ V}, and compare E_m-n with S_overlap. If E_m-n ≥ S_overlap, perform the seventh step until E_m-n < S_overlap; otherwise, perform the eighth step.
Seventh step: merge the two connected regions V_m and V_n and update the weighted undirected graph G⟨V,E⟩.
1) Denote the newly added connected region as V_new, whose node attributes are calculated as follows:
V_new.G = (V_m.G*V_m.S_region + V_n.G*V_n.S_region)/(V_m.S_region + V_n.S_region),
V_new.S_region = V_m.S_region + V_n.S_region,
array_x = [V_m.Gx ± 0.5*V_m.W, V_n.Gx ± 0.5*V_n.W],
array_y = [V_m.Gy ± 0.5*V_m.H, V_n.Gy ± 0.5*V_n.H],
V_new.W = max(array_x) - min(array_x), V_new.H = max(array_y) - min(array_y),
V_new.S_cut = [1.5*max(V_new.W, V_new.H)]^2.
2) Add the relation weights between the V_new node and the other connected regions (except m and n): E_new-i = max(E_i-m, E_i-n), where V_i ∈ V and i ≠ m, n.
3) Delete the two connected-region nodes V_m and V_n and the edges related to these nodes.
4) Re-execute the sixth step.
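Taken together, the sixth and seventh steps form an agglomerative merge loop over the weighted undirected graph. The following sketch illustrates that loop; the helper names `merge_nodes` and `merge_until_below`, and the dict-based node/edge containers, are assumptions of this sketch rather than the patent's data structures.

```python
# Agglomerative merging of connected-region nodes (sixth/seventh steps, sketch).

def merge_nodes(a: dict, b: dict) -> dict:
    """Merge two region nodes per the seventh step's attribute formulas."""
    s = a["S_region"] + b["S_region"]
    # Area-weighted centre of gravity, component-wise.
    gx = (a["Gx"] * a["S_region"] + b["Gx"] * b["S_region"]) / s
    gy = (a["Gy"] * a["S_region"] + b["Gy"] * b["S_region"]) / s
    # Bounding box of the two circumscribed rectangles.
    xs = [a["Gx"] - 0.5 * a["W"], a["Gx"] + 0.5 * a["W"],
          b["Gx"] - 0.5 * b["W"], b["Gx"] + 0.5 * b["W"]]
    ys = [a["Gy"] - 0.5 * a["H"], a["Gy"] + 0.5 * a["H"],
          b["Gy"] - 0.5 * b["H"], b["Gy"] + 0.5 * b["H"]]
    w, h = max(xs) - min(xs), max(ys) - min(ys)
    return {"Gx": gx, "Gy": gy, "S_region": s, "W": w, "H": h,
            "S_cut": (1.5 * max(w, h)) ** 2}

def merge_until_below(nodes: dict, edges: dict, s_overlap: float):
    """nodes: {id: attrs}; edges: {(i, j): weight} with i < j."""
    next_id = max(nodes) + 1
    while edges:
        (m, n), w = max(edges.items(), key=lambda kv: kv[1])
        if w < s_overlap:          # sixth step's stopping condition
            break
        nodes[next_id] = merge_nodes(nodes.pop(m), nodes.pop(n))
        new_edges = {}
        for (i, j), wij in edges.items():
            if m in (i, j) or n in (i, j):
                other = j if i in (m, n) else i
                if other in (m, n):
                    continue       # drop the merged edge itself
                key = (min(other, next_id), max(other, next_id))
                # E_new-i = max(E_i-m, E_i-n)
                new_edges[key] = max(new_edges.get(key, 0.0), wij)
            else:
                new_edges[(i, j)] = wij
        edges = new_edges
        next_id += 1
    return nodes, edges
```

With an overlap-style weight, one pass of the loop replaces the pair with the largest edge weight by a single node whose cutting window covers both regions, then rewires the remaining edges by taking the maximum of the two old weights.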
eighth step: traversing the adjusted connected region set V according to V.S region Attribute deletion of free, immature, small-area leukocyte regions;
ninth step: traversing the adjusted connected region set V according to V.Gx, V.Gy and V.S cut Drawing a segmentation window by attributes and segmenting;
tenth step: and (3) removing the non-target cell nucleus area of the edge part of the cutting window, and filling and replacing the cell nucleus area of the non-target cell by using a background color.
The method was used to perform segmentation of white blood cells and their nuclei; the segmented images are shown in figures 2-6. Experiments prove that the method effectively avoids the previous pain points of poor localization and difficult segmentation in white blood cell segmentation, effectively guarantees the centrality, integrity and uniqueness of the target cell within the segmentation window, and maintains good algorithm stability and calculation cost even in the case of cell aggregation. Meanwhile, it can provide high-quality input data for a classification model, so that the classification model can be put into use in medical equipment. In addition, the idea of the method can be applied to cell segmentation in other medical images.
While the invention has been described in detail in connection with specific preferred embodiments thereof, it is not to be construed as being limited thereto; simple deductions or substitutions made by a person of ordinary skill in the art, without departing from the scope of the invention defined by the appended claims, shall fall within the scope of protection of the invention.
Claims (8)
1. An adhered peripheral blood cell segmentation method based on deep learning image processing, characterized by comprising the following steps:
1) Identify the cell nucleus regions by using a deep learning U-net network; perform connected-region identification on the result output by the U-net network, and calculate the area of each connected region and the average connected area S_avg = ΣS_i/n, wherein S_i represents the area of the i-th connected region and n represents the number of connected regions;
2) Construct a weighted undirected graph G<V, E> based on the connected regions, wherein V is the set of all connected regions, E is the set of weights between any two connected regions, and the value of a weight is calculated from the attributes of the two nodes;
3) Denote the set of connected regions whose area is greater than S_avg as V_DT = {V_DT_i | V_DT_i.S_region > S_avg, V_DT_i ∈ V}; cycle through each V_DT_i in V_DT, perform a Euclidean-distance-based distance transform on V_DT_i, and convert the distance-transformed image into a binary image; perform connected-region identification on the binary image, denote the set of connected regions as V'_DT_i = {V_DT_i1, V_DT_i2, V_DT_i3, …, V_DT_in}, and record len(V'_DT_i) as the number of cells contained in V_DT_i; if len(V'_DT_i) > 1, perform step 4); if len(V'_DT_i) = 1, perform step 5);
4) Update the undirected graph G<V, E>: add the nodes V_DT_i1, V_DT_i2, V_DT_i3, …, V_DT_in and delete the node V_DT_i;
5) Set a threshold S_overlap, traverse all edges in the undirected graph, select the maximum edge weight E_m-n = max{E_i-j | V_i, V_j ∈ V}, and judge the magnitude relation between E_m-n and S_overlap; if E_m-n ≥ S_overlap, perform step 6), repeating until E_m-n < S_overlap; otherwise, perform step 7);
6) Merge the two connected regions V_m and V_n, and update the weighted undirected graph G<V, E>;
7) Traverse the adjusted connected-region set V, and delete free, immature, small-area white blood cell regions according to the V.S_region attribute;
8) Traverse the adjusted connected-region set V, draw a segmentation window according to the V.Gx, V.Gy and V.S_cut attributes, and perform segmentation.
2. The method for segmenting adhered peripheral blood cells based on deep learning image processing according to claim 1, wherein after step 8), the non-target cell nucleus regions at the edge of the cutting window are cleared, and the non-target cell nucleus regions are filled with a background color for replacement.
3. The method for segmenting adhered peripheral blood cells based on deep learning image processing according to claim 1, wherein in step 1), a peripheral blood smear is input into a trained U-net network, and the output result is a binary image, wherein 1 represents a cell nucleus region and 0 represents a non-cell-nucleus region; the cell nucleus connected regions are searched, the maximum connected area is recorded as S_max, a small-region threshold S_threshold is selected with S_threshold = 0.01*S_max, and small-area noise connected regions are filtered out.
4. The method for segmenting adhered peripheral blood cells based on deep learning image processing according to claim 1, wherein in step 2), the attribute values of all connected regions are first calculated, the attributes comprising the gravity-center position coordinates G = (Gx, Gy), the circumscribed-rectangle width and height (W, H), the cell nucleus connected area S_region, and the cell cutting window size S_cut, wherein S_cut = [1.5*max(W, H)]²; the weighted undirected graph G<V, E> is then constructed, wherein V is the set of all connected regions, the node attributes comprise V.Gx, V.Gy, V.W, V.H, V.S_region and V.S_cut, E is the set of weights between any two connected regions, and the value of a weight is calculated from the attributes of the two nodes; denoting V_i and V_j as two connected regions, E_i-j represents the weight of the edge between the connected regions V_i and V_j (E_i-j = E_j-i), and E_i-j = (V_i.S_cut ∪ V_j.S_cut)/min{V_i.S_cut, V_j.S_cut}.
5. The method for segmenting adhered peripheral blood cells based on deep learning image processing according to claim 1, wherein the Euclidean distance is the distance between a pixel and the nearest background pixel.
6. The method for segmenting adhered peripheral blood cells based on deep learning image processing according to claim 1, wherein in step 3), the maximum pixel value in the transformed image is recorded as max_distance, and the distance-transformed image is converted into a binary image using 0.5*max_distance as the threshold.
7. The method for segmenting adhered peripheral blood cells based on deep learning image processing according to claim 1, wherein in step 4), V_DT_ij.S_cut = 1.5*V_DT_ij.S_region/len(V'_DT_i), V_DT_ij.S_region = V_DT_ij.S_region/len(V'_DT_i), V_DT_ij.W = sqrt(V_DT_ij.S_cut), V_DT_ij.H = sqrt(V_DT_ij.S_cut), and the weights of the edges between the newly added nodes and the remaining connected regions are calculated.
8. The method for segmenting adhered peripheral blood cells based on deep learning image processing according to claim 1, wherein the step 6) comprises:
S1: denoting the newly added connected region as V_new, the node attributes of V_new being calculated as follows:
V_new.G = (V_m.G*V_m.S_region + V_n.G*V_n.S_region)/(V_m.S_region + V_n.S_region),
V_new.S_region = V_m.S_region + V_n.S_region,
array_x = [V_m.Gx ± 0.5*V_m.W, V_n.Gx ± 0.5*V_n.W],
array_y = [V_m.Gy ± 0.5*V_m.H, V_n.Gy ± 0.5*V_n.H],
V_new.W = max(array_x) - min(array_x), V_new.H = max(array_y) - min(array_y),
V_new.S_cut = [1.5*max(V_new.W, V_new.H)]²;
S2: adding the relation weights between the V_new node and the other connected regions (except m and n): E_new-i = max(E_i-m, E_i-n), where V_i ∈ V and i ≠ m, n;
S3: deleting the two connected-region nodes V_m and V_n and the edges related to these nodes;
S4: re-executing step 5).
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210405579.4A CN114926398B (en) | 2022-04-18 | 2022-04-18 | Adhesion peripheral blood cell segmentation method based on deep learning image processing |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114926398A CN114926398A (en) | 2022-08-19 |
CN114926398B true CN114926398B (en) | 2024-03-29 |
Family
ID=82806677
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210405579.4A Active CN114926398B (en) | 2022-04-18 | 2022-04-18 | Adhesion peripheral blood cell segmentation method based on deep learning image processing |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114926398B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116597441B (en) * | 2023-05-22 | 2024-01-26 | 生态环境部长江流域生态环境监督管理局生态环境监测与科学研究中心 | Algae cell statistics method and system based on deep learning and image pattern recognition |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104392460A (en) * | 2014-12-12 | 2015-03-04 | 山东大学 | Adherent white blood cell segmentation method based on nucleus-marked watershed transformation |
CN112132843A (en) * | 2020-09-30 | 2020-12-25 | 福建师范大学 | Hematoxylin-eosin staining pathological image segmentation method based on unsupervised deep learning |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109697460B (en) * | 2018-12-05 | 2021-06-29 | 华中科技大学 | Object detection model training method and target object detection method |
2022-04-18 CN CN202210405579.4A patent/CN114926398B/en active Active
Non-Patent Citations (1)
Title |
---|
Zhu Linlin; Han Lu; Du Hong; Fan Huijie. Research on a multi-active-contour cell segmentation method based on the U-Net network. Infrared and Laser Engineering. 2020, (S1), full text. *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111985536B (en) | Gastroscopic pathology image classification method based on weakly supervised learning | |
CN107862698B (en) | Light field foreground segmentation method and device based on K mean cluster | |
Jiang et al. | A novel white blood cell segmentation scheme using scale-space filtering and watershed clustering | |
CN103984958B (en) | Cervical cancer cell dividing method and system | |
CN109389129A (en) | A kind of image processing method, electronic equipment and storage medium | |
CN102651128B (en) | Image set partitioning method based on sampling | |
CN111798425B (en) | Intelligent detection method for mitotic image in gastrointestinal stromal tumor based on deep learning | |
CN111860459B (en) | Gramineae plant leaf pore index measurement method based on microscopic image | |
WO2020029915A1 (en) | Artificial intelligence-based device and method for tongue image splitting in traditional chinese medicine, and storage medium | |
WO2021051875A1 (en) | Cell classification method and apparatus, medium and electronic device | |
CN109685045A (en) | A kind of Moving Targets Based on Video Streams tracking and system | |
CN109886170B (en) | Intelligent detection, identification and statistics system for oncomelania | |
CN112365471B (en) | Cervical cancer cell intelligent detection method based on deep learning | |
WO2019238104A1 (en) | Computer apparatus and method for implementing classification detection of pulmonary nodule images | |
CN112270681B (en) | Method and system for detecting and counting yellow plate pests deeply | |
CN114926398B (en) | Adhesion peripheral blood cell segmentation method based on deep learning image processing | |
CN112102332A (en) | Cancer WSI segmentation method based on local classification neural network | |
CN115049908A (en) | Multi-stage intelligent analysis method and system based on embryo development image | |
CN108876810A (en) | Moving object detection method using a graph cut algorithm in video summarization | |
CN113160185A (en) | Method for guiding cervical cell segmentation by using generated boundary position | |
CN104573701B (en) | A kind of automatic testing method of Tassel of Corn | |
CN113139977A (en) | Mouth cavity curve image wisdom tooth segmentation method based on YOLO and U-Net | |
CN116485817A (en) | Image segmentation method, device, electronic equipment and storage medium | |
CN109948544B (en) | Automatic positioning and identifying method for target bacterial colony | |
CN110414317B (en) | Full-automatic leukocyte classification counting method based on capsule network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||