CN113570633A - Method for segmenting and counting fat cell images based on deep learning model - Google Patents
- Publication number
- CN113570633A CN113570633A CN202110861762.0A CN202110861762A CN113570633A CN 113570633 A CN113570633 A CN 113570633A CN 202110861762 A CN202110861762 A CN 202110861762A CN 113570633 A CN113570633 A CN 113570633A
- Authority
- CN
- China
- Prior art keywords
- segmentation
- images
- deep learning
- counting
- fat
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/187—Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4038—Image mosaicing, e.g. composing plane images from plane sub-images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/12—Edge-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/136—Segmentation; Edge detection involving thresholding
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2200/00—Indexing scheme for image data processing or generation, in general
- G06T2200/32—Indexing scheme for image data processing or generation, in general involving image mosaicing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30242—Counting objects in image
Abstract
A method for segmenting and counting fat cell images based on a deep learning model: the fat image is input into a deep learning network to obtain a segmentation probability for each pixel; a fat cell edge image is then generated from the resulting probability map; bubbles are removed by morphological processing and the image is re-segmented by a watershed algorithm to produce the fat cell segmentation image; finally, connected domain analysis yields the cell area distribution of the segmentation image and the number of fat cells in the current target image. The method markedly shortens the time consumed by manual fat cell counting.
Description
Technical Field
The invention relates to a technology in the field of image processing, and in particular to a method for segmenting and counting fat cell images based on a deep learning model.
Background
In the prior art, the key operation in cell image processing is image segmentation: accurate segmentation improves the accuracy of cell counting and of area analysis, and thereby yields better analysis results. However, existing cell segmentation algorithms remain inefficient on high-definition cell images, which limits the development of cell statistics technology.
Disclosure of Invention
To address the defects of the prior art, the invention provides a method for segmenting and counting fat cell images based on a deep learning model, which markedly shortens the time consumed by manual fat cell counting.
The invention is realized by the following technical scheme:
the invention relates to a method for segmenting and counting fat cell images based on a deep learning model, which comprises the steps of inputting the fat images into a deep learning network to obtain the segmentation probability of each pixel in the images, further generating fat cell edge images based on the probability images, sequentially removing bubbles through morphological processing and generating the fat cell segmentation images through segmentation processing by a watershed algorithm, finally analyzing the cell area distribution of the fat cell segmentation images through connected domain analysis and counting the number of fat cells on the current target images.
The deep learning network is a UNet++ network based on up-sampling and down-sampling.
The deep learning network is trained on a training set augmented by rotation, flipping, scaling and scale transformation. Cross entropy is used as the loss function: the points marked black and white in the annotated image are multiplied by the corresponding final output probabilities to obtain the final loss.
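As a rough illustration, the augmentation described above might be sketched in Python as follows (the helper name `augment`, the use of NumPy/SciPy, and the scale range are assumptions, not from the patent):

```python
import numpy as np
from scipy.ndimage import zoom

def augment(image, mask, rng):
    """Apply one random geometric augmentation to an image/mask pair.

    Rotation, flipping, and scaling as listed in the text; the exact
    probabilities and the 0.8-1.2 scale range are illustrative guesses.
    """
    k = int(rng.integers(0, 4))            # rotate by 0/90/180/270 degrees
    image, mask = np.rot90(image, k), np.rot90(mask, k)
    if rng.random() < 0.5:                 # random horizontal flip
        image, mask = np.fliplr(image), np.fliplr(mask)
    if rng.random() < 0.5:                 # random vertical flip
        image, mask = np.flipud(image), np.flipud(mask)
    s = rng.uniform(0.8, 1.2)              # random scale factor (assumed range)
    image = zoom(image, s, order=1)        # bilinear for the image
    mask = zoom(mask, s, order=0)          # nearest neighbour keeps the mask binary
    return image, mask

rng = np.random.default_rng(0)
img = np.arange(64, dtype=float).reshape(8, 8)
msk = (img > 32).astype(np.uint8)
aug_img, aug_msk = augment(img, msk, rng)
print(aug_img.shape == aug_msk.shape)  # → True
```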
The cross entropy loss function is J(w) = -(1/m) * Σ_{i=1}^{m} [ y_i log h_w(x_i) + (1 - y_i) log(1 - h_w(x_i)) ], wherein: x_i is the input, y_i is the binary label in the training set, h_w(x_i) is the probability output by the network that the point is identified as a membrane, m is the total number of pixels in the image, and J(w) is the value of the error function.
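This is the standard pixel-wise binary cross entropy; the following NumPy sketch (the function name and the clipping constant are illustrative assumptions) computes it directly:

```python
import numpy as np

def cross_entropy_loss(probs, labels):
    """Pixel-wise binary cross entropy J(w) as in the formula above.

    probs  : h_w(x_i), the network's per-pixel membrane probabilities
    labels : y_i, the binary (black/white) ground-truth mask
    Returns the mean negative log-likelihood over all m pixels.
    """
    p = np.clip(probs, 1e-7, 1 - 1e-7)     # avoid log(0)
    m = labels.size
    return -(labels * np.log(p) + (1 - labels) * np.log(1 - p)).sum() / m

probs = np.array([[0.9, 0.1], [0.8, 0.2]])
labels = np.array([[1, 0], [1, 0]])
loss = cross_entropy_loss(probs, labels)
print(round(loss, 4))  # → 0.1643
```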
The fat cell edge image is obtained by generating a gray image from the probability map and then binarizing it.
Bubble removal refers to removing bubbles that are misidentified as cells due to image stitching, using the Gaussian filter function provided by Matlab.
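The patent names Matlab's Gaussian filter; an analogous sketch in Python with `scipy.ndimage.gaussian_filter` (the toy image and the `sigma` value are assumptions) shows why smoothing suppresses isolated bubble pixels before thresholding:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# edge_prob stands in for the network's probability map; stitching
# artifacts ("bubbles") appear as isolated high-probability speckles.
edge_prob = np.zeros((64, 64))
edge_prob[32, :] = 1.0                   # a genuine cell edge (a full row)
edge_prob[10, 10] = 1.0                  # an isolated bubble pixel
smoothed = gaussian_filter(edge_prob, sigma=2)

# After smoothing, the isolated speckle falls well below the edge
# response, so a subsequent threshold removes it but keeps the edge.
print(smoothed[10, 10] < smoothed[32, 32])  # → True
```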
Segmentation with the watershed algorithm specifically comprises: obtaining all watershed lines identified by the watershed algorithm, and adding a watershed line to the original image when it is judged to be a cell edge.
A watershed line is judged to be a cell edge when the following conditions are satisfied simultaneously:
1. the length of the current watershed is smaller than a set threshold value;
2. the ellipticity of the cell containing the watershed line, defined as (major axis - minor axis) / major axis × 100%, is less than a set threshold;
3. the ratio of the areas of the two regions after division is 1.
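The three conditions can be checked with a small predicate. The defaults below reuse the L = 50 and c = 10 values from the experiment described later; the function name and the tolerance applied to condition 3 are assumptions:

```python
def accept_watershed_line(length, major_axis, minor_axis, area_a, area_b,
                          length_thresh=50, ellipticity_thresh=10,
                          ratio_tol=0.1):
    """Check the three acceptance conditions for a candidate watershed line."""
    # 1. the watershed line must be shorter than the length threshold
    if length >= length_thresh:
        return False
    # 2. the split cell must be near-circular:
    #    (major - minor) / major * 100% below the ellipticity threshold
    ellipticity = (major_axis - minor_axis) / major_axis * 100.0
    if ellipticity >= ellipticity_thresh:
        return False
    # 3. the two resulting regions must have (near-)equal areas;
    #    ratio_tol is an assumed tolerance around the stated ratio of 1
    ratio = min(area_a, area_b) / max(area_a, area_b)
    return abs(ratio - 1.0) <= ratio_tol

print(accept_watershed_line(30, 100, 95, 400, 420))  # → True
```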
The connected domain analysis specifically comprises: analyzing the connected domains of the image, recording the area, position and other information of every connected domain, and filtering out those whose area is smaller than a threshold T; each remaining connected domain is counted as one fat cell.
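A minimal sketch of this connected domain count, using `scipy.ndimage.label` in place of whatever implementation the patent actually uses (the function name and toy image are assumptions):

```python
import numpy as np
from scipy import ndimage

def count_cells(segmentation, area_threshold):
    """Count fat cells as connected domains with area >= a threshold T.

    segmentation : binary image, 1 inside cells, 0 on edges/background
    Returns (cell count, areas of the connected domains that were kept).
    """
    labels, n = ndimage.label(segmentation)        # connected domain labelling
    areas = ndimage.sum(np.ones_like(segmentation), labels,
                        index=range(1, n + 1))     # area of each domain
    kept = [a for a in areas if a >= area_threshold]
    return len(kept), kept

seg = np.zeros((10, 10), dtype=int)
seg[1:4, 1:4] = 1       # a 9-pixel "cell"
seg[6:9, 6:9] = 1       # another 9-pixel "cell"
seg[0, 9] = 1           # a 1-pixel speck, filtered out by the threshold
count, areas = count_cells(seg, area_threshold=5)
print(count)  # → 2
```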
The cell area distribution is obtained by a connected domain analysis mode.
The invention also relates to a system implementing the method, comprising a deep network segmentation unit, a binarization processing unit, a watershed re-segmentation unit and a connected domain analysis unit, wherein: the deep network segmentation unit is connected to the binarization processing unit and transmits the probability map; the binarization processing unit is connected to the watershed re-segmentation unit and transmits the binary image; and the watershed re-segmentation unit is connected to the connected domain analysis unit and transmits the segmented image.
Technical effects
The invention overall remedies the defects of the prior art, namely insufficient segmentation precision and unclear segmentation results. It integrates fat cell segmentation with the subsequent analysis and processing: it automatically extracts fat cell edges, automatically fills unsegmented regions and automatically re-segments under-segmented regions, and it provides threshold conversion, cell counting, image staining, manual post-processing and histogram analysis, showing high precision at the application level. Compared with the prior art, the method reaches an accuracy of 99.65%, a recall of 98.38% and an F1-score of 99.01%.
Drawings
FIG. 1 is a flow chart of the method of the present invention;
FIG. 2 is an image of adipocytes input in the example;
FIG. 3 is a probability map of the output of the deep learning model of an embodiment;
FIG. 4 is a result of the filtering process performed in FIG. 3;
FIG. 5 is the result of the binarization in FIG. 4;
FIG. 6 shows the result of a re-segmentation using the watershed method;
FIG. 7 shows the result of coloring adipocytes;
fig. 8 is a schematic diagram of the operation of the UNet++ network.
Detailed Description
As shown in fig. 1, the present embodiment relates to a method for segmenting and counting fat cell images based on a deep learning model, which specifically includes the following steps:
Step 1) input the fat image I shown in fig. 2 and set the initial parameters: the area threshold T, the size of the morphological closing operator, the watershed length threshold L, and the connected-domain ellipticity threshold c.
Step 2) convert the image to grayscale.
Step 3) cell edge extraction, which specifically comprises the following steps:
3.1. The image is input into the UNet++ model shown in fig. 8 and the computed output probability map is obtained, as shown in fig. 3.
3.2. The probability map is Gaussian filtered, as shown in fig. 4.
3.3. The probability map is binarized to obtain a black and white image, as shown in fig. 5.
Step 4) image post-processing: re-segment with the watershed algorithm, select the qualifying watershed lines and add them to the cell edge image to obtain the re-segmentation result, as shown in fig. 6.
Step 5) cell counting: first analyze the connected regions and extract the area, perimeter and position of each; filter out regions with area smaller than T; then randomly color each remaining connected region, as shown in fig. 7. Specifically, three integers between 0 and 255 are generated and filled into the three RGB color channels.
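The random coloring in step 5 might look like the following NumPy sketch (the function name and the choice to leave background black are assumptions):

```python
import numpy as np

def colorize_labels(labels, rng):
    """Fill each connected domain with a random RGB colour.

    Three random integers in [0, 255] per label, one per RGB channel,
    as described in step 5.
    """
    h, w = labels.shape
    out = np.zeros((h, w, 3), dtype=np.uint8)
    for lab in np.unique(labels):
        if lab == 0:                      # 0 is background, left black
            continue
        colour = rng.integers(0, 256, size=3, dtype=np.uint8)
        out[labels == lab] = colour
    return out

rng = np.random.default_rng(42)
labels = np.array([[0, 1, 1],
                   [2, 2, 0]])
coloured = colorize_labels(labels, rng)
print(coloured.shape)  # → (2, 3, 3)
```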
Finally, the segmentation accuracy of the UNet++ model reaches 0.9606, and the final loss is computed to be 0.0908.
In a concrete experiment, the above apparatus/method is run with the parameters T = 2500, c = 10, L = 50 and a Gaussian operator of size 5, giving the following experimental data: an accuracy of 99.65%, a recall of 98.38% and an F1-score of 99.01%.
For a cell image with a total of 107 cells, the number of accurately segmented cells rose from 59 to 94; for a cell image with a total of 169 cells, it rose from 112 to 140.
The foregoing embodiments may be modified in many different ways by those skilled in the art without departing from the spirit and scope of the invention, which is defined by the appended claims and all changes that come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein.
Claims (10)
1. A method for segmenting and counting fat cell images based on a deep learning model, characterized in that the fat image is input into a deep learning network to obtain a segmentation probability for each pixel; a fat cell edge image is generated from the probability map and is then subjected in turn to morphological processing to remove bubbles and to watershed segmentation, producing the fat cell segmentation image; finally, connected domain analysis yields the cell area distribution of the fat cell segmentation image and counts the number of fat cells in the current target image.
2. The method for segmenting and counting fat cell images based on a deep learning model as claimed in claim 1, wherein said deep learning network is a UNet++ network based on up-sampling and down-sampling.
3. The method as claimed in claim 1, wherein the deep learning network is trained on a training set augmented by rotation, flipping, scaling and scale transformation, uses cross entropy as the loss function, and multiplies the points marked black and white in the annotated image by the corresponding final output probabilities to obtain the final loss.
4. The method for segmenting and counting fat cell images based on a deep learning model as claimed in claim 3, wherein said cross entropy loss function is J(w) = -(1/m) * Σ_{i=1}^{m} [ y_i log h_w(x_i) + (1 - y_i) log(1 - h_w(x_i)) ], wherein: x_i is the input, y_i is the binary label in the training set, h_w(x_i) is the probability output by the network that the point is identified as a membrane, m is the total number of pixels in the image, and J(w) is the value of the error function.
5. The method for segmenting and counting fat cell images based on a deep learning model as claimed in claim 1, wherein the fat cell edge map is obtained by generating a gray image from the probability map and then binarizing it.
6. The method for segmenting and counting fat cell images based on a deep learning model as claimed in claim 1, wherein removing bubbles refers to: removing bubbles that are misidentified as cells due to image stitching, using the Gaussian filter function provided by Matlab.
7. The method for segmenting and counting fat cell images based on a deep learning model as claimed in claim 1, wherein segmentation with the watershed algorithm specifically comprises: obtaining all watershed lines identified by the watershed algorithm, and adding a watershed line to the original image when it is judged to be a cell edge.
8. The method for segmenting and counting fat cell images based on a deep learning model as claimed in claim 7, wherein a watershed line is judged to be a cell edge when the following conditions are satisfied simultaneously:
1. the length of the current watershed is smaller than a set threshold value;
2. the ellipticity of the cell containing the watershed line, defined as (major axis - minor axis) / major axis × 100%, is less than a set threshold;
3. the ratio of the areas of the two regions after division is 1.
9. The method for segmenting and counting fat cell images based on a deep learning model as claimed in claim 1, wherein the connected domain analysis specifically comprises: analyzing the connected domains of the image, recording the area, position and other information of every connected domain, and filtering out those whose area is smaller than a threshold T; each remaining connected domain is counted as one fat cell;
the cell area distribution is obtained by a connected domain analysis mode.
10. A system for segmentation counting of fat cell images based on a deep learning model, which realizes the method of any one of claims 1 to 9, is characterized by comprising: the device comprises a depth network segmentation unit, a binarization processing unit, a watershed re-segmentation unit and a connected domain analysis unit, wherein: the depth network segmentation unit is connected with the binarization processing unit and transmits probability image information, the binarization processing unit is connected with the watershed resegmentation unit and transmits binary image information, and the watershed resegmentation unit is connected with the connected domain analysis and transmits segmentation image information.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110861762.0A CN113570633A (en) | 2021-07-29 | 2021-07-29 | Method for segmenting and counting fat cell images based on deep learning model |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113570633A true CN113570633A (en) | 2021-10-29 |
Family
ID=78168919
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110861762.0A Pending CN113570633A (en) | 2021-07-29 | 2021-07-29 | Method for segmenting and counting fat cell images based on deep learning model |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113570633A (en) |
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107316077A (en) * | 2017-06-21 | 2017-11-03 | 上海交通大学 | A kind of fat cell automatic counting method based on image segmentation and rim detection |
US20200074271A1 (en) * | 2018-08-29 | 2020-03-05 | Arizona Board Of Regents On Behalf Of Arizona State University | Systems, methods, and apparatuses for implementing a multi-resolution neural network for use with imaging intensive applications including medical imaging |
CN112070772A (en) * | 2020-08-27 | 2020-12-11 | 闽江学院 | Blood leukocyte image segmentation method based on UNet + + and ResNet |
CN112964712A (en) * | 2021-02-05 | 2021-06-15 | 中南大学 | Method for rapidly detecting state of asphalt pavement |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114943723A (en) * | 2022-06-08 | 2022-08-26 | 北京大学口腔医学院 | Method for segmenting and counting irregular cells and related equipment |
CN114943723B (en) * | 2022-06-08 | 2024-05-28 | 北京大学口腔医学院 | Method for dividing and counting irregular cells and related equipment |
CN115715994A (en) * | 2022-11-18 | 2023-02-28 | 深圳大学 | Image excitation ultramicro injection method, system and equipment |
CN115715994B (en) * | 2022-11-18 | 2023-11-21 | 深圳大学 | Image excitation ultramicro injection method, system and equipment |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107944452B (en) | Character recognition method for circular seal | |
Yousif et al. | Toward an optimized neutrosophic K-means with genetic algorithm for automatic vehicle license plate recognition (ONKM-AVLPR) | |
CN111145209B (en) | Medical image segmentation method, device, equipment and storage medium | |
CN109886974B (en) | Seal removing method | |
CN107316077B (en) | Automatic adipose cell counting method based on image segmentation and edge detection | |
CN110619642B (en) | Method for separating seal and background characters in bill image | |
CN113570633A (en) | Method for segmenting and counting fat cell images based on deep learning model | |
CN109934828B (en) | Double-chromosome image cutting method based on Compact SegUnet self-learning model | |
CN106384112A (en) | Rapid image text detection method based on multi-channel and multi-dimensional cascade filter | |
JP2015065654A (en) | Color document image segmentation using automatic recovery and binarization | |
CN107085726A (en) | Oracle bone rubbing individual character localization method based on multi-method denoising and connected component analysis | |
CN110838100A (en) | Colonoscope pathological section screening and segmenting system based on sliding window | |
CN110110667B (en) | Processing method and system of diatom image and related components | |
CN110400362B (en) | ABAQUS two-dimensional crack modeling method and system based on image and computer readable storage medium | |
Shaikh et al. | A novel approach for automatic number plate recognition | |
Azad et al. | New method for optimization of license plate recognition system with use of edge detection and connected component | |
CN110991439A (en) | Method for extracting handwritten characters based on pixel-level multi-feature joint classification | |
CN114331869B (en) | Dam face crack semantic segmentation method | |
CN112270317A (en) | Traditional digital water meter reading identification method based on deep learning and frame difference method | |
CN111681185B (en) | Finite element modeling method based on X-ray scanning image of asphalt mixture | |
CN104834890A (en) | Method for extracting expression information of characters in calligraphy work | |
CN111126162A (en) | Method, device and storage medium for identifying inflammatory cells in image | |
Chakraborty et al. | An improved template matching algorithm for car license plate recognition | |
CN112508024A (en) | Intelligent identification method for embossed seal font of electrical nameplate of transformer | |
CN106295627A (en) | For identifying the method and device of word psoriasis picture |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||