CN113409321A - Cell nucleus image segmentation method based on pixel classification and distance regression - Google Patents

Cell nucleus image segmentation method based on pixel classification and distance regression

Info

Publication number
CN113409321A
CN113409321A
Authority
CN
China
Prior art keywords
image
features
pixel classification
network
distance regression
Prior art date
Legal status
Granted
Application number
CN202110645379.1A
Other languages
Chinese (zh)
Other versions
CN113409321B (en)
Inventor
张强
王晶涵
焦强
刘健
刘迦南
Current Assignee
Xidian University
Original Assignee
Xidian University
Priority date
Filing date
Publication date
Application filed by Xidian University filed Critical Xidian University
Priority to CN202110645379.1A
Publication of CN113409321A
Application granted
Publication of CN113409321B
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/10: Segmentation; Edge detection
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/23: Clustering techniques
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/24: Classification techniques
    • G06F 18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10056: Microscopic image
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A: TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A 90/00: Technologies having an indirect contribution to adaptation to climate change
    • Y02A 90/10: Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a cell nucleus image segmentation method based on pixel classification and distance regression, which comprises the following steps: (1) extracting features from an input image; (2) constructing an upsampling dual-branch decoding network; (3) constructing a global information perception module; (4) constructing a feature aggregation module; (5) training the algorithm network; (6) testing the algorithm network. The invention is suitable for the technical field of medical image processing and can achieve complete and consistent segmentation of cell nucleus images. It improves the segmentation effect on cell nucleus images; the high-level features can screen the low-level features through an attention mechanism and thus guide them better, which enhances the feature contrast between cell nuclei and the background; and the semantic and spatial correlation between pixel classification and distance regression is better utilized, capturing the correlation between the two tasks while preserving their differences.

Description

Cell nucleus image segmentation method based on pixel classification and distance regression
Technical Field
The invention belongs to the technical field of medical image processing, and particularly relates to a cell nucleus image segmentation method based on pixel classification and distance regression.
Background
Cell nucleus image segmentation refers to processing cell nucleus images with a series of computer vision methods in order to extract the nuclei from complex background regions; this task is a basic prerequisite of the digital pathology workflow and is of great significance for cancer diagnosis, grading and prediction; existing cell nucleus image segmentation methods can be divided into two main categories: traditional cell nucleus image segmentation methods and deep-learning-based cell nucleus image segmentation methods.
However, the traditional cell nucleus image segmentation methods mainly complete the segmentation through manually extracted features such as pixel values and shapes, and therefore depend excessively on the manually selected features; most deep-learning-based cell nucleus image segmentation methods only construct a dual-branch sub-network for pixel classification, extracting the foreground classification features and the boundary classification features of the cell nucleus image respectively and generating the final segmentation result through simple post-processing of the extracted features; because slice thickness varies during the preparation of cell nucleus images, the nucleus boundaries are often blurred and unclear, so purely pixel-classification-based methods tend to under-segment densely adherent cell nuclei in complex adhesion scenes.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides a cell nucleus image segmentation method based on pixel classification and distance regression.
In order to achieve the purpose, the invention adopts the following technical scheme:
a cell nucleus image segmentation method based on pixel classification and distance regression comprises the following steps:
(1) extracting features from an input image:
inputting an input image into a backbone network to extract image level features with different resolutions;
(2) constructing an upsampling double-branch decoding network:
constructing an up-sampling double-branch decoding network, and respectively up-sampling the image level features with different resolutions in the step (1) to restore the image resolution based on the double-branch decoding network to obtain pixel classification features and distance regression features of different levels;
(3) constructing a global information perception module:
constructing a global information perception module, processing the pixel classification features and the distance regression features in the step (2) based on the global information perception module, and screening the image level features with different resolutions in the step (1) through an attention mechanism;
(4) constructing a characteristic aggregation module:
a feature aggregation module is constructed, based on the double-branch decoding network in the step (2), a double-branch feature aggregation module based on pixel classification and distance regression is constructed, feature aggregation is carried out on pixel classification features and distance regression features which are located at the same feature level, and a final pixel classification output result and a final distance regression output result are obtained in the last feature aggregation module;
(5) training the algorithm network:
on a training data set, finishing algorithm network training by respectively minimizing a cross entropy loss function and a mean square error loss function on the pixel classification output result and the distance regression result in the step (4) by adopting a supervised learning mechanism to obtain network model parameters;
(6) testing the algorithm network:
on a test data set, applying the network model parameters obtained in the step (5) to the pixel classification output result and the distance regression output result obtained in the step (4), and using a marker-controlled watershed post-processing technique to obtain the final cell nucleus image segmentation result.
Preferably, the input image in the step (1) is a cell nucleus original image.
Preferably, the backbone network in step (1) is a ResNet-50 network, and the backbone network parameters are shared.
Preferably, in the step (1), five image level features F_0, F_1, F_2, F_3, F_4 with different resolutions are extracted from the cell nucleus image through the ResNet-50 network, wherein the numbers of channels of the features are 64, 256, 512, 1024 and 2048, respectively.
Preferably, the upsampling dual-branch decoding network in the step (2) takes the highest-level feature F_4 from the step (1) as input, restores the image resolution, and obtains pixel classification features F_i^c and distance regression features F_i^d (i = 1, 2, 3) at different levels.
Preferably, the global information perception module in the step (3) takes the hierarchical features F_1, F_2, F_3 obtained in the step (1) and the dual-branch upsampled features from the step (2) as input, and generates the pixel classification global attention features A_i^c and the distance regression global attention features A_i^d.
preferably, the step (3) further comprises the steps of:
(31) the features extracted by the ResNet-50 encoding network are recorded as high-level features, denoted F_i, where i = 1, ..., 4;
(311) carrying out global average pooling on the high-level features F_1, F_2, F_3 from the step (31);
(312) carrying out upsampling on the high-level features F_1, F_2, F_3 from the step (31);
(313) applying a Sigmoid function to the result of the step (311), expressed as follows:
β_i = S(G(F_i)),
wherein S(·) is the Sigmoid function, G(·) denotes global average pooling, and i = 1, 2 and 3 respectively index the high-level features with different resolutions;
(32) the low-level feature input is the upsampled features obtained by the upsampling dual-branch decoding module in the step (2), wherein the pixel classification features are denoted F_i^c and the distance regression features are denoted F_i^d, with i = 1, 2, 3;
(321) spatially separable convolving the low-level features of step (32);
(322) carrying out element-level multiplication on the image features generated in the step (321) and the result in the step (313) to obtain new features;
(323) performing an element-level addition operation on the result of the step (322) and the result of the step (312);
the global information perception module of the pixel classification branch and the distance regression branch of the step (3) may be represented as follows:
A_i^c = Upsample(F_i) + Spconv(F_i^c) ⊗ S(G(F_i)),
A_i^d = Upsample(F_i) + Spconv(F_i^d) ⊗ S(G(F_i)),
wherein Upsample(·) is the upsampling operation, S(·) is the Sigmoid function, G(·) denotes global average pooling, Spconv(·) denotes the spatially separable convolution, ⊗ denotes element-level multiplication, and i = 1, 2, 3 index features of different resolutions.
Preferably, the constructing of the dual-branch feature aggregation module based on pixel classification and distance regression in step (4) includes the following steps:
(41) performing pixel-level addition of the result of step (3) and the result of step (2) as input to step (4);
(42) respectively convolving the pixel classification characteristic diagram and the distance regression characteristic diagram obtained in the step (41), and cascading the convolved characteristic diagrams;
(43) convolving the result of step (42) and performing a pixel-level addition with the result of step (41);
the information aggregation module of the pixel classification branch and the distance regression branch of the step (4) may be represented as follows:
O_i^c = M_i^c + Conv(Cat(Conv(M_i^c), Conv(M_i^d))),
O_i^d = M_i^d + Conv(Cat(Conv(M_i^c), Conv(M_i^d))),
wherein M_i^c = A_i^c + F_i^c and M_i^d = A_i^d + F_i^d are the pixel-level sums from the step (41), Cat(·) denotes the feature channel concatenation, and Conv(·) denotes a 3 × 3 convolution.
Preferably, in the network training process of the step (5), supervision is divided into two parts: the loss for pixel classification is a cross entropy loss function, and the loss for distance regression is a mean square error loss function on normalized coordinates.
Preferably, the step (6) tests the image by using the trained network model parameters in the step (5) to generate a pixel classification result image and a distance regression result image, and then, the distance regression image is inverted by a post-processing technology to be used as a mark of the pixel classification image, and a final cell nucleus image segmentation result image is obtained by using a mark control watershed algorithm.
In summary, by adopting the above technical scheme, the invention has the following beneficial effects:
1) the method can achieve complete and consistent segmentation of cell nucleus images without manual feature design and extraction, and simulation results show that the segmentation result is essentially unaffected under conditions of adhesion and overlap;
2) the invention consists of an encoding network for feature extraction and a dual-branch decoding network that generates a pixel classification map and a distance regression map; its core is to obtain a pixel classification map carrying boundary information through pixel classification and a distance map carrying localization information through distance regression, and the effective combination of the two segmentation principles improves the segmentation of cell nucleus images;
3) by constructing the global information perception module on the features extracted by the encoding network, the high-level features can screen the low-level features through an attention mechanism and thus guide them better, which enhances the feature contrast between cell nuclei and the background;
4) by constructing the feature aggregation module on the decoding branches, pixel classification features and distance regression features at the same feature level are aggregated, which makes better use of the semantic and spatial correlation between pixel classification and distance regression, capturing the correlation between the two tasks while preserving their differences.
Drawings
FIG. 1 is a flow chart of the training of the present invention;
FIG. 2 is a flow chart of the test of the present invention;
FIG. 3 is a first (horizontal) view of the overall network framework of the present invention;
FIG. 4 is a second (vertical) view of the overall network framework of the present invention;
FIG. 5 is a schematic diagram of a global information awareness module according to the present invention;
FIG. 6 is a schematic view of a feature aggregation module of the present invention;
FIG. 7 is a schematic diagram of the post-processing technique of the present invention.
Detailed Description
The following further describes an embodiment of the nuclear image segmentation method based on pixel classification and distance regression according to the present invention with reference to fig. 1 to 7. The method for segmenting the cell nucleus image based on the pixel classification and the distance regression is not limited to the description of the following embodiment.
Example (b):
this embodiment provides a specific implementation of a cell nucleus image segmentation method based on pixel classification and distance regression, as shown in fig. 1 to 7, including the following steps:
(1) extracting features from an input image:
inputting an input image into a backbone network to extract image level features with different resolutions;
(2) constructing an upsampling double-branch decoding network:
constructing an up-sampling double-branch decoding network, and respectively up-sampling image hierarchy features with different resolutions in the step (1) to restore the image resolution based on the double-branch decoding network to obtain pixel classification features and distance regression features of different hierarchies;
(3) constructing a global information perception module:
constructing a global information perception module, processing the pixel classification characteristic and the distance regression characteristic in the step (2) based on the global information perception module, and screening the image level characteristics with different resolutions in the step (1) through an attention mechanism;
(4) constructing a characteristic aggregation module:
constructing a feature aggregation module, constructing a dual-branch feature aggregation module based on pixel classification and distance regression based on the dual-branch decoding network in the step (2), performing feature aggregation on the pixel classification features and the distance regression features which are positioned at the same feature level, and obtaining a final pixel classification output result and a final distance regression output result in the last feature aggregation module;
(5) training the algorithm network:
on the training data set, finishing algorithm network training by respectively minimizing a cross entropy loss function and a mean square error loss function to the pixel classification output result and the distance regression result in the step (4) by adopting a supervised learning mechanism to obtain network model parameters;
(6) testing the algorithm network:
on a test data set, applying the network model parameters obtained in the step (5) to the pixel classification branch output result and the distance regression branch output result obtained in the step (4), and using a marker-controlled watershed post-processing technique to obtain the final cell nucleus image segmentation result.
Specifically, the input image in step (1) is a cell nucleus original image.
Specifically, the backbone network in step (1) is a ResNet-50 network, and backbone network parameters are shared.
Specifically, in the step (1), five image level features F_0, F_1, F_2, F_3, F_4 with different resolutions are extracted from the cell nucleus image through the ResNet-50 network, wherein the numbers of channels of the features are 64, 256, 512, 1024 and 2048, respectively.
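For illustration only, the following minimal PyTorch sketch shows one way such five features with the stated channel counts (64, 256, 512, 1024, 2048) can be taken from a standard torchvision ResNet-50; the class name and the exact split into stages are assumptions of this sketch and are not taken from the patent.
```python
import torch
import torch.nn as nn
from torchvision.models import resnet50


class ResNet50Encoder(nn.Module):
    """Illustrative backbone returning five image-level features F0..F4."""

    def __init__(self):
        super().__init__()
        net = resnet50()  # ImageNet weights could be loaded here if desired
        self.stem = nn.Sequential(net.conv1, net.bn1, net.relu)  # F0: 64 channels
        self.pool = net.maxpool
        self.layer1 = net.layer1  # F1: 256 channels
        self.layer2 = net.layer2  # F2: 512 channels
        self.layer3 = net.layer3  # F3: 1024 channels
        self.layer4 = net.layer4  # F4: 2048 channels

    def forward(self, x):
        f0 = self.stem(x)
        f1 = self.layer1(self.pool(f0))
        f2 = self.layer2(f1)
        f3 = self.layer3(f2)
        f4 = self.layer4(f3)
        return f0, f1, f2, f3, f4


if __name__ == "__main__":
    feats = ResNet50Encoder()(torch.randn(1, 3, 256, 256))
    print([f.shape[1] for f in feats])  # [64, 256, 512, 1024, 2048]
```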
Specifically, the upsampling dual-branch decoding network in the step (2) takes the highest-level feature F_4 from the step (1) as input, restores the image resolution, and obtains pixel classification features F_i^c and distance regression features F_i^d (i = 1, 2, 3) at different levels.
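A minimal sketch of one possible dual-branch upsampling decoder is given below; both branches start from the highest-level feature F_4 and each produces three levels of features. The channel widths, the bilinear upsampling and the Conv-BN-ReLU blocks are assumptions made for this example only.
```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def conv_block(in_ch, out_ch):
    """3x3 convolution + BatchNorm + ReLU, the building block of both branches."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1, bias=False),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )


class UpBranch(nn.Module):
    """One decoding branch: upsample F_4 step by step, yielding three levels."""

    def __init__(self, in_ch=2048, widths=(1024, 512, 256)):
        super().__init__()
        chans = [in_ch] + list(widths)
        self.blocks = nn.ModuleList(
            [conv_block(chans[k], chans[k + 1]) for k in range(len(widths))]
        )

    def forward(self, f4):
        feats, x = [], f4
        for block in self.blocks:
            x = F.interpolate(x, scale_factor=2, mode="bilinear", align_corners=False)
            x = block(x)
            feats.append(x)  # coarse-to-fine features of this branch
        return feats


class DualBranchDecoder(nn.Module):
    """Pixel classification branch and distance regression branch with
    identical structure but separate parameters."""

    def __init__(self, in_ch=2048):
        super().__init__()
        self.cls_branch = UpBranch(in_ch)
        self.dist_branch = UpBranch(in_ch)

    def forward(self, f4):
        return self.cls_branch(f4), self.dist_branch(f4)
```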
Further, the global information perception module in the step (3) takes the hierarchical features F_1, F_2, F_3 obtained in the step (1) and the dual-branch upsampled features from the step (2) as input, and generates the pixel classification global attention features A_i^c and the distance regression global attention features A_i^d.
further, the step (3) further comprises the following steps:
(31) the features extracted by the ResNet-50 encoding network are recorded as high-level features, denoted F_i, where i = 1, ..., 4;
(311) carrying out global average pooling on the high-level features F_1, F_2, F_3 from the step (31);
(312) carrying out upsampling on the high-level features F_1, F_2, F_3 from the step (31);
(313) applying the Sigmoid function to the result of the step (311), expressed as follows:
β_i = S(G(F_i)),
wherein S(·) is the Sigmoid function, G(·) denotes global average pooling, and i = 1, 2 and 3 respectively index the high-level features with different resolutions;
(32) the low-level feature input is the upsampled features obtained by the upsampling dual-branch decoding module in the step (2), wherein the pixel classification features are denoted F_i^c and the distance regression features are denoted F_i^d, with i = 1, 2, 3;
(321) spatially separable convolving the low-level features of step (32);
(322) carrying out element-level multiplication on the image features generated in the step (321) and the result in the step (313) to obtain new features;
(323) performing an element-level addition operation on the result of step (322) and the result of step (312);
the global information perception module of the pixel classification branch and the distance regression branch in the step (3) may be represented as follows:
A_i^c = Upsample(F_i) + Spconv(F_i^c) ⊗ S(G(F_i)),
A_i^d = Upsample(F_i) + Spconv(F_i^d) ⊗ S(G(F_i)),
wherein Upsample(·) is the upsampling operation, S(·) is the Sigmoid function, G(·) denotes global average pooling, Spconv(·) denotes the spatially separable convolution, ⊗ denotes element-level multiplication, and i = 1, 2, 3 index features of different resolutions.
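As an illustration of steps (311) to (323), the sketch below implements the module for one feature level of one branch: a Sigmoid of the globally averaged high-level feature F_i gives the attention weights β_i, the low-level decoded feature passes through a spatially separable convolution and is multiplied by β_i, and the upsampled high-level feature is added. The 1 × 1 channel-alignment convolution and the kernel size of the separable convolution are assumptions of this sketch, not details given in the patent.
```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GlobalInfoPerception(nn.Module):
    """Illustrative global information perception module for one branch/level.

    high : encoder feature F_i (high level, low resolution)
    low  : decoded feature of the same level from one decoding branch
    """

    def __init__(self, high_ch, low_ch, k=3):
        super().__init__()
        # spatially separable convolution: a 1 x k followed by a k x 1 convolution
        self.spconv = nn.Sequential(
            nn.Conv2d(low_ch, low_ch, (1, k), padding=(0, k // 2)),
            nn.Conv2d(low_ch, low_ch, (k, 1), padding=(k // 2, 0)),
        )
        # assumed 1x1 convolution so that the channel counts of both inputs match
        self.align = nn.Conv2d(high_ch, low_ch, 1)

    def forward(self, high, low):
        high = self.align(high)
        # steps (311) and (313): beta_i = Sigmoid(GlobalAvgPool(F_i))
        beta = torch.sigmoid(F.adaptive_avg_pool2d(high, 1))
        # step (312): upsample F_i to the spatial size of the low-level feature
        up = F.interpolate(high, size=low.shape[-2:], mode="bilinear",
                           align_corners=False)
        # steps (321) to (323): Spconv(low) multiplied by beta, plus upsampled F_i
        return self.spconv(low) * beta + up
```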
Further, the constructing of the dual-branch feature aggregation module based on pixel classification and distance regression in the step (4) includes the following steps:
(41) performing pixel-level addition on the result in the step (3) and the result in the step (2) as input of the step (4);
(42) respectively convolving the pixel classification characteristic diagram and the distance regression characteristic diagram obtained in the step (41), and cascading the convolved characteristic diagrams;
(43) convolving the result of step (42) and performing a pixel-level addition with the result of step (41);
the information aggregation module of the pixel classification branch and the distance regression branch in the step (4) may be represented as follows:
O_i^c = M_i^c + Conv(Cat(Conv(M_i^c), Conv(M_i^d))),
O_i^d = M_i^d + Conv(Cat(Conv(M_i^c), Conv(M_i^d))),
wherein M_i^c = A_i^c + F_i^c and M_i^d = A_i^d + F_i^d are the pixel-level sums from the step (41), Cat(·) denotes the feature channel concatenation, and Conv(·) denotes a 3 × 3 convolution.
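The following sketch mirrors steps (41) to (43) at a single feature level: the global attention feature and the decoder feature of each branch are summed, each sum is convolved, the two results are concatenated along the channel axis, and a further convolution of the concatenation is added back to each branch. Whether the fusing convolution is shared between the two branches is not specified here, so two separate 3 × 3 convolutions are assumed.
```python
import torch
import torch.nn as nn


class FeatureAggregation(nn.Module):
    """Illustrative dual-branch feature aggregation module for one level."""

    def __init__(self, ch):
        super().__init__()
        self.conv_cls = nn.Conv2d(ch, ch, 3, padding=1)
        self.conv_dist = nn.Conv2d(ch, ch, 3, padding=1)
        # convolutions applied to the concatenated features of both branches
        self.fuse_cls = nn.Conv2d(2 * ch, ch, 3, padding=1)
        self.fuse_dist = nn.Conv2d(2 * ch, ch, 3, padding=1)

    def forward(self, cls_att, cls_dec, dist_att, dist_dec):
        # step (41): pixel-level addition of attention feature and decoder feature
        m_cls = cls_att + cls_dec
        m_dist = dist_att + dist_dec
        # step (42): convolve each branch, then concatenate along the channel axis
        cat = torch.cat([self.conv_cls(m_cls), self.conv_dist(m_dist)], dim=1)
        # step (43): convolve the concatenation and add it back to each branch
        return m_cls + self.fuse_cls(cat), m_dist + self.fuse_dist(cat)
```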
Further, in the network training process of the step (5), supervision is divided into two parts: the loss for pixel classification is a cross entropy loss function, and the loss for distance regression is a mean square error loss function on normalized coordinates.
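As a sketch of this supervision, the combined objective can be written as a cross entropy term on the pixel classification output plus a mean square error term on the normalized distance regression output; the equal weighting of the two terms is an assumption of this example.
```python
import torch.nn as nn

ce_loss = nn.CrossEntropyLoss()   # supervision of the pixel classification branch
mse_loss = nn.MSELoss()           # supervision of the distance regression branch


def total_loss(cls_logits, cls_target, dist_pred, dist_target, w=1.0):
    """cls_logits: (N, 2, H, W); cls_target: (N, H, W) with labels {0, 1};
    dist_pred and dist_target: (N, 1, H, W) distance maps normalized to [0, 1]."""
    return ce_loss(cls_logits, cls_target) + w * mse_loss(dist_pred, dist_target)
```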
Further, in the step (6), images are tested with the network model parameters trained in the step (5) to generate a pixel classification result map and a distance regression result map; the distance regression map is then inverted by the post-processing technique and used as the markers for the pixel classification map, and the final cell nucleus image segmentation result map is obtained with the marker-controlled watershed algorithm.
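A minimal post-processing sketch with SciPy and scikit-image is shown below: the foreground mask comes from the pixel classification output, the markers come from high values of the predicted distance map, and the inverted distance map serves as the topographic surface for the marker-controlled watershed. The two threshold values are illustrative choices, not values stated in the patent.
```python
from scipy import ndimage as ndi
from skimage.segmentation import watershed


def postprocess(cls_prob, dist_map, fg_thresh=0.5, marker_thresh=0.5):
    """cls_prob : (H, W) foreground probability from the pixel classification branch.
    dist_map : (H, W) predicted distance map normalized to [0, 1]."""
    foreground = cls_prob > fg_thresh
    # regions near nucleus centres (large distance values) become the markers
    markers, _ = ndi.label(dist_map > marker_thresh)
    # the inverted distance map is the topographic surface for the watershed
    labels = watershed(-dist_map, markers=markers, mask=foreground)
    return labels  # integer label image: one label per nucleus instance
```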
By adopting the technical scheme:
firstly, constructing a coding network, extracting features of an input image, inputting the input image into a backbone network, and extracting image level features with different resolutions;
then, a two-branch up-sampling decoding network is constructed, the features of the coding network are up-sampled to restore the image resolution, and pixel classification features and distance regression features of different levels are obtained respectively;
then, a global information perception module is constructed, and the pixel classification features and the distance regression features are screened by using different hierarchical features of the coding network through an attention mechanism;
then, constructing a feature aggregation module, performing feature aggregation on the pixel classification features and the distance regression features which are positioned at the same feature level in a decoding network, and obtaining a pixel classification result and a distance regression result in the last feature aggregation module;
then, adopting a supervised learning mechanism to supervise the model by respectively using a minimized cross entropy loss function and a mean square error function for the pixel classification branch and the distance regression branch;
and finally, training an algorithm network to obtain model parameters, testing the algorithm network model to obtain a pixel classification graph and a distance regression graph, and controlling a watershed post-processing algorithm by using a marker to obtain a final segmentation result.
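To show how the pieces fit together, the sketch below wires the illustrative encoder, decoder and loss from the earlier code fragments into a single training step; the global information perception and feature aggregation stages are elided, and the 1 × 1 output heads, the output upsampling and the optimizer settings are assumptions of this example.
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Uses ResNet50Encoder, DualBranchDecoder and total_loss from the sketches above.
encoder = ResNet50Encoder()
decoder = DualBranchDecoder()
cls_head = nn.Conv2d(256, 2, 1)    # 256 = width of the finest decoder level above
dist_head = nn.Conv2d(256, 1, 1)
modules = nn.ModuleList([encoder, decoder, cls_head, dist_head])
optimizer = torch.optim.Adam(modules.parameters(), lr=1e-4)


def train_step(image, cls_target, dist_target):
    """One supervised training step on a batch of nucleus images."""
    _, _, _, _, f4 = encoder(image)
    cls_feats, dist_feats = decoder(f4)
    size = image.shape[-2:]
    # pixel classification output and normalized distance regression output
    cls_logits = F.interpolate(cls_head(cls_feats[-1]), size=size,
                               mode="bilinear", align_corners=False)
    dist_pred = torch.sigmoid(
        F.interpolate(dist_head(dist_feats[-1]), size=size,
                      mode="bilinear", align_corners=False))
    loss = total_loss(cls_logits, cls_target, dist_pred, dist_target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```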
Has the following advantages:
the invention can realize the complete and consistent segmentation of the cell nucleus image without manual design and characteristic extraction, and simulation results show that the segmentation result of the invention is not affected basically under the condition of adhesion and overlapping.
The invention is composed of an encoding network for feature extraction and a dual-branch decoding network for generating a pixel classification map and a distance regression map. The core of the method is to obtain a pixel classification image of boundary information by using pixel classification, obtain a distance image with positioning information by using distance regression, and effectively combine two segmentation principles to improve the segmentation effect of a cell nucleus image.
The invention extracts features from the network, and by constructing the global information perception module, the module can screen the low-level features from the high-level features through an attention mechanism, so as to better guide the low-level features, thereby enhancing the feature contrast between the cell nucleus and the background.
The method extracts features from a decoding network, constructs a feature aggregation module based on pixel classification and distance regression on a decoding branch network by constructing a feature aggregation module, aggregates pixel classification graphs and distance regression graphs of the same feature level, better utilizes semantic correlation and spatial correlation between the pixel classification and the distance regression, captures the correlation between two tasks and keeps the difference between the two tasks.
The foregoing is a more detailed description of the invention in connection with specific preferred embodiments and it is not intended that the invention be limited to these specific details. For those skilled in the art to which the invention pertains, several simple deductions or substitutions can be made without departing from the spirit of the invention, and all shall be considered as belonging to the protection scope of the invention.

Claims (10)

1. A cell nucleus image segmentation method based on pixel classification and distance regression is characterized by comprising the following steps:
(1) extracting features from an input image:
inputting an input image into a backbone network to extract image level features with different resolutions;
(2) constructing an upsampling double-branch decoding network:
constructing an up-sampling double-branch decoding network, and respectively up-sampling the image level features with different resolutions in the step (1) to restore the image resolution based on the double-branch decoding network to obtain pixel classification features and distance regression features of different levels;
(3) constructing a global information perception module:
constructing a global information perception module, processing the pixel classification features and the distance regression features in the step (2) based on the global information perception module, and screening the image level features with different resolutions in the step (1) through an attention mechanism;
(4) constructing a characteristic aggregation module:
a feature aggregation module is constructed, based on the double-branch decoding network in the step (2), a double-branch feature aggregation module based on pixel classification and distance regression is constructed, feature aggregation is carried out on pixel classification features and distance regression features which are located at the same feature level, and a final pixel classification output result and a final distance regression output result are obtained in the last feature aggregation module;
(5) training the algorithm network:
on a training data set, finishing algorithm network training by respectively minimizing a cross entropy loss function and a mean square error loss function on the pixel classification output result and the distance regression result in the step (4) by adopting a supervised learning mechanism to obtain network model parameters;
(6) testing the algorithm network:
on a test data set, applying the network model parameters obtained in the step (5) to the pixel classification output result and the distance regression output result obtained in the step (4), and using a marker-controlled watershed post-processing technique to obtain the final cell nucleus image segmentation result.
2. The cell nucleus image segmentation method based on pixel classification and distance regression according to claim 1, characterized in that: the input image in the step (1) is an original cell nucleus image.
3. The cell nucleus image segmentation method based on pixel classification and distance regression according to claim 2, characterized in that: the backbone network in the step (1) is a ResNet-50 network, and the backbone network parameters are shared.
4. The cell nucleus image segmentation method based on pixel classification and distance regression according to claim 3, characterized in that: in the step (1), five image level features F_0, F_1, F_2, F_3, F_4 with different resolutions are extracted from the cell nucleus image through the ResNet-50 network, wherein the numbers of channels of the features are 64, 256, 512, 1024 and 2048, respectively.
5. The cell nucleus image segmentation method based on pixel classification and distance regression according to claim 4, characterized in that: the upsampling dual-branch decoding network in the step (2) takes the highest-level feature F_4 from the step (1) as input, restores the image resolution, and obtains pixel classification features F_i^c and distance regression features F_i^d (i = 1, 2, 3) at different levels.
6. The cell nucleus image segmentation method based on pixel classification and distance regression according to claim 5, characterized in that: the global information perception module in the step (3) takes the hierarchical features F_1, F_2, F_3 obtained in the step (1) and the dual-branch upsampled features from the step (2) as input, and generates the pixel classification global attention features A_i^c and the distance regression global attention features A_i^d.
7. the method for segmenting the nuclear image based on the pixel classification and the distance regression as claimed in claim 6, wherein the step (3) further comprises the steps of:
(31) the features extracted by the ResNet-50 encoding network are recorded as high-level features, denoted F_i, where i = 1, ..., 4;
(311) carrying out global average pooling on the high-level features F_1, F_2, F_3 from the step (31);
(312) carrying out upsampling on the high-level features F_1, F_2, F_3 from the step (31);
(313) applying a Sigmoid function to the result of the step (311), expressed as follows:
β_i = S(G(F_i)),
wherein S(·) is the Sigmoid function, G(·) denotes global average pooling, and i = 1, 2 and 3 respectively index the high-level features with different resolutions;
(32) the input of the low-level features is the upsampled features obtained by the upsampling dual-branch decoding module in the step (2), wherein the pixel classification features are denoted F_i^c and the distance regression features are denoted F_i^d, with i = 1, 2, 3;
(321) spatially separable convolving the low-level features of step (32);
(322) carrying out element-level multiplication on the image features generated in the step (321) and the result in the step (313) to obtain new features;
(323) performing an element-level addition operation on the result of the step (322) and the result of the step (312);
the global information perception module of the pixel classification branch and the distance regression branch of the step (3) may be represented as follows:
A_i^c = Upsample(F_i) + Spconv(F_i^c) ⊗ S(G(F_i)),
A_i^d = Upsample(F_i) + Spconv(F_i^d) ⊗ S(G(F_i)),
wherein Upsample(·) is the upsampling operation, S(·) is the Sigmoid function, G(·) denotes global average pooling, Spconv(·) denotes the spatially separable convolution, ⊗ denotes element-level multiplication, and i = 1, 2, 3 index features of different resolutions.
8. The cell nucleus image segmentation method based on pixel classification and distance regression according to claim 7, characterized in that: the construction of the dual-branch feature aggregation module based on pixel classification and distance regression in the step (4) comprises the following steps:
(41) performing pixel-level addition of the result of step (3) and the result of step (2) as input to step (4);
(42) respectively convolving the pixel classification characteristic diagram and the distance regression characteristic diagram obtained in the step (41), and cascading the convolved characteristic diagrams;
(43) convolving the result of step (42) and performing a pixel-level addition with the result of step (41);
the information aggregation module of the pixel classification branch and the distance regression branch of the step (4) may be represented as follows:
O_i^c = M_i^c + Conv(Cat(Conv(M_i^c), Conv(M_i^d))),
O_i^d = M_i^d + Conv(Cat(Conv(M_i^c), Conv(M_i^d))),
wherein M_i^c = A_i^c + F_i^c and M_i^d = A_i^d + F_i^d are the pixel-level sums from the step (41), Cat(·) denotes the feature channel concatenation, and Conv(·) denotes a 3 × 3 convolution.
9. The cell nucleus image segmentation method based on pixel classification and distance regression according to claim 8, characterized in that: in the network training process of the step (5), supervision is divided into two parts: the loss for pixel classification is a cross entropy loss function, and the loss for distance regression is a mean square error loss function on normalized coordinates.
10. The cell nucleus image segmentation method based on pixel classification and distance regression according to claim 9, characterized in that: in the step (6), images are tested with the network model parameters trained in the step (5) to generate a pixel classification result map and a distance regression result map; the distance regression map is inverted by the post-processing technique and used as the markers for the pixel classification map, and the final cell nucleus image segmentation result map is obtained with the marker-controlled watershed algorithm.
CN202110645379.1A 2021-06-09 2021-06-09 Cell nucleus image segmentation method based on pixel classification and distance regression Active CN113409321B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110645379.1A CN113409321B (en) 2021-06-09 2021-06-09 Cell nucleus image segmentation method based on pixel classification and distance regression

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110645379.1A CN113409321B (en) 2021-06-09 2021-06-09 Cell nucleus image segmentation method based on pixel classification and distance regression

Publications (2)

Publication Number Publication Date
CN113409321A true CN113409321A (en) 2021-09-17
CN113409321B CN113409321B (en) 2023-10-27

Family

ID=77683315

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110645379.1A Active CN113409321B (en) 2021-06-09 2021-06-09 Cell nucleus image segmentation method based on pixel classification and distance regression

Country Status (1)

Country Link
CN (1) CN113409321B (en)

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0858051A2 (en) * 1997-02-10 1998-08-12 Delphi 2 Creative Technologies GmbH Digital image segmentation method
US20050163373A1 (en) * 2004-01-26 2005-07-28 Lee Shih-Jong J. Method for adaptive image region partition and morphologic processing
US20130266185A1 (en) * 2012-04-06 2013-10-10 Xerox Corporation Video-based system and method for detecting exclusion zone infractions
US20150376697A1 (en) * 2012-08-01 2015-12-31 Bgi-Shenzhen Method and system to determine biomarkers related to abnormal condition
CN106651887A (en) * 2017-01-13 2017-05-10 深圳市唯特视科技有限公司 Image pixel classifying method based convolutional neural network
US20180214105A1 (en) * 2017-01-31 2018-08-02 Siemens Healthcare Gmbh System and method breast cancer detection with x-ray imaging
CN110910388A (en) * 2019-10-23 2020-03-24 浙江工业大学 Cancer cell image segmentation method based on U-Net and density estimation
CN111144486A (en) * 2019-12-27 2020-05-12 电子科技大学 Heart nuclear magnetic resonance image key point detection method based on convolutional neural network
CN111462126A (en) * 2020-04-08 2020-07-28 武汉大学 Semantic image segmentation method and system based on edge enhancement
CN112102323A (en) * 2020-09-17 2020-12-18 陕西师范大学 Adherent nucleus segmentation method based on generation of countermeasure network and Caps-Unet network
CN112446892A (en) * 2020-11-18 2021-03-05 黑龙江机智通智能科技有限公司 Cell nucleus segmentation method based on attention learning
CN112396621A (en) * 2020-11-19 2021-02-23 之江实验室 High-resolution microscopic endoscope image nucleus segmentation method based on deep learning
CN112541503A (en) * 2020-12-11 2021-03-23 南京邮电大学 Real-time semantic segmentation method based on context attention mechanism and information fusion

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
QIANG ZHANG et al.: "A structure-aware splitting framework for separating cell clumps in biomedical images", Signal Processing, vol. 168, 31 March 2020, pages 1-13
SHI YIN et al.: "Automatic kidney segmentation in ultrasound images using subsequent boundary distance regression and pixelwise classification networks", Medical Image Analysis, 29 February 2020, pages 1-14
崔凤 et al.: "Automatic segmentation of white blood cell images with active learning" (主动学习的白细胞图像自动分割), Journal of Image and Graphics (中国图象图形学报), vol. 17, no. 8, 31 August 2012, pages 1029-1034

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117710969A (en) * 2024-02-05 2024-03-15 安徽大学 Cell nucleus segmentation and classification method based on deep neural network
CN117710969B (en) * 2024-02-05 2024-06-04 安徽大学 Cell nucleus segmentation and classification method based on deep neural network

Also Published As

Publication number Publication date
CN113409321B (en) 2023-10-27

Similar Documents

Publication Publication Date Title
Bashir et al. A comprehensive review of deep learning-based single image super-resolution
CN111563902B (en) Lung lobe segmentation method and system based on three-dimensional convolutional neural network
CN112308860B (en) Earth observation image semantic segmentation method based on self-supervision learning
Li et al. Multiscale features supported DeepLabV3+ optimization scheme for accurate water semantic segmentation
Pei et al. Does haze removal help cnn-based image classification?
CN114120102A (en) Boundary-optimized remote sensing image semantic segmentation method, device, equipment and medium
Meng et al. Single-image dehazing based on two-stream convolutional neural network
Anvari et al. Dehaze-GLCGAN: unpaired single image de-hazing via adversarial training
CN110070517A (en) Blurred picture synthetic method based on degeneration imaging mechanism and generation confrontation mechanism
CN115909006B (en) Mammary tissue image classification method and system based on convolution transducer
CN116205962B (en) Monocular depth estimation method and system based on complete context information
Guo et al. A novel transformer-based network with attention mechanism for automatic pavement crack detection
CN115205672A (en) Remote sensing building semantic segmentation method and system based on multi-scale regional attention
CN114972378A (en) Brain tumor MRI image segmentation method based on mask attention mechanism
CN115272777A (en) Semi-supervised image analysis method for power transmission scene
CN115546466A (en) Weak supervision image target positioning method based on multi-scale significant feature fusion
CN117036281A (en) Intelligent generation method and system for defect image
CN105956610A (en) Remote sensing image landform classification method based on multi-layer coding structure
CN117727046A (en) Novel mountain torrent front-end instrument and meter reading automatic identification method and system
Babu et al. An efficient image dahazing using Googlenet based convolution neural networks
CN113409321A (en) Cell nucleus image segmentation method based on pixel classification and distance regression
Gupta et al. A robust and efficient image de-fencing approach using conditional generative adversarial networks
Ye et al. FMAM-Net: fusion multi-scale attention mechanism network for building segmentation in remote sensing images
Jiang et al. Mask‐guided image person removal with data synthesis
CN112036246B (en) Construction method of remote sensing image classification model, remote sensing image classification method and system

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant