CN114639102B - Cell segmentation method and device based on key point and size regression - Google Patents


Info

Publication number
CN114639102B
CN114639102B
Authority
CN
China
Prior art keywords
cell
point information
key point
scale
information
Prior art date
Legal status
Active
Application number
CN202210506262.XA
Other languages
Chinese (zh)
Other versions
CN114639102A (en)
Inventor
吕行
王华嘉
邝英兰
范献军
蓝兴杰
黄仁斌
叶莘
Current Assignee
Zhuhai Hengqin Shengao Yunzhi Technology Co ltd
Original Assignee
Zhuhai Hengqin Shengao Yunzhi Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Zhuhai Hengqin Shengao Yunzhi Technology Co ltd
Priority to CN202210506262.XA
Publication of CN114639102A
Application granted
Publication of CN114639102B
Status: Active

Landscapes

  • Image Analysis (AREA)

Abstract

The invention provides a cell segmentation method and device based on key point and size regression. The method comprises: performing feature extraction on a cell image to be segmented to obtain a corresponding feature map; performing key point detection and size regression on the feature map to obtain key point information and cell size information associated with it, and determining cell detection boxes from the key point information and the associated cell size information, where each piece of key point information comprises the position of a key point of a cell detection box; and performing cell segmentation on the feature map based on the cell detection boxes to obtain the cell segmentation result of the image. The invention improves segmentation performance for irregular cells in dense scenes.

Description

Cell segmentation method and device based on key point and size regression
Technical Field
The invention relates to the field of image segmentation, and in particular to a cell segmentation method and device based on key point and size regression.
Background
In medical cell image analysis, cell detection and image segmentation are essential steps and the basic precondition for downstream tasks such as cell identification. However, real cell images are diverse and complex: multiple densely packed cells may form cell clusters, and segmenting such dense cells is considerably harder, easily introducing errors into the result. A cell image segmentation method that can cope with dense cell scenes is therefore needed.
At present, cell image segmentation mostly relies on traditional anchor-box-based instance segmentation models such as Mask R-CNN and PointRend. When such models are used, potential target locations usually have to be enumerated in advance and anchor boxes generated at each of them, so that target bounding boxes can be predicted from these anchors. However, when cells aggregate heavily and overlap or squeeze one another, the preset anchors may miss some cells in a cluster, so those cells are absent from the segmentation result. Some existing anchor-free instance segmentation models reduce the performance degradation caused by anchors, but their cell detection boxes are still predicted inaccurately, which makes the later selection among detection boxes difficult and degrades the subsequent per-box segmentation. A more accurate detection-box prediction method is therefore needed for cell segmentation in dense scenes.
Disclosure of Invention
The invention provides a cell segmentation method and device based on key point and size regression, which overcome the poor prediction accuracy of cell detection boxes in the prior art.
The invention provides a cell segmentation method based on key point and size regression, which comprises the following steps:
extracting features from a cell image to be segmented to obtain a corresponding feature map, and performing key point detection on the feature map to obtain key point information and cell size information associated with the key point information; wherein each piece of key point information comprises the position and the type of a key point of a cell detection box, and the cell size information represents the size of the detection box corresponding to the associated key point information;
within the search range of each piece of center point information among the key point information, searching for the corner point information corresponding to that center point information, taking the key point information other than the center points as search objects; wherein the search range of any piece of center point information is determined from the cell size information associated with it;
generating cell detection boxes based on each piece of center point information and the corner point information corresponding to it;
and performing cell segmentation on the feature map based on the cell detection boxes to obtain a cell segmentation result of the cell image to be segmented.
According to the cell segmentation method based on key point and size regression provided by the invention, searching within the search range of each piece of center point information for its corresponding corner point information, with the key point information other than center points as search objects, specifically comprises:
determining an initial search box for any piece of center point information based on that center point information and the cell size information associated with it; the initial search box is a rectangle centered on the center point whose size matches the associated cell size information;
determining the search range of the center point information by taking each corner of its initial search box as a search center and a preset threshold as the search radius;
and searching for the corner point information corresponding to the center point information within its search range, according to the type of each piece of key point information.
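The two-stage search above (initial box from the regressed size, then a radius-limited search around each of its corners) can be sketched as follows; the `(x, y, type)` keypoint encoding and the helper name `find_corners` are illustrative assumptions, not taken from the patent:

```python
import math

def find_corners(center, size, keypoints, radius):
    """Find the top-left / bottom-right corner keypoints that belong to
    the same detection box as `center`.

    center    : (x, y) of a predicted center keypoint
    size      : (w, h) regressed cell size associated with the center
    keypoints : list of (x, y, kind) with kind in {"tl", "br"}
    radius    : preset search radius around each corner of the initial box
    """
    cx, cy = center
    w, h = size
    # Corners of the initial search box centered on the center point.
    expected = {"tl": (cx - w / 2, cy - h / 2),
                "br": (cx + w / 2, cy + h / 2)}
    found = {}
    for kind, (ex, ey) in expected.items():
        # Among keypoints of the matching type inside the search radius,
        # keep the one closest to the expected corner position.
        candidates = [(math.hypot(x - ex, y - ey), (x, y))
                      for x, y, k in keypoints
                      if k == kind and math.hypot(x - ex, y - ey) <= radius]
        if candidates:
            found[kind] = min(candidates)[1]
    return found
```

Keeping the radius small in dense scenes, as the description suggests, directly limits how many corners of neighboring cells can fall into `candidates`.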
According to the cell segmentation method based on key point and size regression provided by the invention, extracting features from the cell image to be segmented to obtain the corresponding feature map, and performing key point detection on the feature map to obtain key point information and associated cell size information, specifically comprises:
performing multi-scale feature extraction on the cell image to be segmented to obtain a feature map at each scale;
superimposing the upsampled feature map output by the upsampling module of the previous scale with the feature map of the previous scale, and feeding the result to the upsampling module of the current scale to obtain the upsampled feature map of the current scale; wherein the input of the first upsampling module is the feature map at the highest-order scale;
and performing key point detection and cell size regression on the upsampled feature map output by the upsampling module of each scale to obtain the key point information and associated cell size information at each scale.
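A shape-level sketch of the multi-scale pyramid and the top-down superimpose-and-upsample fusion described above, with average pooling and nearest-neighbor upsampling standing in for the learned downsampling and upsampling modules (an assumption for illustration):

```python
import numpy as np

def downsample(x):
    # 2x average pooling -- stand-in for a conv downsampling stage
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample(x):
    # 2x nearest-neighbor upsampling -- stand-in for a learned module
    return x.repeat(2, axis=0).repeat(2, axis=1)

def multiscale_features(image, n_scales=3):
    # Fine (low-order) to coarse (high-order) feature maps
    feats = [image]
    for _ in range(n_scales - 1):
        feats.append(downsample(feats[-1]))
    return feats

def fuse_top_down(feats):
    """Start from the highest-order map; at each step upsample the running
    fusion and superimpose the feature map of the next finer scale,
    yielding one fused map per scale (coarse to fine)."""
    fused = [feats[-1]]
    for skip in reversed(feats[:-1]):
        fused.append(upsample(fused[-1]) + skip)
    return fused
```

Each fused map thus integrates the semantics of all coarser scales, which is what lets per-scale keypoint heads adapt to cells of different sizes.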
According to the cell segmentation method based on key point and size regression provided by the invention, generating cell detection boxes based on each piece of center point information and its corresponding corner point information specifically comprises:
generating candidate detection boxes at each scale based on each piece of center point information at that scale and the corner point information corresponding to it;
and fusing the candidate detection boxes at all scales by weighted non-maximum suppression to obtain the cell detection boxes; wherein candidate boxes from higher-order scales and candidate boxes with higher confidence receive larger weights.
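A hedged sketch of the weighted non-maximum suppression step: overlapping candidates are merged into a weighted average instead of being discarded. The exact weighting scheme is an assumption; the patent only states that higher-order scales and higher confidences receive larger weights:

```python
def iou(a, b):
    # a, b = (x1, y1, x2, y2); intersection-over-union of two boxes
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def weighted_nms(boxes, iou_thr=0.5):
    """boxes: list of (coords, confidence, scale_weight).
    Candidates overlapping a seed box are averaged with weights
    confidence * scale_weight, so higher-order scales and higher
    confidences dominate the fused coordinates."""
    boxes = sorted(boxes, key=lambda b: b[1] * b[2], reverse=True)
    merged = []
    while boxes:
        seed = boxes.pop(0)
        group, rest = [seed], []
        for b in boxes:
            (group if iou(seed[0], b[0]) >= iou_thr else rest).append(b)
        boxes = rest
        total = sum(c * s for _, c, s in group)
        coords = tuple(sum(x[i] * c * s for x, c, s in group) / total
                       for i in range(4))
        merged.append((coords, max(c for _, c, _ in group)))
    return merged
```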
According to the cell segmentation method based on key point and size regression provided by the invention, performing key point detection on the upsampled feature map output by the upsampling module of each scale specifically comprises:
using the key point heat map prediction branch of the key point prediction module to detect key points on the upsampled feature map of each scale, obtaining a key point heat map at each scale;
using the offset prediction branch of the key point prediction module to determine, from the upsampled feature map of each scale, the key point offsets at that scale; wherein the key point offset at any scale represents the coordinate offset of each key point when it is mapped from the heat map at that scale back to the cell image to be segmented;
and determining the key point information at each scale from the key point heat map and the key point offsets at that scale.
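The heat-map-plus-offset decoding can be illustrated as follows; treating every activation above a threshold as a keypoint, and the particular array layout, are assumptions made for the sketch:

```python
import numpy as np

def decode_keypoints(heatmap, offsets, stride, thresh=0.5):
    """Decode keypoint positions from one keypoint type's heat map plus a
    2-channel offset map, mapping them back to input-image coordinates.

    heatmap : (H, W) activation in [0, 1]
    offsets : (2, H, W) sub-stride coordinate offsets (dx, dy)
    stride  : downsampling factor between input image and heat map
    """
    points = []
    ys, xs = np.where(heatmap >= thresh)
    for y, x in zip(ys, xs):
        # Map the grid cell back to the image and correct it with the
        # offset, recovering the precision lost to downsampling.
        px = (x + offsets[0, y, x]) * stride
        py = (y + offsets[1, y, x]) * stride
        points.append((px, py, float(heatmap[y, x])))
    return points
```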
According to the cell segmentation method based on key point and size regression provided by the invention, the loss function of the key point prediction module during training comprises a key point heat map loss and an offset loss;
wherein the heat map loss characterizes the difference between the sample key point heat map and the activation result of the sample key points determined from the annotation of the sample cell image; the sample key point heat map comprises the key points in the sample cell image predicted by the heat map prediction branch;
and the offset loss characterizes the error of the sample key point offsets predicted by the offset prediction branch.
According to the cell segmentation method based on key point and size regression provided by the invention, performing cell segmentation on the feature map based on the cell detection boxes to obtain the cell segmentation result of the cell image to be segmented specifically comprises:
cropping the feature map at each scale according to any cell detection box to obtain a cropped feature at each scale;
upsampling the current cropped fusion feature and fusing it with the cropped feature at the corresponding scale to obtain the next cropped fusion feature; wherein the first cropped fusion feature is the cropped feature at the highest-order scale;
performing cell mask prediction on the last cropped fusion feature to obtain the cell mask prediction result corresponding to the detection box;
and determining the cell segmentation result from the cell mask prediction results corresponding to all the detection boxes.
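The per-box crop-and-fuse mask prediction can be sketched as follows; thresholding the fused feature stands in for the learned mask head, and strides doubling per scale is an assumption:

```python
import numpy as np

def crop(feat, box, stride):
    # Map the detection box into this scale's feature grid and crop it
    x1, y1, x2, y2 = (int(round(v / stride)) for v in box)
    return feat[y1:y2, x1:x2]

def upsample2x(x):
    return x.repeat(2, axis=0).repeat(2, axis=1)

def predict_mask(feats_by_scale, box, base_stride=1, thresh=0.0):
    """feats_by_scale: feature maps from fine to coarse (stride doubling).
    Crops the box region at every scale, fuses coarse-to-fine by
    upsample-and-add, and thresholds the last fusion as a binary mask."""
    strides = [base_stride * 2 ** i for i in range(len(feats_by_scale))]
    crops = [crop(f, box, s) for f, s in zip(feats_by_scale, strides)]
    fused = crops[-1]                  # start at the highest-order scale
    for fine in reversed(crops[:-1]):
        fused = upsample2x(fused)[:fine.shape[0], :fine.shape[1]] + fine
    return fused > thresh              # stand-in for a learned mask head
```

Running this once per detection box, then pasting each binary mask back at its box location, yields the per-cell segmentation described in the text.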
According to the cell segmentation method based on key point and size regression provided by the invention, the cell segmentation is realized by a cell segmentation module;
wherein the loss function of the cell segmentation module during training comprises a mask loss and an edge loss; the mask loss characterizes the difference between the sample cell mask prediction result produced by the cell segmentation module and the cell mask annotation; the edge loss characterizes the difference between the sample cell edge prediction result and the cell edge annotation; the sample cell edge prediction result is derived from the sample cell mask prediction result, and the cell edge annotation is derived from the cell mask annotation.
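A sketch of a combined mask-and-edge loss of the kind described, with both edge maps derived from the corresponding masks; Dice is used for each term purely for illustration, since the patent does not specify the particular per-pixel loss:

```python
import numpy as np

def edges(mask):
    # Pixels whose 4-neighborhood leaves the mask count as edge pixels
    padded = np.pad(mask.astype(bool), 1)
    interior = (padded[1:-1, 1:-1] & padded[:-2, 1:-1] & padded[2:, 1:-1]
                & padded[1:-1, :-2] & padded[1:-1, 2:])
    return mask.astype(bool) & ~interior

def dice_loss(pred, gt):
    inter = (pred & gt).sum()
    return 1 - 2 * inter / max(pred.sum() + gt.sum(), 1)

def segmentation_loss(pred_mask, gt_mask, edge_weight=1.0):
    """Mask loss plus edge loss, the edge maps being derived from the
    respective masks as the patent describes; the Dice form and the
    edge_weight parameter are assumptions for this sketch."""
    return (dice_loss(pred_mask, gt_mask)
            + edge_weight * dice_loss(edges(pred_mask), edges(gt_mask)))
```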
The invention also provides a cell segmentation device based on key point and size regression, which comprises:
a key point regression unit, configured to extract features from a cell image to be segmented to obtain a corresponding feature map, and to perform key point detection on the feature map to obtain key point information and cell size information associated with the key point information; wherein each piece of key point information comprises the position and the type of a key point of a cell detection box, and the cell size information represents the size of the detection box corresponding to the associated key point information;
a corner search unit, configured to search, within the search range of each piece of center point information among the key point information, for the corner point information corresponding to that center point information, taking the key point information other than the center points as search objects; wherein the search range of any piece of center point information is determined from the cell size information associated with it;
a detection box generation unit, configured to generate cell detection boxes based on each piece of center point information and the corner point information corresponding to it;
and a cell segmentation unit, configured to perform cell segmentation on the feature map based on the cell detection boxes to obtain a cell segmentation result of the cell image to be segmented.
The invention also provides an electronic device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the program, implements the steps of any of the above cell segmentation methods based on key point and size regression.
The invention also provides a non-transitory computer-readable storage medium on which a computer program is stored, the computer program, when executed by a processor, implementing the steps of any of the above cell segmentation methods based on key point and size regression.
The invention also provides a computer program product comprising a computer program which, when executed by a processor, implements the steps of any of the above cell segmentation methods based on key point and size regression.
In the cell segmentation method and device based on key point and size regression provided by the invention, key point detection is performed on the feature map to obtain each piece of key point information and the cell size information associated with it; the key point information and the associated cell size information are then jointly used to search for the corner point information of each piece of center point information, so that the corresponding cell detection box is generated from each center point and its corner points. Because the cell size information constrains the size of the detection box and the key points of one detection box are searched for only within this constrained range, the key points of the detection boxes of different cells are not confused when cells are dense, and the segmentation failures such confusion causes are avoided. This improves the accuracy of detection box prediction, and performing cell segmentation within each detection box then improves the segmentation performance for irregular cells in dense scenes.
Drawings
In order to more clearly illustrate the present invention or the technical solutions in the prior art, the drawings used in the embodiments or the description of the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
FIG. 1 is a schematic flow chart of a cell segmentation method based on key point and size regression according to the present invention;
FIG. 2 is a schematic diagram of a corner point search method provided by the present invention;
FIG. 3 is a schematic diagram of a multi-scale feature extraction method provided by the present invention;
FIG. 4 is a schematic structural diagram of a CBAM module provided by the present invention;
FIG. 5 is a schematic diagram of a multi-scale keypoint regression method provided by the present invention;
FIG. 6 is a schematic diagram of a cell segmentation branch provided by the present invention;
FIG. 7 is a flow chart illustrating a segmentation model construction method according to the present invention;
FIG. 8 is a block diagram of a segmentation model provided by the present invention;
FIG. 9 is a schematic diagram comparing the effects of the models provided by the present invention;
FIG. 10 is a schematic diagram of a cell segmentation apparatus based on key point and size regression according to the present invention;
fig. 11 is a schematic structural diagram of an electronic device provided in the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is obvious that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Fig. 1 is a schematic flowchart of a cell segmentation method based on key point and size regression according to an embodiment of the present invention, as shown in fig. 1, the method includes:
step 110, extracting features of a cell image to be segmented to obtain a feature map corresponding to the cell image to be segmented, and detecting key points based on the feature map to obtain key point information and cell size information related to the key point information; wherein, any key point information comprises the position information and the type of the key point corresponding to the cell detection frame; the cell size information characterizes size information of the cell detection frame corresponding to the associated keypoint information.
Here, an image feature extraction network may be used to extract features, capturing the semantic information of each cell in the image to be segmented and yielding the corresponding feature map. For example, a convolutional neural network may downsample the image several times, and the highest-level feature map output by the last downsampling stage may serve as the feature map of the image. In scenes where cell sizes differ, a multi-scale feature extraction scheme may instead be adopted, progressively extracting feature maps of the image at several scales so that the semantics of cells of different sizes are captured accurately.
Key point detection is then performed on the feature map to obtain the key point information in the image and the cell size information associated with each piece of key point information. Each piece of key point information comprises the position and the type of a key point of a cell detection box; the key points comprise the center point and/or the corner points of a detection box, and whether a given key point is a center point or a corner point is determined by the type recorded in its key point information.
In addition, if feature maps of the image have been extracted at several scales, key points may be predicted separately from the feature map at each scale, yielding key point information per scale. Specifically, starting from the feature map at the highest-order scale, the feature map of each next scale may be fused in step by step; after each fusion, key point prediction is performed on the fused feature map, so that key point information is obtained in order from the high-order scales down to the low-order scales. When any feature map is fused, the image semantics of all feature maps from the highest-order scale down to the current scale are integrated into the fused map at the current scale, and key point prediction at that scale draws on this integrated information. Since every prediction uses the semantics of the current and all coarser scales, the scheme adapts to cells of different sizes and to deformed cells, improving the accuracy and completeness of key point prediction in dense cell scenes.
It should be noted that whether multi-scale feature extraction and multi-scale key point prediction are adopted may be decided according to the actual application scenario, which is not specifically limited in the embodiments of the invention. If cell sizes in the scenario vary little, the highest-level feature map output by the feature extraction network may serve as the feature map of the image, and key points may be predicted from it alone; if cell sizes vary widely, the feature map produced by each downsampling stage of the feature extraction network may be retained, and key points predicted separately at each scale.
When key points are predicted, cell size regression may also be performed on the feature map to determine the cell size information associated with each piece of key point information; alternatively, the cell size information may be preset from the average cell size in the current application scenario. The cell size information reflects the size of the detection box corresponding to the associated key point information.
Step 120: within the search range of each piece of center point information among the key point information, search for the corner point information corresponding to that center point information, taking the key point information other than the center points as search objects; the search range of any piece of center point information is determined from the cell size information associated with it.
Here, according to the number of cells expected in the scenario, several pieces of key point information with high confidence (that is, a high likelihood of being key points of cell detection boxes) may be selected from the key point information as the basis for determining the detection boxes. The cell size information obtained above is then used to constrain each detection box so that the key points belonging to one detection box can be identified and the box size matches the associated cell size information, improving the accuracy of the detection boxes obtained in dense scenes. Specifically, center point information is identified by the type in the key point information. For each piece of center point information, a search range is determined from it and its associated cell size information, and the other key points belonging to the same detection box are searched for within this range. Using the associated cell size information to delimit the search range largely ensures that the corner points most likely to share a detection box with the center point are included, while corner points belonging to the detection boxes of other cells are excluded as far as possible.
Then, within the search range of each piece of center point information, the key point information other than the center points is taken as the search objects, and the corner point information corresponding to that center point is searched for. For any center point, the other key points whose type is a corner (for example, top-left and bottom-right key points) are searched for within its range. If more than one corner of the same type is found (for example, two top-left key points fall within the range), the candidates may be further screened using the cell size information associated with the center point to settle on the corresponding corner point information. The center point information and its corner point information are then combined into a key point set, from which the corresponding cell detection box is generated. If multi-scale key point prediction is used, the corner search for a center point takes only the other key points at the same scale as that center point as search objects.
Alternatively, a cell detection box may be determined directly from a piece of center point information and its associated cell size information, which improves the efficiency of obtaining detection boxes.
Step 130: generate cell detection boxes based on each piece of center point information and the corner point information corresponding to it.
Here, for each piece of center point information, a rectangle whose corners are given by its corner point information (for example, the top-left and bottom-right key points) may be generated as the cell detection box corresponding to that center point.
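Box assembly from a matched keypoint set, including the direct center-plus-size fallback mentioned earlier, can be sketched as follows; the dictionary encoding of a matched group is an assumption for illustration:

```python
def boxes_from_keypoints(groups):
    """groups: list of dicts with a 'center' point, the regressed 'size'
    associated with it, and optional 'tl' / 'br' corner points found by
    the corner search. A matched corner pair defines the box directly;
    otherwise the box falls back to the center plus the regressed size."""
    boxes = []
    for g in groups:
        if "tl" in g and "br" in g:
            (x1, y1), (x2, y2) = g["tl"], g["br"]
        else:
            (cx, cy), (w, h) = g["center"], g["size"]
            x1, y1, x2, y2 = cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2
        boxes.append((x1, y1, x2, y2))
    return boxes
```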
Step 140: perform cell segmentation on the feature map based on the cell detection boxes to obtain a cell segmentation result of the cell image to be segmented.
Here, to avoid interference between cells, the feature map of the image may be segmented within the extent of each detection box, and the cell region inside each box taken as the cell segmentation result of the image. If the multi-scale feature extraction scheme is used, segmentation may proceed by traversal: the feature maps at all scales are used to binarily segment the potential cell region under each detection box one by one, yielding the cell segmentation result of the image.
The method provided by the embodiment of the invention performs key point detection on the feature map to obtain each piece of key point information and its associated cell size information, then jointly uses the key point information and the associated cell size information to search for the corner point information of each center point, so that the corresponding cell detection box is generated from each center point and its corner points. Constraining the box size with the cell size information and searching for the key points of one box only within that constrained range prevents the key points of different cells' detection boxes from being confused when cells are dense and avoids the resulting segmentation failures, improving the accuracy of detection box prediction; segmenting cells within each detection box then improves the segmentation performance for irregular cells in dense scenes.
Based on the above embodiment, searching within the search range of each piece of center point information for its corresponding corner point information, with the key point information other than center points as search objects, specifically includes:
determining an initial search box for any piece of center point information based on that center point information and the cell size information associated with it; the initial search box is a rectangle centered on the center point whose size matches the associated cell size information;
determining the search range of the center point information by taking each corner of its initial search box as a search center and a preset threshold as the search radius;
and searching for the corner point information corresponding to the center point information within its search range, according to the type of each piece of key point information.
Specifically, as shown in fig. 2, for any piece of center point information, an initial search box centered on the center point information (as shown by a dashed box in fig. 2, the center point information is a center point of a dashed box) may be defined based on the center point information and cell size information associated with the center point information, and the size of the initial search box is adapted to the cell size information associated with the center point information. Usually, the corner information corresponding to the center point information should be at or near the corner of the initial search box. Therefore, the search range of the center point information (e.g., the solid line frames at the upper left corner and the lower right corner in fig. 2) can be determined by taking the corner point of the initial search box of the center point information as the search center and taking a preset threshold as the search radius. Here, the size of the search radius may be set according to the cell density, and the search radius may be set to be smaller as the cells are denser, so as to avoid enclosing the corner points of the cell detection frames of too many other cells in the search range.
Then, the corner point information corresponding to the central point information is searched based on the types of the other key point information within the search range of the central point information. Specifically, for the search range of any central point information, the other key point information may be taken as search objects, and according to their types, the corner point information that is of the same type as the center of the search range (for example, all upper-left corner points or all lower-right corner points) and closest to the center of the search range is selected. This improves the accuracy of the resulting cell detection frame; in particular, in a dense cell scene, the most appropriate cell detection frame for each cell can be obtained. As shown in fig. 2, suppose the key points of the upper-left corner type are tl1 and tl2, and the key points of the lower-right corner type are br1 and br2. Taking the search range at the lower-right corner as an example, since the center of this search range is the lower-right corner point of the initial search box, the lower-right corner point located within the search range and closest to its center, namely br1, can be found among the key point information at the same scale.
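As a rough sketch of this corner search, under assumptions (the function name, the plain (x, y) tuples, and the "tl"/"br" type labels are illustrative, not from the patent), the nearest same-type corner within the search radius around each corner of the initial box can be found as follows:

```python
import math

def search_corners(center, size, keypoints, radius):
    """Find the top-left / bottom-right corner keypoints belonging to
    the cell whose predicted center is `center`, given its regressed size.

    center    : (x, y) of a predicted center keypoint
    size      : (w, h) cell size associated with that center
    keypoints : list of (x, y, kind) with kind in {"tl", "br"}
    radius    : search radius around each corner of the initial box
    """
    cx, cy = center
    w, h = size
    # Corners of the initial search box centered on the center point.
    expected = {"tl": (cx - w / 2, cy - h / 2),
                "br": (cx + w / 2, cy + h / 2)}
    found = {}
    for kind, (ex, ey) in expected.items():
        best, best_d = None, radius
        for (x, y, k) in keypoints:
            if k != kind:
                continue  # only match corners of the same type
            d = math.hypot(x - ex, y - ey)
            if d <= best_d:  # inside the search radius, closest so far
                best, best_d = (x, y), d
        found[kind] = best
    return found
```

A center paired with both a "tl" and a "br" result then yields one candidate detection frame; a center missing either corner is discarded.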
Based on any of the embodiments, the performing feature extraction on the cell image to be segmented to obtain a feature map corresponding to the cell image to be segmented, and performing key point detection based on the feature map to obtain key point information and cell size information associated with the key point information specifically includes:
carrying out multi-scale feature extraction on a cell image to be segmented to obtain feature maps under all scales;
after an up-sampling feature map output by an up-sampling module of a previous scale is overlapped with a feature map of the previous scale, the feature map is input to an up-sampling module of a current scale to obtain an up-sampling feature map output by the up-sampling module of the current scale; wherein, the input of the first up-sampling module is a characteristic diagram under the highest-order scale;
and performing key point detection and cell size regression based on the up-sampling characteristic diagram output by the up-sampling module of each scale to obtain key point information and cell size information related to the key point information under each scale.
In particular, a multi-scale feature extraction network can be constructed to realize multi-scale feature extraction for the cell image to be segmented. The multi-scale feature extraction network can be implemented by various backbone networks, for example, a basic encoder of the Unet, Resnet, DLAnet, Densenet, Imagenet, Efficientnet, and the like. As shown in fig. 3, the multi-scale feature extraction network gradually obtains feature maps at different scales through a 3-5-level down-sampling process. The feature maps at different scales contain image semantic information at different degrees, the feature map corresponding to a higher scale has richer high-level semantic information, such as category information, and the feature map corresponding to a lower scale has richer low-level semantic information, such as position information.
Therefore, subsequent key point prediction and cell segmentation can be performed using the feature maps at multiple scales, integrating the information they contain so that cells of different sizes can be captured and segmented more accurately. In addition, an attention mechanism may be fused into the backbone network; for example, a CBAM (Convolutional Block Attention Module) may be integrated into each Resnet Block to improve the accuracy of feature extraction. The CBAM module integrated into each Resnet Block is shown in fig. 4.
As shown in fig. 5, the feature map (1024 × 64) corresponding to the highest-order scale, output by the last layer of the multi-scale feature extraction network, is input to the first upsampling module (i.e., the upsampling module of the highest-order scale). The first upsampling module upsamples this feature map, doubling its spatial resolution, and outputs the upsampled feature map, which is then superimposed with the feature map (512 × 128) of the same scale; the superimposed result serves as the input of the upsampling module of the next scale, yielding the upsampled feature map output at the next lower scale. By analogy, the upsampled feature maps output by the upsampling modules of all scales are obtained in sequence.
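The coarse-to-fine superposition described above can be sketched in NumPy. This is a shape-level illustration only; the real upsampling modules use learned deconvolutions and channel-reducing convolutions, which are replaced here by nearest-neighbour repetition and channel slicing:

```python
import numpy as np

def upsample2x(x):
    """Nearest-neighbour 2x upsampling of a (C, H, W) feature map."""
    return x.repeat(2, axis=1).repeat(2, axis=2)

def decode(feature_maps):
    """Fuse multi-scale encoder features, coarsest first.

    feature_maps : list of (C, H, W) arrays, ordered from the
                   highest-order (smallest) scale down to the lowest.
    Returns the upsampled feature map produced at each scale.
    """
    outputs = [feature_maps[0]]
    fused = feature_maps[0]
    for skip in feature_maps[1:]:
        up = upsample2x(fused)
        # A learned channel-reducing convolution is replaced by
        # slicing here, purely to keep the sketch's shapes compatible.
        up = up[: skip.shape[0]]
        fused = up + skip  # superimpose with the same-scale feature map
        outputs.append(fused)
    return outputs
```

Each entry of `outputs` stands in for one upsampling module's output, which the key point prediction heads would then consume.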
The up-sampling feature map output by the up-sampling module of each scale can be input to the key point prediction module, and the plurality of regression heads are utilized to perform key point detection and cell size regression of the cell detection frame, so that key point information and cell size information associated with the key point information under each scale are obtained.
Here, since the sizes of cells in the cell image are not completely consistent, especially in a dense cell scene, there are problems that the cells are squeezed each other, the cells are deformed, and the like, and thus the sizes of the respective cells are different. Considering that the cell size information is the key information for determining the cell detection frame, the cell size information associated with each key point information can be determined by performing regression calculation on the key point information under each scale, so that the cell size information can more accurately reflect the size of the cell corresponding to each key point information, and the accuracy of the cell detection frame determined according to the cell size information can be improved. According to the obtained key point information and the cell size information associated with the key point information under each scale, a plurality of pieces of key point information (including but not limited to a central point, an upper left corner, a lower right corner and the like) under the same scale can be combined into a plurality of key point sets through a distance rule determined based on the cell size information, and corresponding cell detection frames are respectively generated through the key point sets.
Based on any one of the embodiments, the generating a cell detection frame based on the center point information and the corner point information corresponding to each center point information specifically includes:
generating candidate detection frames under each scale based on each piece of central point information under each scale and the corner point information corresponding to each piece of central point information;
fusing the candidate detection frames under each scale by adopting a weighted non-maximum inhibition method to obtain cell detection frames under each scale; wherein, the candidate detection frame at the higher order scale has a larger weight, and the candidate detection frame with higher confidence coefficient has a larger weight.
Specifically, since the cell detection frames obtained by the search method at different scales may overlap with each other, candidate detection frames may first be generated at each scale from each piece of central point information and its corresponding corner point information, and the candidate detection frames are then screened. Here, a Non-Maximum Suppression (NMS) method may be used to screen the candidate detection frames at each scale. To further improve the accuracy of the selected cell detection frames, a weighted NMS method may be used to fuse and screen the candidates into more accurate cell detection frames. Specifically, in the weighted non-maximum suppression method, a weight may be set for each candidate detection frame: the greater the weight of a candidate detection frame, the more likely it is to be selected and retained; conversely, the smaller its weight, the more likely it is to be filtered out.
Here, when setting the weights of the candidate detection frames, the weight of a candidate detection frame at a higher-order scale may be set larger, rather than setting uniform weights across the scales. A candidate detection frame determined by focused search based on key point information and cell size information from a higher-order upsampled feature map draws on higher-level and richer image semantic information, so its accuracy is likely to be higher and its weight is correspondingly set larger. In addition, the higher the confidence of a candidate detection frame, the larger its weight; the confidence of a candidate detection frame can be determined from the energy values of the key point heat map within it. For example, the following formula can be used to select the cell detection frame from the candidate detection frames:
b_pre = argmax_i ( w_i * b_i )

wherein b_i is the heat-map mean value of each of the candidate detection frames (across different scales) formed for the same object, w_i is the weight corresponding to each candidate detection frame, and b_pre denotes taking the candidate detection frame whose w_i * b_i is the maximum as the cell detection frame of the object.
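A minimal sketch of this selection rule (names are illustrative; each candidate carries its box, its scale/confidence weight, and its mean heat-map energy):

```python
def select_box(candidates):
    """Pick the final cell box from overlapping candidates.

    candidates : list of (box, w, b), where `w` is the weight of the
    candidate and `b` is the mean heat-map energy inside its box.
    Returns the box maximizing w * b.
    """
    return max(candidates, key=lambda c: c[1] * c[2])[0]
```

A weaker but higher-weighted candidate can thus lose to a slightly lower-weighted one with much stronger heat-map support, which is the intended fusion behavior.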
Based on any of the above embodiments, the performing key point detection on the upsampling feature map output by the upsampling module based on each scale specifically includes:
using the key point heat map acquisition branch of the key point prediction module, respectively performing key point detection on the upsampled feature maps output by the upsampling modules of all scales, to obtain the key point heat maps at all scales;
utilizing the offset prediction branch of the key point prediction module to determine the key point offset under each scale based on the up-sampling characteristic diagram output by the up-sampling module of each scale; the key point offset under any scale represents the coordinate offset of each key point under any scale when the key point is mapped to the cell image to be segmented from the key point heat map under any scale;
and determining the key point information under each scale based on the key point heat map and the key point offset under each scale.
Specifically, the key point detection task may be split into two branches: a key point heat map acquisition branch and an offset prediction branch. The key point heat map acquisition branch performs key point detection on the upsampled feature maps output by the upsampling modules of all scales, obtaining the key point heat maps at all scales. The key points in a key point heat map may be stored in sub-channels according to key point type (for example, the center point, the upper-left corner point and the lower-right corner point may each be stored in its own sub-channel), so that the type of each key point can be identified in subsequent operations.
In addition, the offset prediction branch is used for determining the key point offset under each scale based on the up-sampling feature map output by the up-sampling module of each scale; and the key point offset in any scale represents the coordinate offset of each key point in the scale when the key point is mapped to the cell image to be segmented from the key point heat map in the scale. Based on the key point heat map and the key point offset in each scale, the position of the key point in the cell image to be segmented in each scale can be calculated, and the key point information in each scale is obtained.
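Combining a heat map with its offsets, as described, might look like the following sketch (the threshold, names and shapes are assumptions; real decoding would also apply peak extraction rather than a bare threshold):

```python
import numpy as np

def decode_keypoints(heatmap, offsets, stride, thresh=0.5):
    """Map heat-map peaks back to image coordinates.

    heatmap : (H, W) activation map for one keypoint type
    offsets : (2, H, W) predicted sub-pixel offsets (dx, dy)
    stride  : downsampling factor of this scale
    """
    ys, xs = np.where(heatmap > thresh)
    points = []
    for y, x in zip(ys, xs):
        dx, dy = offsets[0, y, x], offsets[1, y, x]
        # coarse grid position times the stride, refined by the offset
        points.append((x * stride + dx, y * stride + dy))
    return points
```

The offset term is what removes the up-to-one-cell quantization error introduced when the image is downsampled by `stride`.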
In any of the above embodiments, the loss function of the keypoint prediction module during training comprises a keypoint heat map loss and an offset loss;
wherein the keypoint heat map loss characterizes a difference between an activation result of the sample keypoint heat map and a sample keypoint determined based on an annotation result of the sample cell image; the sample keypoint heat map comprises keypoints in the sample cell image obtained by the offset prediction branch prediction;
the offset loss characterizes an error of a sample keypoint offset predicted by the offset prediction branch.
Specifically, to constrain the formation of key points, a key point heat map loss and an offset loss may be employed to constrain the key point heat map acquisition branch and the offset prediction branch, respectively, when training the key point prediction module. The key point heat map loss characterizes the difference between the activation result of the sample key point heat map and the sample key points determined from the annotation result of the sample cell image. For example, the key point heat map acquisition branch may be constrained with a BCE loss between the predicted key point heat map after Sigmoid activation and the GroundTruth, according to the following formula:
L_heat = -(1/N) * Σ_i [ y_i * log(ŷ_i) + (1 - y_i) * log(1 - ŷ_i) ]

wherein y_i is the Ground Truth (GT) value of pixel point i, and ŷ_i is the predicted value of pixel point i. Here the GT is generated by first forming a GT Bbox from the cell annotation to obtain the three key point coordinates, then generating a circle of diameter r at each coordinate, and resampling the circles at the different scales as the GT of the key point heat map acquisition branch at each scale.
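The BCE constraint above can be written out directly. A minimal pure-Python sketch over flattened heat-map pixels (an epsilon clamp is added for numerical safety, which the formula itself does not include):

```python
import math

def bce_loss(y_true, y_pred, eps=1e-7):
    """Binary cross-entropy averaged over flattened heat-map pixels.

    y_true : iterable of GT values y_i in [0, 1]
    y_pred : iterable of Sigmoid-activated predictions ŷ_i
    """
    n = len(y_true)
    total = 0.0
    for y, p in zip(y_true, y_pred):
        p = min(max(p, eps), 1.0 - eps)  # clamp so log() stays finite
        total += y * math.log(p) + (1.0 - y) * math.log(1.0 - p)
    return -total / n
```

In practice this would be applied per scale to the heat map of each key point type, but the per-pixel arithmetic is the same.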
The offset loss represents the error of the sample key point offsets predicted by the offset prediction branch, and is used to eliminate the error introduced by discretization. The offset loss can be calculated as:

L_off = (1/N) * Σ_p | Ô_p̃ - (p/R - p̃) |

The offset loss is computed only at the positions of the key points; offsets at other positions do not contribute. Here N is the number of cells, p is the absolute position of a key point, p̃ = ⌊p/R⌋ is its quantized location at the scale with downsampling factor R, and Ô_p̃ is the offset predicted at that location; the loss is essentially an L1 loss.
For the constraint on the cell size information (the size of a cell being its length and width), an L1-type loss can likewise be constructed: the length and width of each cell computed from the GT Bbox are taken as the actual cell size s_k, thereby constraining the regressed cell size information ŝ_k:

l_size = (1/N) * Σ_k | ŝ_k - s_k |,  s_k = ( x_max^(k) - x_min^(k), y_max^(k) - y_min^(k) )

wherein s_k is the actual cell size of the k-th cell; x_max^(k) is the largest abscissa of the k-th cell and x_min^(k) the smallest, so x_max^(k) - x_min^(k) is the length of the k-th cell; y_max^(k) is the largest ordinate of the k-th cell and y_min^(k) the smallest, so y_max^(k) - y_min^(k) is the width of the k-th cell; and l_size is the mean absolute difference between the regressed cell size information and the actual cell sizes.
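The size constraint can be sketched as a plain L1 comparison between the regressed sizes and the sizes computed from the GT boxes (function and argument names are illustrative):

```python
def size_loss(pred_sizes, gt_boxes):
    """L1 loss between regressed cell sizes and GT Bbox sizes.

    pred_sizes : list of (w, h) regressed per cell
    gt_boxes   : list of (xmin, ymin, xmax, ymax) GT boxes
    """
    n = len(gt_boxes)
    total = 0.0
    for (pw, ph), (x0, y0, x1, y1) in zip(pred_sizes, gt_boxes):
        # actual size from the GT box: (xmax - xmin, ymax - ymin)
        total += abs(pw - (x1 - x0)) + abs(ph - (y1 - y0))
    return total / n
```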
Based on any one of the embodiments, the cell segmentation on the feature map based on the cell detection frame to obtain the cell segmentation result of the cell image to be segmented specifically includes:
respectively intercepting the feature maps under all scales based on any cell detection frame to obtain intercepted features under all scales;
the current intercepted fusion feature is subjected to upsampling and then fused with the intercepted feature under the corresponding scale, so that the next intercepted fusion feature is obtained; wherein, the first interception fusion feature is the interception feature under the highest order scale;
performing cell mask prediction based on the last intercepted fusion feature to obtain a cell mask prediction result corresponding to any cell detection frame;
and determining the cell segmentation result based on the cell mask prediction result corresponding to each cell detection frame.
Specifically, in order to avoid mutual interference between cells, the feature maps at each scale may be intercepted based on the cell detection frames described above. As shown in fig. 6, for any Cell detection frame (Cell Bbox), feature interception is carried out on the feature maps (F0-F4) at each scale, obtaining the intercepted features at each scale. Starting from the intercepted feature of the feature map (F4) at the highest-order scale, this intercepted feature is upsampled and then fused with the intercepted feature at the corresponding scale to obtain the first intercepted fusion feature. The first intercepted fusion feature is then upsampled and fused with the intercepted feature at the next scale to obtain the next intercepted fusion feature, and so on, until the intercepted feature of the last scale has been fused, yielding the final intercepted fusion feature.
For example, deconvolution with a convolution kernel of 3 × 3 may be performed on the truncated features of the feature map (F4) at the highest order scale, and the number of layers may be reduced from 1024 to 512 by upsampling, and then two-dimensional convolution with a convolution kernel of 1 × 1 is performed after stitching with the truncated vector of the feature map (F3) at the next highest order scale to obtain a new truncated and fused vector (512 × X2 × Y2); and sequentially carrying out the operations until the cut features of the feature graph (F0) corresponding to the lowest-order scale are spliced, and then obtaining the final cut fusion features through the arrangement of a 3-by-3 convolution kernel.
And performing cell mask prediction based on the final intercepted fusion characteristics to obtain a cell mask prediction result corresponding to the cell detection frame. For example, the final truncated fusion feature may be activated by using a Sigmoid function, and a cell mask prediction result corresponding to the cell detection frame may be obtained.
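The per-box feature interception can be sketched as a simple crop of each scale's feature map, mapping the image-space box onto the feature grid via that scale's stride (a simplification of the actual module, which would also upsample and fuse the crops as described above):

```python
import numpy as np

def crop_feature(fmap, box, stride):
    """Crop a (C, H, W) feature map with an image-space box.

    box    : (xmin, ymin, xmax, ymax) in image coordinates
    stride : downsampling factor between the image and this scale
    """
    # project the box onto this scale's grid
    x0, y0, x1, y1 = (int(round(v / stride)) for v in box)
    return fmap[:, y0:y1, x0:x1]
```

Calling this once per scale for a single cell detection frame yields the per-scale intercepted features (C1-C4 in the figure) that are then fused bottom-up.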
According to any of the above embodiments, the cell segmentation is performed by a cell segmentation module;
wherein the loss function of the cell segmentation module during training comprises mask loss and edge loss; the mask loss represents the difference between the sample cell mask prediction result obtained by the cell segmentation module and the cell mask marking result; the edge loss characterizes the difference between the cell edge prediction result and the cell edge labeling result of the sample; the sample cell edge prediction result is determined based on the sample cell mask prediction result, and the cell edge labeling result is determined based on the cell mask labeling result.
Specifically, the cell segmentation operation described above may be implemented by a cell segmentation module. The loss function of the cell segmentation module during training includes a mask loss and an edge loss. The mask loss represents the difference between the sample cell mask prediction result produced by the cell segmentation module and the cell mask labeling result, and constrains the cell area through this prediction and the GroundTruth. For example, the cell mask may be constrained with a combination of BCE loss and Dice loss.
For scenes where edges are important, Edge-aware segmentation may further be adopted: after edge extraction is performed on the sample cell mask prediction result and the cell mask labeling result respectively (for example, by a Sobel operator, a gradient operator, or the like), the edges are further constrained by an edge loss (for example, a Hausdorff loss). The edge loss characterizes the difference between the sample cell edge prediction result and the cell edge labeling result, and can be used to optimize the cell edge segmentation of the cell segmentation module, improving the segmentation effect in edge-sensitive scenes.
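The edge extraction and edge comparison described here can be sketched with a hand-rolled Sobel gradient on binary masks. For brevity, the Hausdorff loss named above is replaced in this sketch by a mean absolute difference of edge maps; that substitution is an assumption, not the patent's loss:

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def edges(mask):
    """Boolean edge map of a binary mask via Sobel gradient magnitude
    (valid region only, so output is (H-2, W-2))."""
    h, w = mask.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(3):
        for j in range(3):
            patch = mask[i:i + h - 2, j:j + w - 2]
            gx += SOBEL_X[i, j] * patch
            gy += SOBEL_Y[i, j] * patch
    return np.hypot(gx, gy) > 0

def edge_loss(pred_mask, gt_mask):
    """Mean absolute difference between the two edge maps."""
    return float(np.abs(edges(pred_mask).astype(float)
                        - edges(gt_mask).astype(float)).mean())
```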
Based on any of the above embodiments, the cell segmentation method based on the keypoint and size regression can be implemented by a segmentation model based on multi-scale keypoint regression, as shown in fig. 7, the construction process of the model includes:
s1, data collection, cleaning and labeling
This step covers the collection, cleaning and labeling of data. During data collection, 200 data samples were collected, each containing approximately 100 fields of view, with 15 microscopic images under each field of view. Cleaning mainly concerns image quality control, ensuring that each image is complete and its sharpness reaches the standard; images may be screened manually against a specific quality control standard. Labeling requires pixel-level annotation of cells, which can be realized with the Labelme framework. In addition, the data set can be divided into a training set, a test set and a validation set at a ratio of 8:1:1 by number of images.
S2, data preprocessing and standardization
Data pre-processing mainly comprises image histogram equalization, resampling (down-sampling from 2048 to 1024) and normalization. If the field-of-view size is larger, a sliding-window splitting mode is used.
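A sketch of this preprocessing pipeline, under assumptions (grayscale input with even dimensions; equalization via the empirical CDF and downsampling by 2x2 average pooling stand in for whatever exact implementations were used):

```python
import numpy as np

def preprocess(img):
    """Equalize, 2x down-sample, and standardize a grayscale image."""
    # histogram equalization via the empirical CDF
    flat = img.ravel()
    values, counts = np.unique(flat, return_counts=True)
    cdf = np.cumsum(counts) / flat.size
    eq = np.interp(flat, values, cdf).reshape(img.shape)
    # 2x down-sampling by average pooling (e.g. 2048 -> 1024)
    h, w = eq.shape
    small = eq.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    # zero-mean / unit-variance normalization
    return (small - small.mean()) / (small.std() + 1e-8)
```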
S3, model construction
The model construction environment uses Python 3.7 and the PyTorch 1.2 framework; the main software packages involved are Numpy, Pandas, Skimage, and the like. The hardware environment is a DGX station with four Nvidia GTX 1080 Ti GPUs. The overall framework of the model is shown in fig. 8 and comprises:
1. the multi-scale feature extraction network is constructed, and feature maps (E1, E2, E3 and E4) at different scales can be obtained through various backbone networks.
2. Constructing a multi-scale key point regression network, combining the same-scale feature maps through a plurality of levels of upsampling convolution modules to obtain upsampling feature maps (D1, D2, D3 and D4) under each scale and predict a plurality of key point heat maps (including but not limited to a central point) under the multi-scale, and simultaneously respectively obtaining the predicted offset of each key point and the size of a corresponding cell through two regression heads.
3. Candidate detection box Bbox generation
A two-stage Bbox generation mechanism is adopted. In the first stage, the key points at each scale are ranked by their predicted values and a certain number are kept (for example, the top 150 key points under KP1, the number being determined by the possible number of cells in the scene). In the second stage, triplets of key points are formed by traversal: the current central point is determined first; the positions of the top-left and bottom-right points are then estimated from the cell size regressed at that central point; finally, a search range is adaptively determined according to the cell size, and if a top-left and a bottom-right key point exist within the search range, they are retained and paired with the central point to form a triplet. The final Bbox is generated from each key point triplet.
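Stage one of the Bbox generation, ranking key points by predicted value and keeping the top k, can be sketched as follows (the function name and the returned (x, y, score) tuples are illustrative):

```python
import numpy as np

def topk_keypoints(heatmap, k):
    """Keep the k highest-scoring locations of a (H, W) heat map.

    Returns a list of (x, y, score) tuples, highest score first.
    """
    flat = heatmap.ravel()
    idx = np.argsort(flat)[::-1][:k]          # top-k flat indices
    ys, xs = np.unravel_index(idx, heatmap.shape)
    return list(zip(xs.tolist(), ys.tolist(), flat[idx].tolist()))
```

Stage two would then traverse the surviving center-type key points and pair each with same-scale corner key points inside its adaptive search range.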
4. Bbox selection under multi-scale based on weighted NMS
A weighted NMS approach may be employed here: the higher the scale of the feature map from which a Bbox is selected, the higher its weight; and the higher the confidence of a Bbox, the higher its weight.
5. Cell segmentation Module construction
The cell segmentation module respectively carries out feature interception on the feature map under each scale according to any cell detection frame obtained in the previous step to obtain intercepted features (C1, C2, C3 and C4) under each scale; starting from the truncated features (C4, S4) at the highest-order scale, firstly carrying out deconvolution with convolution kernel of 3 x 3 on the feature vector, reducing the number of layers from 1024 to 512, further merging the feature vector with the truncated features (C3) at the second highest-order scale, and then obtaining new truncated and fused features (S3) through two-dimensional convolution with convolution kernel of 1 x 1; and sequentially carrying out the operations until the intercepted features (C1) under the lowest order scale are connected in parallel, finishing by a 3-by-3 convolution kernel to obtain the final intercepted fusion features (S1), activating by a Sigmoid function, and obtaining the cell mask prediction result corresponding to the cell detection frame.
6. After the model is built and before large-scale data training is carried out, a smaller data set can be adopted for model pre-training, the sizes of all modules of the model are ensured to be consistent with the scene, and initial setting is carried out on all hyper-parameters, so that the model can be ensured to be converged.
S4, model tuning training and model selection
After the model is built, each parameter in the model needs to acquire an optimal value in one scene through training. During training, within one epoch, the method is divided into two steps, wherein the first step is to restrain the key point branches according to bbox and a central point extracted from the mask information of the cells and corresponding key point information, and the second step is to take the mask information of the cells and the extracted bbox as the input of the cell division branches to train the division branches.
During training, each hyper-parameter needs to be selected and adjusted, such as the optimizer, the learning-rate schedule, the image size, the batch size and the like, to ensure that the model neither overfits nor underfits during training. To avoid overfitting, dynamic image augmentation of the data set can be performed. To avoid underfitting, a more complex backbone network may be employed. In addition, local computing resources should be considered: whether to adopt multi-GPU parallel computing, whether the original images are too large, whether to adopt sliding-window training after splitting, and the like.
Evaluation indices of the model are then established; mAP and mIoU can be used as evaluation indices, and, for example, boundary IoU can be used as an evaluation index for cell edge segmentation. By continuously monitoring these indices on the test set, the model with the best index performance after a certain number of training iterations can be selected as the final application model. Preferably, to improve the generalization capability of the model, multi-fold cross validation may be considered: the training set can be further divided into several training and tuning subsets, and a more robust and generalized model is finally obtained through model merging (model ensembling).
The above model training process can be implemented under a Pytorch framework.
S5, model application and cell segmentation
The model's application differs from its training and can be summarized in three steps: first, a Bbox list is obtained through the multi-scale key point regression network; second, the merging and selection of Bboxes is completed through weighted NMS to form the final cell detection frames; third, based on the final cell detection frames, instance segmentation of the cells is completed by the cell segmentation module to form the final output.
In order to ensure the performance of the model during application, after deployment is completed, the network input image and the image preprocessing adopted during training need to be ensured to be the same in practical application, and finally, the cell segmentation of each image in a practical scene is completed.
Evaluated on the test data set (174 cases), the segmentation effect of the model provided by the embodiment of the invention (right side of fig. 9) is better than that of the Mask-RCNN instance segmentation network (left side of fig. 9); the comparison of their main indices is shown in table 1:
TABLE 1 Comparison of the main indices

Index            Mask-RCNN    Ours
mIoU             0.86389      0.8929
Boundary_mIoU    0.29082      0.40291
Based on any of the above embodiments, fig. 10 is a schematic structural diagram of a cell segmentation apparatus based on key point and size regression according to an embodiment of the present invention, as shown in fig. 10, the apparatus includes: a keypoint regression unit 1010, a corner search unit 1020, a detection frame generation unit 1030, and a cell segmentation unit 1040.
The keypoint regression unit is used for performing feature extraction on a cell image to be segmented to obtain a feature map corresponding to the cell image to be segmented, and performing keypoint detection based on the feature map to obtain keypoint information and cell size information associated with the keypoint information; wherein any piece of keypoint information comprises the position information and type of a keypoint corresponding to a cell detection frame, and the cell size information represents the size information of the cell detection frame corresponding to the associated keypoint information;
the corner search unit is used for searching, within the search range of each piece of center point information among the keypoint information, for the corner point information corresponding to that center point information, taking the keypoint information other than the center points as search objects; wherein the search range of any piece of center point information is determined based on the cell size information associated with that center point information;
the detection frame generation unit is used for generating cell detection frames based on each piece of center point information and its corresponding corner point information;
and the cell segmentation unit is used for performing cell segmentation on the feature map based on the cell detection frames to obtain a cell segmentation result of the cell image to be segmented.
The apparatus provided by the embodiment of the invention performs keypoint detection based on the feature map to obtain each piece of keypoint information and the associated cell size information, and then jointly uses them to search for the corner point information of each piece of center point information, so that a corresponding cell detection frame is generated from each center point and its corner points. Because the cell size information constrains the size of the cell detection frame, and the keypoints of one cell detection frame are searched for only within this constrained range, keypoints belonging to the detection frames of different cells are not confused when cells are dense, avoiding segmentation failures and improving the accuracy of cell detection frame prediction. Cell segmentation is then performed according to each cell detection frame, which improves the segmentation performance for irregular cells in dense scenes.
Based on any of the embodiments, the searching, within the search range of each piece of center point information among the keypoint information, for the corner point information corresponding to each piece of center point information, taking the keypoint information other than the center points as search objects, specifically includes:
determining an initial search box for any piece of center point information based on that center point information and the cell size information associated with it; wherein the initial search box is a rectangular box centered on that center point information and sized according to the associated cell size information;
determining the search range of that center point information by taking each corner point of its initial search box as a search center and a preset threshold as the search radius;
and searching for the corner point information corresponding to that center point information based on the type of each piece of keypoint information within its search range.
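The corner search described above can be sketched as follows. This is a hedged illustration rather than the patented implementation: the corner type names (`tl`, `tr`, `bl`, `br`), the Euclidean search radius, and picking the candidate closest to the expected corner are all assumptions introduced for the example.

```python
import math

def find_corners(center, size, keypoints, radius=3.0):
    """For one predicted center, search the detected corner keypoints near
    the corners of the size-regressed initial search box."""
    cx, cy = center
    w, h = size
    # corners of the initial search box, which serve as search centers
    expected = {
        "tl": (cx - w / 2, cy - h / 2),
        "tr": (cx + w / 2, cy - h / 2),
        "bl": (cx - w / 2, cy + h / 2),
        "br": (cx + w / 2, cy + h / 2),
    }
    matched = {}
    for name, (ex, ey) in expected.items():
        # keep only keypoints of the matching type inside the search radius
        candidates = [(x, y) for (x, y, t) in keypoints
                      if t == name and math.hypot(x - ex, y - ey) <= radius]
        if candidates:  # take the candidate closest to the expected corner
            matched[name] = min(candidates,
                                key=lambda p: math.hypot(p[0] - ex, p[1] - ey))
    return matched

kps = [(8.5, 4.2, "tl"), (21.0, 4.0, "tr"), (9.0, 16.1, "bl"),
       (20.6, 15.8, "br"), (40.0, 40.0, "tl")]  # last point: another cell
corners = find_corners(center=(15, 10), size=(12, 12), keypoints=kps)
print(sorted(corners))  # ['bl', 'br', 'tl', 'tr']
```

Note how the far-away `tl` point at (40, 40) falls outside every search radius and is ignored, which is exactly the dense-scene confusion the size constraint is meant to prevent.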
Based on any of the above embodiments, the performing feature extraction on the cell image to be segmented to obtain the corresponding feature map, and performing keypoint detection based on the feature map to obtain keypoint information and cell size information associated with the keypoint information, specifically includes:
performing multi-scale feature extraction on the cell image to be segmented to obtain a feature map at each scale;
superposing the up-sampled feature map output by the up-sampling module of the previous scale with the feature map of that scale, and inputting the result to the up-sampling module of the current scale to obtain the up-sampled feature map output by the up-sampling module of the current scale; wherein the input of the first up-sampling module is the feature map at the highest-order scale;
and performing keypoint detection and cell size regression based on the up-sampled feature map output by the up-sampling module of each scale, to obtain the keypoint information and the associated cell size information at each scale.
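The decoder chain described above (up-sample the previous module's output, superpose the same-scale feature map, pass the result onward) can be illustrated with a toy sketch. The real up-sampling modules are learned layers; nearest-neighbour up-sampling and element-wise addition as the superposition are assumptions made only to show the data flow:

```python
def upsample2x(fm):
    # nearest-neighbour 2x up-sampling of a 2-D feature map (list of lists)
    out = []
    for row in fm:
        wide = [v for v in row for _ in (0, 1)]
        out.append(wide)
        out.append(list(wide))
    return out

def superpose(fm_a, fm_b):
    # element-wise addition of two same-shape feature maps
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(fm_a, fm_b)]

# toy pyramid: highest-order (coarsest) scale first, as in the text
pyramid = [
    [[1.0]],                    # 1x1 map at the highest-order scale
    [[0.5, 0.5], [0.5, 0.5]],   # 2x2 map at the next scale
]
x = pyramid[0]                  # input of the first up-sampling module
for fm in pyramid[1:]:
    x = superpose(upsample2x(x), fm)  # up-sample, then add same-scale map
print(x)  # [[1.5, 1.5], [1.5, 1.5]]
```

Each intermediate `x` is what the keypoint and size-regression heads would consume at that scale.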
Based on any one of the embodiments, the generating cell detection frames based on each piece of center point information and its corresponding corner point information specifically includes:
generating candidate detection frames at each scale based on each piece of center point information at that scale and its corresponding corner point information;
fusing the candidate detection frames at each scale by weighted non-maximum suppression to obtain the cell detection frames; wherein a candidate detection frame at a higher-order scale is given a larger weight, as is a candidate detection frame with higher confidence.
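A minimal sketch of weighted non-maximum suppression in the spirit described above: overlapping candidates are merged into a weight-averaged box rather than simply discarded. The patent only states that scale order and confidence both raise a candidate's weight, so collapsing them into a single scalar weight per box is an assumption of this example:

```python
def iou(a, b):
    # boxes as (x1, y1, x2, y2)
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def weighted_nms(boxes, thr=0.5):
    """boxes: list of (box, weight); overlapping candidates are fused into a
    weighted-average box, starting from the highest-weight candidate."""
    boxes = sorted(boxes, key=lambda bw: -bw[1])
    out = []
    while boxes:
        base, w = boxes.pop(0)
        group = [(base, w)] + [bw for bw in boxes if iou(base, bw[0]) > thr]
        boxes = [bw for bw in boxes if iou(base, bw[0]) <= thr]
        total = sum(wt for _, wt in group)
        out.append(tuple(sum(b[i] * wt for b, wt in group) / total
                         for i in range(4)))
    return out

cands = [((10, 10, 20, 20), 0.9),   # higher-order scale, high confidence
         ((11, 11, 21, 21), 0.3),   # overlapping low-weight candidate
         ((40, 40, 50, 50), 0.8)]   # a separate cell
fused = weighted_nms(cands)
print(fused)  # first box pulled toward the 0.9-weight candidate
```

The first two candidates (IoU ≈ 0.68) merge into (10.25, 10.25, 20.25, 20.25), dominated by the heavier box; the third survives untouched.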
Based on any of the above embodiments, the performing keypoint detection based on the up-sampled feature map output by the up-sampling module of each scale to obtain the keypoint information at each scale specifically includes:
performing keypoint detection on the up-sampled feature map output by the up-sampling module of each scale using the keypoint heat map acquisition branch of the keypoint prediction module, to obtain a keypoint heat map at each scale;
determining the keypoint offset at each scale from the up-sampled feature map output by the up-sampling module of each scale using the offset prediction branch of the keypoint prediction module; wherein the keypoint offset at any scale represents the coordinate offset of each keypoint at that scale when mapped from the keypoint heat map at that scale back to the cell image to be segmented;
and determining the keypoint information at each scale based on the keypoint heat map and the keypoint offset at that scale.
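Combining a heat map with a per-pixel offset is a standard way to recover sub-pixel keypoint coordinates. The sketch below picks local maxima above a score threshold and refines them with the predicted offset before mapping back to image coordinates; the threshold value, the 4-neighbour maximum test, and the stride are assumptions, since the patent does not fix these details:

```python
def decode_keypoints(heatmap, offsets, stride=4, thr=0.5):
    """Pick heat-map peaks above `thr` and refine them with the predicted
    per-pixel offset before mapping back to input-image coordinates."""
    points = []
    h, w = len(heatmap), len(heatmap[0])
    for y in range(h):
        for x in range(w):
            score = heatmap[y][x]
            if score < thr:
                continue
            # local-maximum check against the 4-neighbours
            neigh = [heatmap[j][i] for j, i in
                     ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                     if 0 <= j < h and 0 <= i < w]
            if all(score >= n for n in neigh):
                dx, dy = offsets[y][x]
                points.append(((x + dx) * stride, (y + dy) * stride, score))
    return points

hm = [[0.1, 0.2, 0.1],
      [0.2, 0.9, 0.2],
      [0.1, 0.2, 0.1]]
off = [[(0.0, 0.0)] * 3 for _ in range(3)]
off[1][1] = (0.25, -0.25)      # sub-pixel refinement at the peak
pts = decode_keypoints(hm, off)
print(pts)  # [(5.0, 3.0, 0.9)]
```

Without the offset, the peak at cell (1, 1) would map to (4, 4) in image coordinates; the offset recovers the fractional position lost to the heat map's coarse grid.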
In any of the above embodiments, the loss function of the keypoint prediction module during training comprises a keypoint heat map loss and an offset loss;
wherein the keypoint heat map loss characterizes the difference between the sample keypoint heat map and the activation result of the sample keypoints determined from the annotation results of the sample cell images; the sample keypoint heat map comprises the keypoints in the sample cell image obtained by the offset prediction branch prediction;
the offset loss characterizes an error of a sample keypoint offset predicted by the offset prediction branch.
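The two loss terms can be illustrated concretely. The patent does not give formulas, so the CenterNet-style penalty-reduced focal loss for the heat map and the mean-absolute-error offset loss sketched below are assumptions about one common realisation of "heat map loss plus offset loss":

```python
import math

def focal_heatmap_loss(pred, gt, alpha=2.0, beta=4.0):
    """Penalty-reduced focal loss over flattened heat maps: positives are
    pixels with gt == 1; near-positive negatives are down-weighted by
    (1 - gt)^beta (the alpha/beta values are conventional, not from the
    patent)."""
    loss, n_pos = 0.0, 0
    for p, g in zip(pred, gt):
        p = min(max(p, 1e-6), 1 - 1e-6)  # clamp for numerical safety
        if g == 1.0:
            loss -= (1 - p) ** alpha * math.log(p)
            n_pos += 1
        else:
            loss -= (1 - g) ** beta * p ** alpha * math.log(1 - p)
    return loss / max(n_pos, 1)

def offset_l1_loss(pred, gt):
    # mean absolute error of the predicted sub-pixel offsets
    return sum(abs(p - g) for p, g in zip(pred, gt)) / len(pred)

print(round(focal_heatmap_loss([0.9, 0.1], [1.0, 0.0]), 4))
print(offset_l1_loss([0.2, -0.1], [0.25, 0.0]))  # 0.075
```

The heat map term drives peak sharpness, while the offset term penalizes the residual quantization error between heat-map cells and true keypoint positions.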
Based on any of the above embodiments, the performing cell segmentation on the feature map based on the cell detection frames to obtain the cell segmentation result of the cell image to be segmented specifically includes:
cropping the feature map at each scale according to any cell detection frame to obtain a cropped feature at each scale;
up-sampling the current cropped fusion feature and fusing it with the cropped feature at the corresponding scale to obtain the next cropped fusion feature; wherein the first cropped fusion feature is the cropped feature at the highest-order scale;
performing cell mask prediction based on the last cropped fusion feature to obtain the cell mask prediction result corresponding to that cell detection frame;
and determining the cell segmentation result based on the cell mask prediction results corresponding to all the cell detection frames.
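The per-detection-frame crop-and-fuse procedure above can be sketched with a toy two-scale pyramid. The original's "interception" is read here as cropping the feature map to the detection frame; element-wise addition as the fusion and a plain threshold standing in for the learned mask prediction head are assumptions of this illustration:

```python
def crop(fm, box):
    # crop a 2-D feature map to box = (x1, y1, x2, y2) at that scale
    x1, y1, x2, y2 = box
    return [row[x1:x2] for row in fm[y1:y2]]

def upsample2x(fm):
    out = []
    for row in fm:
        wide = [v for v in row for _ in (0, 1)]
        out.append(wide)
        out.append(list(wide))
    return out

def fuse(a, b):
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

# toy pyramid, highest-order (coarsest) scale first; the detection frame is
# expressed at each scale's own resolution
coarse = [[0.2, 0.8], [0.8, 0.2]]
fine = [[0.1] * 4 for _ in range(4)]
x = crop(coarse, (0, 0, 2, 2))           # first fused feature: coarsest crop
x = fuse(upsample2x(x), crop(fine, (0, 0, 4, 4)))
# mask prediction head, sketched as a threshold on the last fused feature
mask = [[1 if v > 0.5 else 0 for v in row] for row in x]
print(mask)
```

The final binary map is the cell mask prediction for this one detection frame; repeating the procedure per frame and pasting the masks back yields the overall segmentation result.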
According to any of the above embodiments, the cell segmentation is performed by a cell segmentation module;
wherein the loss function of the cell segmentation module during training comprises a mask loss and an edge loss; the mask loss characterizes the difference between the sample cell mask prediction result predicted by the cell segmentation module and the cell mask labeling result; the edge loss characterizes the difference between the sample cell edge prediction result and the cell edge labeling result; the sample cell edge prediction result is determined based on the sample cell mask prediction result, and the cell edge labeling result is determined based on the cell mask labeling result.
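Since both edge maps are derived from masks, the combined loss can be sketched end-to-end. How the patent derives edges and which loss functions it uses are not specified, so treating a foreground pixel with a background 4-neighbour as an edge pixel, and using a Dice loss for both terms, are assumptions of this example:

```python
def edge_of(mask):
    """Binary edge map of a binary mask: a foreground pixel is an edge pixel
    if any 4-neighbour is background or outside the image."""
    h, w = len(mask), len(mask[0])
    edge = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if not mask[y][x]:
                continue
            neigh = [(y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)]
            if any(not (0 <= j < h and 0 <= i < w) or not mask[j][i]
                   for j, i in neigh):
                edge[y][x] = 1
    return edge

def dice_loss(pred, gt):
    # 1 - Dice overlap of two binary maps (used here for both loss terms)
    p = [v for row in pred for v in row]
    g = [v for row in gt for v in row]
    inter = sum(a * b for a, b in zip(p, g))
    return 1 - 2 * inter / ((sum(p) + sum(g)) or 1)

gt_mask = [[0, 0, 0, 0], [0, 1, 1, 0], [0, 1, 1, 0], [0, 0, 0, 0]]
pred_mask = [[0, 0, 0, 0], [0, 1, 1, 0], [0, 1, 0, 0], [0, 0, 0, 0]]
total = (dice_loss(pred_mask, gt_mask)
         + dice_loss(edge_of(pred_mask), edge_of(gt_mask)))
print(round(total, 3))  # 2/7 ≈ 0.286
```

The extra edge term explicitly penalizes boundary errors that the area-dominated mask term barely notices, which matches the Boundary_mIoU improvement reported in Table 1.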
Fig. 11 illustrates a physical structure diagram of an electronic device. As shown in Fig. 11, the electronic device may include: a processor (processor) 1110, a communication interface (Communications Interface) 1120, a memory (memory) 1130, and a communication bus 1140, wherein the processor 1110, the communication interface 1120, and the memory 1130 communicate with each other via the communication bus 1140. The processor 1110 may invoke logic instructions in the memory 1130 to perform a cell segmentation method based on keypoint and size regression, the method comprising: performing feature extraction on a cell image to be segmented to obtain a feature map corresponding to the cell image to be segmented, and performing keypoint detection based on the feature map to obtain keypoint information and cell size information associated with the keypoint information, wherein any piece of keypoint information comprises the position information and type of a keypoint corresponding to a cell detection frame, and the cell size information represents the size information of the cell detection frame corresponding to the associated keypoint information; searching, within the search range of each piece of center point information among the keypoint information, for the corner point information corresponding to that center point information, taking the keypoint information other than the center points as search objects, wherein the search range of any piece of center point information is determined based on the cell size information associated with that center point information; generating cell detection frames based on each piece of center point information and its corresponding corner point information; and performing cell segmentation on the feature map based on the cell detection frames to obtain a cell segmentation result of the cell image to be segmented.
In addition, the logic instructions in the memory 1130 may be implemented as software functional units and stored in a computer-readable storage medium when sold or used as a stand-alone product. Based on this understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
In another aspect, the present invention also provides a computer program product comprising a computer program stored on a non-transitory computer-readable storage medium. When the computer program is executed by a processor, the computer performs the cell segmentation method based on keypoint and size regression provided by the methods above, the method comprising: performing feature extraction on a cell image to be segmented to obtain a feature map corresponding to the cell image to be segmented, and performing keypoint detection based on the feature map to obtain keypoint information and cell size information associated with the keypoint information, wherein any piece of keypoint information comprises the position information and type of a keypoint corresponding to a cell detection frame, and the cell size information represents the size information of the cell detection frame corresponding to the associated keypoint information; searching, within the search range of each piece of center point information among the keypoint information, for the corner point information corresponding to that center point information, taking the keypoint information other than the center points as search objects, wherein the search range of any piece of center point information is determined based on the cell size information associated with that center point information; generating cell detection frames based on each piece of center point information and its corresponding corner point information; and performing cell segmentation on the feature map based on the cell detection frames to obtain a cell segmentation result of the cell image to be segmented.
In yet another aspect, the present invention also provides a non-transitory computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program implements the cell segmentation method based on keypoint and size regression provided by the methods above, the method comprising: performing feature extraction on a cell image to be segmented to obtain a feature map corresponding to the cell image to be segmented, and performing keypoint detection based on the feature map to obtain keypoint information and cell size information associated with the keypoint information, wherein any piece of keypoint information comprises the position information and type of a keypoint corresponding to a cell detection frame, and the cell size information represents the size information of the cell detection frame corresponding to the associated keypoint information; searching, within the search range of each piece of center point information among the keypoint information, for the corner point information corresponding to that center point information, taking the keypoint information other than the center points as search objects, wherein the search range of any piece of center point information is determined based on the cell size information associated with that center point information; generating cell detection frames based on each piece of center point information and its corresponding corner point information; and performing cell segmentation on the feature map based on the cell detection frames to obtain a cell segmentation result of the cell image to be segmented.
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware. With this understanding in mind, the above-described technical solutions may be embodied in the form of a software product, which can be stored in a computer-readable storage medium such as ROM/RAM, magnetic disk, optical disk, etc., and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the methods described in the embodiments or some parts of the embodiments.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced, and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (9)

1. A cell segmentation method based on key point and size regression is characterized by comprising the following steps:
performing feature extraction on a cell image to be segmented to obtain a feature map corresponding to the cell image to be segmented, and performing key point detection based on the feature map to obtain key point information and cell size information associated with the key point information; wherein, any key point information comprises the position information and the type of the key point corresponding to the cell detection frame; the cell size information represents the size information of the cell detection frame corresponding to the associated key point information;
searching, within the search range of each piece of center point information among the keypoint information, for the corner point information corresponding to each piece of center point information, taking the keypoint information other than the center points as search objects; wherein the search range of any piece of center point information is determined based on the cell size information associated with that center point information;
generating a cell detection frame based on the central point information and the corner point information corresponding to the central point information;
based on the cell detection frame, carrying out cell segmentation on the characteristic diagram to obtain a cell segmentation result of the cell image to be segmented;
the method comprises the following steps of extracting features of a cell image to be segmented to obtain a feature map corresponding to the cell image to be segmented, detecting key points based on the feature map to obtain key point information and cell size information related to the key point information, and specifically comprises the following steps:
performing multi-scale feature extraction on a cell image to be segmented to obtain feature maps under all scales;
after an up-sampling feature map output by an up-sampling module of a previous scale is overlapped with a feature map of the previous scale, the feature map is input to an up-sampling module of a current scale to obtain an up-sampling feature map output by the up-sampling module of the current scale; the input of the first up-sampling module is a feature map under the highest-order scale;
and performing key point detection and cell size regression based on the up-sampling characteristic diagram output by the up-sampling module of each scale to obtain key point information and cell size information associated with the key point information under each scale.
2. The cell segmentation method based on keypoint and size regression according to claim 1, wherein the searching, within the search range of each piece of center point information among the keypoint information, for the corner point information corresponding to each piece of center point information, taking the keypoint information other than the center points as search objects, specifically comprises:
determining an initial search box for any piece of center point information based on that center point information and the cell size information associated with it; wherein the initial search box is a rectangular box centered on that center point information and sized according to the associated cell size information;
determining the search range of that center point information by taking each corner point of its initial search box as a search center and a preset threshold as the search radius;
and searching for the corner point information corresponding to that center point information based on the type of each piece of keypoint information within its search range.
3. The cell segmentation method based on keypoint and size regression according to claim 1, wherein the generating a cell detection frame based on the center point information and the corresponding corner point information specifically comprises:
generating candidate detection frames at each scale based on each piece of center point information at that scale and its corresponding corner point information;
fusing the candidate detection frames at each scale by weighted non-maximum suppression to obtain the cell detection frames at each scale; wherein a candidate detection frame at a higher-order scale is given a larger weight, as is a candidate detection frame with higher confidence.
4. The cell segmentation method based on keypoint and size regression according to claim 1, wherein the performing keypoint detection based on the up-sampled feature maps output by the up-sampling modules of each scale specifically comprises:
performing keypoint detection on the up-sampled feature map output by the up-sampling module of each scale using the keypoint heat map acquisition branch of the keypoint prediction module, to obtain a keypoint heat map at each scale;
determining the keypoint offset at each scale from the up-sampled feature map output by the up-sampling module of each scale using the offset prediction branch of the keypoint prediction module; wherein the keypoint offset at any scale represents the coordinate offset of each keypoint at that scale when mapped from the keypoint heat map at that scale back to the cell image to be segmented;
and determining the keypoint information at each scale based on the keypoint heat map and the keypoint offset at that scale.
5. The method of claim 4, wherein the loss function of the keypoint prediction module during training comprises keypoint heat map loss and offset loss;
wherein the keypoint heat map loss characterizes a difference between activation results of the sample keypoint heat map and sample keypoints determined based on annotation results of the sample cell images; the sample key point heat map comprises key points in the sample cell image obtained by the offset prediction branch prediction;
the offset loss characterizes an error of a sample keypoint offset predicted by the offset prediction branch.
6. The cell segmentation method based on keypoint and size regression according to claim 1, wherein the performing cell segmentation on the feature map based on the cell detection frame to obtain a cell segmentation result of the cell image to be segmented specifically comprises:
cropping the feature map at each scale according to any cell detection frame to obtain a cropped feature at each scale;
up-sampling the current cropped fusion feature and fusing it with the cropped feature at the corresponding scale to obtain the next cropped fusion feature; wherein the first cropped fusion feature is the cropped feature at the highest-order scale;
performing cell mask prediction based on the last cropped fusion feature to obtain the cell mask prediction result corresponding to that cell detection frame;
and determining the cell segmentation result based on the cell mask prediction results corresponding to all the cell detection frames.
7. The method of claim 6, wherein the cell segmentation is performed by a cell segmentation module;
wherein the loss function of the cell segmentation module during training comprises a mask loss and an edge loss; the mask loss characterizes the difference between the sample cell mask prediction result predicted by the cell segmentation module and the cell mask labeling result; the edge loss characterizes the difference between the sample cell edge prediction result and the cell edge labeling result; the sample cell edge prediction result is determined based on the sample cell mask prediction result, and the cell edge labeling result is determined based on the cell mask labeling result.
8. A cell segmentation apparatus based on key point and size regression, comprising:
the keypoint regression unit is used for performing feature extraction on a cell image to be segmented to obtain a feature map corresponding to the cell image to be segmented, and performing keypoint detection based on the feature map to obtain keypoint information and cell size information associated with the keypoint information; wherein any piece of keypoint information comprises the position information and type of a keypoint corresponding to a cell detection frame, and the cell size information represents the size information of the cell detection frame corresponding to the associated keypoint information;
the corner search unit is used for searching, within the search range of each piece of center point information among the keypoint information, for the corner point information corresponding to that center point information, taking the keypoint information other than the center points as search objects; wherein the search range of any piece of center point information is determined based on the cell size information associated with that center point information;
the detection frame generation unit is used for generating cell detection frames based on each piece of center point information and its corresponding corner point information;
the cell segmentation unit is used for performing cell segmentation on the feature map based on the cell detection frames to obtain a cell segmentation result of the cell image to be segmented;
wherein the performing feature extraction on a cell image to be segmented to obtain a feature map corresponding to the cell image to be segmented, and performing keypoint detection based on the feature map to obtain keypoint information and cell size information associated with the keypoint information, specifically comprises:
performing multi-scale feature extraction on the cell image to be segmented to obtain a feature map at each scale;
superposing the up-sampled feature map output by the up-sampling module of the previous scale with the feature map of that scale, and inputting the result to the up-sampling module of the current scale to obtain the up-sampled feature map output by the up-sampling module of the current scale; wherein the input of the first up-sampling module is the feature map at the highest-order scale;
and performing keypoint detection and cell size regression based on the up-sampled feature map output by the up-sampling module of each scale, to obtain the keypoint information and the associated cell size information at each scale.
9. A non-transitory computer-readable storage medium, on which a computer program is stored, wherein the computer program, when being executed by a processor, implements the steps of the method for cell segmentation based on key point and size regression according to any one of claims 1 to 7.
CN202210506262.XA 2022-05-11 2022-05-11 Cell segmentation method and device based on key point and size regression Active CN114639102B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210506262.XA CN114639102B (en) 2022-05-11 2022-05-11 Cell segmentation method and device based on key point and size regression


Publications (2)

Publication Number Publication Date
CN114639102A CN114639102A (en) 2022-06-17
CN114639102B true CN114639102B (en) 2022-07-22

Family

ID=81953005

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210506262.XA Active CN114639102B (en) 2022-05-11 2022-05-11 Cell segmentation method and device based on key point and size regression

Country Status (1)

Country Link
CN (1) CN114639102B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115330808B (en) * 2022-07-18 2023-06-20 广州医科大学 Segmentation-guided magnetic resonance image spine key parameter automatic measurement method
CN115063797B (en) * 2022-08-18 2022-12-23 珠海横琴圣澳云智科技有限公司 Fluorescence signal segmentation method and device based on weak supervised learning and watershed processing

Citations (4)

Publication number Priority date Publication date Assignee Title
CN110136065A (en) * 2019-05-15 2019-08-16 林伟阳 A kind of method for cell count based on critical point detection
CN110148126A (en) * 2019-05-21 2019-08-20 闽江学院 Blood leucocyte dividing method based on color component combination and contour fitting
CN113869246A (en) * 2021-09-30 2021-12-31 安徽大学 Wheat stripe rust germ summer spore microscopic image detection method based on improved CenterNet technology
CN113989758A (en) * 2021-10-26 2022-01-28 清华大学苏州汽车研究院(相城) Anchor guide 3D target detection method and device for automatic driving

Family Cites Families (4)

Publication number Priority date Publication date Assignee Title
US10621747B2 (en) * 2016-11-15 2020-04-14 Magic Leap, Inc. Deep learning system for cuboid detection
CN109784333B (en) * 2019-01-22 2021-09-28 中国科学院自动化研究所 Three-dimensional target detection method and system based on point cloud weighted channel characteristics
CN110110799B (en) * 2019-05-13 2021-11-16 广州锟元方青医疗科技有限公司 Cell sorting method, cell sorting device, computer equipment and storage medium
CN112164077B (en) * 2020-09-25 2023-12-29 陕西师范大学 Cell instance segmentation method based on bottom-up path enhancement


Non-Patent Citations (7)

Title
Combining keypoint-based and segment-based features for counting people in crowded scenes; M. Hashemzadeh et al.; Information Sciences; 2016-06-01; vol. 345; 199-216 *
Pedestrian detection method with low-dimensional features; Wen Tao et al.; Computer Engineering and Design; 2013-09-16 (No. 09); 182-186 *
Instance segmentation enhanced by keypoint features; Chen Shuaiyin; China Master's Theses Full-text Database (Information Science and Technology); 2021-01-15 (No. 1); I138-846 *
Research on segmentation of adherent stripe rust spore images based on optimized Fourier descriptors; Di Xinyao et al.; Computer Applications and Software; 2018-03-15 (No. 03); 199-204 *
Research on cervical cancer cell recognition based on deep learning; Yang Yiyao; China Master's Theses Full-text Database (Medicine and Health Sciences); 2022-03-15 (No. 3); E068-42 *
On-tree apple detection model based on a lightweight anchor-free deep convolutional neural network; Xia Xue et al.; Smart Agriculture; 2020-03-31 (No. 01); 107-118 *
Research and implementation of dynamic real-time face recognition in complex scenes; Wang Haojie; China Master's Theses Full-text Database (Information Science and Technology); 2022-02-15 (No. 2); I138-701 *

Also Published As

Publication number Publication date
CN114639102A (en) 2022-06-17

Similar Documents

Publication Publication Date Title
CN110991311B (en) Target detection method based on dense connection deep network
CN114639102B (en) Cell segmentation method and device based on key point and size regression
CN110298321B (en) Road blocking information extraction method based on deep learning image classification
CN110163213B (en) Remote sensing image segmentation method based on disparity map and multi-scale depth network model
CN109993040A (en) Text recognition method and device
CN110516541B (en) Text positioning method and device, computer readable storage medium and computer equipment
CN112819821B (en) Cell nucleus image detection method
CN113487610B (en) Herpes image recognition method and device, computer equipment and storage medium
CN112132827A (en) Pathological image processing method and device, electronic equipment and readable storage medium
WO2022134354A1 (en) Vehicle loss detection model training method and apparatus, vehicle loss detection method and apparatus, and device and medium
CN117156442B (en) Cloud data security protection method and system based on 5G network
CN109165654B (en) Training method of target positioning model and target positioning method and device
CN114862861B (en) Lung lobe segmentation method and device based on few-sample learning
CN113240623A (en) Pavement disease detection method and device
CN111177135B (en) Landmark-based data filling method and device
CN111027551B (en) Image processing method, apparatus and medium
CN117173145A (en) Method, device, equipment and storage medium for detecting surface defects of power equipment
CN112037173A (en) Chromosome detection method and device and electronic equipment
CN117058079A (en) Thyroid imaging image automatic diagnosis method based on improved ResNet model
CN110889418A (en) Gas contour identification method
CN116778164A (en) Semantic segmentation method for improving deep V < 3+ > network based on multi-scale structure
CN116246161A (en) Method and device for identifying target fine type of remote sensing image under guidance of domain knowledge
CN114463300A (en) Steel surface defect detection method, electronic device, and storage medium
CN114882292B (en) Remote sensing image ocean target identification method based on cross-sample attention mechanism graph neural network
CN117523205B (en) Segmentation and identification method for few-sample ki67 multi-category cell nuclei

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant