CN115601749A - Pathological image classification method and image classification device based on characteristic peak map - Google Patents


Info

Publication number
CN115601749A
CN115601749A (application CN202211566089.9A)
Authority
CN
China
Prior art keywords
feature extraction
pathological
target image
feature
cell type
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202211566089.9A
Other languages
Chinese (zh)
Other versions
CN115601749B (en)
Inventor
姚沁玥
汪进
陈睿
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Severson Guangzhou Medical Technology Service Co ltd
Original Assignee
Severson Guangzhou Medical Technology Service Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Severson Guangzhou Medical Technology Service Co ltd filed Critical Severson Guangzhou Medical Technology Service Co ltd
Priority to CN202211566089.9A priority Critical patent/CN115601749B/en
Publication of CN115601749A publication Critical patent/CN115601749A/en
Application granted granted Critical
Publication of CN115601749B publication Critical patent/CN115601749B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G06V 20/698 Microscopic objects, e.g. biological cells or cellular parts: matching; classification
    • G06T 7/0012 Image analysis: biomedical image inspection
    • G06V 10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; connectivity analysis, e.g. of connected components
    • G06V 10/764 Recognition or understanding using pattern recognition or machine learning: classification, e.g. of video objects
    • G06V 10/82 Recognition or understanding using neural networks
    • G06V 20/695 Microscopic objects: preprocessing, e.g. image segmentation
    • G06T 2207/10056 Image acquisition modality: microscopic image
    • G06T 2207/20021 Dividing image into blocks, subimages or windows
    • G06T 2207/20081 Training; learning
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G06V 2201/03 Recognition of patterns in medical or anatomical images
    • G06V 2201/07 Target detection

Abstract

The application discloses a pathological image classification method, an image classification device, and a medium based on a feature peak map. The method includes: acquiring a pathological image and cropping it with a sliding window to obtain a plurality of target image blocks; performing feature extraction on the target image blocks with a feature extraction model to obtain a lesion cell type, a confidence corresponding to the lesion cell type, and a feature extraction map; determining feature peak maps according to the ranking of the confidences and the number of feature extraction maps; performing feature extraction on the feature peak maps to obtain lesion cell type features; performing feature extraction on the lesion cell type features and concatenating the resulting features to obtain a target image feature; and obtaining a classification result of the pathological image from the target image feature. Because the target image feature is obtained after multiple rounds of automatic feature extraction on the target image blocks, it has stronger representation capability, which improves the accuracy of the classification result.

Description

Pathological image classification method and image classification device based on characteristic peak map
Technical Field
The present application relates to the field of image classification technology, and in particular to a pathological image classification method, an image classification device, and a medium based on a feature peak map.
Background
Applying computer vision to cytopathology-assisted diagnosis systems can reduce the workload of examiners by ruling out negatives and screening out positives, improving the efficiency of Whole Slide Image (WSI) analysis. However, because a WSI is very large, how to refine the features of the whole image into better feature information, and thereby improve classification accuracy, is a technical problem that urgently needs to be solved.
Disclosure of Invention
The embodiments of the present application provide a pathological image classification method, an image classification device, and a medium based on a feature peak map, which obtain target image features with stronger representation capability and thereby improve classification accuracy.
In a first aspect, an embodiment of the present application provides a pathological image classification method, including:
acquiring a pathological image, and cropping the pathological image with a sliding window to obtain a plurality of target image blocks;
performing feature extraction on the target image blocks with a trained feature extraction model to obtain a lesion cell type, a confidence corresponding to the lesion cell type, and a feature extraction map, wherein the feature extraction model is trained on preset lesion cell data;
sorting the confidences corresponding to the same lesion cell type, and determining feature peak maps according to the ranking of the confidences and the number of feature extraction maps;
performing feature extraction on the feature peak maps to obtain lesion cell type features;
performing feature extraction on the lesion cell type features, and concatenating the resulting features to obtain a target image feature;
and obtaining a classification result of the pathological image from the target image feature.
In a second aspect, an embodiment of the present application further provides an image classification apparatus, including: a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the pathological image classification method described above when executing the computer program.
In a third aspect, embodiments of the present application further provide a computer-readable storage medium storing computer-executable instructions for performing the pathological image classification method described above.
An embodiment of the present application includes: acquiring a pathological image, and cropping it with a sliding window to obtain a plurality of target image blocks; performing feature extraction on the target image blocks with a trained feature extraction model to obtain a lesion cell type, a confidence corresponding to the lesion cell type, and a feature extraction map, wherein the feature extraction model is trained on preset lesion cell data; sorting the confidences corresponding to the same lesion cell type, and determining feature peak maps according to the ranking of the confidences and the number of feature extraction maps; performing feature extraction on the feature peak maps to obtain lesion cell type features; performing feature extraction on the lesion cell type features, and concatenating the resulting features to obtain a target image feature; and obtaining a classification result of the pathological image from the target image feature. Because the target image feature is obtained after multiple rounds of automatic feature extraction on the target image blocks, it has stronger representation capability, which improves the accuracy of the classification result.
Drawings
Fig. 1 is a flowchart of a pathological image classification method according to an embodiment of the present application;
FIG. 2 is a flowchart of a specific method of step S150 in FIG. 1;
FIG. 3 is a flowchart of a specific method of step S130 in FIG. 1;
fig. 4 is a flowchart of a pathological image classification method according to another embodiment of the present application;
FIG. 5 is a flowchart of a specific method of step S120 of FIG. 1;
FIG. 6 is a flowchart illustrating a specific method of step S520 in FIG. 5;
FIG. 7 is a flowchart of a specific method of step S520 in FIG. 5;
FIG. 8 is a flowchart of a specific method of step S140 in FIG. 1;
fig. 9 is a schematic structural diagram of an image classification apparatus according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
It should be noted that, although a logical order is illustrated in the flowcharts, in some cases, the steps illustrated or described may be performed in an order different from that in the flowcharts. The terms first, second and the like in the description and in the claims, and the drawings described above, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order.
In each embodiment of the present application, when data related to the characteristics of a target object, such as attribute information or an attribute information set of the target object (for example, a user), is processed, the permission or consent of the target object is obtained first, and the data is collected, used, and processed in compliance with the relevant laws, regulations, and standards of the relevant countries and regions. In addition, when an embodiment of the present application needs to acquire attribute information of the target object, individual permission or consent may be obtained through a pop-up window or by jumping to a confirmation page, and only after such permission or consent is explicitly obtained is the data of the target object necessary for the normal operation of the embodiment acquired.
Applying computer vision to cytopathology-assisted diagnosis systems can reduce the workload of examiners by ruling out negatives and screening out positives, improving the efficiency of Whole Slide Image (WSI) analysis. However, because a WSI is very large, how to refine the features of the whole image into better feature information, and thereby improve classification accuracy, is a technical problem that urgently needs to be solved.
At present, whole-slide classifiers are generally based either on manually constructed statistics (confidence, quantity, shape, and the like) of each cell type detected by an upstream model in each small image block of the whole slide, or on global features concatenated from whole-slide features produced by an upstream feature extractor. The information granularity of the former is too coarse and depends too heavily on the upstream detection model: when the confidence distribution of the detection model shifts due to data differences caused by staining, slide preparation, scanning, and other factors, the stability of the model drops sharply. Meanwhile, such coarse-grained information does not make good use of the image's feature information when fed into the classifier, so there is substantial information loss, which affects model performance.
The global features used in the latter approach are too redundant to minimize information loss: in a slide, even a typical positive slide, a large number of regions obviously contain no positive cells, and feeding this large amount of redundant information into the classifier makes it easier for the classifier to fit spurious features.
The application provides a pathological image classification method, an image classification device, and a medium based on a feature peak map, which include: acquiring a pathological image and cropping it with a sliding window to obtain a plurality of target image blocks; performing feature extraction on the target image blocks with a trained feature extraction model to obtain a lesion cell type, a confidence corresponding to the lesion cell type, and a feature extraction map, wherein the feature extraction model is trained on preset lesion cell data; sorting the confidences corresponding to the same lesion cell type, and determining feature peak maps according to the ranking of the confidences and the number of feature extraction maps; performing feature extraction on the feature peak maps to obtain lesion cell type features; performing feature extraction on the lesion cell type features, and concatenating the resulting features to obtain a target image feature; and obtaining a classification result of the pathological image from the target image feature. Because the target image feature is obtained after multiple rounds of automatic feature extraction on the target image blocks, it has stronger representation capability, which improves the accuracy of the classification result.
The embodiments of the present application will be further explained with reference to the drawings.
As shown in fig. 1, fig. 1 is a flowchart of a pathological image classification method provided in an embodiment of the present application; the classification method may include, but is not limited to, step S110, step S120, step S130, step S140, step S150, and step S160.
Step S110: and acquiring a pathological image, and performing sliding window cutting on the pathological image to obtain a plurality of target image blocks.
In this step, a pathological image (WSI) refers to any pathological image in the related art, such as a pleural or ascitic fluid cytopathology image or a breast cancer pathology image. A pathological image is acquired and cropped with a sliding window to obtain a plurality of target image blocks, which together can represent all the information of the pathological image. The target image blocks are obtained to facilitate obtaining a classification result of the pathological image in subsequent steps.
In another embodiment of the present application, sliding window cropping refers to sliding window cropping implemented by any technical means in the related art, for example by using a slidingbind function in OpenCV, so that the pathological image can be cropped into a plurality of target image blocks.
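The sliding-window cropping described above can be sketched as follows. This is a minimal Python illustration rather than the patent's implementation; the function names and the clamping of edge windows to the image boundary are assumptions.

```python
def _starts(size, window, stride):
    """Start offsets of windows along one axis; a final window is clamped
    to the edge so every pixel is covered by at least one block."""
    starts = list(range(0, max(size - window, 0) + 1, stride))
    if starts[-1] + window < size:
        starts.append(size - window)
    return starts

def sliding_window_boxes(height, width, window, stride):
    """Return (top, left, bottom, right) crop boxes for the target image
    blocks obtained by sliding a square window over the pathological image."""
    boxes = []
    for top in _starts(height, window, stride):
        for left in _starts(width, window, stride):
            boxes.append((top, left, top + window, left + window))
    return boxes

# A 1000x1000 region cut into 512-pixel blocks with a stride of 488:
blocks = sliding_window_boxes(1000, 1000, 512, 488)
```

Each box can then be used to slice the image array into one target image block; overlapping strides trade extra computation for fewer cells split across block borders.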
Step S120: performing feature extraction on the target image blocks with a trained feature extraction model to obtain a lesion cell type, a confidence corresponding to the lesion cell type, and a feature extraction map, wherein the feature extraction model is trained on preset lesion cell data.
In this step, the feature extraction model refers to any form of feature extraction model in the related art and is not specifically limited here. The trained feature extraction model is obtained by training on preset lesion cell data, which includes lesion cell type information. Feature extraction is performed on each target image block with the trained model to obtain the lesion cell type, the confidence corresponding to that type, and a feature extraction map, which represents the feature information extracted from the target image block.
In another embodiment of the present application, the feature extraction model outputs the lesion cell type and the confidence corresponding to that type; it may be a classification model, an object detection network, or a segmentation model, as long as the lesion cell type and its confidence can be obtained, and is not specifically limited here.
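Since the patent leaves the model architecture open, the per-block output described above (lesion cell type, confidence, feature extraction map) might be structured as follows; all names and the example label are hypothetical, and the model itself is deliberately left abstract.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class BlockResult:
    """Output of the feature extraction model for one target image block."""
    lesion_type: str    # hypothetical label for the detected lesion cell type
    confidence: float   # confidence corresponding to that lesion cell type
    feature_map: list   # feature extraction map (flattened here for brevity)

def extract(block) -> List[BlockResult]:
    """Placeholder: a real classification, detection, or segmentation
    model would produce the BlockResult records here."""
    raise NotImplementedError
```

Keeping the three outputs together per block makes the later grouping and ranking by lesion cell type straightforward.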
Step S130: sorting the confidences corresponding to the same lesion cell type, and determining the feature peak maps according to the ranking of the confidences and the number of feature extraction maps.
In this step, a feature peak map is a feature map determined according to the ranking of the confidences and the number of feature extraction maps. There may be multiple lesion cell types; the confidences corresponding to the same lesion cell type are sorted, so that the resulting ranking represents how each target image block corresponds to that lesion cell type. A feature extraction map is the feature map obtained from a target image block, and the feature peak maps are determined from the confidence ranking and the number of feature extraction maps, so that the selected feature peak maps are the feature maps with the highest confidences for a given lesion cell type. The feature peak maps are obtained to facilitate obtaining the lesion cell type features in subsequent steps.
In another embodiment of the present application, a target image block may contain multiple lesion cell types, each with its own confidence; the confidences obtained from all target image blocks for the same lesion cell type are sorted, so that the feature peak maps characterizing that lesion cell type can be determined.
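The per-type confidence sorting of step S130 can be sketched as follows; a minimal Python illustration under the assumption that each block's detection is a plain dict with hypothetical keys.

```python
from collections import defaultdict

def sort_confidences_by_type(block_results):
    """Group per-block detections by lesion cell type and sort each
    group by confidence in descending order, giving the ranking from
    which the feature peak maps are determined."""
    groups = defaultdict(list)
    for r in block_results:
        groups[r["type"]].append(r)
    for group in groups.values():
        group.sort(key=lambda r: r["confidence"], reverse=True)
    return dict(groups)
```

The head of each sorted group then holds the feature extraction maps most confidently associated with that lesion cell type.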
Step S140: performing feature extraction on the feature peak maps to obtain the lesion cell type features.
In this step, feature extraction refers to feature extraction performed by any technical means in the related art. Since the feature peak maps are confirmed according to the confidence ranking within the same lesion cell type, performing feature extraction on them yields the lesion cell type features, which facilitate obtaining the target image feature in subsequent steps.
Step S150: performing feature extraction on the lesion cell type features, and concatenating the resulting features to obtain the target image feature.
In this step, feature extraction refers to feature extraction performed by any technical means in the related art; the extraction applied to the lesion cell type features may use the same method as the extraction applied to the feature peak maps, so that the resulting target image feature better represents the lesion cell information in the pathological image, improving its representation capability.
In another embodiment of the present application, there may be any number of lesion cell type features. Feature extraction is performed on each lesion cell type feature separately, and the extracted features are then connected to obtain the target image feature. Concatenation refers to any manner of connecting features in the related art, for example using a concatenate function.
Step S160: obtaining a classification result of the pathological image according to the target image feature.
In this step, obtaining a classification result of the pathological image according to the target image feature means classifying the target image feature with any classification method or any form of classifier in the related art. Since the target image feature represents the lesion cell information in the pathological image with improved representation capability, the accuracy of the resulting classification can be improved.
In another embodiment of the present application, a fully connected layer may be used as the classifier: the target image feature is passed through a fully connected layer and an activation layer (for example, one formed by a softmax function) to obtain the classification result of the pathological image.
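The fully connected classifier head with a softmax activation layer mentioned above can be sketched as follows; a toy pure-Python version with hypothetical shapes, standing in for a real neural network layer.

```python
import math

def softmax(logits):
    """Numerically stable softmax turning logits into class probabilities."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def classify(target_image_feature, weights, biases):
    """One fully connected layer followed by softmax: `weights` holds one
    row per pathology class; returns (predicted class index, probabilities)."""
    logits = [sum(w * x for w, x in zip(row, target_image_feature)) + b
              for row, b in zip(weights, biases)]
    probs = softmax(logits)
    return max(range(len(probs)), key=probs.__getitem__), probs
```

In practice the weights would be learned jointly with the upstream feature extraction modules; the probabilities give the classification result of the pathological image.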
In this embodiment, the pathological image classification method of steps S110 to S160 is adopted: a pathological image is acquired and cropped with a sliding window to obtain a plurality of target image blocks; feature extraction is performed on the target image blocks with a trained feature extraction model to obtain the lesion cell type, the corresponding confidence, and a feature extraction map, the model being trained on preset lesion cell data; the confidences corresponding to the same lesion cell type are sorted, and the feature peak maps are determined according to the confidence ranking and the number of feature extraction maps; feature extraction on the feature peak maps yields the lesion cell type features; feature extraction on the lesion cell type features, followed by concatenation, yields the target image feature; and the classification result of the pathological image is obtained from the target image feature. Because the features obtained from the target image blocks are extracted separately for different lesion cell types, inaccurate classification caused by excessive differences in the types and numbers of lesion cells is reduced, improving the accuracy of the classification result.
It is worth noting that the technical solution of the present application first performs feature extraction on the feature peak maps to obtain the lesion cell type features, and then performs feature extraction on the lesion cell type features to obtain the target image feature. That is, each lesion cell type is expressed first, and the target image feature covering all lesion cell types is then summarized from those per-type expressions. This matches the diagnostic classification logic of pathological images, reduces the weakening of features from lesion cell types that are sparse in the overall distribution, improves the sensitivity of the classification process, alleviates the class imbalance in cell counts within a slide, and thereby improves the accuracy of the classification result.
In an embodiment, as shown in fig. 2, to further explain the pathological image classification method, step S150 may further include, but is not limited to, step S210 and step S220.
Step S210: performing feature extraction on the lesion cell type features with preset first feature extraction modules, where each lesion cell type feature is provided with a corresponding first feature extraction module and the first feature extraction modules share a first weight value.
In this step, a preset first feature extraction module may be a module composed of two fully connected layers, an activation layer, and a random dropout layer (Dropout), or any other self-designed neural network module. There is at least one lesion cell type; when there are multiple lesion cell types, feature extraction is performed on each lesion cell type feature with its corresponding first feature extraction module, and the first feature extraction modules share a first weight value.
In another embodiment of the present application, the first feature extraction module may be trained with lesion cells of all types to obtain the trained first feature extraction module. Because the first feature extraction modules share their weight value, using them for feature extraction improves their representation capability over all lesion cell types, yields more accurate target image features, and improves classification accuracy.
Step S220: concatenating the features obtained after feature extraction to obtain the target image feature.
In this step, when there is only one lesion cell type, the feature obtained after feature extraction may be used directly as the target image feature; when there are multiple lesion cell types, the extracted features are concatenated to obtain the target image feature, which is used to obtain the classification result of the pathological image in subsequent steps.
In this embodiment, with the pathological image classification method of steps S210 and S220, preset first feature extraction modules are used to perform feature extraction on the lesion cell type features, where each lesion cell type feature has a corresponding first feature extraction module and the modules share a first weight value; the extracted features are then concatenated to obtain the target image feature. Because the first feature extraction modules share a weight value, the representation capability over the lesion cell types is improved, the classification accuracy is improved, and the first feature extraction module is more convenient to train.
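Since the per-type modules share one weight value, they behave like a single module applied to each lesion cell type feature in turn, with the outputs concatenated into the target image feature. The sketch below uses a toy linear map as a stand-in for the two fully connected layers described above; the names and shapes are assumptions.

```python
def apply_shared_module(type_features, shared_weights):
    """Apply the same (weight-shared) feature extraction module to each
    lesion cell type feature vector, then concatenate the outputs into
    the target image feature."""
    def module(vec):
        # Toy linear "module"; a real one would add activations and dropout.
        return [sum(w * x for w, x in zip(row, vec)) for row in shared_weights]

    target_image_feature = []
    for vec in type_features:                     # one vector per lesion cell type
        target_image_feature.extend(module(vec))  # splice / concatenate
    return target_image_feature
```

Sharing the weights means one set of parameters serves every lesion cell type, which is what makes the module convenient to train on all types at once.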
In an embodiment, as shown in fig. 3, to further explain the pathological image classification method, step S130 may further include, but is not limited to, step S310, step S320, and step S330.
Step S310: and under the condition that the number of the feature extraction maps is 0, taking the feature maps with all feature values of 0 as feature peak value maps.
In this step, when the number of the feature extraction maps is 0, it indicates that no target image block contains the corresponding lesion cell type, so a feature map whose feature values are all 0 can be constructed and used as the feature peak map. This reduces the chance that a parameter error causes the classification result obtained in the subsequent steps to be wrong, and improves the rationality of the classification method.
Step S320: and under the condition that the number of the feature extraction maps is greater than 0 and the number of the feature extraction maps is less than a preset number threshold, selecting all the feature extraction maps corresponding to the same pathological cell type as feature peak maps.
In this step, the preset number threshold refers to any number threshold preset according to requirements; for example, the preset number threshold may be 10. When the number of the feature extraction maps is greater than 0 but less than the preset number threshold, fewer feature extraction maps are available than the preset number required, so all the feature extraction maps corresponding to the same lesion cell type are simply selected as feature peak maps.
Step S330: and under the condition that the number of the feature extraction maps is larger than a preset number threshold, selecting the feature extraction maps with the number threshold as feature peak value maps according to the sequencing result of the confidence.
In this step, the preset number threshold refers to any number threshold preset according to the requirement, and when the number of the feature extraction maps is greater than the preset number threshold, a plurality of feature extraction maps representing the existence of the feature extraction maps can be used as feature peak maps. The sequencing result of the confidence coefficient can represent the correlation degree between the target image block corresponding to the feature extraction map and the lesion cell type in the same lesion cell type, and the feature extraction map with a quantity threshold value is selected as the feature peak map according to the sequencing result of the confidence coefficient, namely, the feature extraction map most correlated with the lesion cell type is selected as the feature peak map, so that the characterization capability of the classification method can be improved, and the purpose of improving the classification accuracy is achieved.
In another embodiment of the present application, since a target image block may include lesion cells of multiple lesion cell types, and a block containing several lesion cells of the same type may appear repeatedly in the ranking, a repeated feature extraction map may be obtained. In this case, the repeated feature extraction map is counted as one feature peak map, and subsequent feature extraction maps continue to be selected as feature peak maps according to the ranking result of the confidence degrees until the number of feature peak maps equals the preset number threshold.
In the present embodiment, by using the pathological image classification method including the above steps S310 to S330, in the case where the number of feature extraction maps is 0, feature maps having feature values all of 0 are used as feature peak maps; under the condition that the number of the feature extraction maps is larger than 0 and the number of the feature extraction maps is smaller than a preset number threshold, selecting all feature extraction maps corresponding to the same pathological cell type as feature peak maps; according to the scheme of the embodiment of the application, the characteristic peak value spectrum can represent the characteristic extraction spectrum with the highest confidence level in the same pathological cell type, so that the target image characteristics can be conveniently obtained in the subsequent steps, and the purpose of improving the classification accuracy is achieved.
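The three branches of steps S310 to S330 can be sketched as a single selection routine. The tie-breaking when the count exactly equals the threshold (treated here like the "greater than" branch) and the names are assumptions.

```python
def select_feature_peak_maps(maps_with_conf, num_threshold, feature_len):
    """maps_with_conf: list of (confidence, feature_extraction_map) pairs for one
    lesion cell type. Returns the feature peak maps for that type."""
    if not maps_with_conf:
        # S310: no maps -> one feature map whose feature values are all 0
        return [[0.0] * feature_len]
    ranked = sorted(maps_with_conf, key=lambda mc: mc[0], reverse=True)
    if len(ranked) < num_threshold:
        # S320: fewer maps than the preset threshold -> keep them all
        return [m for _, m in ranked]
    # S330: keep the num_threshold maps with the highest confidence
    return [m for _, m in ranked[:num_threshold]]
```

Called once per lesion cell type, this always yields a non-empty, bounded set of feature peak maps for the subsequent steps.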
In an embodiment, as shown in fig. 4, the pathological image classification method will be further described, and steps S410 and S420 may be included but not limited before step S140.
Step S410: and acquiring target position information of the target image block for the pathological image.
In this step, the target image block is obtained by performing sliding window clipping on the pathological image, and the target image block corresponds to different positions of the pathological image. In an optional embodiment, target position information of the target image block for the pathological image may be recorded correspondingly during sliding window cropping, or the target position information of the target image block for the pathological image may be determined after feature extraction is performed on the target image block. The target position information of the target image block for the pathological image is obtained so as to facilitate finding and obtaining a corresponding feature extraction map in the subsequent steps.
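A minimal sketch of how sliding-window cropping could record each block's target position information as it goes; the window size and stride values are illustrative.

```python
def sliding_window_positions(height, width, win, stride):
    """Top-left (row, col) position of every window cropped from the pathological image."""
    positions = []
    for top in range(0, max(height - win, 0) + 1, stride):
        for left in range(0, max(width - win, 0) + 1, stride):
            positions.append((top, left))
    return positions
```

Each recorded position later serves as the key for finding the corresponding target image block and its feature extraction map.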
Step S420: and searching to obtain a corresponding feature extraction map according to the sequencing result of the confidence degree and the target position information.
In this step, the feature extraction map is obtained by performing feature extraction on the target image block, and the target position information can be used to find the target image block in the pathological image. And the sequencing result of the confidence degree represents the magnitude sequence of the confidence degree of the same lesion cell type, and the corresponding feature extraction map is obtained by searching according to the sequencing result of the confidence degree and the target position information, namely, the feature extraction map corresponding to the corresponding target image block is obtained by searching from the corresponding target position information according to the sequencing result of the confidence degree. The corresponding feature extraction map is obtained to facilitate determination of the feature peak map in the subsequent steps.
In another embodiment of the present application, when a plurality of target image blocks pass through the feature extraction model to obtain a plurality of feature extraction maps, the feature extraction maps may be cached first, and the feature extraction maps correspond to the target image blocks. When the target to be selected is determined to be the feature extraction map corresponding to the target image block at the target position according to the sequencing result of the confidence degree, the corresponding feature extraction map can be found according to the target position information.
In this embodiment, by using the pathological image classification method including the steps S410 to S420, target position information of the target image block with respect to the pathological image is obtained; according to the scheme of the embodiment of the application, the feature extraction map is associated with the target image block, so that the corresponding feature extraction map can be searched according to the target position information of the target image block without setting in a feature extraction model, the memory consumption of a training feature extraction model is reduced, and the feature peak value map can be conveniently obtained in the subsequent steps.
It is to be noted that, since the same target image block may include a plurality of lesion cell types, a plurality of feature extraction maps can be obtained from it. After the target position information corresponding to the most strongly characterizing feature extraction map within the same lesion cell type is determined according to the ranking result of the confidence degrees, the feature extraction map corresponding to that target image block is found, which facilitates determination of the feature peak map in the subsequent steps.
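The caching scheme described above can be sketched as a dictionary keyed by target position, so that the feature extraction map of a top-ranked block is retrieved without storing anything in the feature extraction model itself; the class and method names are hypothetical.

```python
class FeatureMapCache:
    """Caches each target image block's feature extraction map by its position."""
    def __init__(self):
        self._maps = {}

    def put(self, position, feature_map):
        # position: e.g. the (row, col) of the block within the pathological image
        self._maps[position] = feature_map

    def lookup(self, ranked_positions, k):
        """Feature extraction maps for the k highest-confidence positions."""
        return [self._maps[p] for p in ranked_positions[:k]]
```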
In an embodiment, as shown in fig. 5, for further explanation of the pathological image classification method, step S120 may further include, but is not limited to, step S510, step S520, and step S530.
Step S510: and training the feature extraction model by using the pathological cell data to obtain the trained feature extraction model.
In this step, the feature extraction model refers to an arbitrary feature extraction model in the related art. The lesion cell data may include data of a lesion cell type, a shape of a lesion cell, and the like, where the lesion cell type refers to a relevant lesion cell type in a pathological image of interest after classification of the pathological image, such as ascites lesion cells, glandular lesion cells, and the like. And training the feature extraction model by using the type of the pathological cell to obtain the trained feature extraction model, so that feature extraction can be conveniently carried out on the target image block in the subsequent step.
In another embodiment of the present application, in the process of training the feature extraction model, overfitting may be prevented in a Data Augmentation (Data Augmentation) manner, specifically, image Augmentation may be performed in a manner of adding noise to an image, disturbing the image, and the like, so as to reduce the overfitting of the obtained feature extraction model.
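A hedged sketch of the augmentation described above: additive noise plus a simple disturbance (here a horizontal flip). The noise level, seed handling, and choice of flip are illustrative, not specified by the patent.

```python
import random

def augment(image, noise_std=0.05, seed=0):
    """image: 2-D list of pixel values. Adds Gaussian noise and sometimes flips."""
    rng = random.Random(seed)
    noisy = [[px + rng.gauss(0.0, noise_std) for px in row] for row in image]
    if rng.random() < 0.5:                  # randomly flip the image horizontally
        noisy = [row[::-1] for row in noisy]
    return noisy
```

Applying such perturbations during training exposes the feature extraction model to varied inputs and reduces overfitting.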
Step S520: and performing feature extraction on the target image block by using the trained feature extraction model to obtain the type of the pathological cell and the confidence coefficient corresponding to the type of the pathological cell.
In this step, the trained feature extraction model is used to perform feature extraction on the target image block, that is, the target image block is input into the feature extraction model, so that the type of the pathological cell and the confidence degree corresponding to the type of the pathological cell can be obtained.
In another embodiment of the present application, the feature extraction model may be any classification network model, target detection network model, or segmentation network model in the related art, one feature extraction model may be provided for each target image block, or a plurality of target image blocks may be sequentially input into the feature extraction model, so as to obtain a diseased cell type and a confidence corresponding to the diseased cell type.
Step S530: and determining a feature extraction map corresponding to the target image block according to the network layer information of the backbone network of the feature extraction model.
In this step, the backbone network of the feature extraction model refers to any backbone network corresponding to the feature extraction model, and it can be understood that parameters of the backbone network may be adjusted in the process of training the feature extraction model, and the features output by the backbone network may be classified to obtain a classification result corresponding to the feature extraction model. Determining a feature extraction map corresponding to the target image block according to network layer information of a backbone network of the feature extraction model, that is, the features output by the backbone network are taken as the feature extraction map of the corresponding target image block; or, connecting the information of each layer in the backbone network by using any feature connection mode in the related technology to obtain a feature extraction map.
In another embodiment of the present application, a feature map with a size of h × w × c output by the last layer of the backbone network of the feature extraction model is used as the feature extraction map; or, by using the structure of a Feature Pyramid Network (FPN), a preset number of network layer Feature maps are selected according to network layer information of the backbone network, so that a plurality of Feature maps with the size of 1 × 1 × c are obtained by classifying the network layer Feature maps, and the Feature maps are connected in series to obtain a Feature extraction map.
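One way to read the FPN-style variant above: pool each selected network-layer feature map down to a 1 × 1 × c vector and connect the vectors in series. The global-average pooling used here is an assumption about how the 1 × 1 × c maps are produced.

```python
def feature_extraction_map(layer_maps):
    """layer_maps: list of h x w x c feature maps (nested lists), one per selected
    network layer; each is averaged to a 1 x 1 x c vector, then concatenated."""
    spliced = []
    for fmap in layer_maps:
        h, w, c = len(fmap), len(fmap[0]), len(fmap[0][0])
        pooled = [sum(fmap[i][j][k] for i in range(h) for j in range(w)) / (h * w)
                  for k in range(c)]
        spliced.extend(pooled)
    return spliced
```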
In this embodiment, the pathological image classification method including the steps S510 to S530 is adopted, and the feature extraction model is trained by using the type of the pathological cell, so as to obtain a trained feature extraction model; performing feature extraction on the target image block by using the trained feature extraction model to obtain a pathological cell type and a confidence coefficient corresponding to the pathological cell type; according to the scheme of the embodiment of the application, the feature extraction map is determined according to the network layer information of the backbone network of the feature extraction model, so that the characterization capability of the feature extraction map can be improved, and the purpose of improving the accuracy of the subsequently obtained classification result is achieved.
In an embodiment, as shown in fig. 6, to further describe the pathological image classification method, the feature extraction model is an object detection network model, and step S520 may further include, but is not limited to, step S610 and step S620.
Step S610: and performing feature extraction on the target image block by using the trained target detection network model to obtain the position information of the pathological cell.
In this step, the target detection network model refers to any target detection network model in the related art, such as EfficientDet, YOLO, and the like. Performing feature extraction on the target image block by using the trained target detection network model means inputting the target image block into the target detection network model so as to obtain the position information of the pathological cells. The location information of the diseased cells is obtained to facilitate obtaining the type of the diseased cells and the confidence corresponding to that type in the subsequent steps.
Step S620: and determining the type of the pathological cell and the confidence corresponding to the type of the pathological cell according to the position information of the pathological cell.
In this step, the lesion cell location information refers to location information of a lesion cell in the target image, and since the target detection network model is obtained through training of lesion cell data, a plurality of target detection network models may be set to detect different types of lesion cells, respectively, so that a lesion cell type and a confidence corresponding to the lesion cell type may be determined according to the detected location information of the lesion cell.
In another embodiment of the present application, when there are two types of diseased cells, the target detection network model respectively detects and obtains corresponding location information of the diseased cells, and according to the target detection network model, whether there is a corresponding type of diseased cells in the target image block and a confidence corresponding to the type of diseased cells can be obtained.
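A hedged sketch of deriving the lesion cell type and its confidence from per-type detection models. Taking the highest detection score as the type confidence is one plausible rule, not necessarily the patent's; the names are hypothetical.

```python
def type_confidences(detections_per_model):
    """detections_per_model: {lesion_cell_type: [(x, y, w, h, score), ...]}, one
    entry per per-type target detection network model. A type is present if it
    has at least one detection; its confidence is the best detection score."""
    result = {}
    for cell_type, detections in detections_per_model.items():
        if detections:
            result[cell_type] = max(d[4] for d in detections)
    return result
```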
In this embodiment, by using the pathological image classification method including the steps S610 to S620, the trained target detection network model is used to perform feature extraction on the target image block, so as to obtain the location information of the lesion cells; according to the scheme of the embodiment of the application, the lesion cell type and the confidence corresponding to the lesion cell type are determined, so that the most relevant feature map of the same lesion cell type can be obtained in the subsequent steps, and the purpose of improving the accuracy of the classification result is achieved.
In an embodiment, as shown in fig. 7, for further explaining the pathological image classification method, the feature extraction model is a segmentation network model, and step S520 may further include, but is not limited to, step S710 and step S720.
Step S710: and performing feature extraction on the target image block by using the trained segmentation network model to obtain mask information of the pathological cells.
In this step, the segmentation network model refers to any segmentation network model in the related art, such as U-Net, LinkNet, and the like. Performing feature extraction on the target image block by using the trained segmentation network model means inputting the target image block into the segmentation network model, thereby obtaining mask information of the pathological cells. The mask information of the diseased cells is obtained to facilitate obtaining the diseased cell type and the confidence corresponding to that type in the subsequent steps.
Step S720: and determining the type of the pathological cell and the confidence corresponding to the type of the pathological cell according to the mask information.
In this step, the mask information of the diseased cells refers to the mask information output by the segmentation network model for the corresponding positions of the diseased cells in the target image. Because the segmentation network model is obtained through training on diseased cell data, different mask information is assigned to each diseased cell type, so the diseased cell type and the confidence corresponding to the diseased cell type can be determined according to the mask information of the diseased cells.
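Analogously for the segmentation branch, one plausible (assumed) rule: a lesion cell type is present if any pixel of its mask exceeds a probability threshold, and its confidence is the mean probability over those pixels.

```python
def mask_type_confidences(mask_probs, threshold=0.5):
    """mask_probs: {lesion_cell_type: 2-D list of per-pixel mask probabilities}."""
    result = {}
    for cell_type, probs in mask_probs.items():
        hits = [p for row in probs for p in row if p > threshold]
        if hits:
            result[cell_type] = sum(hits) / len(hits)
    return result
```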
In this embodiment, by using the pathological image classification method including the steps S710 to S720, the trained segmentation network model is used to perform feature extraction on the target image block, so as to obtain mask information of the lesion cells; according to the scheme of the embodiment of the application, the lesion cell type and the confidence degree corresponding to the lesion cell type are determined, so that the most relevant feature map of the same lesion cell type can be obtained in the subsequent steps, and the purpose of improving the accuracy of the classification result is achieved.
In an embodiment, as shown in fig. 8, for further explanation of the pathological image classification method, step S140 may further include, but is not limited to, step S810 and step S820.
Step S810: and performing feature extraction on the feature peak value maps by using a preset second feature extraction module to obtain an intermediate feature map, wherein each feature peak value map corresponds to one second feature extraction module, and the second feature extraction modules share a second weight value.
In this step, the preset second feature extraction module refers to any preset feature extraction module, and the second weight value refers to a weight value related to the lesion cell type. The preset second feature extraction module is used to perform feature extraction on the feature peak maps to obtain an intermediate feature map, where each feature peak map corresponds to one second feature extraction module and the second feature extraction modules share the second weight value, so the intermediate feature map can represent the lesion cell type information in the feature peak map. Obtaining the intermediate feature map facilitates obtaining the lesion cell type features in the subsequent step.
In another embodiment of the application, the technical scheme of the embodiment of the application is used for classifying pathological images and sharing the second weight value, so that pathological cell type information among characteristic peak value maps can be unified, the condition that classification results are wrong due to unbalanced distribution of pathological cell types is reduced, and the adaptability of the classification method to various samples with different distributions is improved.
Step S820: and inputting the intermediate characteristic map into a pooling layer with an attention module to obtain the type characteristics of the diseased cells.
In this step, the intermediate feature map is input into the pooling layer with the attention module to obtain the type features of the diseased cells.
In an alternative embodiment, there are n feature peak maps in total, B_i = {F_1, ..., F_n}. Feature extraction is performed on the feature peak map B_i by the second feature extraction module to obtain an intermediate feature map B_i' = {X_1, ..., X_n}, where X_j ∈ R^{M×1} and the X_j are mutually unordered and independent. The lesion cell type feature Z_i is obtained as

Z_i = Σ_{j=1}^{n} a_j · X_j,

where the attention weight a_j is given by

a_j = exp{w^T (tanh(U·X_j) ⊙ sigm(V·X_j))} / Σ_{k=1}^{n} exp{w^T (tanh(U·X_k) ⊙ sigm(V·X_k))},

w, U and V are the corresponding neural network parameters in the second feature extraction module, with w ∈ R^{L×1}, U ∈ R^{L×M} and V ∈ R^{L×M}; M and L are both adjustable hyperparameters, and both are feature lengths. At this time Z_i ∈ R^{M×1}. In this way, K lesion cell type features {Z_1, ..., Z_K} can be obtained.
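A runnable sketch of the attention pooling above, following the standard gated-attention formulation (an assumption, since the original embedded equation images are unavailable); X stacks the n intermediate features row-wise.

```python
import numpy as np

def gated_attention_pool(X, w, U, V):
    """X: (n, M) intermediate features X_1..X_n; w: (L,); U, V: (L, M).
    Returns the pooled lesion cell type feature Z of shape (M,) and the
    attention weights a of shape (n,)."""
    def sigm(t):
        return 1.0 / (1.0 + np.exp(-t))
    # score_j = w^T (tanh(U X_j) * sigm(V X_j))
    scores = np.array([w @ (np.tanh(U @ x) * sigm(V @ x)) for x in X])
    a = np.exp(scores - scores.max())       # softmax over the n feature peak maps
    a = a / a.sum()
    Z = (a[:, None] * X).sum(axis=0)        # Z = sum_j a_j X_j
    return Z, a
```

With identical inputs the weights are uniform; otherwise the pooling emphasizes the feature peak maps most indicative of the lesion cell type.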
In this embodiment, by using the pathological image classification method including the steps S810 to S820, a preset second feature extraction module is used to perform feature extraction on the feature peak value maps to obtain an intermediate feature map, where each feature peak value map corresponds to one second feature extraction module, and the second feature extraction modules share a second weight value; according to the scheme of the embodiment of the application, the pathological change cell characteristics are obtained according to the characteristic peak value spectrum and the second characteristic extraction module sharing the second weight value, the situation that the classification result is wrong due to unbalanced distribution of the pathological change cell types is reduced, and the adaptability of the classification method to various samples with different distributions is improved.
In addition, as shown in fig. 9, an embodiment of the present application further provides an image classification apparatus 1000, where the image classification apparatus 1000 includes: memory 1002, processor 1001, and computer programs stored on memory 1002 and operable on processor 1001.
The processor 1001 and the memory 1002 may be connected by a bus or other means.
The memory 1002, which is a non-transitory computer-readable storage medium, may be used to store non-transitory software programs as well as non-transitory computer-executable programs. Further, the memory 1002 may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory 1002 may optionally include memory 1002 located remotely from the processor 1001, which may be connected to the processor 1001 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
Non-transitory software programs and instructions required to implement the pathological image classification method of the above-described embodiment are stored in the memory 1002, and when executed by the processor 1001, perform the pathological image classification method of the above-described embodiment, for example, perform the above-described method steps S110 to S160 in fig. 1, the method steps S210 to S220 in fig. 2, the method steps S310 to S330 in fig. 3, the method steps S410 to S420 in fig. 4, the method steps S510 to S530 in fig. 5, the method steps S610 to S620 in fig. 6, the method steps S710 to S720 in fig. 7, and the method steps S810 to S820 in fig. 8.
Furthermore, an embodiment of the present application further provides a computer-readable storage medium storing computer-executable instructions, which are executed by a processor or controller, for example, by a processor in the above-mentioned apparatus embodiment, and can make the above-mentioned processor execute the pathological image classification method in the above-mentioned embodiment, for example, execute the above-mentioned method steps S110 to S160 in fig. 1, method steps S210 to S220 in fig. 2, method steps S310 to S330 in fig. 3, method steps S410 to S420 in fig. 4, method steps S510 to S530 in fig. 5, method steps S610 to S620 in fig. 6, method steps S710 to S720 in fig. 7, and method steps S810 to S820 in fig. 8.
It will be understood by those of ordinary skill in the art that all or some of the steps of the methods and systems disclosed above may be implemented as software, firmware, hardware, and suitable combinations thereof. Some or all of the physical components may be implemented as software executed by a processor, such as a central processing unit, digital signal processor, or microprocessor, or as hardware, or as an integrated circuit, such as an application-specific integrated circuit. Such software may be distributed on computer-readable media, which may include computer storage media (or non-transitory media) and communication media (or transitory media). As is well known to those of ordinary skill in the art, the term computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules, or other data. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by a computer. In addition, as is well known to those skilled in the art, communication media typically embodies computer-readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism, and includes any information delivery media.

Claims (10)

1. A pathological image classification method based on a characteristic peak value atlas is characterized by comprising the following steps:
acquiring a pathological image, and performing sliding window cutting on the pathological image to obtain a plurality of target image blocks;
performing feature extraction on the target image block by using a trained feature extraction model to obtain a pathological cell type, a confidence coefficient corresponding to the pathological cell type and a feature extraction map, wherein the feature extraction model is obtained by training preset pathological cell data;
sequencing the confidence degrees corresponding to the same lesion cell type, and determining a characteristic peak value map according to the sequencing result of the confidence degrees and the quantity information of the characteristic extraction maps;
extracting the characteristic of the characteristic peak value spectrum to obtain the type characteristics of the pathological cell;
extracting the characteristics of the types of the pathological cells, and splicing the characteristics obtained after the characteristics are extracted to obtain the characteristics of a target image;
and obtaining a classification result of the pathological image according to the target image characteristics.
2. The pathological image classification method according to claim 1, wherein the extracting the features of the lesion cell types and stitching the features obtained after the extracting the features to obtain the target image features comprises:
performing feature extraction on the pathological change cell type features by using a preset first feature extraction module, wherein the first feature extraction modules are correspondingly arranged on the pathological change cell type features, and share a first weight value;
and splicing the features obtained after the features are extracted to obtain the features of the target image.
3. The pathological image classification method according to claim 1, wherein the determining a feature peak map according to the ranking result of the confidence degrees and the number information of the feature extraction maps includes:
under the condition that the number of the feature extraction maps is 0, taking feature maps with all feature values of 0 as feature peak value maps;
under the condition that the number of the feature extraction maps is larger than 0 and the number of the feature extraction maps is smaller than a preset number threshold, selecting all the feature extraction maps corresponding to the same pathological cell type as feature peak maps;
and under the condition that the number of the feature extraction maps is larger than a preset number threshold, selecting the feature extraction maps with the number threshold as feature peak value maps according to the sequencing result of the confidence.
4. The pathological image classification method according to claim 1, wherein before the determining a feature peak map according to the ranking result of the confidence degrees and the quantity information of the feature extraction maps, the method comprises:
acquiring target position information of a target image block relative to the pathological image;
and searching and obtaining the corresponding feature extraction map according to the sequencing result of the confidence degree and the target position information.
5. The pathological image classification method according to claim 1, wherein the performing feature extraction on the target image block by using the trained feature extraction model to obtain a diseased cell type, a confidence corresponding to the diseased cell type, and a feature extraction map comprises:
training the feature extraction model by using pathological cell data to obtain a trained feature extraction model;
performing feature extraction on the target image block according to a trained feature extraction model to obtain a pathological cell type and a confidence coefficient corresponding to the pathological cell type;
and determining a feature extraction map corresponding to the target image block according to the network layer information of the backbone network of the feature extraction model.
6. The pathological image classification method according to claim 5, wherein the feature extraction model is a target detection network model, and the obtaining of the lesion cell type and the confidence corresponding to the lesion cell type by performing feature extraction on the target image block using the trained feature extraction model includes:
performing feature extraction on the target image block by using the trained target detection network model to obtain position information of pathological cells;
and determining the type of the pathological cell and the confidence corresponding to the type of the pathological cell according to the position information of the pathological cell.
7. The pathological image classification method according to claim 5, wherein the feature extraction model is a segmentation network model, and performing feature extraction on the target image block by using the trained feature extraction model to obtain the pathological cell type and the confidence corresponding to the pathological cell type comprises:
performing feature extraction on the target image block using the trained segmentation network model to obtain mask information of pathological cells; and
determining the pathological cell type and the confidence corresponding to the pathological cell type according to the mask information.
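For the segmentation variant of claim 7, one plausible way to turn mask information into a type and confidence is to score each class by the mean soft-mask probability inside its thresholded region (the per-class soft masks, the threshold, and this scoring rule are all assumptions; the claim leaves the mapping open):

```python
import numpy as np

def classify_from_masks(mask_probs: np.ndarray, class_names, threshold=0.5):
    """mask_probs: (K, H, W) per-class soft masks from the segmentation
    model. Score class k by the mean probability over pixels where its
    mask exceeds the threshold, then return the best class and its score
    as the confidence."""
    scores = []
    for k in range(mask_probs.shape[0]):
        region = mask_probs[k] >= threshold
        scores.append(float(mask_probs[k][region].mean()) if region.any() else 0.0)
    best = int(np.argmax(scores))
    return class_names[best], scores[best]
```

The hypothetical class names used in testing are illustrative only; a pixel-count-weighted score would also be consistent with the claim.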
8. The pathological image classification method according to claim 1, wherein performing feature extraction on the feature peak maps to obtain pathological cell type features comprises:
performing feature extraction on the feature peak maps using preset second feature extraction modules to obtain intermediate feature maps, wherein each feature peak map corresponds to one second feature extraction module and the second feature extraction modules share a second weight value; and
inputting the intermediate feature maps into a pooling layer with an attention module to obtain the pathological cell type features.
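The two steps of claim 8 can be sketched as follows: one shared weight matrix plays the role of the weight-sharing second feature extraction modules, and a single learned projection scores each intermediate feature for attention pooling (the linear module, the single-projection attention form, and all parameter names are illustrative assumptions):

```python
import numpy as np

def second_feature_extraction(peak_maps, w_shared: np.ndarray) -> np.ndarray:
    """Apply one shared weight matrix to every feature peak map (vectors
    here for simplicity), mirroring the claim's weight sharing across the
    N second feature extraction modules. Returns (N, D) intermediate
    features."""
    return np.stack([w_shared @ p for p in peak_maps])

def attention_pool(intermediate: np.ndarray, w_att: np.ndarray) -> np.ndarray:
    """Attention-weighted pooling over (N, D) intermediate features:
    score each feature with the shared projection w_att, softmax the
    scores into attention weights, and return the weighted sum as the
    pathological cell type feature (D,)."""
    scores = intermediate @ w_att                    # (N,)
    scores = scores - scores.max()                   # numerical stability
    weights = np.exp(scores) / np.exp(scores).sum()  # softmax
    return weights @ intermediate                    # (D,)
```

In a trained network both `w_shared` and `w_att` would be learned; here they are fixed only to make the sketch runnable.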
9. An image classification apparatus, comprising: a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the pathological image classification method according to any one of claims 1 to 8.
10. A computer-readable storage medium storing computer-executable instructions for performing the pathological image classification method according to any one of claims 1 to 8.
CN202211566089.9A 2022-12-07 2022-12-07 Pathological image classification method and image classification device based on characteristic peak map Active CN115601749B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211566089.9A CN115601749B (en) 2022-12-07 2022-12-07 Pathological image classification method and image classification device based on characteristic peak map

Publications (2)

Publication Number Publication Date
CN115601749A true CN115601749A (en) 2023-01-13
CN115601749B CN115601749B (en) 2023-03-14

Family

ID=84851983

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211566089.9A Active CN115601749B (en) 2022-12-07 2022-12-07 Pathological image classification method and image classification device based on characteristic peak map

Country Status (1)

Country Link
CN (1) CN115601749B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110689091A (en) * 2019-10-18 2020-01-14 中国科学技术大学 Weak supervision fine-grained object classification method
WO2021169161A1 (en) * 2020-02-26 2021-09-02 上海商汤智能科技有限公司 Image recognition method, recognition model training method and apparatuses related thereto, and device
WO2022011892A1 (en) * 2020-07-15 2022-01-20 北京市商汤科技开发有限公司 Network training method and apparatus, target detection method and apparatus, and electronic device
CN114140465A (en) * 2021-01-20 2022-03-04 赛维森(广州)医疗科技服务有限公司 Self-adaptive learning method and system based on cervical cell slice image
CN114187277A (en) * 2021-12-14 2022-03-15 赛维森(广州)医疗科技服务有限公司 Deep learning-based thyroid cytology multi-type cell detection method
US20220083762A1 (en) * 2020-09-15 2022-03-17 Shenzhen Imsight Medical Technology Co., Ltd. Digital image classification method for cervical fluid-based cells based on a deep learning detection model
CN115170571A (en) * 2022-09-07 2022-10-11 赛维森(广州)医疗科技服务有限公司 Method and device for identifying pathological images of hydrothorax and ascites cells and medium
CN115239705A (en) * 2022-09-19 2022-10-25 赛维森(广州)医疗科技服务有限公司 Method, device, equipment and storage medium for counting the number of endometrial plasma cells


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
KAI WU: "A comprehensive texture feature analysis framework of renal cell carcinoma: pathological, prognostic, and genomic evaluation based on CT images" *
孟竹: "Feature learning and representation of medical pathological images" *

Also Published As

Publication number Publication date
CN115601749B (en) 2023-03-14

Similar Documents

Publication Publication Date Title
CN109447169B (en) Image processing method, training method and device of model thereof and electronic system
CN109033950B (en) Vehicle illegal parking detection method based on multi-feature fusion cascade depth model
CN110852288B (en) Cell image classification method based on two-stage convolutional neural network
CN110309781B (en) House damage remote sensing identification method based on multi-scale spectrum texture self-adaptive fusion
CN106355188A (en) Image detection method and device
CN112348787A (en) Training method of object defect detection model, object defect detection method and device
CN115170571B (en) Method for identifying pathological image of hydrothorax and ascites cells, image identification device and medium
CN114972191A (en) Method and device for detecting farmland change
CN111738114B (en) Vehicle target detection method based on anchor-free accurate sampling remote sensing image
US11715316B2 (en) Fast identification of text intensive pages from photographs
CN111881741A (en) License plate recognition method and device, computer equipment and computer-readable storage medium
CN110599453A (en) Panel defect detection method and device based on image fusion and equipment terminal
CN113312508A (en) Vehicle image retrieval method and device
CN111444976A (en) Target detection method and device, electronic equipment and readable storage medium
CN111583180A (en) Image tampering identification method and device, computer equipment and storage medium
CN115984543A (en) Target detection algorithm based on infrared and visible light images
CN112766246A (en) Document title identification method, system, terminal and medium based on deep learning
CN116740758A (en) Bird image recognition method and system for preventing misjudgment
CN107578003A (en) A kind of remote sensing images transfer learning method based on GEOGRAPHICAL INDICATION image
CN115601749B (en) Pathological image classification method and image classification device based on characteristic peak value atlas
CN113313149A (en) Dish identification method based on attention mechanism and metric learning
CN115880293B (en) Pathological image identification method, device and medium for bladder cancer lymph node metastasis
CN115620083B (en) Model training method, face image quality evaluation method, equipment and medium
CN111680553A (en) Pathological image identification method and system based on depth separable convolution
CN110555344B (en) Lane line recognition method, lane line recognition device, electronic device, and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant