CN117036811A - Intelligent pathological image classification system and method based on double-branch fusion network - Google Patents
- Publication number
- CN117036811A (application number CN202311020950.6A)
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02A—TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
- Y02A90/00—Technologies having an indirect contribution to adaptation to climate change
- Y02A90/10—Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation
Landscapes
- Image Analysis (AREA)
Abstract
The application discloses an intelligent pathological image classification system and method based on a dual-branch fusion network, comprising the following steps: preprocessing the pathology image to obtain both a graph data structure of the pathology image and a fixed-size pathology image; extracting features from the graph data structure and from the fixed-size pathology image to obtain graph features containing pathology image category information and deep convolutional features containing pathology image category information; and fusing the graph features with the deep convolutional features to obtain the final classification result of the pathology image. The model is designed primarily for classifying breast cancer pathology images and achieves a state-of-the-art classification accuracy of 67.03% on BRACS. The model was also validated on the colorectal cancer CRA dataset, where it likewise achieved the best current performance of 97.33%.
Description
Technical Field
The application belongs to the field of digital pathology image processing, and in particular relates to an intelligent pathology image classification system based on a dual-branch fusion network.
Background
Pathological image classification is a research hotspot in computer vision and deep learning. In traditional practice, accurate assessment requires a pathologist with highly specialized medical knowledge and skills; such manual evaluation is time-consuming, labour-intensive, and easily affected by subjective factors. With the development of artificial intelligence, more and more technologies have emerged to assist pathologists in diagnosis. However, because pathology images vary greatly in size, directly applying traditional computer vision methods runs into several problems: for example, directly compressing an image to a fixed size loses a large amount of information and caps the achievable classification accuracy. Using a graph neural network for pathological image classification avoids the information loss caused by compression, but the limited feature extraction capability of graph neural networks in turn limits classification performance.
Disclosure of Invention
The application aims to provide a pathological image intelligent classification system based on a double-branch fusion network, which aims to solve the problems in the prior art.
In order to achieve the above object, the present application provides an intelligent classification system for pathological images based on a dual-branch fusion network, comprising:
the image preprocessing module is used for preprocessing the pathological image to obtain a graph data structure of the pathological image and a fixed-size pathological image;
the feature extraction module is connected with the image preprocessing module and is used for extracting features from the graph data structure of the pathological image and from the fixed-size pathological image to obtain graph features containing pathological image category information and deep convolutional features containing pathological image category information;
and the category prediction module is connected with the feature extraction module and is used for fusing the graph features containing the pathological image category information with the deep convolutional features containing the pathological image category information to obtain the final classification result of the pathological image.
Preferably, the image preprocessing module includes:
the graph construction module is used for cutting the pathological image to obtain a graph data structure of the pathological image;
and the compression module is used for processing pathological image data of different sizes into pathological images of a fixed size.
Preferably, the feature extraction module includes:
the graph feature extraction module is used for extracting features from the graph data structure of the pathological image to obtain graph features containing pathological image category information;
and the convolutional feature extraction module is used for extracting features from the fixed-size pathology image to obtain deep convolutional features containing pathology image category information.
Preferably, the graph feature extraction module includes:
the aggregation unit is used for acquiring neighbor information of the corresponding node;
and the updating unit is used for updating the information.
In order to achieve the above purpose, the present application further provides a pathological image intelligent classification method based on a dual-branch fusion network, which comprises:
performing image preprocessing on the pathology image to obtain a graph data structure of the pathology image and a pathology image with a fixed size;
extracting features of the graph data structure of the pathological image and the pathological image with a fixed size to obtain graph features containing pathological image category information and depth convolution features containing pathological image category information;
and carrying out feature fusion on the graph features containing the pathological image category information and the depth convolution features containing the pathological image category information to obtain a final classification result of the pathological image.
Preferably, the process of obtaining the graph data structure of the pathology image and the pathology image of a fixed size comprises:
cutting the pathological image to obtain a graph data structure of the pathological image;
the pathology image data of different sizes are processed into pathology images of a fixed size.
Preferably, the process of obtaining the graph feature containing the pathological image category information and the depth convolution feature containing the pathological image category information includes:
extracting features from the graph data structure of the pathological image to obtain graph features containing pathological image category information;
and extracting features of the pathology image with the fixed size to obtain the depth convolution features containing the pathology image category information.
The application has the following technical effects: at the input end, the pathological image is represented both as a graph data structure with image patches as nodes and as a compressed fixed-size image, so the two representations complement each other in information. Meanwhile, the whole model is trained under the joint supervision of cross-entropy loss and Focal loss, so that the two branches complement each other in performance, overcoming the poor performance of either branch acting alone;
the model is mainly designed for classifying the pathological images of the breast cancer, and the current best classifying performance of 67.03% is obtained on BRACS. At the same time the model was also validated on the rectal cancer CRA dataset, again achieving the best performance 97.33% at present.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the application. In the drawings:
FIG. 1 is a schematic diagram of a classification system according to an embodiment of the present application;
FIG. 2 is an overall block diagram of an embodiment of the present application;
FIG. 3 is a flowchart of a classification method according to an embodiment of the present application.
Detailed Description
It should be noted that, without conflict, the embodiments of the present application and features of the embodiments may be combined with each other. The application will be described in detail below with reference to the drawings in connection with embodiments.
It should be noted that the steps illustrated in the flowcharts of the figures may be performed in a computer system such as a set of computer executable instructions, and that although a logical order is illustrated in the flowcharts, in some cases the steps illustrated or described may be performed in an order other than that illustrated herein.
Example 1
As shown in fig. 1-2, in this embodiment, a pathology image intelligent classification system based on a dual-branch fusion network is provided, which includes:
the method comprises a pathological image two-way representation module, a graph feature extraction module, a convolutional neural network forward propagation module and a feature fusion module;
the two representation method modules are used for representing the pathological image as image structure data (graph) and images with fixed sizes, and are respectively used as inputs of two network branches of GNN and CNN, and the problem of large size difference in pathological image classification can be effectively solved by fusing the respective advantages of the two networks on data representation, so that the classification performance of the model is improved.
The graph feature extraction module is used for extracting features from the graph data structure of the pathological image to obtain graph features containing pathological image category information.
The convolutional neural network forward propagation module is used for extracting features of pathological images with fixed sizes to obtain deep convolutional features containing pathological image category information.
The feature fusion module fuses the graph features extracted by the graph neural network with the deep convolutional features extracted by the convolutional neural network, realizing information complementation and producing the final classification result of the pathological image.
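As an illustration only (not the patent's learned fusion head), concatenation-based fusion followed by a linear classifier can be sketched as follows; the weights and feature values below are placeholders, whereas in the real system they are learned during training:

```python
# Feature fusion by concatenation: join the graph-branch embedding and the
# CNN-branch feature vector, then score each class with a linear layer.
# All weights here are hypothetical placeholders for illustration.

def fuse_and_classify(graph_feat, cnn_feat, weights, bias):
    fused = graph_feat + cnn_feat          # concatenation along the feature axis
    scores = [sum(w * x for w, x in zip(row, fused)) + b
              for row, b in zip(weights, bias)]
    return scores.index(max(scores))       # predicted class id

graph_feat = [0.2, 0.8]                    # toy graph-branch embedding
cnn_feat = [0.5, 0.1]                      # toy CNN-branch feature
weights = [[1.0, 0.0, 0.0, 1.0],           # class 0 weight row
           [0.0, 1.0, 1.0, 0.0]]           # class 1 weight row
bias = [0.0, 0.0]
pred = fuse_and_classify(graph_feat, cnn_feat, weights, bias)
```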
As a preferred technical solution, the dual-representation module of the pathological image comprises a graph construction module and a compression module;
the pathological image is expressed as a Graph construction module, the pathological image is cut into image blocks, each image block is regarded as a node construction Graph (Graph), in the Graph network branch, the construction of the Graph is crucial, a large tissue area is selected as a node to express the pathological image with variable size as a Graph, the method refers to the method of expressing a Quan Qiepian pathological image as a 2D point cloud, an ROI Graph is constructed, in the ROI Graph, the pathological image is cut into non-overlapped small image blocks, the size of the image blocks is 96 multiplied by 96, and the size considers the condition of a smaller ROI area. Each picture block is treated as a vertex, the coordinates of which are the center coordinates of the picture block, while the depth features of the picture block are extracted as the initial features of the node using a depth model pre-trained on ImageNet. The pre-trained depth model on a large scale image dataset can yield a good representation of features, and these features can be generalized into image tasks of other neighborhoods. Therefore, the features extracted by using the pre-training model can represent the original picture to a certain extent, and meanwhile, the features also have certain semantic information, so that the method is more beneficial to training and optimizing the network compared with the method of directly using the pixel features. 
Edges connect each patch to the patches it touches, following the principle that adjacency implies connection, so that messages can be passed during forward propagation; the adjacency used in graph construction is the 8-neighbourhood. The deep feature extraction network uses an EfficientNet pre-trained model, giving an initial node feature length of 1792. The graph preserves the structure of the original pathological image and, to a certain extent, compensates for the information the CNN loses through image compression.
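The patch-and-graph construction described above can be sketched as follows (an illustrative simplification assuming only the 96×96 patch size and 8-neighbour adjacency stated here; node feature extraction and non-tissue filtering are omitted):

```python
# Build an ROI graph from a pathology image: cut the image into
# non-overlapping 96x96 patches, make one node per patch (with its centre
# coordinates), and add undirected edges by 8-adjacency on the patch grid.

def build_patch_graph(height, width, patch=96):
    """Return node centre coordinates and undirected 8-neighbour edges."""
    rows, cols = height // patch, width // patch
    node_id = lambda r, c: r * cols + c
    centers = [(r * patch + patch // 2, c * patch + patch // 2)
               for r in range(rows) for c in range(cols)]
    edges = set()
    for r in range(rows):
        for c in range(cols):
            for dr in (-1, 0, 1):          # 8-neighbourhood offsets
                for dc in (-1, 0, 1):
                    if dr == dc == 0:
                        continue
                    nr, nc = r + dr, c + dc
                    if 0 <= nr < rows and 0 <= nc < cols:
                        a, b = node_id(r, c), node_id(nr, nc)
                        edges.add((min(a, b), max(a, b)))  # store undirected
    return centers, sorted(edges)

centers, edges = build_patch_graph(192, 192)   # a 2x2 grid of 96x96 patches
```

In a 2×2 patch grid every patch touches the other three, so the graph is complete with 6 undirected edges.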
The compression module processes pathological image data of different sizes into images of the same size. When a CNN model classifies the data, the image must be reshaped to a fixed size, which loses image information and introduces visual deformation. In the experimental dataset the height-to-width (H/W) ratio reaches up to 4, so reshaping to a fixed size severely compresses some images in height. Even so, for most images the classification accuracy of a CNN remains considerable; in view of this, the experimental design still incorporates a CNN branch into the model, which produces the deep convolutional features containing pathological image category information;
as a carefully chosen technical scheme, the graph feature extraction module is usually composed of two parts for a GNN model, wherein an aggregator is a function for obtaining neighbor information of a corresponding node and information (Combine) of an updated node, and a main neighborhood aggregator (Principal Neighbourhood Aggregation, PNA) is a aggregation function with strong distinguishing capability, and combines a multi-aggregator and a scaler based on node degree, and the scaler multiplies values obtained by the aggregator to finish amplifying or attenuating neighborhood messages. Therefore, at the graph neural network layer, we use PNA as node information update operation of each layer, and at our dual-branch hybrid modelThe graph neural network branches comprise three layers of graph neural network structures. Since the classification task at the graph level is performed, a Readout operation is performed herein in order to extract the image level references, where h i And representing the output of each layer of the graph neural network layer, wherein Concat is the process of splicing each layer of the neural network layer of each graph node in length, and finally, averaging the final representation of all nodes to obtain the representation of the graph level.
Embedding=Mean(Concat[h i ,i∈{1,2,3}])
Wherein h is i Representing the output of each layer of the graph neural network layer, wherein Concat is the process of splicing each layer of the neural network layer of each graph node in length, and finally, averaging the final representation of all nodes to obtain the representation of the graph level;
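The Readout defined above can be illustrated in plain Python (a minimal sketch with toy per-node features; in the real model the h_i would come from the three PNA layers):

```python
# Graph-level readout: per node, concatenate the outputs of the three GNN
# layers along the feature dimension, then average over all nodes,
# i.e. Embedding = Mean(Concat[h_i, i in {1,2,3}]).

def readout(layer_outputs):
    """layer_outputs: list of 3 layers, each a list of per-node feature vectors."""
    num_nodes = len(layer_outputs[0])
    # Concat: join the three layer outputs of each node end to end.
    per_node = [sum((layer[n] for layer in layer_outputs), [])
                for n in range(num_nodes)]
    dim = len(per_node[0])
    # Mean: average the concatenated vectors over all nodes.
    return [sum(v[d] for v in per_node) / num_nodes for d in range(dim)]

h1 = [[1.0, 2.0], [3.0, 4.0]]   # toy layer-1 outputs for 2 nodes
h2 = [[0.0, 0.0], [2.0, 2.0]]   # toy layer-2 outputs
h3 = [[1.0, 1.0], [1.0, 1.0]]   # toy layer-3 outputs
embedding = readout([h1, h2, h3])  # length 6 = 3 layers x 2 features
```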
as a carefully chosen technical solution, the convolutional neural network forward propagation module selects Densenet201 at the CNN network layer. The design concept of DenseNet201 is to make deep neural networks easier to train and use. The network architecture is characterized by a dense connection (Densely Connected) by connecting the outputs of the previous layer with the inputs of the current layer. The connection mode can promote the flow of information and increase the transmission of gradients, thereby improving the precision and stability of the network. DenseNet201 comprises 201 layers of convolution layers and full connection layers, and has a large parameter. Also, the DenseNet201 also adds the residual structure and batch normalization (Batch Normalization) of BN-ReLU-Conv, improving the convergence speed and accuracy of the network. In addition, the DenseNet201 uses global averaging pooling (Global Average Pooling) at the last layer for feature extraction.
As a preferred technical solution, the feature fusion module fuses the feature maps obtained from the GNN branch with those obtained from the CNN branch, so that the two branches play complementary roles in the same task and yield a performance improvement. During model training, Focal loss and cross-entropy loss are used jointly to penalize the network, with different weights assigned to the two loss functions, where FLoss denotes the Focal loss and CELoss denotes the cross-entropy loss. In particular, to make the graph neural network branch focus more on learning difficult samples during training, higher weights are set for difficult samples in the Focal loss:
Loss = CELoss + α·FLoss.
example two
As shown in fig. 3, in this embodiment, a method for intelligently classifying pathological images based on a dual-branch fusion network is provided, including:
s1, two representations of pathological images are acquired, wherein the two representations comprise pathological image data structure acquisition, pathological image compression with fixed size and provide rich information for accurate classification of the pathological images;
s2, extracting pathological image features, including graph feature extraction and deep convolution feature extraction;
and S3, feature fusion, namely fusing the graph features with the convolution features, and classifying by utilizing all the obtained features.
Further, the specific steps of acquiring the two representations of the pathological image are as follows:
s1.1, constructing a graph, namely cutting pathological images into small picture blocks with fixed sizes in a non-overlapping mode, wherein each small picture block is used as a graph node in graph data, and edges among the nodes are determined in an eight-neighbor mode according to the relative position relation among the picture blocks. The initial feature of the node uses the depth feature of the image extracted by the convolutional neural pre-trained by the ImageNet, which corresponds to the image, as the initial feature of the node.
S1.2, compression: the HE-stained pathological image is compressed to an image of fixed size; at the same time, a pathological image with standard staining is selected, and the compressed fixed-size image is colour-normalized to that standard colour distribution;
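The colour normalization step can be sketched per channel as follows (a Reinhard-style mean/standard-deviation match; a deliberate simplification, since practical HE stain normalization usually operates in a Lab or stain-deconvolved colour space rather than raw channels):

```python
# Per-channel colour normalization: shift and scale each channel of the
# source image so its mean and standard deviation match those of a
# reference (standard-stain) image.

def channel_stats(values):
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / len(values)
    return mean, var ** 0.5

def normalize_channel(src, ref):
    s_mean, s_std = channel_stats(src)
    r_mean, r_std = channel_stats(ref)
    scale = r_std / s_std if s_std else 1.0   # avoid divide-by-zero
    return [(v - s_mean) * scale + r_mean for v in src]

src = [10.0, 20.0, 30.0]       # toy source-channel intensities
ref = [100.0, 110.0, 120.0]    # toy reference-channel intensities
out = normalize_channel(src, ref)
```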
further, in step S2, the step of extracting the pathological image features specifically includes:
s2.1, drawing feature extraction, wherein a drawing isomorphic network of three layers is used as the drawing feature extraction network;
s2.2, deep convolution feature extraction, wherein the compressed fixed-size pathological image is used for extracting high-level semantic information related to pathological image categories by using Densenet201.
The present application is not limited to the above-mentioned embodiments, and any changes or substitutions that can be easily understood by those skilled in the art within the technical scope of the present application are intended to be included in the scope of the present application. Therefore, the protection scope of the present application should be subject to the protection scope of the claims.
Claims (7)
1. A pathology image intelligent classification system based on a double-branch fusion network is characterized by comprising:
the image preprocessing module is used for preprocessing the pathological image to obtain a graph data structure of the pathological image and a fixed-size pathological image;
the feature extraction module is connected with the image preprocessing module and is used for extracting features from the graph data structure of the pathological image and from the fixed-size pathological image to obtain graph features containing pathological image category information and deep convolutional features containing pathological image category information;
and the category prediction module is connected with the feature extraction module and is used for carrying out feature fusion on the graph features containing the category information of the pathological image and the depth convolution features containing the category information of the pathological image to obtain a final classification result of the pathological image.
2. The intelligent classification system of pathology images based on a dual-branch fusion network according to claim 1, wherein the image preprocessing module comprises:
the graph construction module is used for cutting the pathological image to obtain a graph data structure of the pathological image;
and the compression module is used for processing the pathological image data with different sizes into pathological images with fixed sizes.
3. The intelligent classification system of pathology images based on a dual-branch fusion network according to claim 1, wherein the feature extraction module comprises:
the graph feature extraction module is used for extracting features from the graph data structure of the pathological image to obtain graph features containing pathological image category information;
and the convolution feature extraction module is used for carrying out feature extraction on the pathology image with the fixed size to obtain the depth convolution feature containing the pathology image category information.
4. The intelligent classification system of pathology images based on a dual-branch fusion network according to claim 3, wherein the graph feature extraction module comprises:
the aggregation unit is used for acquiring neighbor information of the corresponding node;
and the updating unit is used for updating the information.
5. An intelligent pathological image classification method based on a dual-branch fusion network, characterized by comprising: performing image preprocessing on the pathological image to obtain a graph data structure of the pathological image and a fixed-size pathological image;
extracting features of the graph data structure of the pathological image and the pathological image with a fixed size to obtain graph features containing pathological image category information and depth convolution features containing pathological image category information;
and carrying out feature fusion on the graph features containing the pathological image category information and the depth convolution features containing the pathological image category information to obtain a final classification result of the pathological image.
6. The intelligent classification method of pathology images based on a dual-branch fusion network according to claim 5, wherein the process of obtaining the graph data structure of pathology images and pathology images of a fixed size comprises:
cutting the pathological image to obtain a graph data structure of the pathological image;
the pathology image data of different sizes are processed into pathology images of a fixed size.
7. The intelligent classification method of pathology image based on the dual-branch fusion network according to claim 5, wherein the process of obtaining the graph features containing the pathology image category information and the deep convolution features containing the pathology image category information comprises:
extracting features from the graph data structure of the pathological image to obtain graph features containing pathological image category information;
and extracting features of the pathology image with the fixed size to obtain the depth convolution features containing the pathology image category information.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311020950.6A CN117036811A (en) | 2023-08-14 | 2023-08-14 | Intelligent pathological image classification system and method based on double-branch fusion network |
Publications (1)
Publication Number | Publication Date |
---|---|
CN117036811A true CN117036811A (en) | 2023-11-10 |
Family
ID=88644443
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113034462A (en) * | 2021-03-22 | 2021-06-25 | 福州大学 | Method and system for processing gastric cancer pathological section image based on graph convolution |
CN113469119A (en) * | 2021-07-20 | 2021-10-01 | 合肥工业大学 | Cervical cell image classification method based on visual converter and graph convolution network |
WO2021196632A1 (en) * | 2020-03-30 | 2021-10-07 | 中国科学院深圳先进技术研究院 | Intelligent analysis system and method for panoramic digital pathological image |
CN113674252A (en) * | 2021-08-25 | 2021-11-19 | 上海鹏冠生物医药科技有限公司 | Histopathology image diagnosis system based on graph neural network |
CN116012353A (en) * | 2023-02-07 | 2023-04-25 | 中国科学院重庆绿色智能技术研究院 | Digital pathological tissue image recognition method based on graph convolution neural network |
Non-Patent Citations (2)
Title |
---|
GABRIELE CORSO et al.: "Principal Neighbourhood Aggregation for Graph Nets", arXiv:2004.05718v5, 31 December 2020, pages 1-19 *
CHENG Zhaoxue et al.: "Lung nodule segmentation model with enhanced edge features", Computer Engineering and Applications, vol. 59, no. 24, 3 January 2023, pages 185-195 *
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||