CN112750115A - Multi-modal cervical carcinoma pre-lesion image recognition method based on graph neural network - Google Patents
Multi-modal cervical carcinoma pre-lesion image recognition method based on graph neural network
- Publication number
- CN112750115A (application CN202110054445.8A)
- Authority
- CN
- China
- Prior art keywords
- image
- neural network
- feature
- attention
- images
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10068—Endoscopic image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
Abstract
The invention discloses a multi-modal cervical precancerous lesion image recognition method based on a graph convolutional neural network. First, colposcopy images are collected, and each patient's images are labeled according to the pathological diagnosis report. A neural network, BG-CNN, is then constructed, comprising a feature extraction network formed by two ResNet18 backbones and a relation extraction network formed by a graph convolutional neural network. The labeled colposcopy images are fed into the constructed BG-CNN for training with a cross-entropy loss function, yielding a trained model. The disclosed method incorporates physicians' clinical diagnostic experience by combining two image types, acetic acid experiment images and iodine experiment images, and integrates a convolutional network with a graph convolutional neural network, so that the associated information between the two image types of the same patient is learned automatically and the colposcopy images are discriminated jointly, markedly improving recognition accuracy.
Description
Technical Field
The invention belongs to the field of medical imaging and relates to a deep-learning method for identifying whether a cervical image captured under a colposcope shows a precancerous lesion.
Background
In current clinical practice in China, cervical cancer screening determines by biopsy whether a subject has cervical cancer or a cervical precancerous lesion. Cervical precancerous lesions are divided into two grades, LSIL and HSIL. HSIL, a high-grade precancerous lesion, requires timely resection to prevent further progression into cervical cancer. LSIL, a low-grade precancerous lesion, requires only conservative treatment and can return to normal through the patient's own immune system, helped by improved lifestyle and hygiene. Colposcopy is the precancerous screening approach used by most hospitals: the cervix is examined under 3 to 7 times magnification, and a 3% dilute acetic acid solution and Lugol's iodine solution are applied to make abnormal pathological tissue stand out clearly from normal tissue. However, although the colposcopic procedure itself places relatively low demands on the operator, the accuracy of the clinical examination depends to a large extent on the physician's subjective judgment; even some experienced physicians achieve a clinical specificity of only 48%. A scientific, accurate, and fast colposcopic diagnosis method would therefore ease the awkward clinical reliance on large numbers of highly experienced physicians.
Some deep-learning methods already exist for screening and analyzing cervical disease images. However, most target only the images taken after the acetic acid experiment, ignoring the fact that clinicians combine images obtained with different experimental reagents for joint diagnosis.
The present method differs from other work on identifying cervical precancerous lesion images under the colposcope. It incorporates physicians' clinical diagnostic experience by combining two image types, acetic acid experiment images and iodine experiment images, and integrates a convolutional network with a graph convolutional neural network, so that the associated information in the two image types of the same patient is learned automatically and the colposcopy images are discriminated jointly, markedly improving recognition accuracy.
Disclosure of Invention
To address the shortcomings of the prior art, fully combine the advantages of convolutional neural networks and graph neural networks, and improve the diagnostic accuracy of cervical precancerous lesions under the colposcope, the invention provides a multi-modal cervical precancerous lesion image recognition method based on a graph convolutional neural network.
The proposed neural network BG-CNN consists of two main parts: a feature extraction network formed by two ResNet18 backbones, and a relation extraction network formed by a graph convolutional neural network. The technical scheme comprises the following steps:
step one, collecting colposcopy images, and labeling the colposcopy images of each patient according to a pathological diagnosis report.
Step two, constructing two 18-layer ResNets as feature extraction networks to extract the feature map Fa of the acetic acid experiment image and the feature map Fb of the iodine experiment image, respectively; then passing each through its own attention module to generate two attention maps, Aa and Ab;
Step three, using attention-based bilinear pooling (Bilinear Attention Pooling) to combine each feature map extracted in step two with its attention map, generating one two-dimensional feature matrix for the acetic acid experiment image and one for the iodine experiment image.
Step four, taking each row of the feature matrices from step three as a node of a graph, and using the K-nearest-neighbour algorithm to find the K nodes closest in Euclidean distance to each node to construct an adjacency matrix. A two-layer graph convolutional neural network then aggregates features between nodes. Finally, the final feature representation is obtained by element-wise multiplication of the features learned by the graph convolution layers with the features learned by ResNet18.
Step five, inputting the labeled colposcopy images obtained in step one into the neural network BG-CNN constructed in steps two to four for training, setting the loss function to a cross-entropy loss function, and finally obtaining a trained model. Colposcopy images are then input into the trained model for image detection.
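As a concrete illustration of the loss set in step five, the sketch below computes a two-class cross-entropy over softmax probabilities in plain numpy. The toy logits and labels are invented for the example; a real implementation would use a deep learning framework's built-in loss.

```python
import numpy as np

def cross_entropy(logits: np.ndarray, labels: np.ndarray) -> float:
    """Mean cross-entropy of integer class labels given raw two-class scores."""
    z = logits - logits.max(axis=1, keepdims=True)        # numerical stability
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return float(-log_probs[np.arange(len(labels)), labels].mean())

logits = np.array([[2.0, 0.0],    # hypothetical BG-CNN scores, patient 1
                   [0.0, 3.0]])   # patient 2
labels = np.array([0, 1])         # 0 = LSIL-, 1 = HSIL+
loss = cross_entropy(logits, labels)
print(loss)
```

Minimizing this quantity over the labeled training images is what "setting the loss function as a cross-entropy loss function" amounts to in practice.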
The invention has the following beneficial effects:
the method disclosed by the invention fully combines clinical diagnosis experience of doctors, combines two types of images of acetic acid experimental images and iodine experimental images, and integrates two methods of a convolution network and a graph convolution neural network, so that the associated information in the two types of images of the same patient can be automatically learned, and the colposcope images can be jointly distinguished. The identification precision is obviously improved.
Drawings
FIG. 1 is a diagram of a neural network BG-CNN according to the present invention.
Detailed Description
The method of the present invention is further described below with reference to the accompanying drawings.
The specific operation of labeling the colposcopy images in step one is as follows:
According to the pathological diagnosis report, each patient is classified into one of two categories, LSIL- and HSIL+. LSIL- comprises normal subjects and patients with low-grade squamous intraepithelial lesions (LSIL); HSIL+ comprises patients with high-grade squamous intraepithelial lesions (HSIL) and cervical cancer patients. One acetic acid experiment image and one iodine experiment image are taken from each patient's colposcopy images as the training set.
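The labeling rule above can be sketched as a small mapping. The diagnosis strings are hypothetical stand-ins for whatever categories the pathological diagnosis reports actually use.

```python
def label_patient(diagnosis: str) -> int:
    """Return 0 for LSIL- (normal or LSIL), 1 for HSIL+ (HSIL or cancer)."""
    lsil_minus = {"normal", "LSIL"}           # low-grade group
    hsil_plus = {"HSIL", "cervical cancer"}   # high-grade group
    if diagnosis in lsil_minus:
        return 0
    if diagnosis in hsil_plus:
        return 1
    raise ValueError(f"unrecognized diagnosis: {diagnosis}")
```

This binary split mirrors the clinical decision boundary: HSIL+ cases need intervention, LSIL- cases do not.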
The detailed process of obtaining the feature map and attention map of each image in step two is as follows:
2-1. Define the acetic acid experiment image input to the neural network as X and the iodine experiment image as Y, and denote the ResNet18 feature extractor as Res(·). The acetic acid image feature map and the iodine experiment image feature map can then be expressed as follows:
Fa = Res(X)
Fb = Res(Y)
2-2. The attention module consists of a CNN layer with 3 × 3 convolution kernels and a BN layer, and is denoted Atten(·). The attention maps generated from the feature maps of the acetic acid and iodine experiment images can be expressed as follows:
Aa = relu(Atten(Fa))
Ab = relu(Atten(Fb))
Each attention map A = [a1, a2, ..., am], whose dimension m is determined by the number of convolution kernels of the CNN layer; relu(·) is an activation function.
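A minimal numpy sketch of the attention module Atten(·): the text specifies a 3 × 3 convolution followed by batch normalization, but for brevity this stand-in uses a 1 × 1 convolution (pure channel mixing) and per-channel normalization. Only the shapes and the m-channel output are being illustrated; all sizes are toy assumptions.

```python
import numpy as np

def atten(F: np.ndarray, W: np.ndarray) -> np.ndarray:
    """1x1-conv stand-in for Atten(.): channel mixing + normalization + relu."""
    A = np.einsum("mc,chw->mhw", W, F)             # (m, H, W) channel mixing
    mu = A.mean(axis=(1, 2), keepdims=True)        # per-channel statistics,
    sd = A.std(axis=(1, 2), keepdims=True) + 1e-5  # standing in for BN
    return np.maximum((A - mu) / sd, 0.0)          # relu activation

rng = np.random.default_rng(0)
F_a = rng.standard_normal((64, 8, 8))     # toy acetic-acid feature map, C=64
W = rng.standard_normal((32, 64)) * 0.1   # m=32 attention channels
A_a = atten(F_a, W)
print(A_a.shape)  # one attention map per output channel: (32, 8, 8)
```

The number of rows of W plays the role of the number of convolution kernels, which fixes the attention-map dimension m as stated above.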
The detailed process of generating the two-dimensional feature matrix from each feature map via Bilinear Attention Pooling in step three is as follows:
In the attention maps Aa and Ab, each sub-vector ai can be regarded as attending to one part of the corresponding acetic acid or iodine experiment image, so the feature of each part can be obtained by element-wise dot multiplication of the feature map F with the i-th attention sub-vector ai, followed by global average pooling. The formula can be expressed as follows:
fi = Γ(ai, F) = g(ai ⊙ F)
where Γ(·) denotes Bilinear Attention Pooling, ⊙ denotes element-wise dot multiplication, ai is the i-th sub-vector of the attention map, and g(·) is the global average pooling function.
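The pooling formula above can be sketched directly: each part feature fi is the global average of the element-wise product of the i-th attention map with the feature map F, and stacking the fi gives the two-dimensional feature matrix of step three. All shapes here are toy assumptions.

```python
import numpy as np

def bilinear_attention_pooling(F: np.ndarray, A: np.ndarray) -> np.ndarray:
    """Stack f_i = g(a_i ⊙ F) over the m attention sub-vectors."""
    feats = [(F * A[i]).mean(axis=(1, 2))  # element-wise product, then GAP
             for i in range(A.shape[0])]
    return np.stack(feats)                 # (m, C) two-dimensional matrix

rng = np.random.default_rng(1)
F = rng.standard_normal((64, 8, 8))          # toy feature map, C=64
A = np.abs(rng.standard_normal((32, 8, 8)))  # toy attention maps, m=32
M = bilinear_attention_pooling(F, A)
print(M.shape)  # (32, 64): one row per image part
```

Each row of the result is one graph node in step four, which is why the K-nearest-neighbour construction below operates on m vectors per modality.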
The specific process of constructing the adjacency matrix with the K-nearest-neighbour algorithm and performing relation aggregation with graph convolution in step four is as follows:
4-1. From step three, m feature vectors are obtained for the acetic acid experiment image and m for the iodine experiment image, giving 2m feature vectors in total. Taking each vector as a node of the graph, the adjacency matrix K of the whole graph structure is defined as follows:
Kij = 1 if fj ∈ KNN(fi), otherwise Kij = 0
where KNN(fi) denotes the K nearest neighbours of node fi in Euclidean distance, and fj denotes any node other than fi.
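A sketch of the adjacency construction in 4-1, assuming each node excludes itself from its own neighbour list (the text does not say so explicitly). The node features are toy 2-D points rather than the pooled image features.

```python
import numpy as np

def knn_adjacency(nodes: np.ndarray, k: int) -> np.ndarray:
    """Adjacency K with K[i, j] = 1 iff f_j is among the k nearest to f_i."""
    d = np.linalg.norm(nodes[:, None, :] - nodes[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)            # a node is not its own neighbour
    K = np.zeros(d.shape, dtype=int)
    for i in range(len(nodes)):
        K[i, np.argsort(d[i])[:k]] = 1     # mark the k closest nodes
    return K

# two tight clusters of toy 2-D node features
nodes = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
K = knn_adjacency(nodes, k=1)
print(K)  # each node linked to its single nearest neighbour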
4-2. Define a node set V and an edge set E. The node set contains the 2m nodes, i.e., fi ∈ V; when the adjacency matrix entry Kij = 1, then (fi, fj) ∈ E, i.e., there is an edge between fi and fj, which yields the complete graph G = (V, E). Graph G is then fed into a two-layer graph convolutional neural network to aggregate the features of neighbouring nodes. The update formula for each node is as follows:
H(l+1) = relu(K̂ H(l) W(l))
where H(l) denotes the node feature representation of the l-th layer, with H(0) = [f1, f2, ..., f2m]; K̂ is the normalized adjacency matrix K; and W(l) is the learnable weight matrix of the l-th layer.
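The node update can be sketched as follows. Since the text only says the adjacency matrix is normalized, the symmetric D^(-1/2)(K+I)D^(-1/2) form common to graph convolutional networks is assumed here; all sizes are toy assumptions.

```python
import numpy as np

def normalize_adj(K: np.ndarray) -> np.ndarray:
    """Assumed K-hat: symmetric normalization D^-1/2 (K + I) D^-1/2."""
    K_self = K + np.eye(K.shape[0])        # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(K_self.sum(axis=1))
    return K_self * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def gcn_two_layers(H0, K, W0, W1):
    K_hat = normalize_adj(K)
    H1 = np.maximum(K_hat @ H0 @ W0, 0.0)  # layer 1: aggregate + relu
    return np.maximum(K_hat @ H1 @ W1, 0.0)  # layer 2

rng = np.random.default_rng(2)
H0 = rng.standard_normal((8, 16))          # 2m = 8 nodes, 16-dim features
K = (rng.random((8, 8)) > 0.6).astype(float)
K = np.maximum(K, K.T)                     # symmetrize the toy adjacency
W0 = rng.standard_normal((16, 16)) * 0.1
W1 = rng.standard_normal((16, 16)) * 0.1
H2 = gcn_two_layers(H0, K, W0, W1)
print(H2.shape)  # (8, 16)
```

Two layers mean each node's final representation mixes information from neighbours up to two hops away, which is what lets features of the acetic acid nodes and iodine nodes influence each other.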
4-3. The final relation feature is expressed as follows:
y = H(2) ⊙ [Fa, Fb]
where H(2) is the feature representation learned by the two-layer graph convolutional network, and Fa and Fb are the feature representations of the acetic acid and iodine experiment images obtained from the ResNet18 network.
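A sketch of the final fusion. For the element-wise product to be well defined, the pooled m × C part-feature matrices of the two modalities (rather than the raw ResNet18 feature maps) are assumed here to be what [Fa, Fb] stacks; all sizes are toy assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
m, C = 4, 16
Fa = rng.standard_normal((m, C))   # pooled acetic-acid part features
Fb = rng.standard_normal((m, C))   # pooled iodine part features
H2 = rng.standard_normal((2 * m, C))          # two-layer GCN output
y = H2 * np.concatenate([Fa, Fb], axis=0)     # y = H2 ⊙ [Fa, Fb]
print(y.shape)  # (8, 16): final relation feature
```

The product acts as a gate: the relation features learned by the graph network reweight the original part features before classification.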
In step five, the labeled colposcopy images obtained in step one are input into the neural network BG-CNN constructed in steps two to four for training, with the loss function set to a cross-entropy loss function, finally yielding the trained model. Colposcopy images are then input into the trained model for image detection.
Claims (5)
1. A multi-modal cervical precancerous lesion image recognition method based on a graph convolutional neural network, characterized by comprising the following steps:
step one, collecting colposcopy images, and labeling the colposcopy images of each patient according to a pathological diagnosis report;
step two, constructing two 18-layer ResNets as feature extraction networks to extract the feature map Fa of the acetic acid experiment image and the feature map Fb of the iodine experiment image, respectively; then passing each through its own attention module to generate two attention maps, Aa and Ab;
step three, using attention-based bilinear pooling (Bilinear Attention Pooling) to combine each feature map extracted in step two with its attention map, generating one two-dimensional feature matrix for the acetic acid experiment image and one for the iodine experiment image;
step four, taking each row of the feature matrices from step three as a node of a graph, and using the K-nearest-neighbour algorithm to find the K nodes closest in Euclidean distance to each node to construct an adjacency matrix; then using a two-layer graph convolutional neural network to aggregate features between nodes; finally obtaining the final feature representation by element-wise multiplication of the features learned by the graph convolution layers with the features learned by ResNet18;
step five, inputting the labeled colposcopy images obtained in step one into the neural network BG-CNN constructed in steps two to four for training, setting the loss function to a cross-entropy loss function, and finally obtaining a trained model; and inputting colposcopy images into the trained model for image detection.
2. The multi-modal cervical precancerous lesion image recognition method based on a graph convolutional neural network according to claim 1, characterized in that the specific operation of labeling the colposcopy images in step one is as follows:
each patient is classified into one of two categories, LSIL- and HSIL+; LSIL- comprises normal subjects and patients with low-grade squamous intraepithelial lesions (LSIL), and HSIL+ comprises patients with high-grade squamous intraepithelial lesions (HSIL) and cervical cancer patients; one acetic acid experiment image and one iodine experiment image are taken from each patient's colposcopy images as the training set.
3. The multi-modal cervical precancerous lesion image recognition method based on a graph convolutional neural network according to claim 2, characterized in that the detailed process of obtaining the feature map and attention map of each image in step two is as follows:
2-1. define the acetic acid experiment image input to the neural network as X and the iodine experiment image as Y, and denote the ResNet18 feature extractor as Res(·); the acetic acid image feature map and the iodine experiment image feature map can then be expressed as follows:
Fa = Res(X)
Fb = Res(Y)
2-2. the attention module consists of a CNN layer with 3 × 3 convolution kernels and a BN layer, and is denoted Atten(·); the attention maps generated from the feature maps of the acetic acid and iodine experiment images can be expressed as follows:
Aa = relu(Atten(Fa))
Ab = relu(Atten(Fb))
each attention map A = [a1, a2, ..., am], whose dimension m is determined by the number of convolution kernels of the CNN layer; relu(·) is an activation function.
4. The multi-modal cervical precancerous lesion image recognition method based on a graph convolutional neural network according to claim 3, characterized in that the detailed process of generating the two-dimensional feature matrix from each feature map via Bilinear Attention Pooling in step three is as follows:
in the attention maps Aa and Ab, each sub-vector ai can be regarded as attending to one part of the corresponding acetic acid or iodine experiment image, so the feature of each part can be obtained by element-wise dot multiplication of the feature map F with the i-th attention sub-vector ai, followed by global average pooling; the formula can be expressed as follows:
fi = Γ(ai, F) = g(ai ⊙ F)
where Γ(·) denotes Bilinear Attention Pooling, ⊙ denotes element-wise dot multiplication, ai is the i-th sub-vector of the attention map, and g(·) is the global average pooling function.
5. The multi-modal cervical precancerous lesion image recognition method based on a graph convolutional neural network according to claim 4, characterized in that the specific process of constructing the adjacency matrix with the K-nearest-neighbour algorithm and performing relation aggregation with graph convolution in step four is as follows:
4-1. from step three, m feature vectors are obtained for the acetic acid experiment image and m for the iodine experiment image, giving 2m feature vectors in total; taking each vector as a node of the graph, the adjacency matrix K of the whole graph structure is defined as follows:
Kij = 1 if fj ∈ KNN(fi), otherwise Kij = 0
where KNN(fi) denotes the K nearest neighbours of node fi in Euclidean distance, and fj denotes any node other than fi;
4-2. define a node set V and an edge set E; the node set contains the 2m nodes, i.e., fi ∈ V; when the adjacency matrix entry Kij = 1, then (fi, fj) ∈ E, i.e., there is an edge between fi and fj, which yields the complete graph G = (V, E); graph G is then fed into a two-layer graph convolutional neural network to aggregate the features of neighbouring nodes; the update formula for each node is as follows:
H(l+1) = relu(K̂ H(l) W(l))
where H(l) denotes the node feature representation of the l-th layer, with H(0) = [f1, f2, ..., f2m]; K̂ is the normalized adjacency matrix K; and W(l) is the learnable weight matrix of the l-th layer;
4-3. the final relation feature is expressed as follows:
y = H(2) ⊙ [Fa, Fb]
where H(2) is the feature representation learned by the two-layer graph convolutional network, and Fa and Fb are the feature representations of the acetic acid and iodine experiment images obtained from the ResNet18 network.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110054445.8A CN112750115A (en) | 2021-01-15 | 2021-01-15 | Multi-modal cervical carcinoma pre-lesion image recognition method based on graph neural network |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110054445.8A CN112750115A (en) | 2021-01-15 | 2021-01-15 | Multi-modal cervical carcinoma pre-lesion image recognition method based on graph neural network |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112750115A true CN112750115A (en) | 2021-05-04 |
Family
ID=75652104
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110054445.8A Pending CN112750115A (en) | 2021-01-15 | 2021-01-15 | Multi-modal cervical carcinoma pre-lesion image recognition method based on graph neural network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112750115A (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113223723A (en) * | 2021-05-11 | 2021-08-06 | 胡敏雄 | Method for predicting multi-modal kidney tumor kidney protection operation difficulty and complications |
CN113469119A (en) * | 2021-07-20 | 2021-10-01 | 合肥工业大学 | Cervical cell image classification method based on visual converter and graph convolution network |
CN113591629A (en) * | 2021-07-16 | 2021-11-02 | 深圳职业技术学院 | Finger three-mode fusion recognition method, system, device and storage medium |
CN114841970A (en) * | 2022-05-09 | 2022-08-02 | 北京字节跳动网络技术有限公司 | Inspection image recognition method and device, readable medium and electronic equipment |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108388841A (en) * | 2018-01-30 | 2018-08-10 | 浙江大学 | Cervical biopsy area recognizing method and device based on multiple features deep neural network |
CN109543719A (en) * | 2018-10-30 | 2019-03-29 | 浙江大学 | Uterine neck atypia lesion diagnostic model and device based on multi-modal attention model |
CN109859159A (en) * | 2018-11-28 | 2019-06-07 | 浙江大学 | A kind of cervical lesions region segmentation method and device based on multi-modal segmentation network |
CN110826576A (en) * | 2019-10-10 | 2020-02-21 | 浙江大学 | Cervical lesion prediction system based on multi-mode feature level fusion |
US10692602B1 (en) * | 2017-09-18 | 2020-06-23 | Deeptradiology, Inc. | Structuring free text medical reports with forced taxonomies |
CN111738113A (en) * | 2020-06-10 | 2020-10-02 | 杭州电子科技大学 | Road extraction method of high-resolution remote sensing image based on double-attention machine system and semantic constraint |
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10692602B1 (en) * | 2017-09-18 | 2020-06-23 | Deeptradiology, Inc. | Structuring free text medical reports with forced taxonomies |
CN108388841A (en) * | 2018-01-30 | 2018-08-10 | 浙江大学 | Cervical biopsy area recognizing method and device based on multiple features deep neural network |
CN109543719A (en) * | 2018-10-30 | 2019-03-29 | 浙江大学 | Uterine neck atypia lesion diagnostic model and device based on multi-modal attention model |
CN109859159A (en) * | 2018-11-28 | 2019-06-07 | 浙江大学 | A kind of cervical lesions region segmentation method and device based on multi-modal segmentation network |
CN110826576A (en) * | 2019-10-10 | 2020-02-21 | 浙江大学 | Cervical lesion prediction system based on multi-mode feature level fusion |
CN111738113A (en) * | 2020-06-10 | 2020-10-02 | 杭州电子科技大学 | Road extraction method of high-resolution remote sensing image based on double-attention machine system and semantic constraint |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113223723A (en) * | 2021-05-11 | 2021-08-06 | 胡敏雄 | Method for predicting multi-modal kidney tumor kidney protection operation difficulty and complications |
CN113223723B (en) * | 2021-05-11 | 2023-08-25 | 福建省立医院 | Method for predicting difficulty and complications of kidney-protecting operation of multi-mode kidney tumor |
CN113591629A (en) * | 2021-07-16 | 2021-11-02 | 深圳职业技术学院 | Finger three-mode fusion recognition method, system, device and storage medium |
CN113591629B (en) * | 2021-07-16 | 2023-06-27 | 深圳职业技术学院 | Finger tri-modal fusion recognition method, system, device and storage medium |
CN113469119A (en) * | 2021-07-20 | 2021-10-01 | 合肥工业大学 | Cervical cell image classification method based on visual converter and graph convolution network |
CN113469119B (en) * | 2021-07-20 | 2022-10-04 | 合肥工业大学 | Cervical cell image classification method based on visual converter and image convolution network |
CN114841970A (en) * | 2022-05-09 | 2022-08-02 | 北京字节跳动网络技术有限公司 | Inspection image recognition method and device, readable medium and electronic equipment |
CN114841970B (en) * | 2022-05-09 | 2023-07-18 | 抖音视界有限公司 | Identification method and device for inspection image, readable medium and electronic equipment |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112750115A (en) | Multi-modal cervical carcinoma pre-lesion image recognition method based on graph neural network | |
AU2019311336B2 (en) | Computer classification of biological tissue | |
WO2018120942A1 (en) | System and method for automatically detecting lesions in medical image by means of multi-model fusion | |
CN111429407B (en) | Chest X-ray disease detection device and method based on double-channel separation network | |
CN109636805B (en) | Cervical image lesion area segmentation device and method based on classification prior | |
CN108388841B (en) | Cervical biopsy region identification method and device based on multi-feature deep neural network | |
Li et al. | Lesion-attention pyramid network for diabetic retinopathy grading | |
CN109102491A (en) | A kind of gastroscope image automated collection systems and method | |
CN106056595A (en) | Method for automatically identifying whether thyroid nodule is benign or malignant based on deep convolutional neural network | |
Luo et al. | Retinal image classification by self-supervised fuzzy clustering network | |
CN110826576B (en) | Cervical lesion prediction system based on multi-mode feature level fusion | |
CN113344864A (en) | Ultrasonic thyroid nodule benign and malignant prediction method based on deep learning | |
Lei et al. | Automated detection of retinopathy of prematurity by deep attention network | |
CN113610118A (en) | Fundus image classification method, device, equipment and medium based on multitask course learning | |
CN115760835A (en) | Medical image classification method of graph convolution network | |
Ay et al. | Automated classification of nasal polyps in endoscopy video-frames using handcrafted and CNN features | |
Noor et al. | GastroNet: A robust attention‐based deep learning and cosine similarity feature selection framework for gastrointestinal disease classification from endoscopic images | |
KR102407248B1 (en) | Deep Learning based Gastric Classification System using Data Augmentation and Image Segmentation | |
Xue et al. | CT-based COPD identification using multiple instance learning with two-stage attention | |
CN116664911A (en) | Breast tumor image classification method based on interpretable deep learning | |
CN112419246A (en) | Depth detection network for quantifying esophageal mucosa IPCLs blood vessel morphological distribution | |
CN110277166B (en) | Auxiliary diagnosis system and method for palace laparoscope | |
CN111768845B (en) | Pulmonary nodule auxiliary detection method based on optimal multi-scale perception | |
Song et al. | Classification of cervical lesion images based on CNN and transfer learning | |
Li et al. | Multi-source data fusion for recognition of cervical precancerous lesions |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
CB03 | Change of inventor or designer information | ||
CB03 | Change of inventor or designer information |
Inventor after: Yan Ling; Shen Xingfa; Li Shufeng; Zhao Qingbiao; Liu Lili
Inventor before: Shen Xingfa; Li Shufeng; Yan Ling; Zhao Qingbiao; Liu Lili