CN116129200A - Bronchoscope image benign and malignant focus classification device based on deep learning - Google Patents
Bronchoscope image benign and malignant focus classification device based on deep learning
- Publication number: CN116129200A
- Application number: CN202310406196.3A
- Authority: CN (China)
- Legal status: Pending
Classifications
- G06V10/764 — Image or video recognition using pattern recognition or machine learning, using classification, e.g. of video objects
- G06V10/20 — Image preprocessing
- G06V10/40 — Extraction of image or video features
- G06V10/778 — Active pattern-learning, e.g. online learning of image or video features
- G06V10/811 — Fusion of classification results, the classifiers operating on different input data, e.g. multi-modal recognition
- G06V10/82 — Image or video recognition using neural networks
- G06V2201/03 — Recognition of patterns in medical or anatomical images
Abstract
The invention discloses a deep-learning-based device for classifying benign and malignant lesions in bronchoscope images, comprising: an image acquisition module for acquiring and preprocessing data; an image processing module for self-supervised pre-training of the model's image encoder, improving its ability to extract deep abstract features; an image feature extraction module for extracting image features with the pre-trained encoder and converting clinical data to a matching dimension; and an image classification module for fusing image features and clinical features through a multi-modal spatial attention module and classifying lesions as benign or malignant. Compared with traditional radiomics methods, the device is more efficient and more accurate, and can serve as a clinical reference for doctors to rapidly distinguish benign from malignant lesions in bronchoscope images.
Description
Technical Field
The invention relates to the field of biomedical technology, and in particular to a deep-learning-based device for classifying benign and malignant lesions in bronchoscope images.
Background
Because early symptoms of lung cancer lack specificity, most patients have already reached an advanced stage by the time the disease is found, which is a major cause of the high mortality of lung cancer. Improving the detection rate of lung cancer and refining its detection methods therefore facilitates early diagnosis and treatment and can effectively reduce mortality.
Currently, few algorithms exist for classifying benign and malignant lesions in bronchoscopy images, and most are based on traditional machine learning. These methods extract features manually, screen them, and finally classify them by modeling with a machine-learning algorithm. The workflow is complex and cannot capture high-level abstract features of bronchoscope images, so classification accuracy is limited. To help doctors better identify lung cancer patients in the clinic, a simple and highly accurate method for classifying benign and malignant lesions in bronchoscope images is needed.
Disclosure of Invention
The invention aims to provide a deep-learning-based device for classifying benign and malignant lesions in bronchoscope images. The device pre-trains the model encoder in a self-supervised manner so that deep abstract features of bronchoscope images can be fully extracted, and it improves classification by incorporating clinical information. The network architecture makes full use of multiple types of data, achieves better results than traditional methods, and simplifies their complex workflow. It offers a new approach to classifying benign and malignant lesions in bronchoscope images, produces highly accurate predictions, and can serve as a clinical reference for doctors to rapidly distinguish benign from malignant lesions.
In order to achieve the above purpose, the present invention adopts the following technical scheme:
a bronchoscope image benign and malignant lesion classification device based on deep learning, comprising:
the image acquisition module is used for acquiring data and preprocessing the data;
the image processing module is used for performing self-supervision pre-training on the model image encoder and improving the capability of the model for extracting deep abstract features;
the image feature extraction module is used for extracting image features through the pre-trained encoder and performing dimension conversion on clinical data;
and the image classification module is used for fusing the image characteristics and the clinical characteristics through the multi-mode spatial attention mechanism module and classifying benign and malignant lesions.
Preferably, the image acquisition module is configured to implement the following steps:
s11, acquiring a bronchoscope image of a patient and corresponding clinical data;
S12, center-cropping the patient bronchoscope image, uniformly resizing it to 224×224, normalizing the data, and mapping pixel values into [0, 1] to obtain the preprocessed bronchoscope image;
S13, selecting the patient's age, sex, lymph node status and pleural effusion status from the clinical information, and concatenating the different clinical fields of the same patient into a vector to obtain the preprocessed clinical information.
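The preprocessing in steps S11–S13 can be sketched in NumPy as follows. This is an illustrative sketch only: the patent does not specify the interpolation method or how sex, lymph node status and pleural effusion status are encoded, so the nearest-neighbour resize and the numeric field encoding below are assumptions.

```python
import numpy as np

def preprocess_image(img, size=224):
    """Step S12: center-crop to a square, resize to size x size
    (nearest-neighbour sampling), and min-max normalise into [0, 1]."""
    h, w = img.shape[:2]
    s = min(h, w)
    top, left = (h - s) // 2, (w - s) // 2
    crop = img[top:top + s, left:left + s]
    idx = (np.arange(size) * s / size).astype(int)  # nearest-neighbour grid
    resized = crop[idx][:, idx]
    lo, hi = resized.min(), resized.max()
    return (resized - lo) / (hi - lo + 1e-8)

def clinical_vector(age, sex, lymph_node, pleural_effusion):
    """Step S13: concatenate the four clinical fields into one vector
    (the binary encodings used here are hypothetical)."""
    return np.array([age, sex, lymph_node, pleural_effusion], dtype=np.float32)

img = np.arange(300 * 400, dtype=np.float32).reshape(300, 400)
x = preprocess_image(img)          # 224x224 image with values in [0, 1]
c = clinical_vector(63, 1, 1, 0)   # age 63, male, node positive, no effusion
```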
Preferably, the image processing module is configured to implement the following steps:
s21, transmitting bronchoscope image data in a training set into a self-supervision learning model SimCLR-V1;
S22, applying different data augmentations to the bronchoscope image input to the model, converting it into two different images x_i and x_j;
S23, passing the two images into the encoder of the classification model for feature extraction to obtain image features h_i and h_j, where h_i is the feature corresponding to image x_i and h_j is the feature corresponding to image x_j;
S24, passing the image features h_i and h_j into the model's projection heads respectively to strengthen the transformation-invariant information of the images;
S25, computing the loss of the two image features through a loss function, thereby evaluating the similarity between images and minimizing the loss; the corresponding loss function is:

ℓ(i, j) = −log( exp(s_{i,j}/τ) / Σ_{k=1}^{2N} 1[k ≠ i] exp(s_{i,k}/τ) )

L = (1/2N) Σ_{k=1}^{N} [ ℓ(2k−1, 2k) + ℓ(2k, 2k−1) ]

where s_{i,j} denotes the cosine similarity of two feature vectors, τ is the temperature parameter, N denotes the batch size, ℓ(i, j) is the similarity probability of similar feature vectors, 1[k ≠ i] excludes the case k = i, and L is the average loss obtained by computing the similarity probabilities of the 2k-th and (2k−1)-th images with their positions exchanged, where k indexes the images within a batch.
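The loss of step S25 matches the NT-Xent objective introduced with SimCLR, the self-supervised model the module names. A self-contained NumPy sketch (the temperature value of 0.5 is an assumed hyperparameter; the patent does not state one):

```python
import numpy as np

def nt_xent_loss(z, tau=0.5):
    """NT-Xent loss over a batch of projected features.
    z: (2N, d) array in which rows 2k and 2k+1 are the two augmented
    views of the same image.  Features are L2-normalised so the dot
    product equals the cosine similarity s_{i,j}."""
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    sim = z @ z.T / tau                    # s_{i,k} / tau for all pairs
    np.fill_diagonal(sim, -np.inf)         # exclude the k == i term
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    n = z.shape[0] // 2
    total = 0.0
    for k in range(n):                     # each pair, in both orders
        total += -log_prob[2 * k, 2 * k + 1] - log_prob[2 * k + 1, 2 * k]
    return total / (2 * n)
```

Identical views in a pair push their similarity probability toward 1 and the loss down; unrelated features keep the loss high, which is the behaviour step S25 relies on.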
Preferably, the image feature extraction module is configured to implement the following steps:
S31, inputting the preprocessed bronchoscope image into the deep learning model ResNet18, and obtaining basic image features F_1 through a convolution layer, a batch normalization layer and an activation function layer;
S32, passing the basic image features F_1 through four residual modules for image feature encoding to obtain deep image features F_2;
S33, converting the clinical data into clinical features C_1 through a fully connected layer to match the dimension of the image features.
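Step S33's dimension conversion is a single fully connected layer. A sketch of one reading of the step (512 is the channel count of ResNet18's final residual stage; the weight values below are random placeholders, not trained parameters):

```python
import numpy as np

def project_clinical(clinical, weight, bias):
    """Step S33: map the 4-dimensional clinical vector into the channel
    dimension C of the deep image features F_2, so it can later be
    compared channel-wise with each spatial vector of F_2."""
    return weight @ clinical + bias

rng = np.random.default_rng(0)
clinical = np.array([63.0, 1.0, 1.0, 0.0], dtype=np.float32)  # S13 vector
W, b = rng.normal(size=(512, 4)), np.zeros(512)  # hypothetical FC parameters
C1 = project_clinical(clinical, W, b)            # C_1 with shape (512,)
```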
Preferably, the image classification module is configured to implement the following steps:
S41, computing the cosine similarity between each channel-dimension vector of the image features and the clinical feature C_1 to obtain an attention feature map;
S42, multiplying the attention feature map with the deep image features F_2 and summing within each channel to obtain the fused feature R_1 of image data and clinical data;
S43, converting the fused feature R_1 through a fully connected layer and a softmax function to obtain the model's classification probability, and using a cross-entropy loss function to compute the difference between the prediction model's output and the true label, thereby optimizing the model; the loss function is:

L_CE = −Σ_{i=1}^{K} y_i log(ŷ_i)

where L_CE denotes the computed loss value, i denotes the i-th dimension, K denotes the total number of vector dimensions, y_i denotes the i-th dimension value of the label, and ŷ_i denotes the i-th dimension of the prediction;
s44, classifying benign and malignant lesions of the bronchoscope image through the optimized prediction model.
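Steps S41–S43 can be sketched as follows. This reflects one plausible reading of the claims: cosine similarity between the clinical feature and each spatial vector of F_2 yields the attention map, the weighted map is summed over space into R_1, and a softmax classifier is trained with cross-entropy. The small feature sizes are illustrative only.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_fuse(f2, c1):
    """Steps S41-S42: f2 is the deep image feature map (C, H, W), c1 the
    projected clinical feature (C,).  Cosine similarity between c1 and
    every spatial vector gives an (H, W) attention map; weighting f2 by
    the map and summing over space yields the fused feature R_1 (C,)."""
    C, H, W = f2.shape
    flat = f2.reshape(C, -1)                               # (C, H*W)
    sims = (flat * c1[:, None]).sum(axis=0)
    sims /= np.linalg.norm(flat, axis=0) * np.linalg.norm(c1) + 1e-8
    attn = sims.reshape(H, W)                              # attention map
    return (f2 * attn[None]).reshape(C, -1).sum(axis=1)    # R_1

def cross_entropy(logits, label_onehot):
    """Step S43: L_CE = -sum_i y_i * log(yhat_i) on softmax probabilities."""
    p = softmax(logits)
    return -(label_onehot * np.log(p + 1e-12)).sum()
```

In the device, R_1 would pass through a final fully connected layer to produce the two-class logits fed to `cross_entropy`.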
Adopting the above technical scheme, the invention has the following beneficial effects: the prediction model extracts bronchoscope image features by deep learning, capturing richer features than traditional methods and doing so more efficiently. Moreover, the prediction model fuses image features with multiple kinds of clinical information; compared with a traditional single-modality classification model, the device fully exploits the complementary information of the two data modalities and improves classification performance. The device can serve as a clinical reference for doctors to rapidly distinguish benign from malignant lesions in bronchoscope images, with high accuracy.
Drawings
FIG. 1 is a schematic diagram of the structure of the present invention;
FIG. 2 is a frame diagram of a predictive model of the present invention;
fig. 3 is a schematic diagram of image data preprocessing according to the present invention.
Detailed Description
The present invention will be described in further detail with reference to the following examples in order to make the objects, technical solutions and advantages of the present invention more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
As shown in fig. 1 to 3, a bronchoscope image benign and malignant lesion classification device based on deep learning includes:
the image acquisition module 100 is used for acquiring data and preprocessing the data;
the image acquisition module 100 is configured to implement the following steps:
s11, acquiring a bronchoscope image of a patient and corresponding clinical data;
S12, center-cropping the patient bronchoscope image, uniformly resizing it to 224×224, normalizing the data, and mapping pixel values into [0, 1] to obtain the preprocessed bronchoscope image;
S13, selecting the patient's age, sex, lymph node status and pleural effusion status from the clinical information, and concatenating the different clinical fields of the same patient into a vector to obtain the preprocessed clinical information;
the image processing module 200 is used for performing self-supervision pre-training on the model image encoder, and improving the capability of the model for extracting deep abstract features;
the image processing module 200 is configured to implement the following steps:
s21, transmitting bronchoscope image data in a training set into a self-supervision learning model SimCLR-V1;
S22, applying different data augmentations to the bronchoscope image input to the model, converting it into two different images x_i and x_j;
S23, passing the two images into the encoder of the classification model for feature extraction to obtain image features h_i and h_j, where h_i is the feature corresponding to image x_i and h_j is the feature corresponding to image x_j;
S24, passing the image features h_i and h_j into the model's projection heads respectively to strengthen the transformation-invariant information of the images;
S25, computing the loss of the two image features through a loss function, thereby evaluating the similarity between images and minimizing the loss; the corresponding loss function is:

ℓ(i, j) = −log( exp(s_{i,j}/τ) / Σ_{k=1}^{2N} 1[k ≠ i] exp(s_{i,k}/τ) )

L = (1/2N) Σ_{k=1}^{N} [ ℓ(2k−1, 2k) + ℓ(2k, 2k−1) ]

where s_{i,j} denotes the cosine similarity of two feature vectors, τ is the temperature parameter, N denotes the batch size, ℓ(i, j) is the similarity probability of similar feature vectors, 1[k ≠ i] excludes the case k = i, and L is the average loss obtained by computing the similarity probabilities of the 2k-th and (2k−1)-th images with their positions exchanged, where k indexes the images within a batch;
the image feature extraction module 300 is used for extracting image features through the pre-trained encoder and performing dimension conversion on clinical data;
the image feature extraction module 300 is configured to implement the following steps:
S31, inputting the preprocessed bronchoscope image into the deep learning model ResNet18, and obtaining basic image features F_1 through a convolution layer, a batch normalization layer and an activation function layer;
S32, passing the basic image features F_1 through four residual modules for image feature encoding to obtain deep image features F_2;
S33, converting the clinical data into clinical features C_1 through a fully connected layer to match the dimension of the image features;
the image classification module 400 is used for fusing the image features and the clinical features through the multi-mode spatial attention mechanism module and classifying benign and malignant lesions;
the image classification module 400 is configured to implement the following steps:
S41, computing the cosine similarity between each channel-dimension vector of the image features and the clinical feature C_1 to obtain an attention feature map;
S42, multiplying the attention feature map with the deep image features F_2 and summing within each channel to obtain the fused feature R_1 of image data and clinical data;
S43, converting the fused feature R_1 through a fully connected layer and a softmax function to obtain the model's classification probability, and using a cross-entropy loss function to compute the difference between the prediction model's output and the true label, thereby optimizing the model; the loss function is:

L_CE = −Σ_{i=1}^{K} y_i log(ŷ_i)

where L_CE denotes the computed loss value, i denotes the i-th dimension, K denotes the total number of vector dimensions, y_i denotes the i-th dimension value of the label, and ŷ_i denotes the i-th dimension of the prediction;
s44, classifying benign and malignant lesions of the bronchoscope image through the optimized prediction model.
The present invention is not limited to the above-mentioned embodiments, and any changes or substitutions that can be easily understood by those skilled in the art within the technical scope of the present invention are intended to be included in the scope of the present invention. Therefore, the protection scope of the present invention should be subject to the protection scope of the claims.
Claims (5)
1. A bronchoscope image benign and malignant focus classification device based on deep learning, which is characterized by comprising:
the image acquisition module is used for acquiring data and preprocessing the data;
the image processing module is used for performing self-supervision pre-training on the model image encoder and improving the capability of the model for extracting deep abstract features;
the image feature extraction module is used for extracting image features through the pre-trained encoder and performing dimension conversion on clinical data;
and the image classification module is used for fusing the image characteristics and the clinical characteristics through the multi-mode spatial attention mechanism module and classifying benign and malignant lesions.
2. The bronchoscope image benign and malignant lesion classifying device based on deep learning according to claim 1, wherein the image acquiring module is configured to implement the following steps:
s11, acquiring a bronchoscope image of a patient and corresponding clinical data;
S12, center-cropping the patient bronchoscope image, uniformly resizing it to 224×224, normalizing the data, and mapping pixel values into [0, 1] to obtain the preprocessed bronchoscope image;
S13, selecting the patient's age, sex, lymph node status and pleural effusion status from the clinical information, and concatenating the different clinical fields of the same patient into a vector to obtain the preprocessed clinical information.
3. The bronchoscope image benign and malignant lesion classifying device based on deep learning according to claim 2, wherein the image processing module is configured to implement the following steps:
s21, transmitting bronchoscope image data in a training set into a self-supervision learning model SimCLR-V1;
S22, applying different data augmentations to the bronchoscope image input to the model, converting it into two different images x_i and x_j;
S23, passing the two images into the encoder of the classification model for feature extraction to obtain image features h_i and h_j, where h_i is the feature corresponding to image x_i and h_j is the feature corresponding to image x_j;
S24, passing the image features h_i and h_j into the model's projection heads respectively to strengthen the transformation-invariant information of the images;
S25, computing the loss of the two image features through a loss function, thereby evaluating the similarity between images and minimizing the loss; the corresponding loss function is:

ℓ(i, j) = −log( exp(s_{i,j}/τ) / Σ_{k=1}^{2N} 1[k ≠ i] exp(s_{i,k}/τ) )

L = (1/2N) Σ_{k=1}^{N} [ ℓ(2k−1, 2k) + ℓ(2k, 2k−1) ]

where s_{i,j} denotes the cosine similarity of two feature vectors, τ is the temperature parameter, N denotes the batch size, ℓ(i, j) is the similarity probability of similar feature vectors, 1[k ≠ i] excludes the case k = i, and L is the average loss obtained by computing the similarity probabilities of the 2k-th and (2k−1)-th images with their positions exchanged, where k indexes the images within a batch.
4. A bronchoscope image benign and malignant lesion classifying device based on deep learning as claimed in claim 3, wherein said image feature extracting module is configured to implement the steps of:
S31, inputting the preprocessed bronchoscope image into the deep learning model ResNet18, and obtaining basic image features F_1 through a convolution layer, a batch normalization layer and an activation function layer;
S32, passing the basic image features F_1 through four residual modules for image feature encoding to obtain deep image features F_2;
S33, converting the clinical data into clinical features C_1 through a fully connected layer to match the dimension of the image features.
5. The bronchoscope image benign and malignant lesion classification device based on deep learning according to claim 4, wherein the image classification module is configured to implement the following steps:
S41, computing the cosine similarity between each channel-dimension vector of the image features and the clinical feature C_1 to obtain an attention feature map;
S42, multiplying the attention feature map with the deep image features F_2 and summing within each channel to obtain the fused feature R_1 of image data and clinical data;
S43, converting the fused feature R_1 through a fully connected layer and a softmax function to obtain the model's classification probability, and using a cross-entropy loss function to compute the difference between the prediction model's output and the true label, thereby optimizing the model; the loss function is:

L_CE = −Σ_{i=1}^{K} y_i log(ŷ_i)

where L_CE denotes the computed loss value, i denotes the i-th dimension, K denotes the total number of vector dimensions, y_i denotes the i-th dimension value of the label, and ŷ_i denotes the i-th dimension of the prediction;
s44, classifying benign and malignant lesions of the bronchoscope image through the optimized prediction model.
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202310406196.3A | 2023-04-17 | 2023-04-17 | Bronchoscope image benign and malignant focus classification device based on deep learning |
Publications (1)

| Publication Number | Publication Date |
|---|---|
| CN116129200A | 2023-05-16 |
Family
- ID=86299462

Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202310406196.3A | Bronchoscope image benign and malignant focus classification device based on deep learning | 2023-04-17 | 2023-04-17 |

Country Status (1)

| Country | Link |
|---|---|
| CN | CN116129200A |
Cited By (1)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN117036894A | 2023-10-09 | 2023-11-10 | 之江实验室 | Multi-mode data classification method and device based on deep learning and computer equipment |
Citations (6)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20210090247A1 | 2019-09-23 | 2021-03-25 | Samsung Sds Co., Ltd. | Apparatus and method for medical diagnostic |
| CN112991295A | 2021-03-12 | 2021-06-18 | 中国科学院自动化研究所 | Lymph node metastasis image analysis system, method and equipment based on deep learning |
| CN113361636A | 2021-06-30 | 2021-09-07 | 山东建筑大学 | Image classification method, system, medium and electronic device |
| CN114398961A | 2021-12-28 | 2022-04-26 | 西南交通大学 | Visual question-answering method based on multi-mode depth feature fusion and model thereof |
| CN114638994A | 2022-05-18 | 2022-06-17 | 山东建筑大学 | Multi-modal image classification system and method based on attention multi-interaction network |
| CN115620912A | 2022-10-18 | 2023-01-17 | 北京大学深圳医院 | Soft tissue tumor benign and malignant prediction model construction method based on deep learning |
Family events
- 2023-04-17: application CN202310406196.3A filed; CN116129200A pending
Non-Patent Citations (1)

| Title |
|---|
| 西伯利亚小斑点, "SimCLR框架解析" ("SimCLR Framework Analysis"), retrieved from the Internet: https://blog.csdn.net/qq_40783513/article/details/124547742 |
Cited By (2)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN117036894A | 2023-10-09 | 2023-11-10 | 之江实验室 | Multi-mode data classification method and device based on deep learning and computer equipment |
| CN117036894B | 2023-10-09 | 2024-03-26 | 之江实验室 | Multi-mode data classification method and device based on deep learning and computer equipment |
Similar Documents

| Publication | Title |
|---|---|
| CN114926746B | SAR image change detection method based on multiscale differential feature attention mechanism |
| CN113077471A | Medical image segmentation method based on U-shaped network |
| CN111325750B | Medical image segmentation method based on multi-scale fusion U-shaped chain neural network |
| CN104077742B | Human face sketch synthetic method and system based on Gabor characteristic |
| CN109635726B | Landslide identification method based on combination of symmetric deep network and multi-scale pooling |
| CN109063643B | Facial expression pain degree identification method under condition of partial hiding of facial information |
| CN112085742B | NAFLD ultrasonic video diagnosis method based on context attention |
| CN114399465B | Benign and malignant ulcer identification method and system |
| CN116129200A | Bronchoscope image benign and malignant focus classification device based on deep learning |
| CN114897094A | Esophagus early cancer focus segmentation method based on attention double-branch feature fusion |
| Ye et al. | Adjacent-level feature cross-fusion with 3D CNN for remote sensing image change detection |
| CN115496720A | Gastrointestinal cancer pathological image segmentation method based on ViT mechanism model and related equipment |
| CN115471512A | Medical image segmentation method based on self-supervision contrast learning |
| CN115311605A | Semi-supervised video classification method and system based on neighbor consistency and contrast learning |
| WO2024104035A1 | Long short-term memory self-attention model-based three-dimensional medical image segmentation method and system |
| Zhu et al. | CEFusion: Multi-Modal medical image fusion via cross encoder |
| CN117237685A | Mechanical equipment fault diagnosis method based on multi-mode deep clustering |
| CN113343966A | Infrared and visible light image text description generation method |
| CN115527159B | Counting system and method based on inter-modal scale attention aggregation features |
| CN114820395B | Underwater image enhancement method based on multi-field information fusion |
| CN116958154A | Image segmentation method and device, storage medium and electronic equipment |
| CN113205484B | Mammary tissue classification and identification method based on transfer learning |
| CN117523626A | Pseudo RGB-D face recognition method |
| Chen et al. | A corresponding region fusion framework for multi-modal cervical lesion detection |
| CN112199531A | Cross-modal retrieval method and device based on Hash algorithm and neighborhood map |
Legal Events

| Code | Title | Description |
|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20230516 |