CN117078642A - Deep learning-based melanoma auxiliary diagnosis method - Google Patents

Deep learning-based melanoma auxiliary diagnosis method

Info

Publication number
CN117078642A
CN117078642A
Authority
CN
China
Prior art keywords
model
melanoma
deep learning
network
diagnosis method
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311076709.5A
Other languages
Chinese (zh)
Inventor
钱春花
王海权
王瑞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Tenth Peoples Hospital
Original Assignee
Shanghai Tenth Peoples Hospital
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Tenth Peoples Hospital filed Critical Shanghai Tenth Peoples Hospital
Priority to CN202311076709.5A priority Critical patent/CN117078642A/en
Publication of CN117078642A publication Critical patent/CN117078642A/en
Pending legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/0464Convolutional networks [CNN, ConvNet]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30096Tumor; Lesion
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Abstract

The invention discloses a deep learning-based melanoma auxiliary diagnosis method that uses an optimized Transformer classification network to achieve strong auxiliary classification performance on melanoma images. The invention uses a generative adversarial approach to synthesize malignant-melanoma images, addressing the class imbalance of the data set. The invention can distinguish benign from malignant melanoma; results show that its accuracy can reach 98.4%, higher than the roughly 75% of human-eye identification, so it can effectively help patients screen themselves at an early stage or assist doctors in clinical diagnosis. The invention achieves a lightweight model, reducing the model's parameter count and the resource and time cost of a practical application, making the algorithm easy to deploy in real applications. The invention applies computer-vision techniques to real life, effectively simplifying the workflow for detecting skin melanoma: doctors and patients can obtain a reference detection result in real time with simple operation of an application program.

Description

Deep learning-based melanoma auxiliary diagnosis method
Technical Field
The invention relates to the field of artificial intelligence, in particular to a melanoma auxiliary diagnosis method based on deep learning.
Background
Malignant melanoma is a malignant tumor derived from melanocytes, mostly evolving from benign cutaneous melanocytic nevi, and is related to genetic and physical factors. Early, accurate diagnosis of malignant melanoma can effectively reduce mortality and is therefore extremely important. The methods commonly used in current clinical practice are easily influenced by a doctor's skill level and experience; they suffer from strong subjectivity, low diagnostic efficiency, and long examination cycles, and cannot keep pace with modern medical demands, so auxiliary diagnosis of melanoma medical images by artificial-intelligence means has become the direction of future development.
Melanoma is visually very similar to a melanocytic nevus, which greatly complicates its diagnosis. The main current diagnostic method for melanoma is manual visual inspection with a dermoscope; it is easily influenced by a doctor's skill level and experience, its diagnostic accuracy is 75-80%, and its efficiency is low. In clinical diagnosis, the early malignancy of a melanocytic nevus is monitored and judged by pattern analysis, the ABCD rule, the seven-point checklist, and similar rules; these rules are simple but have a high misdiagnosis rate and cannot guarantee accuracy, so doctors also combine them with pathological examination of a biopsy to reach a conclusion. A biopsy is examined by viewing a stained pathological sample under an ordinary microscope; only two-dimensional spatial features of the images can be observed, the morphological features of the tissue are difficult to see, and the process is long and costly. Because of differences in staining agents and procedure, the resulting pathology samples also vary, and no additional tools have been available to provide more detailed quantitative analysis of stained samples. The surgical scope of a biopsy is also difficult to determine before a tumor is diagnosed as benign or malignant. Medical costs are rising rapidly, and a skin biopsy is an invasive and expensive test that brings great economic pressure as well as physical pain to the patient. There is therefore an urgent need in the medical community for non-invasive, low-cost melanoma diagnostic techniques.
According to World Health Organization (WHO) statistics, there are about 132,000 newly diagnosed melanoma cases worldwide each year. According to American Cancer Society statistics for 2022, skin cancer is one of the most common cancer types, and melanoma accounts for 64% of skin-cancer deaths; 97,920 new melanoma cases were diagnosed in 2022, of which 7,650 patients died. Apart from early surgery there is no specific treatment for malignant melanoma, so its early diagnosis and treatment are extremely important: the earlier melanoma is diagnosed and treated, the longer patients survive and the more effectively mortality is reduced.
In summary, the number of melanoma patients is extremely large, but the ratio of doctors to patients is seriously unbalanced, medical resources are scarce, and traditional diagnostic methods are costly.
Disclosure of Invention
At present, melanoma diagnosis suffers from low diagnostic efficiency, limited examination accuracy, long examination cycles, and similar problems; the invention provides a deep learning-based melanoma auxiliary diagnosis method comprising the following specific steps:
1) Data preprocessing: segmenting the original images of the training, validation, and test sets and removing hairs, so as to obtain the skin-lesion area accurately and remove interfering diagnostic factors such as hair;
2) Balancing the class sizes of the data set: to address the sample imbalance of the training set, a generative adversarial network is used to generate malignant-melanoma images and balance the positive and negative samples; meanwhile, to avoid overfitting and preserve generalization ability, images are generated with adaptively augmented StyleGAN2, i.e. data augmentation is applied in both the generator and the discriminator;
3) Optimizing the classification network model: based on the Vision Transformer, the number of attention heads is changed, the modified model backbone is combined with a BatchFormer module, and two shared classifiers are added so that sample features learn from one another, further mitigating the data imbalance and yielding a reliable classification result;
4) Model lightweighting: a lightweighting operation is applied to the network of step 3). A distillation token is added to the model, which is then compressed by knowledge distillation: the original features, processed by normalization and a sliding time window, are fed into a teacher network for training, and the trained teacher model guides the training of the student model. That is, the two models are linked by a loss function, a hyper-parameter controls the ratio of the soft loss to the hard loss, and their weighted sum serves as the loss value of the final distilled model, helping the student model train to a better result; training finally yields a lightweight model with excellent classification performance;
5) Model porting: using model-migration techniques, the model weights are deployed to the mobile terminal and the web terminal, so that a model originally run on a GPU can run normally and quickly on a mobile phone or website, and the model results are visualized;
6) Clinical verification and optimization: after the deep learning model for melanoma auxiliary diagnosis has been ported, the algorithm enters its testing stage; the mobile and web models are handed to doctors and patients for use, the test results are continuously summarized, and the algorithm model is optimized until a reliable algorithm model is finally obtained.
Preferably, the training mode is supervised learning on labeled data.
Preferably, to improve the quality of the network-generated pictures, a multi-scale fusion module is added to the original network, reducing the semantic gap between different feature-channel layers.
Preferably, during training a portion of the images is generated from two latent codes, preventing correlation between adjacent skin-cancer lesion types.
Preferably, the generator network adds noise to every pixel of the melanoma image after each convolution to achieve diversity and randomness.
Preferably, in the generator network the original lesion image is modulated and demodulated, the custom convolution layer is fused with the demodulated style vector, bias and noise are added to the fused feature map, and the accumulated RGB image is output as the generated melanoma image.
Preferably, the classification network model does not depend on a fixed-size convolution kernel and can capture the pathological features of melanoma well, including asymmetry, border irregularity, color, size, relief shape, and differential structure.
The invention has the advantages and beneficial effects that:
The invention provides a deep learning-based melanoma auxiliary diagnosis method that uses deep learning to assist in diagnosing benign and malignant melanocytic lesions and deploys the model to the mobile terminal, helping patients screen themselves early, assisting doctors in diagnosis, easing the difficulty patients face in obtaining care, and reducing mortality.
The invention has the following characteristics:
1) The optimized Transformer classification network achieves ideal results on skin-melanoma images, and the network is also suitable for detecting other medical images.
2) The deep learning-based melanoma auxiliary diagnosis method can detect seven types of skin tumor; results show that its detection accuracy can reach 98.4%, higher than the roughly 75% of human-eye recognition, facilitating early self-screening by patients and effectively assisting doctors in diagnosis.
3) The invention achieves a lightweight model, using knowledge distillation to reduce the model's parameter count and the resource and time cost of a practical application, making the algorithm easy to deploy in real applications.
Drawings
FIG. 1 is a basic flow chart of a deep learning-based melanoma aided diagnosis method of the present invention;
FIG. 2 is a flow chart of adversarial image generation in the present invention;
FIG. 3 is a diagram of a network model structure in the present invention;
FIG. 4 is a system flow diagram of a model lightweight knowledge distillation method of the present invention;
FIG. 5 is a flow chart of the algorithm design, verification and optimization model method of the present invention.
Description of the embodiments
The following describes the embodiments of the present invention further with reference to the drawings and examples. The following examples are only for more clearly illustrating the technical aspects of the present invention, and are not intended to limit the scope of the present invention.
Hardware environment for implementing the invention: the CPU is an Intel(R) Xeon(R) E5-2623 v4 @ 2.60 GHz, the GPU driver version is NVIDIA-SMI 470.86, and the runtime environment is Python 3.8 and PyTorch.
A Transformer model is adopted to process global information in melanoma medical images based on an encoder-decoder network structure, and the model's internal structure and related algorithms are optimized to overcome the limitation of existing auxiliary-diagnosis techniques, whose classification accuracy improves only with massive data samples. The classification network model does not depend on fixed-size convolution kernels and can capture the pathological features of melanoma well, including asymmetry, border irregularity, color, size, relief shape, and differential structure, improving the generalization ability of the melanoma classification model and the accuracy of auxiliary diagnosis.
Compared with other mainstream networks, the proposed network breaks the limitation that an RNN model cannot be computed in parallel; compared with a CNN, the number of operations required to relate two positions does not grow with their distance; and self-attention yields a more interpretable model. The invention inspects the attention distributions of the model, and the individual attention heads can learn to perform different tasks. In addition, for melanoma images, the invention adds a BatchFormer module to strengthen sample features and address the severe imbalance of the data set.
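The parallelism and distance-independence properties described above can be made concrete with a minimal NumPy sketch of multi-head self-attention (the dimensions and random weights are illustrative, not the patent's actual configuration): every pair of token positions interacts through a single matrix product, and each head produces its own inspectable attention map.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_self_attention(x, wq, wk, wv, num_heads):
    """Scaled dot-product self-attention split across num_heads heads.

    x: (seq_len, d_model); wq/wk/wv: (d_model, d_model).
    All positions attend to all others in one matrix product, so the
    number of operations linking two positions is independent of their
    distance, unlike in an RNN.
    """
    seq_len, d_model = x.shape
    d_head = d_model // num_heads
    q, k, v = x @ wq, x @ wk, x @ wv
    # split into (num_heads, seq_len, d_head)
    split = lambda t: t.reshape(seq_len, num_heads, d_head).transpose(1, 0, 2)
    q, k, v = split(q), split(k), split(v)
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(d_head)  # (heads, seq, seq)
    attn = softmax(scores, axis=-1)                      # per-head attention maps
    out = attn @ v                                       # (heads, seq, d_head)
    return out.transpose(1, 0, 2).reshape(seq_len, d_model), attn

rng = np.random.default_rng(0)
d_model, seq_len, heads = 64, 16, 8          # e.g. 16 image patches as tokens
x = rng.standard_normal((seq_len, d_model))
w = lambda: rng.standard_normal((d_model, d_model)) / np.sqrt(d_model)
out, attn = multi_head_self_attention(x, w(), w(), w(), heads)
print(out.shape, attn.shape)  # (16, 64) (8, 16, 16)
```

Each of the 8 rows of `attn` is one head's seq × seq attention map, which is what makes the model's attention distributions inspectable.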
As shown in figs. 1 and 5, the deep learning-based melanoma auxiliary diagnosis method of the invention comprises melanoma image data acquisition, image preprocessing, adversarial image generation, a Transformer network classifier, model lightweighting by knowledge distillation, and clinical verification and optimization of the algorithm.
The invention discloses a deep learning-based melanoma auxiliary diagnosis method comprising the following specific steps:
1) More than 30,000 dermoscopy images are obtained from the ISIC database, of which only 584 are malignant melanoma; the data set is divided into a training set and a validation set. The training mode is supervised learning on labeled data;
2) To address image-quality problems, the images are segmented to obtain the lesion area and preprocessed, e.g. by hair removal. To unify the image size and speed up model training, the images are compressed to 224 × 224 and the pixel-value interval is normalized using a numerical normalization technique;
3) To address the imbalance of positive and negative samples in the training set, the invention uses a generative adversarial network to generate a number of malignant-melanoma images;
4) The generated melanoma images and the real samples are fed into the network together for training, finally yielding the auxiliary-diagnosis classification result;
5) Afterwards, the trained classifier model is fine-tuned with training samples to further improve classification accuracy;
6) After the model is trained, the test set is fed into the model for prediction, and image-level classification metrics are computed from the predictions. The weights of the optimal model are then ported to an application program, which is tested and used by doctors and patients, and the algorithm and program are continuously optimized according to the results and user feedback.
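Step 2) above — unifying the image size to 224 × 224 and normalizing the pixel-value interval — can be sketched as follows. Nearest-neighbour resizing is used here for brevity; the source does not specify the interpolation method, so this is an illustrative assumption.

```python
import numpy as np

def preprocess(img, size=224):
    """Resize an H×W×3 uint8 image to size×size (nearest-neighbour,
    for illustration) and scale pixel values into [0, 1]."""
    h, w = img.shape[:2]
    rows = np.arange(size) * h // size   # source row for each output row
    cols = np.arange(size) * w // size   # source column for each output column
    resized = img[rows][:, cols]
    return resized.astype(np.float32) / 255.0

# a mock dermoscopy image standing in for a real ISIC sample
img = np.random.randint(0, 256, (600, 450, 3), dtype=np.uint8)
x = preprocess(img)
print(x.shape, float(x.min()), float(x.max()))
```

In practice a training pipeline would use a proper interpolation (bilinear/bicubic) and per-channel mean/std normalization; the sketch only shows the shape and value-range contract.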
As shown in fig. 2, the invention introduces a generative adversarial model to balance the data between categories. The generator part of the model is divided into three parts: an initial latent code, a nonlinear mapping network, and learned affine transformations. To improve the quality of the generated pictures, a multi-scale fusion module is added to the original network, reducing the semantic gap between different feature-channel layers. During training, a portion of the images is generated from two latent codes, which prevents correlation between adjacent skin-cancer lesion types. The generator network adds noise to every pixel of the melanoma image after each convolution to achieve diversity and randomness. In the generator, the original lesion image is modulated and demodulated, the custom convolution layer is fused with the demodulated style vector, bias and noise are added to the fused feature map, and the accumulated RGB image is output as the generated melanoma image. Meanwhile, to avoid overfitting during training, an adaptive image-augmentation method is applied in both the generator and the discriminator.
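The per-pixel noise injection described above can be sketched as follows. The tensor shapes and the per-channel noise strength are illustrative assumptions; the point is that a single noise value per spatial position is added after a convolution, broadcast across channels, which is what introduces stochastic fine detail into the generated lesions.

```python
import numpy as np

def add_per_pixel_noise(feat, strength, rng):
    """Add Gaussian noise to every spatial position of a feature map.

    feat: (channels, h, w) activations after a convolution.
    strength: (channels,) learned per-channel noise scale (assumed here).
    One noise value is drawn per pixel and broadcast over channels.
    """
    noise = rng.standard_normal(feat.shape[1:])       # one value per (h, w) pixel
    return feat + strength[:, None, None] * noise     # broadcast across channels

rng = np.random.default_rng(1)
feat = np.zeros((32, 8, 8))                   # mock post-convolution activations
out = add_per_pixel_noise(feat, np.full(32, 0.1), rng)
print(out.shape)  # (32, 8, 8)
```

In a full generator this would be applied after every convolution, with `strength` learned jointly with the convolution weights.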
The invention adopts a Transformer model as the classification backbone; the network consists of an encoder and a decoder, each composed of several Transformer blocks. As shown in fig. 3, to strengthen the features of the sample data, a BatchFormer is inserted after the feature extractor of the ViT network, yielding the optimized network structure. The BatchFormer module promotes representation learning by exploring pairwise sample relationships: a Transformer structure is inserted after the feature extractor, along the batch dimension, with a pair of shared classifiers added before and after it to reduce the gap between training and testing, forming the Batch-MVit network of fig. 3. The generated melanoma images and the real samples are fed into this network together for training, finally yielding the auxiliary-diagnosis classification result.
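A PyTorch sketch of the batch-dimension Transformer with a shared classifier described above (the class and parameter names are ours, not the patent's Batch-MVit configuration): during training the shared classifier sees both the raw features and the batch-mixed features, and at test time the batch Transformer is simply skipped, which is what the shared classifier is meant to reconcile.

```python
import torch
import torch.nn as nn

class BatchFormerHead(nn.Module):
    """Sketch of a BatchFormer-style training head (names are assumptions).

    A Transformer encoder layer is applied ALONG THE BATCH dimension, so
    samples in a mini-batch exchange information. One classifier is shared
    between the raw and batch-mixed features, narrowing the train/test gap
    because the batch Transformer is dropped at inference time.
    """
    def __init__(self, dim=192, num_classes=2, nhead=4):
        super().__init__()
        self.batch_former = nn.TransformerEncoderLayer(d_model=dim, nhead=nhead)
        self.classifier = nn.Linear(dim, num_classes)   # shared classifier

    def forward(self, feats):            # feats: (B, dim) from the ViT backbone
        logits_plain = self.classifier(feats)
        if self.training:
            # treat the batch of B samples as a length-B sequence (batch size 1)
            mixed = self.batch_former(feats.unsqueeze(1)).squeeze(1)
            logits_mixed = self.classifier(mixed)
            return logits_plain, logits_mixed
        return logits_plain              # inference path: no batch mixing

head = BatchFormerHead()
feats = torch.randn(8, 192)              # mock features for a batch of 8 images
head.train()
a, b = head(feats)                       # two (8, 2) logit tensors
head.eval()
c = head(feats)                          # one (8, 2) logit tensor
print(a.shape, b.shape, c.shape)
```

During training, a classification loss would typically be applied to both logit tensors so that the shared classifier works with and without batch mixing.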
As shown in fig. 4, the invention uses knowledge distillation to obtain a lightweight model. Knowledge distillation involves two network structures, a teacher and a student: the teacher network is larger, with a deeper hierarchy, while the student is a shallow network with fewer parameters; the student is trained with the help of the teacher network's predictions, transferring knowledge from teacher to student. The invention compresses the model by knowledge distillation in roughly three stages. In the first stage, the original features, processed by normalization and a sliding time window, are fed into the teacher network for training. The trained teacher model then participates in the second stage, the training of the student model: the teacher's predictions help train the student, the two models are linked by a loss function, and a hyper-parameter controls the ratio of the soft loss to the hard loss, whose weighted sum serves as the loss value of the final distilled model, helping the student model train to a better result. Knowledge distillation can transfer knowledge both between networks of the same architecture and between heterogeneous networks; accordingly, the first distillation of the proposed method transfers knowledge between different architectures, while the second distillation transfers knowledge within the same architecture.
In the third stage, the optimal student model obtained from the first distillation serves as the teacher model of the second distillation, training a new student model; the final classification and recognition of melanoma medical images is performed with the student model obtained from the second distillation.
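The soft/hard loss combination controlled by a hyper-parameter, as described above, can be sketched as follows. The temperature `t` and weight `alpha` are illustrative values, not the ones used in the patent.

```python
import numpy as np

def softened(z, t=1.0):
    """Softmax of logits z at temperature t (numerically stabilized)."""
    e = np.exp(z / t - (z / t).max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, label, alpha=0.5, t=4.0):
    """Weighted sum of the 'soft' loss (cross-entropy against the teacher's
    temperature-softened distribution, scaled by t**2 as is conventional)
    and the 'hard' loss (cross-entropy against the ground-truth label).
    alpha is the hyper-parameter controlling the soft/hard ratio."""
    soft = -(softened(teacher_logits, t)
             * np.log(softened(student_logits, t))).sum() * t * t
    hard = -np.log(softened(student_logits)[label])
    return alpha * soft + (1.0 - alpha) * hard

# mock 2-class (benign/malignant) logits for one sample
loss = distillation_loss(np.array([2.0, 0.5]), np.array([3.0, 0.1]), label=0)
print(float(loss))
```

A student whose predictions agree with the teacher incurs a lower soft loss than one that contradicts it, which is the mechanism by which the teacher's "knowledge" guides the student.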
In summary, the invention provides a deep learning-based melanoma auxiliary diagnosis method that uses an optimized Transformer classification network to achieve strong auxiliary classification performance on melanoma images. The invention uses a generative adversarial approach to synthesize malignant-melanoma images, addressing the class imbalance of the data set. The invention can distinguish benign from malignant melanoma; results show that its accuracy can reach 98.4%, higher than the roughly 75% of human-eye identification, so it can effectively help patients screen themselves at an early stage or assist doctors in clinical diagnosis. The invention achieves a lightweight model, reducing the model's parameter count and the resource and time cost of a practical application, making the algorithm easy to deploy in real applications. The invention applies computer-vision techniques to real life, effectively simplifying the workflow for detecting skin melanoma: doctors and patients can obtain a reference detection result in real time with simple operation of an application program.
The foregoing is merely a preferred embodiment of the invention. It should be noted that those skilled in the art can make several modifications and variations without departing from the technical principle of the invention, and these modifications and variations should also be regarded as within the scope of the invention.

Claims (10)

1. A deep learning-based melanoma auxiliary diagnosis method, characterized by comprising the following specific steps:
1) Data preprocessing: segmenting the original images of the training, validation, and test sets and removing hairs, so as to obtain the skin-lesion area accurately and remove interfering diagnostic factors such as hair;
2) Balancing the class sizes of the data set: to address the sample imbalance of the training set, a generative adversarial network is used to generate malignant-melanoma images and balance the positive and negative samples;
3) Optimizing the classification network model: changing the number of attention heads based on the Vision Transformer;
4) Model lightweighting: applying a lightweighting operation to the network of step 3), adding a distillation token to the model, then compressing the model by knowledge distillation, and feeding the original features, processed by normalization and a sliding time window, into a teacher network for training;
5) Model porting: using model-migration techniques, the model weights are deployed to the mobile terminal and the web terminal, so that a model originally run on a GPU can run normally and quickly on a mobile phone or website, and the model results are visualized;
6) Clinical verification and optimization: after the deep learning model for melanoma auxiliary diagnosis has been ported, the algorithm enters its testing stage; the mobile and web models are handed to doctors and patients for use, the test results are continuously summarized, and the algorithm model is optimized until a reliable algorithm model is finally obtained.
2. The deep learning-based melanoma auxiliary diagnosis method according to claim 1, wherein in step 2), to avoid overfitting and preserve generalization ability, adaptively augmented StyleGAN2 is used to generate images, i.e. data augmentation is applied in both the generator and the discriminator.
3. The deep learning-based melanoma auxiliary diagnosis method according to claim 1, wherein in step 3), the modified model backbone is combined with a BatchFormer module, and two shared classifiers are added so that sample features learn from one another, further mitigating the data imbalance and yielding a reliable classification result.
4. The deep learning-based melanoma auxiliary diagnosis method according to claim 1, wherein in step 4), the trained teacher model guides the training of the student model, that is, the teacher model's predictions help train the student model, and the two models are linked by a loss function; a hyper-parameter controls the ratio of the soft loss to the hard loss, and their weighted sum serves as the loss value of the final distilled model, helping the student model train to a better result and finally yielding a lightweight model with excellent classification performance.
5. The deep learning-based melanoma auxiliary diagnosis method according to claim 1, wherein the training mode is supervised learning on labeled data.
6. The deep learning-based melanoma auxiliary diagnosis method according to claim 1, wherein, to improve the quality of the network-generated pictures, a multi-scale fusion module is added to the original network, reducing the semantic gap between different feature-channel layers.
7. The deep learning-based melanoma auxiliary diagnosis method according to claim 1, wherein, during training, a portion of the images is generated from two latent codes, preventing correlation between adjacent skin-cancer lesion types.
8. The deep learning-based melanoma auxiliary diagnosis method according to claim 1, wherein the generator network adds noise to every pixel of the melanoma image after each convolution to achieve diversity and randomness.
9. The deep learning-based melanoma auxiliary diagnosis method according to claim 1, wherein, in the generator network, the original lesion image is modulated and demodulated, the custom convolution layer is fused with the demodulated style vector, bias and noise are added to the fused feature map, and the accumulated RGB image is output as the generated melanoma image.
10. The deep learning-based melanoma auxiliary diagnosis method according to claim 1, wherein the classification network model does not depend on a fixed-size convolution kernel and can capture the pathological features of melanoma, including asymmetry, border irregularity, color, size, relief shape, and differential structure.
CN202311076709.5A 2023-08-25 2023-08-25 Deep learning-based melanoma auxiliary diagnosis method Pending CN117078642A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311076709.5A CN117078642A (en) 2023-08-25 2023-08-25 Deep learning-based melanoma auxiliary diagnosis method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311076709.5A CN117078642A (en) 2023-08-25 2023-08-25 Deep learning-based melanoma auxiliary diagnosis method

Publications (1)

Publication Number Publication Date
CN117078642A true CN117078642A (en) 2023-11-17

Family

ID=88713067

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311076709.5A Pending CN117078642A (en) 2023-08-25 2023-08-25 Deep learning-based melanoma auxiliary diagnosis method

Country Status (1)

Country Link
CN (1) CN117078642A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117393100A (en) * 2023-12-11 2024-01-12 安徽大学 Diagnostic report generation method, model training method, system, equipment and medium
CN117393100B (en) * 2023-12-11 2024-04-05 安徽大学 Diagnostic report generation method, model training method, system, equipment and medium

Similar Documents

Publication Publication Date Title
Wang et al. CSU-Net: A context spatial U-Net for accurate blood vessel segmentation in fundus images
CN110600122B (en) Digestive tract image processing method and device and medical system
CN110689025B (en) Image recognition method, device and system and endoscope image recognition method and device
Kauppi Eye fundus image analysis for automatic detection of diabetic retinopathy
CN107516312A (en) A kind of Chinese medicine complexion automatic classification method using shallow-layer neutral net
CN102567734B (en) Specific value based retina thin blood vessel segmentation method
CN110135506B (en) Seven-class skin tumor detection method applied to web
CN114926459B (en) Image quality evaluation method, system and computer readable medium
CN117078642A (en) Deep learning-based melanoma auxiliary diagnosis method
CN113782184A (en) Cerebral apoplexy auxiliary evaluation system based on facial key point and feature pre-learning
Zhao et al. Vocal cord lesions classification based on deep convolutional neural network and transfer learning
Luo et al. Joint optic disc and optic cup segmentation based on boundary prior and adversarial learning
Kalyani et al. Multilevel thresholding for medical image segmentation using teaching-learning based optimization algorithm
Lu et al. PKRT-Net: prior knowledge-based relation transformer network for optic cup and disc segmentation
CN117036288A (en) Tumor subtype diagnosis method for full-slice pathological image
CN116452812A (en) Camouflage object identification and semantic segmentation method
CN110428405A (en) Method, relevant device and the medium of lump in a kind of detection biological tissue images
US10956735B1 (en) System and method for determining a refractive error from red reflex images of eyes
Dey et al. Image Examination System to Detect Gastric Polyps from Endoscopy Images.
CN114372985A (en) Diabetic retinopathy focus segmentation method and system adapting to multi-center image
Vinisha et al. A Novel Framework for Brain Tumor Segmentation using Neuro Trypetidae Fruit Fly-Based UNet
Alam et al. Benchmarking Deep Learning Frameworks for Automated Diagnosis of Ocular Toxoplasmosis: A Comprehensive Approach to Classification and Segmentation
Jadhav et al. Segmentation and Classification of Retina Images using SVD Features
Kumari et al. Automated process for retinal image segmentation and classification via deep learning based cnn model
Singh et al. Performance analysis of machine learning techniques for glaucoma detection based on textural and intensity features

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination