CN113284613A - Face diagnosis system based on deep learning - Google Patents
Face diagnosis system based on deep learning
- Publication number
- CN113284613A (application CN202110565687.3A)
- Authority
- CN
- China
- Prior art keywords
- model
- face
- data
- image
- deep learning
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H50/00—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
- G16H50/20—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2415—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/047—Probabilistic or stochastic networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/26—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
- G06V10/267—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H10/00—ICT specially adapted for the handling or processing of patient-related medical or healthcare data
- G16H10/60—ICT specially adapted for the handling or processing of patient-related medical or healthcare data for patient-specific data, e.g. for electronic patient records
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Health & Medical Sciences (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- General Health & Medical Sciences (AREA)
- Artificial Intelligence (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Biomedical Technology (AREA)
- Evolutionary Computation (AREA)
- General Engineering & Computer Science (AREA)
- Life Sciences & Earth Sciences (AREA)
- Multimedia (AREA)
- Human Computer Interaction (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Public Health (AREA)
- Medical Informatics (AREA)
- Molecular Biology (AREA)
- Probability & Statistics with Applications (AREA)
- Computing Systems (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Bioinformatics & Computational Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Evolutionary Biology (AREA)
- Epidemiology (AREA)
- Computational Linguistics (AREA)
- Primary Health Care (AREA)
- Biophysics (AREA)
- Databases & Information Systems (AREA)
- Pathology (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
The invention discloses a face diagnosis system based on deep learning, comprising a model construction module and a model application module. The model construction module comprises a central processing unit, a GPU (graphics processing unit) server, and a model construction memory; the memory stores a program executable by the central processing unit and can store facial images of patients with the relevant diseases. The model application module comprises a neural network chip and a camera: the camera collects facial pictures of a patient, and the neural network chip carries an auxiliary diagnosis model based on a deep convolutional neural network, obtained through training, verification, and optimization, which predicts the probability of specific diseases from a face image. The system can predict the probability of certain diseases, thereby assisting doctors in disease screening and improving diagnostic accuracy and efficiency.
Description
Technical Field
The invention relates to the field of computer-aided disease diagnosis, and in particular to a face diagnosis system based on deep learning.
Background
Computer-aided diagnosis systems for the management of various diseases have attracted researchers' interest over the past few decades. Recently, such systems have used deep learning architectures to analyze and classify medical images.
Many diseases, including some genetic disorders, present recognizable facial features, and experienced doctors can often reach a reasonably accurate preliminary diagnosis by observing a patient's face. However, such diagnoses depend heavily on the doctor's experience and skill, are poorly reproducible, and are difficult for rare diseases. At present, auxiliary diagnosis systems based on facial image analysis and recognition are lacking.
Disclosure of Invention
To address the lack in the prior art of an auxiliary diagnosis system based on facial image analysis and recognition, the invention provides a face diagnosis system based on deep learning.
To this end, the face diagnosis system based on deep learning provided by the invention comprises a model construction module and a model application module.
The model construction module comprises a central processor, a GPU server, and a model construction memory; the memory stores a program executable by the central processor and can store facial images of patients with the relevant diseases.
The model application module comprises a neural network chip and a camera: the camera collects facial pictures of a patient, and the neural network chip carries an auxiliary diagnosis model based on a deep convolutional neural network, obtained through training, verification, and optimization, capable of predicting the probability of specific diseases from a face image.
Further, the central processing unit may implement model building by running a program, and the specific steps of model building include:
s1, collecting a data set: acquiring a facial image of a patient with a relevant disease;
s2, preprocessing data: carrying out data cleaning, labeling and enhancing on the obtained image data;
s3, model training: inputting the preprocessed data set into a deep convolutional neural network for training to obtain an auxiliary diagnosis model;
and S4, verifying and optimizing the model performance, and transmitting the trained auxiliary diagnosis model to the model application module.
Further, the step S2 specifically includes:
S21, data cleaning: screening the collected face image data, removing blurred or defocused unqualified images, and keeping qualified images in which the face is clearly visible;
S22, data annotation: classifying and labeling the images according to the corresponding conditions;
S23, data enhancement: augmenting the data set by rotating, translating, shearing, and scaling the images to increase the sample size;
S24, face segmentation and face alignment: performing face detection and alignment on the image and cropping it to the face region;
S25, data format normalization: resizing images of different resolutions to a common resolution and unifying the image format.
Further, the step S24 specifically includes:
S241, locating the two eye regions within the face region and computing their center coordinates, denoted (x_left, y_left) and (x_right, y_right);
S243, taking the center of the image as the coordinate origin, transforming coordinates according to:
x′ = x·cos a - y·sin a
y′ = y·cos a + x·sin a
Further, in step S3, the data set is divided proportionally into a training set and a verification set, and the training-set portion is input into the deep convolutional neural network and trained on the GPU server.
Further, in step S3, the last two layers of the deep convolutional neural network are a fully connected layer and a softmax layer.
Further, each element of the softmax layer's output vector lies between 0 and 1 and represents the probability that the face image corresponds to the respective disease.
Further, in step S4, whether the model is over-fitted or under-fitted is judged from the network's performance on the verification set and the training set, and if so, measures are taken to optimize the model.
Further, in step S4, training of the auxiliary diagnosis model is deemed complete when the top-5 error rates on both the training set and the verification set are below 5%.
Further, the working process of the model application module comprises the following steps:
S5, acquiring a clear image of the face of the person to be diagnosed through the camera;
S6, inputting the face image into the auxiliary diagnosis model;
S7, if a suspected disease is detected, outputting prediction data; otherwise, the prediction data is null;
S8, outputting the diagnosis.
Compared with the prior art, the invention has the following beneficial effects:
by applying deep learning to face-based diagnosis, the probability of certain diseases can be predicted, assisting doctors in disease screening and improving diagnostic accuracy and efficiency.
Drawings
Fig. 1 is a block diagram of a face diagnosis system according to an embodiment of the present invention.
FIG. 2 is a flow chart of model building according to an embodiment of the present invention.
FIG. 3 is a flow chart of a model application according to an embodiment of the present invention.
Detailed Description
In order to more clearly understand the technical features, objects, and effects of the present invention, embodiments of the present invention will now be described with reference to the accompanying drawings.
The face diagnosis system based on deep learning is mainly aimed at diseases with obvious facial phenotypes, such as certain genetic disorders. As shown in Fig. 1, it comprises a model construction module and a model application module.
The model building module comprises a central processing unit, a GPU server and a model building memory. The model building memory has stored therein a program that can be executed by the central processor and can store facial images of patients with the associated disease. The central processing unit may implement model building by running a program, as shown in fig. 2, the specific steps of model building include:
s1, collecting a data set: acquiring a facial image of a patient with a relevant disease;
In the data collection of step S1, the collected data set covers a plurality of diseases whose patients have relatively distinctive facial features, most of them genetic diseases with a facial phenotype. Specifically, relevant disease data and patient facial images are collected from authoritative sources such as regular hospitals or research institutions, with more than 500 images per disease, and the collected data and images are stored in the model construction memory.
S2, preprocessing data: performing data cleaning, annotation, enhancement, and related operations on the obtained image data. The data preprocessing step S2 specifically comprises the following steps:
S21, data cleaning: the collected face image data are screened, unqualified images are removed, and qualified images in which the face is clearly visible are kept. Unqualified images are those the neural network cannot correctly analyze for feature extraction; besides blurred and defocused images, these include invalid images in which the patient's face appears but the characteristic region is not captured.
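The patent does not specify how blur is detected; one common sketch (an assumption, not the patent's method) scores sharpness by the variance of the Laplacian response, implemented here in plain NumPy with an assumed threshold:

```python
import numpy as np

# Hypothetical blur check: variance of the Laplacian response.
# A low variance suggests a blurred or defocused image; the threshold is assumed.
LAPLACIAN = np.array([[0, 1, 0],
                      [1, -4, 1],
                      [0, 1, 0]], dtype=np.float64)

def laplacian_variance(gray: np.ndarray) -> float:
    """Convolve a grayscale image with the Laplacian kernel; return the variance."""
    h, w = gray.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(3):              # sliding-window convolution, valid region only
        for j in range(3):
            out += LAPLACIAN[i, j] * gray[i:i + h - 2, j:j + w - 2]
    return float(out.var())

def is_sharp(gray: np.ndarray, threshold: float = 100.0) -> bool:
    return laplacian_variance(gray) >= threshold

# A flat, featureless image has zero Laplacian variance and is rejected.
flat = np.full((32, 32), 128.0)
assert not is_sharp(flat)
```

In practice a production pipeline would use OpenCV (e.g. `cv2.Laplacian(gray, cv2.CV_64F).var()`) rather than this loop, and the threshold would be tuned on the collected data.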
S22, data annotation: and carrying out standard classification labeling on the images according to corresponding symptoms.
S23, data enhancement: the data set is augmented by rotating, translating, shearing, and scaling the images to increase the sample size; specifically, these operations are performed with the open-source image processing library OpenCV.
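A minimal augmentation sketch in NumPy, under the assumption that a mirror flip and small zero-padded translations are acceptable variants (the patent names OpenCV, whose `cv2.warpAffine` and `cv2.flip` would be used in practice):

```python
import numpy as np

def augment(img: np.ndarray) -> list[np.ndarray]:
    """Return simple augmented variants: a horizontal flip and small translations."""
    variants = [np.fliplr(img)]                 # mirror image
    for dx in (-2, 2):                          # horizontal shift, zero-padded
        shifted = np.zeros_like(img)
        if dx > 0:
            shifted[:, dx:] = img[:, :-dx]      # shift right
        else:
            shifted[:, :dx] = img[:, -dx:]      # shift left
        variants.append(shifted)
    return variants

img = np.arange(16.0).reshape(4, 4)
out = augment(img)
assert len(out) == 3 and all(v.shape == img.shape for v in out)
```

Rotation, shearing, and scaling follow the same pattern via an affine warp; they are omitted here for brevity.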
S24, face segmentation and face alignment: face detection and alignment are performed on each image, and the image is cropped to the face region. Face segmentation uses a rectangular bounding box. Face alignment proceeds as follows:
S241, locate the two eye regions within the face region and compute their center coordinates, denoted (x_left, y_left) and (x_right, y_right);
S243, taking the center of the image as the coordinate origin, transform coordinates according to:
x′ = x·cos a - y·sin a
y′ = y·cos a + x·sin a
This step is likewise implemented with OpenCV.
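The alignment in S241–S243 can be sketched in pure Python: compute the roll angle of the line through the eye centers, then apply the rotation formula above about the image center. The eye coordinates below are illustrative values, not from the patent:

```python
import math

def eye_angle(left, right):
    """Angle (radians) of the line through the two eye centers."""
    return math.atan2(right[1] - left[1], right[0] - left[0])

def rotate_about_center(x, y, cx, cy, a):
    """Apply x' = x*cos(a) - y*sin(a), y' = y*cos(a) + x*sin(a) about (cx, cy)."""
    x0, y0 = x - cx, y - cy
    x1 = x0 * math.cos(a) - y0 * math.sin(a)
    y1 = y0 * math.cos(a) + x0 * math.sin(a)
    return x1 + cx, y1 + cy

left_eye, right_eye = (40.0, 60.0), (88.0, 48.0)   # illustrative eye centers
a = eye_angle(left_eye, right_eye)
cx, cy = 64.0, 64.0                                # assumed image center

# Rotating both eyes by -a levels the eye line.
lx, ly = rotate_about_center(*left_eye, cx, cy, -a)
rx, ry = rotate_about_center(*right_eye, cx, cy, -a)
assert abs(ly - ry) < 1e-9
```

With OpenCV the equivalent image operation would be `cv2.warpAffine` with a matrix from `cv2.getRotationMatrix2D`.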
S25, data format normalization: images of different resolutions are resized to a common resolution, and the image format is unified.
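A toy nearest-neighbour sketch of the resize in S25; the target size of 224×224 is a common convention assumed here, and a real pipeline would use `cv2.resize` with proper interpolation:

```python
import numpy as np

def resize_nearest(img: np.ndarray, out_h: int, out_w: int) -> np.ndarray:
    """Nearest-neighbour resize via integer index mapping."""
    h, w = img.shape[:2]
    rows = np.arange(out_h) * h // out_h      # source row for each output row
    cols = np.arange(out_w) * w // out_w      # source column for each output column
    return img[rows[:, None], cols]

# Images of mixed resolutions are normalized to one common resolution.
batch = [np.ones((h, w)) for h, w in [(480, 640), (600, 800), (240, 320)]]
normalized = [resize_nearest(im, 224, 224) for im in batch]
assert all(im.shape == (224, 224) for im in normalized)
```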
S3, model training: the preprocessed data set is input into a deep convolutional neural network for training to obtain the diagnosis model. The data set is divided proportionally into a training set and a verification set; the training-set portion is input into the deep convolutional neural network and trained on the GPU server. The network is a deep convolutional neural network with several convolutional and pooling layers; every layer except the last applies batch normalization (Batch Norm) and ReLU, and the last two layers are a fully connected layer and a softmax layer. Each element of the softmax output vector lies between 0 and 1 and represents the probability that the face image corresponds to the respective disease. Specifically, the deep convolutional neural network may be a network such as ResNet or Inception v4.
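The softmax stage described above can be sketched as follows; the logit values are illustrative assumptions, not measurements from the patent:

```python
import numpy as np

def softmax(z: np.ndarray) -> np.ndarray:
    """Map a logit vector to probabilities in (0, 1) that sum to 1."""
    e = np.exp(z - z.max())        # subtract max for numerical stability
    return e / e.sum()

logits = np.array([2.0, 1.0, 0.1, -1.0])   # one score per disease class
p = softmax(logits)
assert np.all((p > 0) & (p < 1))           # each element between 0 and 1
assert abs(p.sum() - 1.0) < 1e-9           # a valid probability distribution
```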
S4, the model's performance is verified and optimized: whether the model is over-fitted or under-fitted is judged from the network's performance on the verification set and the training set, and if so, measures such as adjusting network parameters or adding regularization are taken to optimize the model. When the top-5 error rates on both the training set and the verification set are below 5%, training of the auxiliary diagnosis model is deemed complete, and the trained model is transmitted to the model application module.
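The top-5 error criterion can be sketched as follows: a sample counts as correct when its true class is among the five highest-scoring classes. The random probabilities below are illustrative:

```python
import numpy as np

def top5_error(probs: np.ndarray, labels: np.ndarray) -> float:
    """probs: (n_samples, n_classes) scores; labels: (n_samples,) true classes."""
    top5 = np.argsort(probs, axis=1)[:, -5:]          # indices of the 5 largest scores
    hits = (top5 == labels[:, None]).any(axis=1)      # true class within the top 5?
    return float(1.0 - hits.mean())

rng = np.random.default_rng(0)
probs = rng.random((8, 10))
labels = probs.argmax(axis=1)          # true class is the top-1 prediction here
assert top5_error(probs, labels) == 0.0
```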
The model application module comprises a neural network chip and a camera, the camera is used for collecting facial pictures of a patient, and the neural network chip can carry a trained auxiliary diagnosis model. As shown in fig. 3, the working process of the model application module includes:
S5, a clear image of the face of the person to be diagnosed is acquired through the camera, preferably under sufficient lighting and with adequately performing capture equipment.
S6, the face image is input into the auxiliary diagnosis model.
S7, if a suspected disease is detected, the prediction data are output; otherwise, the prediction data are null.
S8, the diagnosis is output.
A specific application scenario of the auxiliary diagnosis model is as follows: the face of the patient currently being seen shows abnormal features, so a picture of the patient's face is taken with the camera and input into the auxiliary diagnosis model to obtain the model's output. After the facial image passes through the model and the softmax output vector is computed, if every disease probability in the vector is below 0.5, the output is judged to be null, either because the patient's condition was not in the data set collected during training or because the system cannot predict the patient's condition. If the vector contains elements with probability above 0.5, the diseases corresponding to the top three such probabilities are output together with their predicted probabilities. The doctor can refer to the system's prediction for further examination and diagnosis.
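The output rule just described can be sketched as a small selection function: report up to three diseases whose predicted probability exceeds 0.5, otherwise return an empty result. The disease names and scores below are placeholders, not from the patent:

```python
# Sketch of the output rule: threshold at 0.5, keep at most the top three.
def select_predictions(probs: dict[str, float], threshold: float = 0.5, k: int = 3):
    above = [(d, p) for d, p in probs.items() if p > threshold]
    above.sort(key=lambda dp: dp[1], reverse=True)    # highest probability first
    return above[:k]                                  # empty list means "output is null"

probs = {"disease_A": 0.72, "disease_B": 0.55, "disease_C": 0.31}
assert select_predictions(probs) == [("disease_A", 0.72), ("disease_B", 0.55)]
assert select_predictions({"disease_A": 0.2}) == []   # all below 0.5: null output
```

Note that with a softmax output the probabilities sum to 1, so at most one class can exceed 0.5; the multi-candidate example above assumes independent scores for illustration.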
Although in this embodiment the auxiliary diagnosis model is mounted on the neural network chip of the model application module, depending on practical requirements the trained model may instead be deployed as mobile phone software or attached to certain medical hardware; that is, the trained auxiliary diagnosis model can be deployed independently on other devices.
The face diagnosis system based on deep learning can assist a doctor in diagnosing a patient with suspected disease features on the face: by taking a picture of the patient's face and inputting it into the model, possible corresponding diseases are identified for the doctor's reference, enabling preliminary screening, facilitating targeted examination, and improving diagnostic efficiency and accuracy.
The above description covers only preferred embodiments of the present invention, and the scope of the invention is not limited thereto. Any equivalent substitution or modification of the technical solutions and inventive concepts described herein, made by a person skilled in the art within the technical scope disclosed by the invention, shall fall within the scope of the invention.
Claims (10)
1. A human face diagnosis system based on deep learning is characterized by comprising a model construction module and a model application module;
the model building module comprises a central processor, a GPU server and a model building memory, wherein the model building memory stores programs which can be run by the central processor and can store facial images of related disease patients;
the model application module comprises a neural network chip and a camera, the camera is used for collecting facial pictures of a patient, the neural network chip can carry an auxiliary diagnosis model, the auxiliary diagnosis model is based on a deep convolution neural network, and is obtained through training, verification and optimization, and the disease probability of a specific disease can be predicted based on a face image.
2. The deep learning-based face diagnosis system according to claim 1, wherein the central processing unit is capable of implementing model construction by running a program, and the specific steps of the model construction include:
s1, collecting a data set: acquiring a facial image of a patient with a relevant disease;
s2, preprocessing data: carrying out data cleaning, labeling and enhancing on the obtained image data;
s3, model training: inputting the preprocessed data set into a deep convolutional neural network for training to obtain an auxiliary diagnosis model;
and S4, verifying and optimizing the model performance, and transmitting the trained auxiliary diagnosis model to the model application module.
3. The deep learning based face diagnosis system according to claim 2, wherein the step S2 specifically includes:
S21, data cleaning: screening the collected face image data, removing blurred or defocused unqualified images, and keeping qualified images in which the face is clearly visible;
S22, data annotation: classifying and labeling the images according to the corresponding conditions;
S23, data enhancement: augmenting the data set by rotating, translating, shearing, and scaling the images to increase the sample size;
S24, face segmentation and face alignment: performing face detection and alignment on the image and cropping it to the face region;
S25, data format normalization: resizing images of different resolutions to a common resolution and unifying the image format.
4. The deep learning based face diagnosis system according to claim 3, wherein the step S24 specifically comprises:
S241, locating the two eye regions within the face region and computing their center coordinates, denoted (x_left, y_left) and (x_right, y_right);
S243, taking the center of the image as the coordinate origin, transforming coordinates according to:
x′ = x·cos a - y·sin a
y′ = y·cos a + x·sin a.
5. The deep learning based face diagnosis system of claim 2, wherein in step S3, the data set is proportionally divided into a training set and a verification set, and the data set of the training set part is input into the deep convolutional neural network and trained on the GPU server.
6. The deep learning based face diagnosis system according to claim 5, wherein in the step S3, the last two layers of the deep convolutional neural network are a fully connected layer and a softmax layer.
7. The deep learning-based face diagnosis system according to claim 6, wherein each element of the softmax layer's output vector lies between 0 and 1 and represents the probability that the face image corresponds to the respective disease.
8. The deep learning based face diagnosis system of claim 5, wherein in step S4, it is determined whether the model is over-fit or under-fit according to the network performance on the verification set and the training set, and if over-fit or under-fit occurs, measures are taken to optimize the model.
9. The deep learning-based face diagnosis system according to claim 5, wherein in step S4, when the error rates of the top five ranked on the training set and the verification set are both lower than 5%, the training of the auxiliary diagnosis model is deemed to be completed.
9. The deep learning-based face diagnosis system according to claim 5, wherein in step S4, when the top-5 error rates on both the training set and the verification set are below 5%, the training of the auxiliary diagnosis model is deemed to be completed.
S5, acquiring a clear image of the face of the person to be diagnosed through the camera;
S6, inputting the face image into the auxiliary diagnosis model;
S7, if a suspected disease is detected, outputting prediction data; otherwise, the prediction data is null;
S8, outputting the diagnosis.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110565687.3A CN113284613A (en) | 2021-05-24 | 2021-05-24 | Face diagnosis system based on deep learning |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110565687.3A CN113284613A (en) | 2021-05-24 | 2021-05-24 | Face diagnosis system based on deep learning |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113284613A true CN113284613A (en) | 2021-08-20 |
Family
ID=77281189
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110565687.3A Pending CN113284613A (en) | 2021-05-24 | 2021-05-24 | Face diagnosis system based on deep learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113284613A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116259422A (en) * | 2023-03-13 | 2023-06-13 | 暨南大学 | Virtual data enhancement-based ophthalmic disease diagnosis and treatment opinion generation method, system, medium and equipment |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108198620A (en) * | 2018-01-12 | 2018-06-22 | 洛阳飞来石软件开发有限公司 | A kind of skin disease intelligent auxiliary diagnosis system based on deep learning |
CN108806792A (en) * | 2017-05-03 | 2018-11-13 | 金波 | Deep learning facial diagnosis system |
CN109994202A (en) * | 2019-03-22 | 2019-07-09 | 华南理工大学 | A method of the face based on deep learning generates prescriptions of traditional Chinese medicine |
CN110009630A (en) * | 2019-04-15 | 2019-07-12 | 中国医学科学院皮肤病医院 | A kind of skin targets region automatic testing method based on deep learning |
CN110415815A (en) * | 2019-07-19 | 2019-11-05 | 银丰基因科技有限公司 | The hereditary disease assistant diagnosis system of deep learning and face biological information |
CN111653365A (en) * | 2020-07-23 | 2020-09-11 | 中山大学附属第一医院 | Nasopharyngeal carcinoma auxiliary diagnosis model construction and auxiliary diagnosis method and system |
CN111816308A (en) * | 2020-07-13 | 2020-10-23 | 中国医学科学院阜外医院 | System for predicting coronary heart disease onset risk through facial picture analysis |
CN111899229A (en) * | 2020-07-14 | 2020-11-06 | 武汉楚精灵医疗科技有限公司 | Advanced gastric cancer auxiliary diagnosis method based on deep learning multi-model fusion technology |
- 2021-05-24: application CN202110565687.3A filed (publication CN113284613A, status pending)
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108806792A (en) * | 2017-05-03 | 2018-11-13 | 金波 | Deep learning facial diagnosis system |
CN108198620A (en) * | 2018-01-12 | 2018-06-22 | 洛阳飞来石软件开发有限公司 | A kind of skin disease intelligent auxiliary diagnosis system based on deep learning |
CN109994202A (en) * | 2019-03-22 | 2019-07-09 | 华南理工大学 | A method of the face based on deep learning generates prescriptions of traditional Chinese medicine |
CN110009630A (en) * | 2019-04-15 | 2019-07-12 | 中国医学科学院皮肤病医院 | A kind of skin targets region automatic testing method based on deep learning |
CN110415815A (en) * | 2019-07-19 | 2019-11-05 | 银丰基因科技有限公司 | The hereditary disease assistant diagnosis system of deep learning and face biological information |
CN111816308A (en) * | 2020-07-13 | 2020-10-23 | 中国医学科学院阜外医院 | System for predicting coronary heart disease onset risk through facial picture analysis |
CN111899229A (en) * | 2020-07-14 | 2020-11-06 | 武汉楚精灵医疗科技有限公司 | Advanced gastric cancer auxiliary diagnosis method based on deep learning multi-model fusion technology |
CN111653365A (en) * | 2020-07-23 | 2020-09-11 | 中山大学附属第一医院 | Nasopharyngeal carcinoma auxiliary diagnosis model construction and auxiliary diagnosis method and system |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116259422A (en) * | 2023-03-13 | 2023-06-13 | 暨南大学 | Virtual data enhancement-based ophthalmic disease diagnosis and treatment opinion generation method, system, medium and equipment |
CN116259422B (en) * | 2023-03-13 | 2024-02-06 | 暨南大学 | Virtual data enhancement-based ophthalmic disease diagnosis and treatment opinion generation method, system, medium and equipment |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20220051405A1 (en) | Image processing method and apparatus, server, medical image processing device and storage medium | |
WO2018120942A1 (en) | System and method for automatically detecting lesions in medical image by means of multi-model fusion | |
CN110503630B (en) | Cerebral hemorrhage classifying, positioning and predicting method based on three-dimensional deep learning model | |
CN111598867B (en) | Method, apparatus, and computer-readable storage medium for detecting specific facial syndrome | |
CN114998210B (en) | Retinopathy of prematurity detecting system based on deep learning target detection | |
CN111598875A (en) | Method, system and device for building thyroid nodule automatic detection model | |
CN112699868A (en) | Image identification method and device based on deep convolutional neural network | |
CN111462102B (en) | Intelligent analysis system and method based on novel coronavirus pneumonia X-ray chest radiography | |
CN113808738B (en) | Disease identification system based on self-identification image | |
CN113610118B (en) | Glaucoma diagnosis method, device, equipment and method based on multitasking course learning | |
CN115423754A (en) | Image classification method, device, equipment and storage medium | |
CN113782184A (en) | Cerebral apoplexy auxiliary evaluation system based on facial key point and feature pre-learning | |
CN112241961A (en) | Chest X-ray film auxiliary diagnosis method and system based on deep convolutional neural network | |
CN116091490A (en) | Lung nodule detection method based on YOLOv4-CA-CBAM-K-means++ -SIOU | |
CN114445356A (en) | Multi-resolution-based full-field pathological section image tumor rapid positioning method | |
Manikandan et al. | Segmentation and Detection of Pneumothorax using Deep Learning | |
CN115187566A (en) | Intracranial aneurysm detection method and device based on MRA image | |
CN113284613A (en) | Face diagnosis system based on deep learning | |
CN114140437A (en) | Fundus hard exudate segmentation method based on deep learning | |
CN116091446A (en) | Method, system, medium and equipment for detecting abnormality of esophageal endoscope image | |
CN113344022A (en) | Chest radiography detection method based on deep learning | |
CN112396597A (en) | Method and device for rapidly screening unknown cause pneumonia images | |
CN117496323B (en) | Multi-scale second-order pathological image classification method and system based on transducer | |
CN111816308A (en) | System for predicting coronary heart disease onset risk through facial picture analysis | |
Tasnim et al. | A Deep Learning Based Image Processing Technique for Early Lung Cancer Prediction |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | Application publication date: 20210820 ||