CN111613306A - Multi-feature fusion facial paralysis automatic evaluation method - Google Patents
- Publication number
- CN111613306A CN111613306A CN202010426497.9A CN202010426497A CN111613306A CN 111613306 A CN111613306 A CN 111613306A CN 202010426497 A CN202010426497 A CN 202010426497A CN 111613306 A CN111613306 A CN 111613306A
- Authority
- CN
- China
- Prior art keywords
- region
- facial
- feature
- patient
- features
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06V40/161 — Human faces: Detection; Localisation; Normalisation
- G06F18/253 — Pattern recognition: Fusion techniques of extracted features
- G06V40/171 — Human faces: Local features and components; Facial parts; Geometrical relationships
- G06V40/172 — Human faces: Classification, e.g. identification
- G16H30/40 — ICT specially adapted for processing medical images, e.g. editing
- G16H50/30 — ICT specially adapted for medical diagnosis: calculating health indices; individual health risk assessment
Abstract
The invention discloses a multi-feature fusion method for the automatic assessment of facial paralysis, in the technical field of medical diagnosis, comprising the following steps: a facial image sample of a facial paralysis patient is acquired with the patient facing the photographing detection module; the facial photographing module performs three-dimensional correction of the facial image according to the patient's facial features, correcting the image in the yaw, roll and pitch dimensions and performing focal-length correction when the patient's head moves or deviates. After the facial image sample is obtained, multi-region feature fusion of the patient's face effectively increases the reliability and relevance of the data during comparison. By comparing multiple facial region features and fusing them with the seated side-view region features and the region perception features, the computer deep-learning system greatly enhances the precision of the measured changes and improves comparison efficiency. Compared with the prior art, the method yields an accurate assessment of the patient's facial paralysis grade, on which subsequent treatment can conveniently be based.
Description
Technical Field
The invention relates to the technical field of medical diagnosis, in particular to a multi-feature fusion facial paralysis automatic evaluation method.
Background
Facial paralysis is a common and frequently encountered disease whose main symptom is that the muscles of facial expression can no longer perform their normal actions. Facial paralysis is classified as peripheral or central according to the location of the nerve injury. Peripheral facial paralysis, caused by damage to the facial nerve nucleus or to the nerve below it, manifests as complete paralysis of the ipsilateral face; it may be triggered by exposure to wind-cold, or by ear or meningeal infection. Central facial paralysis, caused by damage above the facial nerve nucleus, manifests as paralysis of the lower facial muscles on the contralateral side and is more common in cerebrovascular disease.
Although facial paralysis is not life-threatening, the disfigurement it causes in social settings negatively affects the patient's psychology. Facial paralysis can recover completely provided it is found early, treated in time and the treatment measures are appropriate. Clinically, the facial paralysis patient is evaluated and graded against a facial paralysis grading standard, and a suitable treatment plan is then made according to the grading result. The establishment of an appropriate treatment regimen is critical to rehabilitation, and that regimen is based on the grading result; the graded evaluation of facial paralysis is therefore of great significance for its rehabilitation therapy.
In the prior art, however, because facial distortion differs from patient to patient, doctors can only judge for themselves against the standards. The standards provide a basis for judgment, but grading the face remains highly subjective; moreover, since the features of the facial regions differ and the regions carry different judgment weights, a doctor cannot easily combine the various features. Deviations therefore easily occur when evaluating a patient's facial paralysis grade, adversely affecting subsequent nursing and treatment.
Disclosure of Invention
In order to overcome the above defects in the prior art, an embodiment of the present invention provides a multi-feature fusion method for the automatic assessment of facial paralysis. The technical problem to be solved by the invention is: how to fuse multiple facial region features when grading facial paralysis patients, so as to improve the accuracy of the grading judgment.
To achieve this purpose, the invention provides the following technical scheme: a multi-feature fusion facial paralysis automatic evaluation method, comprising the following steps:
step one, a facial image sample of a facial paralysis patient is acquired: the patient faces the photographing detection module directly, and the facial photographing module performs three-dimensional correction of the facial image according to the patient's facial features, correcting the image in the yaw, roll and pitch dimensions and performing focal-length correction when the patient's head moves or deviates; 10-30 image sets are photographed at 0.5 s intervals and 5-10 images are extracted from each for combination; after an image set is photographed, a voice-module synthesizer reads out a motion prompt so that the patient performs the eyebrow-lifting, eye-closing, tooth-showing and cheek-bulging actions, and the patient is then prompted to sit sideways so that the side-view facial information can be photographed;
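The in-plane (roll) part of the three-dimensional correction in step one can be sketched from two eye landmarks. This is a minimal illustration under assumed landmark coordinates, not the patent's implementation; full yaw and pitch correction would additionally require a 3-D face model.

```python
import numpy as np

def roll_angle_from_eyes(left_eye, right_eye):
    """Roll angle (degrees) of the head, estimated from the line joining
    the two eye centres; 0 means the eyes are level."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    return np.degrees(np.arctan2(dy, dx))

def correct_roll(points, left_eye, right_eye):
    """Rotate 2-D landmark coordinates about the eye midpoint so the
    inter-ocular line becomes horizontal (roll correction only)."""
    theta = -np.radians(roll_angle_from_eyes(left_eye, right_eye))
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[c, -s], [s, c]])
    centre = (np.asarray(left_eye, float) + np.asarray(right_eye, float)) / 2.0
    return (np.asarray(points, float) - centre) @ rot.T + centre
```

Applied to a whole image rather than to landmark points, the same rotation would be performed with an affine warp around the eye midpoint.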
step two, a computer deep-learning system is established together with its standard database: the system learns the database's standard male and female facial-form region feature images to obtain a standard image database of whole-region features for male and female facial forms; after an image set is entered, the perception condition of each feature region of the patient's facial partitions is queried and input into the system;
step three, the computer deep-learning system partitions the faces in the photographed image set: the patient's face is classified by region, from top to bottom, into the forehead, eyebrow, eye, nose, cheek and mouth feature regions, and distinguishing features are extracted from the eyebrow, eye, lip and cheek feature regions during the eyebrow-lifting, eye-closing, tooth-showing and cheek-bulging actions;
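The top-to-bottom partition of step three can be illustrated as horizontal bands between landmark y-coordinates. The landmark names and the full-width-strip geometry are assumptions for illustration only; in practice the cheek regions flank the nose rather than forming a single strip.

```python
def partition_face(landmarks):
    """Split a face into the six feature regions of step three
    (forehead, eyebrow, eye, nose, cheek, mouth) as vertical bands,
    given a dict of hypothetical landmark y-coordinates."""
    bands = [
        ("forehead", landmarks["hairline_y"],   landmarks["brow_top_y"]),
        ("eyebrow",  landmarks["brow_top_y"],   landmarks["eye_top_y"]),
        ("eye",      landmarks["eye_top_y"],    landmarks["eye_bottom_y"]),
        ("nose",     landmarks["eye_bottom_y"], landmarks["nose_base_y"]),
        ("cheek",    landmarks["nose_base_y"],  landmarks["mouth_top_y"]),
        ("mouth",    landmarks["mouth_top_y"],  landmarks["chin_y"]),
    ]
    # each region is the strip between its two y-bounds, top to bottom
    return {name: (top, bottom) for name, top, bottom in bands}
```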
step four, the seated side-view features are obtained: from the side-view information of the face turned to the side, the inclination included angles of the forehead-line, eyebrow, eye, nose, cheek and mouth feature regions on the two sides of the face are judged relative to the position of the center line through the nose root;
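The included-angle judgment of step four can be sketched as the angle between the nose-root midline and the line from the nose root to a feature-region point, compared across the two sides. The coordinates and the two-point definition of the midline are illustrative assumptions.

```python
import numpy as np

def inclination_vs_midline(region_pt, nose_root, nose_tip):
    """Angle (degrees) between the facial midline (nose root -> nose tip)
    and the line from the nose root to a feature-region point."""
    midline = np.asarray(nose_tip, float) - np.asarray(nose_root, float)
    to_region = np.asarray(region_pt, float) - np.asarray(nose_root, float)
    cosang = np.dot(midline, to_region) / (
        np.linalg.norm(midline) * np.linalg.norm(to_region))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

def side_asymmetry(left_pt, right_pt, nose_root, nose_tip):
    """Difference between the left- and right-side inclination angles;
    near zero for a symmetric face, larger with facial droop."""
    return abs(inclination_vs_midline(left_pt, nose_root, nose_tip)
               - inclination_vs_midline(right_pt, nose_root, nose_tip))
```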
step five, the multiple features are fused for comparative evaluation: the input facial region features, region perception features and side-face included-angle features are fused and compared; because the features influence one another, additional evaluation data are added to the regional facial-paralysis feature values, the side-view region feature gradients also influence the facial-paralysis feature values, and a transverse cross-comparison is performed on the perception feature vectors and the database data. The computer deep-learning system first pre-compares the forehead-line, eyebrow, eye, nose and cheek feature regions on the two sides of the feature image sets; after this facial feature comparison is completed, a weighted comparison is performed according to the inclination included-angle features obtained from the side-view photographs of the feature regions; the system then scores the facial features against the database standard region features according to the region perception features, the side-view included-angle features, the fused image region features and the male/female facial-form fusion region features, and the facial paralysis grade of the result model is rated according to the comparison results.
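The weighted fusion of step five can be sketched as a weighted average of per-region asymmetry scores. The region weights below are illustrative placeholders (the patent states that regions carry different judgment weights but does not publish values).

```python
def fuse_region_scores(region_scores, weights=None):
    """Weighted fusion of per-region asymmetry scores (0 = symmetric,
    1 = fully asymmetric) into one severity value in [0, 1].
    The default weights are illustrative, not from the patent."""
    default = {"forehead": 0.10, "eyebrow": 0.15, "eye": 0.30,
               "nose": 0.05, "cheek": 0.15, "mouth": 0.25}
    weights = weights or default
    total = sum(weights[r] for r in region_scores)   # normalise over present regions
    return sum(weights[r] * s for r, s in region_scores.items()) / total
```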
The patient is photographed from multiple angles, and several front-view facial images are combined to obtain an accurate image of the patient. After the facial image samples are obtained, the patient's facial region features are classified and judged, and the facial region perception is obtained through consultation with doctors. A prior-art computer deep-learning system compares and learns standard image features — facial-paralysis features of affected men and women as well as unaffected image features — to create a standard database, and a large number of standard images are selected for computer learning so as to obtain a judgment module of higher accuracy. The extracted facial region features are compared with the standard images of the database, and the difference values obtained are graded numerically according to the existing H-B (House-Brackmann) grading standard. The fused multi-region data of the patient's face effectively increase the reliability and relevance of the data during comparison; comparing multiple facial region features, fused with the seated side-view partition features and the region perception features, greatly enhances the precision of the measured changes under the deep-learning system, improves comparison efficiency, yields an accurate assessment of the patient's facial paralysis grade, and facilitates subsequent treatment according to this assessment.
In a preferred embodiment, the eyebrow feature region comprises the inclination-angle region surfaces at the two ends of the eyebrow, the eyebrow-top region surface and the eye region surface.
In a preferred embodiment, the nose feature region comprises two side face regions of the nose and a bottom end region of the nose.
In a preferred embodiment, the eye feature region comprises an upper eye region, a lower eye region and a canthus region.
In a preferred embodiment, the eye feature region further comprises the oblique included angles between the two sides of the eye and the canthus.
In a preferred embodiment, the mouth feature region comprises an inner lip region, an outer lip region, and a corner lip region.
In a preferred embodiment, the side-view facial information is photographed by rotating 90 degrees to either side of the center line.
In a preferred embodiment, the database standard region feature scores are set according to the H-B (House-Brackmann) grading scale.
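How a fused severity score maps onto the H-B scale can be sketched as below. The evenly spaced bin edges are an assumption for illustration; a clinical system would calibrate them against patient data graded by doctors.

```python
def hb_grade(severity):
    """Map a fused severity score in [0, 1] to a House-Brackmann grade
    I (normal) through VI (total paralysis). Bin edges are illustrative."""
    edges = [1/6, 2/6, 3/6, 4/6, 5/6]           # five thresholds -> six grades
    grade = 1 + sum(severity > e for e in edges)  # count of exceeded thresholds
    return ["I", "II", "III", "IV", "V", "VI"][grade - 1]
```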
1. The invention obtains a facial image sample of the patient and classifies and judges the patient's facial region features to obtain the facial region perception. A prior-art computer deep-learning system compares and learns standard image features — facial-paralysis features of affected men and women and unaffected image features — to create a standard database. When the facial region features are extracted, they are compared with the standard images of that database. The fused multi-region data of the patient's face effectively increase the reliability and relevance of the data during comparison; comparing multiple facial region features, fused with the seated side-view region features and the region perception features, greatly enhances the precision of the measured changes under the deep-learning system, improves comparison efficiency and, compared with the prior art, yields an accurate assessment of the patient's facial paralysis grade, facilitating subsequent treatment based on this assessment.
Drawings
FIG. 1 is a process flow of the present invention.
FIG. 2 is a flowchart of the step-two method for building the computer deep-learning system of the present invention.
FIG. 3 is a flowchart of the step-three method for partitioning the faces in an image set according to the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The invention provides a multi-feature fusion facial paralysis automatic evaluation method, which comprises the following steps:
step one, a facial image sample of a facial paralysis patient is acquired: the patient faces the photographing detection module directly, and the facial photographing module performs three-dimensional correction of the facial image according to the patient's facial features, correcting the image in the yaw, roll and pitch dimensions and performing focal-length correction when the patient's head moves or deviates; 10-30 image sets are photographed at 0.5 s intervals and 5-10 images are extracted from each for combination; after an image set is photographed, a voice-module synthesizer reads out a motion prompt so that the patient performs the eyebrow-lifting, eye-closing, tooth-showing and cheek-bulging actions, and the patient is then prompted to sit sideways so that the side-view facial information can be photographed;
step two, a computer deep-learning system is established together with its standard database: the system learns the database's standard male and female facial-form region feature images to obtain a standard image database of whole-region features for male and female facial forms; after an image set is entered, the perception condition of each feature region of the patient's facial partitions is queried and input into the system;
step three, the computer deep-learning system partitions the faces in the photographed image set: the patient's face is classified by region, from top to bottom, into the forehead, eyebrow, eye, nose, cheek and mouth feature regions, and distinguishing features are extracted from the eyebrow, eye, lip and cheek feature regions during the eyebrow-lifting, eye-closing, tooth-showing and cheek-bulging actions;
step four, the seated side-view features are obtained: from the side-view information of the face turned to the side, the inclination included angles of the forehead-line, eyebrow, eye, nose, cheek and mouth feature regions on the two sides of the face are judged relative to the position of the center line through the nose root;
step five, the multiple features are fused for comparative evaluation: the input facial region features, region perception features and side-face included-angle features are fused and compared; because the features influence one another, additional evaluation data are added to the regional facial-paralysis feature values, the side-view region feature gradients also influence the facial-paralysis feature values, and a transverse cross-comparison is performed on the perception feature vectors and the database data. The computer deep-learning system first pre-compares the forehead-line, eyebrow, eye, nose and cheek feature regions on the two sides of the feature image sets; after this facial feature comparison is completed, a weighted comparison is performed according to the inclination included-angle features obtained from the side-view photographs of the feature regions; the system then scores the facial features against the database standard region features according to the region perception features, the side-view included-angle features, the fused image region features and the male/female facial-form fusion region features, and the facial paralysis grade of the result model is rated according to the comparison results.
The eyebrow feature region comprises the inclination-angle region surfaces at the two ends of the eyebrow, the eyebrow-top region surface and the eye region surface; the nose feature region comprises the two side face regions of the nose and the nose-bottom region; the eye feature region comprises an upper eye region, a lower eye region and a canthus region, and further includes the oblique included angles between the two sides of the eye and the canthus; the mouth feature region comprises an inner lip region, an outer lip region and a lip-corner region; the side-view facial information is photographed by rotating 90 degrees to either side of the center line; and the database standard region feature scores are set according to the H-B grading scale.
As shown in figs. 1 to 3, this embodiment is specifically as follows: the patient is photographed from multiple angles, and several front-view facial images are combined to obtain an accurate image of the patient. After the facial image samples are obtained, the patient's facial region features are classified and judged; because the motor nerves are damaged, the perception of peripheral movements affected by the facial regions, such as eyebrow lifting, eye closing, cheek bulging and smiling, is obtained through consultation with doctors. A prior-art computer deep-learning system (implemented in MATLAB) compares and learns standard image features — facial-paralysis features of affected men and women as well as unaffected image features — to establish a standard database, and a large number of standard images are selected for computer learning so as to obtain a judgment module of higher accuracy. The extracted facial region features are compared with the standard images of the database, and the difference values obtained by comparison are graded numerically according to the existing H-B grading standard. The fused multi-region data of the patient's face effectively increase the reliability and relevance of the data during comparison; comparing multiple facial region features, fused with the seated side-view region features and the region perception features, greatly enhances the precision of the measured changes under the deep-learning system, improves comparison efficiency, yields an accurate assessment of the patient's facial paralysis grade, and facilitates subsequent treatment according to this assessment.
A final note: first, in the description of the present application, unless otherwise specified and limited, the terms "mounted," "connected," and "connected to" should be understood broadly: the connection may be mechanical or electrical, or a communication between two elements, and may be direct; "upper," "lower," "left," and "right" indicate only a relative positional relationship, which may change when the absolute position of the described object changes;
second, in the drawings of the disclosed embodiments, only the structures related to the disclosed embodiments are shown; other structures may follow common designs, and, in the absence of conflict, the same embodiment and different embodiments of the invention may be combined with each other;
and finally: the above description covers only preferred embodiments of the present invention and is not to be construed as limiting it; any modifications, equivalents, improvements and the like within the spirit and principles of the present invention are intended to fall within its scope.
Claims (8)
1. A multi-feature fusion facial paralysis automatic evaluation method is characterized by comprising the following steps:
step one, a facial image sample of a facial paralysis patient is acquired: the patient faces the photographing detection module directly, and the facial photographing module performs three-dimensional correction of the facial image according to the patient's facial features, correcting the image in the yaw, roll and pitch dimensions and performing focal-length correction when the patient's head moves or deviates; 10-30 image sets are photographed at 0.5 s intervals and 5-10 images are extracted from each for combination; after an image set is photographed, a voice-module synthesizer reads out a motion prompt so that the patient performs the eyebrow-lifting, eye-closing, tooth-showing and cheek-bulging actions, and the patient is then prompted to sit sideways so that the side-view facial information can be photographed;
step two, a computer deep-learning system is established together with its standard database: the system learns the database's standard male and female facial-form region feature images to obtain a standard image database of whole-region features for male and female facial forms; after an image set is entered, the perception condition of each feature region of the patient's facial partitions is queried and input into the system;
step three, the computer deep-learning system partitions the faces in the photographed image set: the patient's face is classified by region, from top to bottom, into the forehead, eyebrow, eye, nose, cheek and mouth feature regions, and distinguishing features are extracted from the eyebrow, eye, lip and cheek feature regions during the eyebrow-lifting, eye-closing, tooth-showing and cheek-bulging actions;
step four, the seated side-view features are obtained: from the side-view information of the face turned to the side, the inclination included angles of the forehead-line, eyebrow, eye, nose, cheek and mouth feature regions on the two sides of the face are judged relative to the position of the center line through the nose root;
step five, the multiple features are fused for comparative evaluation: the input facial region features, region perception features and side-face included-angle features are fused and compared; because the features influence one another, additional evaluation data are added to the regional facial-paralysis feature values, the side-view region feature gradients also influence the facial-paralysis feature values, and a transverse cross-comparison is performed on the perception feature vectors and the database data. The computer deep-learning system first pre-compares the forehead-line, eyebrow, eye, nose and cheek feature regions on the two sides of the feature image sets; after this facial feature comparison is completed, a weighted comparison is performed according to the inclination included-angle features obtained from the side-view photographs of the feature regions; the system then scores the facial features against the database standard region features according to the region perception features, the side-view included-angle features, the fused image region features and the male/female facial-form fusion region features, and the facial paralysis grade of the result model is rated according to the comparison results.
2. The method for automatically assessing facial paralysis through multi-feature fusion according to claim 1, wherein: the eyebrow feature region comprises the inclination-angle regions at the two ends of the eyebrow, the region above the eyebrow, and the eye region.
3. The method for automatically assessing facial paralysis through multi-feature fusion according to claim 1, wherein: the nose feature region comprises the face regions on both sides of the nose and the region at the base of the nose.
4. The method for automatically assessing facial paralysis through multi-feature fusion according to claim 1, wherein: the eye feature region comprises an upper eye region, a lower eye region, and a canthus region.
5. The method for automatically assessing facial paralysis through multi-feature fusion according to claim 4, wherein: the eye feature region further comprises the inclination angles between the two sides of the eye and the canthus.
6. The method for automatically assessing facial paralysis through multi-feature fusion according to claim 1, wherein: the mouth feature region includes an inner lip region, an outer lip region, and a corner lip region.
7. The method for automatically assessing facial paralysis through multi-feature fusion according to claim 1, wherein: the side-view information of the face is captured by rotating the camera 90 degrees to either side of the facial midline.
8. The method for automatically assessing facial paralysis through multi-feature fusion according to claim 1, wherein: the feature scores of the database standard regions are set according to the House-Brackmann (H-B) grading scale.
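Claim 8 ties the database standard-region scores to the House-Brackmann scale, which grades facial nerve function from I (normal) to VI (total paralysis). A minimal sketch of mapping a fused asymmetry score onto the six grades follows; the threshold values are illustrative assumptions, not cut-offs specified by the patent.

```python
def hb_grade(score):
    """Map a fused asymmetry score in [0, 1] (0 = fully symmetric) to a
    House-Brackmann grade from 1 (normal) to 6 (total paralysis)."""
    boundaries = [0.05, 0.20, 0.40, 0.60, 0.80]  # assumed grade cut-offs
    for grade, upper in enumerate(boundaries, start=1):
        if score <= upper:
            return grade
    return 6  # worse than every boundary: total paralysis
```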
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010426497.9A CN111613306A (en) | 2020-05-19 | 2020-05-19 | Multi-feature fusion facial paralysis automatic evaluation method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111613306A true CN111613306A (en) | 2020-09-01 |
Family
ID=72196559
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010426497.9A Pending CN111613306A (en) | 2020-05-19 | 2020-05-19 | Multi-feature fusion facial paralysis automatic evaluation method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111613306A (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107713984A (en) * | 2017-02-07 | 2018-02-23 | 王俊 | Facial paralysis objective evaluation method and its system |
CN109508644A (en) * | 2018-10-19 | 2019-03-22 | 陕西大智慧医疗科技股份有限公司 | Facial paralysis grade assessment system based on the analysis of deep video data |
CN110084259A (en) * | 2019-01-10 | 2019-08-02 | 谢飞 | A kind of facial paralysis hierarchical synthesis assessment system of combination face texture and Optical-flow Feature |
CN110097970A (en) * | 2019-06-26 | 2019-08-06 | 北京康健数字化健康管理研究院 | A kind of facial paralysis diagnostic system and its system method for building up based on deep learning |
CN111126180A (en) * | 2019-12-06 | 2020-05-08 | 四川大学 | Facial paralysis severity automatic detection system based on computer vision |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113488200A (en) * | 2021-07-26 | 2021-10-08 | 平安科技(深圳)有限公司 | Intelligent inquiry method, device, computer equipment and storage medium |
CN113488200B (en) * | 2021-07-26 | 2023-07-25 | 平安科技(深圳)有限公司 | Intelligent inquiry method, intelligent inquiry device, computer equipment and storage medium |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20200901 |