CN111126180B - Facial paralysis severity automatic detection system based on computer vision

Info

Publication number
CN111126180B
CN111126180B (application CN201911242254.3A / CN201911242254A)
Authority
CN
China
Prior art keywords
image
facial paralysis
classifier
patient
severe
Prior art date
Legal status
Active
Application number
CN201911242254.3A
Other languages
Chinese (zh)
Other versions
CN111126180A (en)
Inventor
赵启军
张继成
袁志
李鹏飞
谭顺娥
周秀蓉
郭子衣
邵欣
闫思岑
朱春霖
Current Assignee
Sichuan Integrative Medicine Hospital
Sichuan University
Original Assignee
Sichuan Integrative Medicine Hospital
Sichuan University
Priority date
Filing date
Publication date
Application filed by Sichuan Integrative Medicine Hospital and Sichuan University
Priority to CN201911242254.3A
Publication of CN111126180A
Application granted
Publication of CN111126180B
Legal status: Active (current)
Anticipated expiration

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 - Detection; Localisation; Normalisation
    • G06V40/168 - Feature extraction; Face representation
    • G06V40/172 - Classification, e.g. identification
    • G06V40/174 - Facial expression recognition
    • G16H - HEALTHCARE INFORMATICS, i.e. ICT SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/20 - ICT specially adapted for the handling or processing of medical images, e.g. DICOM, HL7 or PACS
    • G16H50/20 - ICT specially adapted for medical diagnosis; computer-aided diagnosis, e.g. based on medical expert systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Public Health (AREA)
  • Medical Informatics (AREA)
  • Biomedical Technology (AREA)
  • Epidemiology (AREA)
  • Primary Health Care (AREA)
  • Radiology & Medical Imaging (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Pathology (AREA)
  • Image Analysis (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses an automatic facial paralysis severity detection system based on computer vision, belonging to the field of computer vision. The system comprises an image acquisition module, an information storage module, an image processor and a human-computer interaction device; the image processor preprocesses the patient's facial images and inputs them into preset classifiers, which include a natural expression classifier, an open-eye classifier, a severe tooth-showing classifier, a tooth-showing classifier and an eyebrow-raising classifier. The protected detection system assists doctors in evaluating and diagnosing facial paralysis severity; compared with the manual measurement method, the system provided by the invention is convenient to use and highly efficient.

Description

Facial paralysis severity automatic detection system based on computer vision
Technical Field
The invention relates to the field of face image recognition, in particular to a facial paralysis severity automatic detection system based on computer vision.
Background
Peripheral facial paralysis, also known as Bell's palsy, is a localized paralytic disorder manifested as motor dysfunction on one side of the face. Clinically, patients cannot normally complete everyday expression actions such as raising the eyebrows, wrinkling the nose and showing the teeth, and in severe facial paralysis patients obvious facial asymmetry can be observed even under natural expression. Because the affected side cannot move normally, patients cannot properly carry out daily activities such as drinking and eating, and saliva often flows out on the affected side. The disease therefore greatly affects both the patient's appearance and daily life.
The House-Brackmann (H-B) assessment method is the most commonly used grading of facial paralysis severity. It divides facial paralysis into six severity grades, from grade 0 to grade 5: grade 0 is the mildest, with the patient's face fully recovered to normal, and grade 5 is the most severe. In traditional Chinese medicine, facial paralysis of severity grade 4 or 5 is considered severe, grade 2 or 3 moderate, and grade 0 or 1 mild. The facial paralysis grades and corresponding clinical symptoms under the H-B assessment method are shown in Table 1.
Table 1: symptom table for evaluating severity of H-B facial paralysis patient
[Table 1 is reproduced as an image in the original publication; its text is not available.]
At present, doctors evaluate the severity of a patient's facial paralysis by manual measurement or electromyography. With manual measurement, the physician first asks the patient to perform facial movements such as raising the eyebrows or showing the teeth, and then measures the distances moved by the eyebrows and the corners of the mouth to assess severity. Such an assessment is accurate, but the procedure is complex and time-consuming, and is therefore unsuitable for routine facial paralysis diagnosis. Electromyography examines the electrical activity of the facial muscles to understand the response of the facial nerve and determine whether it is normal. This method is simpler than manual measurement but gives less accurate severity results. A device is therefore needed that assists physicians in making a quick and accurate assessment of facial paralysis severity.
Disclosure of Invention
The invention aims to overcome the defects in the prior art and provide an automatic facial paralysis severity detection system based on computer vision.
In order to achieve the above purpose, the invention provides the following technical scheme:
an automatic facial paralysis severity detection system based on computer vision, comprising: an image acquisition module, an information storage module, an image processor and a man-machine interaction device,
the image acquisition module acquires image information for judging the severity of the facial paralysis and outputs the image information to the image processor;
the information storage module is used for storing the facial paralysis severity judgment intermediate judgment result output by the image processor;
the human-computer interaction device comprises an input device and a display device, wherein the input device is used for inputting a control instruction to the image processor, and the display device receives and displays a severity judgment result output by the image processor;
and the image processor is used for preprocessing the facial information of the patient acquired by the image acquisition module, inputting the preprocessed facial information into the preset classifiers and outputting the facial paralysis severity judgment result, wherein the preset classifiers comprise a natural expression classifier, an open-eye classifier, a severe tooth-showing classifier, a tooth-showing classifier and an eyebrow-raising classifier.
The method for judging the facial paralysis severity degree by the system comprises the following steps:
s1, according to the natural facial expression control instruction of the input device, the image processor inputs the natural facial expression image of the patient acquired from the image acquisition module into a preset natural expression classifier after natural facial expression preprocessing, and outputs a natural expression judgment result to the information storage module;
s2, according to the image control instruction of the input device when the eyes are closed, the image processor inputs the image of the same patient when the eyes are closed, which is acquired from the image acquisition module, into a preset open-eye classifier after the eye closing pretreatment, and outputs the open-eye judgment result to the information storage module;
s3, the input device of the human-computer interaction device outputs a preliminary judgment instruction to the image processor, the image processor reads the open-eye judgment result and the natural expression judgment result from the information storage module according to the preliminary judgment instruction, and outputs the preliminary judgment result to the display device and the information storage module according to the open-eye judgment result and the natural expression judgment result, the preliminary judgment result is non-severe facial paralysis or severe facial paralysis, when the judgment result is severe facial paralysis, the step S4 is executed, and when the judgment result is non-severe facial paralysis, the steps S5-S7 are executed;
s4, when the patient has severe facial paralysis, inputting a tooth-showing action image acquisition instruction to the image acquisition module through the input device of the human-computer interaction device, controlling the image acquisition module to acquire an image of the tooth-showing action of the same patient, and inputting it into the image processor; the image processor preprocesses the tooth-showing action image, inputs the preprocessed image into the preset severe tooth-showing classifier, and outputs the severe tooth-showing judgment result to the information storage module; according to the severe facial paralysis judgment instruction input by the input device, the image processor reads the severe tooth-showing judgment result and the preliminary judgment result from the information storage module, obtains the patient's severe facial paralysis grade, and displays it in the display device;
s5, when the patient is not severe facial paralysis, inputting a tooth-showing movement image acquisition instruction to an image acquisition module through an input device of the human-computer interaction device, controlling the image acquisition module to acquire an image of tooth-showing movement of the same patient, inputting the image of tooth-showing movement into an image processor, preprocessing the image of tooth-showing movement by the image processor, inputting the preprocessed image of tooth-showing movement into a preset tooth-showing classifier, and outputting a tooth-showing judgment result to an information storage module;
s6, inputting an eyebrow lifting action image acquisition instruction to an image acquisition module through an input device of the human-computer interaction device, controlling the image acquisition module to acquire an image of the eyebrow lifting action of the same patient again, storing the image of the eyebrow lifting action in an information storage module, inputting the image of the eyebrow lifting action into an image processor, preprocessing the image of the eyebrow lifting action by the image processor, inputting the preprocessed image of the eyebrow lifting action into a preset eyebrow lifting classifier, and outputting an eyebrow lifting judgment result to the information storage module;
s7, the image processor reads the judgment result of the tooth-showing and the judgment result of the eyebrow-lifting from the information storage module according to the judgment instruction of the non-severe facial paralysis input by the input device, obtains the judgment coefficient of the non-severe facial paralysis according to the judgment result of the tooth-showing and the judgment result of the eyebrow-lifting, determines the judgment grade of the non-severe facial paralysis according to the judgment coefficient of the non-severe facial paralysis, and displays the judgment grade of the non-severe facial paralysis in the display device.
And constructing a natural expression classifier by taking whether the face of the patient is symmetrical under the natural expression as a basis, wherein the training step of the natural expression classifier is as follows:
s101, obtaining pupil center characteristic point coordinates according to the eye contour characteristic point coordinates of the patient;
s102, obtaining a deflection angle of the human face relative to the horizontal direction according to the coordinates of the central feature points of the left eye pupil and the right eye pupil, and constructing a rotation matrix according to the deflection angle;
s103, carrying out coordinate correction on the image according to the rotation matrix, and carrying out pixel interpolation on the corrected image by using a nearest neighbor interpolation method;
s104, cutting the image subjected to the pixel interpolation processing, and carrying out scale normalization processing to obtain a preprocessed face image;
and S105, training the preprocessed face images with a nonlinear SVM classifier based on a radial basis function (RBF) kernel to obtain the natural expression classifier.
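As a concrete illustration of steps S101-S105, the following minimal Python sketch trains such a classifier with scikit-learn; the variable names, flattened-pixel features and hyperparameters are illustrative assumptions rather than values fixed by the invention.

```python
# Minimal sketch of S105: train a nonlinear SVM with an RBF kernel on
# preprocessed (aligned, cropped, 224x224) face images.
# `faces` and `labels` are hypothetical placeholders; labels are
# 1 (severe facial paralysis) / 0 (non-severe), as used by the system.
import numpy as np
from sklearn.svm import SVC

def train_natural_expression_classifier(faces, labels):
    X = np.array([f.reshape(-1) for f in faces], dtype=np.float32)
    clf = SVC(kernel="rbf", C=1.0, gamma="scale")  # assumed hyperparameters
    clf.fit(X, np.asarray(labels))
    return clf
```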
Whether the patient has the open-eye phenomenon under the eye-closing state is used as a basis to construct the open-eye classifier, and the training step of the open-eye classifier is as follows:
s201, obtaining pupil center characteristic point coordinates according to the eye contour characteristic point coordinates of the patient;
s202, obtaining a deflection angle of the human face relative to the horizontal direction according to the coordinates of the central feature points of the left eye pupil and the right eye pupil, and constructing a rotation matrix according to the deflection angle;
s203, correcting the coordinates of the image according to the rotation matrix, and performing pixel interpolation on the corrected image by using a nearest neighbor interpolation method;
s204, flipping the image after pixel interpolation according to the input affected-side information of the patient: if the patient has left facial paralysis, the image does not need to be flipped; if the patient has right facial paralysis, the image is flipped horizontally so that the right facial paralysis becomes a left facial paralysis;
S205, cutting out a left eye area in the image subjected to the turnover processing, and carrying out scale normalization processing;
s206, extracting local binary histogram features of the human eye region image after the scale normalization processing;
and S207, training the local binary histogram features by using an SVM classifier to obtain an open-eye classifier.
The severe tooth-showing classifier is constructed on the basis of the affected-side mouth movement of severe facial paralysis patients, and its training comprises the following steps:
s301, obtaining tooth-showing video data of severe facial paralysis patients;
s302, extracting a group of mouth-region motion features with gradient histogram features according to the mouth feature point positions;
and S303, inputting the group of mouth-region motion features into an SVM classifier and training to obtain the severe tooth-showing classifier.
The implementation of step S302 includes:
s3021, selecting 5 equally spaced frames from the tooth-showing video data of the severe facial paralysis patient as the images for extracting motion features;
s3022, extracting, according to the mouth feature point positions, the local image regions $I_n$ of the 5 frames centered on the feature points;
and S3023, respectively calculating the gradient histogram features of the local image regions of the 5 frames centered on the feature points, and arranging them into a mouth motion feature consisting of one column vector.
In step S3023, calculating the gradient histogram feature of a local image region centered on a feature point comprises:
firstly, converting the color image into a gray image;
secondly, standardizing the color space of the input image with a Gamma correction method;
thirdly, calculating the gradient $G_x(x, y)$ in the x direction and the gradient $G_y(x, y)$ in the y direction of each pixel in the local image region centered on the feature point;
fourthly, calculating the gradient magnitude $G(x, y)$ and direction $\alpha(x, y)$ of each pixel from $G_x(x, y)$ and $G_y(x, y)$;
and fifthly, obtaining the gradient histogram feature from the gradient magnitude $G(x, y)$ and direction $\alpha(x, y)$ of each pixel.
The tooth-showing classifier is constructed on the basis of the affected-side mouth movement of non-severe facial paralysis patients, and its training comprises the following steps:
s401, obtaining tooth-showing video data of non-severe facial paralysis patients;
s402, extracting a group of mouth-region motion features from the tooth-showing video data of the non-severe facial paralysis patient with gradient histogram features according to the mouth feature point positions;
and S403, inputting the group of mouth-region motion features into an SVM classifier and training to obtain the tooth-showing classifier.
The eyebrow-raising classifier is constructed according to the degree of eyebrow movement of non-severe facial paralysis patients, and its training comprises the following steps:
s501, acquiring eyebrow-raising video data of non-severe facial paralysis patients;
s502, extracting a group of eyebrow-region motion features from the eyebrow-raising video data of the non-severe facial paralysis patient with gradient histogram features according to the eyebrow feature point positions;
s503, inputting the group of eyebrow-region motion features into the SVM classifier and training to obtain the eyebrow-raising classifier.
Compared with the prior art, the invention has the beneficial effects that:
the automatic facial paralysis severity detection system based on computer vision is used for assisting doctors to evaluate and diagnose facial paralysis severity.
Drawings
FIG. 1 is a schematic structural diagram of the automatic facial paralysis severity detection system based on computer vision;
FIG. 2 is a flow chart of the method for automatically detecting facial paralysis severity based on computer vision;
FIG. 3 is a diagram of the 68 facial feature points in Example 1;
FIG. 4 is a symptom chart of severe and non-severe facial paralysis patients under natural expression in Example 1;
FIG. 5 is the cropped face image input to the natural expression classifier in Example 1;
FIG. 6 is a symptom chart of severe and non-severe facial paralysis patients with eyes closed in Example 1;
FIG. 7 shows the cropped left-eye region in Example 1;
FIG. 8 is a flow chart of the natural expression classifier construction in Example 1;
FIG. 9 is a flow chart of the open-eye classifier construction in Example 1;
FIG. 10 is a flow chart of the severe tooth-showing classifier construction in Example 1;
FIG. 11 is a flow chart of the tooth-showing classifier construction in Example 1;
FIG. 12 is a flow chart of the eyebrow-raising classifier construction in Example 1.
Detailed Description
The present invention will be described in further detail with reference to test examples and specific embodiments. It should be understood that the scope of the above-described subject matter is not limited to the following examples, and any techniques implemented based on the disclosure of the present invention are within the scope of the present invention.
Example 1
An automatic facial paralysis severity detection system based on computer vision comprises an image acquisition module, an information storage module, an image processor and a human-computer interaction device; a structural schematic diagram of the system is shown in Fig. 1.
The image acquisition module acquires facial images of a patient through the camera and is used for acquiring image information used for facial paralysis severity judgment, such as natural expression images, facial images during eye closing, tooth showing action images, eyebrow lifting action images and the like of the patient.
And the information storage module is used for storing the intermediate result and the final result after the facial paralysis severity judgment is finished for query.
The human-computer interaction device comprises an input device and a display device. The input device is used for inputting instructions, including photographing, evaluating and generating evaluation results. The display device outputs the displayed content, including the photographed face images and the facial paralysis severity evaluation.
And the image processor stores the program for judging facial paralysis severity, preprocesses the collected facial information of the patient, inputs the preprocessed facial information into the classifiers, and evaluates the severity of the facial paralysis, wherein the classifiers comprise a natural expression classifier, an open-eye classifier, a severe tooth-showing classifier, a tooth-showing classifier and an eyebrow-raising classifier.
The flow chart of the system for performing the evaluation function is shown in fig. 2, and the steps include:
s1, inputting an image acquisition instruction to an image acquisition module through an input device of the human-computer interaction device, controlling the image acquisition module to acquire a natural facial expression image of a patient through a camera, inputting the image into an image processor, preprocessing the natural facial expression by the image processor, inputting the preprocessed natural facial expression into a preset natural expression classifier, operating a corresponding natural expression judgment program, and outputting a natural expression judgment result (0 or 1) to an information storage module.
S2, inputting an image acquisition instruction to the image acquisition module through the input device of the human-computer interaction device, controlling the image acquisition module to again acquire, through the camera, an image of the same patient with eyes closed, and inputting the image into the image processor, which preprocesses the closed-eye image, inputs the preprocessed image into the preset open-eye classifier, runs the corresponding open-eye judgment program, and outputs the open-eye judgment result (0 or 1) to the information storage module.
S3, inputting a preliminary judgment instruction by the input device of the human-computer interaction device, reading a natural expression judgment result and an open eye judgment result from the information storage module by the image processor according to the input preliminary judgment instruction, superposing the two results to obtain a preliminary judgment coefficient, obtaining a preliminary judgment result according to the preliminary judgment coefficient, and displaying the preliminary judgment result in the display device, wherein the preliminary judgment result is classified into non-severe facial paralysis or severe facial paralysis.
The value of the preliminary judgment coefficient is 0, 1 or 2. When the open-eye judgment result is 1 and the natural expression judgment result is 1, after superposition, the result is 2, the patient belongs to severe facial paralysis, otherwise, the patient is diagnosed as non-severe facial paralysis; the result of the non-severe facial paralysis judgment is as follows: a. the natural expression judgment result is 1, the open-eye judgment result is 0, and the result is 1; b. the natural expression judgment result is 0, the open-eye judgment result is 1, and the result is 1; c. the natural expression judgment result is 0, the open-eye judgment result is 0, and the result is 0.
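The superposition logic of the preliminary judgment can be summarized by the short Python sketch below; the function and variable names are illustrative only.

```python
# Sketch of the preliminary judgment in S3: the natural expression and
# open-eye classifiers each output 0 or 1, and their sum is the
# preliminary judgment coefficient (0, 1 or 2); only 2 means severe.
def preliminary_judgment(natural_result: int, open_eye_result: int) -> str:
    coefficient = natural_result + open_eye_result
    return "severe" if coefficient == 2 else "non-severe"
```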
S4, when the initial judgment result is severe facial paralysis, executing the step S5 and outputting the severe facial paralysis grade; and when the initial judgment result is the non-severe facial paralysis, executing the steps S6 and S7, and outputting the grade of the non-severe facial paralysis.
S5, when the patient has severe facial paralysis, inputting an image acquisition instruction to the image acquisition module through the input device of the human-computer interaction device, controlling the image acquisition module to again acquire an image of the tooth-showing action of the same patient through the camera, and inputting it into the image processor; the image processor preprocesses the tooth-showing action image, inputs the preprocessed image into the preset severe tooth-showing classifier, runs the corresponding severe tooth-showing judgment program, and outputs the severe tooth-showing judgment result (0 or 1) to the information storage module; the patient's severe facial paralysis grade is then obtained from the severe tooth-showing judgment result and the preliminary judgment result and displayed in the display device.
If the patient was diagnosed with severe facial paralysis in the first stage (preliminary judgment coefficient of 2), the severe tooth-showing judgment result determines the patient's severe facial paralysis grade: if the severe tooth-showing classifier judges that the affected side does not move at all during the tooth-showing action (severe tooth-showing result of 1), the patient's facial paralysis severity grade is five; if it judges that the affected side moves slightly (severe tooth-showing result of 0), the grade is four.
S6, when the patient is non-severe facial paralysis, inputting an image acquisition instruction to the image acquisition module through the input device of the human-computer interaction device, controlling the image acquisition module to acquire the image of the tooth-indicating action of the same patient again through the camera, and inputting the image into the image processor, wherein the image processor firstly preprocesses the image of the tooth-indicating action, then inputs the preprocessed image of the tooth-indicating action into a preset tooth-indicating classifier, runs a corresponding tooth-indicating judgment program, and outputs a tooth-indicating judgment result (0, 1 or 2) to the information storage module.
S7, inputting an image acquisition instruction to the image acquisition module through the input device of the human-computer interaction device, controlling the image acquisition module to acquire the image of the eyebrow raising action of the same patient again through the camera and inputting the image to the image processor, wherein the image processor firstly preprocesses the image of the eyebrow raising action, then inputs the preprocessed image of the eyebrow raising action into a preset eyebrow raising classifier, runs a corresponding eyebrow raising judgment program and outputs the eyebrow raising judgment result (0, 1 or 2) to the information storage module.
S8, inputting a non-severe facial paralysis judgment instruction by the input device of the human-computer interaction device, reading a tooth-indicating judgment result and a eyebrow-raising judgment result from the information storage module by the image processor according to the input non-severe facial paralysis judgment instruction, superposing the two results to obtain a non-severe facial paralysis judgment coefficient, obtaining a non-severe facial paralysis judgment grade, and displaying the non-severe facial paralysis judgment grade in the display device.
If the superposed result is 4, the patient's H-B facial paralysis severity is grade three; if it is 3, grade two; if it is 2, grade one; and if it is 1 or 0, grade zero.
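For clarity, the grading of S8 can be written as the following sketch; the mapping follows the rules above, and the function name is illustrative.

```python
# Sketch of S8: the tooth-showing and eyebrow-raising classifiers each
# return 0, 1 or 2; their sum (the non-severe judgment coefficient)
# maps to the H-B grade as described in the text.
def non_severe_grade(tooth_result: int, eyebrow_result: int) -> int:
    coefficient = tooth_result + eyebrow_result  # ranges over 0..4
    if coefficient == 4:
        return 3  # H-B grade three
    if coefficient == 3:
        return 2  # H-B grade two
    if coefficient == 2:
        return 1  # H-B grade one
    return 0      # coefficient 1 or 0: grade zero
```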
Further, after receiving the image output by the image acquisition module and before processing the image according to the classifier, the image processor performs image preprocessing, and the preprocessing process comprises the following steps:
(1) face detection: according to the plurality of facial images acquired by the image acquisition module, the image processor detects the position and size of the human face in the image.
(2) Positioning face feature points: the image processor locates the coordinates of 68 feature points in the face according to the detected face position, and stores them in the information storage module. Each of the 68 feature points carries fixed semantic information: points 1 to 17 are face contour points, points 18 to 27 eyebrow points, points 28 to 36 nose points, points 37 to 48 eye points, and points 49 to 68 mouth points. A map of the 68 facial feature points is shown in Fig. 3.
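The patent does not name a specific landmark detector, but dlib's standard 68-point shape predictor produces exactly the layout described above; the sketch below is one assumed way to obtain the coordinates (the model file name is the one dlib distributes).

```python
# Hypothetical landmark localization step using dlib's 68-point model.
import dlib

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def locate_landmarks(gray_image):
    """Return the 68 (x, y) landmark coordinates of the first detected face."""
    faces = detector(gray_image, 1)  # 1 = upsample once before detecting
    if not faces:
        return None
    shape = predictor(gray_image, faces[0])
    return [(p.x, p.y) for p in shape.parts()]
```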
(3) Correcting the affected side of facial paralysis: for facial paralysis patients divided into left or right facial paralysis, the image processor performs left-right calibration on the facial image according to the instruction of the input device: if the patient has left facial paralysis, the original image is retained; if the patient has right facial paralysis, the image is flipped left-right.
Further, the image processor generates a preset natural expression classifier, an open eye classifier, a severe tooth-showing classifier, a tooth-showing classifier and a eyebrow-raising classifier before performing the image classifier processing. The training process of the above classifier is as follows:
Training the natural expression classifier: doctors usually take the symmetry of the patient's face under natural expression as one criterion for judging severe facial paralysis. Facial paralysis patients show different clinical characteristics under the natural expression state depending on severity: patients of severity 4 or 5 show the "distorted face and deviated mouth" phenomenon under natural expression, i.e., the nasolabial sulcus region and the mouth region are asymmetric (symptoms of severe and non-severe facial paralysis patients under natural expression are shown in Fig. 4), whereas patients of severity 0 to 3 can appear normal, without asymmetry. The natural expression classifier is therefore constructed on the basis of whether the patient's face is symmetric under natural expression. The training steps include:
(a) and (3) training data construction:
extracting the facial image data of a patient maintaining normal expression in the video, classifying facial paralysis severity degrees 4 and 5 into one class as a severe facial paralysis class, and classifying severity degrees 0 to 3 into one class as a non-severe facial paralysis class.
(b) Data preprocessing:
1) face calibration:
(1.1) First, obtain the pupil center feature point coordinates from the eye contour feature point coordinates. Let the facial feature point coordinates be $L = \{l_1, l_2, \ldots, l_{68}\}$. With the left-eye contour points numbered 36 to 41 and the right-eye contour points numbered 42 to 47 (0-indexed), the left and right pupil center feature point coordinates are:
$$l_{lec} = \frac{1}{6}\sum_{i=36}^{41} l_i, \qquad l_{rec} = \frac{1}{6}\sum_{i=42}^{47} l_i$$
(1.2) Obtain the deflection angle $\theta$ of the face relative to the horizontal direction from the left and right pupil center positions, and construct the rotation matrix $R$ from it:
$$\theta = \arctan\!\left(\frac{y_{lec} - y_{rec}}{x_{lec} - x_{rec}}\right) \times 180/\pi$$
$$R = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}$$
(1.3) Finally, rectify the image according to the rotation matrix, where $(x, y)$ is the coordinate of each pixel in the image and $(x', y')$ the corrected coordinate: $(x', y') = (x, y) \cdot R^{-1}$.
(1.4) Perform pixel interpolation on the corrected image using nearest-neighbor interpolation:
$$I'(x', y') = I(x, y)$$
2) Face cropping: determine the region of interest from feature point No. 36 $(x_{36}, y_{36})$, No. 45 $(x_{45}, y_{45})$ and No. 57 $(x_{57}, y_{57})$, then crop the face; the cropped face image is shown in Fig. 5. With $I$ the original image and $I'$ the cropped face image:
$$x_1 = x_{36},\quad x_2 = x_{45},\quad y_1 = \min(y_{36}, y_{45}),\quad y_2 = y_{57}$$
$$I' = I(x_1{:}x_2,\ y_1{:}y_2)$$
3) Image scale normalization: because the obtained image scales are inconsistent, normalize them to 224 × 224 pixels by nearest-neighbor interpolation. With $I'$ the cropped face image of pixel size $w \times h$ and $I''$ the normalized face image of pixel size $w' \times h'$:
$$I''(x'_i, y'_i) = I'\big(x'_i \cdot (w/w'),\ y'_i \cdot (h/h')\big)$$
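The whole preprocessing pipeline of (b) can be sketched in Python with OpenCV and NumPy as below; it assumes the 0-indexed 68-point landmarks from the earlier step, and the helper name is illustrative.

```python
# Sketch of face calibration, cropping and scale normalization, assuming
# `landmarks` holds the 68 (x, y) points (0-indexed) and `img` is a BGR image.
import cv2
import numpy as np

def preprocess_face(img, landmarks, size=224):
    pts = np.asarray(landmarks, dtype=np.float32)
    l_ec = pts[36:42].mean(axis=0)  # left pupil centre from eye contour points
    r_ec = pts[42:48].mean(axis=0)  # right pupil centre
    # deflection angle of the eye line w.r.t. the horizontal, in degrees
    theta = np.degrees(np.arctan2(r_ec[1] - l_ec[1], r_ec[0] - l_ec[0]))
    h, w = img.shape[:2]
    R = cv2.getRotationMatrix2D((float(l_ec[0]), float(l_ec[1])), theta, 1.0)
    aligned = cv2.warpAffine(img, R, (w, h), flags=cv2.INTER_NEAREST)
    pts = cv2.transform(pts[None], R)[0]  # rotate landmarks with the image
    # crop with points 36, 45 and 57 (0-indexed), as in the description
    x1, x2 = int(pts[36, 0]), int(pts[45, 0])
    y1 = int(min(pts[36, 1], pts[45, 1]))
    y2 = int(pts[57, 1])
    face = aligned[y1:y2, x1:x2]
    return cv2.resize(face, (size, size), interpolation=cv2.INTER_NEAREST)
```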
(d) Model training: and training the natural expression classifier through a nonlinear SVM classifier based on a radial basis kernel function.
Training the open-eye classifier: patients of different severity present different symptoms with the eyes closed. In a severe facial paralysis patient, the eyelid on the affected side cannot close tightly in the closed-eye state, and the white of the eye remains exposed on the affected side, which traditional Chinese medicine defines as the open-eye phenomenon; a non-severe patient can close both eyes normally, without the open-eye phenomenon. Symptoms of severe and non-severe patients in the closed-eye state are shown in Fig. 6. The open-eye classifier is constructed on the basis of whether the patient shows the open-eye phenomenon in the closed-eye state.
(a) Training data preparation
Extracting the face image data of the patient maintaining the eye closing state in the video, dividing facial paralysis severity degrees 4 and 5 into one class as severe facial paralysis class, and dividing severity degrees 0 to 3 into one class as non-severe facial paralysis class.
(b) Data pre-processing
1) Face calibration
(1.1) First, obtain the pupil center feature point coordinates from the eye contour feature point coordinates. Let the facial feature point coordinates be $L = \{l_1, l_2, \ldots, l_{68}\}$. With the left-eye contour points numbered 36 to 41 and the right-eye contour points numbered 42 to 47 (0-indexed), the left and right pupil center feature point coordinates are:
$$l_{lec} = \frac{1}{6}\sum_{i=36}^{41} l_i, \qquad l_{rec} = \frac{1}{6}\sum_{i=42}^{47} l_i$$
(1.2) Obtain the deflection angle $\theta$ of the face relative to the horizontal direction from the left and right pupil center positions, and construct the rotation matrix $R$ from it:
$$\theta = \arctan\!\left(\frac{y_{lec} - y_{rec}}{x_{lec} - x_{rec}}\right) \times 180/\pi$$
$$R = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}$$
(1.3) Finally, correct the image according to the rotation matrix, where $(x, y)$ is the coordinate of each pixel in the image and $(x', y')$ the corrected coordinate: $(x', y') = (x, y) \cdot R^{-1}$.
(1.4) Perform pixel interpolation on the corrected image using nearest-neighbor interpolation:
$$I'(x', y') = I(x, y)$$
2) Region-of-interest cropping: extract the left-eye region $I_{le}$ from the left-eye feature point coordinates; the cropped left-eye region image is shown in Fig. 7:
$$I_{le} = I(x_{37}{:}x_{40},\ y_{39}{:}y_{41})$$
3) Scale normalization: normalize the eye-region image to 40 × 110 pixels by nearest-neighbor interpolation, where $I$ is the original image of pixel size $w \times h$ and $I'$ the target image of pixel size $w' \times h'$:
$$I'(x'_i, y'_i) = I\big(x'_i \cdot (w/w'),\ y'_i \cdot (h/h')\big)$$
(c) Local binary histogram feature extraction
(1.1) Set a 3 × 3 window and take the window's center element as the threshold; compare the gray values of the 8 neighboring pixels with the center element, marking a neighbor's position as 1 if it is greater than or equal to the center pixel value and 0 otherwise. Comparing the 8 points of the 3 × 3 neighborhood thus produces an 8-bit binary number: the local binary feature of the window's center element, which reflects the texture information of the 3 × 3 region. Let $i(c)$ denote the gray value of a center pixel $(x_c, y_c)$ in the eye-region image and $i(p)$ the gray value of the $p$-th pixel in its neighborhood; the local binary feature LBP at that pixel is defined as:
$$LBP(x_c, y_c) = \sum_{p=0}^{7} s\big(i(p) - i(c)\big) \cdot 2^p$$
$$s(x) = \begin{cases} 1, & x \geq 0 \\ 0, & x < 0 \end{cases}$$
and (1.2) counting the local binary histogram according to the local binary characteristic to obtain the local binary histogram characteristic.
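A minimal NumPy sketch of this local-binary-pattern histogram extraction follows; the function name and the normalization at the end are assumptions.

```python
# Sketch of (c): threshold the 8 neighbours of each pixel against the
# centre, read them as an 8-bit code, and histogram the codes.
import numpy as np

def lbp_histogram(gray):
    """gray: 2-D uint8 eye-region image; returns a 256-bin LBP histogram."""
    h, w = gray.shape
    hist = np.zeros(256, dtype=np.float32)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            c = gray[y, x]
            code = 0
            for p, (dy, dx) in enumerate(offsets):
                if gray[y + dy, x + dx] >= c:
                    code |= 1 << p
            hist[code] += 1
    return hist / hist.sum()  # normalized histogram feature (assumption)
```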
(d) Classifier training
And training the local binary histogram features by using an SVM classifier to obtain the open-eye classifier.
Training the severe tooth-showing classifier: when a patient has severe facial paralysis, the facial paralysis severity grade is 4 or 5. For these two kinds of patients, in doctors' experience the severity grade can be judged from the movement of the affected-side mouth alone: if the patient's affected-side mouth can move slightly, the severity grade is 4, and if it cannot move at all, the grade is 5. The severe tooth-showing classifier is constructed on the basis of the affected-side mouth movement of severe facial paralysis patients.
(a) Preparing training data
Tooth-showing video data of facial paralysis severity grades 4 and 5 are extracted. The tooth-showing actions of grade-4 patients are taken as one class, and those of grade-5 patients as the second class.
(b) Motion feature extraction:
1) Extract the motion features of the mouth region with gradient histogram features according to the mouth feature point positions.
The specific steps are as follows:
(1.1) Extract a local image region $I_n$ centered on each mouth feature point, with a local region size of 16 × 16 pixels.
(1.2) Compute the gradient histogram feature $F_n$ of $I_n$:
i. Graying: convert the color image into a grayscale image;
ii. Standardize the color space of the input image with a Gamma correction method:
$$I_{Gamma} = I^{1/2}$$
iii. Calculate the gradient $G_x(x, y)$ in the x direction and $G_y(x, y)$ in the y direction of each pixel in the local region:
$$G_x(x, y) = I_{Gamma}(x+1, y) - I_{Gamma}(x-1, y)$$
$$G_y(x, y) = I_{Gamma}(x, y+1) - I_{Gamma}(x, y-1)$$
iv. Calculate the gradient magnitude $G(x, y)$ and direction $\alpha(x, y)$ of each pixel:
$$G(x, y) = \sqrt{G_x(x, y)^2 + G_y(x, y)^2}$$
$$\alpha(x, y) = \arctan\!\big(G_y(x, y) / G_x(x, y)\big)$$
v. With the gradient directions quantized into 9 direction bins, count the gradient histogram from the obtained gradient magnitudes and directions to form the feature $F_n \in \mathbb{R}^9$.
(1.3) Extract the gradient histogram features of all mouth feature points and arrange them into a column vector to form the single-frame mouth gradient histogram feature $F_{\mathrm{frame}} = [F_1^T, F_2^T, \ldots]^T$.
2) Select 5 equally spaced frames as the images for motion feature extraction. Let $num_{video}$ be the total number of frames of the video, $j$ the resulting image interval, and $frame_i$ the sequence position of the $i$-th selected image in the video:
$$j = num_{video} / 5$$
$$\{frame_i = (i-1) \cdot j + 1 \mid i \in [1, 5]\}$$
3) Arrange the gradient histogram features of the 5 selected frames into one column vector to form the mouth motion feature $F = [F_{frame_1}^T, F_{frame_2}^T, \ldots, F_{frame_5}^T]^T$.
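The frame sampling and per-landmark gradient histograms above can be sketched as follows in Python; the array names, the border handling of patches, and the unsigned-angle convention are assumptions made for the illustration.

```python
# Sketch of the motion-feature pipeline in (b): sample 5 evenly spaced
# frames, compute a 9-bin gradient histogram in a patch around each
# mouth landmark, and stack everything into one feature vector.
import numpy as np

def gradient_histogram(patch, bins=9):
    g = np.sqrt(patch.astype(np.float32) / 255.0)  # Gamma correction I^(1/2)
    gx = np.zeros_like(g)
    gy = np.zeros_like(g)
    gx[:, 1:-1] = g[:, 2:] - g[:, :-2]             # G_x(x, y)
    gy[1:-1, :] = g[2:, :] - g[:-2, :]             # G_y(x, y)
    mag = np.sqrt(gx ** 2 + gy ** 2)
    ang = np.degrees(np.arctan2(gy, gx)) % 180.0   # unsigned direction
    hist, _ = np.histogram(ang, bins=bins, range=(0.0, 180.0), weights=mag)
    return hist

def sample_frames(video_frames, k=5):
    j = len(video_frames) // k                      # j = num_video / 5
    return [video_frames[(i - 1) * j] for i in range(1, k + 1)]

def mouth_motion_feature(frames, mouth_points, half=8):
    """frames: 5 grayscale frames; mouth_points: landmark list per frame."""
    feats = []
    for img, pts in zip(frames, mouth_points):
        for (x, y) in pts:                          # the 20 mouth landmarks
            patch = img[y - half:y + half, x - half:x + half]  # 16x16 patch
            feats.append(gradient_histogram(patch))
    return np.concatenate(feats)                    # single column vector
```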
(c) Training a classifier:
and training mouth movement characteristics of facial paralysis patients with different severity grades by using a nonlinear SVM classifier based on a radial basis kernel.
Training the tooth-showing classifier: when a patient has non-severe facial paralysis, patients of different severity show different clinical symptoms when performing the tooth-showing action. According to doctors' clinical observation, during the tooth-showing action the mouth of a grade-3 facial paralysis patient shows obvious, severe asymmetry; for grade-1 or grade-2 patients the mouth asymmetry is slight but still observable; and grade-0 patients have fully recovered, with no mouth asymmetry. The tooth-showing classifier is constructed on the basis of the affected-side mouth movement of non-severe facial paralysis patients.
(a) Preparing training data
Tooth-showing video data of facial paralysis severity grades 0 to 3 are extracted. The tooth-showing actions of grade-3 patients are taken as one class; those of grade-2 and grade-1 patients as a second class; and those of grade-0 patients as a third class.
(b) Motion feature extraction:
1) Extract the motion features of the mouth region with gradient histogram features according to the mouth feature point positions.
The specific steps are as follows:
(1.1) Extract a local image region $I_n$ centered on each mouth feature point, with a local region size of 8 × 8 pixels.
(1.2) Compute the gradient histogram feature $F_n$ of $I_n$:
i. Graying: convert the color image into a grayscale image;
ii. Standardize the color space of the input image with a Gamma correction method:
$$I_{Gamma} = I^{1/2}$$
iii. Calculate the gradient $G_x(x, y)$ in the x direction and $G_y(x, y)$ in the y direction of each pixel in the local region:
$$G_x(x, y) = I_{Gamma}(x+1, y) - I_{Gamma}(x-1, y)$$
$$G_y(x, y) = I_{Gamma}(x, y+1) - I_{Gamma}(x, y-1)$$
iv. Calculate the gradient magnitude $G(x, y)$ and direction $\alpha(x, y)$ of each pixel:
$$G(x, y) = \sqrt{G_x(x, y)^2 + G_y(x, y)^2}$$
$$\alpha(x, y) = \arctan\!\big(G_y(x, y) / G_x(x, y)\big)$$
v. With the gradient directions quantized into 9 direction bins, count the gradient histogram from the obtained gradient magnitudes and directions to form the feature $F_n \in \mathbb{R}^9$.
(1.3) Extract the gradient histogram features of all mouth feature points and arrange them into a column vector to form the single-frame mouth gradient histogram feature $F_{\mathrm{frame}} = [F_1^T, F_2^T, \ldots]^T$.
2) Select 5 equally spaced frames as the images for motion feature extraction. Let $num_{video}$ be the total number of frames of the video, $j$ the resulting image interval, and $frame_i$ the sequence position of the $i$-th selected image in the video:
$$j = num_{video} / 5$$
$$\{frame_i = (i-1) \cdot j + 1 \mid i \in [1, 5]\}$$
3) Calculate the gradient histogram features of the 5 selected frames and arrange them into one column vector to form the mouth motion feature $F = [F_{frame_1}^T, F_{frame_2}^T, \ldots, F_{frame_5}^T]^T$.
(c) Training a classifier:
training mouth movement characteristics of facial paralysis patients with different severity grades by using an SVM classifier to obtain a tooth-showing classifier.
Training the eyebrow-raising classifier: when a patient has non-severe facial paralysis, patients of different severity show different clinical symptoms when performing the eyebrow-raising action. According to doctors' clinical observation, during the eyebrow-raising action the affected-side eyebrows of a grade-3 facial paralysis patient cannot move; for a grade-2 patient the eyebrows can move slightly; and grade-0 or grade-1 patients have fully recovered, with the eyebrows moving normally. The eyebrow-raising classifier is constructed according to the degree of eyebrow movement of non-severe facial paralysis patients.
(a) Preparing training data
Eyebrow raising video data of facial paralysis severity grade 0 to 3 is extracted. Taking the eyebrow lifting action of a patient with facial paralysis severity grade of 3 as a class; taking the eyebrow lifting action of a patient with facial paralysis severity grade of 2 as a class; the eyebrow raising actions of the patients with facial paralysis severity grades of 0 and 1 are taken as a class.
(b) Motion feature extraction:
1) Extract the motion features of the eyebrow region with gradient histogram features according to the eyebrow feature point positions.
The specific steps are as follows:
(1.1) Extract a local image region $I_n$ centered on each eyebrow feature point, with a local region size of 16 × 16 pixels.
(1.2) Compute the gradient histogram feature $F_n$ of $I_n$:
i. Graying: convert the color image into a grayscale image;
ii. Standardize the color space of the input image with a Gamma correction method:
$$I_{Gamma} = I^{1/2}$$
iii. Calculate the gradient $G_x(x, y)$ in the x direction and $G_y(x, y)$ in the y direction of each pixel in the local region:
$$G_x(x, y) = I_{Gamma}(x+1, y) - I_{Gamma}(x-1, y)$$
$$G_y(x, y) = I_{Gamma}(x, y+1) - I_{Gamma}(x, y-1)$$
iv. Calculate the gradient magnitude $G(x, y)$ and direction $\alpha(x, y)$ of each pixel:
$$G(x, y) = \sqrt{G_x(x, y)^2 + G_y(x, y)^2}$$
$$\alpha(x, y) = \arctan\!\big(G_y(x, y) / G_x(x, y)\big)$$
v. With the gradient directions quantized into 9 direction bins, count the gradient histogram from the obtained gradient magnitudes and directions to form the feature $F_n \in \mathbb{R}^9$.
(1.3) Extract the gradient histogram features of all eyebrow feature points and arrange them into a column vector to form the single-frame eyebrow gradient histogram feature $F_{\mathrm{frame}} = [F_1^T, F_2^T, \ldots]^T$.
2) Select 5 equally spaced frames as the images for motion feature extraction. Let $num_{video}$ be the total number of frames of the video, $j$ the resulting image interval, and $frame_i$ the sequence position of the $i$-th selected image in the video:
$$j = num_{video} / 5$$
$$\{frame_i = (i-1) \cdot j + 1 \mid i \in [1, 5]\}$$
3) Calculate the gradient histogram features of the 5 selected frames and arrange them into one column vector to form the eyebrow motion feature $F = [F_{frame_1}^T, F_{frame_2}^T, \ldots, F_{frame_5}^T]^T$.
(c) Training a classifier:
and training the eyebrow movement characteristics of facial paralysis patients with different severity grades by using an SVM classifier.
The construction flow chart of the natural expression classifier is shown in Fig. 8; of the open-eye classifier in Fig. 9; of the severe tooth-showing classifier in Fig. 10; of the tooth-showing classifier in Fig. 11; and of the eyebrow-raising classifier in Fig. 12.

Claims (8)

1. An automatic facial paralysis severity detection system based on computer vision, comprising: an image acquisition module, an information storage module, an image processor and a man-machine interaction device,
the image acquisition module acquires image information for judging the facial paralysis severity and outputs the image information to the image processor;
the information storage module is used for storing the facial paralysis severity judgment intermediate judgment result output by the image processor;
the human-computer interaction device comprises an input device and a display device, the input device is used for inputting a control instruction to the image processor, and the display device receives and displays a severity judgment result output by the image processor;
the image processor is used for preprocessing the facial information of the patient acquired by the image acquisition module, inputting the preprocessed facial information of the patient into a preset classifier and outputting the judgment result of the facial paralysis severity, wherein the preset classifier comprises a natural expression classifier, an open-eye classifier, a severe tooth-showing classifier, a tooth-showing classifier and a eyebrow-raising classifier;
the method for judging the facial paralysis severity degree by the system comprises the following steps:
s1, according to the natural facial expression control instruction of the input device, the image processor inputs the natural facial expression image of the patient acquired from the image acquisition module into a preset natural expression classifier after natural facial expression preprocessing, and outputs a natural expression judgment result to the information storage module;
s2, according to the image control instruction of the input device when the eyes are closed, the image processor inputs the image of the same patient when the eyes are closed, which is acquired from the image acquisition module, into a preset open-eye classifier after the closed-eye preprocessing, and outputs the open-eye judgment result to the information storage module;
s3, the input device of the human-computer interaction device outputs a preliminary judgment instruction to the image processor, the image processor reads the open-eye judgment result and the natural expression judgment result from the information storage module according to the preliminary judgment instruction, and outputs a preliminary judgment result to the display device and the information storage module according to the open-eye judgment result and the natural expression judgment result, the preliminary judgment result is non-severe facial paralysis or severe facial paralysis, when the judgment result is severe facial paralysis, the step S4 is executed, and when the judgment result is non-severe facial paralysis, the steps S5-S7 are executed;
s4, when the patient has severe facial paralysis, inputting a tooth-showing action image acquisition instruction to the image acquisition module through the input device of the human-computer interaction device, controlling the image acquisition module to acquire an image of the tooth-showing action of the same patient, and inputting it into the image processor; the image processor preprocesses the tooth-showing action image, inputs the preprocessed image into the preset severe tooth-showing classifier, and outputs the severe tooth-showing judgment result to the information storage module; according to the severe facial paralysis judgment instruction input by the input device, the image processor reads the severe tooth-showing judgment result and the preliminary judgment result from the information storage module, obtains the patient's severe facial paralysis grade, and displays it in the display device;
S5, when the patient has non-severe facial paralysis, a teeth-showing action image acquisition instruction is input to the image acquisition module through the input device of the human-computer interaction device; the image acquisition module acquires a teeth-showing action image of the same patient and inputs it into the image processor; the image processor preprocesses the teeth-showing action image, inputs the preprocessed image into the preset teeth-showing classifier, and outputs the teeth-showing judgment result to the information storage module;
S6, an eyebrow-raising action image acquisition instruction is input to the image acquisition module through the input device of the human-computer interaction device; the image acquisition module again acquires an image of the same patient's eyebrow-raising action, which is stored in the information storage module and input into the image processor; the image processor preprocesses the eyebrow-raising action image, inputs the preprocessed image into the preset eyebrow-raising classifier, and outputs the eyebrow-raising judgment result to the information storage module;
and S7, according to the non-severe facial paralysis judgment instruction input by the input device, the image processor reads the teeth-showing judgment result and the eyebrow-raising judgment result from the information storage module, obtains a non-severe facial paralysis judgment coefficient from the two results, determines the non-severe facial paralysis grade according to the coefficient, and displays the grade on the display device.
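As a reading aid, the following is a minimal Python sketch of the S1-S7 branching logic above. The classifier objects and their predict interfaces, the rule for combining the two preliminary results, and the equal weighting of the non-severe judgment coefficient are all illustrative assumptions; the claim does not specify them.

```python
# Sketch of the S1-S7 decision flow. Classifier interfaces, the preliminary
# combination rule, and the coefficient weighting are assumptions.

def assess_severity(natural_img, closed_eye_img, teeth_video, brow_video,
                    natural_expr_clf, open_eye_clf,
                    severe_teeth_clf, teeth_clf, brow_clf):
    # S1: is the face asymmetric under the natural expression?
    asymmetric = natural_expr_clf.predict(natural_img)
    # S2: does the eye stay open in the closed-eye state?
    eye_open = open_eye_clf.predict(closed_eye_img)

    # S3: preliminary judgment (assumed rule: both signs present => severe).
    if asymmetric and eye_open:
        # S4: grade severe cases from the teeth-showing motion alone.
        grade = severe_teeth_clf.predict(teeth_video)
        return "severe", grade
    # S5/S6: score the two voluntary motions for non-severe cases.
    teeth_score = teeth_clf.predict(teeth_video)
    brow_score = brow_clf.predict(brow_video)
    # S7: non-severe judgment coefficient (equal weighting assumed),
    # which is then mapped to a non-severe facial paralysis grade.
    coefficient = 0.5 * teeth_score + 0.5 * brow_score
    return "non-severe", coefficient
```

Note how the severe path uses only the teeth-showing motion, mirroring the split between step S4 and steps S5-S7 of the claim.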
2. The system of claim 1, wherein the natural expression classifier is constructed according to whether the patient's face is symmetric under a natural expression, and the natural expression classifier is trained by the following steps:
S101, obtaining the pupil center feature point coordinates from the patient's eye contour feature point coordinates;
S102, obtaining the deflection angle of the face relative to the horizontal direction from the left and right pupil center feature point coordinates, and constructing a rotation matrix from the deflection angle;
S103, correcting the image coordinates according to the rotation matrix, and performing pixel interpolation on the corrected image using nearest-neighbor interpolation;
S104, cropping the interpolated image and performing scale normalization to obtain a preprocessed face image;
and S105, training a nonlinear SVM classifier with a radial basis function (RBF) kernel on the preprocessed face images to obtain the natural expression classifier.
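A minimal sketch of the S101-S105 pipeline, assuming OpenCV and scikit-learn. Landmark detection is taken as given; the pupil-center approximation (mean of the eye contour points), the output size, the crop convention, and the SVM hyperparameters are assumptions not stated in the claim.

```python
import cv2
import numpy as np
from sklearn.svm import SVC

def align_face(img, left_eye_pts, right_eye_pts, out_size=(128, 128)):
    # S101: approximate each pupil center as the mean of the eye contour points.
    left_c = left_eye_pts.mean(axis=0)
    right_c = right_eye_pts.mean(axis=0)
    # S102: deflection angle of the inter-pupil line from the horizontal,
    # and the corresponding rotation matrix about the mid-point.
    angle = np.degrees(np.arctan2(right_c[1] - left_c[1],
                                  right_c[0] - left_c[0]))
    center = (float((left_c[0] + right_c[0]) / 2),
              float((left_c[1] + right_c[1]) / 2))
    rot = cv2.getRotationMatrix2D(center, angle, 1.0)
    # S103: coordinate correction with nearest-neighbor pixel interpolation.
    aligned = cv2.warpAffine(img, rot, (img.shape[1], img.shape[0]),
                             flags=cv2.INTER_NEAREST)
    # S104: crop the face region (crop box assumed known) and normalize scale.
    return cv2.resize(aligned, out_size)

# S105: nonlinear SVM with an RBF kernel over flattened face images.
# X: (n_samples, n_pixels) array, y: symmetric/asymmetric labels.
clf = SVC(kernel="rbf")
# clf.fit(X, y)
```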
3. The system of claim 1, wherein the open-eye classifier is constructed according to whether the patient's eye remains open in the closed-eye state, and the training of the open-eye classifier comprises:
S201, obtaining the pupil center feature point coordinates from the patient's eye contour feature point coordinates;
S202, obtaining the deflection angle of the face relative to the horizontal direction from the left and right pupil center feature point coordinates, and constructing a rotation matrix from the deflection angle;
S203, correcting the image coordinates according to the rotation matrix, and performing pixel interpolation on the corrected image using nearest-neighbor interpolation;
S204, flipping the interpolated image according to the input affected-side information: if the patient has left-side facial paralysis, no flipping is needed; if the patient has right-side facial paralysis, the image is flipped horizontally so that the right-side paralysis presents as left-side paralysis;
S205, cropping the left-eye region from the flipped image and performing scale normalization;
S206, extracting local binary pattern (LBP) histogram features from the scale-normalized eye region image;
and S207, training an SVM classifier on the LBP histogram features to obtain the open-eye classifier.
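A sketch of steps S204-S207 under the same caveats: the LBP parameters (P = 8, R = 1, uniform patterns), the crop-box convention, and the SVM kernel are assumptions; the claim only names local binary pattern histogram features and an SVM.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def eye_lbp_feature(aligned_gray, left_eye_box, affected_side):
    # S204: mirror right-side paralysis so all samples present as left-side.
    if affected_side == "right":
        aligned_gray = aligned_gray[:, ::-1]
    # S205: crop the left-eye region (box given in the flipped image's
    # coordinates); scale normalization is assumed done during alignment.
    x, y, w, h = left_eye_box
    eye = aligned_gray[y:y + h, x:x + w]
    # S206: uniform LBP codes summarized as a normalized histogram.
    P, R = 8, 1
    lbp = local_binary_pattern(eye, P, R, method="uniform")
    hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
    return hist

# S207: SVM over the LBP histograms (linear kernel is an assumption).
clf = SVC(kernel="linear")
```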
4. The system of claim 1, wherein the severe teeth-showing classifier is constructed according to the mouth movement on the affected side of severe facial paralysis patients, and the training of the severe teeth-showing classifier comprises:
S301, acquiring teeth-showing video data of severe facial paralysis patients;
S302, extracting a group of mouth-region motion features using gradient histogram features according to the positions of the mouth feature points;
and S303, inputting the group of mouth-region motion features into an SVM classifier and training it to obtain the severe teeth-showing classifier.
5. The system of claim 4, wherein step S302 comprises:
S3021, selecting 5 frames at equally spaced (arithmetic-progression) indices from the teeth-showing video data of the severe facial paralysis patient as the images for motion feature extraction;
S3022, extracting from each of the 5 frames the local image regions centered on the mouth feature points, according to the feature point positions;
and S3023, computing the gradient histogram feature of each local image region centered on a feature point in the 5 frames, and arranging the features into a mouth motion feature in the form of a single column vector.
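A sketch of S3021-S3023, assuming decoded frames and per-frame mouth landmarks are already available. The 32-pixel patch size and the hog_fn callback are illustrative, and landmarks are assumed to lie at least half a patch away from the image border.

```python
import numpy as np

def mouth_motion_feature(frames, mouth_pts_per_frame, patch=32, hog_fn=None):
    # S3021: 5 frame indices in arithmetic progression across the clip.
    idx = np.linspace(0, len(frames) - 1, 5).astype(int)
    half = patch // 2
    feats = []
    for i in idx:
        frame, pts = frames[i], mouth_pts_per_frame[i]
        for (x, y) in pts.astype(int):
            # S3022: local image region centered on the mouth feature point.
            region = frame[y - half:y + half, x - half:x + half]
            # S3023: gradient histogram of the patch, via the supplied hog_fn.
            feats.append(hog_fn(region))
    # Arrange every patch descriptor into one column vector.
    return np.concatenate(feats).reshape(-1, 1)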
6. The system of claim 5, wherein computing the gradient histogram feature of a local image region centered on a feature point in step S3023 comprises:
first, converting the color image into a grayscale image;
second, normalizing the color space of the input image using Gamma correction;
third, computing, for each pixel in the local image region centered on the feature point, the gradient Gx in the x direction and the gradient Gy in the y direction;
fourth, computing each pixel's gradient magnitude G = sqrt(Gx^2 + Gy^2) and gradient direction theta = arctan(Gy / Gx) from Gx and Gy;
and fifth, obtaining the gradient histogram feature from the gradient magnitude and direction of each pixel.
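A sketch of this five-step computation, assuming the standard HOG conventions (gamma = 0.5, 9 unsigned orientation bins over [0, 180) degrees, magnitude-weighted voting), which the claim does not fix.

```python
import cv2
import numpy as np

def gradient_histogram(patch, bins=9):
    # First step: color image to grayscale (skipped if already single-channel).
    if patch.ndim == 3:
        patch = cv2.cvtColor(patch, cv2.COLOR_BGR2GRAY)
    gray = patch.astype(np.float32)
    # Second step: Gamma correction to normalize the color space (gamma = 0.5).
    gray = np.sqrt(gray / 255.0)
    # Third step: per-pixel gradients Gx and Gy.
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=1)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=1)
    # Fourth step: magnitude G = sqrt(Gx^2 + Gy^2) and direction
    # theta = arctan(Gy / Gx), folded into [0, 180) for unsigned gradients.
    mag = np.sqrt(gx ** 2 + gy ** 2)
    theta = np.degrees(np.arctan2(gy, gx)) % 180.0
    # Fifth step: orientation histogram with magnitude-weighted votes.
    hist, _ = np.histogram(theta, bins=bins, range=(0, 180), weights=mag)
    return hist / (np.linalg.norm(hist) + 1e-6)
```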
7. The system of claim 1, wherein the teeth-showing classifier is constructed according to the mouth movement on the affected side of non-severe facial paralysis patients, and the training of the teeth-showing classifier comprises:
S401, acquiring teeth-showing video data of non-severe facial paralysis patients;
S402, extracting a group of mouth-region motion features from the teeth-showing video data of the non-severe facial paralysis patients using gradient histogram features according to the positions of the mouth feature points;
and S403, inputting the group of mouth-region motion features into an SVM classifier and training it to obtain the teeth-showing classifier.
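Tying the pieces together, a sketch of S401-S403 that reuses the hypothetical mouth_motion_feature and gradient_histogram helpers from the sketches above. The RBF kernel and the label encoding are assumptions, and all clips must yield feature vectors of equal length (same landmark count per frame).

```python
from sklearn.svm import SVC

def train_teeth_classifier(clips, landmark_tracks, labels):
    # S401/S402: one mouth motion feature vector per teeth-showing clip.
    X = [mouth_motion_feature(frames, pts, hog_fn=gradient_histogram).ravel()
         for frames, pts in zip(clips, landmark_tracks)]
    # S403: SVM over the motion features (RBF kernel is an assumption).
    clf = SVC(kernel="rbf")
    clf.fit(X, labels)
    return clf
```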
8. The system of claim 1, wherein the eyebrow-raising classifier is constructed according to the degree of eyebrow movement of non-severe facial paralysis patients, and the training of the eyebrow-raising classifier comprises:
S501, acquiring eyebrow-raising video data of non-severe facial paralysis patients;
S502, extracting a group of eyebrow-region motion features from the eyebrow-raising video data of the non-severe facial paralysis patients using gradient histogram features according to the positions of the eyebrow feature points;
and S503, inputting the group of eyebrow-region motion features into an SVM classifier and training it to obtain the eyebrow-raising classifier.
CN201911242254.3A 2019-12-06 2019-12-06 Facial paralysis severity automatic detection system based on computer vision Active CN111126180B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911242254.3A CN111126180B (en) 2019-12-06 2019-12-06 Facial paralysis severity automatic detection system based on computer vision


Publications (2)

Publication Number Publication Date
CN111126180A CN111126180A (en) 2020-05-08
CN111126180B true CN111126180B (en) 2022-08-05

Family

ID=70497725


Country Status (1)

Country Link
CN (1) CN111126180B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111553250B (en) * 2020-04-25 2021-03-09 深圳德技创新实业有限公司 Accurate facial paralysis degree evaluation method and device based on face characteristic points
CN111613306A (en) * 2020-05-19 2020-09-01 南京审计大学 Multi-feature fusion facial paralysis automatic evaluation method
CN111897986A (en) * 2020-06-29 2020-11-06 北京大学 Image selection method and device, storage medium and terminal
CN112466437A (en) * 2020-11-03 2021-03-09 桂林医学院附属医院 Apoplexy information processing system
CN112750531A (en) * 2021-01-21 2021-05-04 广东工业大学 Automatic inspection system, method, equipment and medium for traditional Chinese medicine
CN113327247B (en) * 2021-07-14 2024-06-18 中国科学院深圳先进技术研究院 Facial nerve function assessment method, device, computer equipment and storage medium
CN117372437B (en) * 2023-12-08 2024-02-23 安徽农业大学 Intelligent detection and quantification method and system for facial paralysis

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106980815A (en) * 2017-02-07 2017-07-25 王俊 Facial paralysis objective evaluation method under being supervised based on H B rank scores
CN110097970A (en) * 2019-06-26 2019-08-06 北京康健数字化健康管理研究院 A kind of facial paralysis diagnostic system and its system method for building up based on deep learning

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
ES2264383B1 (en) * 2005-06-03 2007-11-16 Hospital Sant Joan De Deu OCULAR MOVEMENTS DEVICE DEVICE.
CN107713984B (en) * 2017-02-07 2024-04-09 王俊 Objective assessment method for facial paralysis
CN109508644B (en) * 2018-10-19 2022-10-21 陕西大智慧医疗科技股份有限公司 Facial paralysis grade evaluation system based on deep video data analysis
CN110084259B (en) * 2019-01-10 2022-09-20 谢飞 Facial paralysis grading comprehensive evaluation system combining facial texture and optical flow characteristics
CN110163098A (en) * 2019-04-17 2019-08-23 西北大学 Based on the facial expression recognition model construction of depth of seam division network and recognition methods


Similar Documents

Publication Publication Date Title
CN111126180B (en) Facial paralysis severity automatic detection system based on computer vision
Yang et al. Exploiting ensemble learning for automatic cataract detection and grading
AU2014229498B2 (en) Systems, methods, and computer-readable media for identifying when a subject is likely to be affected by a medical condition
WO2018222812A1 (en) System and method for guiding a user to take a selfie
KR20190087272A (en) Method for diagnosing glaucoma using fundus image and apparatus therefor
CN110555845A (en) Fundus OCT image identification method and equipment
US20210056691A1 (en) Systems and methods utilizing artificial intelligence for placental assessment and examination
WO2021114817A1 (en) Oct image lesion detection method and apparatus based on neural network, and medium
EP3846126A1 (en) Preprocessing method for performing quantitative analysis on fundus image, and storage device
CN112686855B (en) Information association method of eye image and symptom information
CN112384127A (en) Eyelid droop detection method and system
CN112232128B (en) Eye tracking based method for identifying care needs of old disabled people
Wong et al. Learning-based approach for the automatic detection of the optic disc in digital retinal fundus photographs
CN116491893B (en) Method and device for evaluating change of ocular fundus of high myopia, electronic equipment and storage medium
CN113782184A (en) Cerebral apoplexy auxiliary evaluation system based on facial key point and feature pre-learning
CN113240655A (en) Method, storage medium and device for automatically detecting type of fundus image
WO2019073962A1 (en) Image processing device and program
US20230419492A1 (en) Method and apparatus for providing information associated with immune phenotypes for pathology slide image
Kolte et al. Advancing Diabetic Retinopathy Detection: Leveraging Deep Learning for Accurate Classification and Early Diagnosis
CN111588345A (en) Eye disease detection method, AR glasses and readable storage medium
WO2022137601A1 (en) Visual distance estimation method, visual distance estimation device, and visual distance estimation program
JP7439932B2 (en) Information processing system, data storage device, data generation device, information processing method, data storage method, data generation method, recording medium, and database
CN112330629A (en) Facial nerve disease rehabilitation condition static detection system based on computer vision
CN113052012A (en) Eye disease image identification method and system based on improved D-S evidence
TWI673034B (en) Methods and system for detecting blepharoptosis

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant