CN117352161A - Quantitative evaluation method and system for facial movement dysfunction - Google Patents
- Publication number: CN117352161A (application number CN202311316114.2A)
- Authority
- CN
- China
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H50/00—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
- G16H50/20—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/46—Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
Abstract
The invention discloses a quantitative evaluation method and system for facial movement dysfunction, relating to the technical field of facial movement dysfunction evaluation.
Description
Technical Field
The invention relates to the technical field of facial movement dysfunction assessment, and in particular to a quantitative assessment method and system for facial movement dysfunction.
Background
Facial dyskinesia, also known as facial muscle dysfunction, is a symptom affecting facial muscle control and expression, most often caused by neurological disease, muscle disease, or trauma.
Assessment of facial dyskinesia requires comprehensive consideration of the patient's symptoms, medical history, neuromuscular function, and imaging and laboratory examinations. It begins with a detailed discussion of the medical history with the patient, including the onset time of symptoms, their course of development, possible causes, and any other medical conditions associated with facial dyskinesia, followed by a neurological examination to assess facial nerve function.
At present, diagnosis and evaluation of facial movement dysfunction are mainly performed by doctors with rich clinical experience. The evaluation relies chiefly on the doctor's visual observation of the patient's face at rest or while completing specific facial actions, judging by eye the amplitude, speed, bilateral symmetry, and similar qualities of those actions. This approach carries a high time cost, is subject to deviation caused by individual differences in experience and by the doctor's subjective factors, and lacks quantitative indexes that can be compared between different patients or between different time periods for the same patient, which hinders accurate evaluation of disease progression and rehabilitation. In general, no better solution is yet available, and a method and system that can help doctors perform quantitative evaluation of facial movement dysfunction are urgently needed in the field.
Disclosure of Invention
(I) Technical problems to be solved
Aiming at the defects of the prior art, the invention provides a quantitative evaluation method and system for facial movement dysfunction, solving the prior-art problem that quantitative evaluation of facial movement dysfunction is difficult because quantitative indexes for comparison between patients, or between different time periods for the same patient, are lacking.
(II) technical scheme
In order to achieve the above object, the present invention provides a quantitative assessment method for facial movement dysfunction, comprising:
collecting frontal face videos of a subject performing specified facial actions and at rest, wherein the facial actions include at least one of: closing the eyes, frowning, raising the forehead, opening the mouth wide, puckering the lips, puffing the cheeks, and baring the teeth;
extracting key point grids of the face of the person in the video frame by frame, and carrying out normalization processing on the key point grids, wherein the normalization processing at least comprises: translation, rotation, scaling;
dividing a plurality of sub-grids on the facial key point grids, and calculating quantized motion characteristic information of each sub-grid;
the facial movement dysfunction detection algorithm takes the quantized movement features as input and outputs at least the position, type, degree, and confidence of the movement dysfunction; the algorithm includes at least one of: comparison against preset thresholds, a decision tree algorithm, a random forest algorithm, a support vector machine algorithm, a perceptron algorithm, and a deep neural network algorithm.
The present invention is further configured such that, in the face key point meshing step:
configuring a corresponding sub-grid division scheme according to the actual diagnosis requirement, storing the configured sub-grid division scheme, and pre-storing at least one group of candidate sub-grid division schemes;
the present invention is further configured such that, in the face key point meshing step:
the quantized motion feature information includes at least the motion displacement, speed, acceleration, included angle, and angular speed of each sub-grid, together with statistical features of this time-series information;
the statistical features include at least the mean, maximum, minimum, standard deviation, and quantiles;
the calculation method of the quantized motion characteristic information of the sub-grid comprises the following steps:
calculating the motion features of each key point in the sub-grid one by one, and performing a normalized weighted summation over them, where normalized weighted summation is defined as assigning each key point in the sub-grid a weight w_i with 0 ≤ w_i ≤ 1, such that the weights of all key points in each sub-grid sum to 1;
The invention is further arranged such that said quantized motion feature information of said sub-grid is expressed as:

F_k(C) = Σ_{i=1}^{N} w_i · f_k(p_i),  0 ≤ w_i ≤ 1,  Σ_{i=1}^{N} w_i = 1;

where f_k(·) represents the calculation of the k-th quantized motion feature, C = {p_1, p_2, ..., p_N} represents a key-point sub-grid composed of N facial key points p_i, and w_i represents a non-negative weight coefficient whose sum over the sub-grid is 1;
the invention further provides that the time sequence information is information obtained by combining a single coordinate axis direction or multiple coordinate axis directions, and the method for combining the multiple coordinate axis directions comprises the following steps:
forming a feature vector from the quantized motion feature information along multiple coordinate axis directions, and calculating at least one of the following norms of the feature vector:
a) the 1-norm;
b) the 2-norm;
c) the infinity norm;
calculating the difference value of each bilateral symmetry sub-grid for the same motion characteristic;
the invention further provides that the reminding display method at least comprises the following steps:
generating and displaying an enhanced video, generating and displaying a time sequence curve graph, and generating and displaying a data statistics chart;
the enhanced video is defined as video processed using at least one of the following methods: overlaying the key point grid, region cropping, zooming, highlighting, framing, and adding prompt arrows;
the invention also provides a quantitative evaluation system for facial movement dysfunction, which comprises:
the acquisition input module is used for shooting and recording video of the subject's face at rest and while completing the specified facial actions, or for specifying the path of a video file of the subject's face recorded in those states;
the face key point grid extraction module is used for extracting the face key point grids in the video from the acquisition input module and carrying out normalization processing;
the motion characteristic analysis module is used for dividing the face key point sub-grids, calculating quantized motion characteristic information of each sub-grid and prompting the position and degree of possible movement disorder by using a face movement disorder detection algorithm and the quantized motion characteristics;
the output display module is used for reminding and displaying the analysis and identification results of the motion characteristic analysis module;
the invention is further arranged that the motion characteristic analysis module is also used for calculating the difference value of each bilateral symmetry sub-grid for the same motion characteristic;
the invention further provides that the motion characteristic analysis module is further used for configuring a plurality of different sub-meshing schemes according to the requirement and pre-storing at least one group of candidate sub-meshing schemes.
(III) beneficial effects
The invention provides a quantitative evaluation method and a quantitative evaluation system for facial movement dysfunction. The beneficial effects are as follows:
according to the facial movement dysfunction assessment method and the facial movement dysfunction assessment system, facial key point grids are extracted through collecting facial action videos of a subject, normalization processing is carried out, then movement characteristic information is calculated on the sub-grids, and finally analysis and identification are carried out by using a facial movement dysfunction detection algorithm, and compared with the traditional assessment method, the facial movement dysfunction assessment method has the remarkable improvement that:
The assessment is non-invasive: the method provided by the application does not require any invasive tools or equipment to assess facial movement dysfunction, but is implemented by analyzing facial motion videos, which is safer and more comfortable for the patient.
The method is highly automated: by automatically extracting the facial key point grid and calculating the motion feature information, it reduces the need for manual intervention and improves evaluation efficiency and consistency. After normalization, the size of the facial key point grid becomes a ratio relative to the reference length, which facilitates subsequent processing, analysis, and comparison.
By quantifying the movement characteristic information, the system can provide more objective and accurate assessment results, not just subjective observations, and by comparing the grid speed difference curve changes, the movement dysfunction of the face of the patient can be quantified.
The assessment method provided by the application can be configured with a plurality of different sub-grid division schemes according to the needs, can carry out customized assessment according to different types of facial movement dysfunction, and is more personalized.
In addition, the evaluation system provided by the application can generate enhanced videos, time sequence graphs and data statistics charts, and is beneficial to doctors and patients to better understand evaluation results.
In summary, the method and system for evaluating facial dyskinesia provided by the present application can detect and identify the location, type and degree of dyskinesia more accurately by quantitative analysis, and improve diagnosis and treatment of facial dyskinesia.
The problem that quantitative evaluation on facial movement dysfunction is difficult to carry out due to the fact that quantitative indexes for comparison among patients or among different time periods of the same patient are lacking in the prior art is solved.
Drawings
FIG. 1 is a flowchart of a quantitative assessment method for facial dyskinesia of the present invention;
FIG. 2 is a schematic diagram of a face key point grid in embodiment 1 of the present invention;
FIG. 3 is a schematic diagram of face subgrid division in embodiment 2 of the present invention;
fig. 4 is a schematic diagram showing a speed curve and a speed difference curve of the orbicularis oculi muscle subgrid in example 4 of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Examples
Referring to fig. 1-4, the present invention provides a method for quantitatively evaluating facial movement dysfunction, comprising:
s101, collecting front face videos of a subject in a specified face action and resting state, wherein the face actions at least comprise more than one actions of closing eyes, frowning, lifting forehead, opening mouth, pucker, cheek and teeth;
s102, extracting key point grids of the face of the person in the video frame by frame, and carrying out normalization processing on the key point grids, wherein the normalization processing at least comprises: translation, rotation, scaling;
s103, dividing a plurality of sub-grids on the facial key point grids, and calculating quantized motion characteristic information of each sub-grid;
s104, analyzing and identifying the position, type and degree of the possible movement disorder by using a facial movement disorder analysis algorithm and quantized movement features, prompting and displaying, and outputting the position, type, degree and confidence degree of the possible movement disorder by using the quantized movement features as input by a facial movement disorder detection algorithm, wherein the facial movement disorder detection algorithm at least comprises an algorithm based on preset threshold comparison, a decision tree algorithm, a random forest algorithm, a support vector machine algorithm, a perceptron algorithm and a deep neural network algorithm.
The invention also provides a quantitative assessment system for facial movement dysfunction of a subject, comprising:
the acquisition input module is used for shooting and recording the video of the face of the subject in a state of resting and completing the action of the appointed face, or designating the path of the video file of the face of the subject which is shot and recorded in a state of resting and completing the action of the appointed face;
the face key point grid extraction module is used for extracting the face key point grids in the video from the acquisition input module and carrying out normalization processing;
the motion characteristic analysis module is used for dividing the face key point sub-grids, calculating the quantized motion characteristic information of each sub-grid and prompting the possible position and degree of the motion disorder by using a face motion dysfunction detection algorithm and the quantized motion characteristics;
the motion characteristic analysis module can be used for calculating the difference value of each bilateral symmetry sub-grid for the same motion characteristic;
the motion characteristic analysis module is also used for configuring a plurality of different sub-grid division schemes according to the requirement and pre-storing at least one group of candidate sub-grid division schemes;
and the output display module is used for reminding and displaying the analysis and identification results of the motion characteristic analysis module.
Example 1
As shown in fig. 2, in this embodiment a facial key point grid of 468 points is extracted using the open-source MediaPipe face mesh model; the left side of fig. 2 shows the front view of the face grid and the right side shows the side view, where the intersections of line segments represent facial key points and each line segment represents a grid line;
in this embodiment, the method for normalizing the keypoint grid includes:
a1 Translation): translating all key point coordinates in the facial key point grid to a coordinate system with the nose tip as a coordinate origin;
a2 Rotation: selecting at least two key points on the central axis of a human face from the facial key point grids, fitting the central axis of the facial, for example selecting a nose tip point and a nose root point, calculating a straight line passing through the two key points under a plane rectangular coordinate system as the central axis of the facial, calculating an included angle of the central axis of the facial with respect to the vertical direction, and reversely rotating the key point grids with the nose tip point as the center to form an included angle so that the key point grids keep the original point at the nose tip point and are bilaterally symmetrical with respect to the vertical direction;
a3 Scaling: the face is selected as two key points with relatively fixed distance, for example, two key points of the inner eye angles of two eyes, or two key points of a nose root point and a nose lower point are selected, the plane Euclidean distance of coordinates of the two key points is calculated, the distance is taken as a reference length, and then the coordinate value of the grid of the face key points after translation and rotation is divided by the reference length.
After normalization, the scale of the whole facial key point grid is no longer affected by shooting distance, camera resolution, or individual differences; it becomes a ratio relative to the reference length, which facilitates subsequent processing, analysis, and comparison.
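The translation, rotation, and scaling steps above can be sketched with numpy as follows. The landmark-index arguments are illustrative placeholders, since actual indices depend on the mesh model used:

```python
import numpy as np

def normalize_keypoint_grid(pts, nose_tip, nose_root, inner_eye_l, inner_eye_r):
    """Normalize an (N, 2) key-point grid per steps a1)-a3).
    Index arguments are hypothetical; real landmark numbers depend on the model."""
    p = pts - pts[nose_tip]                       # a1) nose tip becomes the origin
    axis = pts[nose_root] - pts[nose_tip]         # a2) mid-axis: nose tip -> nose root
    theta = np.arctan2(axis[0], axis[1])          # angle of mid-axis w.r.t. vertical (+y)
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])               # reverse rotation onto the vertical
    p = p @ R.T
    ref = np.linalg.norm(pts[inner_eye_l] - pts[inner_eye_r])  # a3) reference length
    return p / ref

# Toy grid: nose tip at index 0, nose root at 1, inner eye corners at 2 and 3.
pts = np.array([[0.0, 0.0], [1.0, 1.0], [-1.0, 2.0], [1.0, 2.0]])
out = normalize_keypoint_grid(pts, 0, 1, 2, 3)
# the nose root lands on the vertical axis at distance sqrt(2)/ref from the origin
```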
Example 2
As shown in fig. 3, in this embodiment the sub-grids are divided according to the anatomical locations of the facial muscle groups, including: the left orbicularis oculi sub-grid (open small squares in the figure), the left orbicularis oris sub-grid (open small circles), the left expression-muscle sub-grid (open small triangles), the right orbicularis oculi sub-grid (solid small squares), the right orbicularis oris sub-grid (solid small circles), and the right expression-muscle sub-grid (solid small triangles);
It should be noted that the sub-grids may be divided in various ways according to the evaluation requirements and/or the doctor's opinion. The division shown in fig. 3, for example, is well suited to evaluating facial movement dysfunction of the hemifacial spasm type, because such a patient's condition first presents as left-right asymmetric rapid spasm of the orbicularis oculi and then progresses to left-right asymmetric rapid spasm of both the orbicularis oculi and the expression muscles, so this division can effectively track disease progression. For other types of facial dyskinesia, such as peripheral facial paralysis, it may be more appropriate to divide the facial mesh into just two sub-grids, a left face and a right face, with the specific division based on the actual patient condition;
a plurality of different sub-meshing schemes can be configured as required, the configured sub-meshing schemes are saved, and at least one group of candidate sub-meshing schemes are saved in advance.
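A configurable sub-grid division scheme can be stored as a plain mapping from sub-grid names to key-point index lists, with several candidate schemes pre-stored. The scheme names and all indices below are hypothetical placeholders, not real landmark numbers:

```python
# Candidate sub-grid division schemes; names and indices are illustrative only.
SCHEMES = {
    "hemifacial_spasm": {
        "orbicularis_oculi_left":  [33, 160, 158, 133],
        "orbicularis_oculi_right": [263, 387, 385, 362],
        "orbicularis_oris_left":   [61, 84, 91],
        "orbicularis_oris_right":  [291, 314, 321],
    },
    "peripheral_facial_paralysis": {
        # Coarse split into left and right halves of a 468-point mesh.
        "left_face":  list(range(0, 234)),
        "right_face": list(range(234, 468)),
    },
}

def get_scheme(name, schemes=SCHEMES):
    """Look up a pre-stored candidate scheme matching the diagnostic need."""
    if name not in schemes:
        raise KeyError(f"no sub-grid scheme named {name!r}")
    return schemes[name]
```

A new scheme configured for a specific patient would simply be saved as another entry in the mapping.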
Example 3
In step S103, the quantized motion feature information includes at least the motion displacement, speed, acceleration, included angle, and angular speed of each sub-grid, together with statistical features of this time-series information;
the statistical features include at least the mean, maximum, minimum, standard deviation, and quantiles;
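The statistical features of a sub-grid's time series might be extracted as in this minimal sketch; the particular quantile set is an assumed default:

```python
import numpy as np

def timeseries_stats(x, quantiles=(0.25, 0.5, 0.75)):
    """Statistical features of a quantized-motion time series:
    mean, max, min, standard deviation, and quantiles."""
    x = np.asarray(x, dtype=float)
    stats = {
        "mean": float(x.mean()),
        "max": float(x.max()),
        "min": float(x.min()),
        "std": float(x.std()),
    }
    for q in quantiles:
        stats[f"q{int(q * 100)}"] = float(np.quantile(x, q))
    return stats

# e.g. per-frame sub-grid speed samples
feats = timeseries_stats([1.0, 2.0, 3.0, 4.0])
```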
the quantized motion feature information of a sub-grid is calculated as follows: the motion features of each key point in the sub-grid are computed one by one and combined by normalized weighted summation, where normalized weighted summation is defined as assigning each key point a weight not smaller than 0 and not larger than 1, with the weights of all key points in each sub-grid summing to 1;
the quantized motion feature information of the sub-grid is expressed as:

F_k(C) = Σ_{i=1}^{N} w_i · f_k(p_i),  0 ≤ w_i ≤ 1,  Σ_{i=1}^{N} w_i = 1;

where f_k(·) represents the calculation of the k-th quantized motion feature, C = {p_1, p_2, ..., p_N} represents a key-point sub-grid composed of N facial key points p_i, and w_i represents a non-negative weight coefficient whose sum is 1;
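The normalized weighted summation above can be sketched in a few lines; the function names here are illustrative, and the per-point feature f_k is passed in as a callable:

```python
import numpy as np

def subgrid_feature(points, weights, feature_fn):
    """Normalized weighted sum F_k(C) = sum_i w_i * f_k(p_i).

    points:     (N, D) array of per-key-point inputs (e.g. velocities)
    weights:    (N,) non-negative weights summing to 1
    feature_fn: per-key-point feature f_k (an illustrative callable)
    """
    w = np.asarray(weights, dtype=float)
    assert np.all(w >= 0) and np.isclose(w.sum(), 1.0), "weights must sum to 1"
    per_point = np.array([feature_fn(p) for p in points])
    return float(np.dot(w, per_point))

# Example: f_k = planar speed magnitude of each key point's velocity (vx, vy)
velocities = np.array([[3.0, 4.0], [0.0, 5.0], [6.0, 8.0]])  # three key points
uniform_w = np.full(3, 1 / 3)
speed = subgrid_feature(velocities, uniform_w, lambda v: np.hypot(v[0], v[1]))
# per-point speeds are 5, 5, 10, so the uniformly weighted sum is 20/3
```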
the time sequence information can be information obtained by combining a single coordinate axis direction or multiple coordinate axis directions, and the method for combining the multiple coordinate axis directions at least comprises the following steps: forming a feature vector by using quantized motion feature information of a plurality of coordinate axis directions, and calculating at least one of the following norms of the feature vector:
a) 1 norm;
b) 2 norms;
c) Infinite norms;
In this embodiment, the displacement of a sub-grid may be calculated along the horizontal x direction alone, or as the Euclidean displacement in the xy plane, i.e. the displacements in the x and y directions form a vector and its 2-norm is calculated; the Euclidean displacement of the sub-grid in xyz space can be computed analogously;
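Combining per-axis displacements through a vector norm might look like the sketch below; the string keys used to pick the norm are an assumption of this example:

```python
import numpy as np

def combined_displacement(dx, dy, dz=None, norm="2"):
    """Combine per-axis displacement into a single value via a vector norm.
    norm: "1" (sum of absolute components), "2" (Euclidean), "inf" (max component)."""
    v = np.array([dx, dy] if dz is None else [dx, dy, dz], dtype=float)
    order = {"1": 1, "2": 2, "inf": np.inf}[norm]
    return float(np.linalg.norm(v, ord=order))

# A sub-grid moving 3 units in x and 4 in y:
# 1-norm -> 7, Euclidean 2-norm -> 5, infinity norm -> 4
```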
Further, the difference of each pair of bilaterally symmetric sub-grids for the same movement feature can be calculated. In some left-right asymmetric facial movement dysfunctions, such as hemifacial spasm, the left-right difference curve reflects the location and duration of a spasm more intuitively. Fig. 4 is a schematic diagram of the speed curves and speed difference curve of the orbicularis oculi sub-grids of the two eyes in one embodiment: the upper plot of fig. 4 shows the speed curves of the two sub-grids, and the lower plot shows their speed difference curve. The left eye exhibits two spasms, around 1.9 seconds and 5.9 seconds, producing two large peaks on the speed curve of the left orbicularis oculi sub-grid; the left-right difference curve in the lower plot reflects the left-right asymmetry of the movement feature even more clearly, suggesting possible facial movement dysfunction.
Example 4
In step S104, the facial movement dysfunction detection algorithm uses the quantized movement characteristics as input, and the output of the facial movement dysfunction detection algorithm at least comprises the position, type, degree and confidence of the movement dysfunction;
the facial movement dysfunction detection algorithm at least comprises an algorithm based on preset threshold comparison, a decision tree algorithm, a random forest algorithm, a support vector machine algorithm, a perceptron algorithm, a deep neural network algorithm and the like;
As shown in fig. 4, in this embodiment it is required to analyze whether the subject has facial movement dysfunction caused by hemifacial spasm. Eight seconds of facial video of the subject at rest are collected, and fig. 4 shows the speed curves and speed difference curve of the subject's binocular orbicularis oculi sub-grids. An algorithm based on preset threshold comparison is adopted: the speed difference threshold is set to 15 mm/s; when the speed difference between the left and right sub-grids exceeds the threshold, the algorithm judges that a unilateral spasm occurs during that period, and the side with the higher speed is judged to be the affected side. Combining the quantized movement features shown in fig. 4, the threshold comparison algorithm determines that a first unilateral eye spasm occurs at 1.84-1.92 seconds and a second at 5.82-5.91 seconds, with the left eye as the affected side.
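The preset-threshold comparison of this embodiment can be sketched as follows. The toy curves and the simplified per-frame flagging (no grouping of consecutive frames into one spasm interval) are illustrative only:

```python
import numpy as np

def detect_unilateral_spasm(t, v_left, v_right, threshold=15.0):
    """Threshold-comparison detector: flag time points where the left-right
    speed difference exceeds the threshold and report the faster (suspected
    affected) side. The 15 mm/s default follows the embodiment's example."""
    t = np.asarray(t, dtype=float)
    diff = np.asarray(v_left, dtype=float) - np.asarray(v_right, dtype=float)
    events = []
    for i in np.flatnonzero(np.abs(diff) > threshold):
        side = "left" if diff[i] > 0 else "right"
        events.append((float(t[i]), side))
    return events

# Toy curves: the left-eye speed spikes above the right eye's at t = 1.9 s
t = [1.8, 1.9, 2.0]
events = detect_unilateral_spasm(t, v_left=[2.0, 30.0, 3.0], v_right=[2.0, 4.0, 3.0])
# -> [(1.9, "left")]
```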
In this embodiment it is also required to analyze whether a subject has facial movement dysfunction caused by central facial paralysis. Videos of the subject closing the eyes, opening the mouth, puckering the lips, puffing the cheeks, and baring the teeth are collected, and the facial sub-grids are divided into left upper-eyelid, left lower-eyelid, right upper-eyelid, right lower-eyelid, upper-lip, and lower-lip sub-grids. For the eye-closing action, the displacement and speed curves of each eyelid sub-grid in the vertical direction are calculated, and the maximum displacement and maximum speed of each eyelid sub-grid in the vertical direction are extracted as movement features; likewise, the displacement and speed curves of each lip sub-grid are calculated and their maximum displacement and maximum speed extracted. The maximum displacement and maximum speed features of each sub-grid form a feature vector, a random forest is selected as the facial movement dysfunction detection algorithm, and a pre-trained random forest judges whether movement dysfunction of eyelid muscle weakness and/or lip muscle weakness exists.
The reminding display method at least comprises the following: generating and displaying an enhanced video, generating and displaying a time-sequence curve graph, and generating and displaying a data statistics chart. Enhanced video is defined as video processed using at least one of the following methods: overlaying the key point grid, cropping regions, zooming, highlighting, framing, and adding prompt arrows.
In combination with the above, in the present application:
According to the facial movement dysfunction assessment method and system of the present application, facial key point grids are extracted from collected facial action videos of a subject and normalized, movement characteristic information is then calculated on the sub-grids, and finally a facial movement dysfunction detection algorithm performs analysis and identification. Compared with traditional assessment methods, this brings the following notable improvements:
Non-invasive assessment: the assessment method provided by the present application does not require any invasive tools or equipment to assess facial movement dysfunction; it is implemented by analyzing facial motion videos, which is safer and more comfortable for the subject.
High automation: by automatically extracting the facial key point grid and calculating the motion characteristic information, the method reduces the need for manual intervention and improves assessment efficiency and consistency. After normalization, the size of the facial key point grid becomes a ratio relative to the reference length, which facilitates subsequent processing, analysis and comparison.
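The normalization just mentioned (expressing grid coordinates as ratios of a reference length) might look like the following sketch. The choice of the distance between two reference key points (e.g. the eye centers) as the reference length is an assumption, since this passage does not fix a particular reference; the rotation-alignment part of the normalization is omitted for brevity.

```python
# Sketch: normalize key points by translating to a reference midpoint and
# scaling by a reference length (assumed: distance between two chosen
# key points, such as the eye centers). Rotation alignment is omitted.

import math

def normalize_keypoints(points, ref_a, ref_b):
    """points: list of (x, y); ref_a/ref_b: indices of the reference points."""
    ax, ay = points[ref_a]
    bx, by = points[ref_b]
    ref_len = math.hypot(bx - ax, by - ay)
    cx, cy = (ax + bx) / 2, (ay + by) / 2  # translate to reference midpoint
    return [((x - cx) / ref_len, (y - cy) / ref_len) for x, y in points]
```

After this step every coordinate is a dimensionless ratio, so grids from different subjects and camera distances become directly comparable.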
By quantifying the movement characteristic information, the system can provide more objective and accurate assessment results rather than purely subjective observations; by comparing changes in the sub-grid speed-difference curves, the movement dysfunction of the patient's face can be quantified.
The assessment method provided by the application can be configured with a plurality of different sub-grid division schemes according to the needs, can carry out customized assessment according to different types of facial movement dysfunction, and is more personalized.
In addition, the evaluation system provided by the application can generate enhanced videos, time sequence graphs and data statistics charts, and is beneficial to doctors and patients to better understand evaluation results.
In summary, the method and system for evaluating facial dyskinesia provided by the present application can detect and identify the location, type and degree of dyskinesia more accurately by quantitative analysis, and improve diagnosis and treatment of facial dyskinesia.
It is to be understood that the above examples of the present invention are provided by way of illustration only and not by way of limitation of the embodiments of the present invention. Other variations or modifications of the above teachings will be apparent to those of ordinary skill in the art. It is not necessary here nor is it exhaustive of all embodiments. Any modification, equivalent replacement, improvement, etc. which come within the spirit and principles of the invention are desired to be protected by the following claims.
Claims (8)
1. A method for quantitatively evaluating facial movement dysfunction, comprising:
collecting front-face videos of the subject in a resting state and while performing specified facial actions, wherein the facial actions comprise at least one of: closing the eyes, frowning, raising the forehead, opening the mouth wide, puckering the mouth, puffing the cheeks, and baring the teeth;
extracting the key point grid of the face in the video frame by frame, and performing normalization processing on the key point grid, wherein the normalization processing at least comprises: translation, rotation and scaling;
dividing a plurality of sub-grids on the facial key point grids, and calculating quantized motion characteristic information of each sub-grid;
the facial movement dysfunction detection algorithm takes the quantized movement characteristics as input and outputs at least the position, type, degree and confidence of the movement dysfunction; the algorithm at least comprises one of: an algorithm based on comparison with a preset threshold, a decision tree algorithm, a random forest algorithm, a support vector machine algorithm, a perceptron algorithm and a deep neural network algorithm.
2. The quantitative evaluation method for facial movement dysfunction according to claim 1, wherein in the step of dividing sub-grids on the facial key point grid:
a corresponding sub-grid division scheme is configured according to the actual diagnosis requirement and stored, and at least one group of candidate sub-grid division schemes is pre-stored.
3. The quantitative evaluation method for facial movement dysfunction according to claim 1, wherein in the step of calculating the quantized motion characteristic information of each sub-grid:
the quantized motion characteristic information at least comprises the motion displacement, speed, acceleration, included angle, angular speed and the like of each sub-grid, as well as statistical characteristics of the time-sequence information;
the statistical characteristics at least comprise average value, maximum value, minimum value, standard deviation and quantile;
the calculation method of the quantized motion characteristic information of a sub-grid is as follows:
calculating the motion characteristic of each key point in the sub-grid one by one, and performing a normalized weighted summation of the per-key-point motion characteristics, wherein the normalized weighted summation is defined as assigning each key point in the sub-grid a weight w_i with 0 ≤ w_i ≤ 1, such that the weights of all key points in each sub-grid sum to 1.
4. The quantitative evaluation method for facial movement dysfunction according to claim 3, wherein the quantized motion characteristic information of the sub-grid is expressed as:

F_k(C) = Σ_{i=1}^{N} w_i · f_k(p_i), with 0 ≤ w_i ≤ 1 and Σ_{i=1}^{N} w_i = 1;

wherein f_k(·) represents the calculation of the k-th quantized motion feature, C = {p_1, p_2, ..., p_N} represents the key point sub-grid consisting of N facial key points p_i, and w_i are non-negative weighting factors whose sum is 1.
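As a minimal numerical sketch of the summation in claim 3 (the feature function f_k and the equal weights below are purely illustrative placeholders, not part of the claim):

```python
# Sketch of the normalized weighted summation F_k(C) = sum_i w_i * f_k(p_i)
# over the key points of one sub-grid. f_k is any per-key-point feature.

def weighted_subgrid_feature(points, weights, f_k):
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    assert all(0.0 <= w <= 1.0 for w in weights), "each weight in [0, 1]"
    return sum(w * f_k(p) for w, p in zip(weights, points))
```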
5. The quantitative evaluation method for facial movement dysfunction according to claim 3, wherein the time-sequence information is obtained along a single coordinate-axis direction or by combining multiple coordinate-axis directions, and the method for combining multiple coordinate-axis directions is as follows:
forming a feature vector by using quantized motion feature information of a plurality of coordinate axis directions, and calculating at least one of the following norms of the feature vector:
a) the 1-norm;
b) the 2-norm;
c) the infinity norm;
and calculating, for the same motion characteristic, the difference value of each pair of bilaterally symmetric sub-grids.
6. The quantitative evaluation method for facial movement dysfunction according to claim 1, wherein the reminding display method at least comprises:
generating and displaying an enhanced video, generating and displaying a time sequence curve graph, and generating and displaying a data statistics chart;
the enhanced video is defined as video processed using at least one of the following methods: overlaying the key point grid, cropping regions, zooming, highlighting, framing and adding prompt arrows.
7. A quantitative assessment system for facial movement dysfunction, comprising:
an acquisition input module, used for shooting and recording video of the subject's face in the resting state and while completing the specified facial actions, or for specifying the path of a video file of the subject's face recorded in those states;
the face key point grid extraction module is used for extracting the face key point grids in the video from the acquisition input module and carrying out normalization processing;
a motion characteristic analysis module, used for dividing the facial key point sub-grids, calculating the quantized motion characteristic information of each sub-grid, and using the facial movement dysfunction detection algorithm with the quantized motion characteristics to indicate the position and degree of possible movement disorder;
and the output display module is used for reminding and displaying the analysis and identification results of the motion characteristic analysis module.
8. The quantitative assessment system for facial movement dysfunction of claim 7, wherein:
the motion characteristic analysis module is also used for calculating, for the same motion characteristic, the difference value of each pair of bilaterally symmetric sub-grids;
the motion characteristic analysis module is also used for configuring a plurality of different sub-grid division schemes according to the requirement and pre-storing at least one group of candidate sub-grid division schemes.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311316114.2A CN117352161B (en) | 2023-10-11 | 2023-10-11 | Quantitative evaluation method and system for facial movement dysfunction |
Publications (2)
Publication Number | Publication Date |
---|---|
CN117352161A true CN117352161A (en) | 2024-01-05 |
CN117352161B CN117352161B (en) | 2024-07-05 |
Family
ID=89355346
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311316114.2A Active CN117352161B (en) | 2023-10-11 | 2023-10-11 | Quantitative evaluation method and system for facial movement dysfunction |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117352161B (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112465773A (en) * | 2020-11-25 | 2021-03-09 | 王坚 | Facial nerve paralysis disease detection method based on human face muscle movement characteristics |
US20210073521A1 (en) * | 2019-09-10 | 2021-03-11 | Amarjot Singh | Continuously Evolving and Interactive Disguised Face Identification (DFI) with Facial Key Points using ScatterNet Hybrid Deep Learning (SHDL) Network |
CN112686853A (en) * | 2020-12-25 | 2021-04-20 | 刘铮 | Facial paralysis detection system based on artificial intelligence and muscle model |
CN113782184A (en) * | 2021-08-11 | 2021-12-10 | 杭州电子科技大学 | Cerebral apoplexy auxiliary evaluation system based on facial key point and feature pre-learning |
CN116313058A (en) * | 2023-03-24 | 2023-06-23 | 浙江省中医院、浙江中医药大学附属第一医院(浙江省东方医院) | Facial paralysis intelligent assessment method, system, equipment and storage medium |
CN116740618A (en) * | 2023-08-07 | 2023-09-12 | 数聚工研(北京)科技有限公司 | Motion video action evaluation method, system, computer equipment and medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109543526B (en) | True and false facial paralysis recognition system based on depth difference characteristics | |
De Melo et al. | A deep multiscale spatiotemporal network for assessing depression from facial dynamics | |
RU2292839C2 (en) | Method and device for analyzing human behavior | |
US11663845B2 (en) | Method and apparatus for privacy protected assessment of movement disorder video recordings | |
CN105559802A (en) | Tristimania diagnosis system and method based on attention and emotion information fusion | |
CN110464367B (en) | Psychological anomaly detection method and system based on multi-channel cooperation | |
CN112101424B (en) | Method, device and equipment for generating retinopathy identification model | |
Loureiro et al. | Using a skeleton gait energy image for pathological gait classification | |
CN112883867A (en) | Student online learning evaluation method and system based on image emotion analysis | |
CN102567734A (en) | Specific value based retina thin blood vessel segmentation method | |
CN111403026A (en) | Facial paralysis grade assessment method | |
Martin et al. | Automated tackle injury risk assessment in contact-based sports-a rugby union example | |
CN111210415A (en) | Method for detecting facial expression coma of Parkinson patient | |
CN113974612A (en) | Automatic assessment method and system for upper limb movement function of stroke patient | |
CN116128814A (en) | Standardized acquisition method and related device for tongue diagnosis image | |
CN117438048B (en) | Method and system for assessing psychological disorder of psychiatric patient | |
CN117409930B (en) | Medical rehabilitation data processing method and system based on AI technology | |
CN113506274B (en) | Detection system for human cognitive condition based on visual saliency difference map | |
Zhang et al. | Research on Dyslexia Detection based on Eye Tracking | |
CN117352161B (en) | Quantitative evaluation method and system for facial movement dysfunction | |
CA2878374A1 (en) | Kinetic-based tool for biometric identification, verification, validation and profiling | |
CN115690528A (en) | Electroencephalogram signal aesthetic evaluation processing method, device, medium and terminal across main body scene | |
WO2022024272A1 (en) | Information processing system, data accumulation device, data generation device, information processing method, data accumulation method, data generation method, recording medium, and database | |
Hammal et al. | Holistic and feature-based information towards dynamic multi-expressions recognition | |
JP2023078857A (en) | Eyewear virtual try-on system, eyewear selection system, eyewear try-on system, and eyewear classification system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant |