CN117352161A - Quantitative evaluation method and system for facial movement dysfunction - Google Patents


Info

Publication number: CN117352161A
Application number: CN202311316114.2A
Authority: CN (China)
Original language: Chinese (zh)
Prior art keywords: sub-grid, facial, face, key point
Inventors: 付思超, 周东, 吴曦, 何麒, 董博雅
Applicant and current assignee: Ningdong Wansheng Medical Technology Wuhan Co ltd
Legal status: Pending (the status listed is an assumption, not a legal conclusion)


Classifications

    • G: Physics
    • G16: Information and communication technology [ICT] specially adapted for specific application fields
    • G16H: Healthcare informatics, i.e. ICT specially adapted for the handling or processing of medical or healthcare data
    • G16H 50/20: ICT specially adapted for medical diagnosis, medical simulation or medical data mining, for computer-aided diagnosis, e.g. based on medical expert systems
    • G06: Computing; calculating or counting
    • G06V: Image or video recognition or understanding
    • G06V 10/46: Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; salient regional features
    • G06V 40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/168: Feature extraction; face representation


Abstract

The invention discloses a quantitative evaluation method and system for facial movement dysfunction, relating to the technical field of facial movement dysfunction evaluation.

Description

Quantitative evaluation method and system for facial movement dysfunction
Technical Field
The invention relates to the technical field of facial movement dysfunction assessment, and in particular to a quantitative assessment method and system for facial movement dysfunction.
Background
Facial dyskinesia, also known as facial muscle dysfunction, is a symptom affecting facial muscle control and expression, most often caused by neurological disease, muscle disease, or trauma.
Assessment of facial dyskinesia requires comprehensive consideration of the patient's symptoms, medical history, neuromuscular function, and imaging and laboratory examinations. A detailed medical history is taken with the patient, covering the onset time of symptoms, the course of their development, possible causes, and any other medical conditions associated with facial dyskinesia, followed by a neurological examination to assess facial nerve function.
At present, diagnosis and evaluation of facial movement dysfunction is performed mainly by doctors with extensive clinical experience. The evaluation relies chiefly on the doctor's visual observation of the patient's face at rest or while completing specific facial actions, visually judging the amplitude, speed, and bilateral symmetry of each action. This approach has a high time cost and is subject to deviation caused by individual differences in experience and subjective factors. It also lacks quantitative indexes that can be compared across patients, or across different time periods for the same patient, which hinders accurate evaluation of disease progression and rehabilitation. In general, the field still lacks a good solution, and a method and system that can help doctors perform quantitative evaluation of facial movement dysfunction are urgently needed.
Disclosure of Invention
(I) Technical problems solved
In view of the deficiencies of the prior art, the invention provides a quantitative evaluation method and system for facial movement dysfunction, solving the problem in the prior art that quantitative evaluation of facial movement dysfunction is difficult because quantitative indexes for comparison between patients, or between different time periods of the same patient, are lacking.
(II) technical scheme
In order to achieve the above object, the present invention provides a quantitative assessment method for facial movement dysfunction, comprising:
collecting frontal face videos of a subject at rest and while performing specified facial actions, wherein the facial actions include at least one of: closing the eyes, frowning, raising the forehead, opening the mouth wide, puckering the lips, puffing the cheeks, and baring the teeth;
extracting the facial key point grid of the person in the video frame by frame, and normalizing the key point grid, where the normalization includes at least translation, rotation, and scaling;
dividing the facial key point grid into a plurality of sub-grids and calculating quantized motion feature information for each sub-grid;
a facial movement dysfunction detection algorithm takes the quantized movement features as input and outputs at least the position, type, degree, and confidence of any movement dysfunction; the detection algorithm includes at least one of: comparison against preset thresholds, a decision tree, a random forest, a support vector machine, a perceptron, and a deep neural network.
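The four steps above can be sketched as a minimal pipeline. This is an illustrative sketch, not the patent's implementation: the function names are placeholders, the rotation step of normalization is omitted, and the synthetic "face" has only four key points.

```python
import numpy as np

def normalize_grid(grid, origin_idx=0, ref_pair=(1, 2)):
    """Translate so one key point is the origin, then scale by a
    reference length (rotation is omitted in this minimal sketch)."""
    g = grid - grid[origin_idx]                             # translation
    ref = np.linalg.norm(g[ref_pair[0]] - g[ref_pair[1]])   # reference length
    return g / ref                                          # scaling

def subgrid_speed(frames, idx, fps):
    """Mean per-frame speed (2-norm of displacement) of one sub-grid."""
    pts = frames[:, idx, :]                 # (T, n_pts, 2)
    disp = np.diff(pts, axis=0)             # frame-to-frame displacement
    return np.linalg.norm(disp, axis=2).mean(axis=1) * fps

# Synthetic 3-frame, 4-point "face": point 3 moves, the others are still.
frames = np.zeros((3, 4, 2))
frames[:, 1] = [1.0, 0.0]                   # reference pair, distance 1
frames[:, 2] = [2.0, 0.0]
frames[:, 3, 0] = [0.0, 0.1, 0.2]           # x drifts 0.1 units per frame

norm_frames = np.stack([normalize_grid(f) for f in frames])
speed = subgrid_speed(norm_frames, idx=[3], fps=30)
print(speed)    # 0.1 units/frame * 30 fps -> [3. 3.]
```

A detection algorithm (threshold comparison, random forest, etc.) would then consume such speed curves as its quantized movement features.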
The present invention is further configured such that, in the face key point meshing step:
configuring a corresponding sub-grid division scheme according to the actual diagnostic requirement, saving the configured scheme, and pre-storing at least one group of candidate sub-grid division schemes;
the present invention is further configured such that, in the sub-grid motion feature calculation step:
the quantized motion feature information includes at least the motion displacement, velocity, acceleration, included angle, and angular velocity of each sub-grid, together with statistical features of these time series;
the statistical features include at least the mean, maximum, minimum, standard deviation, and quantiles;
the quantized motion feature information of a sub-grid is calculated as follows:
compute the motion feature of each key point in the sub-grid one by one, then take a normalized weighted sum of the per-key-point features, where the normalized weighted sum assigns each key point in the sub-grid a weight $w_i$ with $0 \le w_i \le 1$ such that the weights of all key points in each sub-grid sum to 1;
the invention is further arranged such that said quantized motion feature information of said sub-grid is expressed as:
0≤w i ≤1;
wherein f k (. Cndot.) represents the calculation of the kth said quantized motion feature, c= { p 1 ,p 2 ,...p N The key point sub-grid is represented by N face key points p i Constitution, w i Represents a non-negative weight coefficient, the sum of which is 1;
the invention further provides that the time sequence information is information obtained by combining a single coordinate axis direction or multiple coordinate axis directions, and the method for combining the multiple coordinate axis directions comprises the following steps:
forming a feature vector by using quantized motion feature information of a plurality of coordinate axis directions, and calculating at least one of the following norms of the feature vector:
a) 1 norm;
b) 2 norms;
c) Infinite norm:
calculating the difference value of each bilateral symmetry sub-grid for the same motion characteristic;
the invention further provides that the reminding display method at least comprises the following steps:
generating and displaying an enhanced video, generating and displaying a time sequence curve graph, and generating and displaying a data statistics chart;
the enhanced video is defined as video processed using at least one method: overlapping a key point grid, cutting an area, zooming, highlighting, framing and adding a prompt arrow;
the invention also provides a quantitative evaluation system for facial movement dysfunction, which comprises:
the acquisition input module, used to record video of the subject's face at rest and while completing the specified facial actions, or to specify the path of a video file of the subject's face recorded in those states;
the facial key point grid extraction module, used to extract the facial key point grid from the video of the acquisition input module and perform normalization;
the motion feature analysis module, used to divide the facial key point grid into sub-grids, calculate the quantized motion feature information of each sub-grid, and indicate the position and degree of possible movement dysfunction by applying a facial movement dysfunction detection algorithm to the quantized movement features;
the output display module, used to alert and display the analysis and identification results of the motion feature analysis module;
The invention is further arranged such that the motion feature analysis module is also used to calculate, for each pair of bilaterally symmetric sub-grids, the difference of the same motion feature;
The invention further provides that the motion feature analysis module is also used to configure a plurality of different sub-grid division schemes as required and to pre-store at least one group of candidate sub-grid division schemes.
(III) beneficial effects
The invention provides a quantitative evaluation method and a quantitative evaluation system for facial movement dysfunction. The beneficial effects are as follows:
according to the facial movement dysfunction assessment method and the facial movement dysfunction assessment system, facial key point grids are extracted through collecting facial action videos of a subject, normalization processing is carried out, then movement characteristic information is calculated on the sub-grids, and finally analysis and identification are carried out by using a facial movement dysfunction detection algorithm, and compared with the traditional assessment method, the facial movement dysfunction assessment method has the remarkable improvement that:
with non-invasive assessment, the assessment method provided by the application does not require the use of any invasive tools or equipment to assess facial movement dysfunction, but rather is implemented by analyzing facial motion videos, which is safer and more comfortable.
The method has high automation, reduces the requirement of manual intervention by automatically extracting the facial key point grid and calculating the motion characteristic information, improves the evaluation efficiency and consistency, changes the size of the facial key point grid into a ratio relative to the reference length after normalization processing, and is beneficial to subsequent processing analysis and comparison.
By quantifying the movement characteristic information, the system can provide more objective and accurate assessment results, not just subjective observations, and by comparing the grid speed difference curve changes, the movement dysfunction of the face of the patient can be quantified.
The assessment method provided by the application can be configured with a plurality of different sub-grid division schemes according to the needs, can carry out customized assessment according to different types of facial movement dysfunction, and is more personalized.
In addition, the evaluation system provided by the application can generate enhanced videos, time sequence graphs and data statistics charts, and is beneficial to doctors and patients to better understand evaluation results.
In summary, the method and system for evaluating facial dyskinesia provided by the present application can detect and identify the location, type and degree of dyskinesia more accurately by quantitative analysis, and improve diagnosis and treatment of facial dyskinesia.
The problem that quantitative evaluation on facial movement dysfunction is difficult to carry out due to the fact that quantitative indexes for comparison among patients or among different time periods of the same patient are lacking in the prior art is solved.
Drawings
FIG. 1 is a flowchart of a quantitative assessment method for facial dyskinesia of the present invention;
FIG. 2 is a schematic diagram of a face key point grid in embodiment 1 of the present invention;
FIG. 3 is a schematic diagram of face subgrid division in embodiment 2 of the present invention;
FIG. 4 is a schematic diagram of the speed curves and speed difference curve of the orbicularis oculi sub-grids in embodiment 4 of the present invention.
Detailed Description
The following clearly and completely describes the technical solutions in the embodiments of the present invention with reference to the accompanying drawings. The described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
Examples
Referring to fig. 1-4, the present invention provides a method for quantitatively evaluating facial movement dysfunction, comprising:
s101, collecting front face videos of a subject in a specified face action and resting state, wherein the face actions at least comprise more than one actions of closing eyes, frowning, lifting forehead, opening mouth, pucker, cheek and teeth;
s102, extracting key point grids of the face of the person in the video frame by frame, and carrying out normalization processing on the key point grids, wherein the normalization processing at least comprises: translation, rotation, scaling;
s103, dividing a plurality of sub-grids on the facial key point grids, and calculating quantized motion characteristic information of each sub-grid;
s104, analyzing and identifying the position, type and degree of the possible movement disorder by using a facial movement disorder analysis algorithm and quantized movement features, prompting and displaying, and outputting the position, type, degree and confidence degree of the possible movement disorder by using the quantized movement features as input by a facial movement disorder detection algorithm, wherein the facial movement disorder detection algorithm at least comprises an algorithm based on preset threshold comparison, a decision tree algorithm, a random forest algorithm, a support vector machine algorithm, a perceptron algorithm and a deep neural network algorithm.
The invention also provides a quantitative assessment system for facial movement dysfunction of a subject, comprising:
the acquisition input module, used to record video of the subject's face at rest and while completing the specified facial actions, or to specify the path of a video file of the subject's face recorded in those states;
the facial key point grid extraction module, used to extract the facial key point grid from the video of the acquisition input module and perform normalization;
the motion feature analysis module, used to divide the facial key point grid into sub-grids, calculate the quantized motion feature information of each sub-grid, and indicate the possible position and degree of movement dysfunction by applying a facial movement dysfunction detection algorithm to the quantized movement features;
the motion feature analysis module can also calculate, for each pair of bilaterally symmetric sub-grids, the difference of the same motion feature;
the motion feature analysis module is further used to configure a plurality of different sub-grid division schemes as required and to pre-store at least one group of candidate sub-grid division schemes;
and the output display module, used to alert and display the analysis and identification results of the motion feature analysis module.
Example 1
As shown in fig. 2, in this embodiment a 468-point facial key point grid is extracted using the open-source MediaPipe face mesh model. The left side of fig. 2 shows the frontal view of the face grid and the right side shows the side view; the intersection points of the line segments represent facial key points, and the line segments represent grid lines;
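A frame-by-frame extraction step with MediaPipe Face Mesh might look roughly as sketched below. The MediaPipe usage is shown only in comments (it requires the `mediapipe` and `opencv-python` packages); the runnable part is the small helper that converts the model's normalized landmarks to pixel coordinates, exercised here with stub landmark objects standing in for real output.

```python
import numpy as np
from types import SimpleNamespace

def landmarks_to_array(landmarks, width, height):
    """Convert a face-mesh landmark list (normalized x, y in [0, 1])
    to an (N, 2) pixel-coordinate array."""
    return np.array([[p.x * width, p.y * height] for p in landmarks])

# With MediaPipe installed, a video frame would be processed roughly as:
#   import cv2, mediapipe as mp
#   mesh = mp.solutions.face_mesh.FaceMesh(static_image_mode=False)
#   res = mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
#   pts = landmarks_to_array(res.multi_face_landmarks[0].landmark, w, h)

# Stub landmarks stand in for MediaPipe output in this self-contained demo:
fake = [SimpleNamespace(x=0.5, y=0.5), SimpleNamespace(x=0.25, y=1.0)]
pts = landmarks_to_array(fake, width=640, height=480)
print(pts)    # [[320. 240.] [160. 480.]]
```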
in this embodiment, the method for normalizing the keypoint grid includes:
a1) Translation: translate all key point coordinates of the facial key point grid into a coordinate system with the nose tip as the origin;
a2) Rotation: select from the facial key point grid at least two key points on the central axis of the face and fit the facial central axis, for example the nose tip and the nasion; calculate the straight line through these two key points in the planar rectangular coordinate system as the facial central axis, calculate its included angle with the vertical direction, and rotate the key point grid about the nose tip by the opposite of that angle, so that the grid keeps its origin at the nose tip and is bilaterally symmetric about the vertical direction;
a3) Scaling: select two facial key points whose separation is relatively fixed, for example the two inner eye corners, or the nasion and the subnasale; calculate the planar Euclidean distance between their coordinates, take this distance as the reference length, and divide the coordinates of the translated and rotated facial key point grid by the reference length.
After normalization, the scale of the whole facial key point grid is unaffected by shooting distance, camera resolution, and individual differences; it becomes a ratio relative to the reference length, which facilitates subsequent processing, analysis, and comparison.
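The a1) through a3) normalization steps can be sketched as follows. This is a minimal 2-D sketch in which the key point indices for the nose tip, nasion, and reference pair are illustrative parameters, not the actual MediaPipe landmark indices.

```python
import numpy as np

def normalize_keypoints(pts, nose_tip, nasion, ref_a, ref_b):
    """a1-a3 normalization of an (N, 2) key point array."""
    # a1) translation: the nose tip becomes the origin
    p = pts - pts[nose_tip]
    # a2) rotation: align the nose-tip -> nasion axis with the vertical
    axis = p[nasion]                        # nose tip is now the origin
    theta = np.arctan2(axis[0], axis[1])    # angle from the +y axis
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    p = p @ R.T
    # a3) scaling: divide by a relatively fixed reference distance
    ref = np.linalg.norm(p[ref_a] - p[ref_b])
    return p / ref

# Tilted toy face: nose tip at (5, 5), nasion at (6, 6), one cheek point.
pts = np.array([[5.0, 5.0], [6.0, 6.0], [7.0, 5.0]])
out = normalize_keypoints(pts, nose_tip=0, nasion=1, ref_a=0, ref_b=1)
print(np.round(out, 6))    # nasion lands on the vertical axis at (0, 1)
```

After this call the grid origin sits at the nose tip, the central axis is vertical, and all coordinates are ratios of the reference length, matching the property described above.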
Example 2
As shown in fig. 3, in this embodiment the sub-grids are divided according to the anatomical locations of the facial muscle groups, including: the left orbicularis oculi sub-grid (open squares in the figure), the left orbicularis oris sub-grid (open circles), the left expression muscle sub-grid (open triangles), the right orbicularis oculi sub-grid (solid squares), the right orbicularis oris sub-grid (solid circles), and the right expression muscle sub-grid (solid triangles);
It should be noted that the sub-grids may be divided in various ways according to the evaluation requirements and/or the doctor's opinion. The division shown in fig. 3 is well suited to evaluating facial movement dysfunction of the facial spasm type, because such a patient's condition is characterized first by rapid, left-right asymmetric spasm of the orbicularis oculi, later developing into rapid, left-right asymmetric spasm of both the orbicularis oculi and the expression muscles; the division of fig. 3 can therefore effectively evaluate the progression of the condition. For other types of facial dyskinesia, such as peripheral facial paralysis, it may be more appropriate to divide the facial grid into two sub-grids, the left face and the right face, with the specific division based on the actual condition of the patient;
a plurality of different sub-grid division schemes can be configured as required; the configured schemes are saved, and at least one group of candidate schemes is pre-stored.
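Configurable, pre-stored division schemes could be represented as simple named mappings from sub-grid name to key point indices. The scheme names and all landmark indices below are illustrative placeholders, not the actual MediaPipe indices for these muscle groups.

```python
# Pre-stored candidate sub-grid division schemes (indices are placeholders).
SCHEMES = {
    "facial_spasm": {
        "left_orbicularis_oculi":  [33, 133, 159, 145],
        "right_orbicularis_oculi": [263, 362, 386, 374],
        "left_orbicularis_oris":   [61, 78, 81],
        "right_orbicularis_oris":  [291, 308, 311],
    },
    # For peripheral facial paralysis, a simple left/right split may suffice.
    "peripheral_facial_paralysis": {
        "left_face":  list(range(0, 234)),
        "right_face": list(range(234, 468)),
    },
}

def get_scheme(name, schemes=SCHEMES):
    """Return a saved sub-grid division scheme by name."""
    return schemes[name]

scheme = get_scheme("facial_spasm")
print(sorted(scheme))
```

Saving a doctor-configured scheme is then just adding another entry to the mapping (or persisting it to disk).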
Example 3
In step S103, the quantized motion feature information includes at least the motion displacement, velocity, acceleration, included angle, and angular velocity of each sub-grid, together with statistical features of these time series;
the statistical features include at least the mean, maximum, minimum, standard deviation, and quantiles;
the quantized motion feature information of a sub-grid is calculated by computing the motion feature of each key point in the sub-grid one by one and taking a normalized weighted sum of the per-key-point features, where the normalized weighted sum assigns each key point a weight $w_i$ with $0 \le w_i \le 1$ such that the weights of all key points in each sub-grid sum to 1;
the quantized motion feature information of the sub-grid is expressed as:

$$F_k(C) = \sum_{i=1}^{N} w_i\, f_k(p_i), \qquad 0 \le w_i \le 1, \qquad \sum_{i=1}^{N} w_i = 1$$

where $f_k(\cdot)$ denotes the calculation of the $k$-th quantized motion feature, $C = \{p_1, p_2, \ldots, p_N\}$ denotes a key point sub-grid composed of $N$ facial key points $p_i$, and $w_i$ denotes a non-negative weight coefficient whose sum over the sub-grid is 1;
the time series information may be obtained along a single coordinate axis direction or by combining multiple coordinate axis directions; combining multiple axis directions at least comprises forming a feature vector from the quantized motion feature information along the several coordinate axes and calculating at least one of the following norms of the feature vector:
a) the 1-norm;
b) the 2-norm;
c) the infinity norm;
In this embodiment, the displacement of a sub-grid may be computed along the horizontal x direction alone, or as the Euclidean displacement in the xy plane, that is, by forming a vector from the x and y displacements and taking its 2-norm, or as the Euclidean displacement in xyz space;
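The normalized weighted sum and the 2-norm combination can be sketched together. This is an illustrative sketch: the per-point feature here is displacement magnitude between two frames, and the equal weights are an assumption.

```python
import numpy as np

def subgrid_feature(indices, weights, feat_fn):
    """F_k(C) = sum_i w_i * f_k(p_i), with weights in [0, 1] summing to 1."""
    w = np.asarray(weights, dtype=float)
    assert np.all(w >= 0) and np.isclose(w.sum(), 1.0)
    return sum(wi * feat_fn(i) for wi, i in zip(w, indices))

def displacement_2norm(prev, cur):
    """Per-key-point displacement magnitude between two frames,
    combining the x and y directions via the 2-norm."""
    return lambda i: np.linalg.norm(cur[i] - prev[i])

prev = np.array([[0.0, 0.0], [1.0, 0.0]])
cur  = np.array([[3.0, 4.0], [1.0, 0.0]])    # point 0 moved by (3, 4)
f = displacement_2norm(prev, cur)
val = subgrid_feature([0, 1], [0.5, 0.5], f)
print(val)    # 0.5 * 5 + 0.5 * 0 = 2.5
```

Swapping `feat_fn` for a velocity, acceleration, or angle computation yields the other quantized features in the same framework.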
Further, the difference of the same movement feature between left-right symmetric sub-grids can be calculated. For some left-right asymmetric facial movement dysfunctions, such as facial spasm, the left-right difference curve reflects the position and duration of a spasm more intuitively. Fig. 4 shows the speed curves and the speed difference curve of the orbicularis oculi sub-grids of both eyes in one embodiment of the invention: the upper panel shows the speed curves and the lower panel the speed difference curve. Two spasms occur in the left eye, around 1.9 seconds and 5.9 seconds, producing two large peaks on the speed curve of the left orbicularis oculi sub-grid; the left-right difference curve in the lower panel reflects the left-right asymmetry of the movement feature even more clearly, suggesting possible facial movement dysfunction.
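Curves like those of FIG. 4 could be computed as sketched below, under the assumption that a sub-grid's speed is the mean frame-to-frame displacement magnitude of its points times the frame rate; the tiny synthetic trajectories are illustrative only.

```python
import numpy as np

def lr_speed_difference(left_pts, right_pts, fps):
    """Speed curves of two symmetric sub-grids and their difference
    (left minus right), as in the upper and lower panels of FIG. 4."""
    def speed(pts):                          # pts: (T, n_points, 2)
        d = np.diff(pts, axis=0)             # frame-to-frame displacement
        return np.linalg.norm(d, axis=2).mean(axis=1) * fps
    vl, vr = speed(left_pts), speed(right_pts)
    return vl, vr, vl - vr

# Synthetic single-point sub-grids: the left side twitches, the right is still.
left = np.zeros((4, 1, 2))
left[:, 0, 0] = [0.0, 1.0, 1.0, 2.0]        # two movement bursts
right = np.zeros((4, 1, 2))

vl, vr, d_lr = lr_speed_difference(left, right, fps=1)
print(d_lr)    # [1. 0. 1.]: nonzero exactly where the left side moves
```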
Example 4
In step S104, the facial movement dysfunction detection algorithm takes the quantized movement features as input, and its output includes at least the position, type, degree, and confidence of the movement dysfunction;
the detection algorithm includes at least one of: comparison against preset thresholds, a decision tree, a random forest, a support vector machine, a perceptron, and a deep neural network;
As shown in fig. 4, in this embodiment it is required to analyze whether the subject has facial movement dysfunction caused by facial spasm. Eight seconds of facial video of the subject at rest are collected, and fig. 4 shows the speed curves and speed difference curve of the subject's binocular orbicularis oculi sub-grids. An algorithm based on preset threshold comparison is adopted: the speed difference threshold is set to 15 mm/s; whenever the speed difference between the left and right sub-grids exceeds this threshold, the algorithm judges that a unilateral spasm occurs during that period, with the faster side as the affected side. Combining the quantized movement features shown in fig. 4, the preset-threshold comparison algorithm determines that a first unilateral eye spasm occurs at 1.84-1.92 seconds and a second at 5.82-5.91 seconds, and that the left eye is the affected side.
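The preset-threshold comparison just described can be sketched as a small event detector over the speed difference curve. The 15 mm/s threshold follows the embodiment; the sample difference curve is synthetic, and signed differences (left minus right) are an assumption used to recover the affected side.

```python
import numpy as np

def detect_unilateral_spasm(diff_curve, fps, threshold=15.0):
    """Flag each period where |left - right| speed difference exceeds the
    threshold as a unilateral spasm; the sign names the faster (affected)
    side. Returns (start_s, end_s, side) tuples."""
    over = np.abs(diff_curve) > threshold
    events, start = [], None
    for t, flag in enumerate(over):
        if flag and start is None:
            start = t
        elif not flag and start is not None:
            side = "left" if diff_curve[start:t].mean() > 0 else "right"
            events.append((start / fps, t / fps, side))
            start = None
    if start is not None:                    # event still open at end of video
        side = "left" if diff_curve[start:].mean() > 0 else "right"
        events.append((start / fps, len(diff_curve) / fps, side))
    return events

# Synthetic 7-sample difference curve at 1 fps, in mm/s.
diff_curve = np.array([0.0, 20.0, 22.0, 0.0, 0.0, -18.0, 0.0])
events = detect_unilateral_spasm(diff_curve, fps=1)
print(events)    # [(1.0, 3.0, 'left'), (5.0, 6.0, 'right')]
```

At a real frame rate (e.g. 30 fps) the same logic yields sub-second intervals like the 1.84-1.92 s spasm of the embodiment.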
In this embodiment, it is required to analyze whether a subject has facial movement dysfunction caused by central facial paralysis. Videos of the subject closing the eyes, opening the mouth, puckering the lips, puffing the cheeks, and baring the teeth are collected. The facial grid is divided into a left upper-eyelid sub-grid, a left lower-eyelid sub-grid, a right upper-eyelid sub-grid, a right lower-eyelid sub-grid, an upper-lip sub-grid, and a lower-lip sub-grid. While the subject completes the eye-closing action, the vertical displacement and speed curves of each eyelid sub-grid are calculated, and the maximum displacement and maximum speed of each eyelid sub-grid in the vertical direction are extracted; the displacement and speed curves of each lip sub-grid are likewise calculated and their maximum displacement and maximum speed extracted. The maximum displacement and maximum speed features of each sub-grid form a feature vector, a random forest algorithm is selected as the facial movement dysfunction detection algorithm, and a pre-trained random forest is used to judge whether movement dysfunction of eyelid muscle weakness and/or lip muscle weakness exists.
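A minimal random-forest sketch of this detector is shown below. The tiny synthetic training set and single feature (maximum vertical displacement of one eyelid sub-grid during eye closing) are illustrative assumptions; the embodiment uses a pre-trained model over the full per-sub-grid feature vectors.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Feature: max vertical displacement during eye closing (normalized units).
# Label 1 = eyelid muscle weakness (reduced displacement), 0 = normal.
X_train = np.array([[1.0], [0.9], [1.1], [0.95], [0.1], [0.15], [0.05], [0.2]])
y_train = np.array([0, 0, 0, 0, 1, 1, 1, 1])

clf = RandomForestClassifier(n_estimators=10, random_state=0)
clf.fit(X_train, y_train)

# Two unseen subjects: one near-normal displacement, one severely reduced.
pred = clf.predict([[0.92], [0.12]])
print(pred)    # [0 1]
```

`predict_proba` on the same model would supply the confidence value that the detection algorithm is required to output alongside position, type, and degree.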
The alert display method includes at least: generating and displaying an enhanced video, generating and displaying a time series graph, and generating and displaying a statistical data chart. The enhanced video is defined as video processed with at least one of the following methods: overlaying the key point grid, cropping a region, zooming, highlighting, framing, and adding prompt arrows.
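The key point overlay step of enhanced-video generation can be sketched with a pure-numpy helper; a real implementation would likely draw anti-aliased circles and grid lines with a graphics library such as OpenCV, which is only hinted at in the comment.

```python
import numpy as np

def overlay_keypoints(frame, pts, color=(0, 255, 0)):
    """Burn key points into a copy of an (H, W, 3) frame as single green
    pixels; a full implementation would draw circles and grid lines
    (e.g. with OpenCV's cv2.circle / cv2.line)."""
    out = frame.copy()
    h, w = out.shape[:2]
    for x, y in np.round(np.asarray(pts)).astype(int):
        if 0 <= x < w and 0 <= y < h:        # skip points outside the frame
            out[y, x] = color
    return out

frame = np.zeros((4, 4, 3), dtype=np.uint8)
marked = overlay_keypoints(frame, [[1.0, 2.0], [9.0, 9.0]])  # 2nd is off-frame
print(marked[2, 1])    # [  0 255   0]
```

Applying this per frame, then re-encoding the frames, yields the enhanced video; cropping, zooming, and framing are similarly per-frame array operations.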
In combination with the above, in the present application:
according to the facial movement dysfunction assessment method and the facial movement dysfunction assessment system, facial key point grids are extracted through collecting facial action videos of a subject, normalization processing is carried out, then movement characteristic information is calculated on the sub-grids, and finally analysis and identification are carried out by using a facial movement dysfunction detection algorithm, and compared with the traditional assessment method, the facial movement dysfunction assessment method has the remarkable improvement that:
with non-invasive assessment, the assessment method provided by the application does not require the use of any invasive tools or equipment to assess facial movement dysfunction, but rather is implemented by analyzing facial motion videos, which is safer and more comfortable.
The method has high automation, reduces the requirement of manual intervention by automatically extracting the facial key point grid and calculating the motion characteristic information, improves the evaluation efficiency and consistency, changes the size of the facial key point grid into a ratio relative to the reference length after normalization processing, and is beneficial to subsequent processing analysis and comparison.
By quantifying the movement characteristic information, the system can provide more objective and accurate assessment results, not just subjective observations, and by comparing the grid speed difference curve changes, the movement dysfunction of the face of the patient can be quantified.
The assessment method provided by the application can be configured with a plurality of different sub-grid division schemes according to the needs, can carry out customized assessment according to different types of facial movement dysfunction, and is more personalized.
In addition, the evaluation system provided by the application can generate enhanced videos, time sequence graphs and data statistics charts, and is beneficial to doctors and patients to better understand evaluation results.
In summary, the method and system for evaluating facial movement dysfunction provided by the present application detect and identify the location, type, and degree of dysfunction more accurately through quantitative analysis, improving the diagnosis and treatment of facial movement dysfunction.
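The normalization described above (translation, rotation, scaling, so that grid dimensions become ratios relative to a reference length) can be sketched as follows. This is a minimal illustration, not the patent's implementation: the 2-D point format, the choice of inter-ocular distance as the reference length, and the function name are all assumptions.

```python
import math

def normalize_keypoints(points, left_eye_idx, right_eye_idx):
    """Translate, rotate, and scale a 2-D key point grid so that
    coordinates become ratios relative to a reference length
    (here: the inter-ocular distance, an assumed choice)."""
    # Translation: move the centroid of the grid to the origin.
    cx = sum(x for x, y in points) / len(points)
    cy = sum(y for x, y in points) / len(points)
    pts = [(x - cx, y - cy) for x, y in points]

    # Rotation: align the eye-to-eye axis with the x-axis.
    (lx, ly), (rx, ry) = pts[left_eye_idx], pts[right_eye_idx]
    theta = math.atan2(ry - ly, rx - lx)
    cos_t, sin_t = math.cos(-theta), math.sin(-theta)
    pts = [(x * cos_t - y * sin_t, x * sin_t + y * cos_t) for x, y in pts]

    # Scaling: divide by the reference length so sizes become ratios.
    (lx, ly), (rx, ry) = pts[left_eye_idx], pts[right_eye_idx]
    ref = math.hypot(rx - lx, ry - ly)
    return [(x / ref, y / ref) for x, y in pts]
```

After this step the eye-to-eye distance is exactly 1 in every frame, so grids from different frames and different subjects become directly comparable, as the description notes.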
It is to be understood that the above examples of the present invention are provided by way of illustration only and do not limit its embodiments. Other variations or modifications will be apparent to those of ordinary skill in the art; it is neither necessary nor possible to enumerate all embodiments here. Any modification, equivalent replacement, or improvement made within the spirit and principles of the invention is intended to fall within the protection of the following claims.

Claims (8)

1. A method for quantitatively evaluating facial movement dysfunction, comprising:
collecting frontal face videos of a subject in a resting state and while performing specified facial actions, wherein the facial actions comprise at least one of: closing the eyes, frowning, raising the forehead, opening the mouth wide, puckering the lips, puffing the cheeks, and baring the teeth;
extracting the key point grid of the face in the video frame by frame and normalizing the key point grid, wherein the normalization comprises at least: translation, rotation, and scaling;
dividing the facial key point grid into a plurality of sub-grids and calculating quantized motion feature information for each sub-grid;
inputting the quantized motion features into a facial movement dysfunction detection algorithm that outputs at least the location, type, degree, and confidence of the movement dysfunction, wherein the detection algorithm comprises at least one of: a preset-threshold comparison algorithm, a decision tree algorithm, a random forest algorithm, a support vector machine algorithm, a perceptron algorithm, and a deep neural network algorithm.
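Of the detection algorithms listed in claim 1, the preset-threshold comparison is the simplest to illustrate. The sketch below is a hypothetical instance: the region names, threshold values, dysfunction-type label, and confidence formula are illustrative assumptions, not values taken from the patent.

```python
def detect_by_threshold(subgrid_features, thresholds):
    """Flag sub-grids whose motion-feature score exceeds a preset threshold.

    subgrid_features: {region_name: score}, e.g. a left/right asymmetry score
    thresholds:       {region_name: preset threshold}
    Returns findings with location, type, degree, and a crude confidence.
    """
    findings = []
    for region, score in subgrid_features.items():
        t = thresholds.get(region)
        if t is not None and score > t:
            findings.append({
                "location": region,
                "type": "asymmetric movement",  # illustrative label only
                "degree": score / t,            # how far past the threshold
                "confidence": min(1.0, score / (2 * t)),
            })
    return findings
```

In practice the learned classifiers listed in the claim (random forest, SVM, neural network) would replace this rule, but the input/output contract — quantized features in, location/type/degree/confidence out — stays the same.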
2. The method for quantitative evaluation of facial movement dysfunction according to claim 1, wherein in the step of dividing sub-grids on the facial key point grid:
a corresponding sub-grid division scheme is configured according to the actual diagnostic requirement and stored, and at least one group of candidate sub-grid division schemes is pre-stored.
3. The method for quantitative evaluation of facial movement dysfunction according to claim 1, wherein in the step of dividing sub-grids on the facial key point grid:
the quantized motion feature information comprises at least the motion displacement, speed, acceleration, included angle, and angular speed of each sub-grid, together with statistical features of the corresponding time-series information;
the statistical features comprise at least the mean, maximum, minimum, standard deviation, and quantiles;
the calculation method of the quantized motion characteristic information of the sub-grid comprises the following steps:
calculating the motion features of each key point in the sub-grid one by one, and performing a normalized weighted summation of the motion features of the key points in the sub-grid, wherein the normalized weighted summation is defined as assigning to each key point in the sub-grid a weight w_i with 0 ≤ w_i ≤ 1, such that the weights of all key points in each sub-grid sum to 1.
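The normalized weighted summation of claim 3 can be sketched directly. The uniform weighting below is one valid choice of the w_i (each key point contributing equally); both function names are assumptions for illustration.

```python
def subgrid_feature(keypoint_features, weights):
    """Normalized weighted sum of per-key-point motion features:
    each weight w_i satisfies 0 <= w_i <= 1 and the weights sum to 1."""
    assert all(0.0 <= w <= 1.0 for w in weights)
    assert abs(sum(weights) - 1.0) < 1e-9
    return sum(w * f for w, f in zip(weights, keypoint_features))

def uniform_weights(n):
    """Uniform weights are the simplest admissible choice."""
    return [1.0 / n] * n
```

With uniform weights the sub-grid feature reduces to the mean of the per-key-point features; non-uniform weights let clinically important key points dominate.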
4. The method for quantitative evaluation of facial movement dysfunction according to claim 3, wherein the quantized motion feature information of the sub-grid is expressed as:
F_k(C) = Σ_{i=1}^{N} w_i · f_k(p_i), with 0 ≤ w_i ≤ 1 and Σ_{i=1}^{N} w_i = 1;
wherein f_k(·) denotes the calculation of the kth quantized motion feature, C = {p_1, p_2, ..., p_N} denotes the key point sub-grid composed of N facial key points p_i, and w_i denotes a non-negative weighting factor, the weighting factors summing to 1.
5. The method for quantitative evaluation of facial movement dysfunction according to claim 3, wherein the time-series information is obtained from a single coordinate axis direction or by combining multiple coordinate axis directions, the combination being performed as follows:
forming a feature vector from the quantized motion feature information of the multiple coordinate axis directions, and calculating at least one of the following norms of the feature vector:
a) the 1-norm;
b) the 2-norm;
c) the infinity norm;
and calculating, for the same motion feature, the difference between each pair of bilaterally symmetric sub-grids.
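The norms of claim 5 and the bilateral-symmetry difference can be sketched together; the idea is that a healthy face produces a near-zero difference curve, while unilateral dysfunction produces a persistent offset. The function names and the per-frame list-of-vectors representation are assumptions.

```python
import math

def norm(vec, kind):
    """1-, 2-, and infinity norms of a motion-feature vector formed from
    the quantized features along several coordinate axis directions."""
    if kind == 1:
        return sum(abs(v) for v in vec)
    if kind == 2:
        return math.sqrt(sum(v * v for v in vec))
    if kind == "inf":
        return max(abs(v) for v in vec)
    raise ValueError(kind)

def bilateral_difference(left_series, right_series, kind=2):
    """Per-frame difference of the same motion feature between a pair of
    bilaterally symmetric sub-grids: an asymmetry signal over time."""
    return [norm(l, kind) - norm(r, kind)
            for l, r in zip(left_series, right_series)]
```

Statistics of this difference curve (mean, maximum, standard deviation, as listed in claim 3) then feed the detection algorithm of claim 1.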
6. The method for quantitative evaluation of facial movement dysfunction according to claim 1, wherein the reminding and displaying comprise at least:
generating and displaying an enhanced video, a time-series graph, and a data statistics chart;
the enhanced video being defined as video processed by at least one of the following methods: overlaying the key point grid, cropping regions, zooming, highlighting, framing, and adding prompt arrows.
7. A quantitative assessment system for facial movement dysfunction, comprising:
the acquisition input module is used for shooting and recording video of the subject's face in a resting state and while completing the specified facial actions, or for specifying the path of a video file so recorded;
the facial key point grid extraction module is used for extracting the facial key point grids from the video of the acquisition input module and normalizing them;
the motion feature analysis module is used for dividing the facial key point grid into sub-grids, calculating the quantized motion feature information of each sub-grid, and applying a facial movement dysfunction detection algorithm to the quantized motion features to indicate the location and degree of possible movement dysfunction;
and the output display module is used for reminding and displaying the analysis and identification results of the motion feature analysis module.
8. The quantitative assessment system for facial movement dysfunction of claim 7, wherein:
the motion feature analysis module is further used for calculating, for the same motion feature, the difference between each pair of bilaterally symmetric sub-grids;
the motion feature analysis module is further used for configuring a plurality of different sub-grid division schemes as needed and pre-storing at least one group of candidate sub-grid division schemes.
CN202311316114.2A 2023-10-11 2023-10-11 Quantitative evaluation method and system for facial movement dysfunction Pending CN117352161A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311316114.2A CN117352161A (en) 2023-10-11 2023-10-11 Quantitative evaluation method and system for facial movement dysfunction


Publications (1)

Publication Number Publication Date
CN117352161A 2024-01-05

Family

ID=89355346

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311316114.2A Pending CN117352161A (en) 2023-10-11 2023-10-11 Quantitative evaluation method and system for facial movement dysfunction

Country Status (1)

Country Link
CN (1) CN117352161A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112465773A (en) * 2020-11-25 2021-03-09 王坚 Facial nerve paralysis disease detection method based on human face muscle movement characteristics
US20210073521A1 (en) * 2019-09-10 2021-03-11 Amarjot Singh Continuously Evolving and Interactive Disguised Face Identification (DFI) with Facial Key Points using ScatterNet Hybrid Deep Learning (SHDL) Network
CN112686853A (en) * 2020-12-25 2021-04-20 刘铮 Facial paralysis detection system based on artificial intelligence and muscle model
CN113782184A (en) * 2021-08-11 2021-12-10 杭州电子科技大学 Cerebral apoplexy auxiliary evaluation system based on facial key point and feature pre-learning
CN116313058A (en) * 2023-03-24 2023-06-23 浙江省中医院、浙江中医药大学附属第一医院(浙江省东方医院) Facial paralysis intelligent assessment method, system, equipment and storage medium
CN116740618A (en) * 2023-08-07 2023-09-12 数聚工研(北京)科技有限公司 Motion video action evaluation method, system, computer equipment and medium


Similar Documents

Publication Publication Date Title
CN109543526B (en) True and false facial paralysis recognition system based on depth difference characteristics
RU2292839C2 (en) Method and device for analyzing human behavior
CN105559802A (en) Tristimania diagnosis system and method based on attention and emotion information fusion
US11663845B2 (en) Method and apparatus for privacy protected assessment of movement disorder video recordings
CN110464367B (en) Psychological anomaly detection method and system based on multi-channel cooperation
KR20190105180A (en) Apparatus for Lesion Diagnosis Based on Convolutional Neural Network and Method thereof
CN112101424B (en) Method, device and equipment for generating retinopathy identification model
CN111210415B (en) Method for detecting facial expression hypo of Parkinson patient
Loureiro et al. Using a skeleton gait energy image for pathological gait classification
CN112883867A (en) Student online learning evaluation method and system based on image emotion analysis
CN102567734A (en) Specific value based retina thin blood vessel segmentation method
CN117438048B (en) Method and system for assessing psychological disorder of psychiatric patient
CN111403026A (en) Facial paralysis grade assessment method
Martin et al. Automated tackle injury risk assessment in contact-based sports-a rugby union example
CN116128814A (en) Standardized acquisition method and related device for tongue diagnosis image
Feng et al. Using eye aspect ratio to enhance fast and objective assessment of facial paralysis
CN113506274B (en) Detection system for human cognitive condition based on visual saliency difference map
CN113974612A (en) Automatic assessment method and system for upper limb movement function of stroke patient
CN111723869A (en) Special personnel-oriented intelligent behavior risk early warning method and system
CN117352161A (en) Quantitative evaluation method and system for facial movement dysfunction
CA2878374A1 (en) Kinetic-based tool for biometric identification, verification, validation and profiling
CN115713800A (en) Image classification method and device
Zhang et al. Research on Dyslexia Detection based on Eye Tracking
Daza et al. mEBAL2 database and benchmark: Image-based multispectral eyeblink detection
CN115690528A (en) Electroencephalogram signal aesthetic evaluation processing method, device, medium and terminal across main body scene

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination