CN112686853A - Facial paralysis detection system based on artificial intelligence and muscle model - Google Patents
Facial paralysis detection system based on artificial intelligence and muscle model
- Publication number: CN112686853A (application CN202011567769.3A)
- Authority: CN (China)
- Prior art keywords: region; image; facial paralysis; face; detection
- Legal status: Withdrawn (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
- Classifications: Image Analysis; Measurement of the Respiration, Hearing Ability, Form, and Blood Characteristics of Living Organisms
Abstract
The invention relates to the technical field of computer vision, and in particular to a facial paralysis detection system based on artificial intelligence and a muscle model. The system comprises a face region division module, a symmetric region detection module, a unilateral region association module and a facial paralysis detection module connected in sequence. It addresses the problems that existing facial paralysis detection techniques ignore the movement association between adjacent regions on the same side of the face and cannot accurately divide the face regions, which introduces errors into the detection. The system detects facial paralysis using both the movement-correlation features between single-side regions and the features of the symmetric regions, improving the accuracy and reliability of detection; it also assigns a weight to each face region from its offset angle vector, an operation that is simple and of low complexity. In addition, the face regions are divided according to the muscle distribution given by a three-dimensional face muscle model, which reduces data error and improves detection precision.
Description
Technical Field
The invention relates to the technical field of computer vision, in particular to a facial paralysis detection system based on artificial intelligence and a muscle model.
Background
Facial paralysis is a disease whose main characteristic is impaired movement of the facial expression muscles; its typical symptoms are facial distortion and uncoordinated facial expressions. Because they cannot fully control their facial muscles, facial paralysis patients often lack flexibility in their facial movements and may present other symptoms, including drooling, speech problems and nasal congestion.
However, existing facial paralysis detection methods usually judge facial paralysis from asymmetry or differences between the two sides of the face, and divide the face into regions only roughly by key-point position. They therefore fail to make reasonable use of the movement information of the facial muscles and ignore the movement correlation between adjacent regions on the same side, so the detection results are not accurate enough.
Disclosure of Invention
The invention provides a facial paralysis detection system based on artificial intelligence and a muscle model, addressing the technical problems that existing facial paralysis detection techniques ignore the movement correlation between adjacent regions on the same side of the face and cannot accurately divide the face regions, so that facial paralysis detection carries certain errors.
To solve these technical problems, the invention provides a facial paralysis detection system based on artificial intelligence and a muscle model, comprising an image acquisition module together with a face region division module, a symmetric region detection module, a unilateral region association module and a facial paralysis detection module connected in sequence;
the human face region division module is used for carrying out region division on the human face key point image according to the three-dimensional human face muscle model and acquiring an action image sequence of each type of action of a person to be detected;
the symmetric region detection module is used for obtaining a symmetric center vector from the eye regions and the nose-tip center point and selecting a start point for each region; obtaining the offset angle vector of each motion image from the symmetric center vector and the vectors connecting the other key points of each region to that region's start point; selecting the key frame image in each motion image sequence according to the offset angle vectors; and obtaining the region weights from each key frame image and its corresponding offset angle vector;
the unilateral region association module is used for calculating the center point of each region on one side of the face in a key frame image, obtaining the vectors between adjacent one-sided regions and the angles between adjacent vectors from those center points to form a one-sided correlation feature matrix, and obtaining the other side's correlation feature matrix in the same way; it is also used for obtaining the one-sided feature descriptors of each side with a local binary pattern algorithm;
the facial paralysis detection module is used for receiving the region weight, the single-side correlation characteristic matrix and the single-side characteristic descriptor and detecting facial paralysis according to a correlation model and a texture model.
Specifically, the facial paralysis detection module specifically comprises:
inputting the received single-side incidence characteristic matrix into the incidence model to obtain an incidence detection result;
inputting the received region weight and the single-side feature descriptor into the texture model to obtain a texture detection result;
obtaining an action detection result according to the correlation detection result and the texture detection result;
and obtaining a facial paralysis detection result according to the action detection result of each type of action.
Preferably, the symmetric region detection module selects, in each motion image sequence, the motion image whose offset angle vector has the largest modulus as the key frame image for that type of motion.
Preferably, the coordinate of the central point of the region is an average value of the coordinates of all the key points of the corresponding region.
Specifically, the image acquisition module is configured to input the acquired image of the person to be detected into a semantic segmentation neural network to obtain a face image, and input the face image into a key point detection network to perform key point detection, so as to obtain a face key point image.
Specifically, the performing region division on the face key point image according to the three-dimensional face muscle model includes:
acquiring a three-dimensional human face muscle model from a database, and acquiring a two-dimensional human face muscle image according to the three-dimensional human face muscle model;
inputting the two-dimensional face muscle image into the key point detection network to obtain a two-dimensional key point image, aligning the face key point image with the nose tip key point of the two-dimensional key point image, and reversely propagating the error of the two images to obtain a new three-dimensional face muscle model;
and obtaining a new two-dimensional face muscle image according to the new three-dimensional face muscle model, and dividing the face region according to the new two-dimensional face muscle image and the face key point image.
Preferably, the face region includes an eyebrow region, the eye region, a cheek region, and a mouth region.
Preferably, the actions include the frowning, eye-closing, gill-bulging (cheek-puffing) and whistling actions.
Specifically, the motion image sequence comprises a frown motion image sequence, an eye closing motion image sequence, a drum cheek motion image sequence and a whistle blowing motion image sequence;
the key frame images include a frown key frame image, a closed-eye key frame image, a drum cheek key frame image, and a whistle key frame image.
The facial paralysis detection system based on artificial intelligence and a muscle model provided by the invention uses the movement-correlation features between single-side regions together with the symmetric-region features, solving the problems that existing facial paralysis detection techniques ignore the movement correlation between adjacent regions on one side and cannot accurately divide the face region, which introduces errors into the detection result. The invention uses a three-dimensional face muscle model to obtain an accurate facial muscle distribution, so that the face regions are divided reasonably along the muscles, reducing data error and improving detection precision. In addition, the offset angle vector of each face region is obtained from the face key points, which yields the weight assigned to each region; facial paralysis detection can therefore attend to the more important regional features, improving the accuracy of the result, while the calculation is simple to implement, of low complexity and highly practical. Finally, the method does not only detect facial paralysis from the symmetry of the two sides: the one-sided correlation matrices also reflect the flexibility of the muscles in adjacent regions, greatly improving the accuracy and reliability of detection.
Drawings
Fig. 1 is a schematic flow chart of a facial paralysis detection system based on artificial intelligence and a muscle model according to an embodiment of the present invention;
fig. 2 is a simple schematic diagram of the distribution of key points in the face area according to the embodiment of the present invention.
Reference numerals:
an image acquisition module 1; a face region dividing module 2; a symmetric region detection module 3; a single-sided zone association module 4; facial paralysis detection module 5.
Detailed Description
The embodiments of the present invention are described in detail below with reference to the accompanying drawings. The drawings are given solely for the purpose of illustration and are not to be construed as limitations of the invention, since many variations are possible without departing from its spirit and scope.
To address the problems that existing facial paralysis detection techniques ignore the movement correlation between adjacent regions on one side of the face and cannot accurately divide the face regions, the embodiment of the invention provides a facial paralysis detection system based on artificial intelligence and a muscle model. As shown in Figure 1, the system comprises an image acquisition module 1 and, connected in sequence, a face region division module 2, a symmetric region detection module 3, a unilateral region association module 4 and a facial paralysis detection module 5; the image acquisition module 1 is connected with the face region division module 2, and the symmetric region detection module 3 is further connected with the facial paralysis detection module 5;
in the embodiment of the invention, the image acquisition module 1 uses a camera to capture an image of the person to be detected in a natural state, so that accurate model parameters can be obtained later when adjusting the shape parameters of the three-dimensional face muscle model. It then inputs the image into a semantic segmentation neural network, which outputs a face mask; multiplying the face mask by the image yields the face image, eliminating the influence of the background on face-region detection and isolating irrelevant conditions;
the semantic segmentation neural network takes the images of the person to be tested acquired by the camera as a training set and labels them: pixels of the face region are marked 1 and pixels of all other regions are marked 0, giving the label images. The training set and the label images are fed to the network for training; the loss function is the cross-entropy function, and the network can be a U-Net, DeepLabv3+ or similar;
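As a minimal sketch of this labelling-and-masking step (the array shapes, function names and toy values below are illustrative assumptions, not part of the claimed embodiment):

```python
import numpy as np

def make_label_image(face_region_mask: np.ndarray) -> np.ndarray:
    """Training labels: face pixels marked 1, all other pixels marked 0."""
    return (face_region_mask > 0).astype(np.uint8)

def apply_face_mask(image: np.ndarray, face_mask: np.ndarray) -> np.ndarray:
    """Multiply the predicted mask with the input image to suppress the background."""
    return image * face_mask[..., None]  # broadcast the mask over the RGB channels

# toy 4x4 "image" and mask
img = np.ones((4, 4, 3), dtype=np.uint8) * 200
mask = np.zeros((4, 4), dtype=np.uint8)
mask[1:3, 1:3] = 1                     # pretend the centre 2x2 block is the face
face_only = apply_face_mask(img, mask) # background pixels become 0
```

The mask multiplication is exactly the background-removal step described above; the segmentation network itself (U-Net, DeepLabv3+) would supply `mask` in practice.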
in this embodiment, the face image is input into a key point detection network to perform key point detection, so as to obtain a face key point image.
Facial paralysis impedes the motor function of the facial muscles, and the motion characteristics of different face regions reflect how well the person to be tested controls different muscles. However, existing key-point-based face-region division can assign key points that control different muscles to the same region: when the eyebrow region is divided, for example, some key points that control eye movement end up in the eyebrow region, which biases the facial paralysis detection result. Likewise, when muscle regions are divided with semantic segmentation from local image features and context information, the network has difficulty achieving a good result and the division is not accurate enough. The face regions are therefore divided with the three-dimensional face muscle model;
the human face region division module 2 firstly obtains the three-dimensional human face muscle model from a database, simultaneously projects the three-dimensional human face muscle model to a two-dimensional plane to obtain a two-dimensional human face muscle image, and inputs the two-dimensional human face muscle image into the key point detection network to obtain a two-dimensional key point image;
the three-dimensional human face muscle model comprises a plurality of grid points, each grid point has a respective number, the topological structures of the grid points are unchanged, the distribution of the grid points corresponds to the distribution of the muscles, and the distribution of the muscles can be determined according to the distribution of the grid points;
then, in this embodiment, the face key point image and the two-dimensional key point image are aligned at their nose-tip key points, the error between the two key point images is calculated, and the shape parameters are adjusted by back-propagating the error. Iterating this adjustment yields a new three-dimensional face muscle model, which is projected onto a two-dimensional plane to give a new two-dimensional face muscle image. It should be noted that this new two-dimensional face muscle image is pixel-aligned with the face image;
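The nose-tip alignment and the keypoint error that drives the shape-parameter adjustment can be sketched as follows; the function names and toy coordinates are assumptions for illustration (the actual shape-parameter back-propagation is not specified in the text):

```python
import numpy as np

def align_by_nose_tip(model_pts: np.ndarray, face_pts: np.ndarray, nose_idx: int) -> np.ndarray:
    """Translate the model's 2-D keypoints so its nose tip coincides with the face's."""
    offset = face_pts[nose_idx] - model_pts[nose_idx]
    return model_pts + offset

def keypoint_error(model_pts: np.ndarray, face_pts: np.ndarray) -> float:
    """Mean Euclidean distance between corresponding keypoints -- the error that
    would be back-propagated to adjust the 3-D muscle model's shape parameters."""
    return float(np.linalg.norm(model_pts - face_pts, axis=1).mean())

face = np.array([[10., 10.], [20., 10.], [15., 18.]])   # hypothetical 2-D face keypoints
model = np.array([[12., 11.], [22., 11.], [17., 19.]])  # projected model keypoints; nose tip = index 2
aligned = align_by_nose_tip(model, face, nose_idx=2)
err = keypoint_error(aligned, face)
```

In this toy case the two point sets differ only by a translation, so after alignment the error is zero; in general the residual error is what drives the iterative model adjustment.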
finally, the new two-dimensional face muscle image and the face key point image are corresponded and face areas are divided, the muscle distribution condition of the person to be detected can be reflected by the corresponding relation, and key point information in each face area can be obtained at the same time;
there are 42 muscles in the face, but this embodiment only extracts the muscles with obvious motion features as attention areas for face-region division. The specific division rules are as follows:
dividing an eyebrow area comprising a left eyebrow area and a right eyebrow area by using frown muscles;
dividing an eye region by using orbicularis oculi muscles, wherein the eye region comprises a left eye region and a right eye region;
dividing cheek areas including a left cheek area and a right cheek area by using zygomatic superior muscles, zygomatic inferior muscles and cheek muscles;
dividing a mouth region, comprising a left mouth region and a right mouth region, by using the muscles around the mouth and the mandible muscles;
in the embodiment, four face regions are divided, and each face region comprises a left sub-region and a right sub-region.
The face region division module 2 also collects images of the person to be tested performing the frowning, eye-closing, gill-bulging and whistle-blowing actions, and inputs the resulting frown, eye-closing, drum-cheek and whistle-blowing motion image sequences into the symmetric region detection module 3;
the symmetric region detection module 3 analyzes each frame of motion image in the four motion image sequences to obtain the offset angles of the four face regions in each frame of motion image, and specifically includes:
Select one frame of motion image from one of the motion image sequences. As shown in Fig. 2, compute the midpoint of the two inner eye-corner key points P55 and P58 of the left and right eye regions, denoted the eye-corner center point P55-58, and take the vector between the eye-corner center point P55-58 and the nose-tip center point P49 as the symmetric center vector.
For the left and right eyebrow regions, this embodiment takes the midpoint P37-38 of the line connecting the two eyebrow key points P37 and P38 as the eyebrow start point. Connecting every key point in the left and right eyebrow regions to the eyebrow start point gives a set of eyebrow connecting vectors; adding these vectors in turn gives the offset vector of the eyebrow region, and the angle between the eyebrow offset vector and the symmetric center vector is the offset angle θ1 of the eyebrow region;
For the left and right eye regions, this embodiment uses the eye-corner center point P55-58 as the eye start point. Connecting every key point in the left and right eye regions to the eye start point gives a set of eye connecting vectors; adding these vectors in turn gives the offset vector of the eye region, and the angle between the eye offset vector and the symmetric center vector is the offset angle θ2 of the eye region;
For the left and right cheek regions and the left and right mouth regions, this embodiment uses the nose-tip key point P46 as both the cheek start point and the mouth start point; the same steps give the offset angle θ3 of the cheek region and the offset angle θ4 of the mouth region;
The offset angles of the four face regions then give the offset angle vector of that frame of motion image: θ = {θ1, θ2, θ3, θ4};
Repeat the above steps to obtain the offset angle vector of every frame in the four motion image sequences, and calculate the modulus |θ| of each offset angle vector. The larger the modulus, the larger the degree of offset and the better the frame reflects the facial muscle movement of the person to be detected; therefore each motion image sequence selects the frame with the largest modulus in the sequence as its key frame image. This yields four key frame images, one per sequence: the frown key frame image, the closed-eye key frame image, the drum-cheek key frame image and the whistle key frame image;
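The offset-angle computation and the key-frame selection above can be sketched as follows; the keypoint coordinates and helper names are illustrative assumptions:

```python
import numpy as np

def angle_between(u: np.ndarray, v: np.ndarray) -> float:
    """Unsigned angle (radians) between two 2-D vectors."""
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

def region_offset_angle(keypoints: np.ndarray, start: np.ndarray, sym_center_vec: np.ndarray) -> float:
    """Sum the vectors from the region start point to every other keypoint,
    then measure the angle between that offset vector and the symmetry axis."""
    offset = np.sum(keypoints - start, axis=0)
    return angle_between(offset, sym_center_vec)

def pick_key_frame(theta_vectors) -> int:
    """Index of the frame whose offset-angle vector has the largest modulus."""
    return int(np.argmax([np.linalg.norm(t) for t in theta_vectors]))

# symmetry axis: eye-corner midpoint -> nose tip (toy coordinates)
p55, p58, p49 = np.array([8., 10.]), np.array([12., 10.]), np.array([10., 14.])
sym_vec = p49 - (p55 + p58) / 2.0

brow_pts = np.array([[9., 6.], [11., 6.], [10., 5.]])
theta1 = region_offset_angle(brow_pts, start=np.array([10., 6.]), sym_center_vec=sym_vec)

# choose the key frame among per-frame offset-angle vectors theta = {θ1..θ4}
frames = [np.array([.1, .2, .1, .0]), np.array([.4, .3, .2, .1])]
best = pick_key_frame(frames)
```

The second frame has the larger modulus |θ|, so it would be kept as the key frame for its action sequence.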
Each key frame image corresponds to one offset angle vector, and the region weights are assigned to the face regions according to their offset angles, as follows. First, an offset amplification angle is computed for each face region from its offset angle θi by the amplification formula given in the specification, where i takes values in {1, 2, 3, 4}, corresponding to the eyebrow, eye, cheek and mouth regions respectively. The purpose of the amplification is to enlarge the differences between the offsets of different face regions, making the detection result more accurate. The region weight Ai of the i-th face region is then obtained from its offset amplification angle by the weighting formula given in the specification. The larger a region's offset angle, the larger the asymmetry of its movement and the more facial-muscle movement information its features contain, so the larger the region weight assigned to it; subsequent facial paralysis detection thereby pays more attention to that movement information, greatly improving accuracy. The region weight vector A = {A1, A2, A3, A4} holds the weights of the eyebrow, eye, cheek and mouth regions respectively, and the four region weights sum to 1.
The single-side region correlation module 4 analyzes each of the four key frame images to obtain a single-side correlation characteristic matrix and a single-side characteristic descriptor;
in the embodiment of the invention, the single-side correlation characteristic matrix can reflect the motion correlation between single-side face areas; for convenience of explaining the calculation process of the single-side correlation feature matrix, the present embodiment is explained by using the right side of the face in a certain frame of key frame image, specifically:
first, calculate the coordinates of the center point of each face region on the right side of the face. Taking the right eyebrow region as an example, the mean of the coordinates of all key points in the right eyebrow region is its center point C1; similarly obtain the center point C2 of the right eye region, the center point C3 of the right cheek region and the center point C4 of the right mouth region;
Then connect the center points of adjacent face regions on the right side of the face to obtain three right-side vectors (C1 to C2, C2 to C3 and C3 to C4). The moduli of the three right-side vectors form the three elements of the first row of the right one-sided correlation feature matrix, and the angles α1, α2 and α3 between the right-side vectors form the three elements of the second row, giving the 2-row, 3-column right one-sided correlation feature matrix YR, where α1 is the angle between the first and second vectors, α2 the angle between the second and third vectors, and α3 the angle between the first and third vectors.
Similarly, obtain the left one-sided correlation feature matrix YL of the same key frame image; repeating these steps gives the left and right one-sided correlation feature matrices of every key frame image.
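The construction of one side's 2×3 correlation matrix can be sketched as follows; the angle pairing is partly an assumption, since the patent's vector symbols were lost in extraction:

```python
import numpy as np

def one_side_matrix(centers) -> np.ndarray:
    """Build the 2x3 one-sided correlation matrix from the four region centre
    points of one half of the face (eyebrow, eye, cheek, mouth):
    row 1 = moduli of the three vectors joining adjacent centres,
    row 2 = angles between those vectors (pairing assumed: 1-2, 2-3, 1-3)."""
    c = np.asarray(centers, dtype=float)
    v = [c[i + 1] - c[i] for i in range(3)]   # eyebrow->eye, eye->cheek, cheek->mouth

    def ang(u, w):
        cos = np.dot(u, w) / (np.linalg.norm(u) * np.linalg.norm(w))
        return float(np.arccos(np.clip(cos, -1.0, 1.0)))

    mods = [float(np.linalg.norm(x)) for x in v]
    angles = [ang(v[0], v[1]), ang(v[1], v[2]), ang(v[0], v[2])]
    return np.array([mods, angles])

# toy centre points C1..C4 for the right side of a face
centers_r = [[10., 5.], [10., 8.], [11., 12.], [10., 16.]]
Y_r = one_side_matrix(centers_r)   # shape (2, 3)
```

The moduli capture the spacing of adjacent regions and the angles capture how the chain of regions bends, which is the single-side motion-correlation information the module feeds to the correlation model.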
The unilateral region association module 4 also obtains feature descriptors for the left and right one-sided face regions with the local binary pattern (LBP) algorithm. For example, the right one-sided feature descriptors of the right eyebrow, right eye, right cheek and right mouth regions, denoted WR1, WR2, WR3 and WR4 in turn, are each texture feature matrices; similarly, the left one-sided feature descriptors of the left eyebrow, left eye, left cheek and left mouth regions are denoted WL1, WL2, WL3 and WL4;
It should be noted that feature extraction by using the LBP algorithm is a known technique, and the embodiment of the present invention is not described in detail.
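Since the text does not detail the LBP variant used, here is a self-contained sketch of the basic 8-neighbour LBP and a normalised-histogram descriptor (the descriptor form is an assumption; production code would typically use a library implementation):

```python
import numpy as np

def lbp_image(gray) -> np.ndarray:
    """Basic 8-neighbour local binary pattern: each interior pixel gets a byte
    whose bits record whether each neighbour is >= the centre pixel."""
    g = np.asarray(gray, dtype=float)
    h, w = g.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    # neighbour offsets, clockwise from the top-left corner
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    centre = g[1:h - 1, 1:w - 1]
    for bit, (dy, dx) in enumerate(offs):
        nb = g[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        out |= (nb >= centre).astype(np.uint8) << bit
    return out

def lbp_histogram(gray, bins: int = 256) -> np.ndarray:
    """Region texture descriptor: normalised histogram of the LBP codes."""
    hist, _ = np.histogram(lbp_image(gray), bins=bins, range=(0, bins))
    return hist / max(int(hist.sum()), 1)

patch = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])  # toy grayscale region
desc = lbp_histogram(patch)
```

Each face region's patch would be passed through `lbp_histogram` to produce the WRi / WLi descriptors compared by the texture model.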
The facial paralysis detection module 5 analyzes each frame of key frame image in sequence, and specifically comprises:
inputting the received left and right single-side correlation characteristic matrixes into the correlation model to obtain a correlation detection result, wherein the correlation model is as follows:
where Signal1 denotes the correlation detection result, YL the left one-sided correlation feature matrix, YR the right one-sided correlation feature matrix, and δ a correlation threshold; δ is preferably 1.5 in this embodiment.
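The correlation model's formula appears only as a lost figure; a natural reconstruction, assumed here, compares the magnitude of the left/right matrix difference against the threshold δ:

```python
import numpy as np

def correlation_signal(Y_left, Y_right, delta: float = 1.5) -> int:
    """Signal1: flag asymmetric one-sided motion when the left and right
    correlation matrices differ by more than delta (Frobenius norm is an
    assumed choice of distance; the patent's exact formula was lost)."""
    diff = np.asarray(Y_left, dtype=float) - np.asarray(Y_right, dtype=float)
    return 1 if float(np.linalg.norm(diff)) > delta else 0

Y_L = np.array([[3.0, 4.0, 4.0], [0.2, 0.3, 0.5]])
Y_R = np.array([[3.1, 4.1, 4.0], [0.2, 0.3, 0.5]])
s1 = correlation_signal(Y_L, Y_R)   # nearly symmetric sides -> 0
```

A small left/right discrepancy stays below δ = 1.5 and yields Signal1 = 0; a grossly asymmetric pair of matrices would yield 1.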
Inputting the received region weight and the left and right unilateral feature descriptors into the texture model to obtain a texture detection result, wherein the texture model is as follows:
where Signal2 denotes the texture detection result, Ai the region weight of the i-th face region, WLi the left one-sided feature descriptor of the i-th face region, WRi its right one-sided feature descriptor, and the remaining symbol the texture threshold, whose preferred value is given in the specification.
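The texture model's formula and preferred threshold were likewise lost; an assumed reconstruction takes a region-weighted sum of left/right descriptor distances and compares it against a texture threshold (named `tau` here):

```python
import numpy as np

def texture_signal(weights, left_descs, right_descs, tau: float = 0.5) -> int:
    """Signal2: weighted left/right texture discrepancy. The weighted-sum form
    and the threshold value tau are assumptions standing in for the patent's
    lost formula."""
    score = sum(a * float(np.linalg.norm(np.asarray(l) - np.asarray(r)))
                for a, l, r in zip(weights, left_descs, right_descs))
    return 1 if score > tau else 0

A = [0.4, 0.3, 0.2, 0.1]                 # region weights (sum to 1)
W_L = [np.array([0.5, 0.5])] * 4         # toy left-side descriptors
W_R = [np.array([0.5, 0.5])] * 4         # identical textures on both sides
s2 = texture_signal(A, W_L, W_R)          # no discrepancy -> 0
```

Because the weights sum to 1, a region with a large offset angle dominates the score, which matches the stated intent that detection "pays more attention" to strongly offset regions.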
Obtaining an action detection result according to the correlation detection result and the texture detection result, and specifically:
if the intersection of the correlation detection result and the texture detection result is 1, i.e. Signal1 ∩ Signal2 = 1, the person to be tested is judged a facial paralysis patient under the corresponding action; otherwise the person is judged normal under that action. For example: performing the above analysis on the frown key frame image and obtaining an intersection of 1 between the frown correlation detection result and the frown texture detection result, the person to be tested is judged a facial paralysis patient for the frown action;
the facial paralysis detection result is then obtained from the action detection results of each type of action: using the above steps, the action detection results of the four key frame images are obtained in turn, one per action type. If at least three of the four action detection results indicate facial paralysis, the person to be tested is judged a facial paralysis patient; otherwise, the person to be tested is judged normal.
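The per-action intersection rule and the final majority vote can be sketched directly (function names are illustrative):

```python
def action_result(signal1: int, signal2: int) -> int:
    """An action flags facial paralysis only when both the correlation and the
    texture detections flag it (the 'intersection equals 1' rule)."""
    return signal1 & signal2

def final_diagnosis(action_results) -> bool:
    """Facial paralysis is diagnosed when at least three of the four per-action
    results (frown, eye-close, drum-cheek, whistle) are positive."""
    return sum(action_results) >= 3

results = [action_result(1, 1), action_result(1, 1),
           action_result(1, 0), action_result(1, 1)]
diagnosed = final_diagnosis(results)   # 3 of 4 actions positive
```

Requiring both signals per action and three of four actions overall makes the system conservative against a single noisy detection.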
The facial paralysis detection system based on artificial intelligence and a muscle model provided by the embodiment of the invention comprises a face region division module 2, a symmetric region detection module 3, a unilateral region association module 4 and a facial paralysis detection module 5 connected in sequence. It solves the problems that existing facial paralysis detection techniques ignore the movement correlation between adjacent regions on one side and cannot accurately divide the face region, which introduces errors into the detection. By using the movement-correlation features between one-sided regions together with the symmetric-region features, the embodiment greatly improves the accuracy and reliability of facial paralysis detection. In addition, the embodiment obtains an accurate facial muscle distribution from the three-dimensional face muscle model, so the face regions can be divided reasonably, reducing data error and improving detection precision; and it obtains the region weights from the offset angle vectors, an operation that is simple and of low complexity.
The above embodiments are preferred embodiments of the present invention, but the present invention is not limited to the above embodiments, and any other changes, modifications, substitutions, combinations, and simplifications which do not depart from the spirit and principle of the present invention should be construed as equivalents thereof, and all such changes, modifications, substitutions, combinations, and simplifications are intended to be included in the scope of the present invention.
Claims (9)
1. A facial paralysis detection system based on artificial intelligence and a muscle model, comprising an image acquisition module, characterized in that it further comprises a face region division module, a symmetric region detection module, a unilateral region association module and a facial paralysis detection module connected in sequence, wherein the symmetric region detection module is further connected with the facial paralysis detection module;
the human face region division module is used for carrying out region division on the human face key point image according to the three-dimensional human face muscle model and acquiring an action image sequence of each type of action of a person to be detected;
the symmetric region detection module is used for obtaining a symmetric center vector from the eye regions and the nose-tip center point and selecting a start point for each region; obtaining the offset angle vector of each motion image from the symmetric center vector and the vectors connecting the other key points of each region to that region's start point; selecting the key frame image in each motion image sequence according to the offset angle vectors; and obtaining the region weights from each key frame image and its corresponding offset angle vector;
the unilateral region association module is used for calculating the center point of each region on one side of the face in a key frame image, obtaining the vectors between adjacent one-sided regions and the angles between adjacent vectors from those center points to form a one-sided correlation feature matrix, and obtaining the other side's correlation feature matrix in the same way; it is also used for obtaining a one-sided feature descriptor according to a local binary pattern algorithm;
the facial paralysis detection module is used for receiving the region weight, the single-side correlation characteristic matrix and the single-side characteristic descriptor and detecting facial paralysis according to a correlation model and a texture model.
2. The facial paralysis detection system based on artificial intelligence and muscle model as claimed in claim 1, wherein the facial paralysis detection module is specifically configured to:
input the received unilateral association feature matrices into the association model to obtain an association detection result;
input the received region weights and the unilateral feature descriptors into the texture model to obtain a texture detection result;
obtain an action detection result from the association detection result and the texture detection result;
and obtain a facial paralysis detection result from the action detection results of all types of actions.
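Claim 2 states that the two per-action results are fused, and that the per-action results are aggregated, but not how. A minimal sketch under the assumption of a convex combination for the first fusion and a mean with a decision threshold for the second (both fusion rules are assumptions, not taken from the patent):

```python
def action_result(assoc_score, texture_score, w_assoc=0.5):
    # Fuse the association and texture detection results for one action.
    # The convex combination and the weight w_assoc are assumptions; the
    # claim only states that both results feed the action result.
    return w_assoc * assoc_score + (1.0 - w_assoc) * texture_score

def facial_paralysis_result(action_scores, threshold=0.5):
    # Aggregate the per-action results (frowning, eye closing, cheek
    # puffing, whistling) into a single score and a detection decision.
    mean = sum(action_scores) / len(action_scores)
    return mean, mean >= threshold
```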
3. The facial paralysis detection system based on artificial intelligence and muscle model of claim 1, wherein: the symmetric region detection module selects, from each action image sequence, the action image whose deviation angle vector has the largest modulus as the key frame image of that action.
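Claim 3's key frame selection reduces to an argmax over vector moduli. A sketch assuming the deviation angle vectors are given as coordinate tuples, one per frame of the action sequence:

```python
import math

def select_key_frame(deviation_vectors):
    """Return the index of the key frame (claim 3): the frame whose
    deviation angle vector has the largest modulus (Euclidean norm)."""
    norms = [math.sqrt(sum(c * c for c in v)) for v in deviation_vectors]
    return max(range(len(norms)), key=norms.__getitem__)
```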
4. The facial paralysis detection system based on artificial intelligence and muscle model of claim 3, wherein: the coordinates of a region's center point are the mean of the coordinates of all key points in that region.
5. The facial paralysis detection system based on artificial intelligence and muscle model of claim 1, wherein: the system further comprises an image acquisition module configured to input the acquired image of the person to be detected into a semantic segmentation neural network to obtain a face image, and to input the face image into a key point detection network for key point detection, obtaining the face key point image.
6. The facial paralysis detection system based on artificial intelligence and muscle model as claimed in claim 5, wherein dividing the face key point image into regions according to the three-dimensional face muscle model comprises:
acquiring a three-dimensional face muscle model from a database, and obtaining a two-dimensional face muscle image from the three-dimensional face muscle model;
inputting the two-dimensional face muscle image into the key point detection network to obtain a two-dimensional key point image, aligning the face key point image with the two-dimensional key point image at the nose tip key point, and back-propagating the error between the two images to obtain an updated three-dimensional face muscle model;
and obtaining a new two-dimensional face muscle image from the updated three-dimensional face muscle model, and dividing the face into regions according to the new two-dimensional face muscle image and the face key point image.
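The nose tip alignment step of claim 6 can be sketched in two dimensions. Assumptions made here: a pure translation aligns the nose tips, and the "error" is the per-point residual after alignment; the back-propagation of that error into the 3D muscle model is not shown.

```python
def align_by_nose_tip(face_pts, model_pts, nose_idx):
    """Translate `model_pts` so its nose tip coincides with the face
    image's nose tip, then return the shifted points and the per-point
    residual that would serve as the error signal."""
    fx, fy = face_pts[nose_idx]
    mx, my = model_pts[nose_idx]
    dx, dy = fx - mx, fy - my
    shifted = [(x + dx, y + dy) for (x, y) in model_pts]
    residual = [(a[0] - b[0], a[1] - b[1]) for a, b in zip(face_pts, shifted)]
    return shifted, residual
```

By construction the residual at the nose tip itself is zero; the remaining residuals measure how far the projected muscle model deviates from the detected key points.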
7. The facial paralysis detection system based on artificial intelligence and muscle model of claim 6, wherein: the face regions include an eyebrow region, the eye region, a cheek region and a mouth region.
8. The facial paralysis detection system based on artificial intelligence and muscle model of claim 2, wherein: the actions include frowning, eye closing, cheek puffing and whistling.
9. The facial paralysis detection system based on artificial intelligence and muscle model of claim 8, wherein: the action image sequences comprise a frowning action image sequence, an eye-closing action image sequence, a cheek-puffing action image sequence and a whistling action image sequence;
the key frame images comprise a frowning key frame image, an eye-closing key frame image, a cheek-puffing key frame image and a whistling key frame image.
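Claim 1's unilateral feature descriptor is built with a local binary pattern algorithm. The following is a sketch of the basic 3x3 LBP code that such a descriptor is typically built from (assumptions: clockwise neighbour order starting at the top-left pixel, and neighbours equal to the centre counted as 1; the patent excerpt does not fix these conventions):

```python
def lbp_code(patch):
    """Basic 3x3 local binary pattern code for the centre pixel of
    `patch` (a 3x3 list of grayscale values)."""
    c = patch[1][1]
    # Clockwise neighbour order starting at the top-left pixel.
    nbrs = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
            patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    code = 0
    for bit, n in enumerate(nbrs):
        if n >= c:
            code |= 1 << bit
    return code
```

A histogram of these codes over one side of the face would give one plausible form of the unilateral feature descriptor fed to the texture model.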
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011567769.3A CN112686853A (en) | 2020-12-25 | 2020-12-25 | Facial paralysis detection system based on artificial intelligence and muscle model |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112686853A true CN112686853A (en) | 2021-04-20 |
Family
ID=75453375
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011567769.3A Withdrawn CN112686853A (en) | 2020-12-25 | 2020-12-25 | Facial paralysis detection system based on artificial intelligence and muscle model |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112686853A (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113688701A (en) * | 2021-08-10 | 2021-11-23 | 江苏仁和医疗器械有限公司 | Facial paralysis detection method and system based on computer vision |
CN113688701B (en) * | 2021-08-10 | 2022-04-22 | 江苏仁和医疗器械有限公司 | Facial paralysis detection method and system based on computer vision |
CN117352161A (en) * | 2023-10-11 | 2024-01-05 | 凝动万生医疗科技(武汉)有限公司 | Quantitative evaluation method and system for facial movement dysfunction |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11967175B2 (en) | Facial expression recognition method and system combined with attention mechanism | |
CN109815826B (en) | Method and device for generating face attribute model | |
CN105005769B (en) | A kind of sign Language Recognition Method based on depth information | |
CN108876879A (en) | Method, apparatus, computer equipment and the storage medium that human face animation is realized | |
CN115661943B (en) | Fall detection method based on lightweight attitude assessment network | |
CN110807364A (en) | Modeling and capturing method and system for three-dimensional face and eyeball motion | |
CN111046734B (en) | Multi-modal fusion sight line estimation method based on expansion convolution | |
CN101833654B (en) | Sparse representation face identification method based on constrained sampling | |
CN104615996B (en) | A kind of various visual angles two-dimension human face automatic positioning method for characteristic point | |
CN106203284B (en) | Method for detecting human face based on convolutional neural networks and condition random field | |
CN110826462A (en) | Human body behavior identification method of non-local double-current convolutional neural network model | |
CN112232128B (en) | Eye tracking based method for identifying care needs of old disabled people | |
CN104268932A (en) | 3D facial form automatic changing method and system | |
CN112686853A (en) | Facial paralysis detection system based on artificial intelligence and muscle model | |
CN111259739A (en) | Human face pose estimation method based on 3D human face key points and geometric projection | |
Zahedi et al. | Appearance-based recognition of words in american sign language | |
JP2016045884A (en) | Pattern recognition device and pattern recognition method | |
CN111881732A (en) | SVM (support vector machine) -based face quality evaluation method | |
Yu et al. | 3D facial motion tracking by combining online appearance model and cylinder head model in particle filtering | |
CN112541897A (en) | Facial paralysis degree evaluation system based on artificial intelligence | |
CN118506330A (en) | Fatigue driving detection method based on face recognition | |
CN117351957A (en) | Lip language image recognition method and device based on visual tracking | |
CN112597842B (en) | Motion detection facial paralysis degree evaluation system based on artificial intelligence | |
CN113688701B (en) | Facial paralysis detection method and system based on computer vision | |
CN115205956A (en) | Left and right eye detection model training method, method and device for identifying left and right eyes |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
WW01 | Invention patent application withdrawn after publication | | Application publication date: 20210420 |