CN112560786A - Facial muscle feature-based expression database using method and computing processing equipment - Google Patents

Facial muscle feature-based expression database using method and computing processing equipment

Info

Publication number
CN112560786A
CN112560786A
Authority
CN
China
Prior art keywords
expression
face
facial
facial muscle
database
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202011575760.7A
Other languages
Chinese (zh)
Inventor
马丙全
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Yuanruini Technology Co ltd
Original Assignee
Suzhou Yuanruini Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Yuanruini Technology Co ltd filed Critical Suzhou Yuanruini Technology Co ltd
Priority to CN202011575760.7A priority Critical patent/CN112560786A/en
Publication of CN112560786A publication Critical patent/CN112560786A/en
Withdrawn legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174: Facial expression recognition
    • G06V40/176: Dynamic expression
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168: Feature extraction; Face representation
    • G06V40/171: Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a method for using an expression database based on facial muscle features, which comprises the following steps. S1, establishing expression modules: the expression database is divided into a plurality of expression modules, each expression module corresponds to one expression, and each expression module contains N expression coordinate sequences of the same expression at different degrees. S2, muscle movement data: within each expression module, the same expression is divided into five grades (weak, small, medium, large and strong) according to its degree, and a range is set for the motion data of each muscle position of each expression coordinate sequence. S3, accurate positioning and screening: at least 5 expression coordinate sequences are input into the expression database, and the expression database searches for and outputs at least one expression coordinate sequence and the corresponding facial static image. S4, a suitable facial static image is screened out manually. The traditional reliance on manual identification alone is abandoned, greatly improving the efficiency of using the expression database.

Description

Facial muscle feature-based expression database using method and computing processing equipment
Technical Field
The invention relates to the field of computer technology, and in particular to a method for using an expression database based on facial muscle features, and to a computing processing device.
Background
With the development of human-computer interaction, facial expression recognition has become a popular research topic in recent decades. Expressions are the most effective medium of human emotional communication and carry a great deal of personal behavior information. Facial expressions typically include happiness, sadness, anger, fear, surprise, and disgust, among others. Establishing a facial expression database can provide convenience for later-stage film and television special effects and character facial animation, and reduce the cost of post-production image processing.
Traditional facial expression databases are built around neural-network-based facial expression recognition algorithms, which require a large number of training pictures. Because the training pictures are usually collected manually, their quantity is greatly limited, resulting in an insufficient number of training samples. Such databases concentrate on studying facial expression features in static images, and global features cannot describe facial detail information, so the detail features of facial expressions are easily lost; the trained features then lose the detail information of the various expressions, leading to poor expression recognition.
In addition, image recognition and search in traditional facial expression databases rely mainly on manual identification, which makes them very inconvenient to use.
Disclosure of Invention
The invention overcomes the defects of the prior art and provides a method for using an expression database based on facial muscle features.
To achieve this purpose, the invention adopts the following technical scheme: a method for using an expression database based on facial muscle features, wherein the expression database is a database of expression coordinate sequences of specific facial muscle positions and motion data, the method comprising the following steps:
S1, establishing expression modules: the expression database is divided into a plurality of expression modules, each expression module corresponds to one expression, and each expression module contains N expression coordinate sequences of the same expression at different degrees;
S2, muscle movement data: within each expression module, the same expression is divided into five grades (weak, small, medium, large and strong) according to its degree; a range is set for the motion data of each muscle position of each expression coordinate sequence;
S3, accurate positioning and screening: at least 5 expression coordinate sequences are input into the expression database, and the expression database searches for and outputs at least one expression coordinate sequence and the corresponding facial static image;
S4, manual screening: a number of suitable facial static images are selected manually according to subjective judgment.
In a preferred embodiment of the invention, the expression modules at least comprise expression coordinate sequences for facial happiness, sadness, anger, disgust, urgency, surprise, fear and blankness, together with the corresponding facial static images.
In a preferred embodiment of the present invention, in S1, the method further includes the following steps:
S11, establishing plane coordinates by taking the intersection point of the central axis of the human face and the horizontal line as the origin O;
S12, the position coordinates of the facial muscle feature points on the human face are (X1, Y1), (X2, Y2), (X3, Y3), …, (Xn, Yn), where X1, X2, …, Xn are the lateral coordinate values of the facial feature points and Y1, Y2, …, Yn are the longitudinal coordinate values; combining these with the muscle movement data Z1, Z2, …, Zn of each facial muscle feature point when an expression is made yields the expression coordinate sequence (X1, Y1, Z1), (X2, Y2, Z2), (X3, Y3, Z3), …, (Xn, Yn, Zn);
S13, repeatedly making different expressions, and classifying them into a plurality of expression modules according to the expression.
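The construction in steps S11 and S12 can be sketched as a simple pairing of plane positions with movement values. This is a minimal illustration; the function name and data layout are assumptions, not taken from the patent.

```python
# Hypothetical sketch: positions are the (Xi, Yi) plane coordinates of the
# facial muscle feature points; movements are the Zi values measured while
# the expression is made. The result is the expression coordinate sequence
# (X1, Y1, Z1), ..., (Xn, Yn, Zn).
def build_expression_sequence(positions, movements):
    if len(positions) != len(movements):
        raise ValueError("each feature point needs exactly one movement value")
    return [(x, y, z) for (x, y), z in zip(positions, movements)]
```

For example, two feature points at (1, 2) and (3, 4) with movement values 5 and 6 yield the sequence [(1, 2, 5), (3, 4, 6)].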
In a preferred embodiment of the present invention, in S1, the method further includes the following steps:
S11, selecting facial muscle feature points on one side of the face only, the positions of the facial muscle feature points being arranged symmetrically along the central axis of the face and numbered C1, C2, …, Cn;
S12, when an expression is made, the muscle movement data of the facial muscle feature points of the half face are measured as N1, N2, …, Nn; the positions symmetrical to these feature points are also assigned N1, N2, …, Nn, constructing a complete expression coordinate sequence (C1, N1), (C2, N2), …, (Ck, N1), (Ck+1, N2), …, (Cn, Nn), where Ck is the number of the facial muscle feature point at the position corresponding to C1;
S13, repeatedly making different expressions, and classifying them into a plurality of expression modules according to the expression.
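The half-face construction above can be sketched as follows. The numbering convention for the mirrored points (measured point Ci maps to Ci+k) is an assumption made for illustration; the patent only states that symmetric positions receive the same movement values.

```python
# Hedged sketch: only one side of the face is measured; the symmetric point
# of each measured feature point is assigned the same movement value, giving
# a complete (point_number, movement) sequence for the whole face.
def mirror_half_face(point_numbers, movements):
    k = len(point_numbers)
    measured = list(zip(point_numbers, movements))
    mirrored = [(c + k, n) for c, n in measured]  # assumed numbering: Ci -> Ci+k
    return measured + mirrored
```

With three measured points this doubles the sequence to six entries, the second half repeating the measured movement values at the mirrored point numbers.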
In a preferred embodiment of the present invention, the facial muscle feature points are selected at least at the forehead, eyebrows, eye sockets, cheeks, corners of the mouth, and chin of the face.
In a preferred embodiment of the present invention, in S2, the numerical range of muscle motion capture for each facial muscle feature point is N1x~N1y, N2x~N2y, …, Nnx~Nny.
In a preferred embodiment of the present invention, in S3, the more expression coordinate sequences are input into the expression database, the fewer expression coordinate sequences and corresponding facial static images the expression database searches for and outputs, with at least one always returned.
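One way to read this screening rule: each stored facial static image carries a per-feature-point motion range, and an input value matches only if it falls inside that range, so supplying more input values can only narrow the candidate set. The dictionary layout below is an assumed illustration, not the patent's storage format.

```python
# Hypothetical range-matching search: database maps image_id -> per-point
# (low, high) motion ranges; query maps feature point -> measured motion value.
# An image is a hit only if every queried point lies inside its stored range.
def screen(database, query):
    hits = []
    for image_id, ranges in database.items():
        if all(p in ranges and ranges[p][0] <= v <= ranges[p][1]
               for p, v in query.items()):
            hits.append(image_id)
    return hits
```

Adding a second query point can only remove candidates, which matches the stated behavior that more inputs yield fewer, but at least one, outputs (provided the query values come from a stored sequence).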
In a preferred embodiment of the present invention, the number of the facial muscle feature points is at least 62, and they are either arranged symmetrically along the central axis of the face or arranged on one side of the face only.
In a preferred embodiment of the present invention, in S1, N is an integer of at least 10000.
The invention also provides a computing processing device comprising a memory and a processor, wherein the memory stores the expression coordinate sequences of the different expression modules of the expression database and the corresponding facial static images, and the processor implements the steps of the above method when executing a computer program stored in the memory.
The invention remedies the defects described in the background art and has the following beneficial effects:
(1) the invention provides a method for using an expression database based on facial muscle characteristics, which divides expression coordinate sequences of different expressions into different expression modules, sets a range for motion data of each muscle position of each expression coordinate sequence, and can obtain a complete expression coordinate sequence and a corresponding facial static image by inputting a certain number of coordinate sequences when using the expression database.
(2) The method describes the detailed muscle characteristics and rules of the human face through the muscle movement data range of each muscle of each expression of a specific actor, realizing the depiction of different expressions and of different degrees of the same expression. It abandons the traditional facial expression database of single static images, facilitates the use of the database, provides convenience for later-stage film and television special effects and character facial animation, and reduces the cost of post-production image processing.
(3) The computing processing device comprises a memory and a processor; the memory stores the expression coordinate sequences of the different expression modules of the expression database and the corresponding facial static images, and the processor screens suitable images and coordinate sequences from the expression database according to the input position coordinates and muscle movement data. Instead of relying on manual identification alone, the ranges of facial muscle motion data are first used to narrow the search over thousands of static images to a small candidate set, which is then refined by manual identification, realizing a leap in database retrieval.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description show only some embodiments of the present invention, and that those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a flow chart of a method of using an expression database of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention, but the present invention may be practiced in other ways than those specifically described herein, and therefore the scope of the present invention is not limited by the specific embodiments disclosed below.
In the description of the present application, it is to be understood that the terms "center," "longitudinal," "lateral," "upper," "lower," "front," "rear," "left," "right," "vertical," "horizontal," "top," "bottom," "inner," "outer," and the like are used in the orientation or positional relationship indicated in the drawings for convenience in describing the present application and for simplicity in description, and are not intended to indicate or imply that the referenced devices or elements must have a particular orientation, be constructed in a particular orientation, and be operated in a particular manner, and are not to be considered limiting of the scope of the present application. Furthermore, the terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first," "second," etc. may explicitly or implicitly include one or more of that feature. In the description of the invention, the meaning of "a plurality" is two or more unless otherwise specified.
In the description of the present application, it is to be noted that, unless otherwise explicitly specified or limited, the terms "mounted," "connected," and "connected" are to be construed broadly, e.g., as meaning either a fixed connection, a removable connection, or an integral connection; can be mechanically or electrically connected; they may be connected directly or indirectly through intervening media, or they may be interconnected between two elements. The specific meaning of the above terms in the present application can be understood by those of ordinary skill in the art through specific situations.
As shown in Figure 1, the expression database is a database of expression coordinate sequences of specific facial muscle positions and motion data. The expression coordinate sequences of different expressions are divided into different expression modules, and a range is set for the motion data of each muscle position of each expression coordinate sequence. When the expression database is used, a complete expression coordinate sequence and the corresponding facial static image can be obtained simply by inputting a certain number of coordinate sequences; the traditional reliance on manual identification alone is abandoned, greatly improving the efficiency of using the expression database. The detailed muscle characteristics and rules of the human face are described through the muscle movement data range of each muscle of each expression of a specific actor, so that different expressions, and different degrees of the same expression, can be depicted. This abandons the traditional facial expression database of single static images, facilitates the use of the database, provides convenience for later-stage film and television special effects and character facial animation, and reduces the cost of post-production image processing.
The method for using the expression database based on facial muscle features comprises the following steps:
S1, establishing expression modules: the expression database is divided into a plurality of expression modules, each expression module corresponds to one expression, and each expression module contains at least 10000 expression coordinate sequences of the same expression at different degrees;
S11, establishing plane coordinates by taking the intersection point of the central axis of the human face and the horizontal line as the origin O;
S12, the position coordinates of the facial muscle feature points on the human face are (X1, Y1), (X2, Y2), (X3, Y3), …, (Xn, Yn), where X1, X2, …, Xn are the lateral coordinate values of the facial feature points and Y1, Y2, …, Yn are the longitudinal coordinate values; combining these with the muscle movement data Z1, Z2, …, Zn of each facial muscle feature point when an expression is made yields the expression coordinate sequence (X1, Y1, Z1), (X2, Y2, Z2), (X3, Y3, Z3), …, (Xn, Yn, Zn);
S13, repeatedly making different expressions, and classifying them into a plurality of expression modules according to the expression;
S2, muscle movement data: within each expression module, the same expression is divided into five grades (weak, small, medium, large and strong) according to its degree; a range is set for the motion data of each muscle position of each expression coordinate sequence, the numerical range of muscle motion capture for each facial muscle feature point being N1x~N1y, N2x~N2y, …, Nnx~Nny;
S3, accurate positioning and screening: at least 5 expression coordinate sequences are input into the expression database, and the expression database searches for and outputs at least one expression coordinate sequence and the corresponding facial static image;
S4, manual screening: a number of suitable facial static images are selected manually according to subjective judgment.
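The five-grade division in step S2 above might be sketched as a simple threshold lookup. The numeric cut points below are assumptions for illustration only; the patent does not specify them.

```python
# Assumed grading: a normalized expression intensity in [0, 1] is mapped to
# one of the five grades named in step S2.
GRADES = ["weak", "small", "medium", "large", "strong"]

def grade(intensity, thresholds=(0.2, 0.4, 0.6, 0.8)):
    for cut, name in zip(thresholds, GRADES):
        if intensity < cut:
            return name
    return GRADES[-1]  # at or above the last threshold
```

In practice each grade would then be associated with its own per-muscle motion data ranges within the expression module.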
The method for using the expression database based on facial muscle features may instead comprise the following steps:
S1, establishing expression modules: the expression database is divided into a plurality of expression modules, each expression module corresponds to one expression, and each expression module contains at least 10000 expression coordinate sequences of the same expression at different degrees;
S11, selecting facial muscle feature points on one side of the face only, the positions of the facial muscle feature points being arranged symmetrically along the central axis of the face and numbered C1, C2, …, Cn;
S12, when an expression is made, the muscle movement data of the facial muscle feature points of the half face are measured as N1, N2, …, Nn; the positions symmetrical to these feature points are also assigned N1, N2, …, Nn, constructing a complete expression coordinate sequence (C1, N1), (C2, N2), …, (Ck, N1), (Ck+1, N2), …, (Cn, Nn), where Ck is the number of the facial muscle feature point at the position corresponding to C1;
S13, repeatedly making different expressions, and classifying them into a plurality of expression modules according to the expression.
The facial muscle feature points are selected at least at the forehead, eyebrows, eye sockets, cheeks, corners of the mouth, and chin of the face.
S2, muscle movement data: within each expression module, the same expression is divided into five grades (weak, small, medium, large and strong) according to its degree; a range is set for the motion data of each muscle position of each expression coordinate sequence, the numerical range of muscle motion capture for each facial muscle feature point being N1x~N1y, N2x~N2y, …, Nnx~Nny;
S3, accurate positioning and screening: at least 5 expression coordinate sequences are input into the expression database, and the expression database searches for and outputs at least one expression coordinate sequence and the corresponding facial static image;
S4, manual screening: a number of suitable facial static images are selected manually according to subjective judgment.
The expression modules of the invention at least comprise expression coordinate sequences for facial happiness, sadness, anger, disgust, urgency, surprise, fear and blankness, together with the corresponding facial static images.
The more expression coordinate sequences are input into the expression database, the fewer expression coordinate sequences and corresponding facial static images the expression database searches for and outputs, with at least one always returned.
The invention uses at least 62 facial muscle feature points, which are either arranged symmetrically along the central axis of the face or arranged on one side of the face only.
The muscle movement data can be measured with the following equipment: a plurality of facial muscle feature points are selected symmetrically along the central axis of the face, multidirectional tension sensors are adhered at the feature points, adjacent multidirectional tension sensors are connected by fibers, and each multidirectional tension sensor is also connected by fibers in different directions to a fiber frame on the face. The fiber frame comprises circumferential fibers surrounding the face, central-axis fibers coinciding with the central axis of the face, and transverse fibers located at the nasal wings and perpendicular to the central-axis fibers, so as to fix the multidirectional tension sensors. The movement of the facial muscles applies different tensions to the differently oriented fibers connected to each multidirectional tension sensor, and these tensions are used to display the muscle movement values of the different facial muscle feature points of the whole face.
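The sensor arrangement above leaves open how the tensions on a sensor's fibers combine into one movement value. One plausible sketch, assuming the movement value is the magnitude of the vector sum of the fiber tensions, is:

```python
import math

# Hypothetical aggregation: each attached fiber contributes a tension and a
# direction (in radians); the movement value at a feature point is taken as
# the magnitude of the vector sum of all fiber tensions. This aggregation
# rule is an assumption; the patent does not specify one.
def movement_value(fiber_tensions):
    fx = sum(t * math.cos(a) for t, a in fiber_tensions)
    fy = sum(t * math.sin(a) for t, a in fiber_tensions)
    return math.hypot(fx, fy)
```

Under this assumption, two equal tensions pulling in opposite directions cancel, giving a movement value near zero.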
The invention also provides a computing processing device comprising a memory and a processor, wherein the memory stores the expression coordinate sequences of the different expression modules of the expression database and the corresponding facial static images, and the processor implements the steps of the above method when executing a computer program stored in the memory. The processor screens suitable images and coordinate sequences from the expression database according to the input position coordinates and muscle movement data. Instead of relying on manual identification alone, the ranges of facial muscle motion data are first used to narrow the search over thousands of static images to a small candidate set, which is then refined by manual identification, realizing a leap in database retrieval.
In light of the foregoing description of the preferred embodiment of the present invention, it is to be understood that various changes and modifications may be made by one skilled in the art without departing from the spirit and scope of the invention. The technical scope of the present invention is not limited to the content of the specification, and must be determined according to the scope of the claims.

Claims (10)

1. A method for using an expression database based on facial muscle features, characterized in that the expression database is a database of expression coordinate sequences of specific facial muscle positions and motion data, and the method comprises the following steps:
S1, establishing expression modules: the expression database is divided into a plurality of expression modules, each expression module corresponds to one expression, and each expression module contains N expression coordinate sequences of the same expression at different degrees;
S2, muscle movement data: within each expression module, the same expression is divided into five grades (weak, small, medium, large and strong) according to its degree; a range is set for the motion data of each muscle position of each expression coordinate sequence;
S3, accurate positioning and screening: at least 5 expression coordinate sequences are input into the expression database, and the expression database searches for and outputs at least one expression coordinate sequence and the corresponding facial static image;
S4, manual screening: a number of suitable facial static images are selected manually according to subjective judgment.
2. The method for using a facial muscle feature-based expression database according to claim 1, characterized in that: the expression modules at least comprise expression coordinate sequences for facial happiness, sadness, anger, disgust, urgency, surprise, fear and blankness, together with the corresponding facial static images.
3. The method for using a facial muscle feature-based expression database according to claim 1, characterized in that S1 further comprises:
S11, establishing plane coordinates by taking the intersection point of the central axis of the human face and the horizontal line as the origin O;
S12, the position coordinates of the facial muscle feature points on the human face are (X1, Y1), (X2, Y2), (X3, Y3), …, (Xn, Yn), where X1, X2, …, Xn are the lateral coordinate values of the facial feature points and Y1, Y2, …, Yn are the longitudinal coordinate values; combining these with the muscle movement data Z1, Z2, …, Zn of each facial muscle feature point when an expression is made yields the expression coordinate sequence (X1, Y1, Z1), (X2, Y2, Z2), (X3, Y3, Z3), …, (Xn, Yn, Zn);
S13, repeatedly making different expressions, and classifying them into a plurality of expression modules according to the expression.
4. The method for using a facial muscle feature-based expression database according to claim 1, characterized in that S1 further comprises:
S11, selecting facial muscle feature points on one side of the face only, the positions of the facial muscle feature points being arranged symmetrically along the central axis of the face and numbered C1, C2, …, Cn;
S12, when an expression is made, the muscle movement data of the facial muscle feature points of the half face are measured as N1, N2, …, Nn; the positions symmetrical to these feature points are also assigned N1, N2, …, Nn, constructing a complete expression coordinate sequence (C1, N1), (C2, N2), …, (Ck, N1), (Ck+1, N2), …, (Cn, Nn), where Ck is the number of the facial muscle feature point at the position corresponding to C1;
S13, repeatedly making different expressions, and classifying them into a plurality of expression modules according to the expression.
5. The method for using a facial muscle feature-based expression database according to claim 3 or 4, characterized in that: the facial muscle feature points are selected at least at the forehead, eyebrows, eye sockets, cheeks, corners of the mouth, and chin of the face.
6. The method for using a facial muscle feature-based expression database according to claim 1, characterized in that: in S2, the numerical range of muscle motion capture for each facial muscle feature point is N1x~N1y, N2x~N2y, …, Nnx~Nny.
7. The method for using a facial muscle feature-based expression database according to claim 1, characterized in that: in S3, the more expression coordinate sequences are input into the expression database, the fewer expression coordinate sequences and corresponding facial static images the expression database searches for and outputs, with at least one always returned.
8. The method for using a facial muscle feature-based expression database according to claim 3 or 4, characterized in that: the number of the facial muscle feature points is at least 62, and they are either arranged symmetrically along the central axis of the face or arranged on one side of the face only.
9. The method for using a facial muscle feature-based expression database according to claim 1, characterized in that: in S1, N is an integer of at least 10000.
10. A computing processing device, comprising a memory and a processor, wherein the memory stores the expression coordinate sequences of the different expression modules of the expression database and the corresponding facial static images, and the processor implements the steps of the method of claim 1 when executing a computer program stored in the memory.
CN202011575760.7A 2020-12-28 2020-12-28 Facial muscle feature-based expression database using method and computing processing equipment Withdrawn CN112560786A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011575760.7A CN112560786A (en) 2020-12-28 2020-12-28 Facial muscle feature-based expression database using method and computing processing equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011575760.7A CN112560786A (en) 2020-12-28 2020-12-28 Facial muscle feature-based expression database using method and computing processing equipment

Publications (1)

Publication Number Publication Date
CN112560786A 2021-03-26

Family

ID=75033589

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011575760.7A Withdrawn CN112560786A (en) 2020-12-28 2020-12-28 Facial muscle feature-based expression database using method and computing processing equipment

Country Status (1)

Country Link
CN (1) CN112560786A (en)


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113925511A (en) * 2021-11-08 2022-01-14 北京九州安华信息安全技术有限公司 Muscle nerve vibration time-frequency image processing method and device
CN115249393A (en) * 2022-05-09 2022-10-28 深圳市麦驰物联股份有限公司 Identity authentication access control system and method
CN116977515A (en) * 2023-08-08 2023-10-31 广东明星创意动画有限公司 Virtual character expression driving method
CN116977515B (en) * 2023-08-08 2024-03-15 广东明星创意动画有限公司 Virtual character expression driving method


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20210326