CN112084856A - Face posture detection method and device, terminal equipment and storage medium
- Publication number: CN112084856A
- Application number: CN202010777199.4A
- Authority: CN (China)
- Prior art keywords: detection, face, face image, angle, pose
- Legal status: Pending (assumed; not a legal conclusion)
Classifications
- G06V40/161: Human faces - Detection; Localisation; Normalisation
- G06V40/171: Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
- G06V40/172: Classification, e.g. identification
- G06N3/045: Neural networks - Combinations of networks
- G06T7/73: Determining position or orientation of objects or cameras using feature-based methods
- G06T2207/10016: Video; Image sequence
- G06T2207/20081: Training; Learning
- G06T2207/20084: Artificial neural networks [ANN]
- G06T2207/30201: Face
Abstract
The application is applicable to the technical field of machine vision, and provides a face pose detection method and apparatus, a terminal device, and a storage medium. The method comprises: performing face detection on a target image; when the target image comprises a face image, detecting at least two key points in the face image; performing coarse detection of the pose of the face in the face image according to the at least two key points to obtain a coarse detection result; and, when the coarse detection result satisfies a fine detection condition, inputting the face image into a pre-trained neural network model for fine detection of the pose of the face in the face image to obtain a fine detection result. By coarsely detecting the face pose from the key points and then finely detecting it with the neural network model, the embodiments of the application can improve the accuracy of face pose detection.
Description
Technical Field
The application belongs to the technical field of machine vision, and particularly relates to a human face posture detection method and device, a terminal device and a storage medium.
Background
Face pose detection, which determines the orientation of a face, is widely used in machine vision applications such as human-computer interaction, virtual reality, and intelligent monitoring.
In many face-related tasks, such as face recognition, attribute recognition, and mask recognition, the face pose must be detected first so that a frontal face, which carries more feature information, can be selected for the subsequent task. Accurately distinguishing frontal faces from side faces is therefore important. At present, common face pose detection analyzes detected key points against a preset template to obtain the face pose; however, because the preset standard template inevitably deviates from the actual face, the accuracy of such face pose detection is low.
Disclosure of Invention
The embodiments of the application provide a face pose detection method and apparatus, a terminal device, and a storage medium, aiming to solve the problem of low accuracy in existing face pose detection.
In a first aspect, an embodiment of the present application provides a face pose detection method, including:
carrying out face detection on the target image;
when the target image comprises a face image, detecting at least two key points in the face image;
according to at least two key points in the face image, performing coarse detection on the pose of the face in the face image to obtain a coarse detection result;
and when the coarse detection result meets a fine detection condition, inputting the face image into a pre-trained neural network model to perform fine detection on the posture of the face in the face image, so as to obtain a fine detection result.
In a second aspect, an embodiment of the present application provides a face pose detection apparatus, including:
the first detection module is used for carrying out face detection on the target image;
the second detection module is used for detecting at least two key points in the face image when the target image comprises the face image;
the rough detection module is used for carrying out rough detection on the posture of the face in the face image according to at least two key points in the face image to obtain a rough detection result;
and the fine detection module is used for inputting the face image into a pre-trained neural network model to perform fine detection on the posture of the face in the face image when the coarse detection result meets a fine detection condition, so as to obtain a fine detection result.
In a third aspect, an embodiment of the present application provides a terminal device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of the above-mentioned face pose detection method when executing the computer program.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium, where a computer program is stored, and the computer program, when executed by a processor, implements the steps of the above-mentioned face pose detection method.
In a fifth aspect, the present application provides a computer program product, which when run on an electronic device, causes the electronic device to execute the above steps of the above-mentioned face pose detection method.
Compared with the prior art, the embodiment of the application has the advantages that: the embodiment of the application carries out face detection on the target image; when the target image comprises a face image, detecting at least two key points in the face image; according to at least two key points in the face image, performing coarse detection on the pose of the face in the face image to obtain a coarse detection result; and when the coarse detection result meets a fine detection condition, inputting the face image into a pre-trained neural network model to perform fine detection on the posture of the face in the face image, so as to obtain a fine detection result. According to the embodiment of the application, the human face posture can be roughly detected according to the key points, then the neural network model is used for accurately detecting the human face posture, and the accuracy of human face posture detection can be improved.
Drawings
In order to illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application; other drawings can be obtained from them by those of ordinary skill in the art without creative effort.
Fig. 1 is a schematic flow chart of a face pose detection method according to an embodiment of the present application;
fig. 2 is a schematic specific flowchart of step S103 according to an embodiment of the present application;
FIG. 3 is a schematic diagram of an application scenario for coarse detection of a face pose according to an embodiment of the present application;
FIG. 4 is a schematic flow chart illustrating a training process for the neural network model constructed in advance according to another embodiment of the present application;
FIG. 5 is a schematic diagram of a neural network architecture for fine detection of human face pose according to another embodiment of the present application;
fig. 6 is a schematic structural diagram of a face pose detection apparatus according to another embodiment of the present application;
fig. 7 is a schematic structural diagram of a terminal device according to another embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon", "in response to determining", or "in response to detecting". Similarly, the phrase "if it is determined" or "if [a described condition or event] is detected" may be interpreted contextually to mean "upon determining", "in response to determining", "upon detecting [the described condition or event]", or "in response to detecting [the described condition or event]".
Furthermore, in the description of the present application and the appended claims, the terms "first," "second," "third," and the like are used for distinguishing between descriptions and not necessarily for describing or implying relative importance.
Reference throughout this specification to "one embodiment" or "some embodiments," or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," or the like, in various places throughout this specification are not necessarily all referring to the same embodiment, but rather "one or more but not all embodiments" unless specifically stated otherwise. The terms "comprising," "including," "having," and variations thereof mean "including, but not limited to," unless expressly specified otherwise.
The face pose detection method provided by the embodiments of the application can be applied to terminal devices such as robots, mobile terminals, desktop computers, notebook computers, and palmtop computers; the embodiments of the application place no limitation on the specific type of terminal device.
In order to explain the technical means described in the present application, the following examples are given below.
Referring to fig. 1, a method for detecting a face pose provided in an embodiment of the present application includes:
step S101, carrying out face detection on the target image.
Specifically, the target image to be detected is acquired through an image capture device, for example a camera. The target image captured by the camera is obtained, and face detection is performed on it through a face detection algorithm. The purpose of face detection is to determine information such as the position and number of faces in the target image.
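As an illustration only (the patent does not specify a particular detector), a minimal face-detection step might be sketched with OpenCV's bundled Haar cascade; the cascade file, parameters, and image path below are assumptions:

```python
import cv2

# OpenCV ships a frontal-face Haar cascade; one possible face detection algorithm.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_faces(target_image):
    """Return bounding boxes (x, y, w, h) of faces found in the target image."""
    gray = cv2.cvtColor(target_image, cv2.COLOR_BGR2GRAY)
    # scaleFactor/minNeighbors are common defaults, not values from the patent.
    return cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

frame = cv2.imread("target.jpg")  # e.g. a frame captured from a camera
boxes = detect_faces(frame)
print(f"{len(boxes)} face(s) detected")
```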
Step S102, when the target image comprises a face image, at least two key points in the face image are detected.
Specifically, when a face image is detected to exist in a target image, key point detection is performed on the face image, and position information of at least two key points in the face image is obtained. The keypoints include, but are not limited to, keypoints located in the eyes, nose, mouth, eyebrows, ears, etc. of the human face. The key points in the face image can be detected through a face key point detection algorithm based on a neural network.
Step S103, according to at least two key points in the face image, performing coarse detection on the pose of the face in the face image to obtain a coarse detection result.
Specifically, the distance between at least two key points can be determined according to the positions of the at least two key points in the face image, and then the pose of the face is roughly detected according to the distance.
In one embodiment, step S103 specifically includes:
step S1031, acquiring a first distance between a first key point and a second key point in the face image and a second distance between the first key point and a third key point or a fourth key point;
wherein the first key point, the second key point, the third key point and the fourth key point are respectively located in different five sense organs of the face in the face image;
in a specific application, for example, the first key point is a left eye located in a face image, the second key point is a right eye located in the face image, the third key point is a mouth located in the face image, and the fourth key point is a nose located in the face image; or the first key point is a right eye in the face image, the second key point is a left eye in the face image, the third key point is a mouth in the face image, and the fourth key point is a nose in the face image.
Step S1032, according to the ratio of the first distance to the second distance, performing coarse detection on the pose of the face in the face image to obtain a coarse detection result;
in specific application, the first distance and the second distance of the front face and the side face of the human face are different, so that the human face posture in the human face image is roughly detected according to the ratio of the first distance to the second distance, and the obtained rough detection result can be that the human face in the human face image subjected to rough detection is the front face or the side face.
In a specific application scenario, the first distance is the distance between key points in the two eyes of the face image, and the second distance is the distance between one of the two eyes and the mouth or nose. As the face rotates from frontal to profile, the first distance decreases significantly while the second distance remains substantially constant, so the degree of frontality can be quantified by the ratio of the first distance to the second distance. Table 1 shows the correspondence between face angles and this ratio:
| Face angle a (°) | a < 10 | 10 < a < 30 | 30 < a < 60 | 60 < a < 90 |
|---|---|---|---|---|
| d1/d2 | 1.2-1.5 | 0.95-1.2 | 0.55-0.95 | < 0.55 |

Table 1: Distribution of the ratio d1/d2 for faces at different angles, where d1/d2 denotes the ratio of the first distance to the second distance.
In a practical application scenario, a ratio of 0.95 between the first and second distances in Table 1 is used as the boundary threshold between the frontal face and the side face. Fig. 3 is a schematic diagram of an application scenario of coarse face pose detection, showing the frontal-face and side-face display results with this 0.95 demarcation threshold: for example, is_ce: true denotes a frontal face and is_ce: false denotes a side face.
Step S1033, when the ratio is greater than a first preset threshold and smaller than a second preset threshold, determining that the coarse detection result satisfies a fine detection condition.
In a specific application, the second preset threshold is greater than the first preset threshold. After coarse detection of the face pose, some face images may be misclassified, so face images whose ratio of the first distance to the second distance falls between the first and second preset thresholds are sent to fine detection; that is, a face image whose ratio lies between the two thresholds is determined to satisfy the fine detection condition.
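A minimal sketch of this coarse-detection logic follows. The 0.95 boundary comes from Table 1, but the exact first and second preset thresholds are not fixed by the description, so the default values here are assumptions:

```python
import math

def dist(p, q):
    """Euclidean distance between two (x, y) keypoints."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def coarse_detect(left_eye, right_eye, mouth, t1=0.55, t2=1.2):
    """Coarse pose check from the keypoint distance ratio d1/d2.

    d1: first distance, between the two eye keypoints.
    d2: second distance, between one eye and the mouth.
    Returns 'frontal', 'side', or 'fine' (satisfies the fine detection condition).
    """
    ratio = dist(left_eye, right_eye) / dist(left_eye, mouth)
    if ratio > t2:
        return "frontal"   # output directly as a target frontal-face image
    if ratio <= t1:
        return "side"
    return "fine"          # ratio between t1 and t2: send to the neural network
```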
And S104, when the coarse detection result meets a fine detection condition, inputting the face image into a pre-trained neural network model to perform fine detection on the posture of the face in the face image, so as to obtain a fine detection result.
In a specific application, when the coarse detection result meets the fine detection condition, the face image meeting the fine detection condition is input into a pre-trained neural network model to perform fine detection on the pose of the face in the face image, and the discrimination result of the fine detection on the face pose is obtained.
In one embodiment, during coarse detection, a face image whose ratio of the first distance to the second distance is greater than the second preset threshold is output directly as a target frontal-face image; face images whose ratio is greater than the first preset threshold and smaller than the second undergo fine detection, and those judged frontal in the fine detection result are output as target frontal-face images.
In one embodiment, the inputting the face image into a pre-trained neural network model to perform fine detection on the pose of the face in the face image to obtain a fine detection result includes: inputting the face image into a pre-trained neural network model to perform fine detection on the pose of the face in the face image to obtain angle values of three pose angles of the face in the face image; the three attitude angles are respectively a pitching angle of the human face, a left-right turning angle of the human face and a left-right inclination angle of the human face.
In a specific application, the pre-trained neural network model is a pre-constructed and trained neural network model. The pre-trained neural network model outputs three attitude angles of the face image, wherein the three attitude angles are the Pitch angle Pitch of the face, the left-right turning angle Yaw of the face and the left-right inclination angle Roll of the face respectively.
Inputting the face image into a pre-trained neural network model to perform fine detection on the pose of the face in the face image to obtain a fine detection result, and further comprising: and when the angle values of the three attitude angles are respectively in the respective corresponding preset angle ranges, determining that the face attitude in the face image is a front face.
In a specific application, the face image is input into the pre-trained neural network model for fine detection of the face pose, yielding the three pose angles of the face image; whether the face pose is frontal is then determined by checking whether each of the three pose angles lies within its corresponding preset angle range.
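A small sketch of this frontal-face decision follows; the specific angle ranges are assumptions, since the description leaves the preset ranges open:

```python
def is_frontal(pitch, yaw, roll,
               pitch_range=(-20.0, 20.0),   # assumed preset range for Pitch
               yaw_range=(-15.0, 15.0),     # assumed preset range for Yaw
               roll_range=(-20.0, 20.0)):   # assumed preset range for Roll
    """Return True when all three pose angles lie within their preset ranges."""
    return (pitch_range[0] <= pitch <= pitch_range[1]
            and yaw_range[0] <= yaw <= yaw_range[1]
            and roll_range[0] <= roll <= roll_range[1])
```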
In an embodiment, before face detection is performed on the target image, the pre-constructed neural network model is trained; once trained, it becomes the pre-trained neural network model. The pre-constructed neural network model can use a lightweight backbone for its network design, preferably one built from the MobileNetV3_small network; depending on the actual application scenario, it can also be built from other lightweight networks such as MobileNetV2, ShuffleNetV2, or MobileFaceNet. As shown in fig. 4, the training process of the pre-constructed neural network model includes steps S201 to S206:
step S201, obtaining a face image to be trained, and preprocessing the face image to be trained.
In specific application, a face image to be trained is obtained first, and the face image to be trained can be preprocessed to obtain a preprocessed face image to be trained.
In one embodiment, acquiring a face image to be trained, and preprocessing the face image to be trained, includes: acquiring a face image to be trained, and preprocessing the face image to be trained through at least one preset enhancement algorithm; the preset enhancement algorithm comprises a motion blur enhancement algorithm and/or an illumination disturbance enhancement algorithm.
In a specific application, pose detection of face images is generally applied to dynamic video. A model trained on images that have not been preprocessed performs poorly in dynamic scenes and requires a large amount of training data, mainly because dynamic scenes contain motion blur and illumination disturbance that the raw training images lack. The face images to be trained are therefore preprocessed with at least one preset enhancement algorithm to expand the training data: specifically, the face image to be trained is preprocessed with a motion blur enhancement algorithm and an illumination disturbance enhancement algorithm separately, and/or with the motion blur enhancement algorithm followed by the illumination disturbance enhancement algorithm (or in the reverse order). The motion blur enhancement algorithm can be implemented with a MotionBlur augmentation, and the illumination disturbance enhancement algorithm with a ChannelShuffle augmentation.
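The description names MotionBlur and ChannelShuffle augmentations without tying them to a library; as one possible realization, the albumentations package provides transforms with these names, used in the sketch below (parameter values are assumptions):

```python
import cv2
import albumentations as A

# Assumed realization of the two preset enhancement algorithms.
augment = A.Compose([
    A.MotionBlur(blur_limit=7, p=0.5),  # motion blur enhancement
    A.ChannelShuffle(p=0.5),            # illumination/color disturbance enhancement
])

img = cv2.imread("face_to_train.jpg")
augmented = augment(image=img)["image"]  # preprocessed face image for the data set
```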
Step S202, a face image data set is established according to the face image to be trained and the preprocessed face image to be trained.
In specific application, a face image data set is established according to the face image to be trained and the preprocessed face image to be trained. The neural network model can be trained through the facial image data set.
Step S203, marking the angle values of the three attitude angles of all the face images in the face image data set; the three attitude angles are respectively a pitching angle of the human face, a left-right turning angle of the human face and a left-right inclination angle of the human face.
In specific application, marking the angle values of three attitude angles of all face images in the face image data set to obtain the result of marking all face images in the image data set; the marking result is the angle values of three attitude angles of the marked face image, wherein the three attitude angles are the Pitch angle Pitch of the face, the angle Yaw of the left and right turning of the face and the angle Roll of the left and right inclination of the face respectively;
step S204, inputting the marked face image data set into a neural network model for training, and obtaining the probability that each posture angle of each face image in the face image data set belongs to each type of N types of preset angles; wherein N is more than or equal to 2.
In a specific application, the labeled face image data set is input into the neural network model for training to obtain the probability that each pose angle of each face image in the data set belongs to each of N classes of preset angles, where N ≥ 2.
Step S205, calculating the angle values of the three attitude angles of each face image according to the probability that each attitude angle of each face image belongs to the preset angle of each category in the preset angles of the N categories.
In a specific application scenario, the neural network model may be trained by combining classification and regression ideas to obtain the face pose prediction. For example, the three pose angles are treated as three separate classification tasks over the prediction range [-99°, 99°], with classes spaced 3° apart; this spacing is only an example, and other spacings are possible. Each pose angle is then ultimately predicted over 66 categories. The result from the classification sub-network of the neural network model can be understood as placing the pose angle roughly within a certain interval, with an error of about 3° inside that interval, so the angle is refined using a regression idea, such as the deep expectation (DEX) approach.
In an embodiment, formula 1 for calculating the angle values of the three pose angles of each face image from the probability that each pose angle belongs to each of the N classes of preset angles is:

$E(O) = \sum_{i=1}^{N} y_i \, O_i$  (formula 1)

where $E(O)$ represents the angle value of the pose angle, $y_i$ represents the probability, obtained by training the neural network model, that the pose angle belongs to the i-th class of preset angles, and $O_i$ represents the angle value of the i-th class of preset angles.
In a specific application, to refine the angle, the regression-based refinement detection may proceed as follows. Suppose one of the three pose angles is classified into 66 categories whose preset angle values are y = {-99, -96, -93, ..., 93, 96}. The classification sub-network of the neural network outputs the result for these 66 classes as o = {o_1, o_2, o_3, ..., o_64, o_65, o_66}, where o_1, o_2, ..., o_66 are the probabilities of belonging to the corresponding categories. Multiplying the preset angle value of each category by the probability of that category and summing (formula 1) finally yields the pose angle E.
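A NumPy sketch of this expectation step, mirroring formula 1 with 66 bins spaced 3 degrees apart (the exact bin endpoints are one consistent reading of the truncated set above):

```python
import numpy as np

BIN_ANGLES = np.arange(-99, 99, 3, dtype=np.float32)  # 66 preset angles: -99, -96, ..., 96

def expected_angle(logits):
    """Formula 1: E(O) = sum_i y_i * O_i, with y = softmax of the 66 class scores."""
    y = np.exp(logits - logits.max())
    y /= y.sum()
    return float(np.sum(y * BIN_ANGLES))

logits = np.random.randn(66).astype(np.float32)  # one head's output for a pose angle
print(f"refined angle: {expected_angle(logits):.2f} degrees")
```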
And S206, when the preset loss function of the neural network model is not converged, adjusting parameters of the neural network model and returning to the step of inputting the marked face image data set into the neural network model for training until the preset loss function is converged.
In a specific application, the preset loss function is used for indicating the difference between the calculated angle values of the three pose angles of each face image and the angle values of the marked three pose angles. And when the preset loss function of the neural network model is not converged, after the parameters of the neural network model are adjusted, the step of training the neural network model according to the marked face image data set is continuously executed until the preset loss function is converged.
In a specific application scenario, when the output value of the preset loss function is greater than the preset threshold, after adjusting the parameters of the neural network model, the step of training the neural network model according to the labeled face image data set is continuously performed until the output value of the preset loss function is less than or equal to the preset threshold.
In one embodiment, the preset loss function is calculated by the following formula:

$L = L_{ce} + L_{mse}$

where $L_{ce}$ represents the cross entropy between the calculated angle value of the pose angle and the angle value of the marked pose angle, and $L_{mse}$ represents the mean square error between the calculated angle value of the pose angle and the angle value of the marked pose angle.
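A PyTorch sketch of such a combined loss for a single pose-angle head; summing the two terms without a weighting factor is an assumption (a weighted sum would also match the description):

```python
import torch
import torch.nn.functional as F

BIN_ANGLES = torch.arange(-99, 99, 3, dtype=torch.float32)  # 66 preset angles

def pose_loss(logits, angle_label):
    """Preset loss = cross-entropy term + mean-square-error term.

    logits:      (batch, 66) classification scores for the angle bins.
    angle_label: (batch,) marked angle values in degrees.
    """
    # Cross entropy against the bin index containing the marked angle.
    bin_label = ((angle_label + 99.0) / 3.0).long().clamp(0, 65)
    l_ce = F.cross_entropy(logits, bin_label)
    # MSE between the expected angle (formula 1) and the marked angle.
    expected = (logits.softmax(dim=1) * BIN_ANGLES).sum(dim=1)
    l_mse = F.mse_loss(expected, angle_label)
    return l_ce + l_mse
```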
In one embodiment, as shown in fig. 5, the schematic diagram of the neural network architecture for fine detection of the face pose includes an input image 51 and a lightweight network 52, where the lightweight network 52 includes: a fully connected layer 521, used to output features indicating the three pose angles of the face; an activation layer 522, used to perform multi-class classification through softmax, calculating and outputting the probability of each class; a cross entropy loss function 523, used to calculate the cross entropy loss error; an angle value calculation unit 524 for the pose angles, used to calculate the angle values of the three pose angles; a mean square error loss function 525, used to calculate the mean square error; and a preset loss function 526, used to calculate the difference between the calculated angle values of the three pose angles of each face image and the angle values of the three marked pose angles.
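As an illustrative sketch of such an architecture, assuming the MobileNetV3_small backbone named above (via torchvision) and three 66-way heads; the layer sizes and input resolution are assumptions:

```python
import torch
import torch.nn as nn
from torchvision.models import mobilenet_v3_small

class PoseNet(nn.Module):
    """Lightweight network with three pose-angle heads (Pitch, Yaw, Roll)."""
    def __init__(self, num_bins=66):
        super().__init__()
        backbone = mobilenet_v3_small(weights=None)
        self.features = backbone.features          # lightweight backbone
        self.pool = nn.AdaptiveAvgPool2d(1)
        feat_dim = 576  # channel count of mobilenet_v3_small's last feature map
        self.fc_pitch = nn.Linear(feat_dim, num_bins)  # fully connected heads (cf. 521)
        self.fc_yaw = nn.Linear(feat_dim, num_bins)
        self.fc_roll = nn.Linear(feat_dim, num_bins)

    def forward(self, x):
        f = self.pool(self.features(x)).flatten(1)
        # softmax (cf. activation layer 522) is applied inside the loss / formula 1
        return self.fc_pitch(f), self.fc_yaw(f), self.fc_roll(f)

logits_p, logits_y, logits_r = PoseNet()(torch.randn(1, 3, 224, 224))
```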
Therefore, the embodiment of the application carries out face detection on the target image; when the target image comprises a face image, detecting at least two key points in the face image; according to at least two key points in the face image, performing coarse detection on the pose of the face in the face image to obtain a coarse detection result; and when the coarse detection result meets a fine detection condition, inputting the face image into a pre-trained neural network model to perform fine detection on the posture of the face in the face image, so as to obtain a fine detection result. According to the embodiment of the application, the human face posture can be roughly detected according to the key points, then the neural network model is used for accurately detecting the human face posture, and the accuracy of human face posture detection can be improved.
Fig. 6 shows a block diagram of a face pose detection apparatus provided in the embodiment of the present application, corresponding to the face pose detection method described in the above embodiment, and only shows portions related to the embodiment of the present application for convenience of description. Referring to fig. 6, the face pose detection apparatus 600 includes:
a first detection module 601, configured to perform face detection on a target image;
a second detection module 602, configured to detect at least two key points in a face image when the target image includes the face image;
a rough detection module 603, configured to perform rough detection on the pose of the face in the face image according to at least two key points in the face image, so as to obtain a rough detection result;
and a fine detection module 604, configured to, when the coarse detection result meets a fine detection condition, input the face image into a neural network model to perform fine detection on the pose of the face in the face image, so as to obtain a fine detection result.
In one embodiment, the coarse detection module 603 includes:
the acquiring unit is used for acquiring a first distance between a first key point and a second key point in the face image and a second distance between the first key point and a third key point or a fourth key point; wherein the first key point, the second key point, the third key point and the fourth key point are respectively located in different five sense organs of the face in the face image;
an obtaining unit, configured to perform coarse detection on the pose of the face in the face image according to a ratio of the first distance to the second distance, so as to obtain a coarse detection result;
and the determining unit is used for determining that the coarse detection result meets the fine detection condition when the ratio is greater than a first preset threshold and smaller than a second preset threshold.
In one embodiment, the fine detection module 604 is specifically configured to:
inputting the face image into a pre-trained neural network model to perform fine detection on the pose of the face in the face image to obtain angle values of three pose angles of the face in the face image; the three attitude angles are respectively a pitching angle of the human face, a left-right turning angle of the human face and a left-right inclination angle of the human face.
In one embodiment, the fine detection module 604 is further specifically configured to:
and when the angle values of the three attitude angles are respectively in the respective corresponding preset angle ranges, determining that the face attitude in the face image is a front face.
In one embodiment, the face pose detection apparatus 600 includes:
the preprocessing module is used for acquiring a face image to be trained and preprocessing the face image to be trained;
the establishing module is used for establishing a face image data set according to the face image to be trained and the preprocessed face image to be trained;
the marking module is used for marking the angle values of the three attitude angles of all the face images in the face image data set; the three attitude angles are respectively a pitching angle of a human face, a left-right turning angle of the human face and a left-right inclination angle of the human face;
the training module is used for inputting the marked face image data set into a neural network model for training to obtain the probability that each posture angle of each face image in the face image data set belongs to each type of N types of preset angles; wherein N is more than or equal to 2;
the calculation module is used for calculating the angle values of the three attitude angles of each face image according to the probability that each attitude angle of each face image belongs to the preset angle of each of the N classes of preset angles;
the adjusting module is used for adjusting parameters of the neural network model and returning to the step of inputting the marked face image data set into the neural network model for training until the preset loss function is converged when the preset loss function of the neural network model is not converged; the preset loss function is used for indicating the difference between the calculated angle values of the three attitude angles of each face image and the angle values of the marked three attitude angles.
In one embodiment, the calculation formula of the calculation module is:

$E(O) = \sum_{i=1}^{N} y_i \, O_i$

where $E(O)$ represents the angle value of the pose angle, $y_i$ represents the probability, obtained by training the neural network model, that the pose angle belongs to the i-th class of preset angles, and $O_i$ represents the angle value of the i-th class of preset angles.
In one embodiment, the preprocessing module is specifically configured to:
acquiring a face image to be trained, and preprocessing the face image to be trained through at least one preset enhancement algorithm; the preset enhancement algorithm comprises a motion blur enhancement algorithm and/or an illumination disturbance enhancement algorithm.
In one embodiment, the preset loss function is calculated by the following formula:

$L = L_{ce} + L_{mse}$

where $L_{ce}$ represents the cross entropy between the calculated angle value of the pose angle and the angle value of the marked pose angle, and $L_{mse}$ represents the mean square error between the calculated angle value of the pose angle and the angle value of the marked pose angle.
Therefore, the embodiment of the application carries out face detection on the target image; when the target image comprises a face image, detecting at least two key points in the face image; according to at least two key points in the face image, performing coarse detection on the pose of the face in the face image to obtain a coarse detection result; and when the coarse detection result meets a fine detection condition, inputting the face image into a pre-trained neural network model to perform fine detection on the posture of the face in the face image, so as to obtain a fine detection result. According to the embodiment of the application, the human face posture can be roughly detected according to the key points, then the neural network model is used for accurately detecting the human face posture, and the accuracy of human face posture detection can be improved.
As shown in fig. 7, an embodiment of the present invention further provides a terminal device 700, including: a processor 701, a memory 702, and a computer program 703, such as a face pose detection program, stored in the memory 702 and executable on the processor 701. The processor 701 implements the steps in each of the above embodiments of the face pose detection method when executing the computer program 703, and implements the functions of the modules in the above-described device embodiments, such as the functions of the modules 601 to 604 shown in fig. 6.
Illustratively, the computer program 703 may be partitioned into one or more modules that are stored in the memory 702 and executed by the processor 701 to implement the present invention. The one or more modules may be a series of computer program instruction segments capable of performing specific functions, which are used for describing the execution process of the computer program 703 in the terminal device 700. For example, the computer program 703 may be divided into a first detection module, a second detection module, a coarse detection module, and a fine detection module, and specific functions of the modules are described in the foregoing embodiments, and are not described herein again.
The terminal device 700 may be a robot, a desktop computer, a notebook, a palm computer, or other computing devices. The terminal device may include, but is not limited to, a processor 701 and a memory 702. Those skilled in the art will appreciate that fig. 7 is merely an example of a terminal device 700 and does not constitute a limitation of terminal device 700 and may include more or fewer components than shown, or some components may be combined, or different components, e.g., the terminal device may also include input-output devices, network access devices, buses, etc.
The Processor 701 may be a Central Processing Unit (CPU), other general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic device, discrete hardware component, or the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The storage 702 may be an internal storage unit of the terminal device 700, such as a hard disk or a memory of the terminal device 700. The memory 702 may also be an external storage device of the terminal device 700, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like provided on the terminal device 700. Further, the memory 702 may also include both an internal storage unit and an external storage device of the terminal device 700. The memory 702 is used for storing the computer program and other programs and data required by the terminal device. The memory 702 may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other ways. For example, the above-described embodiments of the apparatus/terminal device are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated module, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, all or part of the flow of the method according to the embodiments of the present invention may also be implemented by a computer program, which may be stored in a computer-readable storage medium, and when the computer program is executed by a processor, the steps of the method embodiments may be implemented. Wherein the computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form, etc. The computer-readable medium may include: any entity or device capable of carrying the computer program code, recording medium, usb disk, removable hard disk, magnetic disk, optical disk, computer Memory, Read-Only Memory (ROM), Random Access Memory (RAM), electrical carrier wave signals, telecommunications signals, software distribution medium, and the like. It should be noted that the computer readable medium may contain content that is subject to appropriate increase or decrease as required by legislation and patent practice in jurisdictions, for example, in some jurisdictions, computer readable media does not include electrical carrier signals and telecommunications signals as is required by legislation and patent practice.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present invention, and are intended to be included within the scope of the present invention.
Claims (11)
1. A face pose detection method is characterized by comprising the following steps:
carrying out face detection on the target image;
when the target image comprises a face image, detecting at least two key points in the face image;
according to at least two key points in the face image, performing coarse detection on the pose of the face in the face image to obtain a coarse detection result;
and when the coarse detection result meets a fine detection condition, inputting the face image into a pre-trained neural network model to perform fine detection on the posture of the face in the face image, so as to obtain a fine detection result.
2. The method according to claim 1, wherein the performing a coarse detection on the pose of the face in the face image according to at least two key points in the face image to obtain a coarse detection result comprises:
acquiring a first distance between a first key point and a second key point in the face image and a second distance between the first key point and a third key point or a fourth key point; wherein the first key point, the second key point, the third key point and the fourth key point are respectively located in different five sense organs of the face in the face image;
according to the ratio of the first distance to the second distance, performing coarse detection on the posture of the face in the face image to obtain a coarse detection result;
and when the ratio is greater than a first preset threshold and smaller than a second preset threshold, determining that the coarse detection result meets a fine detection condition.
3. The method according to claim 1, wherein the inputting the face image into a pre-trained neural network model for performing a fine detection on the pose of the face in the face image to obtain a fine detection result comprises:
inputting the face image into a pre-trained neural network model to perform fine detection on the pose of the face in the face image to obtain angle values of three pose angles of the face in the face image; the three attitude angles are respectively a pitching angle of the human face, a left-right turning angle of the human face and a left-right inclination angle of the human face.
4. The method according to claim 3, wherein the inputting the face image into a pre-trained neural network model performs a fine detection on the pose of the face in the face image to obtain a fine detection result, further comprising:
and when the angle values of the three attitude angles are respectively in the respective corresponding preset angle ranges, determining that the face attitude in the face image is a front face.
5. The method according to any one of claims 1 to 4, wherein before the face detection of the target image, the method comprises:
acquiring a face image to be trained, and preprocessing the face image to be trained;
establishing a face image data set according to the face image to be trained and the preprocessed face image to be trained;
marking the angle values of the three attitude angles of all the face images in the face image data set; the three attitude angles are respectively a pitching angle of a human face, a left-right turning angle of the human face and a left-right inclination angle of the human face;
inputting the marked face image data set into a neural network model for training to obtain the probability that each posture angle of each face image in the face image data set belongs to each preset angle in N types of preset angles; wherein N is more than or equal to 2;
calculating angle values of three attitude angles of each face image according to the probability that each attitude angle of each face image belongs to the preset angle of each of N classes of preset angles;
when the preset loss function of the neural network model is not converged, adjusting parameters of the neural network model and returning to the step of inputting the marked face image data set into the neural network model for training until the preset loss function is converged; the preset loss function is used for indicating the difference between the calculated angle values of the three attitude angles of each face image and the angle values of the marked three attitude angles.
6. The method according to claim 5, wherein the calculation formula for calculating the angle values of the three pose angles of each face image according to the probability that each pose angle of each face image belongs to each of the N classes of preset angles is:

$E(O) = \sum_{i=1}^{N} y_i \, O_i$

wherein $E(O)$ represents the angle value of the pose angle, $y_i$ represents the probability, obtained by training the neural network model, that the pose angle belongs to the i-th class of preset angles, and $O_i$ represents the angle value of the i-th class of preset angles.
7. The method for detecting the human face pose according to claim 5, wherein the step of obtaining the human face image to be trained and the step of preprocessing the human face image to be trained comprise the following steps:
acquiring a face image to be trained, and preprocessing the face image to be trained through at least one preset enhancement algorithm; the preset enhancement algorithm comprises a motion blur enhancement algorithm and/or an illumination disturbance enhancement algorithm.
8. The method according to any one of claims 5 to 7, wherein the preset loss function is calculated by the following formula:

$L = L_{ce} + L_{mse}$

wherein $L_{ce}$ represents the cross entropy between the calculated angle value of the pose angle and the angle value of the marked pose angle, and $L_{mse}$ represents the mean square error between the calculated angle value of the pose angle and the angle value of the marked pose angle.
9. A face pose detection apparatus, comprising:
the first detection module is used for carrying out face detection on the target image;
the second detection module is used for detecting at least two key points in the face image when the target image comprises the face image;
the rough detection module is used for carrying out rough detection on the posture of the face in the face image according to at least two key points in the face image to obtain a rough detection result;
and the fine detection module is used for inputting the face image into the neural network model to perform fine detection on the posture of the face in the face image when the coarse detection result meets a fine detection condition, so as to obtain a fine detection result.
10. A terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the method according to any of claims 1 to 8 when executing the computer program.
11. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1 to 8.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010777199.4A CN112084856A (en) | 2020-08-05 | 2020-08-05 | Face posture detection method and device, terminal equipment and storage medium |
PCT/CN2020/140411 WO2022027912A1 (en) | 2020-08-05 | 2020-12-28 | Face pose recognition method and apparatus, terminal device, and storage medium. |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010777199.4A CN112084856A (en) | 2020-08-05 | 2020-08-05 | Face posture detection method and device, terminal equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112084856A (en) | 2020-12-15 |
Family
ID=73735678
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010777199.4A Pending CN112084856A (en) | 2020-08-05 | 2020-08-05 | Face posture detection method and device, terminal equipment and storage medium |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN112084856A (en) |
WO (1) | WO2022027912A1 (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114882567A (en) * | 2022-05-27 | 2022-08-09 | 支付宝(杭州)信息技术有限公司 | Method, device and equipment for training expression recognition model |
CN115620082B (en) * | 2022-09-29 | 2023-09-01 | 合肥的卢深视科技有限公司 | Model training method, head posture estimation method, electronic device, and storage medium |
CN115512427B (en) * | 2022-11-04 | 2023-04-25 | 北京城建设计发展集团股份有限公司 | User face registration method and system combined with matched biopsy |
CN115798000A (en) * | 2022-11-23 | 2023-03-14 | 中国科学院深圳先进技术研究院 | Face pose estimation method and device based on structured light system |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109919049A (en) * | 2019-02-21 | 2019-06-21 | 北京以萨技术股份有限公司 | Fatigue detection method based on deep learning human face modeling |
CN112084856A (en) * | 2020-08-05 | 2020-12-15 | 深圳市优必选科技股份有限公司 | Face posture detection method and device, terminal equipment and storage medium |
- 2020-08-05: CN application CN202010777199.4A filed (published as CN112084856A; status: Pending)
- 2020-12-28: PCT application PCT/CN2020/140411 filed (published as WO2022027912A1; status: Application Filing)
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1924894A (en) * | 2006-09-27 | 2007-03-07 | 北京中星微电子有限公司 | Multiple attitude human face detection and track system and method |
US20100074479A1 (en) * | 2008-09-19 | 2010-03-25 | Altek Corporation | Hierarchical face recognition training method and hierarchical face recognition method thereof |
CN108875492A (en) * | 2017-10-11 | 2018-11-23 | 北京旷视科技有限公司 | Face datection and crucial independent positioning method, device, system and storage medium |
CN108197547A (en) * | 2017-12-26 | 2018-06-22 | 深圳云天励飞技术有限公司 | Face pose estimation, device, terminal and storage medium |
CN111401456A (en) * | 2020-03-20 | 2020-07-10 | 杭州涂鸦信息技术有限公司 | Training method of human face posture recognition model and system and device thereof |
Cited By (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2022027912A1 (en) * | 2020-08-05 | 2022-02-10 | 深圳市优必选科技股份有限公司 | Face pose recognition method and apparatus, terminal device, and storage medium. |
CN112528903A (en) * | 2020-12-18 | 2021-03-19 | 平安银行股份有限公司 | Face image acquisition method and device, electronic equipment and medium |
CN112528903B (en) * | 2020-12-18 | 2023-10-31 | 平安银行股份有限公司 | Face image acquisition method and device, electronic equipment and medium |
CN112651490A (en) * | 2020-12-28 | 2021-04-13 | 深圳万兴软件有限公司 | Training method and device for face key point detection model and readable storage medium |
CN112651490B (en) * | 2020-12-28 | 2024-01-05 | 深圳万兴软件有限公司 | Training method and device for human face key point detection model and readable storage medium |
CN112932407A (en) * | 2021-01-29 | 2021-06-11 | 上海市内分泌代谢病研究所 | Face front calibration method and system |
CN112949492A (en) * | 2021-03-03 | 2021-06-11 | 南京视察者智能科技有限公司 | Model series training method and device for face detection and key point detection and terminal equipment |
WO2022199419A1 (en) * | 2021-03-22 | 2022-09-29 | 深圳市百富智能新技术有限公司 | Facial detection method and apparatus, and terminal device and computer-readable storage medium |
CN113297423A (en) * | 2021-05-24 | 2021-08-24 | 深圳市优必选科技股份有限公司 | Pushing method, pushing device and electronic equipment |
CN114399803A (en) * | 2021-11-30 | 2022-04-26 | 际络科技(上海)有限公司 | Face key point detection method and device |
CN114550235A (en) * | 2022-01-17 | 2022-05-27 | 合肥的卢深视科技有限公司 | Attitude angle detection method, system, electronic device and storage medium |
WO2023231400A1 (en) * | 2022-05-31 | 2023-12-07 | 青岛云天励飞科技有限公司 | Method and apparatus for predicting facial angle, and device and readable storage medium |
Also Published As
Publication number | Publication date |
---|---|
WO2022027912A1 (en) | 2022-02-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112084856A (en) | Face posture detection method and device, terminal equipment and storage medium | |
CN110197146B (en) | Face image analysis method based on deep learning, electronic device and storage medium | |
CN109829448B (en) | Face recognition method, face recognition device and storage medium | |
CN112085701B (en) | Face ambiguity detection method and device, terminal equipment and storage medium | |
CN112633084B (en) | Face frame determining method and device, terminal equipment and storage medium | |
CN112348778B (en) | Object identification method, device, terminal equipment and storage medium | |
CN110852311A (en) | Three-dimensional human hand key point positioning method and device | |
EP4024270A1 (en) | Gesture recognition method, electronic device, computer-readable storage medium, and chip | |
CN112990318B (en) | Continuous learning method, device, terminal and storage medium | |
CN112069887A (en) | Face recognition method, face recognition device, terminal equipment and storage medium | |
CN111126268A (en) | Key point detection model training method and device, electronic equipment and storage medium | |
CN110032941B (en) | Face image detection method, face image detection device and terminal equipment | |
CN113065523B (en) | Target tracking method and device, electronic equipment and storage medium | |
CN104463085A (en) | Face recognition method based on local binary pattern and KFDA | |
CN113688785A (en) | Multi-supervision-based face recognition method and device, computer equipment and storage medium | |
CN115471439A (en) | Method and device for identifying defects of display panel, electronic equipment and storage medium | |
CN112488054A (en) | Face recognition method, face recognition device, terminal equipment and storage medium | |
CN109241942B (en) | Image processing method and device, face recognition equipment and storage medium | |
CN116309643A (en) | Face shielding score determining method, electronic equipment and medium | |
CN114913567A (en) | Mask wearing detection method and device, terminal equipment and readable storage medium | |
CN109993178B (en) | Feature data generation and feature matching method and device | |
CN112416128A (en) | Gesture recognition method and terminal equipment | |
CN111931794A (en) | Sketch-based image matching method | |
CN114360044A (en) | Gesture recognition method and device, terminal equipment and computer readable storage medium | |
CN111523373A (en) | Vehicle identification method and device based on edge detection and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | Application publication date: 20201215 |