CN113361486A - Multi-pose face recognition method and device, storage medium and electronic equipment - Google Patents


Publication number
CN113361486A
CN113361486A
Authority
CN
China
Prior art keywords
face
matched
target
characteristic value
preset
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110775388.2A
Other languages
Chinese (zh)
Inventor
王琪琪
张磊
张元鹏
程晓杰
曾一林
Current Assignee
Beijing Taoche Technology Co ltd
Original Assignee
Beijing Taoche Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Beijing Taoche Technology Co ltd filed Critical Beijing Taoche Technology Co ltd
Priority to CN202110775388.2A priority Critical patent/CN113361486A/en
Publication of CN113361486A publication Critical patent/CN113361486A/en
Pending legal-status Critical Current

Abstract

The embodiment of the application discloses a multi-pose face recognition method and device, a storage medium, and an electronic device. The method comprises the following steps: acquiring a human body activity track of a user in a detection area; extracting a face image set from the human body image set associated with the human body activity track; when at least one target face image meeting a preset rule exists in the face image set, determining a face feature value to be matched corresponding to each target face image; and when a target face feature value matching the face feature value to be matched exists in a face feature library, acquiring the target identity information associated with the target face feature value and taking it as the identity information of the user. In this way, face images of the same user in multiple poses are obtained, and the face feature values corresponding to those images are matched against the face feature library to determine the user's identity, which can improve the accuracy of face recognition in unconstrained scenarios.

Description

Multi-pose face recognition method and device, storage medium and electronic equipment
Technical Field
The present application relates to the field of computer technologies, and in particular, to a multi-pose face recognition method, apparatus, storage medium, and electronic device.
Background
At present, face recognition achieves high accuracy in certain scenarios, such as face-scan attendance and identity authentication. In a face-scan attendance application, a face feature library covering all employees has already been enrolled before any employee scans a face, and recognition performs a one-to-N comparison of feature values. In such a scenario, the face feature library is complete and stable, and its feature values do not need to be accumulated over time. The face images collected during attendance have simple, uncluttered backgrounds, and employees look directly at the camera lens, possibly performing face recognition several times.
However, in unconstrained scenarios — for example, counting customer traffic in a store within a day, or identifying returning customers who visit the store multiple times in the same day or on different days to support precision marketing — the situation differs. If a user's clothing is unchanged, pedestrian re-identification can be performed accurately; but if users wear clothing of similar color or style, or change clothes and reappear the next day, pedestrian re-identification performs poorly. In such scenarios there is no complete face feature library in advance: face images must be captured opportunistically while the user moves through the store, and their feature values extracted and stored. Because the images are captured at random, the quality of the extracted feature values is not guaranteed, and the face feature library can only be accumulated gradually. When a face image of the user in another pose, such as a side face or a lowered head, is captured for recognition, recognition may fail because the library only stores the feature value of the user's frontal face.
Disclosure of Invention
The embodiments of the present application provide a multi-pose face recognition method and apparatus, a computer storage medium, and an electronic device, aiming to solve the technical problem of improving face recognition accuracy in unconstrained scenarios in the related art. The technical scheme is as follows:
in a first aspect, an embodiment of the present application provides a multi-pose face recognition method, where the method includes:
acquiring a human body activity track of a user in a detection area;
extracting a human face image set from the human body image set associated with the human body activity track;
when at least one target face image which accords with a preset rule exists in the face image set, determining a face characteristic value to be matched corresponding to each target face image in the at least one target face image;
and when it is determined that a target face feature value matching the face feature value to be matched exists in a face feature library, acquiring the target identity information associated with the target face feature value, and taking the target identity information as the identity information of the user.
In a second aspect, an embodiment of the present application provides a multi-pose face recognition apparatus, including:
the first acquisition module is used for acquiring a human body activity track of a user in a detection area;
the second extraction module is used for extracting a human face image set from the human body image set associated with the human body activity track;
the first determining module is used for determining a face feature value to be matched corresponding to each target face image in at least one target face image when at least one target face image which meets a preset rule is determined to exist in the face image set;
and the second determining module is used for acquiring target identity information associated with the target face characteristic value when the target face characteristic value matched with the face characteristic value to be matched exists in the face characteristic library, and taking the target identity information as the identity information of the user.
In a third aspect, embodiments of the present application provide a computer storage medium having a plurality of instructions adapted to be loaded by a processor and to perform the above-mentioned method steps.
In a fourth aspect, an embodiment of the present application provides an electronic device, which may include: a memory and a processor; wherein the memory stores a computer program adapted to be loaded by the processor to perform the above-mentioned method steps.
The beneficial effects brought by the technical scheme provided by the embodiment of the application at least comprise:
when the scheme of the embodiment of the application is executed, the human body motion track of a user in a detection area is obtained, a human face image set is extracted from a human body image set associated with the human body motion track, when at least one target human face image which accords with a preset rule exists in the human face image set, a to-be-matched human face characteristic value corresponding to each target human face image in the at least one target human face image is determined, when a target human face characteristic value matched with the to-be-matched human face characteristic value exists in a human face characteristic library, target identity information associated with the target human face characteristic value is obtained, and the target identity information is used as identity information of the user. By the method, the face images of the same user in multiple postures are obtained, and the face characteristic values corresponding to the face images in the multiple postures are matched in the face characteristic library, so that the identity information of the user is determined, and the accuracy of face identification in an unlimited scene can be improved.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly introduced below. It is obvious that the drawings in the following description illustrate only some embodiments of the present application, and other drawings can be obtained from them by those skilled in the art without creative effort.
Fig. 1 is a schematic flowchart of a multi-pose face recognition method according to an embodiment of the present application;
FIG. 2 is a schematic flow chart of another multi-pose face recognition method provided by the embodiment of the present application;
FIG. 3 is a schematic diagram of a face pose provided by an embodiment of the present application;
fig. 4 is a schematic structural diagram of a multi-pose face recognition apparatus according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of an electronic device provided in an embodiment of the present application;
FIG. 6 is a schematic structural diagram of an operating system and a user space provided in an embodiment of the present application;
FIG. 7 is an architecture diagram of the Android operating system of FIG. 5;
FIG. 8 is an architecture diagram of the iOS operating system of FIG. 5.
Detailed Description
In order to make the objects, features, and advantages of the embodiments of the present application more apparent and understandable, the technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the accompanying drawings. It is evident that the described embodiments are only a part of the embodiments of the present application, not all of them. All other embodiments obtained by a person skilled in the art based on the embodiments herein without creative effort shall fall within the protection scope of the present application.
When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the application, as detailed in the appended claims.
In the description of the present application, it is to be understood that the terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. The specific meaning of the above terms in the present application can be understood in a specific case by those of ordinary skill in the art.
In the following method embodiments, for convenience of description, only the main execution subject of each step is described as an electronic device.
Please refer to fig. 1, which is a flowchart illustrating a multi-pose face recognition method according to an embodiment of the present application. As shown in fig. 1, the method of the embodiment of the present application may include the steps of:
s101, obtaining a human body movement track of a user in the detection area.
The human body activity track is a set of a series of continuous human body images formed by continuous activities of a user from a starting point to an end point of a certain time interval.
Specifically, the embodiment of the application can be applied to electronic camera equipment in a shopping mall or a store, to electronic camera equipment used for attendance checking in an enterprise, or to electronic camera equipment in other scenarios. By collecting a video containing the user in the detection area, the human body activity track of the user in the video can be acquired. As for the users in the video, there may be one user or multiple users in the same time period, which is not limited in the embodiment of the application.
For example, if the embodiment of the application is applied to an electronic camera device at the entrance of a store, video of customers arriving at the store between 9 am and 10 am can be collected by the device, and the human body activity track of each user can then be extracted from the video. Since many users may enter and leave the store during that period, and the device may capture one or more human body images of a user in various body poses before the user enters the store or as the user leaves, the human body activity track of each user in multiple body poses during entering and leaving the store can be extracted from the video. Human body poses here include an upright pose, a stooped pose, a leaning pose, and other poses.
And S102, extracting a human face image set from the human body image set associated with the human body activity track.
The human body image set is a set of multiple human body images corresponding to multiple body poses of the same user along a continuous activity track. The face image set is the set of face images corresponding to the human body images in the human body image set. Accordingly, the user's face pose also varies with the body pose and may include, but is not limited to, a frontal face, a left side face, a right side face, a head-up face, and a head-down face. Therefore, each image in the face image set may be a face image corresponding to a frontal face, a left side face, a right side face, a head-up face, or a head-down face.
Specifically, for each human body image in the human body image set, a face detection algorithm can be used to obtain the coordinates of the bounding rectangle of the face region within the image. The face image corresponding to the face region can then be cropped from the human body image according to those coordinates, and the face images cropped from all human body images form the face image set.
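The detect-and-crop step described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the function name and the (x, y, w, h) box format — as produced by common detectors such as OpenCV's cascade classifiers — are assumptions.

```python
import numpy as np

def crop_face_regions(body_image, face_boxes):
    """Crop face images from a body image given bounding rectangles.

    face_boxes: list of (x, y, w, h) tuples, e.g. as returned by a face
    detector such as OpenCV's CascadeClassifier.detectMultiScale.
    """
    crops = []
    h_img, w_img = body_image.shape[:2]
    for (x, y, w, h) in face_boxes:
        # Clamp the rectangle to the image bounds before slicing.
        x0, y0 = max(0, x), max(0, y)
        x1, y1 = min(w_img, x + w), min(h_img, y + h)
        crops.append(body_image[y0:y1, x0:x1])
    return crops
```

Running this over every human body image in the set and collecting the crops yields the face image set.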
S103, when at least one target face image which accords with a preset rule exists in the face image set, determining a face feature value to be matched corresponding to each target face image in the at least one target face image.
The preset rule is used for screening target face images from the face image set. Determining that at least one target face image meeting the preset rule exists in the face image set means that face images in the five poses — left side face, right side face, frontal face, head-up face, and head-down face — are screened out of the face image set. A target face image can thus be understood as the face image corresponding to any one of these five poses.
The face feature value is an information set composed of face features and can be understood as a face feature vector. Each face can be annotated with a number of feature points — for example 21, 29, 68, or 81 feature points — covering the facial features and the facial contour; a feature vector extractor derives a feature vector for each face from these feature points. Each face corresponds to one feature vector in the algorithm, and this vector is the basis for face comparison.
Specifically, if at least one target face image meeting the preset rule is identified in the face image set, the feature vector corresponding to each target face image is extracted by the feature extractor, and each such feature vector is called a face feature value to be matched.
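In practice the feature extractor is a trained model. Purely as an illustration of turning feature points into a comparable vector — an assumption for demonstration, not the patent's extractor — a toy extractor could use normalized pairwise landmark distances:

```python
import numpy as np

def toy_feature_vector(landmarks):
    """Build a toy feature vector from 2-D face landmarks.

    landmarks: (N, 2) array of feature-point coordinates. Real systems use a
    trained embedding model; normalized pairwise distances merely illustrate
    the idea of deriving a comparable vector from feature points.
    """
    pts = np.asarray(landmarks, dtype=float)
    n = len(pts)
    # All pairwise distances between landmarks.
    dists = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    iu = np.triu_indices(n, k=1)
    vec = dists[iu]
    # Normalize so the vector is invariant to face scale.
    return vec / (np.linalg.norm(vec) + 1e-12)
```

The scale normalization makes two landmark sets that differ only in face size produce the same vector, which is the property one wants before comparing faces captured at different distances.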
And S104, when it is determined that a target face feature value matching the face feature value to be matched exists in the face feature library, acquiring the target identity information associated with the target face feature value, and taking the target identity information as the identity information of the user.
The face feature library stores face feature values of multiple users; for each user, the library may store one corresponding face feature value or several. The face feature values stored in the face feature library are referred to as target face feature values.
Specifically, determining that a target face feature value matching the face feature value to be matched exists in the face feature library can be understood as calculating the similarity between each target face feature value and the face feature value to be matched. If a similarity greater than the similarity threshold exists, the target face feature value corresponding to that similarity is determined to match the face feature value to be matched. If all similarities are smaller than or equal to the similarity threshold, it can be determined that no matching target face feature value exists in the face feature library. Further, if a matching target face feature value exists, the target identity information of the user associated with it is acquired and taken as the identity information of the user.
It will be appreciated that the Euclidean distance or the cosine distance is generally used to measure the distance between two vectors. The Euclidean distance is the most common distance metric and computes the absolute distance between points in a multidimensional space. The cosine distance is the cosine of the angle between two vectors in a vector space and measures the relative difference in direction between two individuals rather than their absolute distance or length. Therefore, for the face recognition method of the embodiment of the present application, the cosine distance is more appropriate than the Euclidean distance as the similarity measure between two feature vectors.
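The preference for the cosine measure can be illustrated with a small sketch (illustrative only; the vectors are made up):

```python
import numpy as np

def euclidean_distance(a, b):
    """Absolute distance between two points in a multidimensional space."""
    return float(np.linalg.norm(np.asarray(a, float) - np.asarray(b, float)))

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: direction, not magnitude."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Two embeddings pointing the same way but with different magnitudes:
u = np.array([1.0, 2.0, 3.0])
v = 2 * u
# The Euclidean distance between u and v is large, yet their cosine
# similarity is 1.0 (identical direction) — which is why a direction-based
# measure suits comparing feature vectors of varying magnitude.
```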
For example, if the target face feature value in the face feature library that matches the face feature value to be matched is the face feature value of the user "Zhang San", then the target identity information associated with that target face feature value is the identity information of "Zhang San", and it can be determined that the user corresponding to the face feature value to be matched is "Zhang San".
When the scheme of the embodiment of the application is executed, the human body activity track of a user in a detection area is acquired, and a face image set is extracted from the human body image set associated with that track; when at least one target face image meeting a preset rule exists in the face image set, the face feature value to be matched corresponding to each target face image is determined; and when a target face feature value matching a face feature value to be matched exists in the face feature library, the target identity information associated with it is acquired and taken as the identity information of the user. In this way, face images of the same user in multiple poses are obtained, and the corresponding face feature values are matched against the face feature library to determine the user's identity, which can improve the accuracy of face recognition in unconstrained scenarios.
Please refer to fig. 2, which is a flowchart illustrating a multi-pose face recognition method according to an embodiment of the present application. As shown in fig. 2, the method of the embodiment of the present application may include the steps of:
s201, acquiring a human body motion track of a user in the detection area.
S202, extracting a human face image set from the human body image set associated with the human body activity track.
Specifically, S201 to S202 can refer to S101 to S102 in fig. 1, and are not described herein again.
And S203, performing quality scoring and pose scoring on each face image in the face image set to obtain a quality score and a pose score.
Specifically, quality scoring evaluates certain attributes of the face image, which may include but are not limited to shadow, geometric distortion, sharpness, resolution, texture blur, and noise; the resulting quality score ranges from 0 to 1, and the closer the score is to 1, the better the quality of the face image. Pose scoring likewise produces a score from 0 to 1, and the evaluation criteria may differ for different face poses. For a frontal face, if the facial features are unobstructed and in a normal state, the frontal pose can be considered good and scored 1. For a side face, the distances between facial features vary with the angle — for example, the apparent distance between the two eyes differs at different side-face angles — so a left or right side face can be scored based on these distances. For a raised or lowered head, the distances between facial features also vary with the raising or lowering angle — for example, the apparent distance between the mouth and the nose — so head-up and head-down faces can be scored on the same basis.
For example, referring to the schematic diagram of the two face images shown in fig. 3: image 310 is a standard left side face with a clear facial outline, so image 310 can be given a quality score of 1 and a pose score of 1; image 320 is a standard frontal face and the image is sharp, so image 320 can also be given a quality score of 1 and a pose score of 1.
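The distance-based pose-scoring idea can be sketched as follows. This is a hypothetical heuristic, not the patent's actual scoring model; the frontal eye distance used for calibration is an assumed input.

```python
import numpy as np

def side_face_pose_score(left_eye, right_eye, frontal_eye_dist):
    """Hypothetical pose score in [0, 1] for a side face.

    As the head turns, the apparent (projected) distance between the eyes
    shrinks relative to the frontal-view distance, so the ratio serves as a
    crude pose score. Real systems would use a calibrated pose estimator.
    """
    d = np.linalg.norm(np.asarray(left_eye, float) - np.asarray(right_eye, float))
    return float(min(1.0, d / frontal_eye_dist))
```

A head-up/head-down score could follow the same pattern using the mouth-to-nose distance mentioned above.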
And S204, taking the face images whose quality score is greater than or equal to a first threshold and whose pose score is greater than or equal to a second threshold as target face images.
Specifically, a first threshold can be preset for screening out face images of good quality, and a second threshold for screening out face images with good poses. Because only face images with good quality and good pose are collected for feature extraction and their face feature values stored, the face feature library is guaranteed to hold feature values of good quality, which improves the accuracy of face recognition. If the quality score and pose score obtained in step S203 range from 0 to 1, reasonable values for the first and second thresholds can be calculated by evaluating a large number of face image samples; for example, the first threshold may be set to 0.7 and the second threshold to 0.6.
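The screening in S203–S204 can be sketched as follows (the threshold values come from the example above; the data shape is an assumption):

```python
# Keep only images whose quality and pose scores clear the preset thresholds.
QUALITY_THRESHOLD = 0.7   # first threshold (example value from the text)
POSE_THRESHOLD = 0.6      # second threshold (example value from the text)

def select_target_faces(scored_faces):
    """scored_faces: list of (face_id, quality_score, pose_score) tuples.

    Returns the ids of the face images that qualify as target face images.
    """
    return [
        face_id
        for face_id, quality, pose in scored_faces
        if quality >= QUALITY_THRESHOLD and pose >= POSE_THRESHOLD
    ]
```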
S205, determining the face feature value to be matched corresponding to each target face image in at least one target face image.
The face feature value is an information set composed of face features and can be understood as a face feature vector. Each face can be annotated with a number of feature points — for example 21, 29, 68, or 81 feature points — covering the facial features and the facial contour; the feature vector extractor derives a feature vector for each face from these feature points. Each face corresponds to one feature vector in the algorithm, and this vector is the basis for face comparison. Specifically, a feature extractor may be used to extract the face feature value to be matched corresponding to each target face image.
And S206, matching the face feature value to be matched against each preset face feature value in the face feature library.
The preset face feature values are the face feature values corresponding to the stored face images of each user in the face feature library. The face feature library is an npy file containing multiple rows of data, each row storing one 256-dimensional face feature value. A companion pkl file stores the identity identifier of each user, the row numbers in the npy file of the face feature values corresponding to each identifier, and the pose, pose score, and quality score corresponding to each row of data. That is, one user corresponds to one identity identifier and to at least one face feature value — the user occupies at least one row of the npy file, each row representing the feature value of the user's face in a different pose. Note that a user occupies at most five rows in the face feature library, because at most five pose feature values are stored per user: frontal face, left side face, right side face, head-up face, and head-down face.
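The npy-plus-pkl layout described above might be sketched like this. The field names, file names, and values are assumptions for illustration, not the patent's actual schema.

```python
import pickle
import numpy as np

# One 256-dimensional feature value per row; at most five rows per user
# (one per pose), as described above.
features = np.random.rand(3, 256).astype(np.float32)
np.save("face_features.npy", features)

# Hypothetical index: identifier -> row numbers plus per-row pose metadata.
index = {
    "user_001": {"rows": [0, 1], "poses": ["frontal", "left"],
                 "pose_scores": [1.0, 0.9], "quality_scores": [0.95, 0.8]},
    "user_002": {"rows": [2], "poses": ["frontal"],
                 "pose_scores": [1.0], "quality_scores": [0.9]},
}
with open("face_index.pkl", "wb") as f:
    pickle.dump(index, f)
```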
Specifically, matching the face feature value to be matched against each preset face feature value can be understood as calculating the similarity between the face feature value to be matched and each preset face feature value, and comparing it with a similarity threshold to determine whether the two values match. In the embodiment of the present application, the cosine distance may be selected as the similarity; the specific reason is described under S104 in fig. 1 and is not repeated here. Accordingly, the cosine distance between the face feature value to be matched and each preset face feature value can be calculated and compared against a cosine distance threshold: if the cosine distance is greater than or equal to the threshold, the corresponding preset face feature value matches the face feature value to be matched; if it is smaller than the threshold, it does not match.
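A minimal sketch of the 1:N comparison in S206, treating the cosine of the angle as the similarity as the text does (the 0.75 threshold and the function shape are assumptions):

```python
import numpy as np

def match_against_library(query, library, threshold=0.75):
    """1:N match of a query feature value against the library's rows.

    query: 1-D feature value to be matched; library: 2-D array, one preset
    feature value per row. Returns the best-matching row index, or None if
    no row clears the similarity threshold.
    """
    q = query / np.linalg.norm(query)
    lib = library / np.linalg.norm(library, axis=1, keepdims=True)
    sims = lib @ q                      # cosine similarity to every stored value
    best = int(np.argmax(sims))
    return best if sims[best] >= threshold else None
```

The matched row index can then be looked up in the pkl index to recover the associated preset user identity identifier.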
And S207, if a preset face feature value matching the face feature value to be matched exists in the face feature library, acquiring the preset user identity identifier associated with that preset face feature value.
The preset user identity identifier identifies the identity information of a preset user. The face feature library stores the preset face feature values of multiple preset users as well as the preset user identity identifier of each; every preset user corresponds to one identifier, and each identifier may correspond to one or to several preset face feature values. It can be understood that the library may store the feature value of a preset user's frontal face image, and may also store the feature values of face images in other poses, such as the preset user's side face.
Specifically, if a preset face feature value matching the face feature value to be matched exists, the preset user identity identifier associated with that preset face feature value may be acquired. It can be understood that there may be one or more face feature values to be matched; in the latter case, whether a matching preset face feature value exists is determined for each of them. If every face feature value to be matched has a matching preset face feature value, the preset user identity identifier associated with each preset face feature value is acquired respectively.
For example: the current human face images to be recognized in three different postures are human face images of a user A, namely a left side face, a head-up face and a front face, wherein a human face characteristic value corresponding to the left side face is a vector 1, a human face characteristic value corresponding to the front face is a vector 3, and a human face characteristic value corresponding to the head-up face is a vector 5. In the face feature library, the preset face feature value matched with the vector 1 is a vector 2, the preset face feature value matched with the vector 3 is a vector 4, and the preset face feature value matched with the vector 5 is a vector 6. The preset user identities associated with the vector 2, the vector 4, and the vector 6 may be the same identity, the preset user identities associated with the vector 2, the vector 4, and the vector 6 may be different identities, the preset user identities associated with the vector 2 and the vector 4 may be the same identity, the preset user identities associated with the vector 2 and the vector 6 may be the same identity, and the preset user identities associated with the vector 4 and the vector 6 may be the same identity.
S208, when the number of preset user identities is one, determining the preset face feature value as the target face feature value.
Specifically, the case where the number of preset user identities is one covers two scenarios in the embodiment of the present application. First, if there is one face feature value to be matched, there is one preset face feature value matched with it, and one preset user identity associated with that preset face feature value. Second, if there are a plurality of face feature values to be matched and a plurality of preset face feature values matched with them, the preset user identities associated with the plurality of preset face feature values are all the same identity. In both scenarios, the preset face feature value is taken as the target face feature value.
S209, acquiring target identity information associated with the target face characteristic value based on the preset user identity, and taking the target identity information as the identity information of the user.
The target identity information is the identity information of the user stored in the database. Specifically, in addition to the face feature library storing the preset face feature values and preset user identities of the preset users, the database may also store the identity information of the preset users. The target identity information of the preset user associated with the preset user identity can therefore be obtained according to the preset user identity, and the target identity information can be taken as the identity information of the user corresponding to the face feature value to be matched.
S210, when the number of preset user identities is at least two, determining, for each preset user identity, the matching number of preset face feature values matched with the face feature values to be matched.
S211, determining the target user identity with the largest matching number in the matching numbers, and taking the preset face characteristic value corresponding to the target user identity as the target face characteristic value.
S210 to S211 will be explained below.
Specifically, the case where the number of preset user identities is at least two can be understood in the embodiment of the present application as follows: if there are at least two face feature values to be matched and at least two preset face feature values matched with them, the preset user identities associated with those preset face feature values number at least two. The matching number of preset face feature values matched with the face feature values to be matched can then be determined for each preset user identity. The preset user identity with the largest matching number is taken as the target user identity, and the preset face feature values corresponding to the target user identity are taken as the target face feature values.
For example, following the example in S207, the face images in three different poses to be currently recognized are face images of a user A, namely a left side face, a head-up face, and a front face; the face feature value corresponding to the left side face is vector 1, the face feature value corresponding to the front face is vector 3, and the face feature value corresponding to the head-up face is vector 5. In the face feature library, the preset face feature value matched with vector 1 is vector 2, the preset face feature value matched with vector 3 is vector 4, and the preset face feature value matched with vector 5 is vector 6. Suppose the preset user identity associated with each of vector 2 and vector 4 is identity 1, and the preset user identity associated with vector 6 is identity 2. It can then be determined that the matching number of preset face feature values corresponding to identity 1 is 2, and the matching number corresponding to identity 2 is 1. Therefore, the preset user identity with the largest matching number is identity 1, identity 1 can be taken as the target user identity, and vector 2 and vector 4 corresponding to identity 1 can be taken as the target face feature values.
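The majority vote of S210 to S211 amounts to counting, per identity, how many of the query feature values found a match, and keeping the identity with the largest count. A minimal sketch, in which the pair list, identifier strings, and helper name are illustrative assumptions:

```python
from collections import Counter

def vote_target_identity(match_results):
    # match_results: list of (query_vector, matched_user_id) pairs,
    # one per face feature value that found a match in the library.
    # Returns the identity with the most matches and the full tally.
    counts = Counter(user_id for _, user_id in match_results)
    target_id, _ = counts.most_common(1)[0]
    return target_id, counts

# Example from S210-S211: vectors 1 and 3 match identity 1,
# vector 5 matches identity 2, so identity 1 wins with 2 matches.
matches = [
    ("vector_1", "identity_1"),
    ("vector_3", "identity_1"),
    ("vector_5", "identity_2"),
]
target, counts = vote_target_identity(matches)
print(target, dict(counts))
```

The patent does not say how ties should be broken; `Counter.most_common` simply returns one of the tied identities, so a real implementation would need an explicit tie-breaking rule.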
S212, target identity information related to the target face characteristic value is obtained based on the target user identity identification, and the target identity information is used as identity information of the user.
Specifically, see S209, which is not described herein again.
And S213, storing the face characteristic value to be matched, which is not matched with the target face characteristic value, into the face characteristic library as a preset face characteristic value of the target user identity.
Specifically, storing in the face feature library the face feature values to be matched that are not matched with the target face feature value can be understood in the embodiment of the present application as follows. Besides the face feature values to be matched that are matched with the preset face feature values corresponding to the target user identity, in S210 there may also be face feature values to be matched that are matched with preset face feature values corresponding to other preset user identities. Since the face images corresponding to these face feature values to be matched all have a quality score greater than or equal to the first threshold and a pose score greater than or equal to the second threshold, they are all selected face images of better quality, and the face feature values in these poses are not yet stored for the target user identity in the face feature library. These face feature values to be matched can therefore be stored in the face feature library, so that the face feature values in the corresponding poses are completed for the user corresponding to the target user identity and the data of the face feature library is perfected.
For example: following the examples in S210 to S211, the target user identity is id 1, the preset face feature values of id 1 in the face feature library are vector 2 and vector 4, respectively, the face feature value to be matched that is matched with vector 2 is vector 1, the face feature value to be matched that is matched with vector 4 is vector 3, the preset face feature value of id 2 in the face feature library is vector 6, and the face feature value to be matched that is matched with vector 6 is vector 5, so that vector 5 is referred to as the face feature value to be matched that is not matched with the target user identity, that is, vector 5 is the face feature value to be matched that is not matched with id 1. Because the vector 1 is the face characteristic value of the left face, the vector 3 is the face characteristic value of the front face, the vector 5 is the face characteristic value of the head-up face, the vector 1 and the vector 3 are successfully matched in the face characteristic library, and the vector 5 fails to be matched, it is indicated that the face characteristic value of the head-up face of the identifier 1 is not stored in the face characteristic library, and because the quality score corresponding to the vector 5 is greater than or equal to the first threshold, the posture score corresponding to the vector 5 is greater than or equal to the second threshold, and the vector 5 is the face characteristic value with better quality, the vector 5 can be used as the face characteristic value of the head-up face of the identifier 1, that is, stored in the face characteristic library as the preset face characteristic value.
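The library-completion step of S213 enrolls, under the voted target identity, those query vectors whose library match belonged to a different identity. A minimal sketch of this step, using hypothetical names and a dictionary-of-lists library layout:

```python
def complete_library(feature_library, target_id, match_results):
    # match_results: list of (query_vector, matched_user_id) pairs.
    # Query vectors whose match belonged to a different identity are
    # enrolled under the voted target identity, filling in the
    # missing poses of that user (S213).
    for query_vec, matched_id in match_results:
        if matched_id != target_id:
            feature_library.setdefault(target_id, []).append(query_vec)
    return feature_library

# Example from S213: vector 5 matched identity 2, but the vote
# chose identity 1, so vector 5 is enrolled under identity 1.
library = {"identity_1": ["vector_2", "vector_4"],
           "identity_2": ["vector_6"]}
matches = [("vector_1", "identity_1"),
           ("vector_3", "identity_1"),
           ("vector_5", "identity_2")]
complete_library(library, "identity_1", matches)
print(library["identity_1"])
```

Note the sketch leaves the losing identity's stored features untouched, which matches the text: only the target user's missing poses are completed.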
S214, if the preset face characteristic value matched with the face characteristic value to be matched does not exist in the face characteristic library, storing the face characteristic value to be matched into the face characteristic library, and establishing user identity information related to the face characteristic value to be matched for the user.
Specifically, if there is no preset face feature value matching the face feature value to be matched, which indicates that the face feature value of the user corresponding to the face feature value to be matched is not stored in the face feature library, a new preset user identity may be established for the face feature value to be matched, the face feature value to be matched is stored in the face feature library based on the new preset user identity, and the new preset user identity and the face feature value to be matched are in a corresponding relationship in the face feature library. The new preset user identity is used for identifying identity information of the user.
For example: the two face images to be currently recognized in different poses are face images of a user B, the face images in the two different poses correspond to two face feature values to be matched respectively, the face feature value of the front face is vector 11, and the face feature value of the head-down face is vector 22. If there is no preset face feature value in the face feature library matched with vector 11, or no preset face feature value matched with vector 22, a new preset user identity, identity 33, can be generated, and vector 11 and vector 22 are stored in the face feature library as the preset face feature values of identity 33. The user B can be recognized as a stranger B, and identity 33 is the identity of stranger B in the face feature library.
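The stranger-registration branch of S214 can be sketched as follows; the identifier format, the counter, and the library layout are all illustrative assumptions rather than details of the disclosure:

```python
import itertools

_stranger_counter = itertools.count(1)

def register_stranger(feature_library, query_vectors):
    # No stored feature matched any query vector, so create a new
    # preset user identity and enroll every query vector under it
    # (S214).  The new identity is returned to the caller.
    new_id = f"stranger_{next(_stranger_counter)}"
    feature_library[new_id] = list(query_vectors)
    return new_id

# Example from the text: vectors 11 and 22 of user B match nothing,
# so a fresh identity is created for the stranger.
library = {}
new_id = register_stranger(library, ["vector_11", "vector_22"])
print(new_id, library[new_id])
```

A production system would use collision-free identifiers (e.g. UUIDs) rather than an in-process counter; the counter here only keeps the sketch self-contained.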
In some embodiments, the face feature library may be updated periodically, for example every 3 months, so that the face feature values of the users stored in the face feature library are refreshed. When face feature values in different poses are collected, only face feature values of face images whose extracted quality scores and pose scores both meet the threshold conditions are used. For example, for the user A, the face feature library stores face feature values of the user in five face poses; after 3 months, the face feature values of the user A in the five face poses can be extracted again. If the quality score of the left side face of the user A extracted this time is 1 and the pose score is 1, while the quality score of the left side face of the user A stored in the face feature library is 0.8 and the pose score is 1, the previous face feature value of the left side face may be replaced with the newly acquired one, and the data of the user A is updated in the pkl file; the previous face feature value of the left side face of the user A may then be cached in a designated file, and the data in that file may be cleared after a predetermined period of time. It should be understood that only one pose of one user is explained here, but other poses of the user, or all poses of other users, may be updated according to the same process. Storing the face feature values with higher quality or pose scores in the database ensures that the quality of the face feature values in the face feature library is higher, which can improve the accuracy of face recognition.
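The per-pose replacement rule described above can be sketched as a small comparison function. The dictionary layout and the "at least as good on both scores" criterion are assumptions consistent with, but not mandated by, the example in the text:

```python
def refresh_feature(stored, candidate):
    # stored / candidate describe one pose of one user, e.g.
    # {"vector": ..., "quality": ..., "pose": ...}.  The newer
    # capture replaces the stored one only when it scores at
    # least as well on both the quality and pose criteria.
    if (candidate["quality"] >= stored["quality"]
            and candidate["pose"] >= stored["pose"]):
        return candidate
    return stored

# Example from the text: the stored left-side face scores
# quality 0.8 / pose 1.0; the new capture scores 1.0 / 1.0
# and therefore replaces it.
stored = {"vector": "old_left_face", "quality": 0.8, "pose": 1.0}
candidate = {"vector": "new_left_face", "quality": 1.0, "pose": 1.0}
print(refresh_feature(stored, candidate)["vector"])
```

Caching the replaced feature value in a designated file before clearing it, as the text suggests, would sit outside this function in the surrounding update job.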
When the scheme of the embodiment of the application is executed, the human body activity track of a user in a detection area is obtained, and a face image set is extracted from the human body image set associated with the human body activity track. Each face image in the face image set is scored based on a face quality model to obtain score values, the target score values meeting the scoring condition are determined, and the face images corresponding to the target score values are taken as target face images. The face feature value to be matched corresponding to each target face image is determined, and the face feature values to be matched are matched with the preset face feature values in the face feature library. On one hand, if preset face feature values matched with the face feature values to be matched exist in the face feature library, the preset user identities associated with those preset face feature values are obtained. When the number of preset user identities is one, the preset face feature value is determined as the target face feature value, target identity information associated with the target face feature value is obtained based on the preset user identity, and the target identity information is taken as the identity information of the user. When the number of preset user identities is at least two, the matching number of preset face feature values matched with the face feature values to be matched is determined for each preset user identity, the target user identity with the largest matching number is determined, the preset face feature values corresponding to the target user identity are taken as the target face feature values, target identity information associated with the target face feature values is obtained based on the target user identity, the target identity information is taken as the identity information of the user, and the face feature values to be matched that are not matched with the target face feature values are stored in the face feature library as preset face feature values of the target user identity. On the other hand, if no preset face feature value matched with the face feature values to be matched exists in the face feature library, the face feature values to be matched are stored in the face feature library, and user identity information associated with them is established for the user. By collecting face images of a user in multiple poses, extracting the face feature value of the face image in each pose, and storing preset face feature values of the preset users in multiple poses in the face feature library, preset face feature values matched with the extracted feature values can be searched in the library. If all the matched preset face feature values belong to the same preset user, the user is recognized as that preset user; if the matched preset face feature values belong to several preset users, the identity information of the user is determined from among those preset users. The accuracy of face recognition in unconstrained scenes can thereby be improved.
Fig. 4 is a schematic structural diagram of a multi-pose face recognition apparatus according to an embodiment of the present disclosure. The multi-pose face recognition apparatus 400 may be implemented as all or a portion of a server, in software, hardware, or a combination of both. The apparatus 400 comprises:
a first obtaining module 410, configured to obtain a human body movement track of a user in a detection area;
a second extraction module 420, configured to extract a face image set from the human body image set associated with the human body activity trajectory;
the first determining module 430 is configured to determine a face feature value to be matched corresponding to each target face image in the at least one target face image when it is determined that at least one target face image meeting a preset rule exists in the face image set;
a second determining module 440, configured to, when it is determined that a target face feature value matching the face feature value to be matched exists in the face feature library, obtain target identity information associated with the target face feature value, and use the target identity information as identity information of the user.
Optionally, the first determining module 430 includes:
the first scoring unit is used for respectively performing quality scoring and posture scoring on each face image in the face image set to obtain a quality score and a posture score;
and the second scoring unit is used for taking the face image of which the quality score is greater than or equal to a first threshold value and the posture score is greater than or equal to a second threshold value as a target face image.
Optionally, the second determining module 440 includes:
the first matching unit is used for matching the face characteristic value to be matched with each preset face characteristic value in the face characteristic library;
and the second matching unit is used for taking the preset face characteristic value meeting the voting matching condition as a target face characteristic value matched with the face characteristic value to be matched.
Optionally, the second matching unit includes:
the third matching unit is used for determining a preset face characteristic value matched with the face characteristic value to be matched and acquiring a preset user identity associated with the preset face characteristic value;
and the fourth matching unit is used for determining the preset face characteristic value as a target face characteristic value when the number of the preset user identity marks is one.
Optionally, the second matching unit further includes:
a fifth matching unit, configured to determine, when the number of the preset user identities is at least two, a matching number of matching between a preset face feature value corresponding to each preset user identity and the face feature value to be matched;
and the sixth matching unit is used for determining the target user identity with the maximum matching number in each matching number and taking the preset face characteristic value corresponding to the target user identity as the target face characteristic value.
Optionally, the second matching unit further includes:
and the seventh matching unit is used for storing the face characteristic value to be matched, which is not matched with the target face characteristic value, into the face characteristic library as a preset face characteristic value of the target user identity.
Optionally, the apparatus 400 further includes:
and the identification module is used for storing the face characteristic value to be matched into the face characteristic library and establishing user identity information associated with the face characteristic value to be matched for the user when the face characteristic library does not have a preset face characteristic value matched with the face characteristic value to be matched.
When the scheme of the embodiment of the application is executed, the human body motion track of a user in a detection area is obtained, a human face image set is extracted from a human body image set associated with the human body motion track, when at least one target human face image which accords with a preset rule exists in the human face image set, a to-be-matched human face characteristic value corresponding to each target human face image in the at least one target human face image is determined, when a target human face characteristic value matched with the to-be-matched human face characteristic value exists in a human face characteristic library, target identity information associated with the target human face characteristic value is obtained, and the target identity information is used as identity information of the user. By the method, the face images of the same user in multiple postures are obtained, and the face characteristic values corresponding to the face images in the multiple postures are matched in the face characteristic library, so that the identity information of the user is determined, and the accuracy of face identification in an unlimited scene can be improved.
Referring to fig. 5, a block diagram of an electronic device according to an exemplary embodiment of the present application is shown. The electronic device in the present application may comprise one or more of the following components: a processor 110, a memory 120, an input device 130, an output device 140, and a bus 150. The processor 110, memory 120, input device 130, and output device 140 may be connected by a bus 150.
Processor 110 may include one or more processing cores. The processor 110 connects various parts within the overall electronic device using various interfaces and lines, and performs various functions of the electronic device 100 and processes data by running or executing instructions, programs, code sets, or instruction sets stored in the memory 120 and calling data stored in the memory 120. Optionally, the processor 110 may be implemented in hardware using at least one of Digital Signal Processing (DSP), Field-Programmable Gate Array (FPGA), and Programmable Logic Array (PLA). The processor 110 may integrate one or more of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a modem, and the like. The CPU mainly handles the operating system, user interface, application programs, and the like; the GPU is used for rendering and drawing display content; and the modem is used to handle wireless communications. It is understood that the modem may not be integrated into the processor 110, but may instead be implemented by a separate communication chip.
The memory 120 may include Random Access Memory (RAM) or Read-Only Memory (ROM). Optionally, the memory 120 includes a non-transitory computer-readable medium. The memory 120 may be used to store instructions, programs, code sets, or instruction sets. The memory 120 may include a program storage area and a data storage area, wherein the program storage area may store instructions for implementing an operating system, instructions for implementing at least one function (such as a touch function, a sound playing function, an image playing function, etc.), instructions for implementing the various method embodiments described below, and the like. The operating system may be an Android system (including systems developed in depth based on the Android system), an iOS system developed by Apple (including systems developed in depth based on the iOS system), or another system. The data storage area can also store data created by the electronic device in use, such as a phone book, audio and video data, chat record data, and the like.
Referring to fig. 6, the memory 120 may be divided into an operating system space, in which an operating system runs, and a user space, in which native and third-party applications run. In order to ensure that different third-party application programs can achieve a better operation effect, the operating system allocates corresponding system resources for the different third-party application programs. However, the requirements of different application scenarios in the same third-party application program on system resources are different, for example, in a local resource loading scenario, the third-party application program has a higher requirement on the disk reading speed; in the animation rendering scene, the third-party application program has a high requirement on the performance of the GPU. The operating system and the third-party application program are independent from each other, and the operating system cannot sense the current application scene of the third-party application program in time, so that the operating system cannot perform targeted system resource adaptation according to the specific application scene of the third-party application program.
In order to enable the operating system to distinguish a specific application scenario of the third-party application program, data communication between the third-party application program and the operating system needs to be opened, so that the operating system can acquire current scenario information of the third-party application program at any time, and further perform targeted system resource adaptation based on the current scenario.
Taking an operating system as the Android system as an example, programs and data stored in the memory 120 are as shown in fig. 7. The memory 120 may store a Linux kernel layer 320, a system runtime library layer 340, an application framework layer 360, and an application layer 380, where the Linux kernel layer 320, the system runtime library layer 340, and the application framework layer 360 belong to the operating system space, and the application layer 380 belongs to the user space. The Linux kernel layer 320 provides underlying drivers for various hardware of the electronic device, such as a display driver, an audio driver, a camera driver, a Bluetooth driver, a Wi-Fi driver, power management, and the like. The system runtime library layer 340 provides the main feature support for the Android system through a number of C/C++ libraries. For example, the SQLite library provides database support, the OpenGL/ES library provides 3D drawing support, the Webkit library provides browser-kernel support, and the like. Also provided in the system runtime library layer 340 is the Android runtime library (Android runtime), which mainly provides core libraries that allow developers to write Android applications using the Java language. The application framework layer 360 provides various APIs that may be used in building applications, such as activity management, window management, view management, notification management, content providers, package management, session management, resource management, and location management; developers can build their own applications by using these APIs.
At least one application program runs in the application layer 380, and the application programs may be native application programs carried by the operating system, such as a contact program, a short message program, a clock program, a camera application, and the like; or a third-party application developed by a third-party developer, such as a game application, an instant messaging program, a photo beautification program, a file processing program, and the like.
Taking an operating system as the iOS system as an example, programs and data stored in the memory 120 are shown in fig. 8. The iOS system includes: a core operating system layer 420 (Core OS Layer), a core services layer 440 (Core Services Layer), a media layer 460 (Media Layer), and a touchable layer 480 (Cocoa Touch Layer). The core operating system layer 420 includes the operating system kernel, drivers, and underlying program frameworks that provide functionality closer to the hardware for use by the program frameworks in the core services layer 440. The core services layer 440 provides the system services and/or program frameworks required by applications, such as a Foundation framework, an account framework, an advertising framework, a data storage framework, a network connection framework, a geographic location framework, a motion framework, and so forth. The media layer 460 provides audiovisual-related interfaces for applications, such as graphics and image related interfaces, audio technology related interfaces, video technology related interfaces, audio and video transmission technology wireless playback (AirPlay) interfaces, and the like. The touchable layer 480 provides various common interface-related frameworks for application development and is responsible for user touch interaction operations on the electronic device, such as a local notification service, a remote push service, an advertising framework, a game tool framework, a messaging User Interface (UI) framework, a UIKit framework, a map framework, and so forth.
In the framework illustrated in FIG. 8, the framework associated with most applications includes, but is not limited to: a base framework in the core services layer 440 and a UIKit framework in the touchable layer 480. The base framework provides many basic object classes and data types, provides the most basic system services for all applications, and is UI independent. While the class provided by the UIKit framework is a basic library of UI classes for creating touch-based user interfaces, iOS applications can provide UIs based on the UIKit framework, so it provides an infrastructure for applications for building user interfaces, drawing, processing and user interaction events, responding to gestures, and the like.
For the manner and principle of implementing data communication between a third-party application program and the operating system in the iOS system, reference may be made to the Android system, and details are not repeated herein.
The input device 130 is used for receiving input instructions or data, and the input device 130 includes, but is not limited to, a keyboard, a mouse, a camera, a microphone, or a touch device. The output device 140 is used for outputting instructions or data, and the output device 140 includes, but is not limited to, a display device, a speaker, and the like. In one example, the input device 130 and the output device 140 may be combined, and the input device 130 and the output device 140 are touch display screens for receiving touch operations of a user on or near the touch display screens by using any suitable object such as a finger, a touch pen, and the like, and displaying user interfaces of various applications. Touch displays are typically provided on the front panel of an electronic device. The touch display screen may be designed as a full-face screen, a curved screen, or a profiled screen. The touch display screen can also be designed to be a combination of a full-face screen and a curved-face screen, and a combination of a special-shaped screen and a curved-face screen, which is not limited in the embodiment of the present application.
In addition, those skilled in the art will appreciate that the configurations of the electronic devices illustrated in the above-described figures do not constitute limitations on the electronic devices, which may include more or fewer components than illustrated, or some components may be combined, or a different arrangement of components. For example, the electronic device further includes a radio frequency circuit, an input unit, a sensor, an audio circuit, a wireless fidelity (WiFi) module, a power supply, a bluetooth module, and other components, which are not described herein again.
In the embodiment of the present application, the main body of execution of each step may be the electronic device described above. Optionally, the execution subject of each step is an operating system of the electronic device. The operating system may be an android system, an IOS system, or another operating system, which is not limited in this embodiment of the present application.
The electronic device of the embodiment of the application can also be provided with a display device, and the display device can be any of various devices capable of realizing a display function, for example: a cathode ray tube display (CRT), a light-emitting diode display (LED), an electronic ink panel, a Liquid Crystal Display (LCD), a Plasma Display Panel (PDP), and the like. A user may utilize a display device on the electronic device 101 to view displayed information such as text, images, video, and the like. The electronic device may be a smartphone, a tablet computer, a gaming device, an AR (Augmented Reality) device, an automobile, a data storage device, an audio playback device, a video playback device, a notebook, a desktop computing device, or a wearable device such as an electronic watch, electronic glasses, an electronic helmet, an electronic bracelet, an electronic necklace, or electronic clothing.
In the electronic device shown in fig. 5, which may be a terminal, the processor 110 may be configured to call the multi-pose face recognition program stored in the memory 120 and specifically perform the following operations:
acquiring a human body activity track of a user in a detection area;
extracting a human face image set from the human body image set associated with the human body activity track;
when it is determined that at least one target face image meeting a preset rule exists in the face image set, determining a face characteristic value to be matched corresponding to each of the at least one target face image;
and when determining that a target face characteristic value matched with the face characteristic value to be matched exists in a face characteristic library, acquiring target identity information associated with the target face characteristic value, and taking the target identity information as the identity information of the user.
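As a non-authoritative sketch, the four operations above can be chained as follows. Every name here is an assumption for illustration: the callable parameters (`extract_face`, `score_quality`, `score_pose`, `embed`, `match`) stand in for the detection, scoring, embedding, and matching models that the embodiments leave unspecified, and the 0.8/0.5 values are placeholders for the first and second thresholds.

```python
from typing import Callable, List, Optional

def recognize_user(track_images: List[object],
                   feature_library: dict,
                   extract_face: Callable,
                   score_quality: Callable,
                   score_pose: Callable,
                   embed: Callable,
                   match: Callable) -> Optional[str]:
    """Collect faces along the activity track, filter by the preset rule,
    compute characteristic values, and match them against the library."""
    # Extract a face image (or None) from each body image on the track.
    faces = [face for img in track_images
             if (face := extract_face(img)) is not None]
    # Keep target faces that satisfy the preset quality/pose rule.
    targets = [f for f in faces
               if score_quality(f) >= 0.8 and score_pose(f) >= 0.5]
    if not targets:
        return None
    # Compute the characteristic value to be matched for each target face,
    # then look the set up in the face characteristic library.
    features = [embed(f) for f in targets]
    return match(features, feature_library)
```

Because the components are injected, the same control flow works whether the scoring and matching are classical models or neural networks.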
In one embodiment, when the processor 110 determines that at least one target face image meeting a preset rule exists in the face image set, the following operations are specifically performed:
respectively performing quality scoring and pose scoring on each face image in the face image set to obtain a quality score and a pose score;
and taking a face image whose quality score is greater than or equal to a first threshold and whose pose score is greater than or equal to a second threshold as a target face image.
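A minimal sketch of this filtering rule; the scoring callables and the default threshold values are hypothetical placeholders, not values taken from the embodiments.

```python
def select_target_faces(face_images, score_quality, score_pose,
                        first_threshold=0.8, second_threshold=0.5):
    """Keep only face images whose quality score is >= the first
    threshold and whose pose score is >= the second threshold."""
    return [face for face in face_images
            if score_quality(face) >= first_threshold
            and score_pose(face) >= second_threshold]
```

A face must clear both thresholds: a sharp frontal face passes, while a sharp but extreme-profile face, or a frontal but blurred face, is discarded.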
In an embodiment, the face characteristic library includes preset face characteristic values of a plurality of preset users, each preset user corresponding to at least one preset face characteristic value; when determining that a target face characteristic value matching the face characteristic value to be matched exists in the face characteristic library, the processor 110 specifically performs the following operations:
matching the face characteristic value to be matched with each preset face characteristic value in the face characteristic library;
and taking the preset face characteristic value meeting the voting matching condition as a target face characteristic value matched with the face characteristic value to be matched.
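One plausible realization of this matching step, under the assumption that the characteristic values are embedding vectors compared by cosine similarity; the similarity metric and the 0.6 threshold are illustrative choices, not part of the disclosure.

```python
import numpy as np

def match_against_library(feature_to_match, face_library, threshold=0.6):
    """Compare one characteristic value to be matched against every preset
    characteristic value; return (user_id, preset_value) pairs that match."""
    hits = []
    query = feature_to_match / np.linalg.norm(feature_to_match)
    for user_id, preset_values in face_library.items():
        for preset in preset_values:
            # Cosine similarity between unit-normalised embeddings.
            similarity = float(query @ (preset / np.linalg.norm(preset)))
            if similarity >= threshold:
                hits.append((user_id, preset))
    return hits
```

Each characteristic value to be matched can produce several hits across users; the voting condition described below then resolves them to a single target identity.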
In one embodiment, when taking the preset face characteristic value satisfying the voting matching condition as the target face characteristic value matched with the face characteristic value to be matched, the processor 110 specifically performs the following operations:
determining a preset face characteristic value matched with a face characteristic value to be matched, and acquiring a preset user identity associated with the preset face characteristic value;
and when the number of the preset user identities is one, determining the preset face characteristic value as the target face characteristic value.
In one embodiment, the processor 110 further specifically performs the following operations:
when the number of the preset user identities is at least two, determining, for each preset user identity, the number of matches between its corresponding preset face characteristic values and the face characteristic values to be matched;
and determining the target user identity with the largest number of matches, and taking the preset face characteristic value corresponding to the target user identity as the target face characteristic value.
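The voting rule of this and the preceding embodiment can be sketched as follows, under the assumption that the matching step yields (user identity, preset characteristic value) pairs:

```python
from collections import Counter

def vote_target_identity(matched_pairs):
    """matched_pairs: (user_id, preset_value) tuples from the matching step.
    If only one user identity appears, it wins outright; otherwise the
    user identity with the most matched preset values is chosen."""
    if not matched_pairs:
        return None
    counts = Counter(user_id for user_id, _ in matched_pairs)
    # most_common(1) covers both cases: a single identity, or the
    # identity with the maximum match count among several.
    return counts.most_common(1)[0][0]
```

Voting across the multi-pose target faces is what makes a few spurious single-image matches harmless: the correct identity only needs to win the count, not every comparison.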
In one embodiment, the processor 110 further specifically performs the following operations:
and storing the face characteristic value to be matched, which is not matched with the target face characteristic value, into the face characteristic library as a preset face characteristic value of the target user identity.
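A sketch of this library-augmentation step, assuming the library is a dictionary mapping user identities to lists of preset characteristic values:

```python
def store_unmatched_features(face_library, target_user_id, unmatched_features):
    """Append characteristic values that found no counterpart in the library
    as new preset values of the recognized user, so that later captures of
    the same pose can match directly."""
    face_library.setdefault(target_user_id, []).extend(unmatched_features)
```

This is how the library accumulates multi-pose coverage over time: each recognition pass can contribute poses the library had not yet seen for that user.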
In one embodiment, the processor 110 further specifically performs the following operations:
and when determining that the preset face characteristic value matched with the face characteristic value to be matched does not exist in the face characteristic library, storing the face characteristic value to be matched into the face characteristic library, and establishing user identity information associated with the face characteristic value to be matched for the user.
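A sketch of this enrollment path, assuming the same dictionary-shaped library; the uuid-based identity is an illustrative choice, not the identity scheme of the disclosure.

```python
import uuid

def enroll_new_user(face_library, features_to_match):
    """When nothing in the library matched, register all characteristic
    values to be matched under a newly created user identity."""
    new_user_id = str(uuid.uuid4())  # illustrative identity scheme
    face_library[new_user_id] = list(features_to_match)
    return new_user_id
```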
It is clear to a person skilled in the art that the solutions of the present application may be implemented by means of software and/or hardware. The terms "unit" and "module" in this specification refer to software and/or hardware that can perform a specific function independently or in cooperation with other components, where the hardware may be, for example, a Field-Programmable Gate Array (FPGA), an Integrated Circuit (IC), or the like.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present application is not limited by the order of acts described, as some steps may occur in other orders or concurrently depending on the application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required in this application.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative; the division of the units is only one kind of logical function division, and other divisions are possible in actual implementation: a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not implemented. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through some service interfaces, devices, or units, and may be in electrical or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable memory. Based on such understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a memory and including several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned memory includes various media capable of storing program code, such as a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by a program stored in a computer-readable memory, and the memory may include: a flash disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, and the like.
The above description is only an exemplary embodiment of the present disclosure, and the scope of the present disclosure should not be limited thereby. That is, all equivalent changes and modifications made in accordance with the teachings of the present disclosure are intended to be included within the scope of the present disclosure. Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.

Claims (10)

1. A multi-pose face recognition method, the method comprising:
acquiring a human body activity track of a user in a detection area;
extracting a human face image set from the human body image set associated with the human body activity track;
when at least one target face image which accords with a preset rule exists in the face image set, determining a face characteristic value to be matched corresponding to each target face image in the at least one target face image;
and when determining that a target face characteristic value matched with the face characteristic value to be matched exists in a face characteristic library, acquiring target identity information associated with the target face characteristic value, and taking the target identity information as the identity information of the user.
2. The method according to claim 1, wherein the determining that at least one target face image meeting a preset rule exists in the face image set comprises:
respectively carrying out quality grading and posture grading on each face image in the face image set to obtain a quality score and a posture score;
and taking the face image with the quality score being larger than or equal to a first threshold value and the pose score being larger than or equal to a second threshold value as a target face image.
3. The method according to claim 1 or 2, wherein the face feature library includes preset face feature values of a plurality of preset users, each preset user corresponds to at least one preset face feature value, and the determining that the target face feature value matching the face feature value to be matched exists in the face feature library includes:
matching the face characteristic value to be matched with each preset face characteristic value in the face characteristic library one by one;
and taking the preset face characteristic value meeting the voting matching condition as a target face characteristic value matched with the face characteristic value to be matched.
4. The method according to claim 3, wherein the using the preset face feature value satisfying the voting matching condition as the target face feature value matched with the face feature value to be matched comprises:
determining a preset face characteristic value matched with the face characteristic value to be matched, and acquiring a preset user identity associated with the preset face characteristic value;
and when the number of the preset user identities is one, determining the preset face characteristic value as the target face characteristic value.
5. The method of claim 4, further comprising:
when the number of the preset user identities is at least two, determining, for each preset user identity, the number of matches between its corresponding preset face characteristic values and the face characteristic values to be matched;
and determining the target user identity with the maximum matching quantity in each matching quantity, and taking a preset face characteristic value corresponding to the target user identity as a target face characteristic value.
6. The method of claim 5, further comprising:
and storing the face characteristic value to be matched, which is not matched with the target face characteristic value, into the face characteristic library as a preset face characteristic value of the target user identity.
7. The method of claim 1, further comprising:
and when determining that the preset face characteristic value matched with the face characteristic value to be matched does not exist in the face characteristic library, storing the face characteristic value to be matched into the face characteristic library, and establishing user identity information associated with the face characteristic value to be matched for the user.
8. A multi-pose face recognition apparatus, the apparatus comprising:
the first acquisition module is used for acquiring a human body activity track of a user in a detection area;
the second extraction module is used for extracting a human face image set from the human body image set associated with the human body activity track;
the first determining module is used for determining a face feature value to be matched corresponding to each target face image in at least one target face image when at least one target face image which meets a preset rule is determined to exist in the face image set;
and the second determining module is used for acquiring target identity information associated with the target face characteristic value when the target face characteristic value matched with the face characteristic value to be matched exists in the face characteristic library, and taking the target identity information as the identity information of the user.
9. A computer storage medium, characterized in that it stores a plurality of instructions adapted to be loaded by a processor and to carry out the method steps according to any one of claims 1 to 7.
10. An electronic device, comprising: a processor and a memory; wherein the memory stores a computer program adapted to be loaded by the processor and to perform the method steps of any of claims 1 to 7.
CN202110775388.2A 2021-07-08 2021-07-08 Multi-pose face recognition method and device, storage medium and electronic equipment Pending CN113361486A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110775388.2A CN113361486A (en) 2021-07-08 2021-07-08 Multi-pose face recognition method and device, storage medium and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110775388.2A CN113361486A (en) 2021-07-08 2021-07-08 Multi-pose face recognition method and device, storage medium and electronic equipment

Publications (1)

Publication Number Publication Date
CN113361486A true CN113361486A (en) 2021-09-07

Family

ID=77538744

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110775388.2A Pending CN113361486A (en) 2021-07-08 2021-07-08 Multi-pose face recognition method and device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN113361486A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115471932A (en) * 2022-08-17 2022-12-13 宁波美喵科技有限公司 Method, device, equipment and storage medium for unlocking shared bicycle


Similar Documents

Publication Publication Date Title
JP6681342B2 (en) Behavioral event measurement system and related method
CN109635680B (en) Multitask attribute identification method and device, electronic equipment and storage medium
CN109474850B (en) Motion pixel video special effect adding method and device, terminal equipment and storage medium
CN110059624B (en) Method and apparatus for detecting living body
WO2021023047A1 (en) Facial image processing method and device, terminal, and storage medium
CN112839223B (en) Image compression method, image compression device, storage medium and electronic equipment
WO2021104097A1 (en) Meme generation method and apparatus, and terminal device
US20140232748A1 (en) Device, method and computer readable recording medium for operating the same
CN111198724A (en) Application program starting method and device, storage medium and terminal
CN111767554A (en) Screen sharing method and device, storage medium and electronic equipment
WO2023030176A1 (en) Video processing method and apparatus, computer-readable storage medium, and computer device
WO2023030177A1 (en) Video processing method and apparatus, computer readable storage medium, and computer device
CN111176533A (en) Wallpaper switching method, device, storage medium and terminal
WO2022262719A1 (en) Live streaming processing method and apparatus, storage medium, and electronic device
CN111866372A (en) Self-photographing method, device, storage medium and terminal
CN111127469A (en) Thumbnail display method, device, storage medium and terminal
US20220101355A1 (en) Determining lifetime values of users in a messaging system
CN113361486A (en) Multi-pose face recognition method and device, storage medium and electronic equipment
CN112991555B (en) Data display method, device, equipment and storage medium
CN108845733A (en) Screenshot method, device, terminal and storage medium
CN111275012A (en) Advertisement screen passenger flow volume statistical system and method based on face recognition
CN111274476A (en) Room source matching method, device and equipment based on face recognition and storage medium
CN111107264A (en) Image processing method, image processing device, storage medium and terminal
CN110636363A (en) Multimedia information playing method and device, electronic equipment and storage medium
KR20240036715A (en) Evolution of topics in messaging systems

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination