CN113536947A - Face attribute analysis method and device - Google Patents



Publication number
CN113536947A
Authority
CN
China
Prior art keywords
attribute
face
analysis
branch
attributes
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110687994.9A
Other languages
Chinese (zh)
Inventor
冯子钜
毛永雄
叶润源
Current Assignee
Zhongshan Xidao Technology Co ltd
Original Assignee
Zhongshan Xidao Technology Co ltd
Application filed by Zhongshan Xidao Technology Co., Ltd.
Priority to CN202110687994.9A
Publication of CN113536947A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Abstract

The application discloses a face attribute analysis method and device. The method comprises: acquiring a face image, and performing condition detection and face attribute analysis on it to obtain a condition detection result and face attributes; and, if the condition detection result satisfies the preset detection condition corresponding to a face attribute and the confidence of that attribute reaches a confidence threshold, determining an attribute analysis result according to the face attribute. The method thus accounts for the influence of image quality on the analysis of different attributes and, by screening attributes, ensures that the retained face attributes satisfy reliable detection conditions and reach a high confidence, optimizing the attribute analysis result and greatly improving the accuracy of face attribute analysis.

Description

Face attribute analysis method and device
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method and an apparatus for analyzing a face attribute.
Background
The face attribute is an important biological feature, and contains a large amount of attribute information convenient for face recognition, such as gender, race, age and the like. Therefore, how to detect the attribute information is one of the research hotspots in the field of face recognition.
The existing face attribute analysis method mainly acquires face video information and then performs attribute analysis on each frame of face image in the face video information to obtain a face attribute analysis result. In practice, it is found that under complex shooting conditions (such as crowd congestion, face occlusion, and the like), a face image with low image quality is easily acquired, and the accuracy of a face attribute analysis result is further influenced.
Disclosure of Invention
The present application is directed to solving at least one of the problems in the prior art. Therefore, the application provides a method and a device for analyzing the face attribute, which can improve the accuracy of the face attribute analysis.
According to the embodiment of the first aspect of the application, a face attribute analysis method comprises the following steps:
acquiring a face image; performing condition detection and face attribute analysis on the face image to obtain a condition detection result and face attributes; and if the condition detection result satisfies the preset detection condition corresponding to the face attribute and the confidence of the face attribute reaches a confidence threshold, determining an attribute analysis result according to the face attribute.
The face attribute analysis method according to the embodiment of the application has at least the following beneficial effects:
in the embodiment of the application, a condition detection result and face attributes are obtained by performing condition detection and face attribute analysis on the acquired face image. If the condition detection result satisfies the preset detection condition corresponding to a face attribute, the face image meets the conditions for accurately detecting that attribute; on this basis, if the confidence of the face attribute also reaches a confidence threshold, the face attribute analyzed from the face image is credible, and the attribute analysis result can then be determined according to the face attribute. The method thus accounts for the influence of image quality on the analysis of different attributes and, by screening attributes, ensures that the retained face attributes satisfy reliable detection conditions and reach a high confidence, optimizing the attribute analysis result and greatly improving the accuracy of face attribute analysis.
According to some embodiments of the present application, the performing condition detection and face attribute analysis on the face image to obtain a condition detection result and a face attribute includes:
inputting the face image into a pre-constructed face analysis model for condition detection and face attribute analysis, wherein the face analysis model comprises a backbone network, a condition detection branch and an attribute analysis branch; and obtaining a condition detection result output through the condition detection branch and a face attribute output through the attribute analysis branch.
According to some embodiments of the present application, the condition detection branches include at least an image quality detection branch, a large-angle face detection branch, a face key point detection branch, and a facial-feature semantic segmentation branch; the attribute analysis branches include at least a gender attribute analysis branch, an age attribute analysis branch, a posture attribute analysis branch, and an accessory attribute analysis branch.
According to some embodiments of the present application, the face attribute is a numerical attribute, and the face analysis model further includes numerical classification branches corresponding to different numerical attribute categories; after the face image is input into a pre-constructed face analysis model for condition detection and face attribute analysis, the method further comprises the following steps:
and obtaining the confidence coefficient of the face attribute through the numerical classification branch.
According to some embodiments of the present application, the obtaining the confidence level of the face attribute through the numerical classification branch includes:
acquiring a numerical range and a plurality of classification ranges divided into the numerical range according to the numerical attribute category of the face attribute through the numerical classification branch; determining a target classification range from a plurality of classification ranges according to the face attribute; the target classification range comprises a classification range corresponding to the face attribute and/or M classification ranges adjacent to the classification range corresponding to the face attribute, wherein M is a positive integer; and determining the confidence coefficient of the face attribute according to the confidence coefficient of the target classification range.
According to some embodiments of the present application, determining an attribute analysis result according to the face attribute includes:
determining a statistical queue corresponding to the face attribute; adding the face attribute to the statistical queue; and calculating according to the confidence degrees corresponding to the attributes in the statistical queue to obtain an attribute analysis result.
According to some embodiments of the present application, the adding the face attributes to the statistical queue includes:
if the statistical queue is full, acquiring a target attribute in the statistical queue that satisfies a deletion condition; wherein the deletion condition includes: the target attribute is the attribute with the lowest confidence in the statistical queue, and/or the target attribute is the attribute that was added to the statistical queue earliest;
if the confidence of the face attribute is greater than or equal to the confidence of the target attribute, deleting the target attribute from the statistical queue to obtain an updated statistical queue, and adding the face attribute to the updated statistical queue.
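The queue-update rule in the two paragraphs above can be sketched as follows (class and field names are assumptions; ties between equally low confidences are broken by evicting the earliest-added entry, combining the two deletion conditions):

```python
import itertools

class StatQueue:
    """Fixed-capacity statistical queue for one face attribute.
    Illustrative sketch; names and the tie-break rule are assumptions."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.items = []                # entries: (value, confidence, order)
        self._order = itertools.count()

    def add(self, value, confidence):
        """Add an observation; returns False if it was discarded instead."""
        if len(self.items) >= self.capacity:
            # Deletion condition: lowest confidence; earliest-added on ties.
            target = min(self.items, key=lambda e: (e[1], e[2]))
            if confidence < target[1]:
                return False           # new attribute is less credible: drop it
            self.items.remove(target)  # evict target, giving the updated queue
        self.items.append((value, confidence, next(self._order)))
        return True
```

In this sketch a new attribute displaces the weakest queue entry only when its own confidence is at least as high, so the queue's overall credibility never decreases.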
According to some embodiments of the present application, the face attribute is a numerical attribute; the calculating according to the confidence degrees corresponding to the attributes in the statistical queue to obtain an attribute analysis result includes:
calculating the normalized weight of each attribute in the statistical queue according to the confidence or the squared confidence corresponding to each attribute; and performing a weighted average over the values of the attributes using these normalized weights to obtain the attribute analysis result.
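A sketch of the confidence-weighted average described above (the `square` flag selects the squared-confidence variant mentioned in the text; function and parameter names are ours):

```python
def weighted_attribute(values, confidences, square=False):
    """Confidence-weighted average of a numerical attribute over the queue.
    Weights are normalized confidences (or squared confidences)."""
    w = [c * c for c in confidences] if square else list(confidences)
    total = sum(w)
    weights = [x / total for x in w]          # normalized weights sum to 1
    return sum(wi * v for wi, v in zip(weights, values))
```

For example, ages 20 and 40 held with confidences 1.0 and 3.0 average to 35, since the more credible observation carries three times the weight.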
According to some embodiments of the present application, the face attribute is a classification attribute; the calculating according to the confidence degrees corresponding to the attributes in the statistical queue to obtain an attribute analysis result includes:
classifying and counting the attributes belonging to the same attribute category in the statistical queue to obtain the number of attributes corresponding to each attribute category; determining the attribute type with the maximum number of attributes as an attribute analysis result according to the number of attributes corresponding to each attribute type;
or, accumulating the confidence or squared confidence of the attributes belonging to the same attribute category in the statistical queue to obtain a cumulative value for each attribute category, and determining the attribute category with the largest cumulative value as the attribute analysis result.
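Both voting schemes for classification attributes can be sketched as follows (a minimal illustration; function names are ours):

```python
from collections import Counter, defaultdict

def vote_by_count(categories):
    """Majority vote: the category appearing most often in the queue wins."""
    return Counter(categories).most_common(1)[0][0]

def vote_by_confidence(categories, confidences, square=False):
    """Accumulate confidence (or squared confidence) per category; max wins."""
    acc = defaultdict(float)
    for cat, conf in zip(categories, confidences):
        acc[cat] += conf * conf if square else conf
    return max(acc, key=acc.get)
```

Note the two schemes can disagree: a category with fewer but more confident entries can win the confidence-accumulation vote while losing the count vote.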
According to the second aspect of the application, the apparatus for analyzing the face attribute comprises:
the acquisition module is used for acquiring a face image;
the detection analysis module is used for carrying out condition detection and face attribute analysis on the face image to obtain a condition detection result and face attributes;
and the determining module is used for determining an attribute analysis result according to the face attribute when the condition detection result meets the preset detection condition corresponding to the face attribute and the confidence coefficient of the face attribute reaches a confidence coefficient threshold value.
According to the embodiment of the third aspect of the application, the human face attribute analysis device comprises:
one or more memories; one or more processors configured to execute one or more computer programs stored in the one or more memories, and further configured to perform a method as described in embodiments of the first aspect of the present application.
A computer-readable storage medium according to an embodiment of the fourth aspect of the present application includes instructions that, when executed on a computer, cause the computer to perform the method according to the embodiment of the first aspect of the present application.
A computer program product according to an embodiment of the fifth aspect of the present application contains instructions that, when executed on a computer, cause the computer to perform the method according to an embodiment of the first aspect of the present application.
Additional aspects and advantages of the present application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the present application.
Drawings
The above and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
fig. 1 is a schematic flow chart of a face attribute analysis method disclosed in an embodiment of the present application;
fig. 2 is a schematic diagram of an embodiment of performing condition detection and face attribute analysis on a face image in the embodiment of the present application;
fig. 3 is a schematic structural diagram of a face analysis model in an embodiment of the present application;
fig. 4 is a schematic flow chart of another face attribute analysis method disclosed in the embodiment of the present application;
fig. 5 is a schematic structural diagram of a face attribute analysis apparatus disclosed in an embodiment of the present application;
fig. 6 is a schematic structural diagram of another face attribute analysis device disclosed in the embodiment of the present application.
Detailed Description
Reference will now be made in detail to embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are exemplary only for the purpose of explaining the present application and are not to be construed as limiting the present application.
In the description of the present application, "several" means one or more and "a plurality" means two or more; terms such as "greater than", "less than" and "exceeding" are understood to exclude the stated number, while terms such as "above", "below" and "within" are understood to include it. Where "first" and "second" are used to distinguish technical features, they are not to be understood as indicating or implying relative importance, implicitly indicating the number of technical features indicated, or implicitly indicating the precedence of those features.
The embodiment of the application discloses a face attribute analysis method and device, which can improve the accuracy of face attribute analysis. The method is applicable to a terminal having a computing processing function, and more specifically, to software or a software module installed on the terminal, where the terminal may include a mobile phone, a tablet computer, a notebook computer, a Personal Computer (PC), a mobile internet device, a server, and the like, and the embodiment of the present application is not particularly limited. The following detailed description is made with reference to the accompanying drawings.
Referring to fig. 1, fig. 1 is a schematic flow chart of a face attribute analysis method disclosed in an embodiment of the present application.
110. Acquire a face image.
In the embodiment of the application, the face video information can be acquired by calling a shooting device to record video or by establishing a data communication connection with another terminal for video acquisition, without specific limitation. On this basis, the face video information is decomposed frame by frame to obtain multiple frames of face images, and the face attribute analysis method shown in steps 110 to 140 is performed on each frame of face image.
120. Perform condition detection and face attribute analysis on the face image to obtain a condition detection result and face attributes.
In this embodiment of the present application, performing condition detection on a face image may refer to detecting the image quality of the face image, the face size in the face image, the face key points, the semantic segmentation sizes of the facial features, and other detection condition types, without specific limitation. The condition detection result can be used to judge whether the face attributes analyzed from the face image are reliable. The number of face attributes may be one or more, and their attribute types may be divided into numerical attributes and classification attributes: numerical attributes may include, but are not limited to, age attributes and posture attributes (such as head orientation angle), and classification attributes may include, but are not limited to, gender attributes, accessory attributes (such as whether glasses are worn), expression attributes, face attributes, and the like. The number and attribute types of the face attributes are not specifically limited.
130. If the condition detection result satisfies the preset detection condition corresponding to the face attribute and the confidence of the face attribute reaches a confidence threshold, determine an attribute analysis result according to the face attribute.
In this embodiment of the present application, the preset detection condition corresponding to the face attribute may refer to detection condition thresholds preset for different detection condition types according to the attribute type of the face attribute. By comparing the actual result of each detection condition type in the condition detection result with its corresponding detection condition threshold, it can be judged whether the condition detection result satisfies the preset detection condition corresponding to the face attribute. For example, for the detection condition type of image quality grade, assume that image quality is divided into 4 grades and the corresponding detection condition threshold is grade 3; when the condition detection result indicates that the image quality is grade 2 (< grade 3), the condition detection result does not satisfy the preset detection condition.
In the embodiment of the present application, optionally, if all the detection results related to the face attribute in the condition detection result satisfy their corresponding detection condition thresholds, it is determined that the condition detection result satisfies the preset detection condition; otherwise, if it is determined that the condition detection result does not satisfy the preset detection condition, the face attribute can be filtered out and no subsequent processing is performed.
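As an illustrative sketch (not the patent's implementation; the condition names and the "greater-or-equal" comparison direction are assumptions), the threshold comparison described above might look like:

```python
def satisfies_conditions(detections, thresholds):
    """Return True only if every detection result related to the face attribute
    meets its detection-condition threshold; otherwise the attribute is filtered.
    Assumes higher detection values are better for every condition type."""
    return all(detections[name] >= limit for name, limit in thresholds.items())
```

With an image-quality threshold of grade 3, a grade-2 image fails the check, matching the example above.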
In this embodiment of the application, the confidence threshold is used to determine whether the attribute of the face obtained from the analysis of the face image is credible, and the confidence threshold may be a preset (or artificially set) threshold, such as 80%, 85%, or 90%, which is not specifically limited.
In the embodiment of the application, in another case, if the confidence of the face attribute does not reach the confidence threshold, the face attribute is judged to be unreliable, so that the face attribute can be filtered, the attribute analysis result is not determined according to the face attribute, and the screening of the unreliable attribute is realized.
Therefore, by implementing the method embodiment, the influence of the detection condition in the face image and the confidence coefficient of the face attribute on the analysis of different attributes is considered, and the screened face attribute is ensured to meet the reliable detection condition and reach high confidence coefficient through screening the attribute, so that the attribute analysis result is optimized, and the accuracy of the face attribute analysis is greatly improved.
Referring to fig. 2, fig. 2 is a schematic diagram illustrating an embodiment of performing condition detection and face attribute analysis on a face image according to an embodiment of the present application. As shown in fig. 2, step 120 may specifically be:
121. Input the face image into a pre-constructed face analysis model for condition detection and face attribute analysis.
In the embodiment of the present application, the face analysis model may be a pre-constructed neural network model. Specifically, the face analysis model may include an integrated backbone network, condition detection branches, and attribute analysis branches. The backbone network is used to extract features from the face image, the condition detection branches are used for condition detection, and the attribute analysis branches are used for face attribute analysis.
In the embodiment of the present application, the backbone network may adopt any of various deep learning backbone networks, such as a ResNet18 network or a MobileNetV1 network, without specific limitation. The condition detection branch and the attribute analysis branch may each include a plurality of task branches implementing different functions, and each task branch may be constructed using a network structure of convolutional layers, activation layers, normalization layers and fully connected layers, or by using neural architecture search (NAS) technology, without specific limitation.
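The shared-backbone, multi-branch structure can be sketched framework-agnostically as follows (pure Python stand-ins for illustration; in practice the backbone and branches would be neural network modules):

```python
def run_model(image, backbone, condition_branches, attribute_branches):
    """One forward pass: the backbone's features are computed once and
    shared by every condition-detection and attribute-analysis task branch."""
    features = backbone(image)                 # shared feature extraction
    conditions = {name: branch(features) for name, branch in condition_branches.items()}
    attributes = {name: branch(features) for name, branch in attribute_branches.items()}
    return conditions, attributes
```

Because the (expensive) backbone runs once per image regardless of how many task branches are attached, adding a branch costs only that branch's own computation.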
Optionally, before step 120, a large number of face sample images may be used as input data, and the condition detection labels and the face attribute labels labeled from each of the face sample images are used as reference data to train and learn the face analysis model, so as to improve the multitask learning capability of the face analysis model and improve the accuracy of condition detection and face attribute analysis.
122. Obtain the condition detection result output by the condition detection branch and the face attribute output by the attribute analysis branch.
In this embodiment, the condition detection branch may at least include an image quality detection branch, a large-angle face detection branch, a face key point detection branch, and a facial-feature semantic segmentation branch, without specific limitation. The attribute analysis branch may at least include a gender attribute analysis branch, an age attribute analysis branch, a posture attribute analysis branch, and an accessory attribute analysis branch, without specific limitation. Each task branch included in the condition detection branch is described below.
The image quality detection branch is used to detect the image quality of the face image. As an alternative embodiment, a plurality of image quality grades may be set, for example grades 1, 2, 3 and 4, where grade 1 indicates that the image is very blurred and/or noisy, grade 2 indicates that the image is slightly blurred and/or noisy, grade 3 indicates that the image is not blurred and contains a small amount of noise, and grade 4 indicates that the image is sharp and almost noise-free. On this basis, the noise feature data and blur feature data extracted from multiple face sample images by the backbone network are taken as input, the image quality grade labeled on each face sample image is taken as reference data, and the image quality detection branch is trained so that it judges image quality in a way that simulates the human eye, making it suitable for face images collected in low-illumination environments.
Further, if the conditional detection branch includes an image quality detection branch, and the conditional detection result includes an image quality detection result, step 122 may specifically be:
Through the image quality detection branch, quality detection is performed on the face image based on a quality scoring algorithm to obtain an image quality detection result, where the quality scoring algorithm is:

Quality = Σ_{i=1}^{N} P_i · Q_i

where Quality is the image quality detection result (a higher score indicates higher image quality), N is the total number of image quality grades (a positive integer), P_i is the confidence that the feature data of the face image belongs to the i-th image quality grade, and Q_i is the normalized value of the i-th image quality grade. Optionally, Q_i may satisfy:

Q_i = (i - 1) / (N - 1)

For example, if the image quality grades are divided into 3 grades, then Q_1 = 0, Q_2 = 1/2, Q_3 = 1.
Therefore, by adopting the image quality detection branch, automatic grading of image quality is realized, providing a numerical standard for judging whether the quality of the face image meets requirements.
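A minimal sketch of the quality-scoring computation, assuming Q_i is the grade index normalized to [0, 1] (i.e. Q_i = (i - 1)/(N - 1), which matches the three-grade example where Q takes the values 0, 1/2, 1):

```python
def quality_score(level_confidences):
    """Quality = sum_i P_i * Q_i, where level_confidences = [P_1, ..., P_N]
    and Q_i = (i - 1) / (N - 1) normalizes grade i into [0, 1]."""
    n = len(level_confidences)
    # enumerate() is 0-based, so index i here corresponds to grade i + 1
    return sum(p * (i / (n - 1)) for i, p in enumerate(level_confidences))
```

An image classified with full confidence into the top grade scores 1.0; one classified into the bottom grade scores 0.0.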
The large-angle face detection branch is used to perform binary classification between large and small face angles, so as to judge whether the face angle is too large. A large face angle may mean that the face is turned 90 degrees or more; a small face angle may mean that the face angle is between 0 and 90 degrees, or that the facial features can be located in the face image, without specific limitation.
And the face key point detection branch is used for predicting the face key points. Specifically, the face keypoint detection branch may be a detection branch constructed after training is performed in combination with multiple face keypoint data sets (e.g., 300W, WFLW, JD _ Landmark data sets, etc.). Face keypoints may include, but are not limited to, keypoints of eyes, mouth, nose, and eyebrows.
In one implementation, if the face key points include key points of the eyes, there may be two canthus key points, an upper eyelid highest key point, and a lower eyelid lowest key point. Then, from the two corner key points, the corner width can be identified. Eyelid height can be identified from the upper eyelid highest keypoints and the lower eyelid lowest keypoints. In combination with the ratio of the width of the canthus and the height of the eyelids, the degree of closure of the eye can be determined. For example, if the degree of closure of the eyes is less than a threshold of degree of closure in the preset detection condition, it may be determined that the degree of closure of the eyes does not satisfy the preset detection condition, so as to filter out the face image in which the closed-eye face is recognized.
Similarly, in another implementation, if the key points of the face include key points of the mouth, the width of the mouth corner can be identified according to the key points of the mouth corner, the widths of the upper lip and the lower lip can be identified according to the key points of the upper lip and the lower lip, and the opening degree of the mouth can be determined by combining the ratio of the width of the mouth corner to the widths of the upper lip and the lower lip. Illustratively, if the opening degree of the mouth is greater than the opening degree threshold value in the preset detection condition, the opening degree of the mouth is judged not to meet the preset detection condition, so as to filter out the face image with the yawning action.
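Both ratio checks described above can be sketched with hypothetical four-point inputs (the key-point layout and function names here are assumptions for illustration):

```python
import math

def distance(p, q):
    """Euclidean distance between two (x, y) key points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def eye_closure(inner_corner, outer_corner, upper_lid, lower_lid):
    """Eyelid height over canthus width: a small ratio suggests a closed eye."""
    return distance(upper_lid, lower_lid) / distance(inner_corner, outer_corner)

def mouth_opening(left_corner, right_corner, upper_lip, lower_lip):
    """Lip gap over mouth-corner width: a large ratio suggests a yawning mouth."""
    return distance(upper_lip, lower_lip) / distance(left_corner, right_corner)
```

Each ratio is then compared against the corresponding threshold in the preset detection condition, filtering closed-eye or yawning face images as described above.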
The facial-feature semantic segmentation branch is used to calculate the semantic segmentation size of the facial features. The facial features may include the eyebrows, eyes, nose, left and right cheeks, chin, and the like, without specific limitation. Specifically, the facial-feature semantic segmentation branch may be trained on a facial-feature segmentation data set with different occlusion effects added (such as the CelebAMask-HQ data set) or on facial-feature segmentation templates generated from a face key-point data set. Further, in some implementations, the degree of occlusion of each facial feature may be determined directly from its semantic segmentation size, or from the ratio of its semantic segmentation size to its standard segmentation size. By comparing the degree of occlusion of a facial feature with an occlusion threshold, it can be judged whether the occlusion satisfies the preset detection condition, so as to filter out face images with occlusion interference.
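A sketch of the ratio-based occlusion check, reading "degree of occlusion" as the visible fraction of the feature (an interpretation of the text; the 0.6 threshold is an invented example, not from the patent):

```python
def occlusion_degree(segmented_pixels, standard_pixels):
    """Visible fraction of a facial feature: segmented size over standard size.
    A low ratio means the feature is heavily occluded."""
    return segmented_pixels / standard_pixels

def passes_occlusion(segmented_pixels, standard_pixels, min_visible=0.6):
    """min_visible is an assumed example threshold for illustration only."""
    return occlusion_degree(segmented_pixels, standard_pixels) >= min_visible
```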
It can be understood that other task branches can be added to the condition detection branch and the attribute analysis branch according to actual requirements, so that the face analysis model is richer in implementation functions and more flexible in model structure.
Referring to fig. 3, fig. 3 is a schematic structural diagram of a face analysis model according to an embodiment of the present application. As shown in fig. 3, by implementing steps 121 and 122 above, a plurality of condition detection results and a plurality of face attributes of the face image can be output using only one face analysis model. Moreover, within one face analysis model, the multiple task branches of the condition detection branch and the attribute analysis branch share a single backbone network, and no additional analysis models need to be constructed, which greatly reduces the amount of computation and makes the face analysis model better suited for deployment on devices with limited computing power.
In the embodiment of the present application, if the face attribute is a classification attribute, the confidence of the face attribute, that is, the probability of being classified into the face attribute, can be directly obtained through the attribute analysis branch. In some alternative implementations, the face analysis model may further include numerical classification branches corresponding to different numerical attribute categories. Specifically, the input end of the numerical classification branch may be connected to the output end of the corresponding numerical attribute analysis branch, and configured to perform confidence calculation on the numerical attribute output by the numerical attribute analysis branch, for example, to input the age output by the age attribute analysis branch into the numerical classification branch corresponding to the age attribute. Based on this, if the face attribute is a numerical attribute, the following step 123 may also be performed.
123. Obtain the confidence of the face attribute through the numerical classification branch.
Therefore, the face analysis model can also obtain the confidence of a numerical attribute through a dedicated task branch, thereby realizing confidence analysis of numerical attributes.
Further, in some optional implementations, obtaining the confidence of the face attribute through the numerical classification branch may specifically be:
through the numerical classification branch, acquiring the numerical range, and the plurality of classification ranges into which that numerical range is divided, according to the numerical attribute category of the face attribute; determining a target classification range from the plurality of classification ranges according to the face attribute; and then determining the confidence of the face attribute according to the confidence of the target classification range.
In some implementations, the target classification range may be the classification range corresponding to the face attribute among the multiple classification ranges, or the M (M is a positive integer) classification ranges adjacent to that classification range, or both. Taking the age attribute as an example, if the numerical range of the age attribute is 1 to 90 years, every 3 years may be taken as one classification range, dividing the numerical range into 30 classification ranges. Assuming the value of the age attribute is 5 years, the classification range corresponding to the age attribute, i.e., 4 to 6 years, may be used as the target classification range.
In other implementations, by obtaining the confidence that the face attribute falls into each classification range, the target classification range may instead be the classification range with the highest confidence among the multiple classification ranges, or the W (W is a positive integer) classification ranges adjacent to it, or both.
Further, as an optional implementation, if the target classification range is a single classification range, the confidence of that classification range may be determined as the confidence of the face attribute. Or, if the target classification range includes two or more classification ranges, the confidences of those classification ranges may be summed to obtain the confidence of the face attribute.
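The range construction and confidence summation just described can be sketched as follows. The bin width, the confidence values, and the neighbor count are assumptions for illustration, not values from this application.

```python
# Illustrative sketch: confidence of a numerical attribute derived from
# the confidences of its classification ranges.

def build_ranges(low, high, width):
    """Divide the numerical range [low, high] into classification ranges of `width` values."""
    return [(s, min(s + width - 1, high)) for s in range(low, high + 1, width)]

def attribute_confidence(value, ranges, range_conf, neighbors=0):
    """Sum the confidence of the range containing `value` plus up to
    `neighbors` adjacent ranges on each side (the target classification range)."""
    idx = next(i for i, (lo, hi) in enumerate(ranges) if lo <= value <= hi)
    first = max(0, idx - neighbors)
    last = min(len(ranges) - 1, idx + neighbors)
    return sum(range_conf[i] for i in range(first, last + 1))

ranges = build_ranges(1, 90, 3)   # 30 ranges: (1,3), (4,6), ..., (88,90)
conf = [0.0] * len(ranges)
conf[1] = 0.6                     # range (4,6), which contains age 5
conf[0], conf[2] = 0.1, 0.2       # adjacent ranges

single = attribute_confidence(5, ranges, conf)               # (4,6) only
merged = attribute_confidence(5, ranges, conf, neighbors=1)  # (1,3)+(4,6)+(7,9)
```

With a single target range the confidence is the range's own confidence; including adjacent ranges corresponds to the summation case described above.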
Referring to fig. 4, fig. 4 is a schematic flow chart of another face attribute analysis method disclosed in the embodiment of the present application.
410. Acquire a face image.
420. Perform condition detection and face attribute analysis on the face image to obtain a condition detection result and face attributes.
After step 420, if the face attribute belongs to a constant attribute type and its confidence reaches the confidence threshold, the face attribute may be determined directly as the attribute analysis result. If the face attribute does not belong to a constant attribute type, the following step 430 may be performed. A constant attribute type refers to an attribute type whose value remains constant over a certain period, such as the gender attribute and the age attribute.
In the embodiment of the present application, for step 410 and step 420, reference may be made to the description of step 110 to step 120 in the embodiment shown in fig. 1; details are not repeated here.
430. If the condition detection result meets the preset detection condition corresponding to the face attribute and the confidence of the face attribute reaches the confidence threshold, determine the statistical queue corresponding to the face attribute.
In the embodiment of the application, different statistical queues can be constructed according to the attribute type of the face attribute. The number of attributes that can be added to a statistical queue may also be set manually, for example to 5 or 10, and is not particularly limited.
440. Add the face attribute to the statistical queue.
As an alternative implementation, if the statistical queue is full, the target attribute meeting the deletion condition in the statistical queue may be obtained. If the confidence of the face attribute is greater than or equal to the confidence of the target attribute, the target attribute is deleted from the statistical queue to obtain an updated statistical queue, and the face attribute is then added to the updated statistical queue.
The deletion condition may be that the target attribute is the attribute with the lowest confidence in the statistical queue; or the attribute that has been in the statistical queue the longest; or both, for example, when two or more attributes share the lowest confidence, the one that has been queued the longest is taken as the target attribute. The deletion condition is not particularly limited.
Further, in the other case, if the confidence of the face attribute is less than that of the target attribute, the face attribute may be discarded. Thus, when the statistical queue is full, the new face attribute is compared in confidence with the attribute meeting the deletion condition, and the attribute with the lower confidence is discarded, so that the statistical queue always holds attributes with higher confidence, which ensures the reliability of the statistical queue.
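The queue update rule described above can be sketched as follows. The capacity of 5 and the sample confidences are assumptions for illustration; the tie-breaking order (lowest confidence first, then longest-queued) follows the deletion condition described above.

```python
from collections import namedtuple

# Illustrative sketch of the statistical-queue update rule.
Attr = namedtuple("Attr", ["value", "confidence", "added_at"])

def add_to_queue(queue, attr, capacity=5):
    """Append `attr`; when full, evict the lowest-confidence entry
    (longest-queued on ties) only if `attr` is at least as confident,
    otherwise discard `attr`."""
    if len(queue) < capacity:
        queue.append(attr)
        return queue
    # Target attribute: lowest confidence; among ties, smallest added_at (longest queued).
    target = min(queue, key=lambda a: (a.confidence, a.added_at))
    if attr.confidence >= target.confidence:
        queue.remove(target)
        queue.append(attr)
    return queue

q = []
for t, (value, c) in enumerate([(25, 0.9), (30, 0.5), (27, 0.8), (26, 0.7), (29, 0.6)]):
    add_to_queue(q, Attr(value, c, t))
add_to_queue(q, Attr(28, 0.95, 5))  # full queue: evicts the 0.5-confidence entry
add_to_queue(q, Attr(40, 0.4, 6))   # discarded: less confident than every entry
```

This keeps the queue populated with the highest-confidence observations seen so far.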
450. Calculate an attribute analysis result according to the confidences corresponding to the attributes in the statistical queue.
If the face attribute is a numerical attribute, in one implementation, step 450 may specifically be: calculate the normalization weight of each attribute in the statistical queue according to the confidence, or the squared confidence, corresponding to each attribute; then perform a weighted average over the values of the attributes using those normalized weights to obtain the attribute analysis result. Specifically, the weighted average may be computed as:
R = Σ_{i=1}^{S} C′_i · Q_i
where the normalization weight is
C′_i = C_i / Σ_{j=1}^{S} C_j
or, using the squared confidence,
C′_i = C_i² / Σ_{j=1}^{S} C_j²
wherein R is the attribute analysis result, S is the number of attributes in the statistical queue, C′_i is the normalized weight of the i-th attribute in the statistical queue, Q_i is the value of the i-th attribute, and C_i is the confidence of the i-th attribute.
If the face attribute is a classification attribute, in one implementation, step 450 may specifically be: count the attributes in the statistical queue belonging to each attribute category to obtain the number of attributes per category, and determine the category with the largest count as the attribute analysis result. For example, if the face attribute is the gender attribute and the statistical queue contains 3 male attributes and 5 female attributes, female is determined as the attribute analysis result, i.e., the gender of the face in the face image is female.
In other implementations, the confidences, or squared confidences, of the attributes belonging to the same attribute category in the statistical queue may be accumulated to obtain an accumulated value per category, and the category with the largest accumulated value is determined as the attribute analysis result, ensuring that the result satisfies the maximum confidence.
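Both voting rules for a classification attribute can be sketched as follows; the sample labels and confidences are assumptions for illustration.

```python
from collections import Counter, defaultdict

# Illustrative sketch of the two decision rules for a classification attribute.

def majority_vote(labels):
    """Attribute category with the largest count in the statistical queue."""
    return Counter(labels).most_common(1)[0][0]

def confidence_vote(labels, confidences, squared=False):
    """Attribute category with the largest accumulated (squared) confidence."""
    accumulated = defaultdict(float)
    for label, c in zip(labels, confidences):
        accumulated[label] += c * c if squared else c
    return max(accumulated, key=accumulated.get)

labels = ["male", "female", "female", "male", "female"]
conf = [0.95, 0.6, 0.55, 0.9, 0.5]

by_count = majority_vote(labels)             # "female": 3 entries vs 2
by_conf = confidence_vote(labels, conf)      # "male": 1.85 vs 1.65 accumulated
```

Note that the two rules can disagree: here the majority is female, but the two male entries carry higher confidence, so the confidence-accumulation rule picks male.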
It can be seen that, by implementing steps 440 and 450 above, as the face analysis model analyzes the frames of the face video information one by one, each statistical queue is continuously updated with more and more high-confidence attributes, so that the attribute analysis result calculated from the statistical queue becomes increasingly reliable, greatly improving the accuracy of face attribute analysis.
Therefore, by implementing this method embodiment, the influence of image quality on the analysis of different attributes is taken into account, and the screening of attributes ensures that the retained face attributes meet reliable detection conditions and reach a high confidence, so that the attribute analysis result is optimized and the accuracy of face attribute analysis is greatly improved.
The face attribute analysis method of the embodiments of the present application has been described above; the face attribute analysis device of the embodiments is described below.
Referring to fig. 5, fig. 5 is a schematic structural diagram of a face attribute analysis device according to an embodiment of the present application. As shown in fig. 5, the apparatus may include an acquisition module 501, a detection analysis module 502, and a determination module 503, wherein:
an obtaining module 501, configured to obtain a face image.
The detection analysis module 502 is configured to perform condition detection and face attribute analysis on the face image to obtain a condition detection result and a face attribute.
The determining module 503 is configured to determine an attribute analysis result according to the face attribute when the condition detection result meets a preset detection condition corresponding to the face attribute and the confidence of the face attribute reaches a confidence threshold.
In this embodiment of the present application, as an optional implementation manner, the detection analysis module 502 is further configured to input a face image into a pre-constructed face analysis model for condition detection and face attribute analysis, where the face analysis model includes a backbone network, a condition detection branch, and an attribute analysis branch; and obtaining the condition detection result output through the condition detection branch and the human face attribute output through the attribute analysis branch.
Further, as an optional implementation, the condition detection branch includes at least an image quality detection branch, a face large-angle detection branch, a face key point detection branch and a facial-feature semantic segmentation branch; the attribute analysis branch includes at least a gender attribute analysis branch, an age attribute analysis branch, a posture attribute analysis branch and an accessory attribute analysis branch.
Further, as an optional implementation, the face attribute is a numerical attribute, and the face analysis model further includes numerical classification branches corresponding to different numerical attribute categories; the detection analysis module 502 is further configured to obtain a confidence of the face attribute through a numerical classification branch after the face image is input into a pre-constructed face analysis model for condition detection and face attribute analysis.
Still further, as an optional implementation, the detection analysis module 502 is further configured to acquire, through the numerical classification branch, the numerical range, and the plurality of classification ranges into which that numerical range is divided, according to the numerical attribute category of the face attribute; determine a target classification range from the plurality of classification ranges according to the face attribute; and determine the confidence of the face attribute according to the confidence of the target classification range. The target classification range comprises the classification range corresponding to the face attribute and/or M classification ranges adjacent to that classification range, wherein M is a positive integer.
In this embodiment of the present application, as an optional implementation manner, the determining module 503 may include a determining unit, an adding unit, and a calculating unit, where: and the determining unit is used for determining the statistical queue corresponding to the face attribute when the condition detection result meets the preset detection condition corresponding to the face attribute and the confidence coefficient of the face attribute reaches a confidence coefficient threshold value. And the adding unit is used for adding the face attribute into the statistical queue. And the calculating unit is used for calculating according to the confidence degrees corresponding to the attributes in the statistical queue to obtain an attribute analysis result.
Further, as an optional implementation manner, the adding unit may be specifically configured to, when the statistics queue is full, obtain a target attribute that satisfies a deletion condition in the statistics queue, where the deletion condition includes: the target attribute is the attribute with the lowest confidence level in the statistical queue, and/or the target attribute is the attribute with the longest adding time in the statistical queue; and when the confidence coefficient of the face attribute is greater than or equal to the confidence coefficient of the target attribute, deleting the target attribute from the statistical queue to obtain an updated statistical queue, and then adding the face attribute to the updated statistical queue.
In the embodiment of the present application, as an optional implementation manner, the face attribute is a numerical attribute. The calculating unit is further configured to calculate a normalization weight of each attribute in the statistical queue according to the confidence coefficient or the confidence square value corresponding to each attribute in the statistical queue; and carrying out weighted average calculation according to the normalized weight of each attribute in the statistical queue and the value of each attribute to obtain an attribute analysis result.
As another alternative, the face attribute is a classification attribute. The computing unit is further configured to classify and count attributes belonging to the same attribute category in the statistical queue, and obtain the number of attributes corresponding to each attribute category; determining the attribute type with the maximum number of attributes as an attribute analysis result according to the number of attributes corresponding to each attribute type; or, carrying out the cumulative calculation of the confidence coefficient or the confidence coefficient square value on the attributes belonging to the same attribute category in the statistical queue to obtain the cumulative value corresponding to each attribute category, and determining the attribute category with the maximum cumulative value as the attribute analysis result.
It should be noted that, for the specific implementation process of the present embodiment, reference may be made to the specific implementation process described in the above method embodiment, and a description thereof is omitted here.
Referring to fig. 6, fig. 6 is a schematic structural diagram of another face attribute analysis device disclosed in the embodiment of the present application, including:
one or more memories 601;
one or more processors 602 for executing one or more computer programs stored in the one or more memories 601 to perform the methods described in the embodiments above.
It should be noted that, for the specific implementation process of the present embodiment, reference may be made to the specific implementation process described in the above method embodiment, and a description thereof is omitted here.
The embodiment of the present application provides a computer-readable storage medium, on which computer instructions are stored, and when the computer instructions are executed, the computer is caused to execute the face attribute analysis method described in the above method embodiment.
The embodiments of the present application also disclose a computer program product, wherein, when the computer program product runs on a computer, the computer is caused to execute part or all of the steps of the method as in the above method embodiments.
Those skilled in the art will appreciate that all or part of the steps in the various methods of the above embodiments may be implemented by a program instructing associated hardware. The program may be stored in a computer-readable storage medium, including read-only memory (ROM), random access memory (RAM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), one-time programmable read-only memory (OTPROM), electrically erasable programmable read-only memory (EEPROM), compact disc read-only memory (CD-ROM) or other optical disk storage, magnetic disk storage, tape storage, or any other medium capable of carrying or storing data and readable by a computer.
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of these technical features are described; however, any such combination should be considered within the scope of this specification as long as it contains no contradiction.
While embodiments of the present application have been shown and described, it will be understood by those of ordinary skill in the art that: various changes, modifications, substitutions and alterations can be made to the embodiments without departing from the principles and spirit of the application, the scope of which is defined by the claims and their equivalents.

Claims (10)

1. A face attribute analysis method, characterized by comprising the following steps:
acquiring a face image;
carrying out condition detection and face attribute analysis on the face image to obtain a condition detection result and face attributes;
and if the condition detection result meets the preset detection condition corresponding to the face attribute and the confidence coefficient of the face attribute reaches a confidence coefficient threshold value, determining an attribute analysis result according to the face attribute.
2. The method of claim 1, wherein the performing condition detection and face attribute analysis on the face image to obtain a condition detection result and a face attribute comprises:
inputting the face image into a pre-constructed face analysis model for condition detection and face attribute analysis, wherein the face analysis model comprises a backbone network, a condition detection branch and an attribute analysis branch;
and obtaining a condition detection result output through the condition detection branch and a face attribute output through the attribute analysis branch.
3. The method according to claim 2, wherein the condition detection branch comprises at least an image quality detection branch, a face large-angle detection branch, a face key point detection branch and a facial-feature semantic segmentation branch; the attribute analysis branch comprises at least a gender attribute analysis branch, an age attribute analysis branch, a posture attribute analysis branch and an accessory attribute analysis branch.
4. The method of claim 2, wherein the face attributes are numerical attributes, and the face analysis model further comprises numerical classification branches corresponding to different numerical attribute categories; after the face image is input into a pre-constructed face analysis model for condition detection and face attribute analysis, the method further comprises the following steps:
and obtaining the confidence coefficient of the face attribute through the numerical classification branch.
5. The method of claim 4, wherein obtaining confidence of the face attribute through the numerical classification branch comprises:
acquiring, through the numerical classification branch, a numerical range, and a plurality of classification ranges into which the numerical range is divided, according to the numerical attribute category of the face attribute;
determining a target classification range from a plurality of classification ranges according to the face attribute; the target classification range comprises a classification range corresponding to the face attribute and/or M classification ranges adjacent to the classification range corresponding to the face attribute, wherein M is a positive integer;
and determining the confidence coefficient of the face attribute according to the confidence coefficient of the target classification range.
6. The method according to any one of claims 1 to 5, wherein determining an attribute analysis result according to the face attribute comprises:
determining a statistical queue corresponding to the face attribute;
adding the face attribute to the statistical queue;
and calculating according to the confidence degrees corresponding to the attributes in the statistical queue to obtain an attribute analysis result.
7. The method of claim 6, wherein adding the face attributes to the statistical queue comprises:
if the statistical queue is full, acquiring a target attribute meeting a deletion condition in the statistical queue; wherein the deletion condition includes: the target attribute is the attribute with the lowest confidence level in the statistical queue, and/or the target attribute is the attribute with the longest addition time in the statistical queue;
if the confidence coefficient of the face attribute is greater than or equal to the confidence coefficient of the target attribute, deleting the target attribute from the statistical queue to obtain an updated statistical queue;
adding the face attribute to the updated statistical queue.
8. The method of claim 6, wherein the face attribute is a numerical attribute; the calculating according to the confidence degrees corresponding to the attributes in the statistical queue to obtain an attribute analysis result includes:
calculating the normalization weight of each attribute in the statistical queue according to the confidence coefficient or the confidence coefficient square value corresponding to each attribute in the statistical queue;
and carrying out weighted average calculation according to the normalized weight of each attribute in the statistical queue and the value of each attribute to obtain an attribute analysis result.
9. The method of claim 6, wherein the face attribute is a classification attribute; the calculating according to the confidence degrees corresponding to the attributes in the statistical queue to obtain an attribute analysis result includes:
classifying and counting the attributes belonging to the same attribute category in the statistical queue to obtain the number of attributes corresponding to each attribute category; determining the attribute type with the maximum number of attributes as an attribute analysis result according to the number of attributes corresponding to each attribute type;
or, carrying out the cumulative calculation of the confidence coefficient or the confidence coefficient square value on the attributes belonging to the same attribute category in the statistical queue to obtain the cumulative value corresponding to each attribute category, and determining the attribute category with the maximum cumulative value as the attribute analysis result.
10. An apparatus for analyzing attributes of a human face, the apparatus comprising:
the acquisition module is used for acquiring a face image;
the detection analysis module is used for carrying out condition detection and face attribute analysis on the face image to obtain a condition detection result and face attributes;
and the determining module is used for determining an attribute analysis result according to the face attribute when the condition detection result meets the preset detection condition corresponding to the face attribute and the confidence coefficient of the face attribute reaches a confidence coefficient threshold value.
CN202110687994.9A 2021-06-21 2021-06-21 Face attribute analysis method and device Pending CN113536947A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110687994.9A CN113536947A (en) 2021-06-21 2021-06-21 Face attribute analysis method and device


Publications (1)

Publication Number Publication Date
CN113536947A true CN113536947A (en) 2021-10-22

Family

ID=78125414

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110687994.9A Pending CN113536947A (en) 2021-06-21 2021-06-21 Face attribute analysis method and device

Country Status (1)

Country Link
CN (1) CN113536947A (en)


Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104143079A (en) * 2013-05-10 2014-11-12 腾讯科技(深圳)有限公司 Method and system for face attribute recognition
CN104298683A (en) * 2013-07-18 2015-01-21 佳能株式会社 Theme digging method and equipment and query expansion method and equipment
CN107818335A (en) * 2017-10-09 2018-03-20 南京航空航天大学 A kind of rail cracks recognition methods adaptive weighted based on multi-categorizer
CN109389135A (en) * 2017-08-03 2019-02-26 杭州海康威视数字技术股份有限公司 A kind of method for screening images and device
CN109542232A (en) * 2018-11-29 2019-03-29 努比亚技术有限公司 Switching method, terminal and the computer readable storage medium of terminal horizontal/vertical screen mode
CN109635680A (en) * 2018-11-26 2019-04-16 深圳云天励飞技术有限公司 Multitask attribute recognition approach, device, electronic equipment and storage medium
CN109993102A (en) * 2019-03-28 2019-07-09 北京达佳互联信息技术有限公司 Similar face retrieval method, apparatus and storage medium
CN110163171A (en) * 2019-05-27 2019-08-23 北京字节跳动网络技术有限公司 The method and apparatus of face character for identification
CN111199491A (en) * 2018-10-31 2020-05-26 百度在线网络技术(北京)有限公司 Social circle recommendation method and device
CN111354362A (en) * 2020-02-14 2020-06-30 北京百度网讯科技有限公司 Method and device for assisting hearing-impaired communication
CN111797653A (en) * 2019-04-09 2020-10-20 华为技术有限公司 Image annotation method and device based on high-dimensional image
CN112069885A (en) * 2020-07-30 2020-12-11 深圳市优必选科技股份有限公司 Face attribute identification method and device and mobile terminal
CN112668455A (en) * 2020-12-24 2021-04-16 平安科技(深圳)有限公司 Face age identification method and device, terminal equipment and storage medium
CN112785495A (en) * 2021-01-27 2021-05-11 驭势科技(南京)有限公司 Image processing model training method, image generation method, device and equipment
CN112949693A (en) * 2021-02-02 2021-06-11 北京嘀嘀无限科技发展有限公司 Training method of image classification model, image classification method, device and equipment


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114882576A (en) * 2022-07-07 2022-08-09 中关村科学城城市大脑股份有限公司 Face recognition method, electronic device, computer-readable medium, and program product
CN114882576B (en) * 2022-07-07 2022-09-20 中关村科学城城市大脑股份有限公司 Face recognition method, electronic device, computer-readable medium, and program product


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20211022