CN116343300A - Face feature labeling method, device, terminal and medium

Info

Publication number
CN116343300A
Authority
CN
China
Prior art keywords
feature
sample
face
preset
key points
Prior art date
Legal status
Pending
Application number
CN202310306726.7A
Other languages
Chinese (zh)
Inventor
张旺
周宸
吴振宇
Current Assignee
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd filed Critical Ping An Technology Shenzhen Co Ltd
Priority to CN202310306726.7A
Publication of CN116343300A
Legal status: Pending

Classifications

    • G06V40/168 Feature extraction; Face representation
    • G06V40/171 Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
    • G06V40/172 Classification, e.g. identification
    • G06V10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06V10/82 Image or video recognition or understanding using neural networks
    • G06N3/08 Learning methods (neural networks)
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The application belongs to the technical field of Internet, and particularly relates to a face feature labeling method, a device, a terminal and a medium. The face feature labeling method can be applied to various application scenes such as financial transactions, security verification, medical technology and the like, and comprises the following steps: extracting preset key points from a sample to be marked, wherein the preset key points are used for representing facial features to be identified of faces in the sample to be marked; calculating feature classification parameters according to preset key points, wherein the feature classification parameters are determined according to the types of the facial features to be identified; comparing the feature classification parameters with a preset classification threshold to obtain a comparison result, wherein the preset classification threshold is used for distinguishing the feature types of the facial features to be identified; classifying the sample to be marked according to the comparison result; and carrying out feature labeling on the classified sample to be labeled to obtain a target sample. According to the labeling method and the labeling device, the labeling is carried out after the sample to be labeled is classified, so that the labeling accuracy of the sample for training the face feature recognition model is improved.

Description

Face feature labeling method, device, terminal and medium
Technical Field
The present invention relates to the field of internet technologies, and in particular, to a method, an apparatus, a terminal, and a medium for labeling facial features.
Background
In daily life, the face recognition technology is widely applied to various fields and scenes, such as financial transaction, security verification, medical technology and the like, and the face image is recognized through the neural network model, so that various face features corresponding to the face image can be obtained. In some special cases, people need to pay further attention to various attributes of face features, and common face feature attributes include non-long face, square face, long face, sharp chin, high cheekbone, cheek width and the like.
In practical applications, training samples need to be labeled in advance when training a neural network model with a face feature recognition function, and the conventional approach is to label the training samples manually. However, the definitions of face feature attributes are often highly subjective and have fuzzy boundaries, so sample labeling is difficult: different labeling personnel may interpret the same face image differently, which affects the accuracy of sample labeling and the model training effect.
Therefore, how to improve the labeling accuracy of samples used for training face feature recognition models, and thereby improve the model training effect, is a difficult problem to be solved in the Internet technology field at present.
Disclosure of Invention
The invention mainly aims to provide a face feature labeling method, a device, a terminal and a medium, wherein the face feature labeling method can be applied to various fields and scenes such as financial transactions, security verification and medical technology, and aims to reduce the influence of human errors on a sample labeling process and improve the labeling accuracy of samples for training a face feature recognition model by classifying samples to be labeled and then labeling the samples correspondingly.
According to an aspect of the embodiment of the application, a face feature labeling method is disclosed, which comprises the following steps:
extracting preset key points from a sample to be marked, wherein the preset key points are used for representing facial features to be identified of faces in the sample to be marked;
calculating feature classification parameters according to the preset key points, wherein the feature classification parameters are determined according to the types of the facial features to be identified;
comparing the feature classification parameters with a preset classification threshold to obtain a comparison result, wherein the preset classification threshold is used for distinguishing the feature types of the facial features to be identified;
classifying the sample to be marked according to the comparison result;
and carrying out feature labeling on the classified sample to be labeled to obtain a target sample.
In some embodiments of the present application, based on the above technical solution, calculating the feature classification parameter according to the preset key point includes:
selecting a plurality of target key points from the preset key points to obtain a target key point combination, wherein the target key points are representative relative to other preset key points when the target key points are used for representing the facial features to be identified;
and calculating characteristic classification parameters according to the target key point combination.
In some embodiments of the present application, based on the above technical solution, before calculating the feature classification parameter according to the preset key point, the face feature labeling method further includes:
calculating a plurality of alternative feature classification parameters based on different preset key points;
analyzing the multiple alternative feature classification parameters to obtain multiple corresponding numerical trend analysis results;
and if the numerical trend analysis results have the difference, determining the alternative characteristic classification parameters as the characteristic classification parameters.
In some embodiments of the present application, based on the above technical solution, before comparing the feature classification parameter with a preset classification threshold to obtain a comparison result, the face feature labeling method further includes:
Dividing a plurality of characteristic category areas according to the trend analysis result;
and respectively determining corresponding preset classification thresholds for the plurality of feature type areas.
In some embodiments of the present application, based on the above technical solution, determining corresponding preset classification thresholds for the plurality of feature type regions respectively includes:
acquiring a plurality of numerical trend change information corresponding to the numerical trend analysis results;
and analyzing and determining preset classification thresholds corresponding to the feature type areas respectively according to the numerical trend change information.
In some embodiments of the present application, based on the above technical solution, the plurality of feature type regions are provided with fuzzy regions, and two adjacent preset classification thresholds are provided with a threshold range to be classified corresponding to the fuzzy regions.
In some embodiments of the present application, based on the above technical solution, after performing feature labeling on the classified sample to be labeled to obtain a target sample, the face feature labeling method further includes:
training according to the target sample to obtain a preset image recognition model;
and identifying a sample set through the preset image identification model, wherein the sample set comprises a plurality of images to be identified.
According to an aspect of the embodiments of the present application, a face feature labeling device is disclosed, including:
the extraction module is configured to extract preset key points from a sample to be marked, wherein the preset key points are used for representing facial features to be identified of a face in the sample to be marked;
the computing module is configured to compute feature classification parameters according to the preset key points, wherein the feature classification parameters are determined according to the types of the facial features to be identified;
the comparison module is configured to compare the feature classification parameters with a preset classification threshold value to obtain a comparison result, and the preset classification threshold value is used for distinguishing the feature types of the facial features to be identified;
the classification module is configured to classify the sample to be marked according to the comparison result;
and the labeling module is configured to perform feature labeling on the classified samples to be labeled to obtain target samples.
According to an aspect of embodiments of the present application, there is provided a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device executes the face feature labeling method as in the above technical solution.
According to the face feature labeling method, preset key points are first extracted from the sample to be labeled by an existing algorithm; these key points characterize the facial feature of the face to be identified, such as the overall face shape or the chin shape. Feature classification parameters used to distinguish the type of the facial feature are then calculated from the preset key points; for example, to distinguish whether the overall face shape is a long face or a wide face, the aspect ratio of the face in the image is calculated from the preset key points. The obtained feature classification parameters are compared with a preset classification threshold to obtain a comparison result, the sample to be labeled is classified according to the comparison result, and finally the classified sample to be labeled is labeled in more detail, yielding the target sample ultimately used for training the model.
Therefore, according to the face feature labeling method, the sample to be labeled is classified and then is labeled correspondingly, so that the influence of human errors on the sample labeling process is reduced, and the labeling accuracy of the sample for training the face feature recognition model is improved.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description, serve to explain the principles of the application. It is apparent that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained from these drawings without inventive effort for a person of ordinary skill in the art.
Fig. 1 is a flowchart illustrating steps of a face feature labeling method in an embodiment of the present application.
Fig. 2 shows an application flowchart of the face feature labeling method in one embodiment of the present application.
FIG. 3 illustrates a graph corresponding to the numerical trend analysis result generated by calculating the aspect ratio of a face for a sample to be annotated in one embodiment of the present application.
Fig. 4 shows a graph corresponding to a numerical trend analysis result generated by calculating the aspect ratio of a face for a sample to be annotated according to another embodiment of the present application.
Fig. 5 schematically shows a block diagram of a face feature labeling apparatus according to an embodiment of the present application.
Fig. 6 schematically illustrates a block diagram of a computer system suitable for use in implementing embodiments of the present application.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. However, the exemplary embodiments may be embodied in many forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of the example embodiments to those skilled in the art.
Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the present application. One skilled in the relevant art will recognize, however, that the aspects of the application can be practiced without one or more of the specific details, or with other methods, components, devices, steps, etc. In other instances, well-known methods, devices, implementations, or operations are not shown or described in detail to avoid obscuring aspects of the application.
The block diagrams depicted in the figures are merely functional entities and do not necessarily correspond to physically separate entities. That is, the functional entities may be implemented in software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
The flow diagrams depicted in the figures are exemplary only, and do not necessarily include all of the elements and operations/steps, nor must they be performed in the order described. For example, some operations/steps may be decomposed, and some operations/steps may be combined or partially combined, so that the order of actual execution may be changed according to actual situations.
In daily life, face recognition technology is widely applied in various fields and scenarios, such as financial transactions, security verification and medical technology: a face image is recognized by a neural network model, and the various face features corresponding to the face image are obtained. In some special occasions, people need to pay further attention to the various attributes of these face features; common face feature attributes include non-long face, square face, long face, sharp chin, cheekbone height, cheek width and the like. For example, the face feature attributes are used to judge whether the current face image matches a pre-stored face image against the security verification standard, or to determine the medical direction and specific medical scheme corresponding to the face image.
In practical applications, training samples need to be labeled in advance when training a neural network model with a face feature recognition function, and the conventional approach is to label the training samples manually. However, the definitions of face feature attributes are often highly subjective and have fuzzy boundaries, so sample labeling is difficult: different labeling personnel may interpret the same face image differently, which affects the accuracy of sample labeling and the model training effect.
Specifically, each labeling person has his or her own judgment standard for the type of a face contour, such as a round face, a square face or a melon-seed face, and these standards are highly subjective and have fuzzy boundaries. In the actual labeling process, a sample that labeling person A marks as a round face may, by labeling person B's standard, belong to the square face type. Because the number of samples to be labeled is very large, having multiple labeling personnel label according to different judgment standards seriously affects the accuracy of sample labeling; when the target samples obtained in this way are used to train a model, the training may fail to converge, or the trained model may not achieve a good recognition effect.
To solve the above technical problem, the influence of human error in the labeling process must be reduced as much as possible, so that the judgment standards applied to the samples to be labeled are as consistent as possible. Based on this idea, the inventive concept of the present application is as follows: for a sample set containing a large number of samples to be labeled, the set is roughly classified one or more times according to certain specific parameters, finer manual labeling is then performed on the roughly classified samples, and the target samples are finally obtained. In other words, the rough classification limits, to a certain extent, the range of feature labels that a labeling person can apply to the pre-processed sample, which helps bring the different judgment standards of multiple labeling personnel closer together.
The following describes in detail the face feature labeling method, device, terminal, medium and other technical schemes provided in the present application in combination with specific embodiments.
Fig. 1 shows a step flowchart of a face feature labeling method in an embodiment of the present application, and as shown in fig. 1, the face feature labeling method mainly includes the following steps S100 to S500.
Step S100, extracting preset key points from a sample to be marked, wherein the preset key points are used for representing facial features to be identified of faces in the sample to be marked.
Step S200, calculating feature classification parameters according to the preset key points, wherein the feature classification parameters are determined according to the types of the facial features to be identified.
And step S300, comparing the feature classification parameters with a preset classification threshold to obtain a comparison result, wherein the preset classification threshold is used for distinguishing the feature type of the facial feature to be identified.
And step S400, classifying the sample to be marked according to the comparison result.
And S500, performing feature labeling on the classified sample to be labeled to obtain a target sample.
According to the face feature labeling method, preset key points are first extracted from the sample to be labeled by an existing algorithm; these key points characterize the facial feature of the face to be identified, such as the overall face shape or the chin shape. Feature classification parameters used to distinguish the type of the facial feature are then calculated from the preset key points; for example, to distinguish whether the overall face shape is a long face or a wide face, the aspect ratio of the face in the image is calculated from the preset key points. The obtained feature classification parameters are compared with a preset classification threshold to obtain a comparison result, the sample to be labeled is classified according to the comparison result, and finally the classified sample to be labeled is labeled in more detail, yielding the target sample ultimately used for training the model.
Therefore, according to the face feature labeling method, the sample to be labeled is classified and then is labeled correspondingly, so that the influence of human errors on the sample labeling process is reduced, and the labeling accuracy of the sample for training the face feature recognition model is improved.
In one possible embodiment of the present application, when the face feature labeling method of the present application is applied to a financial transaction scenario, face recognition needs to be performed on the user during an online transaction to verify the user's identity. In the face recognition process, the user is scanned to obtain a face image, and the features of the face image are then recognized by a face feature recognition model to judge whether the image matches the records in the database. The corresponding face feature recognition model therefore needs to be trained in advance on training samples. In the labeling process, the training samples are first classified according to their feature classification parameters, and the classified training samples are then labeled in detail. This reduces the influence of human factors on the labeling process, improves the labeling accuracy of the samples used for training the face feature recognition model, makes the trained model more effective, and correspondingly improves transaction security in the financial transaction scenario.
In another possible embodiment of the present application, when the face feature labeling method of the present application is applied to the medical field, for example an online consultation scenario, the user is scanned to obtain a face image, the face image is recognized by a face feature recognition model, the recognition result is analyzed, and finally a corresponding medical suggestion or medical scheme is generated and output. The corresponding face feature recognition model likewise needs to be trained in advance on training samples. In the labeling process, the training samples are first classified according to their feature classification parameters and then labeled in detail, which reduces the influence of human factors, improves labeling accuracy, and makes the trained face feature recognition model more effective, so that the face feature recognition results in the online consultation scenario are more accurate and more targeted, suitable medical suggestions or schemes can be provided to the user.
The following describes each method step in the face feature labeling method in detail.
Step S100, extracting preset key points from a sample to be marked, wherein the preset key points are used for representing facial features to be identified of faces in the sample to be marked.
Face key points are the positions of key facial regions, such as the eyebrows, eyes, nose, mouth and face contour, obtained by face key point labeling, localization or alignment. Traditionally, face key points are marked manually at specified positions, for example facial feature points and skeleton connection points, in order to train face recognition models and statistical models.
Face key point labeling is a key step in face recognition and analysis, and is a precondition and breakthrough point for other face-related problems such as automatic face recognition and expression analysis.
In this embodiment, an existing face key point model is applied to the face image serving as the sample to be labeled in order to extract the key points that characterize each facial feature of the face image; for example, 468 key points can be extracted by the MediaPipe algorithm. These key points can characterize facial features such as the face contour and the shapes of the facial parts, and these facial features include the facial feature to be identified that is used for labeling and classifying the sample to be labeled. It should be noted that different face key point models extract different numbers of face key points; the larger the number of extracted key points, the greater the flexibility in the numerical processing used to calculate the feature classification parameters from them, so the accuracy of classifying and labeling the sample to be labeled can be improved.
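To make the key point extraction step concrete, the following is a minimal sketch in Python, assuming the MediaPipe Face Mesh solution mentioned above (468 landmarks); the function name, file handling and the conversion to pixel coordinates are illustrative choices, not part of the patent.

import cv2
import mediapipe as mp

def extract_keypoints(image_path: str):
    """Return a list of (x, y) pixel coordinates for all detected face landmarks."""
    image = cv2.imread(image_path)
    if image is None:
        raise FileNotFoundError(image_path)
    h, w = image.shape[:2]
    with mp.solutions.face_mesh.FaceMesh(static_image_mode=True,
                                         max_num_faces=1) as face_mesh:
        result = face_mesh.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
    if not result.multi_face_landmarks:
        return []  # no face detected in this sample to be labeled
    landmarks = result.multi_face_landmarks[0].landmark
    return [(lm.x * w, lm.y * h) for lm in landmarks]  # normalized -> pixel coordinates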
Step S200, calculating feature classification parameters according to the preset key points, wherein the feature classification parameters are determined according to the types of the facial features to be identified.
The key points corresponding to the facial features to be identified are selected to calculate feature classification parameters for distinguishing the kinds of the facial features to be identified.
Specifically, for example, assuming that the facial feature to be recognized is a face contour, the types of the face contour include long faces and non-long faces, and the aspect ratio of the face is calculated according to the key points representing the face contour, so that the face contour of the sample to be marked is marked and classified according to the aspect ratio of the face.
As another alternative embodiment, assuming that the facial feature to be identified is a chin outline, the types of chin outlines include sharp chin and round chin, and calculating the included angle of the chin according to the key points representing the chin outline, so as to label and classify the chin outline of the sample to be labeled according to the included angle of the chin.
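The two example parameters above, the aspect ratio of the face and the included angle of the chin, reduce to simple geometric computations over key-point coordinates. The sketch below is illustrative only; which key points are passed in depends on the key point model actually used.

import math

def face_aspect_ratio(left_pt, right_pt, top_pt, bottom_pt):
    """Width-to-height ratio of the face computed from face contour key points."""
    width = abs(right_pt[0] - left_pt[0])
    height = abs(bottom_pt[1] - top_pt[1])
    return width / height

def chin_angle(left_jaw, chin_tip, right_jaw):
    """Included angle (degrees) at the chin tip, used to separate sharp vs. round chins."""
    def vec(a, b):
        return (b[0] - a[0], b[1] - a[1])
    v1, v2 = vec(chin_tip, left_jaw), vec(chin_tip, right_jaw)
    cos_a = (v1[0] * v2[0] + v1[1] * v2[1]) / (math.hypot(*v1) * math.hypot(*v2))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_a))))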
And step S300, comparing the feature classification parameters with a preset classification threshold to obtain a comparison result, wherein the preset classification threshold is used for distinguishing the feature type of the facial feature to be identified.
And after the feature classification parameters are obtained through calculation, comparing the feature classification parameters with a preset classification threshold, wherein the preset classification threshold is used for dividing different numerical ranges according to the feature types of the facial features to be identified.
Specifically, for example, assuming that the facial feature to be recognized is a face contour, the feature classification parameter is an aspect ratio of a face, when the aspect ratio of the face is less than 0.73, the corresponding face contour is a long face, and when the aspect ratio of the face is greater than 0.74, the corresponding face contour is a non-long face.
And step S400, classifying the sample to be marked according to the comparison result.
And determining the feature type corresponding to the feature classification parameter according to the fact that the feature classification parameter is located in a numerical range determined by dividing a preset classification threshold.
Specifically, for example, assuming that the facial feature to be recognized is a face contour, the feature classification parameter is an aspect ratio of a face, when the aspect ratio of the face is 0.75, the aspect ratio of the face is greater than 0.74 at this time, and thus the face contour corresponding to the aspect ratio of the face is classified as a non-long face type.
And S500, performing feature labeling on the classified sample to be labeled to obtain a target sample.
And classifying the total samples to be marked with larger quantity according to the characteristic classification parameters and the preset classification threshold value to obtain samples to be marked with different categories, and then carrying out characteristic marking on the samples to be marked with the same category to obtain target samples.
Specifically, for example, assuming that the facial feature to be recognized is a face contour, the feature classification parameter is an aspect ratio of a face, classifying the face contour with the aspect ratio of more than 0.74 as a non-long face type, and then performing specific feature labeling on a sample to be labeled belonging to the non-long face type, such as labeling as a round face or a square face, so as to obtain a target sample finally used for training a model.
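Putting steps S300 to S500 together, a minimal sketch of the coarse classification that precedes fine-grained labeling might look as follows; the thresholds 0.73 and 0.74 are taken from the example above, while the sample identifiers and the grouping structure are assumptions made purely for illustration.

def group_for_labeling(samples, classify):
    """
    samples : iterable of (sample_id, feature_classification_parameter)
    classify: function mapping the parameter to a coarse feature class,
              i.e. the threshold comparison of step S300
    Returns coarse class -> list of sample ids. Annotators then attach the
    fine-grained label (e.g. round face vs. square face) within each group,
    producing the target samples of step S500.
    """
    groups = {}
    for sample_id, value in samples:
        groups.setdefault(classify(value), []).append(sample_id)
    return groups

# Example with the thresholds from the text; values falling between 0.73 and
# 0.74 are deferred to the fuzzy-region embodiment described later.
example = group_for_labeling(
    [("img_001", 0.70), ("img_002", 0.80), ("img_003", 0.735)],  # hypothetical samples
    lambda r: "long_face" if r < 0.73 else ("non_long_face" if r > 0.74 else "to_be_classified"),
)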
As an optional embodiment, for application scenarios with very limited computing resources or low requirements on face recognition accuracy, faces may be roughly recognized and classified directly using the above feature classification parameters.
Further, as shown in fig. 2, on the basis of the above embodiment, the feature classification parameter is calculated according to the preset key point in the step S200, which includes the following steps S201 and S202.
Step S201, selecting a plurality of target keypoints from the preset keypoints to obtain a target keypoint combination, where the target keypoints are representative of other preset keypoints when the target keypoints are used for characterizing the facial feature to be identified.
Step S202, calculating characteristic classification parameters according to the target key point combination.
When the feature classification parameters corresponding to the facial features to be identified are calculated, as the key points for representing the facial features to be identified are multiple, the feature classification parameters calculated according to different key points are different, so that the classification accuracy of the sample to be marked is affected. Therefore, it is necessary to select representative keypoints as target keypoints, and to use a target keypoint combination formed from the target keypoints as a calculation basis for the feature classification parameters.
Specifically, for example, assume that the facial feature to be identified is the face contour and the feature classification parameter is the aspect ratio of the face. Three pairs of key points representing the face contour, corresponding to the eyes, the nose tip and the mouth, are selected in the face image of the sample to be labeled and averaged to obtain the mean width of the face image in the horizontal direction; at the same time, a pair of key points on the center line from the forehead to the chin is selected as the height in the vertical direction; finally, the aspect ratio of the face serving as the feature classification parameter is calculated from the mean width and the height. It will be appreciated that the three pairs of key points corresponding to the eyes, nose tip and mouth are more representative in the horizontal direction than the other key points representing the face contour.
In this way, in the embodiment, the representative key points are selected as the target key points, and the target key point combination formed according to the target key points is used as the calculation basis of the feature classification parameters, so that the calculation accuracy of the feature classification parameters is ensured, and the practicability of the technical scheme of the application is improved.
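A sketch of the aspect-ratio calculation from a target key point combination as described in this embodiment is given below; the specific landmark indices for the eye-, nose- and mouth-level pairs and for the forehead-chin pair are placeholders that depend on the key point model in use.

def aspect_ratio_from_combination(pts, width_pairs, height_pair):
    """
    pts          : list of (x, y) key points for one face image
    width_pairs  : three (left_idx, right_idx) pairs at eye / nose-tip / mouth level
    height_pair  : (forehead_idx, chin_idx) pair along the vertical center line
    Returns the feature classification parameter (face aspect ratio).
    """
    widths = [abs(pts[r][0] - pts[l][0]) for l, r in width_pairs]
    mean_width = sum(widths) / len(widths)      # mean width in the horizontal direction
    top, bottom = height_pair
    height = abs(pts[bottom][1] - pts[top][1])  # height in the vertical direction
    return mean_width / height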
Further, on the basis of the above embodiment, before calculating the feature classification parameter according to the preset key point in the above step S200, the face feature labeling method further includes the following steps S203 to S205.
Step S203, calculating to obtain a plurality of candidate feature classification parameters based on different preset key points.
Step S204, analyzing the plurality of candidate feature classification parameters to obtain a plurality of corresponding numerical trend analysis results.
Step S205, if there is a difference between the plurality of numerical trend analysis results, determining the candidate feature classification parameter as the feature classification parameter.
It will be appreciated that there are a plurality of different candidate feature classification parameters for distinguishing the types of the features to be identified, and therefore the most suitable candidate feature classification parameter needs to be selected from the plurality of different candidate feature classification parameters as the feature classification parameter ultimately used for labeling and classifying the sample to be labeled. The method comprises the steps of calculating alternative feature classification parameters of the same type according to different key points for a plurality of samples to be marked, further obtaining a plurality of numerical trend analysis results, and if differences exist among the numerical trend analysis results corresponding to the different key points, indicating that the alternative feature classification parameters of the type are effective and can be used as final feature classification parameters.
Specifically, for example, assume that the facial feature to be identified is the face contour; the corresponding candidate feature classification parameters may include the aspect ratio, the width-height difference, the diagonal slope and the like. By selecting different key points and calculating the aspect ratio from each selection, several numerical trend analysis results can be obtained that differ from each other yet still reflect the variation trend of the aspect ratio; the aspect ratio is therefore taken as the final feature classification parameter. Fig. 3 and fig. 4 are the graphs corresponding to the numerical trend analysis results obtained when the aspect ratio of the face is calculated from different key points for the same batch of samples to be labeled, where the abscissa is the aspect ratio and the ordinate is the number of samples to be labeled.
In another embodiment, for example, assume that the facial feature to be identified is the chin contour; the corresponding candidate feature classification parameters may include the aspect ratio of the chin, the mandibular included angle and the like. By selecting different key points and calculating each candidate parameter, several numerical trend analysis results can be obtained that differ from each other yet still reflect the variation trend of the mandibular included angle, so the mandibular included angle is taken as the final feature classification parameter.
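The numerical trend analysis behind Figs. 3 and 4 can be sketched as a simple histogram comparison, assuming NumPy; the bin count and the criterion used to decide that two curves "differ" are illustrative assumptions rather than values given in the patent.

import numpy as np

def trend_analysis(values, bins=50):
    """values: one candidate classification parameter computed for every sample.
    Returns the histogram curve (x-axis: parameter value, y-axis: sample count)."""
    counts, edges = np.histogram(values, bins=bins)
    return counts, edges

def curves_differ(counts_a, counts_b):
    """A candidate parameter is kept as the feature classification parameter if
    the trend curves obtained from different key-point choices are not identical."""
    return not np.allclose(counts_a, counts_b)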
In this way, the embodiment provides the method for determining the type of the feature classification parameter, thereby ensuring the accuracy of classifying the sample to be marked according to the feature classification parameter and improving the practicability of the technical scheme.
Further, on the basis of the above embodiment, before the comparing result is obtained by comparing the feature classification parameter with the preset classification threshold in the step S300, the face feature labeling method further includes the following step S301 and step S302.
Step S301, dividing a plurality of characteristic category areas according to the trend analysis result.
Step S302, determining corresponding preset classification thresholds for the plurality of feature type regions respectively.
When the feature classification parameter of a sample to be labeled falls into the numerical range corresponding to one of the feature class regions, the facial feature to be identified of that sample belongs to that feature class; the preset classification threshold is the boundary value between adjacent feature class regions.
Specifically, for example, assume that the facial feature to be identified is the face contour and the feature classification parameter is the aspect ratio of the face. The aspect ratio is calculated from the key points for a number of samples to be labeled, a corresponding graph is generated from the calculated values, and the graph is divided into a long-face region and a non-long-face region. The value at which the slope of the curve in the graph changes the most is taken as the preset classification threshold: when the aspect ratio of the face is smaller than 0.73, the corresponding feature class region is the long-face region, and when the aspect ratio is greater than 0.74, the corresponding feature class region is the non-long-face region.
As an alternative embodiment, the graph passing through the primary feature class area may be further divided into the secondary feature class area, such as dividing the non-long-face area in the graph into a circular-face area and a square-face area.
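One possible reading of "the value with the largest change of the slope of the curve" is sketched below on the histogram produced by the trend analysis; this is an illustrative interpretation under stated assumptions, not the only way the preset classification threshold could be derived.

import numpy as np

def threshold_from_curve(counts, edges):
    """counts, edges: histogram curve from the numerical trend analysis.
    Returns the parameter value used as a preset classification threshold."""
    slopes = np.diff(counts.astype(float))      # first difference ~ slope of the curve
    slope_change = np.abs(np.diff(slopes))      # change of slope between adjacent bins
    idx = int(np.argmax(slope_change)) + 1      # bin where the slope changes the most
    return (edges[idx] + edges[idx + 1]) / 2.0  # center of that bin as the threshold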
Further, on the basis of the above embodiment, the plurality of feature type regions are provided with fuzzy regions, and two adjacent preset classification thresholds are provided with threshold ranges to be classified corresponding to the fuzzy regions.
Specifically, the feature class regions are used to preliminarily classify the samples to be labeled so that finer feature labeling can then be performed on the classified samples. In order to leave some room for adjustment during the subsequent feature labeling, a fuzzy region is set between two adjacent feature class regions.
For example, assuming that the facial feature to be recognized is a face contour, the feature classification parameter is an aspect ratio of a face, when the aspect ratio of the face is smaller than 0.73, the corresponding feature class region is a long face region, when the aspect ratio of the face is larger than 0.74, the corresponding feature class region is a non-long face region, and if the aspect ratio of the face is within a threshold range to be classified between 0.73 and 0.74, the facial feature to be recognized of the sample to be labeled neither belongs to a long face type nor to a non-long face type.
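A minimal sketch of classification with fuzzy regions is given below. It generalizes the running example to any number of feature class regions; the threshold values and region labels passed in are assumptions for illustration.

def classify_with_fuzzy_regions(value, thresholds, labels):
    """
    thresholds: ordered (lower, upper) pairs of adjacent preset classification
                thresholds, e.g. [(0.73, 0.74)] for long face vs. non-long face
    labels:     region labels, one more than the number of threshold pairs
    Returns a region label, or 'to_be_classified' when the value falls inside a
    fuzzy range and the decision is left to the annotator at labeling time.
    """
    for lower, upper in thresholds:
        if lower <= value <= upper:
            return "to_be_classified"
    region = sum(1 for lower, _ in thresholds if value > lower)
    return labels[region]

# Example with the values from the text:
print(classify_with_fuzzy_regions(0.735, [(0.73, 0.74)], ["long_face", "non_long_face"]))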
In this way, the embodiment provides a specific method for dividing the feature class area and determining the preset classification threshold, so that the accuracy and the flexibility of classifying the sample to be marked are improved, and the practicability of the technical scheme of the application is improved.
Further, on the basis of the above embodiment, after the feature labeling is performed on the classified sample to be labeled in the above step S500 to obtain the target sample, the face feature labeling method further includes the following step S501 and step S502.
Step S501, training according to the target sample to obtain a preset image recognition model.
Step S502, identifying a sample set through the preset image identification model, where the sample set includes a plurality of images to be identified.
Specifically, after a classified sample to be marked is subjected to feature marking to obtain a target sample, training according to the target sample to obtain a preset image recognition model for recognizing facial features, and in practical application, inputting a sample set containing a large number of images to be recognized into the preset image recognition model so that the preset image recognition model recognizes the images to be recognized of the sample set according to the types of the facial features.
As an optional embodiment, for example, after feature labeling is performed on the samples to be labeled that were classified according to the face contour, a corresponding target sample A is obtained, and an image recognition model A for recognizing the type of the face contour is trained from the target sample A; the image recognition model A can then recognize the images to be identified in the sample set according to the type of their face contours.
As another optional embodiment, for example, after feature labeling is performed on the samples to be labeled that were classified according to the chin contour, a corresponding target sample B is obtained, and an image recognition model B for recognizing the type of the chin contour is trained from the target sample B; the image recognition model B can then recognize the images to be identified in the sample set according to the type of their chin contours.
As an alternative embodiment, for another example, the image to be identified of the sample set is sequentially identified through the image identification model a and the image identification model B, so that an identification result corresponding to the face contour and the chin contour is output to the image to be identified of the sample set.
As an alternative embodiment, the target sample A and the target sample B may be input sequentially into a blank image recognition model for training, so as to obtain a corresponding comprehensive image recognition model C that has processing layers for recognizing both the face contour and the chin contour; the comprehensive image recognition model C is then used to recognize the images to be identified in the sample set and to output recognition results for both the face contour and the chin contour.
It should be noted that the image recognition models in the above embodiments may adopt a two-stage neural network model, for example an object detection model of the R-CNN series: candidate boxes located in different regions of the image to be processed are first screened, CNN feature extraction is then performed on the screened candidate boxes, the features are classified by degree level, and the classified recognition results are output. It can be understood that the image recognition model may also be a one-stage neural network model that balances real-time performance and recognition accuracy, for example an object detection model of the YOLO series, which is widely applied in online human-machine interaction scenarios: the image to be processed is divided into a number of blocks, feature recognition is performed directly on the blocks, the features are classified by degree level, and the classified recognition results are output.
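As a rough illustration of step S501, the following sketch trains a small CNN classifier on the labeled target samples using PyTorch; the network architecture, hyper-parameters and data loader are assumptions made for illustration, and the patent itself allows either two-stage (R-CNN style) or one-stage (YOLO style) models instead.

import torch
import torch.nn as nn

class SimpleFaceShapeNet(nn.Module):
    def __init__(self, num_classes=2):          # e.g. long face vs. non-long face
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

def train(model, loader, epochs=10, lr=1e-3):
    """loader yields (image_batch, label_batch) built from the target samples."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for images, labels in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()
            optimizer.step()
    return model  # the preset image recognition model used in step S502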
The following describes an embodiment of the apparatus of the present application, which may be used to execute the face feature labeling method in the foregoing embodiment of the present application. Fig. 5 schematically shows a block diagram of a face feature labeling apparatus according to an embodiment of the present application. As shown in fig. 5, the face feature labeling apparatus 500 includes:
The extracting module 510 is configured to extract preset key points from a sample to be marked, where the preset key points are used for characterizing facial features to be identified of a face in the sample to be marked;
a calculating module 520 configured to calculate feature classification parameters according to the preset key points, where the feature classification parameters are determined according to the types of the facial features to be identified;
the comparison module 530 is configured to compare the feature classification parameter with a preset classification threshold to obtain a comparison result, where the preset classification threshold is used for distinguishing the feature type of the facial feature to be identified;
the classification module 540 is configured to classify the sample to be marked according to the comparison result;
the labeling module 550 is configured to perform feature labeling on the classified sample to be labeled to obtain a target sample.
In one embodiment of the present application, based on the above embodiment, the computing module includes:
the selection unit is configured to select a plurality of target key points from the preset key points to obtain target key point combinations, and the target key points are representative relative to other preset key points when the target key points are used for representing the facial features to be identified; and calculating characteristic classification parameters according to the target key point combination.
In an embodiment of the present application, based on the above embodiment, the face feature labeling device further includes:
the determining module is configured to calculate a plurality of alternative feature classification parameters based on different preset key points; analyzing the multiple alternative feature classification parameters to obtain multiple corresponding numerical trend analysis results; and if the numerical trend analysis results have the difference, determining the alternative characteristic classification parameter as the characteristic classification parameter.
In one embodiment of the present application, based on the above embodiment, the determining module includes:
a region dividing unit configured to divide a plurality of feature class regions according to the trend analysis result; and respectively determining corresponding preset classification thresholds for the plurality of feature type areas.
In one embodiment of the present application, based on the above embodiment, the area dividing unit includes:
a threshold determining unit configured to obtain a plurality of numerical trend change information corresponding to the plurality of numerical trend analysis results, respectively; and analyzing the plurality of numerical trend change information to determine preset classification thresholds corresponding to the plurality of feature type areas respectively.
In an embodiment of the present application, based on the above embodiment, the face feature labeling device further includes:
the image recognition module is configured to train according to the target sample to obtain a preset image recognition model; and identifying a sample set through the preset image identification model, wherein the sample set comprises a plurality of images to be identified.
Fig. 6 schematically shows a block diagram of a computer system for implementing an electronic device according to an embodiment of the present application.
It should be noted that, the computer system 600 of the electronic device shown in fig. 6 is only an example, and should not impose any limitation on the functions and the application scope of the embodiments of the present application.
As shown in fig. 6, the computer system 600 includes a central processing unit 601 (Central Processing Unit, CPU), which can execute various appropriate actions and processes according to a program stored in a read-only memory 602 (Read-Only Memory, ROM) or a program loaded from a storage section 608 into a random access memory 603 (Random Access Memory, RAM). The random access memory 603 also stores various programs and data required for system operation. The CPU 601, the ROM 602 and the RAM 603 are connected to each other via a bus 604. An input/output interface 605 (i.e., an I/O interface) is also connected to the bus 604.
The following components are connected to the input/output interface 605: an input portion 606 including a keyboard, mouse, etc.; an output portion 607 including a Cathode Ray Tube (CRT), a liquid crystal display (Liquid Crystal Display, LCD), and a speaker, etc.; a storage section 608 including a hard disk and the like; and a communication section 609 including a network interface card such as a local area network card, a modem, or the like. The communication section 609 performs communication processing via a network such as the internet. The drive 610 is also connected to the input/output interface 605 as needed. Removable media 611 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is installed as needed on drive 610 so that a computer program read therefrom is installed as needed into storage section 608.
In particular, according to embodiments of the present application, the processes described in the various method flowcharts may be implemented as computer software programs. For example, embodiments of the present application include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method shown in the flowcharts. In such an embodiment, the computer program may be downloaded and installed from a network through the communication portion 609, and/or installed from the removable medium 611. The computer programs, when executed by the central processor 601, perform the various functions defined in the system of the present application.
It should be noted that, the computer readable medium shown in the embodiments of the present application may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-Only Memory (ROM), an erasable programmable read-Only Memory (Erasable Programmable Read Only Memory, EPROM), flash Memory, an optical fiber, a portable compact disc read-Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present application, however, a computer-readable signal medium may include a data signal that propagates in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wired, etc., or any suitable combination of the foregoing.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
It should be noted that although in the above detailed description several modules or units of a device for action execution are mentioned, such a division is not mandatory. Indeed, the features and functions of two or more modules or units described above may be embodied in one module or unit, in accordance with embodiments of the present application. Conversely, the features and functions of one module or unit described above may be further divided into a plurality of modules or units to be embodied.
From the above description of embodiments, those skilled in the art will readily appreciate that the example embodiments described herein may be implemented in software, or in software in combination with the necessary hardware. Thus, the technical solution according to the embodiments of the present application may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a USB flash drive, a portable hard disk, etc.) or on a network, and which includes several instructions to cause a computing device (which may be a personal computer, a server, a touch terminal, a network device, etc.) to perform the method according to the embodiments of the present application.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the application following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the application pertains.
It is to be understood that the present application is not limited to the precise arrangements and instrumentalities shown in the drawings, which have been described above, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (10)

1. A face feature labeling method, characterized by comprising the following steps:
extracting preset key points from a sample to be labeled, wherein the preset key points are used for representing facial features to be identified of a face in the sample to be labeled;
calculating feature classification parameters according to the preset key points, wherein the feature classification parameters are determined according to the types of the facial features to be identified;
comparing the feature classification parameters with a preset classification threshold to obtain a comparison result, wherein the preset classification threshold is used for distinguishing the feature types of the facial features to be identified;
classifying the sample to be labeled according to the comparison result;
and performing feature labeling on the classified sample to be labeled to obtain a target sample.
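For illustration only, the flow of claim 1 can be sketched in a few lines of Python. The key-point extractor, the choice of classification parameter (a height-to-width ratio), and the threshold value are all hypothetical placeholders; the claim itself does not fix any particular landmark model or metric.

```python
# Minimal sketch of the labeling flow of claim 1 (illustrative only; the
# extractor, the parameter, and the threshold below are assumptions).

def extract_keypoints(sample):
    """Placeholder for a landmark detector returning (x, y) key points of the
    facial feature to be identified, e.g. an eye contour."""
    return sample["keypoints"]

def classification_parameter(points):
    """Example parameter: vertical extent divided by horizontal extent, a
    plausible metric for an 'eye openness' style feature."""
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    return (max(ys) - min(ys)) / (max(xs) - min(xs) + 1e-6)

def label_sample(sample, threshold=0.25):
    points = extract_keypoints(sample)                      # extract key points
    param = classification_parameter(points)                # compute parameter
    category = "open" if param > threshold else "closed"    # compare and classify
    return {**sample, "label": category}                    # labeled target sample

if __name__ == "__main__":
    sample = {"keypoints": [(0, 0), (10, 1), (20, 0), (10, -1)]}
    print(label_sample(sample))  # -> {'keypoints': [...], 'label': 'closed'}
```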
2. The face feature labeling method according to claim 1, wherein calculating feature classification parameters according to the preset key points comprises:
selecting a plurality of target key points from the preset key points to obtain a target key point combination, wherein the target key points are more representative of the facial features to be identified than the other preset key points;
and calculating the feature classification parameters according to the target key point combination.
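As a sketch of claim 2 (the landmark indices are assumptions, not taken from the application), selecting a target key point combination might look like this:

```python
# Illustrative selection of a representative subset of the preset key points
# (claim 2). The index choices below are hypothetical.

def select_target_keypoints(preset_points, indices=(36, 39, 37, 41)):
    """Return the target key point combination, e.g. the two eye corners plus
    an upper and a lower eyelid point (indices are assumed, not specified)."""
    return [preset_points[i] for i in indices]

def parameter_from_combination(combo):
    """Feature classification parameter from the combination: eyelid gap
    divided by corner-to-corner width."""
    left, right, top, bottom = combo
    width = abs(right[0] - left[0]) + 1e-6
    gap = abs(top[1] - bottom[1])
    return gap / width

if __name__ == "__main__":
    # 68 dummy landmarks, only to exercise the two functions
    dummy = [(float(i), float(i % 5)) for i in range(68)]
    print(parameter_from_combination(select_target_keypoints(dummy)))
```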
3. The face feature labeling method according to claim 1, wherein before calculating feature classification parameters according to the preset key points, the face feature labeling method further comprises:
calculating a plurality of alternative feature classification parameters based on different preset key points;
analyzing the plurality of alternative feature classification parameters to obtain a plurality of corresponding numerical trend analysis results;
and if differences exist among the numerical trend analysis results, determining the alternative feature classification parameters as the feature classification parameters.
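One possible reading of claim 3, sketched below: compute several alternative parameters on a small probe set whose rough categories are known, and keep the candidate whose values separate the categories most clearly. The candidate functions and the separation measure are assumptions made for illustration.

```python
# Candidate-selection sketch for claim 3 (an interpretation, not the
# application's exact procedure): keep the alternative parameter whose
# per-group means differ the most, i.e. whose numerical trend shows a
# clear difference between categories.

from statistics import mean

def candidate_a(points):
    ys = [y for _, y in points]
    return max(ys) - min(ys)                      # absolute vertical extent

def candidate_b(points):
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    return (max(ys) - min(ys)) / (max(xs) - min(xs) + 1e-6)  # normalized extent

def pick_parameter(candidates, probe_groups):
    """probe_groups maps a rough category name to key-point sets; return the
    candidate whose group means are farthest apart."""
    def separation(fn):
        means = [mean(fn(p) for p in group) for group in probe_groups.values()]
        return max(means) - min(means)
    return max(candidates, key=separation)

if __name__ == "__main__":
    groups = {
        "narrow": [[(0, 0), (20, 0.5), (40, 0)], [(0, 0), (22, 0.6), (44, 0)]],
        "wide":   [[(0, 0), (20, 6.0), (40, 0)], [(0, 0), (22, 6.5), (44, 0)]],
    }
    print(pick_parameter([candidate_a, candidate_b], groups).__name__)
```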
4. The face feature labeling method according to claim 3, wherein before comparing the feature classification parameters with a preset classification threshold to obtain a comparison result, the face feature labeling method further comprises:
dividing a plurality of feature type regions according to the numerical trend analysis results;
and determining corresponding preset classification thresholds for the plurality of feature type regions respectively.
5. The face feature labeling method according to claim 4, wherein determining corresponding preset classification thresholds for the plurality of feature type regions respectively comprises:
acquiring a plurality of pieces of numerical trend change information corresponding to the numerical trend analysis results;
and determining, through analysis of the numerical trend change information, the preset classification thresholds respectively corresponding to the feature type regions.
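Claims 4 and 5 can be illustrated together with the following sketch, under the assumption that a "trend change" is read as a large jump between consecutive sorted parameter values; the gap size used here is an arbitrary choice.

```python
# Illustrative region division and threshold selection for claims 4-5:
# sort observed parameter values, treat large jumps as trend changes, and
# place a preset classification threshold at the midpoint of each jump.
# Each interval between thresholds then forms one feature type region.

def region_thresholds(values, min_gap=0.1):
    """Return threshold values that split `values` into feature type regions."""
    ordered = sorted(values)
    thresholds = []
    for lo, hi in zip(ordered, ordered[1:]):
        if hi - lo >= min_gap:                 # a jump in the numerical trend
            thresholds.append((lo + hi) / 2)   # threshold between two regions
    return thresholds

if __name__ == "__main__":
    params = [0.05, 0.07, 0.08, 0.31, 0.33, 0.62, 0.64]
    print(region_thresholds(params))  # two thresholds -> three regions
```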
6. The face feature labeling method according to claim 4, wherein fuzzy regions are provided between the feature type regions, and a to-be-classified threshold range corresponding to each fuzzy region is provided between two adjacent preset classification thresholds.
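A sketch of the fuzzy-region idea in claim 6 (the band width is an assumption): values falling within a small band around a preset classification threshold are not assigned a definite category but are routed to a to-be-classified bucket, for example for manual review.

```python
# Fuzzy-band classification sketch for claim 6 (margin width is assumed).

def classify_with_fuzzy_band(value, thresholds, margin=0.02):
    """Return a region index, or None when the value lies in a fuzzy band."""
    for t in thresholds:
        if abs(value - t) <= margin:
            return None                        # inside the to-be-classified range
    return sum(value > t for t in thresholds)  # number of thresholds exceeded

if __name__ == "__main__":
    ts = [0.195, 0.475]
    for v in (0.10, 0.19, 0.30, 0.48, 0.70):
        print(v, classify_with_fuzzy_band(v, ts))  # 0, None, 1, None, 2
```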
7. The face feature labeling method according to claim 1, wherein after the feature labeling is performed on the classified sample to be labeled to obtain the target sample, the face feature labeling method further comprises:
training according to the target sample to obtain a preset image recognition model;
and recognizing a sample set through the preset image recognition model, wherein the sample set comprises a plurality of images to be recognized.
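Claim 7 does not specify the recognition model; as a stand-in, the sketch below trains a scikit-learn logistic regression on the labeled target samples, using the scalar classification parameter as the only input feature, and then runs it over an unlabeled sample set. Both the model and the feature choice are assumptions, and scikit-learn is assumed to be available.

```python
# Stand-in for the training and recognition steps of claim 7 (assumptions:
# scikit-learn, one scalar feature per sample).

from sklearn.linear_model import LogisticRegression

def train_recognition_model(target_samples):
    X = [[s["param"]] for s in target_samples]
    y = [s["label"] for s in target_samples]
    return LogisticRegression().fit(X, y)

def recognize(model, sample_set):
    return model.predict([[s["param"]] for s in sample_set])

if __name__ == "__main__":
    labeled = ([{"param": p, "label": "closed"} for p in (0.05, 0.08, 0.10)]
               + [{"param": p, "label": "open"} for p in (0.30, 0.35, 0.40)])
    unlabeled = [{"param": 0.07}, {"param": 0.33}]
    model = train_recognition_model(labeled)
    print(recognize(model, unlabeled))  # expected: ['closed' 'open']
```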
8. A face feature labeling device, characterized in that the face feature labeling device comprises:
an extraction module, configured to extract preset key points from a sample to be labeled, wherein the preset key points are used for representing facial features to be identified of a face in the sample to be labeled;
a calculation module, configured to calculate feature classification parameters according to the preset key points, wherein the feature classification parameters are determined according to the types of the facial features to be identified;
a comparison module, configured to compare the feature classification parameters with a preset classification threshold to obtain a comparison result, wherein the preset classification threshold is used for distinguishing the feature types of the facial features to be identified;
a classification module, configured to classify the sample to be labeled according to the comparison result;
and a labeling module, configured to perform feature labeling on the classified sample to be labeled to obtain a target sample.
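A schematic, non-normative mapping of the device modules of claim 8 onto a plain Python class; only the module boundaries follow the claim wording, and the bodies reuse the same hypothetical ratio parameter as the earlier sketches.

```python
# Module layout sketch for the device of claim 8 (placeholder logic only).

class FaceFeatureLabelingDevice:
    def __init__(self, threshold=0.25):
        self.threshold = threshold

    def extract(self, sample):                 # extraction module
        return sample["keypoints"]

    def compute(self, keypoints):              # calculation module
        xs = [x for x, _ in keypoints]
        ys = [y for _, y in keypoints]
        return (max(ys) - min(ys)) / (max(xs) - min(xs) + 1e-6)

    def compare(self, param):                  # comparison module
        return param > self.threshold

    def classify(self, comparison):            # classification module
        return "open" if comparison else "closed"

    def label(self, sample):                   # labeling module
        result = self.classify(self.compare(self.compute(self.extract(sample))))
        return {**sample, "label": result}

if __name__ == "__main__":
    device = FaceFeatureLabelingDevice()
    print(device.label({"keypoints": [(0, 0), (10, 4), (20, 0), (10, -4)]}))
```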
9. A terminal device, characterized in that the terminal device comprises: a memory, a processor, and a face feature labeling program stored on the memory and executable on the processor, wherein the face feature labeling program, when executed by the processor, implements the face feature labeling method of any one of claims 1 to 7.
10. A storage medium having stored thereon a computer program which, when executed by a processor, implements the face feature labeling method of any of claims 1-7.
CN202310306726.7A 2023-03-21 2023-03-21 Face feature labeling method, device, terminal and medium Pending CN116343300A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310306726.7A CN116343300A (en) 2023-03-21 2023-03-21 Face feature labeling method, device, terminal and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310306726.7A CN116343300A (en) 2023-03-21 2023-03-21 Face feature labeling method, device, terminal and medium

Publications (1)

Publication Number Publication Date
CN116343300A true CN116343300A (en) 2023-06-27

Family

ID=86880204

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310306726.7A Pending CN116343300A (en) 2023-03-21 2023-03-21 Face feature labeling method, device, terminal and medium

Country Status (1)

Country Link
CN (1) CN116343300A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116580445A (en) * 2023-07-14 2023-08-11 江西脑控科技有限公司 Large language model face feature analysis method, system and electronic equipment
CN116580445B (en) * 2023-07-14 2024-01-09 江西脑控科技有限公司 Large language model face feature analysis method, system and electronic equipment

Similar Documents

Publication Publication Date Title
CN110147726B (en) Service quality inspection method and device, storage medium and electronic device
CN110096570B (en) Intention identification method and device applied to intelligent customer service robot
US10509985B2 (en) Method and apparatus for security inspection
Chang et al. A new model for fingerprint classification by ridge distribution sequences
US20170140138A1 (en) Behavior based authentication for touch screen devices
Singh et al. Face recognition using facial symmetry
CN109376717A (en) Personal identification method, device, electronic equipment and the storage medium of face comparison
EP2360619A1 (en) Fast fingerprint searching method and fast fingerprint searching system
CN105681324B (en) Internet financial transaction system and method
CN106372624A (en) Human face recognition method and human face recognition system
CN116343300A (en) Face feature labeling method, device, terminal and medium
CN113158777A (en) Quality scoring method, quality scoring model training method and related device
CN102592142A (en) Computer-system-based handwritten signature stability evaluation method
Zhao et al. Fingerprint pre-processing and feature engineering to enhance agricultural products categorization
CN113486664A (en) Text data visualization analysis method, device, equipment and storage medium
CN110222660B (en) Signature authentication method and system based on dynamic and static feature fusion
CN109460768B (en) Text detection and removal method for histopathology microscopic image
CN116612538A (en) Online confirmation method of electronic contract content
JP6896260B1 (en) Layout analysis device, its analysis program and its analysis method
CN114677552A (en) Fingerprint detail database labeling method and system for deep learning
CN113361666A (en) Handwritten character recognition method, system and medium
CN113255582A (en) Handwriting identification method and device based on deep neural network and block chain
Tariq et al. An automated system for fingerprint classification using singular points for biometric security
Yahyatabar et al. Online signature verification: A Persian-language specific approach
TWI809343B (en) Image content extraction method and image content extraction device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination