CN104680121B - Method and device for processing face image


Info

Publication number
CN104680121B
CN104680121B
Authority
CN
China
Prior art keywords
characteristic
feature
value
negative sample
image
Prior art date
Legal status
Active
Application number
CN201310636576.2A
Other languages
Chinese (zh)
Other versions
CN104680121A (en)
Inventor
郑志昊
侯方
吴永坚
倪辉
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN201310636576.2A
Priority to PCT/CN2014/089885 (published as WO2015078261A1)
Publication of CN104680121A
Priority to HK15107064.9A (HK1206463A1)
Application granted
Publication of CN104680121B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/168 Feature extraction; Face representation
    • G06V 40/171 Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/213 Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G06F 18/2135 Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods based on approximation criteria, e.g. principal component analysis

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method and a device for processing a face image, wherein the method for processing the face image can comprise the following steps: acquiring a plurality of feature points of a preset face element in a portrait image, and calculating a feature value of the preset face element according to the plurality of feature points of the preset face element; acquiring positive/negative sample characteristic values of positive/negative sample images corresponding to preset face elements, and calculating deviation values of the characteristic values of the preset face elements and the positive/negative sample characteristic values to obtain target characteristic values; and weighting the target characteristic value according to a preset weighting strategy, determining a face image processing result, and displaying the face image processing result on a display screen. The invention can calculate the deviation value between the characteristic value of the preset face element and the corresponding positive/negative sample characteristic value, and then carry out weighting to obtain the face image processing result, thereby improving the accuracy of face image processing and the flexibility of evaluating the beauty degree of the face image.

Description

Method and device for processing face image
Technical Field
The invention relates to the field of computer image processing, in particular to a method and a device for processing a face image.
Background
With the popularization of shooting terminals such as digital cameras, smart phones and cameras, users' demands on the pictures shot by these terminals are no longer limited to simply recording a scene: users also want to edit the pictures. For a face image, for example, a user may perform editing operations such as whitening and skin smoothing to beautify the image. The continuous development of face recognition technology makes the editing of face images more flexible; a face image can even be matched against a set face model, such as a celebrity face.
In the prior art, when a user evaluates the beauty degree of a face image, the center position points of elements such as the eyes, nose and lips in the face image are obtained through face recognition technology, distance ratios between the elements are then calculated from these center points (for example, the ratio of the eye-to-nose distance to the nose-to-lips distance), and the deviation of each ratio from an aesthetic standard value of the face is computed to evaluate the beauty degree of the face image. Because the prior art reduces the position of each element in the face image to a single point, the calculation precision is low; and because only ratios of point-to-point distances are compared against aesthetic standard values, the calculation dimension is coarse. Both factors reduce the precision and flexibility of the evaluation of the beauty degree of a face image.
Disclosure of Invention
The embodiment of the invention provides a method and a device for processing a face image. The deviation value between the characteristic value of the preset face element and the corresponding positive/negative sample characteristic value can be calculated, then the face image processing result is obtained by weighting, and the accuracy of face image processing and the flexibility of evaluation of the beauty degree of the face image are improved.
The first aspect of the present invention provides a method for processing a face image, which may include:
acquiring a plurality of characteristic points of a preset face element in a portrait image, and calculating a characteristic value of the preset face element according to the acquired plurality of characteristic points of the preset face element;
acquiring positive/negative sample characteristic values of positive/negative sample images corresponding to the preset face elements, and calculating deviation values of the characteristic values of the preset face elements and the positive/negative sample characteristic values to obtain target characteristic values;
and weighting the target characteristic value according to a preset weighting strategy, determining a face image processing result, and displaying the face image processing result on a display screen.
A second aspect of the present invention provides a device for processing a face image, which may include:
the human face element processing module is used for acquiring a plurality of characteristic points of a preset human face element in a portrait image and calculating a characteristic value of the preset human face element according to the acquired plurality of characteristic points of the preset human face element;
the characteristic value processing module is used for acquiring the positive/negative sample characteristic values of the positive/negative sample images corresponding to the preset face elements, and calculating the deviation values of the characteristic values of the preset face elements and the positive/negative sample characteristic values to obtain target characteristic values;
and the image result processing module is used for weighting the target characteristic value according to a preset weighting strategy, determining a face image processing result and displaying the face image processing result on a display screen.
The embodiment of the invention has the following beneficial effects:
the embodiment of the invention can calculate the characteristic value of the preset face element according to a plurality of characteristic points of the preset face element, calculate the deviation value between the characteristic value of the preset face element and the characteristic value of the positive/negative sample, carry out weighting according to the preset weighting strategy to obtain the face image processing result, and display the face image processing result on the display screen, thereby improving the precision of portrait processing and the flexibility of evaluation of the beauty degree of the face image.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
Fig. 1 is a flowchart of a method for processing a face image according to an embodiment of the present invention;
fig. 2 is a schematic diagram of feature points of a face image according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of a face image processing apparatus according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of a face element processing module according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In the embodiment of the present invention, the processing device for a face image includes, but is not limited to, a terminal device; the processing device may also be a client module in a terminal device, for example an image processing client. The face image may be an image containing a face captured by a terminal device capable of shooting, an image containing a face obtained through other approaches (such as drawing), or an image containing a face recognized by a terminal device. Specifically, the process by which the processing device recognizes an image containing a face may include:
(1) A large number (on the order of ten thousand or more) of face images and non-face images are collected through the offline training module of a face detection system, Haar features are extracted from each, and an adaptive boosting (AdaBoost) classifier selects the optimal Haar features with their corresponding thresholds and weights to form a cascaded strong classifier.
(2) An image is input and decoded, and the decoded image data is sent to the face detection system.
(3) The online classification module of the face detection system performs a multi-scale spatial search over the decoded image data using windows of different sizes and positions and extracts Haar features; the features of each search window are input into the cascaded strong classifier to judge whether the window contains a face; finally, all judgment results are merged, and the face position and size are output to obtain the face image.
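As an illustration of steps (2) and (3), the following is a minimal sketch using OpenCV's pretrained frontal-face Haar cascade. The patent trains its own cascade, so the use of OpenCV, its bundled model file, the input file name and the search parameters are all assumptions for illustration only.

```python
import cv2

# Load OpenCV's bundled pretrained Haar cascade (an assumption; the
# patent describes training a custom cascade offline).
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

image = cv2.imread("portrait.jpg")              # decode the input image
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Multi-scale window search; each window is scored by the cascade.
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in faces:                      # face position and size
    face_image = image[y:y + h, x:x + w]        # crop the detected face
```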
The embodiment of the invention provides a method and a device for processing a face image, which can calculate a characteristic value of a preset face element according to a plurality of characteristic points of the preset face element, calculate a deviation value between the characteristic value of the preset face element and a characteristic value of a positive/negative sample, perform weighting according to a preset weighting strategy to obtain a face image processing result, and display the face image processing result on a display screen, so that the accuracy of portrait processing and the flexibility of beauty evaluation of the face image are improved. For example, the processing method of the face image according to the embodiment of the present invention may be applied to scoring a face in a photograph, and the like.
The following describes in detail a method for processing a face image according to an embodiment of the present invention with reference to fig. 1 to 2.
Referring to fig. 1, it is a flowchart of a method for processing a face image according to an embodiment of the present invention; the method may comprise the steps of: s101 to S103.
S101, acquiring a plurality of characteristic points of a preset face element in a portrait image, and calculating a characteristic value of the preset face element according to the acquired plurality of characteristic points of the preset face element.
As an alternative implementation manner, in step S101, a plurality of feature points of a preset face element in the portrait image are obtained, where the preset face element includes, but is not limited to: left eye, right eye, left eyebrow, right eyebrow, nose, mouth, and human face edge.
As an optional implementation manner, the plurality of feature points of the preset face element may be a plurality of feature points obtained by processing the preset face element in the face image through a preset face matching template, where the preset face matching template is implemented by an Active Shape Model (ASM) in the prior art.
The ASM is built on a Point Distribution Model (PDM): the statistical distribution of the feature points is learned from training image samples, together with the directions in which each feature point is allowed to vary, so that the position of the corresponding feature point can be searched for on a target image. For the training samples, the positions of all feature points are marked manually and their coordinates recorded, and a local gray-level model is computed for each feature point as the feature vector used to adjust that point. The trained model is placed on a target image; when searching for the next position of each feature point, the local gray-level model is used to find, along the designated direction of the current feature point, the candidate with the minimum Mahalanobis distance to the local gray-level model, and that candidate becomes the position to which the current feature point should move. This position is called the suggested point (suggested feature point). Finding all suggested points yields a suggested shape; the current model is then adjusted toward the suggested shape by adjusting parameters, and the iteration is repeated until convergence. In the embodiment of the invention, the face image is the target image, and the plurality of feature points are obtained by processing the preset face elements in the face image through the preset face matching template.
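The following is a highly simplified sketch of one ASM search iteration as described above; all inputs (the current landmarks and search normals as NumPy arrays of shape (n, 2), the trained local gray-level models, and the profile sampler) are assumptions standing in for the trained model, and the fixed search range of ±3 steps is illustrative.

```python
import numpy as np

def asm_search_iteration(landmarks, normals, sample_profile, gray_models):
    """One simplified ASM search iteration: for every current landmark,
    score candidate positions along its normal with the Mahalanobis
    distance of their local gray-level profile, and take the best
    candidate as the suggested point."""
    suggested = []
    for i, point in enumerate(landmarks):
        mean, cov_inv = gray_models[i]          # trained local gray model
        best, best_dist = point, np.inf
        for step in range(-3, 4):               # search window on the normal
            candidate = point + step * normals[i]
            profile = sample_profile(candidate, normals[i])
            diff = profile - mean
            dist = float(diff @ cov_inv @ diff) # Mahalanobis distance
            if dist < best_dist:
                best, best_dist = candidate, dist
        suggested.append(best)
    # The suggested shape; the model parameters are then adjusted toward
    # it and the iteration repeats until convergence.
    return np.asarray(suggested)
```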
As an alternative implementation, the number of feature points of the preset face elements in the portrait image may be a preset number, for example 88, 99 or 155 in total. The specific number of feature points depends on the training image samples used by the preset face matching template: if the training image samples use 88 feature points in total, the preset face elements in the portrait image also have 88 feature points in total. In general, the larger the number of feature points, the more accurate the image processing.
As an alternative implementation, fig. 2 is a schematic diagram of the feature points of a face image according to an embodiment of the present invention: fig. 2(a) shows all feature points of the face image, 88 feature points in total; fig. 2(b) shows the feature points of the face edge, comprising 21 feature points, from feature point 68 to feature point 88; fig. 2(c) shows the feature points of the left eyebrow, comprising 8 feature points, from feature point 1 to feature point 8; fig. 2(d) shows the feature points of the right eyebrow, comprising 8 feature points, from feature point 9 to feature point 16; fig. 2(e) shows the feature points of the left eye, comprising 8 feature points, from feature point 17 to feature point 24; fig. 2(f) shows the feature points of the right eye, comprising 8 feature points, from feature point 25 to feature point 32; fig. 2(g) shows the feature points of the nose, comprising 13 feature points, from feature point 33 to feature point 45; and fig. 2(h) shows the feature points of the mouth, comprising 22 feature points, from feature point 46 to feature point 67.
As an optional implementation of step S101, the feature value of a preset face element is calculated from its acquired feature points, specifically by computing a corresponding area, gray value and the like from those feature points. For example, as shown in fig. 2(c), the left eyebrow comprises 8 feature points, from feature point 1 to feature point 8; taking feature point 8 as the apex, triangles are formed with each pair of adjacent feature points among feature points 1 to 7, and the areas of these triangles are calculated and summed to obtain the area value of the left eyebrow region. As shown in fig. 2(e), the left eye comprises 8 feature points, from feature point 17 to feature point 24, and the gray values in the linear region between feature point 17 and feature point 21 are calculated.
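A small sketch of the left-eyebrow area computation described above; the coordinates are hypothetical, since the actual landmark positions come from the face matching template.

```python
def triangle_area(a, b, c):
    # Half the absolute cross product of the two edge vectors.
    return abs((b[0] - a[0]) * (c[1] - a[1])
               - (c[0] - a[0]) * (b[1] - a[1])) / 2.0

def fan_area(points):
    """Region area computed as the triangle fan described in the text:
    the last point is the apex, each pair of adjacent remaining points
    closes one triangle, and the triangle areas are summed."""
    apex = points[-1]
    return sum(triangle_area(apex, points[i], points[i + 1])
               for i in range(len(points) - 2))

# Hypothetical coordinates for left-eyebrow feature points 1..8
# (feature point 8, the apex, is last):
left_eyebrow = [(10, 20), (14, 18), (18, 17), (22, 17),
                (26, 18), (30, 20), (34, 22), (22, 23)]
left_eyebrow_area = fan_area(left_eyebrow)
```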
S102, acquiring positive/negative sample characteristic values of the positive/negative sample images corresponding to the preset face elements, and calculating deviation values of the characteristic values of the preset face elements and the positive/negative sample characteristic values to obtain target characteristic values.
As an optional implementation manner, in step S102, a positive/negative sample feature value of a positive/negative sample image corresponding to a preset face element is obtained, specifically, the positive/negative sample image is obtained by performing feature extraction on sample images in a preset image library, and classifying according to the preset face element to obtain the positive/negative sample image corresponding to the preset face element. Positive/negative sample images are, for example, a positive sample image of a large eye, a negative sample image of a small eye, a positive sample image of a large nose, a negative sample image of a small nose, and the like.
Specifically, the positive/negative sample feature values are obtained by processing the positive/negative sample images corresponding to a preset face element through the preset face matching template to obtain feature points, and then calculating the feature values from those feature points; examples are the eye feature value of the large-eye positive sample image, the eye feature value of the small-eye negative sample image, the nose feature value of the large-nose positive sample image, and the nose feature value of the small-nose negative sample image.
As an alternative implementation of step S102, the deviation value between the feature value of each preset face element and the corresponding positive/negative sample feature value is calculated to obtain a target feature value. The target feature values may include: an eye feature value, a pupil feature value, an eyebrow feature value, a nose feature value, a mouth feature value, a fair skin feature value, or a smooth skin feature value. Optionally, a target feature value may be calculated as: (preset face element feature value - negative sample feature value) / (positive sample feature value - negative sample feature value).
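The deviation calculation in step S102 reduces to a single normalization; a minimal sketch, with hypothetical sample values in the usage line:

```python
def target_feature_value(element_value, positive_value, negative_value):
    """(preset face element feature value - negative sample feature value)
    / (positive sample feature value - negative sample feature value)."""
    return (element_value - negative_value) / (positive_value - negative_value)

# e.g. an eye feature value with hypothetical sample feature values:
T01 = target_feature_value(0.062, positive_value=0.075, negative_value=0.040)
```

A result near 1 means the element is close to the positive sample; near 0, close to the negative sample.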
S103, weighting the target characteristic value according to a preset weighting strategy, determining a face image processing result, and displaying the face image processing result on a display screen.
As an alternative implementation of step S103, the target feature values are weighted according to the preset weighting policy, the face image processing result is determined, and the result is displayed on the display screen. Specifically, the preset weighting policy may be determined according to the gender of the face image and/or according to preset weighting scores, and the obtained face image processing result is displayed on the display screen. Further optionally, the face image processing result may be displayed on the display screen according to a preset display template; the displayed result may include, for example, the face image, the total evaluation score of the face image and the evaluation result of each target feature value, with a preset display template such as: "Your facial beauty score is XX (out of 100); your eyes are large; your skin smoothness and fairness exceed XX%", and so on.
Further optionally, after step S101, the method for processing a face image in the embodiment of the present invention may further include:
and analyzing and determining the gender of the face image according to a preset gender determination template.
As an optional implementation, the preset gender determination template is obtained as follows: training image samples are preprocessed (for example, light compensation, rotation correction, and the like) and Gabor features are extracted; the information of each training image sample is converted from a two-dimensional matrix into a one-dimensional vector; feature dimension reduction is performed on the feature vectors of the training image samples to reduce complexity; and the result is input into a Support Vector Machine (SVM) classifier for training and recognition. In the embodiment of the present invention, the face image is imported for recognition, and the gender of the portrait image is recognized according to the preset gender determination template.
Optionally, in step S103, the target feature value is weighted according to a preset weighting policy, and a face image processing result is determined, where the preset weighting policy includes: and determining a weighting term from the eye characteristic value, the pupil characteristic value, the eyebrow characteristic value, the nose characteristic value, the mouth characteristic value, the fair skin characteristic value and the smooth skin characteristic value in the target characteristic value according to the gender of the portrait image.
Further optionally, before step S101, the method for processing a face image in the embodiment of the present invention may further include:
and extracting the characteristics of the sample images in the preset image library, and classifying according to preset face elements to obtain positive/negative sample images corresponding to the preset face elements.
As an optional implementation, feature extraction is performed on the sample images in the preset image library; optionally, the feature extraction may be performed through the preset face matching template, and the sample images are classified according to the preset face elements to obtain the positive/negative sample images corresponding to each preset face element, for example a large-eye positive sample image, a small-eye negative sample image, a large-nose positive sample image, a small-nose negative sample image, and the like. Further optionally, the positive/negative sample images in the preset image library may be updated in real time: for example, if feature extraction and comparison show that the eyes in a newly acquired sample image 002 are larger than the eyes in the current large-eye positive sample image 001, sample image 001 is replaced with sample image 002, and sample image 002 becomes the new large-eye positive sample image.
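A sketch of the real-time positive-sample update described above, under the assumption that the library stores, per face element, the current positive sample image together with its feature value; extract_value is a hypothetical helper standing in for template matching plus feature-value computation.

```python
def maybe_replace_positive_sample(library, element, new_image, extract_value):
    """Replace the stored positive sample for `element` if the newly
    acquired image scores higher on that element's feature value."""
    candidate_value = extract_value(new_image, element)
    if candidate_value > library[element]["value"]:
        library[element] = {"image": new_image, "value": candidate_value}
```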
The embodiment of the invention can calculate the feature value of a preset face element according to a plurality of feature points of that element. Because each preset face element corresponds to a plurality of feature points, the calculation precision is improved; and because a feature value is calculated for each preset face element rather than merely the ratio of distances between elements, the calculation dimension is increased. The deviation value between the feature value of each preset face element and the positive/negative sample feature value of the corresponding positive/negative sample image is calculated, where the positive/negative sample images are obtained by extracting features from the sample images in the preset image library and classifying them according to the preset face elements. Weighting is then performed according to the preset weighting policy to obtain the face image processing result, which is displayed on the display screen, thereby improving the accuracy of face image processing and the flexibility of evaluating the beauty degree of the face image.
Further optionally, the following describes details of the facial image processing process according to the embodiment of the present invention with reference to fig. 2.
As an alternative implementation, a plurality of feature points of the preset face elements in the portrait image are obtained, as shown in fig. 2 and enumerated above (88 feature points in total: face edge 68-88, left eyebrow 1-8, right eyebrow 9-16, left eye 17-24, right eye 25-32, nose 33-45, mouth 46-67).
As an optional implementation manner, calculating a feature value of the preset face element according to the obtained multiple feature points of the preset face element may specifically include:
respectively calculating the area of a feature region of a left eye, the area of a feature region of a right eye and the area of a feature region of a face edge according to a plurality of feature points of a preset face element;
comparing the feature region area of the left eye with that of the right eye, and determining whichever of the two has the larger feature region area as the target eye;
and calculating the ratio of the area of the characteristic region corresponding to the target eye to the area of the characteristic region at the edge of the human face to obtain a first characteristic value.
The positive/negative sample feature values of the positive/negative sample images corresponding to the preset face elements may include a first positive sample feature value corresponding to the large-eye image and a first negative sample feature value corresponding to the small-eye image; the eye feature value may then be calculated as: (first feature value - first negative sample feature value) / (first positive sample feature value - first negative sample feature value).
As an alternative embodiment, as shown in fig. 2(b), 2(e) and 2(f), fig. 2(b) is a schematic diagram of feature points of the edge of the face, which includes 21 feature points from the feature points 68 to 88, and calculates half of the area of a polygonal region surrounded by the feature points 68 to 88 in fig. 2(b), which is denoted as S00; fig. 2(e) is a schematic diagram of feature points of the left eye, which includes 8 feature points from feature point 17 to feature point 24, and calculates the area of a polygonal region surrounded by feature points 17 to feature point 24, and is denoted as S11; fig. 2(f) is a schematic diagram of the feature points of the right eye, which includes 8 feature points from 25 to 32, and calculates the area of the polygonal region surrounded by the feature points from 25 to 32, and is denoted as S12.
Calculating the difference between S11 and S12, i.e. M01 is equal to S11-S12, if M01 is greater than zero, the left eye corresponding to S11 is the target eye, and if M01 is less than zero, the right eye corresponding to S12 is the target eye.
The first feature value is D01 = max(S11, S12)/S00. With the first positive sample feature value for the large-eye image denoted P10 and the first negative sample feature value for the small-eye image denoted P11, the eye feature value is T01 = (D01 - P11)/(P10 - P11).
Optionally, the method for calculating the first positive sample feature value P10 corresponding to the large-eye image is the same as the method for calculating the first feature value D01, and the method for calculating the first negative sample feature value P11 corresponding to the small-eye image is the same as the method for calculating the first feature value D01, and is not repeated.
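Combining the helpers sketched earlier (fan_area and target_feature_value), the eye feature value computation reads as follows; the landmark lists and the sample feature values P10/P11 are hypothetical inputs, and as a sketch the same fan approximation is applied to the face-edge polygon.

```python
def eye_feature_value(left_eye, right_eye, face_edge, P10, P11):
    """First feature value D01 and eye feature value T01, reusing
    fan_area and target_feature_value from the earlier sketches."""
    S11 = fan_area(left_eye)            # left-eye region area
    S12 = fan_area(right_eye)           # right-eye region area
    S00 = fan_area(face_edge) / 2.0     # half the face-edge polygon area
    D01 = max(S11, S12) / S00           # first feature value
    return target_feature_value(D01, P10, P11)
```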
As an optional implementation manner, the calculating the feature value of the preset face element according to the obtained multiple feature points of the preset face element may specifically further include:
respectively calculating the area of a feature region of the left eyebrow, the area of a feature region of the right eyebrow and the area of a feature region of the face edge according to a plurality of feature points of preset face elements;
comparing the feature region area of the left eyebrow with that of the right eyebrow, and determining whichever of the two has the larger feature region area as the target eyebrow;
calculating the ratio of the area of the characteristic region corresponding to the target eyebrow to the area of the characteristic region at the edge of the face to obtain a second characteristic value;
the positive/negative sample feature values of the positive/negative sample images corresponding to the preset face elements may include a second positive sample feature value corresponding to the thick-eyebrow image and a second negative sample feature value corresponding to the thin-eyebrow image; the eyebrow feature value may be calculated as: (second feature value - second negative sample feature value) / (second positive sample feature value - second negative sample feature value).
As an alternative embodiment, as shown in fig. 2(b) to 2(d), fig. 2(b) is a schematic diagram of feature points of the edge of the human face, which includes 21 feature points from the feature points 68 to 88, and calculates half the area of a polygonal region surrounded by the feature points 68 to 88 in fig. 2(b), which is denoted as S00; fig. 2(c) is a schematic diagram of feature points of the left eyebrow, including 8 feature points from feature point 1 to feature point 8, and the area of a polygonal region surrounded by feature points 1 to feature point 8 is calculated and recorded as S21; fig. 2(d) is a schematic diagram of the feature points of the right eyebrow, which includes 8 feature points from feature point 9 to feature point 16, and calculates the area of the polygonal region surrounded by feature points 9 to feature point 16, and is denoted as S22.
And calculating the difference between S21 and S22, namely M02 is equal to S21-S22, if M02 is larger than zero, the left eyebrow corresponding to S21 is the target eyebrow, and if M02 is smaller than zero, the right eyebrow corresponding to S22 is the target eyebrow.
The second feature value is D02 = max(S21, S22)/S00. With the second positive sample feature value corresponding to the thick-eyebrow image denoted P20 and the second negative sample feature value corresponding to the thin-eyebrow image denoted P21, the eyebrow feature value is T02 = (D02 - P21)/(P20 - P21).
Optionally, the method for calculating the second positive sample feature value P20 corresponding to the thick eyebrow image is the same as the method for calculating the second feature value D02, and the method for calculating the second negative sample feature value P21 corresponding to the thin eyebrow image is the same as the method for calculating the second feature value D02, and therefore the method is not repeated.
As an optional implementation manner, calculating the feature value of the preset face element according to the obtained multiple feature points of the preset face element may specifically include:
respectively calculating the area of a nose feature region and the area of a feature region of a face edge according to a plurality of feature points of a preset face element;
and calculating the ratio of the area of the nose feature region to the area of the feature region of the edge of the face to obtain a third feature value.
The positive/negative sample feature values of the positive/negative sample images corresponding to the preset face elements may include a third positive sample feature value corresponding to the large-nose image and a third negative sample feature value corresponding to the small-nose image; the nose feature value may be calculated as: (third feature value - third negative sample feature value) / (third positive sample feature value - third negative sample feature value).
As an alternative embodiment, as shown in fig. 2(b) and 2(g), fig. 2(b) is a schematic diagram of feature points of the edge of the human face, which includes 21 feature points from the feature point 68 to the feature point 88, and the area of a polygonal region surrounded by the feature points 68 to 88 in fig. 2(b) is calculated and is denoted as S01; fig. 2(g) is a schematic diagram of the feature points of the nose, including 13 feature points from the feature point 33 to the feature point 45, and the area of the polygonal region surrounded by the feature point 34 to the feature point 45 is calculated and denoted as S31.
The third feature value is D03 = S31/S01. With the third positive sample feature value corresponding to the large-nose image denoted P30 and the third negative sample feature value corresponding to the small-nose image denoted P31, the nose feature value is T03 = (D03 - P31)/(P30 - P31).
Optionally, the method for calculating the third positive sample feature value P30 corresponding to the large-nose image is the same as the method for calculating the third feature value D03, and the method for calculating the third negative sample feature value P31 corresponding to the small-nose image is the same as the method for calculating the third feature value D03, and is not repeated.
As an optional implementation manner, calculating the feature value of the preset face element according to the obtained multiple feature points of the preset face element may specifically include:
respectively calculating the area of the characteristic region of the left eye and the area of the characteristic region of the right eye according to a plurality of characteristic points of a preset face element;
comparing the feature region area of the left eye with that of the right eye, and determining whichever of the two has the larger feature region area as the target eye;
and acquiring a gray value of the target eye and a gray value of the pupil corresponding to the target eye according to the plurality of characteristic points of the preset face elements, and calculating the ratio of the gray value of the pupil to the gray value of the target eye to obtain a fourth characteristic value.
The positive/negative sample feature values of the positive/negative sample images corresponding to the preset face elements may include a fourth positive sample feature value corresponding to the large-pupil image and a fourth negative sample feature value corresponding to the small-pupil image; the pupil feature value may be calculated as: (fourth feature value - fourth negative sample feature value) / (fourth positive sample feature value - fourth negative sample feature value).
As an alternative embodiment, as shown in fig. 2(b), 2(e) and 2(f), fig. 2(b) is a schematic diagram of the feature points of the face edge, comprising 21 feature points from feature point 68 to feature point 88, and half of the area of the polygonal region surrounded by feature points 68 to 88 is calculated and recorded as S00; fig. 2(e) is a schematic diagram of the feature points of the left eye, comprising 8 feature points from feature point 17 to feature point 24, and the area of the polygonal region surrounded by feature points 17 to 24 is calculated and recorded as S11; fig. 2(f) is a schematic diagram of the feature points of the right eye, comprising 8 feature points from feature point 25 to feature point 32, and the area of the polygonal region surrounded by feature points 25 to 32 is calculated and recorded as S12.
Calculating the difference between S11 and S12, i.e. M01 is equal to S11-S12, if M01 is greater than zero, the left eye corresponding to S11 is the target eye, and if M01 is less than zero, the right eye corresponding to S12 is the target eye.
If the left eye is the target eye, the eye-corner feature points, namely feature point 17 and feature point 21, are obtained, a straight line segment is drawn between them, the pixel points along the path of the segment are selected, and the selected pixel points are converted into gray values (0-255). If the right eye is the target eye, the processing is the same as for the left eye and is not repeated.
The smaller the gray value, the darker the corresponding pixel; since the pupil has the smallest gray values in the eye region, the region with gray values smaller than 50 may be taken as the pupil region. The total number of pixels on the straight line segment is obtained and recorded as S41, and the number of pixels on the segment whose gray value is smaller than 50 is obtained and recorded as S42; the fourth feature value is D04 = S42/S41. With the fourth positive sample feature value corresponding to the large-pupil image denoted P41 and the fourth negative sample feature value corresponding to the small-pupil image denoted P42, the pupil feature value is T04 = (D04 - P42)/(P41 - P42).
Optionally, the method for calculating the fourth positive sample feature value P41 corresponding to the large-pupil image is the same as the method for calculating the fourth feature value D04, and the method for calculating the fourth negative sample feature value P42 corresponding to the small-pupil image is the same as the method for calculating the fourth feature value D04, and is not repeated.
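A sketch of the fourth feature value under the description above: sample the gray image along the segment between the two eye-corner landmarks and take the fraction of pixels darker than the threshold of 50 as the pupil proportion. Here gray is assumed to be a NumPy array of gray values, and the corner coordinates are hypothetical.

```python
import numpy as np

def pupil_ratio(gray, corner_a, corner_b, threshold=50):
    """Fraction of pixels on the segment corner_a -> corner_b whose gray
    value is below `threshold` (taken as pupil pixels): D04 = S42/S41."""
    n = int(np.hypot(corner_b[0] - corner_a[0],
                     corner_b[1] - corner_a[1])) + 1
    xs = np.linspace(corner_a[0], corner_b[0], n).round().astype(int)
    ys = np.linspace(corner_a[1], corner_b[1], n).round().astype(int)
    values = gray[ys, xs]                     # gray values, 0-255
    return float((values < threshold).sum() / values.size)

# D04 for the left eye, using the corner landmarks 17 and 21:
# D04 = pupil_ratio(gray, point_17, point_21)
```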
As an optional implementation manner, the calculating the feature value of the preset face element by using the obtained plurality of feature points of the preset face element may specifically further include:
acquiring a skin characteristic area according to a plurality of characteristic points of preset face elements;
acquiring an average gray value of the skin characteristic region to obtain a fifth characteristic value;
the positive/negative sample feature values of the positive/negative sample images corresponding to the preset face elements may include a fifth positive sample feature value corresponding to the fair-skin image and a fifth negative sample feature value corresponding to the dark-skin image; the fair skin feature value may be calculated as: (fifth feature value - fifth negative sample feature value) / (fifth positive sample feature value - fifth negative sample feature value).
As an alternative embodiment, as shown in fig. 2, the skin feature region is selected according to feature point 19 in fig. 2(e) and feature point 46 in fig. 2(h); for example, a skin sample of preset area may be selected with the straight line segment between feature point 19 and feature point 46 as its center line, giving the skin feature region. The skin feature region may also be selected according to feature point 27 in fig. 2(f) and feature point 52 in fig. 2(h), for example with the straight line segment between feature point 27 and feature point 52 as the center line. The pixel points of the skin feature region are acquired and converted into gray values (0-255), and the average gray value of the skin feature region is calculated and recorded as D05. With the fifth positive sample feature value corresponding to the fair-skin image denoted P51 and the fifth negative sample feature value corresponding to the dark-skin image denoted P52, the fair skin feature value is T05 = (D05 - P52)/(P51 - P52).
Optionally, the method for calculating the fifth positive sample feature value P51 corresponding to the fair-skin image is the same as the method for calculating the fifth feature value D05, and the method for calculating the fifth negative sample feature value P52 corresponding to the dark-skin image is the same as the method for calculating the fifth feature value D05, and is not repeated.
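A sketch of the fifth feature value: average the gray level of a thin band of skin centered on the segment between two landmarks (for example feature points 19 and 46). The band half-width is an assumption, since the text only specifies "a preset area".

```python
import numpy as np

def skin_whiteness(gray, p_start, p_end, half_width=3):
    """Average gray value (D05) of a band of skin whose center line is
    the segment p_start -> p_end; half_width is an assumed band size."""
    n = int(np.hypot(p_end[0] - p_start[0], p_end[1] - p_start[1])) + 1
    xs = np.linspace(p_start[0], p_end[0], n).round().astype(int)
    ys = np.linspace(p_start[1], p_end[1], n).round().astype(int)
    patches = [gray[y - half_width:y + half_width + 1,
                    x - half_width:x + half_width + 1]
               for x, y in zip(xs, ys)]
    return float(np.mean(np.concatenate([p.ravel() for p in patches])))
```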
As an optional implementation manner, the calculating the feature value of the preset face element by using the obtained plurality of feature points of the preset face element may specifically further include:
acquiring an edge feature area according to a plurality of feature points of a preset face element;
acquiring an average gray value of the edge feature area to obtain a sixth feature value;
the preset positive/negative sample characteristic values of the positive/negative sample image corresponding to the face elements comprise: the sixth positive sample characteristic value corresponding to the skin smoothness image and the sixth negative sample characteristic value corresponding to the rough skin image may be calculated as: (sixth eigenvalue-sixth negative sample eigenvalue)/(sixth positive sample eigenvalue-sixth negative sample eigenvalue).
As an alternative embodiment, an edge detector may be used to detect edges in the face image. If the face image contains spots, each spot produces a corresponding spot edge; the eyes, nose, mouth and eyebrows of the face also produce corresponding edges.
As shown in fig. 2(b), the face edge comprises 21 feature points, from feature point 68 to feature point 88. An edge detector may be used to detect edges in the region between feature points 68 and 88, and the edges of the eyes, nose and mouth are then removed according to the preset face elements to obtain the edge feature region. The gray values (0-255) of the edge feature region are acquired, and their average value is calculated to obtain the sixth feature value, recorded as D06.
Further optionally, edge detection may instead be performed on the whole face image to obtain the edge features of the whole image; the edges of the eyes, nose, mouth and eyebrows are then removed according to the preset face elements to obtain the edge feature region, the gray values (0-255) of the edge feature region are acquired, and their average value is calculated to obtain the sixth feature value, recorded as D06.
With the sixth positive sample feature value corresponding to the smooth-skin image denoted P61 and the sixth negative sample feature value corresponding to the rough-skin image denoted P62, the smooth skin feature value is T06 = (D06 - P62)/(P61 - P62).
Optionally, the method for calculating the sixth positive sample feature value P61 corresponding to the skin-smooth image is the same as the method for calculating the sixth feature value D06, and the method for calculating the sixth negative sample feature value P62 corresponding to the skin-rough image is the same as the method for calculating the sixth feature value D06, and will not be repeated.
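A sketch of the sixth feature value; the patent does not name a particular edge detector, so OpenCV's Canny detector and its thresholds are assumptions here, and face_mask is assumed to mark the face region with the eyes, nose, mouth and eyebrows already removed.

```python
import cv2
import numpy as np

def skin_smoothness(gray, face_mask):
    """Mean edge response (D06) over the masked skin region; spots on
    the skin raise the edge response, so smoother skin scores lower."""
    edges = cv2.Canny(gray, 50, 150)     # edge map of the face image
    return float(np.mean(edges[face_mask > 0]))
```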
As an optional implementation manner, calculating the feature value of the preset face element by using the obtained multiple feature points of the preset face element may specifically further include:
calculating the center distance between the left eye and the right eye according to a plurality of feature points of preset face elements to obtain a center distance value of two eyes;
calculating the center distance between the left mouth corner and the right mouth corner of the mouth according to a plurality of feature points of preset face elements to obtain a mouth corner center width value;
calculating the ratio of the width value of the center of the mouth angle to the distance value between the centers of the two eyes to obtain a seventh characteristic value;
the preset positive/negative sample characteristic values of the positive/negative sample image corresponding to the face elements comprise: the mouth feature value may be calculated by the following formula: (seventh eigenvalue-seventh negative sample eigenvalue)/(seventh positive sample eigenvalue-seventh negative sample eigenvalue).
As an alternative embodiment, as shown in fig. 2(h), the mouth comprises 22 feature points, from feature point 46 to feature point 67; the center distance between the left and right mouth corners, i.e. the length between feature point 46 and feature point 52, is calculated to obtain the mouth corner center width value, recorded as L1. As shown in fig. 2(e) and fig. 2(f), the left eye comprises 8 feature points from feature point 17 to feature point 24 and the right eye comprises 8 feature points from feature point 25 to feature point 32; the center position O1 of the left eye is obtained from feature point 17 and feature point 21, the center position O2 of the right eye is obtained from feature point 25 and feature point 29, and the distance between O1 and O2 is the center distance of the two eyes, recorded as L2. The seventh feature value is D07 = L1/L2. With the seventh positive sample feature value for the small-mouth image denoted P71 and the seventh negative sample feature value for the large-mouth image denoted P72, the mouth feature value is T07 = (D07 - P72)/(P71 - P72).
Optionally, the method for calculating the seventh positive sample feature value P71 corresponding to the small mouth image is the same as the method for calculating the seventh feature value D07, and the method for calculating the seventh negative sample feature value P72 corresponding to the large mouth image is the same as the method for calculating the seventh feature value D07, and therefore the method is not repeated.
Further optionally, an eighth feature value corresponding to three-stop uniformity may also be calculated. As shown in fig. 2(e) and fig. 2(f), the left eye comprises 8 feature points from feature point 17 to feature point 24 and the right eye comprises 8 feature points from feature point 25 to feature point 32; the center position O3 of the two inner eye corners is obtained from feature point 21 and feature point 29. As shown in fig. 2(g), the nose comprises 13 feature points from feature point 33 to feature point 45; the distance between O3 and feature point 33, the nose tip point, is calculated and recorded as L3. As shown in fig. 2(h), the mouth comprises 22 feature points from feature point 46 to feature point 67; the distance between feature point 33 in fig. 2(g) and feature point 60, the midpoint of the upper edge of the lower lip in fig. 2(h), is calculated and recorded as L4. As shown in fig. 2(b), the face edge comprises 21 feature points from feature point 68 to feature point 88; the distance between feature point 60 and feature point 78, the lowest point of the chin in fig. 2(b), is calculated and recorded as L5. The eighth feature value D08 is the variance of L3, L4 and L5:
D08 = [(L3 - Lavg)^2 + (L4 - Lavg)^2 + (L5 - Lavg)^2] / 3, where Lavg = (L3 + L4 + L5) / 3.
With the eighth positive sample feature value corresponding to the three-stop uniform image denoted P81 and the eighth negative sample feature value corresponding to the three-stop non-uniform image denoted P82, the three-stop uniformity feature value is T08 = (D08 - P82)/(P81 - P82).
Optionally, the method for calculating the eighth positive sample feature value P81 corresponding to the three-stop uniform image is the same as the method for calculating the eighth feature value D08, and the method for calculating the eighth negative sample feature value P82 corresponding to the three-stop non-uniform image is the same as the method for calculating the eighth feature value D08, and is not repeated.
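Worked through with hypothetical distances, the eighth feature value is simply the variance of the three vertical distances:

```python
L3, L4, L5 = 62.0, 58.5, 60.1                     # hypothetical distances
L_avg = (L3 + L4 + L5) / 3.0
D08 = ((L3 - L_avg) ** 2 + (L4 - L_avg) ** 2 + (L5 - L_avg) ** 2) / 3.0
```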
Further optionally, a ninth feature value corresponding to face fatness/thinness may also be calculated. As shown in fig. 2(b), the face edge comprises 21 feature points, from feature point 68 to feature point 88; the included angle between feature point 68 and feature point 88, with feature point 78 as the vertex, is calculated and recorded as α, so the ninth feature value is D09 = α. With the ninth positive sample feature value corresponding to the thin-face image denoted P91 and the ninth negative sample feature value corresponding to the fat-face image denoted P92, the face fatness/thinness feature value is T09 = (D09 - P92)/(P91 - P92).
Optionally, the method for calculating the ninth positive sample feature value P91 corresponding to the thin-face image is the same as the method for calculating the ninth feature value D09, and the method for calculating the ninth negative sample feature value P92 corresponding to the fat-face image is the same as the method for calculating the ninth feature value D09, and is not repeated.
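A sketch of the ninth feature value: the angle at the chin landmark (feature point 78) between the rays toward the two uppermost face-edge landmarks (feature points 68 and 88); the coordinates passed in are hypothetical.

```python
import math

def chin_angle(p68, p88, p78):
    """Included angle alpha (D09) at feature point 78, in radians."""
    a1 = math.atan2(p68[1] - p78[1], p68[0] - p78[0])
    a2 = math.atan2(p88[1] - p78[1], p88[0] - p78[0])
    angle = abs(a1 - a2)
    return min(angle, 2.0 * math.pi - angle)
```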
Further optionally, the target feature values of the face image obtained by the above calculations include: the eye feature value T01, the eyebrow feature value T02, the nose feature value T03, the pupil feature value T04, the fair skin feature value T05, the smooth skin feature value T06 and the mouth feature value T07; further optionally, the target feature values may also include the three-stop uniformity feature value T08 and the face fatness/thinness feature value T09. The target feature values calculated as above normally lie between 0 and 1: the closer to 0, the closer the element is to the negative sample feature value; the closer to 1, the closer to the positive sample feature value. For example, an eye feature value greater than 1 indicates that the eyes in the face image are larger than those in the large-eye positive sample image.
As an optional implementation, the target feature values may be weighted according to a preset weighting policy to determine the face image processing result, where the preset weighting policy includes: determining the weighting terms from the eye feature value, pupil feature value, eyebrow feature value, nose feature value, mouth feature value, fair skin feature value and smooth skin feature value among the target feature values according to the gender of the portrait image. An example preset weighting policy is shown in the following table, where y indicates that the weighting term is selected for a face image of that gender, and n indicates that it is not.
Weighting term | Male | Female
Eye feature value T01 | y | y
Eyebrow feature value T02 | y | n
Nose feature value T03 | n | y
Pupil feature value T04 | y | y
Fair skin feature value T05 | n | y
Smooth skin feature value T06 | y | y
Mouth feature value T07 | n | y
Three-stop uniformity feature value T08 | y | y
Face fatness/thinness feature value T09 | y | y
For example, if the gender of the face image is identified as male, the corresponding weighting terms are: the eye feature value T01, eyebrow feature value T02, pupil feature value T04, smooth skin feature value T06, three-stop uniformity feature value T08 and face fatness/thinness feature value T09.
If the gender of the face image is identified as female, the corresponding weighting terms are: the eye feature value T01, nose feature value T03, pupil feature value T04, fair skin feature value T05, smooth skin feature value T06, mouth feature value T07, three-stop uniformity feature value T08 and face fatness/thinness feature value T09.
In other embodiments, the preset weighting policy may be implemented in various ways and may take other forms; it is not specifically limited by this embodiment.
As an alternative embodiment, the target feature values are weighted according to the following preset weighting formula: G = 40 + min(T01, T02, T03, ..., T0n) × 30 + (sum(T01, T02, T03, ..., T0n) - min(T01, T02, T03, ..., T0n)) × 30, where min(T01, T02, T03, ..., T0n) is the minimum value among all the determined weighting terms and sum(T01, T02, T03, ..., T0n) is the total value of all the determined weighting terms.
In the embodiment of the present invention, if the gender of the face image is identified as male, G00 = 40 + min(T01, T02, T04, T06, T08, T09) × 30 + (sum(T01, T02, T04, T06, T08, T09) - min(T01, T02, T04, T06, T08, T09)) × 30. If the gender of the face image is identified as female, G11 = 40 + min(T01, T03, T04, T05, T06, T07, T08, T09) × 30 + (sum(T01, T03, T04, T05, T06, T07, T08, T09) - min(T01, T03, T04, T05, T06, T07, T08, T09)) × 30.
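For illustration, a minimal sketch of this gender-dependent scoring step in Python; the helper name and the numeric feature values are assumptions for the example, not values disclosed by the patent:

```python
def beauty_score(weighting_terms):
    """Weighted score per the formula above:
    G = 40 + min(terms) * 30 + (sum(terms) - min(terms)) * 30."""
    m = min(weighting_terms)
    return 40 + m * 30 + (sum(weighting_terms) - m) * 30

# Weighting terms selected by gender (see table above); values are illustrative.
male_terms = {"T01": 0.8, "T02": 0.6, "T04": 0.7, "T06": 0.9, "T08": 0.5, "T09": 0.7}
female_terms = {"T01": 0.8, "T03": 0.6, "T04": 0.7, "T05": 0.9,
                "T06": 0.9, "T07": 0.6, "T08": 0.5, "T09": 0.7}

print(beauty_score(male_terms.values()))    # G00 for a male face image
print(beauty_score(female_terms.values()))  # G11 for a female face image
```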
In the embodiment of the present invention, the face image processing process is detailed in combination with a specific schematic diagram of the feature points of a face image. The feature value of a preset face element can be calculated according to a plurality of feature points of that element; because each preset face element corresponds to a plurality of feature points, calculation accuracy is improved. A feature value is calculated for each preset face element and is no longer limited to a ratio of distances between preset face elements, which increases the calculation dimensionality. The deviation value between the feature value of a preset face element and the positive/negative sample feature value of the positive/negative sample image corresponding to that element is calculated to obtain a target feature value, where the target feature value includes: an eye feature value, a pupil feature value, an eyebrow feature value, a nose feature value, a mouth feature value, a fair skin feature value, or a skin smoothness feature value, and the positive/negative sample images are obtained by performing feature extraction on the sample images in a preset image library and classifying them according to the preset face elements. The target feature values are then weighted according to a preset weighting strategy to obtain a face image processing result, which is displayed on a display screen, thereby improving the accuracy of face image processing and the flexibility of evaluating the beauty degree of a face image.
The following describes in detail a face image processing apparatus according to an embodiment of the present invention with reference to fig. 3. It should be noted that the processing apparatus for a face image shown in fig. 3 is used for executing the method of the embodiment shown in fig. 1; for convenience of description, only the portions related to the embodiment of the present invention are shown. For technical details that are not disclosed here, please refer to the embodiment of the present invention shown in fig. 1.
Fig. 3 is a schematic structural diagram of a device for processing a face image according to an embodiment of the present invention; the apparatus may include: a face element processing module 301, a feature value processing module 302 and an image result processing module 303.
The face element processing module 301 is configured to obtain a plurality of feature points of a preset face element in a portrait image, and calculate a feature value of the preset face element according to the obtained plurality of feature points of the preset face element.
As an alternative implementation, the face element processing module 301 obtains a plurality of feature points of preset face elements in the portrait image, where the preset face elements include, but are not limited to: left eye, right eye, left eyebrow, right eyebrow, nose, mouth, and human face edge.
As an optional implementation manner, the plurality of feature points of the preset face element may be a plurality of feature points obtained by processing the preset face element in the face image through a preset face matching template, where the preset face matching template is implemented by an Active Shape Model (ASM) in the prior art.
As an alternative implementation, the number of feature points of the preset face elements in the portrait image may be a preset number, for example 88, 99, or 155 in total. The specific number of feature points is determined by the training image samples used for the preset face matching template: if the training image samples contain 88 feature points in total, then 88 feature points are obtained for the preset face elements in the portrait image. In general, the larger the number of feature points, the more accurate the image processing.
As an optional implementation manner, as shown in fig. 2, which is a schematic diagram of feature points of a face image according to an embodiment of the present invention, fig. 2(a) is a schematic diagram of all feature points of a face image, where the face image includes 88 feature points in total.
As an optional implementation manner, the face element processing module 301 calculates the feature value of a preset face element according to the acquired plurality of feature points of that element; specifically, it may calculate a corresponding area, gray value, and the like from those feature points. For example, fig. 2(c) is a schematic diagram of the feature points of the left eyebrow and contains the 8 feature points 1 to 8: with feature point 8 as the shared vertex, a triangle is formed with each pair of adjacent feature points among feature points 1 to 7, the area of each triangle is calculated, and the areas are summed to obtain the area value of the left eyebrow region, as sketched in the code below. As another example, fig. 2(e) is a schematic diagram of the feature points of the left eye and contains the 8 feature points 17 to 24; the gray values in the linear region between feature point 17 and feature point 21 are calculated.
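A minimal sketch, assuming 2-D landmark coordinates as input, of the triangle-fan area computation described above for the left eyebrow; the function names are illustrative:

```python
import numpy as np

def triangle_area(a, b, c):
    """Area of a triangle from three (x, y) points (cross-product formula)."""
    return 0.5 * abs((b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1]))

def fan_area(points, apex_index=-1):
    """Sum of triangle areas sharing one landmark as apex.

    For the left eyebrow (landmarks 1..8), landmark 8 is the apex and a
    triangle is formed with each adjacent pair among landmarks 1..7, as
    described above. Landmark ordering is assumed to follow fig. 2(c).
    """
    pts = np.asarray(points, dtype=float)
    apex = pts[apex_index]
    rest = np.delete(pts, apex_index, axis=0)
    return sum(triangle_area(apex, rest[i], rest[i + 1])
               for i in range(len(rest) - 1))
```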
The feature value processing module 302 is configured to obtain a positive/negative sample feature value of a positive/negative sample image corresponding to a preset face element, and calculate a deviation value between the feature value of the preset face element and the positive/negative sample feature value to obtain a target feature value.
As an optional implementation manner, the feature value processing module 302 obtains a positive/negative sample feature value of a positive/negative sample image corresponding to a preset face element, specifically, the positive/negative sample image is obtained by performing feature extraction on sample images in a preset image library, and classifying the sample images according to the preset face element to obtain the positive/negative sample image corresponding to the preset face element. Positive/negative sample images are, for example, a positive sample image of a large eye, a negative sample image of a small eye, a positive sample image of a large nose, a negative sample image of a small nose, and the like.
Specifically, the positive/negative sample feature values are obtained by processing the positive/negative sample images corresponding to the preset face elements through the preset face matching template to obtain feature points, and then calculating the feature values from those feature points; for example, the eye feature value of the large-eye positive sample image, the eye feature value of the small-eye negative sample image, the nose feature value of the large-nose positive sample image, the nose feature value of the small-nose negative sample image, and the like.
As an alternative implementation manner, the eigenvalue processing module 302 calculates a deviation value between the eigenvalue of the preset face element and the eigenvalue of the positive/negative sample, so as to obtain a target eigenvalue. The target characteristic value may include: an eye characteristic value, a pupil characteristic value, an eyebrow characteristic value, a nose characteristic value, a mouth characteristic value, a fair skin characteristic value, or a smooth skin characteristic value. Optionally, the target feature value may be calculated by: (preset face element eigenvalue-negative sample eigenvalue)/(positive sample eigenvalue-negative sample eigenvalue).
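A small sketch of the deviation (normalization) formula above; the numeric values are made up for illustration:

```python
def target_feature_value(d, p_pos, p_neg):
    """Deviation of a measured feature value d from the negative sample,
    normalized by the positive/negative sample spread:
    (d - p_neg) / (p_pos - p_neg).
    Values near 1 are close to the positive sample, near 0 to the negative."""
    return (d - p_neg) / (p_pos - p_neg)

# Illustrative: an eye feature value T01 from a measured ratio D01 and the
# large-eye positive sample P10 / small-eye negative sample P11.
T01 = target_feature_value(0.031, p_pos=0.040, p_neg=0.020)  # -> 0.55
```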
And the image result processing module 303 is configured to weight the target feature value according to a preset weighting policy, determine a face image processing result, and display the face image processing result on a display screen.
Further optionally, the apparatus for processing a face image according to an embodiment of the present invention further includes: and the display screen is used for displaying the face image processing result.
As an optional implementation manner, the image result processing module 303 weights the target feature values according to a preset weighting policy, determines a face image processing result, and displays the face image processing result on the display screen. Specifically, the preset weighting policy may be determined according to the gender of the face image and/or according to preset weighting scores, and the obtained face image processing result is displayed on the display screen. Further optionally, the face image processing result may be displayed on the display screen according to a preset display template; the displayed result may include, for example, the face image, the total evaluation result value of the face image, and the evaluation result of each target feature value. A preset display template may read, for example: "Your face beauty score is XX (out of 100); your eyes are big; your skin smoothness and beauty degree exceed XX%", and so on.
Further optionally, the processing apparatus for a face image in the embodiment of the present invention may further include: a sample image processing module 304.
And the sample image processing module 304 is configured to perform feature extraction on sample images in a preset image library, and classify the sample images according to preset face elements to obtain positive/negative sample images corresponding to the preset face elements.
As an optional implementation manner, the sample image processing module 304 performs feature extraction on the sample images in a preset image library; optionally, feature extraction may be performed through the preset face matching template, and the sample images are classified according to the preset face elements to obtain the positive/negative sample images corresponding to the preset face elements, for example, a large-eye positive sample image, a small-eye negative sample image, a large-nose positive sample image, a small-nose negative sample image, and the like. Further optionally, the positive/negative sample images in the preset image library may be updated in real time: for example, if feature extraction and comparison detect that the eyes in a sample image 002 are larger than those in the current large-eye positive sample image 001, image 001 is replaced with image 002, and image 002 becomes the new large-eye positive sample image.
Further optionally, the processing apparatus for a face image in the embodiment of the present invention may further include: a gender determination module 305.
And a gender determination module 305, configured to analyze and determine the gender of the portrait image according to a preset gender determination template.
As an optional implementation manner, the preset character gender determination template is obtained by preprocessing training image samples (for example, light compensation, rotation correction, and the like), extracting Gabor features, converting the training image sample information from a two-dimensional matrix into a one-dimensional vector, performing feature dimension reduction on the feature vectors of the training image samples to reduce complexity, and inputting the training and recognition vectors into an SVM classifier for recognition.
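The patent does not disclose concrete tooling for this pipeline; the following is a rough sketch using scikit-image Gabor filters, PCA dimension reduction, and a scikit-learn SVM classifier. All parameter choices (frequencies, number of components, kernel) are assumptions:

```python
import numpy as np
from skimage.filters import gabor
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

def gabor_features(gray_image):
    """Concatenate Gabor magnitude responses of a 2-D gray image into a 1-D vector."""
    responses = []
    for frequency in (0.1, 0.2, 0.3):          # illustrative frequencies
        real, imag = gabor(gray_image, frequency=frequency)
        responses.append(np.hypot(real, imag).ravel())
    return np.concatenate(responses)

def train_gender_template(train_images, train_labels, n_components=50):
    """Gabor features -> PCA dimension reduction -> SVM classifier.

    train_images: preprocessed (light-compensated, rotation-corrected)
    gray face images of equal size; train_labels: 0 = male, 1 = female.
    """
    X = np.stack([gabor_features(img) for img in train_images])
    model = make_pipeline(PCA(n_components=n_components), SVC(kernel="linear"))
    model.fit(X, train_labels)
    return model

# Prediction on a new face image:
# gender = train_gender_template(imgs, labels).predict([gabor_features(new_img)])
```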
Optionally, the image result processing module 303 is specifically configured to: weighting the target characteristic value according to a preset weighting strategy to determine a face image processing result, wherein the preset weighting strategy comprises the following steps: and determining a weighting term from an eye characteristic value, a pupil characteristic value, an eyebrow characteristic value, a nose characteristic value, a mouth characteristic value, a skin white characteristic value and a skin smooth characteristic value in the target characteristic value according to the gender of the portrait image.
The embodiment of the invention provides a processing device for a face image. The face element processing module can calculate the feature value of a preset face element according to a plurality of feature points of that element; because each preset face element corresponds to a plurality of feature points, calculation precision is improved. The feature value calculated for each preset face element is no longer limited to a ratio of distances between preset face elements, which increases the calculation dimensionality. The feature value processing module can calculate the deviation value between the feature value of a preset face element and the positive/negative sample feature values of the corresponding positive/negative sample image, where the positive/negative sample images are obtained by performing feature extraction on the sample images in a preset image library and classifying them according to the preset face elements. The image result processing module weights the resulting target feature values according to a preset weighting strategy to obtain a face image processing result and displays it on a display screen, thereby improving the accuracy of face image processing and the flexibility of evaluating the beauty degree of a face image.
The structure and function of the face element processing module shown in fig. 3 will be described in detail with reference to fig. 4.
Referring to fig. 4, a schematic structural diagram of a face element processing module according to an embodiment of the present invention is shown, and a detailed description is given below to a face image processing process according to an embodiment of the present invention with reference to fig. 4 and fig. 2.
As an alternative implementation manner, a plurality of feature points of the preset face elements in a portrait image are obtained. As shown in fig. 2, a schematic diagram of feature points of a face image provided by an embodiment of the present invention: fig. 2(a) shows all feature points of the face image, 88 feature points in total; fig. 2(b) shows the feature points of the face edge, comprising the 21 feature points 68 to 88; fig. 2(c) shows the feature points of the left eyebrow, comprising the 8 feature points 1 to 8; fig. 2(d) shows the feature points of the right eyebrow, comprising the 8 feature points 9 to 16; fig. 2(e) shows the feature points of the left eye, comprising the 8 feature points 17 to 24; fig. 2(f) shows the feature points of the right eye, comprising the 8 feature points 25 to 32; fig. 2(g) shows the feature points of the nose, comprising the 13 feature points 33 to 45; and fig. 2(h) shows the feature points of the mouth, comprising the 22 feature points 46 to 67.
As an alternative embodiment, the face element processing module 301 may include: a first area calculation unit 401, a target eye determination unit 402, and a first feature value calculation unit 403.
A first area calculating unit 401, configured to calculate, according to a plurality of feature points of a preset face element, a feature region area of a left eye, a feature region area of a right eye, and a feature region area of a face edge, respectively.
A target eye determining unit 402, configured to compare the characteristic region area of the left eye and the characteristic region area of the right eye, and determine the left eye/the right eye with the large characteristic region area as the target eye.
The first feature value calculating unit 403 is configured to calculate a ratio of a feature region area corresponding to the target eye to a feature region area of the face edge, so as to obtain a first feature value.
The positive/negative sample feature values of the positive/negative sample images corresponding to the preset face elements comprise: a first positive sample feature value corresponding to the large-eye image and a first negative sample feature value corresponding to the small-eye image. The eye feature value is calculated as: (first feature value - first negative sample feature value)/(first positive sample feature value - first negative sample feature value).
As an alternative embodiment, as shown in fig. 2(b), 2(e) and 2(f), fig. 2(b) is a schematic diagram of feature points of the edge of the face, which includes 21 feature points from the feature points 68 to 88, and calculates half of the area of a polygonal region surrounded by the feature points 68 to 88 in fig. 2(b), which is denoted as S00; fig. 2(e) is a schematic diagram of feature points of the left eye, which includes 8 feature points from feature point 17 to feature point 24, and calculates the area of a polygonal region surrounded by feature points 17 to feature point 24, and is denoted as S11; fig. 2(f) is a schematic diagram of the feature points of the right eye, which includes 8 feature points from 25 to 32, and calculates the area of the polygonal region surrounded by the feature points from 25 to 32, and is denoted as S12.
Calculate the difference between S11 and S12, i.e. M01 = S11 - S12. If M01 is greater than zero, the left eye corresponding to S11 is the target eye; if M01 is less than zero, the right eye corresponding to S12 is the target eye.
The first feature value is D01 = max(S11, S12)/S00. Given the first positive sample feature value P10 of the large-eye image and the first negative sample feature value P11 of the small-eye image, the eye feature value is T01 = (D01 - P11)/(P10 - P11).
Optionally, the method for calculating the first positive sample feature value P10 corresponding to the large-eye image is the same as the method for calculating the first feature value D01, and the method for calculating the first negative sample feature value P11 corresponding to the small-eye image is the same as the method for calculating the first feature value D01, and is not repeated.
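A minimal sketch of the eye feature value computation just described, using the shoelace formula for the polygon areas; landmark coordinates and the sample values P10/P11 are assumed inputs:

```python
import numpy as np

def polygon_area(points):
    """Shoelace area of a polygon given as an (n, 2) array of (x, y) points."""
    x, y = np.asarray(points, dtype=float).T
    return 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))

def eye_feature_value(left_eye, right_eye, face_edge, p_pos, p_neg):
    """left_eye/right_eye: 8 landmarks each (points 17-24 / 25-32);
    face_edge: 21 landmarks (points 68-88), per fig. 2."""
    s00 = polygon_area(face_edge) / 2.0          # half of the face-edge area
    s11, s12 = polygon_area(left_eye), polygon_area(right_eye)
    d01 = max(s11, s12) / s00                    # target eye is the larger one
    return (d01 - p_neg) / (p_pos - p_neg)       # eye feature value T01
```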
As an optional implementation, the face element processing module 301 may further include: a second area calculation unit 404, a target eyebrow determination unit 405, and a second feature value calculation unit 406.
A second area calculating unit 404, configured to calculate, according to a plurality of feature points of preset face elements, a feature region area of the left eyebrow, a feature region area of the right eyebrow, and a feature region area of a face edge, respectively.
A target eyebrow determining unit 405 configured to compare the feature region area of the left eyebrow with the feature region area of the right eyebrow, and determine the left eyebrow/the right eyebrow with the larger feature region area as the target eyebrow.
The second feature value calculating unit 406 is configured to calculate a ratio of an area of a feature region corresponding to the target eyebrow to an area of a feature region at the edge of the human face, so as to obtain a second feature value.
The preset positive/negative sample characteristic values of the positive/negative sample image corresponding to the face elements comprise: the second positive sample characteristic value corresponding to the thick eyebrow image and the second negative sample characteristic value corresponding to the thin eyebrow image, and the calculation formula of the eyebrow characteristic value is as follows: (second eigenvalue-second negative sample eigenvalue)/(second positive sample eigenvalue-second negative sample eigenvalue).
As an alternative embodiment, as shown in fig. 2(b) to 2(d), fig. 2(b) is a schematic diagram of feature points of the edge of the human face, which includes 21 feature points from the feature points 68 to 88, and calculates half the area of a polygonal region surrounded by the feature points 68 to 88 in fig. 2(b), which is denoted as S00; fig. 2(c) is a schematic diagram of feature points of the left eyebrow, which includes 8 feature points from feature point 1 to feature point 8, and calculates the area of a polygonal region surrounded by feature points 1 to feature points 8, and is denoted as S21; fig. 2(d) is a schematic diagram of the feature points of the right eyebrow, which includes 8 feature points from feature point 9 to feature point 16, and calculates the area of the polygonal region surrounded by feature points 9 to feature point 16, and is denoted as S22.
Calculate the difference between S21 and S22, i.e. M02 = S21 - S22. If M02 is greater than zero, the left eyebrow corresponding to S21 is the target eyebrow; if M02 is less than zero, the right eyebrow corresponding to S22 is the target eyebrow.
The second feature value is D02 = max(S21, S22)/S00. Given the second positive sample feature value P20 of the thick-eyebrow image and the second negative sample feature value P21 of the thin-eyebrow image, the eyebrow feature value is T02 = (D02 - P21)/(P20 - P21).
Optionally, the method for calculating the second positive sample feature value P20 corresponding to the thick eyebrow image is the same as the method for calculating the second feature value D02, and the method for calculating the second negative sample feature value P21 corresponding to the thin eyebrow image is the same as the method for calculating the second feature value D02, and therefore the method is not repeated.
As an optional implementation, the face element processing module 301 may further include: a third area calculation unit 407 and a third feature value calculation unit 408.
A third area calculating unit 407, configured to calculate a nose feature region area and a feature region area of a face edge according to a plurality of feature points of a preset face element.
The third feature value calculating unit 408 is configured to calculate a ratio of the area of the nose feature region to the area of the feature region of the face edge, so as to obtain a third feature value.
The preset positive/negative sample feature values of the positive/negative sample images corresponding to the face elements may include: a third positive sample feature value corresponding to the large-nose image and a third negative sample feature value corresponding to the small-nose image. The nose feature value may be calculated as: (third feature value - third negative sample feature value)/(third positive sample feature value - third negative sample feature value).
As an alternative embodiment, as shown in fig. 2(b) and 2(g), fig. 2(b) is a schematic diagram of feature points of the edge of the human face, which includes 21 feature points from the feature point 68 to the feature point 88, and the area of a polygonal region surrounded by the feature points 68 to 88 in fig. 2(b) is calculated and is denoted as S01; fig. 2(g) is a schematic diagram of the feature points of the nose, which includes 13 feature points from the feature point 33 to the feature point 45, and calculates the area of the polygonal region surrounded by the feature points 34 to 45, and is denoted as S31.
The third feature value is D03 = S31/S01. Given the third positive sample feature value P30 of the large-nose image and the third negative sample feature value P31 of the small-nose image, the nose feature value is T03 = (D03 - P31)/(P30 - P31).
Optionally, the calculation method of the third positive sample feature value P30 corresponding to the large-nose image is the same as that of the third feature value D03, and the calculation method of the third negative sample feature value P31 corresponding to the small-nose image is the same as that of the third feature value D03, and is therefore not repeated.
As an optional implementation, the face element processing module 301 may further include: a fourth feature value calculation unit 409.
The first area calculation unit 401 calculates a feature region area of the left eye and a feature region area of the right eye according to a plurality of feature points of a preset face element.
The target eye determination unit 402 compares the characteristic region area of the left eye and the characteristic region area of the right eye, and determines the left eye/right eye having a large characteristic region area as the target eye.
The fourth feature value calculating unit 409 is configured to obtain the gray value of the target eye and the gray value of the pupil corresponding to the target eye according to the multiple feature points of the preset face element, and calculate a ratio of the gray value of the pupil to the gray value of the target eye to obtain a fourth feature value.
The preset positive/negative sample characteristic values of the positive/negative sample image corresponding to the face elements may include: the fourth positive sample characteristic value corresponding to the large pupil image and the fourth negative sample characteristic value corresponding to the small pupil image, and the calculation formula of the pupil characteristic value may be: (fourth eigenvalue-fourth negative sample eigenvalue)/(fourth positive sample eigenvalue-fourth negative sample eigenvalue).
As an alternative embodiment, as shown in fig. 2(b), 2(e) and 2(f): fig. 2(b) is a schematic diagram of the feature points of the face edge, which includes the 21 feature points 68 to 88; half of the area of the polygonal region surrounded by feature points 68 to 88 is calculated and denoted as S00. Fig. 2(e) is a schematic diagram of the feature points of the left eye, which includes the 8 feature points 17 to 24; the area of the polygonal region surrounded by feature points 17 to 24 is calculated and denoted as S11. Fig. 2(f) is a schematic diagram of the feature points of the right eye, which includes the 8 feature points 25 to 32; the area of the polygonal region surrounded by feature points 25 to 32 is calculated and denoted as S12.
Calculate the difference between S11 and S12, i.e. M01 = S11 - S12. If M01 is greater than zero, the left eye corresponding to S11 is the target eye; if M01 is less than zero, the right eye corresponding to S12 is the target eye.
If the left eye is the target eye, the eye-corner feature points, namely feature point 17 and feature point 21, are obtained; a straight line segment is drawn between feature point 17 and feature point 21, the pixel points along the path of that segment are selected, and the selected pixel points are converted into gray values (0-255). If the right eye is the target eye, the processing is the same as for the left eye and is not repeated.
The smaller the gray value, the darker the corresponding image; within the eye region the pupil has the smallest gray values, so the region with gray values less than 50 may be taken as the pupil region. Obtain the total number of pixels on the straight line segment, denoted S41, and the number of pixels on the segment whose gray value is less than 50, denoted S42. The fourth feature value is D04 = S42/S41. Given the fourth positive sample feature value P41 of the large-pupil image and the fourth negative sample feature value P42 of the small-pupil image, the pupil feature value is T04 = (D04 - P42)/(P41 - P42).
Optionally, the calculation method of the fourth positive sample feature value P41 corresponding to the large-pupil image is the same as that of the fourth feature value D04, and the calculation method of the fourth negative sample feature value P42 corresponding to the small-pupil image is the same as that of the fourth feature value D04, and is therefore not repeated.
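A sketch of the pupil feature value computation under the assumptions above (gray values sampled along the eye-corner segment, dark-pixel threshold of 50); the sampling scheme is an illustrative choice:

```python
import numpy as np

def pupil_feature_value(gray, corner_a, corner_b, p_pos, p_neg, threshold=50):
    """Fraction of dark pixels on the segment between the two eye corners.

    gray: 2-D uint8 image; corner_a/corner_b: (x, y) eye-corner landmarks
    (e.g. points 17 and 21 for the left eye, per fig. 2(e))."""
    n = int(np.hypot(corner_b[0] - corner_a[0], corner_b[1] - corner_a[1])) + 1
    xs = np.linspace(corner_a[0], corner_b[0], n).round().astype(int)
    ys = np.linspace(corner_a[1], corner_b[1], n).round().astype(int)
    values = gray[ys, xs]                     # gray values along the segment
    d04 = np.count_nonzero(values < threshold) / len(values)  # S42 / S41
    return (d04 - p_neg) / (p_pos - p_neg)    # pupil feature value T04
```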
As an optional implementation, the face element processing module 301 may further include: a first acquisition unit 410 and a fifth feature value calculation unit 411.
The first obtaining unit 410 is configured to obtain a skin feature region according to a plurality of feature points of a preset face element.
The fifth feature value calculating unit 411 is configured to obtain an average gray value of the skin feature region to obtain a fifth feature value.
The preset positive/negative sample feature values of the positive/negative sample images corresponding to the face elements may include: a fifth positive sample feature value corresponding to the fair-skin image and a fifth negative sample feature value corresponding to the dark-skin image. The fair skin feature value may be calculated as: (fifth feature value - fifth negative sample feature value)/(fifth positive sample feature value - fifth negative sample feature value).
As an alternative embodiment, as shown in fig. 2, the skin feature region is selected according to feature point 19 in fig. 2(e) and feature point 46 in fig. 2(h): for example, a skin sample of a preset area may be selected with the straight line segment between feature point 19 and feature point 46 as its center line, obtaining the skin feature region. The skin feature region may also be selected according to feature point 27 in fig. 2(f) and feature point 52 in fig. 2(h), for example with the straight line segment between feature point 27 and feature point 52 as the center line. The pixel points of the skin feature region are acquired and converted into gray values (0-255), and the average gray value of the skin feature region is calculated and denoted as D05. Given the fifth positive sample feature value P51 of the fair-skin image and the fifth negative sample feature value P52 of the dark-skin image, the fair skin feature value is T05 = (D05 - P52)/(P51 - P52).
Optionally, the calculation method of the fifth positive sample feature value P51 corresponding to the fair-skin image is the same as that of the fifth feature value D05, and the calculation method of the fifth negative sample feature value P52 corresponding to the dark-skin image is the same as that of the fifth feature value D05, and is therefore not repeated.
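A sketch of the fair skin feature value computation, assuming a small strip of preset width centered on the segment between the two landmarks; the strip width is an illustrative choice and the strip is assumed to lie inside the image:

```python
import numpy as np

def fair_skin_feature_value(gray, pt_a, pt_b, p_pos, p_neg, half_width=3):
    """Mean gray value of a skin strip centered on the segment pt_a-pt_b.

    gray: 2-D uint8 image; pt_a/pt_b: (x, y) landmarks, e.g. point 19
    (left eye) and point 46 (left mouth corner), per fig. 2."""
    n = int(np.hypot(pt_b[0] - pt_a[0], pt_b[1] - pt_a[1])) + 1
    xs = np.linspace(pt_a[0], pt_b[0], n).round().astype(int)
    ys = np.linspace(pt_a[1], pt_b[1], n).round().astype(int)
    strip = np.concatenate([gray[y - half_width:y + half_width + 1,
                                 x - half_width:x + half_width + 1].ravel()
                            for x, y in zip(xs, ys)])
    d05 = float(strip.mean())                 # fifth feature value D05
    return (d05 - p_neg) / (p_pos - p_neg)    # fair skin feature value T05
```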
As an optional implementation, the face element processing module 301 may further include: a second acquisition unit 412 and a sixth feature value calculation unit 413.
The second obtaining unit 412 is configured to obtain an edge feature region according to a plurality of feature points of a preset face element.
The sixth feature value calculating unit 413 is configured to obtain an average gray value of the edge feature region, and obtain a sixth feature value.
The positive/negative sample feature values of the positive/negative sample images corresponding to the preset face elements comprise: a sixth positive sample feature value corresponding to the smooth-skin image and a sixth negative sample feature value corresponding to the rough-skin image. The skin smoothness feature value may be calculated as: (sixth feature value - sixth negative sample feature value)/(sixth positive sample feature value - sixth negative sample feature value).
As an alternative embodiment, an edge detector may be used to detect edges of the face image. If the face image contains blemishes, each blemish produces a corresponding edge; the eyes, nose, mouth, and eyebrows of the face image also produce corresponding edges.
As shown in fig. 2(b), fig. 2(b) is a schematic diagram of feature points of a human face edge, which includes 21 feature points from the feature points 68 to the feature points 88, and an edge detector may be used to detect an edge of an area between the feature points 68 to the feature points 88, and then remove edges of eyes, nose, and mouth according to preset human face elements to obtain an edge feature area, obtain gray values (0 to 255) of the edge feature area, calculate an average value of the gray values of the edge feature area, and obtain a sixth feature value, which is denoted as D06.
Further optionally, an edge detector may be used to perform edge detection on the whole face image to obtain edge features of the whole face image, then remove edges of eyes, a nose, a mouth, and eyebrows from preset face elements to obtain an edge feature region, obtain a gray value (0-255) of the edge feature region, calculate an average value of the gray values of the edge feature region, and obtain a sixth feature value, which is recorded as D06.
Specifically, given the sixth positive sample feature value P61 of the smooth-skin image and the sixth negative sample feature value P62 of the rough-skin image, the skin smoothness feature value is T06 = (D06 - P62)/(P61 - P62).
Optionally, the method for calculating the sixth positive sample feature value P61 corresponding to the skin-smooth image is the same as the method for calculating the sixth feature value D06, and the method for calculating the sixth negative sample feature value P62 corresponding to the skin-rough image is the same as the method for calculating the sixth feature value D06, and will not be repeated.
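A sketch of the skin smoothness feature value computation; the patent does not name a particular edge detector, so a Sobel edge-strength image is used here as one possible choice, with the facial elements already masked out:

```python
import numpy as np
from skimage.filters import sobel

def skin_smoothness_feature_value(gray, skin_mask, p_pos, p_neg):
    """Mean edge strength over the skin region.

    gray: 2-D face image; skin_mask: boolean array that is True inside the
    face edge but False over the eyes, nose, mouth, and eyebrows (assumed
    to be built from the landmarks of fig. 2)."""
    edges = sobel(gray.astype(float))         # edge-strength image
    d06 = float(edges[skin_mask].mean())      # sixth feature value D06
    return (d06 - p_neg) / (p_pos - p_neg)    # skin smoothness value T06
```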
As an optional implementation, the face element processing module 301 may further include: an eye distance calculation unit 414, a mouth angle width calculation unit 415, and a seventh feature value calculation unit 416.
The eye distance calculating unit 414 is configured to calculate a central distance between the left eye and the right eye according to a plurality of feature points of the preset face element, so as to obtain a central distance value between the two eyes.
The mouth corner width calculating unit 415 is configured to calculate a center distance between a left mouth corner and a right mouth corner of the mouth according to a plurality of feature points of a preset face element, so as to obtain a mouth corner center width value.
A seventh characteristic value calculating unit 416, configured to calculate a ratio of the mouth angle center width value to the distance value between the two eyes, so as to obtain a seventh characteristic value.
The positive/negative sample feature values of the positive/negative sample images corresponding to the preset face elements comprise: a seventh positive sample feature value corresponding to the small-mouth image and a seventh negative sample feature value corresponding to the large-mouth image. The mouth feature value may be calculated as: (seventh feature value - seventh negative sample feature value)/(seventh positive sample feature value - seventh negative sample feature value).
As an alternative embodiment, as shown in fig. 2(h), which includes the 22 feature points 46 to 67, the center distance between the left mouth corner and the right mouth corner, i.e. the length between feature point 46 and feature point 52, is calculated to obtain the mouth corner center width value, denoted L1. As shown in fig. 2(e) and fig. 2(f), the center positions of the two eyes are calculated first: the center position O1 of the left eye is obtained from feature point 17 and feature point 21, and the center position O2 of the right eye is obtained from feature point 25 and feature point 29. The distance between O1 and O2 is the center distance of the left and right eyes, denoted L2. The seventh feature value is D07 = L1/L2. Given the seventh positive sample feature value P71 of the small-mouth image and the seventh negative sample feature value P72 of the large-mouth image, the mouth feature value is T07 = (D07 - P72)/(P71 - P72).
Optionally, the method for calculating the seventh positive sample feature value P71 corresponding to the small mouth image is the same as the method for calculating the seventh feature value D07, and the method for calculating the seventh negative sample feature value P72 corresponding to the large mouth image is the same as the method for calculating the seventh feature value D07, and therefore the method is not repeated.
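A minimal sketch of the mouth feature value computation from the landmarks named above; the landmark dictionary is an assumed input format:

```python
import numpy as np

def mouth_feature_value(landmarks, p_pos, p_neg):
    """landmarks: dict of (x, y) points keyed by landmark number as in fig. 2
    (46/52: mouth corners; 17/21 and 25/29: eye corners)."""
    pts = {k: np.asarray(v, dtype=float) for k, v in landmarks.items()}
    l1 = np.linalg.norm(pts[46] - pts[52])       # mouth corner width L1
    o1 = (pts[17] + pts[21]) / 2.0               # left eye center O1
    o2 = (pts[25] + pts[29]) / 2.0               # right eye center O2
    d07 = l1 / np.linalg.norm(o1 - o2)           # seventh feature value D07
    return (d07 - p_neg) / (p_pos - p_neg)       # mouth feature value T07
```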
Further optionally, the face element processing module 301 may be further configured to calculate an eighth feature value corresponding to three-stop uniformity. As shown in fig. 2(e) and fig. 2(f), the center position of the two inner eye corners is calculated: the center position O3 of the inner eye corners is obtained from feature point 21 and feature point 29. As shown in fig. 2(g), the schematic diagram of the nose feature points, the distance between the center position O3 and feature point 33, the nose tip point, is calculated and denoted as L3. As shown in fig. 2(h), the schematic diagram of the mouth feature points, the distance between feature point 33 in fig. 2(g) and feature point 60, the midpoint of the upper edge of the lower lip, is calculated and denoted as L4. As shown in fig. 2(b), the schematic diagram of the face edge feature points, the distance between feature point 60 and feature point 78, the lowest point of the chin, is calculated and denoted as L5. The eighth feature value D08 is the variance of L3, L4, and L5:
D08 = [(L3 - Lm)^2 + (L4 - Lm)^2 + (L5 - Lm)^2] / 3, where Lm = (L3 + L4 + L5) / 3 is the mean of L3, L4, and L5.
Given the eighth positive sample feature value P81 corresponding to the three-stop-uniform image and the eighth negative sample feature value P82 corresponding to the three-stop-non-uniform image, the three-stop uniformity feature value is T08 = (D08 - P82)/(P81 - P82).
Optionally, the calculation method of the eighth positive sample feature value P81 corresponding to the three-stop-uniform image is the same as that of the eighth feature value D08, and the calculation method of the eighth negative sample feature value P82 corresponding to the three-stop-non-uniform image is the same as that of the eighth feature value D08, and is therefore not repeated.
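A sketch of the three-stop uniformity feature value under the variance formula above; landmark numbering follows fig. 2 and the input format is assumed:

```python
import numpy as np

def three_stop_feature_value(landmarks, p_pos, p_neg):
    """landmarks keyed by number as in fig. 2 (21/29: inner eye corners,
    33: nose tip, 60: upper-edge midpoint of the lower lip, 78: chin)."""
    pts = {k: np.asarray(v, dtype=float) for k, v in landmarks.items()}
    o3 = (pts[21] + pts[29]) / 2.0               # inner-eye-corner center O3
    l3 = np.linalg.norm(o3 - pts[33])
    l4 = np.linalg.norm(pts[33] - pts[60])
    l5 = np.linalg.norm(pts[60] - pts[78])
    d08 = float(np.var([l3, l4, l5]))            # population variance, / 3
    return (d08 - p_neg) / (p_pos - p_neg)       # three-stop uniformity T08
```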
Further optionally, the face element processing module 301 may further calculate a ninth feature value corresponding to face fatness/thinness. As shown in fig. 2(b), the schematic diagram of the face edge feature points, which includes the 21 feature points 68 to 88, the included angle between feature point 68 and feature point 88, with feature point 78 as the vertex, is calculated and denoted as α. The ninth feature value is D09 = α. Given the ninth positive sample feature value P91 of the thin-face image and the ninth negative sample feature value P92 of the fat-face image, the face fat-thin feature value is T09 = (D09 - P92)/(P91 - P92).
Optionally, the calculation method of the ninth positive sample feature value P91 corresponding to the thin-face image is the same as that of the ninth feature value D09, and the calculation method of the ninth negative sample feature value P92 corresponding to the fat-face image is the same as that of the ninth feature value D09, and is therefore not repeated.
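A sketch of the face fat-thin feature value via the chin angle described above; measuring the angle in degrees is an illustrative choice:

```python
import numpy as np

def face_fat_thin_feature_value(landmarks, p_pos, p_neg):
    """Chin angle at landmark 78 between landmarks 68 and 88 (fig. 2(b))."""
    pts = {k: np.asarray(v, dtype=float) for k, v in landmarks.items()}
    v1 = pts[68] - pts[78]
    v2 = pts[88] - pts[78]
    cos_a = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    d09 = float(np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0))))  # angle α
    return (d09 - p_neg) / (p_pos - p_neg)       # face fat-thin value T09
```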
Further optionally, the target feature values of the face image obtained by the above calculation include: an eye feature value T01, an eyebrow feature value T02, a nose feature value T03, a pupil feature value T04, a fair skin feature value T05, a skin smoothness feature value T06, and a mouth feature value T07; further optionally, the target feature values may also include a three-stop uniformity feature value T08 and a face fat-thin feature value T09. A target feature value calculated as described above normally lies between 0 and 1: the closer to 0, the closer to the negative sample feature value; the closer to 1, the closer to the positive sample feature value. A value outside this range means the face image exceeds the corresponding sample; for example, an eye feature value greater than 1 indicates that the eyes in the face image are larger than those in the large-eye positive sample image.
As an optional implementation manner, the target feature values may be weighted according to a preset weighting policy to determine a face image processing result, where the preset weighting policy includes: determining weighting terms, according to the gender of the portrait image, from the eye feature value, pupil feature value, eyebrow feature value, nose feature value, mouth feature value, fair skin feature value, and skin smoothness feature value among the target feature values.
For example, if the gender of the face image is identified as male, the corresponding weighting terms are: the eye feature value T01, the eyebrow feature value T02, the pupil feature value T04, the skin smoothness feature value T06, the three-stop uniformity feature value T08, and the face fat-thin feature value T09.
If the gender of the face image is identified as female, the corresponding weighting terms are: the eye feature value T01, the nose feature value T03, the pupil feature value T04, the fair skin feature value T05, the skin smoothness feature value T06, the mouth feature value T07, the three-stop uniformity feature value T08, and the face fat-thin feature value T09.
In other embodiments, the preset weighting policy may be implemented in various ways and may take other forms; this embodiment does not specifically limit it.
As an alternative embodiment, the target feature values may be weighted according to the following preset weighting calculation formula: G = 40 + min(T01, T02, T03, ..., T0n) × 30 + (sum(T01, T02, T03, ..., T0n) - min(T01, T02, T03, ..., T0n)) × 30, where min(T01, T02, T03, ..., T0n) is the minimum value among all the determined weighting terms and sum(T01, T02, T03, ..., T0n) is the sum of all the determined weighting terms.
In the embodiment of the present invention, if the gender of the face image is identified as male, G00 = 40 + min(T01, T02, T04, T06, T08, T09) × 30 + (sum(T01, T02, T04, T06, T08, T09) - min(T01, T02, T04, T06, T08, T09)) × 30. If the gender of the face image is identified as female, G11 = 40 + min(T01, T03, T04, T05, T06, T07, T08, T09) × 30 + (sum(T01, T03, T04, T05, T06, T07, T08, T09) - min(T01, T03, T04, T05, T06, T07, T08, T09)) × 30.
In the embodiment of the invention, the face image processing process has been detailed in combination with the specific schematic diagram of the feature points of the face image, which can improve the precision of face processing and the flexibility of evaluating the beauty degree of the face image.
It should be noted that the structure and function of the face element processing module shown in fig. 4 can be specifically implemented by the method according to the embodiment shown in fig. 1, and the specific implementation process may refer to the description related to the embodiment shown in fig. 1, which is not described herein again.
Further optionally, an embodiment of the present invention further discloses a terminal, including the apparatuses shown in fig. 3 to fig. 4; for the structure and function of the apparatuses, refer to the related description of the embodiments shown in fig. 3 to fig. 4, which is not repeated herein. It should be noted that the terminal provided in this embodiment corresponds to the method for processing a face image shown in fig. 1 and is an execution subject of that method.
Through the description of the above embodiments, the feature value of a preset face element can be calculated according to a plurality of feature points of that element; because each preset face element corresponds to a plurality of feature points, calculation accuracy is improved. The feature value calculated for each preset face element is no longer limited to a ratio of distances between preset face elements, which increases the calculation dimensionality. The deviation value between the feature value of a preset face element and the positive/negative sample feature value of the corresponding positive/negative sample image is calculated to obtain a target feature value, where the target feature value includes: an eye feature value, a pupil feature value, an eyebrow feature value, a nose feature value, a mouth feature value, a fair skin feature value, or a skin smoothness feature value, and the positive/negative sample images are obtained by performing feature extraction on the sample images in a preset image library and classifying them according to the preset face elements. The target feature values are weighted according to a preset weighting strategy to obtain a face image processing result, which is displayed on a display screen, thereby improving the accuracy of face image processing and the flexibility of evaluating the beauty degree of a face image.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), or the like.
While the invention has been described in connection with what is presently considered to be the most practical and preferred embodiment, it is to be understood that the invention is not to be limited to the disclosed embodiment, but on the contrary, is intended to cover various modifications and equivalent arrangements included within the spirit and scope of the appended claims.

Claims (16)

1. A method for calculating a target feature value of a face image, the method comprising:
carrying out feature extraction on sample images in a preset image library, and classifying according to preset face elements to obtain positive/negative sample images corresponding to the preset face elements; the preset face elements comprise: left eye, right eye, left eyebrow, right eyebrow, nose, mouth, and face edge;
shooting through a terminal to obtain a face image;
processing the preset face elements in the face image through a preset face matching template to obtain a plurality of feature points, and calculating the feature values of the preset face elements according to the plurality of feature points of the preset face elements, wherein the method comprises the following steps: acquiring a skin characteristic area according to the plurality of characteristic points of the preset face elements; acquiring the average gray value of the skin characteristic region to obtain a fifth characteristic value; the positive/negative sample characteristic values of the positive/negative sample image corresponding to the preset face elements comprise: a fifth positive sample characteristic value corresponding to the fair skin image and a fifth negative sample characteristic value corresponding to the dark skin image; the calculation formula of the characteristic value of the fair skin is as follows: (the fifth eigenvalue-the fifth negative sample eigenvalue)/(the fifth positive sample eigenvalue-the fifth negative sample eigenvalue); the feature values of the preset face elements comprise the fifth feature value;
processing the positive/negative sample image corresponding to the preset face element through the face matching template to obtain a plurality of feature points, determining a positive/negative sample feature value according to the plurality of feature points of the positive/negative sample image, and calculating a deviation value between the feature value of the preset face element and the positive/negative sample feature value to obtain a target feature value; the target feature value includes: eye characteristic value, pupil characteristic value, eyebrow characteristic value, nose characteristic value, mouth characteristic value, fair skin characteristic value or smooth skin characteristic value;
the calculation formula of the target characteristic value is as follows: (eigenvalue of the preset face element-the negative sample eigenvalue)/(the positive sample eigenvalue-the negative sample eigenvalue);
and weighting the target characteristic value according to a preset weighting strategy, determining a face image processing result, and displaying the face image processing result on a display screen.
2. The method according to claim 1, wherein the calculating the feature value of the preset face element according to the plurality of feature points of the preset face element comprises:
respectively calculating the area of the characteristic region of the left eye, the area of the characteristic region of the right eye and the area of the characteristic region of the human face edge according to the plurality of characteristic points of the preset human face element;
comparing the characteristic region area of the left eye with the characteristic region area of the right eye, and determining the left eye/the right eye with large characteristic region area as a target eye;
calculating the ratio of the area of the characteristic region corresponding to the target eye to the area of the characteristic region at the edge of the human face to obtain a first characteristic value;
the positive/negative sample characteristic values of the positive/negative sample image corresponding to the preset face elements comprise: the first positive sample characteristic value corresponding to the large-eye image and the first negative sample characteristic value corresponding to the small-eye image;
the eye characteristic value has the following calculation formula: (the first eigenvalue-the first negative sample eigenvalue)/(the first positive sample eigenvalue-the first negative sample eigenvalue).
3. The method according to claim 1, wherein the calculating the feature value of the preset face element according to the plurality of feature points of the preset face element comprises:
respectively calculating the area of the characteristic region of the left eyebrow, the area of the characteristic region of the right eyebrow and the area of the characteristic region of the face edge according to the plurality of characteristic points of the preset face element;
comparing the characteristic region area of the left eyebrow with the characteristic region area of the right eyebrow, and determining the left eyebrow/the right eyebrow with the large characteristic region area as a target eyebrow;
calculating the ratio of the area of the characteristic region corresponding to the target eyebrow to the area of the characteristic region of the human face edge to obtain a second characteristic value;
the positive/negative sample characteristic values of the positive/negative sample image corresponding to the preset face elements comprise: a second positive sample characteristic value corresponding to the thick eyebrow image and a second negative sample characteristic value corresponding to the thin eyebrow image;
the calculation formula of the eyebrow characteristic value is as follows: (the second eigenvalue-the second negative sample eigenvalue)/(the second positive sample eigenvalue-the second negative sample eigenvalue).
4. The method according to claim 1, wherein the calculating the feature value of the preset face element according to the plurality of feature points of the preset face element comprises:
respectively calculating the area of the nose feature region and the area of the feature region of the face edge according to the plurality of feature points of the preset face element;
calculating the ratio of the area of the nose feature region to the area of the feature region of the face edge to obtain a third feature value;
the positive/negative sample characteristic values of the positive/negative sample image corresponding to the preset face elements comprise: the third positive sample characteristic value corresponding to the big nose image and the third negative sample characteristic value corresponding to the small nose image;
the calculation formula of the nose characteristic value is as follows: (the third eigenvalue-the third negative sample eigenvalue)/(the third positive sample eigenvalue-the third negative sample eigenvalue).
5. The method according to claim 1, wherein the calculating the feature value of the preset face element according to the plurality of feature points of the preset face element comprises:
respectively calculating the area of the characteristic region of the left eye and the area of the characteristic region of the right eye according to the plurality of characteristic points of the preset face elements;
comparing the characteristic region area of the left eye with the characteristic region area of the right eye, and determining the left eye/the right eye with large characteristic region area as a target eye;
acquiring a gray value of the target eye and a gray value of a pupil corresponding to the target eye according to the plurality of feature points of the preset face element, and calculating a ratio of the gray value of the pupil to the gray value of the target eye to obtain a fourth feature value;
the positive/negative sample characteristic values of the positive/negative sample image corresponding to the preset face elements comprise: a fourth positive sample characteristic value corresponding to the large pupil image and a fourth negative sample characteristic value corresponding to the small pupil image;
the pupil characteristic value has the following calculation formula: (the fourth eigenvalue-the fourth negative sample eigenvalue)/(the fourth positive sample eigenvalue-the fourth negative sample eigenvalue).
6. The method according to claim 1, wherein calculating the feature value of the preset face element according to the plurality of feature points of the preset face element comprises:
acquiring an edge feature region according to the plurality of feature points of the preset face element;
acquiring the average gray value of the edge feature region to obtain a sixth feature value;
the positive/negative sample feature values of the positive/negative sample images corresponding to the preset face elements comprise: a sixth positive sample feature value corresponding to a smooth-skin image and a sixth negative sample feature value corresponding to a rough-skin image;
the skin smoothness feature value is calculated as: (the sixth feature value - the sixth negative sample feature value) / (the sixth positive sample feature value - the sixth negative sample feature value).
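The construction of the edge feature region is left open by the claim; one plausible reading is the average response of an edge map over the skin area, since smoother skin yields weaker edges. The gradient-magnitude operator and the skin_mask input below are assumptions, not the patent's stated method.

```python
import numpy as np

def smoothness_feature(gray_image, skin_mask, pos_sample, neg_sample):
    # Edge map: per-pixel gradient magnitude of the gray image.
    gy, gx = np.gradient(gray_image.astype(float))
    edge_map = np.hypot(gx, gy)
    # Sixth feature value: average edge response over the skin region;
    # smoother skin produces weaker edges, hence a smaller value.
    value = float(edge_map[skin_mask].mean())
    # Normalize against the smooth-skin / rough-skin sample values.
    return (value - neg_sample) / (pos_sample - neg_sample)
```

Note that under this reading the smooth-skin (positive) sample has the smaller raw value; the normalization still maps it to 1, because the formula uses only the two sample values as endpoints.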
7. The method according to claim 1, wherein calculating the feature value of the preset face element according to the plurality of feature points of the preset face element comprises:
calculating the distance between the centers of the left eye and the right eye according to the plurality of feature points of the preset face elements to obtain a between-eye center distance value;
calculating the distance between the left mouth corner and the right mouth corner according to the plurality of feature points of the preset face elements to obtain a mouth corner width value;
calculating the ratio of the mouth corner width value to the between-eye center distance value to obtain a seventh feature value;
the positive/negative sample feature values of the positive/negative sample images corresponding to the preset face elements comprise: a seventh positive sample feature value corresponding to a small-mouth image and a seventh negative sample feature value corresponding to a large-mouth image;
the mouth feature value is calculated as: (the seventh feature value - the seventh negative sample feature value) / (the seventh positive sample feature value - the seventh negative sample feature value).
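A sketch of the mouth feature value, assuming an eye center is the centroid of that eye's feature points and each mouth corner is a single landmark; the function and parameter names are illustrative.

```python
import numpy as np

def centroid(points):
    # Center of a feature region: centroid of its (x, y) feature points.
    return np.asarray(points, dtype=float).mean(axis=0)

def mouth_feature(left_eye_pts, right_eye_pts,
                  left_corner, right_corner,
                  pos_sample, neg_sample):
    # Distance between the two eye centers.
    eye_distance = np.linalg.norm(centroid(left_eye_pts) - centroid(right_eye_pts))
    # Width between the left and right mouth corners.
    mouth_width = np.linalg.norm(np.asarray(left_corner, dtype=float)
                                 - np.asarray(right_corner, dtype=float))
    # Seventh feature value: mouth width relative to the eye distance.
    ratio = mouth_width / eye_distance
    # The small mouth is the positive sample here, so the normalization
    # pushes narrower mouths toward 1.
    return (ratio - neg_sample) / (pos_sample - neg_sample)
```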
8. The method according to any one of claims 2 to 7, wherein after the plurality of feature points of the preset face element in the face image are obtained, the method further comprises:
analyzing and determining the gender of the person in the face image according to a preset person gender determination template.
9. An apparatus for calculating a target feature value of a face image, the apparatus comprising:
a face element processing module, used for obtaining a face image shot by a terminal, processing the preset face elements in the face image through a preset face matching template to obtain a plurality of feature points, and calculating the feature values of the preset face elements according to the plurality of feature points of the preset face elements, which comprises: acquiring a skin feature region according to the plurality of feature points of the preset face elements; and acquiring the average gray value of the skin feature region to obtain a fifth feature value; wherein the positive/negative sample feature values of the positive/negative sample images corresponding to the preset face elements comprise: a fifth positive sample feature value corresponding to a fair-skin image and a fifth negative sample feature value corresponding to a dark-skin image; the skin fairness feature value is calculated as: (the fifth feature value - the fifth negative sample feature value) / (the fifth positive sample feature value - the fifth negative sample feature value); the feature values of the preset face elements comprise the fifth feature value; and the preset face elements comprise: the left eye, the right eye, the left eyebrow, the right eyebrow, the nose, the mouth, and the face edge;
a feature value processing module, used for processing the positive/negative sample images corresponding to the preset face elements through the face matching template to obtain a plurality of feature points, determining the positive/negative sample feature values according to the plurality of feature points of the positive/negative sample images, and calculating the deviation between the feature values of the preset face elements and the positive/negative sample feature values to obtain target feature values; the target feature values comprise: an eye feature value, a pupil feature value, an eyebrow feature value, a nose feature value, a mouth feature value, a skin fairness feature value, or a skin smoothness feature value;
the target feature value is calculated as: (the feature value of the preset face element - the negative sample feature value) / (the positive sample feature value - the negative sample feature value);
an image result processing module, used for weighting the target feature values according to a preset weighting strategy, determining a face image processing result, and displaying the face image processing result on a display screen; and
a sample image processing module, used for extracting features of the sample images in a preset image library and classifying the sample images according to the preset face elements to obtain the positive/negative sample images corresponding to the preset face elements.
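Taken together, the modules implement a normalize-then-weight pipeline. A minimal sketch of the final weighting step follows; the feature names and weight values are invented for illustration, since the patent fixes only the normalization formula and the existence of a preset weighting strategy.

```python
def face_image_result(target_features, weights):
    # Weighted combination of the normalized target feature values
    # (each already mapped into the 0..1 range by the sample-based
    # normalization above).
    return sum(weights[name] * value for name, value in target_features.items())

# Illustrative values only: the keys and weights are assumptions.
features = {"eye": 0.82, "eyebrow": 0.40, "nose": 0.65,
            "mouth": 0.71, "skin_fairness": 0.55}
weights = {"eye": 0.30, "eyebrow": 0.10, "nose": 0.15,
           "mouth": 0.20, "skin_fairness": 0.25}
print(face_image_result(features, weights))  # about 0.663
```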
10. The apparatus of claim 9, wherein the face element processing module comprises:
a first area calculation unit, used for calculating the area of the feature region of the left eye, the area of the feature region of the right eye, and the area of the feature region of the face edge according to the plurality of feature points of the preset face elements;
a target eye determination unit, used for comparing the feature region area of the left eye with the feature region area of the right eye, and determining whichever of the left eye and the right eye has the larger feature region area as a target eye;
a first feature value calculation unit, used for calculating the ratio of the feature region area of the target eye to the feature region area of the face edge to obtain a first feature value;
the positive/negative sample feature values of the positive/negative sample images corresponding to the preset face elements comprise: a first positive sample feature value corresponding to a large-eye image and a first negative sample feature value corresponding to a small-eye image;
the eye feature value is calculated as: (the first feature value - the first negative sample feature value) / (the first positive sample feature value - the first negative sample feature value).
11. The apparatus of claim 9, wherein the face element processing module comprises:
a second area calculation unit, used for respectively calculating the area of the feature region of the left eyebrow, the area of the feature region of the right eyebrow, and the area of the feature region of the face edge according to the plurality of feature points of the preset face element;
a target eyebrow determination unit, used for comparing the feature region area of the left eyebrow with the feature region area of the right eyebrow, and determining whichever of the left eyebrow and the right eyebrow has the larger feature region area as a target eyebrow;
a second feature value calculation unit, used for calculating the ratio of the feature region area of the target eyebrow to the feature region area of the face edge to obtain a second feature value;
the positive/negative sample feature values of the positive/negative sample images corresponding to the preset face elements comprise: a second positive sample feature value corresponding to a thick-eyebrow image and a second negative sample feature value corresponding to a thin-eyebrow image;
the eyebrow feature value is calculated as: (the second feature value - the second negative sample feature value) / (the second positive sample feature value - the second negative sample feature value).
12. The apparatus of claim 9, wherein the face element processing module comprises:
a third area calculation unit, used for respectively calculating the area of the feature region of the nose and the area of the feature region of the face edge according to the plurality of feature points of the preset face element;
a third feature value calculation unit, used for calculating the ratio of the feature region area of the nose to the feature region area of the face edge to obtain a third feature value;
the positive/negative sample feature values of the positive/negative sample images corresponding to the preset face elements comprise: a third positive sample feature value corresponding to a big-nose image and a third negative sample feature value corresponding to a small-nose image;
the nose feature value is calculated as: (the third feature value - the third negative sample feature value) / (the third positive sample feature value - the third negative sample feature value).
13. The apparatus of claim 9, wherein the face element processing module comprises:
a first area calculation unit, used for respectively calculating the area of the feature region of the left eye and the area of the feature region of the right eye according to the plurality of feature points of the preset face elements;
a target eye determination unit, used for comparing the feature region area of the left eye with the feature region area of the right eye, and determining whichever of the left eye and the right eye has the larger feature region area as a target eye;
a fourth feature value calculation unit, used for acquiring the gray value of the target eye and the gray value of the pupil corresponding to the target eye according to the plurality of feature points of the preset face element, and calculating the ratio of the gray value of the pupil to the gray value of the target eye to obtain a fourth feature value;
the positive/negative sample feature values of the positive/negative sample images corresponding to the preset face elements comprise: a fourth positive sample feature value corresponding to a large-pupil image and a fourth negative sample feature value corresponding to a small-pupil image;
the pupil feature value is calculated as: (the fourth feature value - the fourth negative sample feature value) / (the fourth positive sample feature value - the fourth negative sample feature value).
14. The apparatus of claim 9, wherein the face element processing module comprises:
a second acquisition unit, used for acquiring an edge feature region according to the plurality of feature points of the preset face element;
a sixth feature value calculation unit, used for acquiring the average gray value of the edge feature region to obtain a sixth feature value;
the positive/negative sample feature values of the positive/negative sample images corresponding to the preset face elements comprise: a sixth positive sample feature value corresponding to a smooth-skin image and a sixth negative sample feature value corresponding to a rough-skin image;
the skin smoothness feature value is calculated as: (the sixth feature value - the sixth negative sample feature value) / (the sixth positive sample feature value - the sixth negative sample feature value).
15. The apparatus of claim 9, wherein the face element processing module comprises:
an eye distance calculation unit, used for calculating the distance between the centers of the left eye and the right eye according to the plurality of feature points of the preset face elements to obtain a between-eye center distance value;
a mouth corner width calculation unit, used for calculating the distance between the left mouth corner and the right mouth corner according to the plurality of feature points of the preset face elements to obtain a mouth corner width value;
a seventh feature value calculation unit, used for calculating the ratio of the mouth corner width value to the between-eye center distance value to obtain a seventh feature value;
the positive/negative sample feature values of the positive/negative sample images corresponding to the preset face elements comprise: a seventh positive sample feature value corresponding to a small-mouth image and a seventh negative sample feature value corresponding to a large-mouth image;
the mouth feature value is calculated as: (the seventh feature value - the seventh negative sample feature value) / (the seventh positive sample feature value - the seventh negative sample feature value).
16. The apparatus of any one of claims 9 to 15, further comprising:
a gender determination module, used for analyzing and determining the gender of the person in the face image according to a preset person gender determination template;
wherein the preset weighting strategy comprises: determining the weighting terms from the eye feature value, the pupil feature value, the eyebrow feature value, the nose feature value, the mouth feature value, the skin fairness feature value, and the skin smoothness feature value among the target feature values according to the gender of the face image.
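One way such a gender-dependent strategy can look in code; the term choices and weight values below are invented for illustration, since the claim requires only that the selected weighting terms depend on the detected gender.

```python
# Illustrative gender-dependent weighting strategy: the detected
# gender selects which target feature values are weighted. The term
# choices and weight values are assumptions, not from the patent.
WEIGHTING_STRATEGY = {
    "female": {"eye": 0.25, "pupil": 0.10, "eyebrow": 0.10,
               "mouth": 0.15, "skin_fairness": 0.20,
               "skin_smoothness": 0.20},
    "male": {"eye": 0.25, "eyebrow": 0.20, "nose": 0.25, "mouth": 0.30},
}

def weighted_result(target_features, gender):
    # Only the terms selected for this gender contribute to the result.
    weights = WEIGHTING_STRATEGY[gender]
    return sum(w * target_features[name] for name, w in weights.items())
```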
CN201310636576.2A 2013-11-27 2013-11-27 Method and device for processing face image Active CN104680121B (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN201310636576.2A CN104680121B (en) 2013-11-27 2013-11-27 Method and device for processing face image
PCT/CN2014/089885 WO2015078261A1 (en) 2013-11-27 2014-10-30 Methods and systems for processing facial images
HK15107064.9A HK1206463A1 (en) 2013-11-27 2015-07-24 A method and apparatus for processing facial images

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310636576.2A CN104680121B (en) 2013-11-27 2013-11-27 Method and device for processing face image

Publications (2)

Publication Number Publication Date
CN104680121A CN104680121A (en) 2015-06-03
CN104680121B true CN104680121B (en) 2022-06-03

Family

ID=53198334

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310636576.2A Active CN104680121B (en) 2013-11-27 2013-11-27 Method and device for processing face image

Country Status (3)

Country Link
CN (1) CN104680121B (en)
HK (1) HK1206463A1 (en)
WO (1) WO2015078261A1 (en)

Families Citing this family (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105205490B * 2015-09-23 2019-09-24 联想(北京)有限公司 Information processing method and electronic device
CN105450664B * 2015-12-29 2019-04-12 腾讯科技(深圳)有限公司 Information processing method and terminal
CN107122327B (en) * 2016-02-25 2021-06-29 阿里巴巴集团控股有限公司 Method and training system for training model by using training data
CN106503686A (en) * 2016-10-28 2017-03-15 广州炒米信息科技有限公司 The method and system of retrieval facial image
CN108229278B (en) 2017-04-14 2020-11-17 深圳市商汤科技有限公司 Face image processing method and device and electronic equipment
CN108229279B (en) 2017-04-14 2020-06-02 深圳市商汤科技有限公司 Face image processing method and device and electronic equipment
CN110490177A * 2017-06-02 2019-11-22 腾讯科技(深圳)有限公司 Face detector training method and device
CN107329402A * 2017-07-03 2017-11-07 湖南工业大学 Control method in which a combined integral link is combined with a PPI controller algorithm
CN109299632A (en) * 2017-07-25 2019-02-01 上海中科顶信医学影像科技有限公司 Skin detecting method, system, equipment and storage medium
CN108288023B (en) * 2017-12-20 2020-10-16 深圳和而泰数据资源与云技术有限公司 Face recognition method and device
CN108346130B (en) * 2018-03-20 2021-07-23 北京奇虎科技有限公司 Image processing method and device and electronic equipment
CN108629303A * 2018-04-24 2018-10-09 杭州数为科技有限公司 Face shape defect identification method and system
CN109063597A * 2018-07-13 2018-12-21 北京科莱普云技术有限公司 Face detection method and device, computer equipment, and storage medium
CN110929073A (en) * 2018-08-30 2020-03-27 上海掌门科技有限公司 Method and equipment for pushing information and collecting data
CN110968723B (en) * 2018-09-29 2023-05-12 深圳云天励飞技术有限公司 Image characteristic value searching method and device and electronic equipment
CN109978836B (en) * 2019-03-06 2021-01-19 华南理工大学 User personalized image aesthetic feeling evaluation method, system, medium and equipment based on meta learning
CN110717373B (en) * 2019-08-19 2023-01-03 咪咕文化科技有限公司 Image simulation method, electronic device, and computer-readable storage medium
CN111768336B (en) * 2020-07-09 2022-11-01 腾讯科技(深圳)有限公司 Face image processing method and device, computer equipment and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1472694A (en) * 2002-10-25 2004-02-04 Global anti-terrorism face identifying codes and computer storing and searching method
CN101305913A (en) * 2008-07-11 2008-11-19 华南理工大学 Face beauty assessment method based on video
JP2012008617A (en) * 2010-06-22 2012-01-12 Kao Corp Face image evaluation method, face evaluation method and image processing device
CN102496002A (en) * 2011-11-22 2012-06-13 上海大学 Facial beauty evaluation method based on images

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7720284B2 (en) * 2006-09-08 2010-05-18 Omron Corporation Method for outlining and aligning a face in face processing of an image
US7822696B2 (en) * 2007-07-13 2010-10-26 Microsoft Corporation Histogram-based classifiers having variable bin sizes
CN101833672B (en) * 2010-04-02 2012-02-29 清华大学 Sparse representation face identification method based on constrained sampling and shape feature
KR20130000828A (en) * 2011-06-24 2013-01-03 엘지이노텍 주식회사 A method of detecting facial features

Also Published As

Publication number Publication date
WO2015078261A1 (en) 2015-06-04
CN104680121A (en) 2015-06-03
HK1206463A1 (en) 2016-01-08

Similar Documents

Publication Publication Date Title
CN104680121B (en) Method and device for processing face image
WO2019128508A1 (en) Method and apparatus for processing image, storage medium, and electronic device
Zhao et al. Facial expression recognition from near-infrared videos
US7848548B1 (en) Method and system for robust demographic classification using pose independent model from sequence of face images
US9317785B1 (en) Method and system for determining ethnicity category of facial images based on multi-level primary and auxiliary classifiers
US9235751B2 (en) Method and apparatus for image detection and correction
Tome et al. Identification using face regions: Application and assessment in forensic scenarios
Jana et al. Age estimation from face image using wrinkle features
Li et al. Efficient 3D face recognition handling facial expression and hair occlusion
CN108197534A (en) Head pose detection method, electronic device, and storage medium
US20100111375A1 (en) Method for Determining Attributes of Faces in Images
KR101558547B1 (en) Age recognition method robust to changes in face pose, and system therefor
CN109725721B (en) Human eye positioning method and system for naked eye 3D display system
Ouanan et al. Facial landmark localization: Past, present and future
CN110232331B (en) Online face clustering method and system
CN111586424B (en) Video live broadcast method and device for realizing multi-dimensional dynamic display of cosmetics
CN109509142A (en) Face aging image processing method and system, readable storage medium, and device
CN104008364A (en) Face recognition method
Kroon et al. Eye localization in low and standard definition content with application to face matching
Galdámez et al. Ear recognition using a hybrid approach based on neural networks
CN109087240B (en) Image processing method, image processing apparatus, and storage medium
Lin et al. A gender classification scheme based on multi-region feature extraction and information fusion for unconstrained images
Halawani et al. Human ear localization: A template-based approach
Bourbakis et al. Skin-based face detection-extraction and recognition of facial expressions
KR20160042646A (en) Method of Recognizing Faces

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 1206463

Country of ref document: HK

C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant