WO2019090769A1 - Human face shape recognition method and apparatus, and intelligent terminal - Google Patents

Human face shape recognition method and apparatus, and intelligent terminal

Info

Publication number
WO2019090769A1
Authority
WO
WIPO (PCT)
Prior art keywords
face
key point
feature
mandible
reference value
Prior art date
Application number
PCT/CN2017/110711
Other languages
French (fr)
Chinese (zh)
Inventor
林丽梅
Original Assignee
深圳和而泰智能控制股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳和而泰智能控制股份有限公司 filed Critical 深圳和而泰智能控制股份有限公司
Priority to PCT/CN2017/110711 priority Critical patent/WO2019090769A1/en
Priority to CN201780009011.8A priority patent/CN108701216B/en
Publication of WO2019090769A1 publication Critical patent/WO2019090769A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • G06V40/171Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification

Definitions

  • the present application relates to the field of face recognition technologies, and in particular, to a face shape recognition method, apparatus, and smart terminal.
  • Face recognition technology is a technology for identifying and comparing facial visual feature information. Its research fields include identity recognition, expression recognition, gender recognition, and beauty and skin care.
  • the field of face recognition technology has also proposed methods for detecting a person's face shape by recognizing the face in an image. For example, in the prior art, the face shape of the face to be tested can be identified by first locating the face contour and then using the curvature features of that contour.
  • However, recognizing the face shape based on the curvature features of the face contour requires high recognition accuracy for the face contour curve. In practical applications, extraction of the mandible contour curve is difficult and the accuracy of the extracted contour curve is not high, so a face shape recognition result based on the curvature features of the face contour has low reliability.
  • the embodiment of the present invention provides a face shape recognition method, device, and intelligent terminal, which can solve the problem of low reliability in recognition results when the face shape is recognized based on the curvature of the face contour.
  • the embodiment of the present application provides a face shape recognition method, including:
  • extracting face key points in the face image, wherein the face key points include: nasal bone key points, mandible key points, and chin key points;
  • the feature vector includes: a face length feature value, a mandible width feature value, and a chin angle feature value
  • the person A face length feature value is constructed based on the nasal bone key point, the chin key point, and the reference value
  • the mandible width feature value being constructed based on the mandible key point and the reference value
  • the chin angle feature value Constructing based on the key points of the mandible and the key points of the chin;
  • the face key points further include two temple key points;
  • determining, based on the face key points, the reference value of the face image includes:
  • using the distance between the two temple key points as the reference value of the face image.
  • the face shape recognition method further includes:
  • the acquiring the face feature model includes:
  • the face key points include: a nasal bone key point, a mandible key point, and a chin key point;
  • the feature vector includes: a face length feature value, a mandible width feature value, and a chin angle feature value.
  • the face length feature value is constructed based on the nasal bone key point, the chin key point, and the reference value
  • the mandible width feature value is constructed based on the mandible key point and the reference value
  • the chin angle feature value is constructed based on the key point of the mandible and the key point of the chin;
  • the feature vector and the face mark of each of the face image samples are input into a support vector machine model, and the face feature model is trained.
  • the face key points further include: a zygomatic (cheekbone) key point
  • the feature vector further includes: a cheek width feature value
  • the cheek width feature value is constructed based on the nasal bone key point, the zygomatic key point, and the reference value.
  • the feature vector further includes: a side face length feature value, and the side face length feature value is constructed based on the mandible key point, the zygomatic key point, and the reference value.
  • the feature vector further includes: a ratio of the side face length feature value to the face length feature value.
  • the embodiment of the present application provides a face shape recognition device, including:
  • An image acquisition unit configured to acquire a face image
  • a face key point extracting unit configured to extract a face key point in the face image, wherein the face key point comprises: a key point of a nose bone, a key point of a mandible, and a key point of a chin;
  • a reference value determining unit configured to determine a reference value of the face image based on the face key point
  • a feature vector construction unit configured to construct a feature vector of the face image in combination with the face key point and the reference value, wherein the feature vector includes: a face length feature value, a mandible width feature value, and a chin An angle feature value, the face length feature value being constructed based on the nasal bone key point, the chin key point, and the reference value, the mandible width feature value being constructed based on the mandible key point and the reference value
  • the chin angle feature value is constructed based on the key point of the mandible and the key point of the chin;
  • a matching unit configured to input the feature vector into a face feature model
  • an output unit configured to acquire a face recognition result output by the face feature model.
  • the face shape recognition device further includes:
  • a model obtaining unit configured to acquire the face feature model.
  • the model obtaining unit includes:
  • a face image sample collection module configured to collect a preset number of face image samples, each of the face image samples being marked with a face mark;
  • a face key point extraction module configured to separately extract face key points in the face image sample, wherein the face key points include: a key point of a nose bone, a key point of a mandible, and a key point of a chin;
  • a reference value determining module configured to determine a reference value of each of the face image samples based on the face key points respectively;
  • a feature vector construction module configured to construct a feature vector of each of the face image samples in combination with the face key points and the reference value, wherein the feature vector comprises: a face length feature value, a mandible width feature value, and a chin angle feature value, the face length feature value being constructed based on the nasal bone key point, the chin key point, and the reference value, the mandible width feature value being constructed based on the mandible key point and the reference value, and the chin angle feature value being constructed based on the mandible key point and the chin key point;
  • a training module configured to input a feature vector and a face mark of each of the face image samples into a support vector machine model, and train the face feature model.
  • an intelligent terminal including:
  • at least one processor; and
  • a memory communicably connected to the at least one processor, wherein the memory stores instructions executable by the at least one processor, the instructions being executed by the at least one processor to enable the at least one processor to perform the face shape recognition method described above.
  • the embodiment of the present application provides a storage medium, where the storage medium stores executable instructions which, when executed by the smart terminal, cause the smart terminal to perform the face shape recognition method described above.
  • an embodiment of the present application further provides a program product, where the program product includes a program stored on a storage medium, the program including program instructions which, when executed by the smart terminal, cause the smart terminal to perform the face shape recognition method described above.
  • the beneficial effects of the embodiment of the present application are as follows: in the face shape recognition method, device, and smart terminal provided by the embodiment of the present application,
  • when a face image is acquired, face key points are extracted from the face image, a reference value of the face image is determined based on the face key points, and a feature vector of the face image is constructed in combination with the face key points and the reference value, wherein the face key points include: a nasal bone key point, a mandible key point, and a chin key point, and the feature vector includes: a face length feature value, a mandible width feature value, and a chin angle feature value, the mandible width feature value being constructed based on the mandible key point and the reference value, and the chin angle feature value being constructed based on the mandible key point and the chin key point; the feature vector is then input into the face feature model, and finally the face recognition result output by the face feature model is obtained. Face key points outside the face contour, for example the nasal bone key points, can thus be combined with various facial features to recognize the face shape, thereby enhancing the reliability of the recognition result.
  • FIG. 1 is a schematic flowchart of a face shape recognition method according to an embodiment of the present application.
  • FIG. 2 is a schematic diagram of an example of a face key point location provided by an embodiment of the present application.
  • FIG. 3 is a schematic flowchart of a method for acquiring a face feature model according to an embodiment of the present application.
  • FIG. 4 is a schematic diagram of an example of face shape classification provided by an embodiment of the present application.
  • FIG. 5 is a schematic structural diagram of a face shape recognition device according to an embodiment of the present application.
  • FIG. 6 is a schematic structural diagram of hardware of an intelligent terminal according to an embodiment of the present application.
  • the embodiment of the present application provides a face shape recognition method, device, intelligent terminal, and storage medium.
  • the face shape recognition method is a face recognition method based on a feature vector constructed from face key points and a face feature model trained by a machine learning algorithm; the face key points are extracted when the face image is acquired.
  • the face recognition result output by the face feature model can thus draw on face key points outside the face contour, for example the nasal bone key points, combined with various facial features, to recognize the face shape, thereby enhancing the reliability of the recognition result.
  • the face shape recognition method, smart terminal, and storage medium provided by the embodiments of the present application can be applied to any technical field related to face recognition, such as portrait nationality recognition, and are particularly suitable for fields such as beauty applications and personal image design.
  • For example, a beauty application can be developed based on the inventive concept of the face shape recognition method provided by the embodiment of the present application; when the user inputs a face image, the application can automatically recognize the face shape of the face image and further recommend suitable hair styles, makeup, eyeglass frames, jewelry, and more for that face shape.
  • the face shape recognition method provided by the embodiment of the present application may be performed by any type of smart terminal having an image processing function, and the smart terminal may include any suitable type of storage medium for storing data, such as a magnetic disk, a compact disc read-only memory (CD-ROM), a read-only memory, or a random access memory.
  • the smart terminal may also include one or more logical computing modules that perform any suitable type of function or operation in parallel, such as viewing a database, image processing, etc., in a single thread or multiple threads.
  • the logic operation module may be any suitable type of electronic circuit or chip-type electronic device capable of performing logical operation operations, such as a single core processor, a multi-core processor, a graphics processing unit (GPU), or the like.
  • the smart terminal may include, but is not limited to, a beauty authentication device, a personal computer, a tablet computer, a smart phone, a server, and the like.
  • FIG. 1 is a schematic flowchart of a face shape recognition method according to an embodiment of the present application. Referring to FIG. 1, the method includes, but is not limited to, the following steps:
  • Step 110 Acquire a face image.
  • the "face image” refers to an image including the face of the detected person, by which all face features of the detected person can be acquired.
  • the specific implementation of acquiring the face image may be: collecting the frontal face image of the detected person in real time; or directly acquiring an existing image that includes the frontal face of the detected person from the smart terminal or the cloud.
  • Different ways of acquiring the face image may be selected for different application scenarios or detected persons. For example, suppose a smart terminal for recommending suitable eyeglass frames to the user is provided in an eyeglass store; in order to recommend a suitable frame based on the user's face shape in a timely manner, the smart terminal may acquire the face image by collecting the frontal face image of the detected person in real time through a camera device.
  • In another scenario, the user wants to design suitable makeup for himself or herself through a personal smart terminal, for example a smart phone. Since such a smart terminal generally stores personal face images, in this application scenario the face image may also be acquired by directly retrieving an existing image that includes the frontal face of the detected person from the smart terminal or the cloud.
  • the manner of obtaining the face image is not limited to the above description, and the embodiments of the present application are not specifically limited in this regard.
  • Step 120 Extract a face key point in the face image.
  • the face key point refers to a feature point distributed on an area having a characteristic feature in a face (for example, a face contour, an eye, an eyebrow, a nose, a mouth, etc.).
  • Different faces have different face key points.
  • each of the regions having the characteristic features may be distributed with a plurality of face key points.
  • For example, the nose area may include a nasal bone key point for identifying the position of the bridge of the nose, a nose tip key point for identifying the position of the nose tip, and the like; for another example, the face contour area may include key points for identifying the mandible.
  • the number of nasal bone key points, nose tip key points, mandible key points, zygomatic key points, and chin key points is not limited to one.
  • In practice, face key point positioning may be performed on the acquired face image using a suitable algorithm or tool, and the required face key points are then extracted.
  • the key points that need to be extracted include, but are not limited to, a key point of the nose, a key point of the mandible, and a key point of the chin.
  • For example, the face key points of the acquired face image can be located through the third-party toolkit dlib, and the resulting key point distribution map is shown in FIG. 2 (where face key points 0-16 are the feature points identifying the face contour area of the face image, and face key points 27-30 are the feature points identifying the nasal bone area of the face image); then the preset face key points are selected and their coordinate parameters obtained, for example, the nasal bone key points 27-30, the mandible key points 4-7 and 9-12 on the two sides of the face, and the chin key point 8.
  • In one example, the located key points may number 56 in total.
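As a rough illustration, the key-point selection described above can be sketched in Python. The index layout (0-16 face contour, 27-30 nasal bone, 8 chin) follows FIG. 2 and dlib's common 68-point convention; the coordinates below are dummy placeholders and the helper name `select_key_points` is hypothetical, not part of the embodiment.

```python
# Sketch of selecting the preset face key points from a dlib-style
# 68-point landmark list (indices per FIG. 2: 0-16 face contour,
# 27-30 nasal bone, 8 chin). Coordinates here are placeholders.

NOSE_BRIDGE = list(range(27, 31))        # nasal bone key points 27-30
MANDIBLE = [4, 5, 6, 7, 9, 10, 11, 12]   # mandible key points on both sides
CHIN = [8]                               # chin key point

def select_key_points(landmarks):
    """Pick the preset key points (index -> (x, y)) from all landmarks."""
    wanted = NOSE_BRIDGE + MANDIBLE + CHIN
    return {i: landmarks[i] for i in wanted}

# Example with dummy coordinates for a 68-point layout:
dummy = [(i, 2 * i) for i in range(68)]
points = select_key_points(dummy)
print(len(points))   # 13 preset key points
print(points[8])     # chin key point coordinates
```

In a real pipeline the `dummy` list would be replaced by the (x, y) pairs returned by dlib's shape predictor for the detected face.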
  • Step 130 Determine a reference value of the face image based on the face key point.
  • In this embodiment, a reference standard is set for the facial feature values of each face image, and this reference standard is the "reference value".
  • Specifically, the distance between any two extracted face key points may be used as the reference value of the face image; for example, the distance between nasal bone key point 27 and nasal bone key point 30 is taken as the reference value of the face image; or the distance between mandible key point 4 and mandible key point 12 is used as the reference value of the face image; or the distance between mandible key point 5 and chin key point 8 is used as the reference value of the face image.
  • Once the construction method is selected, the reference value of each face image is constructed in the same manner.
  • For example, suppose the distance between mandible key point 4 and mandible key point 12 is selected to construct the reference value. If face image A is acquired, mandible key point 4 and mandible key point 12 of face image A are extracted, and the reference value of face image A is determined as the distance between mandible key point 4 and mandible key point 12 of face image A; if face image B is acquired, mandible key point 4 and mandible key point 12 of face image B are extracted, and the reference value of face image B is determined as the distance between mandible key point 4 and mandible key point 12 of face image B.
  • In this embodiment, the extracted face key points further include two temple key points located at the temple positions, that is, face key points 1 and 15 as shown in FIG. 2. In this case, a specific embodiment of determining the reference value of the face image based on the face key points may be: using the distance between the two temple key points as the reference value of the face image.
  • Since the distance between the two temple key points is a major feature of the face shape, using this distance as the reference value of the face image means that the ratio of the distance between specific face key points (for example, the two mandible key points located on the two sides of the face) to this distance can better reflect the facial features of the face image.
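A minimal sketch of this reference-value computation, assuming the landmark layout of FIG. 2 (temple key points at indices 1 and 15) and hypothetical coordinates; `reference_value` is an illustrative helper name, not terminology from the embodiment.

```python
import math

# Sketch: the reference value is the distance between the two temple
# key points (points 1 and 15 in FIG. 2). Coordinates are hypothetical.

def reference_value(landmarks):
    """Euclidean distance between temple key points 1 and 15."""
    (x1, y1), (x2, y2) = landmarks[1], landmarks[15]
    return math.hypot(x2 - x1, y2 - y1)

landmarks = {1: (30.0, 60.0), 15: (150.0, 60.0)}
print(reference_value(landmarks))  # 120.0
```

Dividing other key-point distances by this value then yields scale-invariant feature values, as the paragraph above explains.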
  • Step 140 Construct a feature vector of the face image in combination with the face key point and the reference value.
  • the "feature vector” is a parameter for characterizing a face face of the acquired face image, which may be composed of facial feature values of a plurality of face images.
  • the “facial feature value” is a parameter that reflects a facial feature of the face image, and may include, but is not limited to, a face length feature value, a mandible width feature value, and a chin angle feature value.
  • Specifically, a plurality of preset facial feature values may first be constructed based on the extracted face key points and the determined reference value, for example: a face length feature value of the face image is constructed based on the nasal bone key points, the chin key point, and the reference value; a mandible width feature value of the face image is constructed based on the mandible key points and the reference value; and a chin angle feature value of the face image is constructed based on the mandible key points and the chin key point. The constructed facial feature values are then combined into the feature vector of the face image.
  • Further, the extracted face key points may also include zygomatic key points located on the face contour near the cheekbones. In this case, a cheek width feature value of the face image may be constructed based on the extracted nasal bone key points and zygomatic key points (and/or the mandible key points) and the determined reference value, and/or a side face length feature value of the face image may be constructed based on the extracted mandible key points, zygomatic key points, and the determined reference value.
  • Further, the constructed feature vector may also include the ratio of any two facial feature values having length/width characteristics, such as the ratio of the side face length feature value to the face length feature value.
  • Feature8 = Feature3/Feature1;
  • Feature9 = Feature7/Feature1;
  • Feature10 = Feature7/Feature3;
  • Feature15 = the cosine of the angle formed by L(8,5) and L(8,11);
  • Feature16 = the cosine of the angle formed by L(8,6) and L(8,10);
  • Feature17 = the cosine of the angle formed by L(8,7) and L(8,9);
  • Feature18 = the cosine of the angle formed by L(11,10) and L(11,12);
  • where D(x, y) represents the distance between face key point x and face key point y, and L(x, y) represents the line segment between face key point x and face key point y.
  • Among them, Feature1 is the face length feature value; Feature2~Feature4 are the mandible width feature values; Feature5 and Feature6 are the mandible length feature values; Feature7 is the side face length feature value; Feature8 is the ratio of the mandible width feature value to the face length feature value; Feature9 is the ratio of the side face length feature value to the face length feature value; Feature10 is the ratio of the side face length feature value to the mandible width feature value; Feature11~Feature14 are the cheek width feature values; and Feature15~Feature18 are the chin angle feature values.
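To make these definitions concrete, the sketch below computes one chin-angle feature and one ratio feature from hypothetical key-point coordinates. Only the D(x, y)/L(x, y) definitions come from the embodiment; the function names and all numeric values are illustrative.

```python
import math

# Illustrative computation of the feature types above, with hypothetical
# key-point coordinates. D(x, y) is the distance between two key points;
# the chin angle features (Feature15-Feature18) are cosines of the angle
# between two line segments sharing an endpoint.

def dist(p, q):
    """D(p, q): Euclidean distance between two key points."""
    return math.hypot(q[0] - p[0], q[1] - p[1])

def cos_angle(vertex, a, b):
    """Cosine of the angle formed by segments L(vertex, a) and L(vertex, b)."""
    va = (a[0] - vertex[0], a[1] - vertex[1])
    vb = (b[0] - vertex[0], b[1] - vertex[1])
    dot = va[0] * vb[0] + va[1] * vb[1]
    return dot / (dist(vertex, a) * dist(vertex, b))

# Hypothetical chin (8) and mandible (5, 11) key-point coordinates:
p8, p5, p11 = (0.0, 0.0), (-1.0, 1.0), (1.0, 1.0)
feature15 = cos_angle(p8, p5, p11)  # a chin-angle feature (Feature15)
print(feature15)  # 0.0 for these perpendicular segments

# A ratio feature like Feature8 = Feature3/Feature1 is a plain quotient:
feature1, feature3 = 1.5, 0.9       # hypothetical normalized lengths
feature8 = feature3 / feature1
print(round(feature8, 3))           # 0.6
```

The same `cos_angle` helper would serve Feature16 through Feature18 with the corresponding key-point indices.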
  • Step 150 Input the feature vector into the face feature model.
  • the "face feature model” is a face type classifier trained by a machine learning algorithm, and after inputting the feature vector of the face image into the face feature model, the face feature model can input the feature vector An operation is performed to output a face recognition result corresponding to the feature vector.
  • the face feature model can be pre-trained and stored locally in the smart terminal. When the face face recognition is performed, the face feature model can be directly called from the smart terminal.
  • If the smart terminal does not pre-store the face feature model, the face feature model needs to be acquired first.
  • the specific embodiment of obtaining the face feature model may be: downloading the face feature model from another device or the cloud through a network connection; or training the face feature model locally on the smart terminal based on face image samples and a machine learning algorithm.
  • the specific implementation of training the face feature model based on face image samples and a machine learning algorithm may be: first collecting a preset number of face image samples; then extracting the feature vector of each face image sample as a training sample; and finally training on the training samples with any suitable machine learning algorithm, such as a neural network, a decision tree, or a support vector machine, so that the trained model has the function of face shape classification.
  • FIG. 3 is a schematic flowchart of a method for acquiring the face feature model provided by the embodiment of the present application; referring to FIG. 3, the method may include, but is not limited to, the following steps:
  • Step 151 Collect a preset number of face image samples, and each of the face image samples is marked with a face mark.
  • Specifically, the face shapes are first classified as needed; for example, the face shapes are divided into the five categories shown in FIG. 4: heart-shaped face, oval face, long face, round face, and square face. A preset number of face image samples is then collected for each face shape, and a face type mark indicating the face shape to which the sample belongs is marked in each face image sample.
  • Step 152 Extract the face key points in the face image sample separately.
  • Specifically, the same face key point positioning method is used, for example the third-party toolkit dlib, to perform face key point positioning on all collected face image samples and extract the face key points at the predetermined positions,
  • including but not limited to: the nasal bone key points, the mandible key points, and the chin key points.
  • For the specific implementation of extracting the face key points in each face image sample, refer to step 120 above; details are not described herein again.
  • the same face key point positioning method is also used to perform face key point positioning on the acquired face image and extract the same face key point.
  • Step 153 Determine a reference value of each of the face image samples based on the face key points, respectively.
  • step 153 has the same technical features as the above-mentioned step 130, and the specific implementation manner can also refer to the above step 130, and therefore, details are not described herein again.
  • Step 154 Construct a feature vector of each of the face image samples in combination with the face key point and the reference value, respectively.
  • the step 154 has the same technical features as the above-mentioned step 140, and the specific embodiment can also refer to the above step 140, and therefore, details are not described herein again.
  • Step 155 Input the feature vector and face mark of each of the face image samples into a support vector machine model for training, to obtain the face feature model.
  • In this embodiment, a face feature model with the function of face shape classification is trained by using a Support Vector Machine (SVM) model.
  • the "Support Vector Machine Model” is a supervised machine learning model, which is usually used for pattern recognition, classification and regression analysis.
  • Inputting the feature vector and face mark of each face image sample into the support vector machine model trains the face feature model. Specifically, the feature vector of each face image sample is input as the variable, and the face mark of the face image sample is input as the result; through a large number of training samples, a function describing the face shape can be obtained, which is equivalent to training and generating the face feature model.
  • In subsequent use, the variable is the input feature vector of the face image to be tested, and the result is the face recognition result of that face image, as output by the face feature model.
  • In practice, the machine learning toolkit sklearn can be called and the svm class introduced to generate an empty model object; then the training data (the feature vector of each face image sample and its face type mark) is fed to model.fit for training, so as to obtain the face feature model.
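The sklearn workflow described here might be sketched as follows. The feature values and face-type labels are tiny synthetic stand-ins, not data from the embodiment, and the three-element feature vectors are an arbitrary simplification of the eighteen features above.

```python
# Sketch of the sklearn training step described above: an svm classifier
# is created empty, then fed (feature vector, face-type mark) pairs via
# fit(). All numbers and labels below are synthetic stand-ins.
from sklearn import svm

X = [
    [1.30, 0.80, 0.95],  # hypothetical (face length, mandible width, ...) features
    [1.28, 0.82, 0.97],
    [1.05, 0.99, 0.60],
    [1.03, 0.97, 0.62],
]
y = ["long", "long", "round", "round"]  # face type marks

model = svm.SVC(kernel="linear")  # empty model object
model.fit(X, y)                   # training: obtains the face feature model

# A new feature vector near the first cluster should classify accordingly:
print(model.predict([[1.29, 0.81, 0.96]]))
```

In deployment, `X` would hold the Feature1-Feature18 vectors extracted from the labeled face image samples, and the trained model's parameters could be saved locally as the next bullet describes.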
  • Further, the parameters of the trained face feature model may also be saved locally in the smart terminal, so that the model can be directly called when face shape recognition is subsequently performed.
  • Step 160 Acquire a face recognition result output by the face feature model.
  • After the feature vector is input into the face feature model, the face recognition result is obtained, and the face recognition result is the face shape of the face in the face image.
  • In summary, the face shape recognition method extracts face key points from the face image upon acquiring a face image, determines a reference value of the face image based on the face key points, and constructs a feature vector of the face image in combination with the face key points and the reference value, wherein the face key points include: a nasal bone key point, a mandible key point, and a chin key point, and the feature vector includes: a face length feature value, a mandible width feature value, and a chin angle feature value, the mandible width feature value being constructed based on the mandible key point and the reference value, and the chin angle feature value being constructed based on the mandible key point and the chin key point; the feature vector is then input into the face feature model, and finally the face recognition result output by the face feature model is obtained. The method can thus combine face key points outside the face contour, such as the nasal bone key points, with a variety of facial features to recognize the face shape, thereby enhancing the reliability of the face recognition result.
  • FIG. 5 is a schematic structural diagram of a face shape recognition device according to an embodiment of the present application.
  • Referring to FIG. 5, the face shape recognition device 5 includes, but is not limited to:
  • An image obtaining unit 51 configured to acquire a face image
  • a face key extraction unit 52 configured to extract a face key point in the face image, wherein the face key point includes: a key point of the nose bone, a key point of the mandible, and a key point of the chin;
  • a reference value determining unit 53 configured to determine a reference value of the face image based on the face key point
  • the feature vector construction unit 54 is configured to construct a feature vector of the face image in combination with the face key point and the reference value, wherein the feature vector includes: a face length feature value, a mandible width feature value, and a chin angle feature value, the face length feature value being constructed based on the nasal bone key point, the chin key point, and the reference value, the mandible width feature value being based on the mandible key point and the reference value Constructing, the chin angle feature value is constructed based on the key point of the mandible and the key point of the chin;
  • a matching unit 55 configured to input the feature vector into a face feature model
  • an output unit 56, configured to acquire a face shape recognition result output by the face shape feature model.
  • In this embodiment, when the image acquisition unit 51 acquires a face image, the face key point extraction unit 52 extracts the face key points in the face image; the reference value determining unit 53 determines the reference value of the face image based on the face key points; the feature vector construction unit 54 constructs the feature vector of the face image by combining the face key points with the reference value; the matching unit 55 then inputs the feature vector into the face shape feature model; and finally the output unit 56 acquires the face shape recognition result output by the face shape feature model.
  • Optionally, the extracted face key points further include two temporal fossa (temple) key points; in this case, the reference value determining unit 53 is specifically configured to use the distance between the two temporal fossa key points as the reference value of the face image.
  • Optionally, the face shape recognition apparatus 5 further includes: a model acquisition unit 57, configured to acquire the face shape feature model.
  • The model acquisition unit 57 includes: a face image sample collection module 571, a face key point extraction module 572, a reference value determining module 573, a feature vector construction module 574, and a training module 575.
  • The face image sample collection module 571 is configured to collect a preset number of face image samples, each face image sample being labeled with a face shape label. The face key point extraction module 572 is configured to extract the face key points in each face image sample, wherein the face key points include: a nasal bone key point, mandible key points, and a chin key point.
  • The reference value determining module 573 is configured to determine a reference value of each face image sample based on its face key points.
  • Optionally, the face key points may further include: a zygomatic bone (cheekbone) key point.
  • Optionally, the feature vector may further include: a cheek width feature value, and/or a side face length feature value.
  • The cheek width feature value is constructed based on the nasal bone key point, the zygomatic bone key point, and the reference value; the side face length feature value is constructed based on the mandible key point, the zygomatic bone key point, and the reference value.
  • Optionally, the feature vector may further include the ratio of the side face length feature value to the face length feature value. It can be understood that, in practice, a larger number of face-shape-related feature values can be constructed to form the feature vector of the face image; they are not enumerated here.
  • With the face shape recognition apparatus provided by this embodiment, when the image acquisition unit 51 acquires a face image, the face key point extraction unit 52 extracts the face key points, and the reference value determining unit 53 determines the reference value of the face image based on them; the feature vector construction unit 54 then constructs the feature vector of the face image by combining the face key points with the reference value, wherein the face key points include: a nasal bone key point, mandible key points, and a chin key point, and the feature vector includes: a face length feature value, a mandible width feature value, and a chin angle feature value, the mandible width feature value being constructed based on the mandible key points and the reference value, and the chin angle feature value being constructed based on the mandible key points and the chin key point; the matching unit 55 then inputs the feature vector into the face shape feature model; finally, the output unit 56 acquires the face shape recognition result output by the face shape feature model. Face key points other than the face contour can thus be combined with multiple facial features to recognize the face shape.
  • FIG. 6 is a schematic structural diagram of an intelligent terminal according to an embodiment of the present application.
  • The intelligent terminal 600 can be any type of intelligent terminal, such as a mobile phone, a tablet computer, or a beauty analysis instrument, and can perform any of the face shape recognition methods provided in the embodiments of the present application.
  • The intelligent terminal 600 includes:
  • one or more processors 601 and a memory 602; one processor 601 is taken as an example in FIG. 6.
  • The processor 601 and the memory 602 may be connected by a bus or in another manner; connection by a bus is taken as an example in FIG. 6.
  • The memory 602, as a non-transitory computer-readable storage medium, can be used to store non-transitory software programs, non-transitory computer-executable programs, and modules, such as the program instructions/modules corresponding to the face shape recognition method in the embodiments of the present application, for example, the image acquisition unit 51, the face key point extraction unit 52, the reference value determining unit 53, the feature vector construction unit 54, the matching unit 55, the output unit 56, and the model acquisition unit 57 shown in FIG. 5.
  • The processor 601 performs the various functional applications and data processing of the face shape recognition apparatus by running the non-transitory software programs, instructions, and modules stored in the memory 602, that is, implements the face shape recognition method of any of the above method embodiments.
  • The memory 602 may include a program storage area and a data storage area, wherein the program storage area may store an operating system and an application required for at least one function, and the data storage area may store data created according to the use of the intelligent terminal 600, and the like.
  • The memory 602 may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid-state storage device.
  • Optionally, the memory 602 may include memory remotely located relative to the processor 601, which may be connected to the intelligent terminal 600 via a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
  • The one or more modules are stored in the memory 602 and, when executed by the one or more processors 601, perform the face shape recognition method of any of the above method embodiments, for example, performing method steps 110 to 160 in FIG. 1 and method steps 151 to 155 in FIG. 3 described above, and implementing the functions of units 51 to 57 in FIG. 5.
  • An embodiment of the present application further provides a storage medium storing executable instructions that are executed by one or more processors, for example, by the processor 601 in FIG. 6, to cause the one or more processors to perform the face shape recognition method of any of the above method embodiments, for example, performing method steps 110 to 160 in FIG. 1 and method steps 151 to 155 in FIG. 3 described above.
  • The device embodiments described above are merely illustrative; the units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units, that is, they may be located in one place or distributed across multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
  • The various embodiments can be implemented by means of software plus a general-purpose hardware platform, or, of course, by hardware alone.
  • A person skilled in the art can understand that all or part of the processes for implementing the above embodiments can be completed by a computer program instructing related hardware; the program can be stored in a non-transitory computer-readable storage medium and, when executed, may include the flows of the embodiments of the methods described above.
  • the storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), or a random access memory (RAM).
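As a minimal end-to-end sketch of the unit pipeline described above (image acquisition unit 51 through output unit 56): the key point names, the temple-distance reference value, and the three-point chin angle below are assumptions for illustration, and the key point localizer and trained face shape feature model are passed in as placeholders rather than the patent's actual components.

```python
import math

def distance(p, q):
    """Euclidean distance between two (x, y) key points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def chin_angle(jaw_left, chin, jaw_right):
    """Angle in degrees at the chin key point between two mandible key points."""
    a = (jaw_left[0] - chin[0], jaw_left[1] - chin[1])
    b = (jaw_right[0] - chin[0], jaw_right[1] - chin[1])
    cos_t = (a[0] * b[0] + a[1] * b[1]) / (math.hypot(*a) * math.hypot(*b))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_t))))

def recognize_face_shape(image, locate_key_points, face_shape_model):
    pts = locate_key_points(image)                           # unit 52
    ref = distance(pts["temple_left"], pts["temple_right"])  # unit 53
    feature_vector = [                                       # unit 54
        distance(pts["nasal_bone"], pts["chin"]) / ref,      # face length
        distance(pts["jaw_left"], pts["jaw_right"]) / ref,   # mandible width
        chin_angle(pts["jaw_left"], pts["chin"], pts["jaw_right"]),
    ]
    return face_shape_model(feature_vector)                  # units 55-56
```

A stub localizer and classifier can be plugged in for testing; in the patent's design, the matching unit 55 would feed the vector to the trained face shape feature model instead.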


Abstract

Provided are a human face shape recognition method, an intelligent terminal and a storage medium. The method comprises: acquiring a human face image; extracting a human face key point from the human face image; based on the human face key point, determining a reference value of the human face image; building a feature vector of the human face image by combining the human face key point with the reference value; inputting the feature vector into a face shape feature model; and acquiring a face shape recognition result output by the face shape feature model. By means of the technical solution, the embodiments of the present application can recognize a human face shape by combining human face key points other than a human face outline and by combining multiple human face features, thereby improving the reliability of a face shape recognition result.

Description

Face shape recognition method, apparatus and intelligent terminal

Technical field

The present application relates to the field of face recognition technologies, and in particular, to a face shape recognition method, apparatus, and intelligent terminal.

Background
Face recognition technology performs identity authentication by analyzing and comparing visual feature information of the face. Its research fields include: identity recognition, expression recognition, gender recognition, and beauty and skin care.

In recent years, with rising material living standards, demand for personal image design has grown rapidly. Providing a personal image design for a user usually requires first determining the user's face shape, and then selecting a suitable hairstyle, makeup, glasses, accessories, and so on according to that face shape. Based on this demand, the field of face recognition has proposed methods for detecting a person's face shape by recognizing the face in an image. For example, in the prior art, the face contour may first be located, and the face shape of the face under test may then be identified using the curvature features of the face contour.

However, in implementing the present application, the inventors found that the prior art has at least the following problem: recognizing a face shape based on the curvature features of the face contour requires high accuracy in the extracted face contour curve, but in practice the mandible contour curve is difficult to extract and the extracted face contour curve is not accurate, so a face shape recognition result obtained in this way has low reliability.
Summary of the invention

Embodiments of the present application provide a face shape recognition method, apparatus, and intelligent terminal, which can solve the problem that recognizing a face shape based on the curvature of the face contour yields recognition results of low reliability.
In a first aspect, an embodiment of the present application provides a face shape recognition method, including:

acquiring a face image;

extracting face key points in the face image, wherein the face key points include: a nasal bone key point, mandible key points, and a chin key point;

determining a reference value of the face image based on the face key points;

constructing a feature vector of the face image by combining the face key points with the reference value, wherein the feature vector includes: a face length feature value, a mandible width feature value, and a chin angle feature value; the face length feature value is constructed based on the nasal bone key point, the chin key point, and the reference value; the mandible width feature value is constructed based on the mandible key points and the reference value; and the chin angle feature value is constructed based on the mandible key points and the chin key point;

inputting the feature vector into a face shape feature model; and

acquiring a face shape recognition result output by the face shape feature model.
Optionally, the face key points further include two temporal fossa (temple) key points;

then, determining the reference value of the face image based on the face key points includes:

using the distance between the two temporal fossa key points as the reference value of the face image.
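As a small illustration (the coordinate values and the function name are assumptions, not from the patent), the reference value is simply the Euclidean distance between the two temple key points:

```python
import math

def reference_value(temple_left, temple_right):
    """Distance between the two temporal fossa (temple) key points, used as a
    scale reference when constructing the other feature values."""
    return math.hypot(temple_left[0] - temple_right[0],
                      temple_left[1] - temple_right[1])

print(reference_value((120.0, 200.0), (420.0, 200.0)))  # prints 300.0
```

Using this distance as a divisor presumably keeps the other distance features comparable across images of different resolutions and face sizes.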
Optionally, before the step of inputting the feature vector into the face shape feature model, the face shape recognition method further includes:

acquiring the face shape feature model.

Optionally, acquiring the face shape feature model includes:
collecting a preset number of face image samples, each face image sample being labeled with a face shape label;

extracting the face key points in each face image sample, wherein the face key points include: a nasal bone key point, mandible key points, and a chin key point;

determining a reference value of each face image sample based on its face key points;

constructing a feature vector of each face image sample by combining its face key points with its reference value, wherein the feature vector includes: a face length feature value, a mandible width feature value, and a chin angle feature value; the face length feature value is constructed based on the nasal bone key point, the chin key point, and the reference value; the mandible width feature value is constructed based on the mandible key points and the reference value; and the chin angle feature value is constructed based on the mandible key points and the chin key point; and

inputting the feature vector and face shape label of each face image sample into a support vector machine model, and training to obtain the face shape feature model.
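A minimal training sketch under stated assumptions: scikit-learn's `SVC` stands in for the support vector machine model, and the feature values and face shape labels below are invented toy data (the patent does not prescribe a library, kernel, or parameter settings):

```python
from sklearn.svm import SVC

# Toy samples: [face length, mandible width, chin angle] per face image sample,
# each paired with a face shape label (values are illustrative only).
features = [
    [1.30, 0.80, 120.0],
    [1.32, 0.78, 118.0],
    [1.05, 0.95, 150.0],
    [1.02, 0.97, 148.0],
]
labels = ["oval", "oval", "round", "round"]

face_shape_model = SVC(kernel="linear")  # the trained face shape feature model
face_shape_model.fit(features, labels)

print(face_shape_model.predict([[1.28, 0.82, 121.0]])[0])  # prints oval
```

In a real system the samples would come from the preset number of labeled face image samples described above, and the trained model would be handed to the matching unit for inference.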
Optionally, the face key points further include a zygomatic bone (cheekbone) key point, and the feature vector further includes a cheek width feature value, the cheek width feature value being constructed based on the nasal bone key point, the zygomatic bone key point, and the reference value.

Optionally, the feature vector further includes a side face length feature value, the side face length feature value being constructed based on the mandible key point, the zygomatic bone key point, and the reference value.

Optionally, the feature vector further includes the ratio of the side face length feature value to the face length feature value.
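A hedged sketch of adding these optional feature values: which key points feed each distance, and the helper and key point names, are assumptions for illustration; each distance is divided by the temple-distance reference value, in line with how the other features are described:

```python
import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def extended_feature_vector(pts, ref):
    """pts: dict of named key points; ref: temple-distance reference value."""
    face_length = dist(pts["nasal_bone"], pts["chin"]) / ref
    side_face_length = dist(pts["zygomatic"], pts["mandible"]) / ref
    return [
        face_length,
        dist(pts["jaw_left"], pts["jaw_right"]) / ref,    # mandible width
        dist(pts["nasal_bone"], pts["zygomatic"]) / ref,  # cheek width
        side_face_length,
        side_face_length / face_length,                   # optional ratio
    ]

pts = {
    "nasal_bone": (0.0, 0.0), "chin": (0.0, 10.0),
    "jaw_left": (-4.0, 6.0), "jaw_right": (4.0, 6.0),
    "zygomatic": (3.0, 4.0), "mandible": (3.0, 8.0),
}
print(extended_feature_vector(pts, 10.0))
```

As the passage notes, further face-shape-related feature values can be appended to the vector in the same way.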
In a second aspect, an embodiment of the present application provides a face shape recognition apparatus, including:

an image acquisition unit, configured to acquire a face image;

a face key point extraction unit, configured to extract face key points in the face image, wherein the face key points include: a nasal bone key point, mandible key points, and a chin key point;

a reference value determining unit, configured to determine a reference value of the face image based on the face key points;

a feature vector construction unit, configured to construct a feature vector of the face image by combining the face key points with the reference value, wherein the feature vector includes: a face length feature value, a mandible width feature value, and a chin angle feature value; the face length feature value is constructed based on the nasal bone key point, the chin key point, and the reference value; the mandible width feature value is constructed based on the mandible key points and the reference value; and the chin angle feature value is constructed based on the mandible key points and the chin key point;

a matching unit, configured to input the feature vector into a face shape feature model; and

an output unit, configured to acquire a face shape recognition result output by the face shape feature model.
Optionally, the face shape recognition apparatus further includes:

a model acquisition unit, configured to acquire the face shape feature model.

Optionally, the model acquisition unit includes:
a face image sample collection module, configured to collect a preset number of face image samples, each face image sample being labeled with a face shape label;

a face key point extraction module, configured to extract the face key points in each face image sample, wherein the face key points include: a nasal bone key point, mandible key points, and a chin key point;

a reference value determining module, configured to determine a reference value of each face image sample based on its face key points;

a feature vector construction module, configured to construct a feature vector of each face image sample by combining its face key points with its reference value, wherein the feature vector includes: a face length feature value, a mandible width feature value, and a chin angle feature value; the face length feature value is constructed based on the nasal bone key point, the chin key point, and the reference value; the mandible width feature value is constructed based on the mandible key points and the reference value; and the chin angle feature value is constructed based on the mandible key points and the chin key point; and

a training module, configured to input the feature vector and face shape label of each face image sample into a support vector machine model, and train to obtain the face shape feature model.
In a third aspect, an embodiment of the present application provides an intelligent terminal, including:

at least one processor; and

a memory communicatively connected to the at least one processor; wherein

the memory stores instructions executable by the at least one processor, the instructions being executed by the at least one processor to enable the at least one processor to perform the face shape recognition method described above.
In a fourth aspect, an embodiment of the present application provides a storage medium storing executable instructions which, when executed by an intelligent terminal, cause the intelligent terminal to perform the face shape recognition method described above.
In a fifth aspect, an embodiment of the present application further provides a program product, the program product including a program stored on a storage medium, the program including program instructions which, when executed by an intelligent terminal, cause the intelligent terminal to perform the face shape recognition method described above.
The beneficial effects of the embodiments of the present application are as follows. With the face shape recognition method, apparatus, and intelligent terminal provided by the embodiments of the present application, when a face image is acquired, the face key points in the face image are extracted; a reference value of the face image is determined based on the face key points; and a feature vector of the face image is constructed by combining the face key points with the reference value, wherein the face key points include: a nasal bone key point, mandible key points, and a chin key point, and the feature vector includes: a face length feature value, a mandible width feature value, and a chin angle feature value, the mandible width feature value being constructed based on the mandible key points and the reference value, and the chin angle feature value being constructed based on the mandible key points and the chin key point. The feature vector is then input into the face shape feature model, and finally the face shape recognition result output by the face shape feature model is obtained. Face key points other than the face contour, such as the nasal bone key points, can thus be combined with multiple facial features to recognize the face shape, improving the reliability of the face shape recognition result.
Brief description of the drawings

One or more embodiments are illustrated by the figures in the corresponding drawings; these illustrations do not limit the embodiments. Elements with the same reference numerals in the drawings denote similar elements, and unless otherwise stated, the figures are not drawn to scale.
FIG. 1 is a schematic flowchart of a face shape recognition method according to an embodiment of the present application;

FIG. 2 is a schematic diagram of an example of face key point localization according to an embodiment of the present application;

FIG. 3 is a schematic flowchart of a method for acquiring a face shape feature model according to an embodiment of the present application;

FIG. 4 is a schematic diagram of an example of face shape classification according to an embodiment of the present application;

FIG. 5 is a schematic structural diagram of a face shape recognition apparatus according to an embodiment of the present application;

FIG. 6 is a schematic diagram of the hardware structure of an intelligent terminal according to an embodiment of the present application.
Detailed description

To make the objectives, technical solutions, and advantages of the present application clearer, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein merely explain the present application and are not intended to limit it.
It should be noted that, where no conflict arises, the features in the embodiments of the present application may be combined with each other, all within the protection scope of the present application. In addition, although functional modules are divided in the device schematic and a logical order is shown in the flowchart, in some cases the steps shown or described may be performed with a module division different from that in the device, or in an order different from that in the flowchart.
Embodiments of the present application provide a face shape recognition method, apparatus, intelligent terminal, and storage medium. The face shape recognition method is a face shape recognition scheme based on a feature vector constructed from face key points and a face shape feature model trained with a machine learning algorithm. When a face image is acquired, the face key points in the face image are extracted; a reference value of the face image is determined based on the face key points; and a feature vector of the face image is constructed by combining the face key points with the reference value, wherein the face key points include: a nasal bone key point, mandible key points, and a chin key point, and the feature vector includes: a face length feature value, a mandible width feature value, and a chin angle feature value, the mandible width feature value being constructed based on the mandible key points and the reference value, and the chin angle feature value being constructed based on the mandible key points and the chin key point. The feature vector is then input into the face shape feature model, and finally the face shape recognition result output by the face shape feature model is obtained. Face key points other than the face contour, such as the nasal bone key points, can thus be combined with multiple facial features to recognize the face shape, improving the reliability of the face shape recognition result.
The face shape recognition method, intelligent terminal, and storage medium provided by the embodiments of the present application can be applied to any technical field related to face recognition, such as portrait nationality recognition, and are particularly suitable for beauty applications, personal image design, and the like. For example, a beauty application can be developed based on the inventive concept of the face shape recognition method provided by the embodiments of the present application; such an application can automatically recognize the face shape of a face image input by the user and then design a suitable hairstyle, makeup, eyeglass frame, jewelry, and so on for that face shape.
The face shape recognition method provided by the embodiments of the present application can be performed by any type of intelligent terminal with an image processing function. The intelligent terminal may include any suitable type of storage medium for storing data, such as a magnetic disk, a compact disc (CD-ROM), read-only memory, or random access memory. The intelligent terminal may also include one or more logic operation modules that perform any suitable type of function or operation, such as querying a database or image processing, in a single thread or in multiple parallel threads. The logic operation module may be any suitable type of electronic circuit or surface-mounted electronic device capable of performing logical operations, for example, a single-core processor, a multi-core processor, or a graphics processing unit (GPU). For example, the intelligent terminal may include, but is not limited to, a beauty analysis instrument, a personal computer, a tablet computer, a smartphone, a server, and the like.
Specifically, the embodiments of the present application are further described below with reference to the accompanying drawings.

FIG. 1 is a schematic flowchart of a face shape recognition method according to an embodiment of the present application. Referring to FIG. 1, the method includes, but is not limited to, the following steps:

Step 110: Acquire a face image.
In this embodiment, the "face image" refers to an image including the frontal face of the person under detection, from which all of that person's facial features can be obtained.

In this embodiment, the face image may be acquired by capturing a frontal face image of the person under detection in real time, or by directly retrieving an existing image including the person's frontal face from the intelligent terminal or from the cloud. Different acquisition manners may be chosen for different application scenarios or persons under detection. For example, suppose an intelligent terminal for recommending suitable eyeglass frames is installed in an eyewear store; in order to promptly recommend a suitable frame based on the user's face shape, the terminal may capture the user's frontal face image in real time through a camera. As another example, a user may want to design suitable makeup through a personal intelligent terminal such as a smartphone; since such a terminal usually stores personal face images, in this scenario the terminal may acquire the face image by directly retrieving an existing image including the person's frontal face locally or from the cloud. Of course, in practice, the manner of acquiring the face image is not limited to those described above, which the embodiments of the present application do not specifically limit.
步骤120:提取所述人脸图像中的人脸关键点。Step 120: Extract a face key point in the face image.
在本实施例中,所述人脸关键点是指分布于人脸中具有特质特征的区域(比如:人脸轮廓、眼睛、眉毛、鼻子、嘴巴等)上的特征点。不同的人脸具有不同的人脸关键点分布。其中,每一所述具有特质特征的区域中可以分布有多个人脸关键点。比如,在鼻子区域中,可以包括用于标识鼻梁的位置的鼻骨关键点、用于标识鼻翼的位置的鼻翼关键点等;又如,在人脸轮廓区域上,可以包括用于标识下颌骨的轮廓的下颌骨关键点、用于标识颧骨附近的脸部轮廓的颧骨关键点、用于标识下巴的位置的下巴关键点等等。此外,还需说明的是,在 本实施例中,上述鼻骨关键点、鼻翼关键点、下颌骨关键点、颧骨关键点以及下巴关键点等人脸关键点的数量均不限于1个。In this embodiment, the face key point refers to a feature point distributed on an area having a characteristic feature in a face (for example, a face contour, an eye, an eyebrow, a nose, a mouth, etc.). Different faces have different face key points. Wherein, each of the regions having the characteristic features may be distributed with a plurality of face key points. For example, in the nose region, a key point of the nose for identifying the position of the bridge of the nose, a key point of the nose for identifying the position of the nose, and the like may be included; for example, on the contour area of the face, may include a marker for identifying the mandible. The contoured mandible key points, the humeral key points used to identify the contour of the face near the tibia, the chin key points used to identify the position of the chin, and the like. In addition, it should be noted that In this embodiment, the number of key points of the nose, the key points of the nose, the key points of the mandible, the key points of the tibia, and the key points of the chin are not limited to one.
In this embodiment, face key point localization may be performed on the acquired face image by any suitable algorithm or method, and the required face key points are then extracted. In particular, in this embodiment, in order to accurately describe the face-shape features of the face image, the extracted face key points include, but are not limited to, nasal-bone key points, mandible key points, and chin key points.
For example, face key point localization may be performed on the acquired face image using the third-party toolkit dlib, yielding the key point distribution shown in FIG. 2 (where face key points 0-16 are the feature points identifying the face contour region of the face image, and face key points 27-30 are the feature points identifying its nasal-bone region). Preset face key points are then selected and their coordinates obtained; for example, the nasal-bone key points 27-30, the mandible key points 4-7 and 9-12 located on the two sides of the face, and the chin key point 8 may be selected. It should be understood that in this embodiment, for purposes of illustration, only the face key points of the face contour and nasal-bone regions are labeled; in practical applications, the face key points obtained through localization with the third-party toolkit dlib may number 56.
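As a minimal sketch of the dlib-based localization described above (the predictor file name and the use of the publicly distributed 68-landmark model are assumptions, not part of this application; `shape_to_points` is a dependency-free helper so the index selection can be tested with any landmark source):

```python
def shape_to_points(shape, indices):
    """Collect (x, y) coordinates for the given landmark indices from any
    dlib full_object_detection-like object exposing part(i).x / part(i).y."""
    return {i: (shape.part(i).x, shape.part(i).y) for i in indices}

def locate_landmarks(image_path, predictor_path="shape_predictor_68_face_landmarks.dat"):
    """Run dlib's frontal face detector and shape predictor on one image and
    return the contour (0-16) and nasal-bone (27-30) key points of FIG. 2."""
    import dlib  # imported lazily so the helper above stays dependency-free
    img = dlib.load_rgb_image(image_path)
    detector = dlib.get_frontal_face_detector()
    predictor = dlib.shape_predictor(predictor_path)
    faces = detector(img, 1)            # upsample once to catch small faces
    shape = predictor(img, faces[0])    # landmarks of the first detected face
    wanted = list(range(0, 17)) + list(range(27, 31))
    return shape_to_points(shape, wanted)
```

The same `shape_to_points` selection would be reused unchanged at training and recognition time, which is exactly the "uniform standard" requirement noted below.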
In addition, it should be understood that in practical applications, for the sake of a uniform standard, the same face key point localization method should be applied to different face images, and the same face key points should be extracted for subsequent analysis.
Step 130: Determine a reference value of the face image based on the face key points.
In practical applications, people stand at different distances from the camera when being photographed; consequently, some faces may occupy a large area of the captured image while others occupy a small area. Therefore, in order to make face images with different face areas comparable, in this embodiment a reference standard is set for the facial feature values of each face image; this reference standard is the aforementioned "reference value".
Specifically, in this embodiment, the distance between any two extracted face key points may serve as the reference value of the face image: for example, the distance between nasal-bone key point 27 and nasal-bone key point 30; or the distance between mandible key point 4 and mandible key point 12; or the distance between mandible key point 5 and chin key point 8. It should be noted that, for the sake of a uniform standard, once a particular way of constructing the reference value is selected, every face image constructs its reference value in that way. For example, suppose the distance between mandible key point 4 and mandible key point 12 is selected to construct the reference value. If face image A is acquired, mandible key points 4 and 12 of face image A are extracted, and the reference value of face image A is determined to be the distance between them; if face image B is acquired, mandible key points 4 and 12 of face image B are extracted, and the reference value of face image B is determined to be the distance between them.
In some embodiments, the extracted face key points further include two temporal-fossa key points located at the temples, namely face key points 1 and 15 shown in FIG. 2. In this case, determining the reference value of the face image based on the face key points may be implemented as follows: the distance between the two temporal-fossa key points is taken as the reference value of the face image. Since the distance between the two temporal-fossa key points is a prominent characteristic of the face shape, using it as the reference value means that the ratio of the distance between specific face key points (for example, two mandible key points on opposite sides of the face) to this reference distance better reflects the face-shape features of the face image.
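The reference value described above reduces to a single Euclidean distance. A minimal sketch, assuming key points are given as a mapping from landmark index to an (x, y) coordinate pair (the function names are illustrative):

```python
import math

def distance(p, q):
    """Euclidean distance between two (x, y) points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def reference_value(points):
    """Reference value of a face image: the distance between the two
    temporal-fossa key points (indices 1 and 15 in FIG. 2)."""
    return distance(points[1], points[15])
```

Because the reference value appears in the denominator of every length/width feature, two images of the same face taken at different camera distances yield (approximately) the same normalized features.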
Step 140: Construct a feature vector of the face image by combining the face key points and the reference value.
In this embodiment, the "feature vector" is a quantity characterizing the face shape of the acquired face image, and may be composed of a plurality of facial feature values of the face image. A "facial feature value" is a quantity reflecting a facial feature of the face image, and may include, but is not limited to, a face length feature value, a mandible width feature value, and a chin angle feature value.
Specifically, in this embodiment, a plurality of preset facial feature values may first be constructed based on the extracted face key points and the determined reference value: for example, a face length feature value of the face image constructed from the nasal-bone key points, the chin key point, and the reference value; a mandible width feature value constructed from the mandible key points and the reference value; and a chin angle feature value constructed from the mandible key points and the chin key point. The constructed facial feature values are then combined into the feature vector of the face image.
In practical applications, in order to improve the accuracy of face-shape recognition, more kinds of face-shape-related facial feature values may be constructed from a larger number of face key points, and the number of facial feature values of each type need not be limited to one. For example, in some embodiments the extracted face key points may further include cheekbone key points on the face contour near the cheekbones; in that case, a cheek width feature value of the face image may be constructed from the extracted nasal-bone key points, the cheekbone key points (and/or the mandible key points), and the determined reference value, and/or a side face length feature value may be constructed from the extracted mandible key points, the cheekbone key points, and the determined reference value. Further, in still other embodiments, the constructed feature vector may also include the ratio of any two facial feature values having length/width characteristics, for example, the ratio of the side face length feature value to the face length feature value, or the ratio of the side face length feature value to the mandible width feature value.
By way of example, suppose the obtained face key points are distributed as shown in FIG. 2, and the distance between the two temporal-fossa key points (face key points 1 and 15) serves as the reference value of the face image. The feature vector of the face image may then be constructed from the face key points and the reference value as follows:
First, the following 18 facial feature values are constructed from the face key points in FIG. 2 and the reference value D(1,15):
Feature1: D(8,27)/D(1,15);
Feature2: D(4,12)/D(1,15);
Feature3: D(5,11)/D(1,15);
Feature4: D(6,10)/D(1,15);
Feature5: D(9,10)/D(1,15);
Feature6: D(8,12)/D(1,15);
Feature7: D(12,14)/D(1,15);
Feature8: Feature3/Feature1;
Feature9: Feature7/Feature1;
Feature10: Feature7/Feature3;
Feature11: (D(27,3)+D(27,13))/2/D(1,15);
Feature12: (D(27,4)+D(27,12))/2/D(1,15);
Feature13: (D(27,5)+D(27,11))/2/D(1,15);
Feature14: (D(27,6)+D(27,10))/2/D(1,15);
Feature15: the cosine of the angle formed by L(8,5) and L(8,11);
Feature16: the cosine of the angle formed by L(8,6) and L(8,10);
Feature17: the cosine of the angle formed by L(8,7) and L(8,9);
Feature18: the cosine of the angle formed by L(11,10) and L(11,12);
where "/" denotes division, D(x,y) denotes the distance between face key point x and face key point y, and L(x,y) denotes the line segment between face key point x and face key point y.
Here, Feature1 is the face length feature value; Feature2-Feature4 are mandible width feature values; Feature5 and Feature6 are mandible length feature values; Feature7 is the side face length feature value; Feature8 is the ratio of the mandible width feature value to the face length feature value; Feature9 is the ratio of the side face length feature value to the face length feature value; Feature10 is the ratio of the side face length feature value to the mandible width feature value; Feature11-Feature14 are cheek width feature values; and Feature15-Feature18 are chin angle feature values.
These facial feature values are then combined into the feature vector of the face image, namely: Face = [Feature1, Feature2, Feature3, Feature4, Feature5, Feature6, Feature7, Feature8, Feature9, Feature10, Feature11, Feature12, Feature13, Feature14, Feature15, Feature16, Feature17, Feature18]. This feature vector is a numerical representation of the facial features of the face image.
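The 18 features above can be sketched directly from their definitions. A minimal Python sketch, assuming the key points are given as a mapping from landmark index to (x, y) coordinates (function names are illustrative, not from this application):

```python
import math

def D(p, x, y):
    """D(x,y): Euclidean distance between key points x and y."""
    return math.hypot(p[x][0] - p[y][0], p[x][1] - p[y][1])

def cos_angle(p, v, a, b):
    """Cosine of the angle at vertex v formed by segments L(v,a) and L(v,b)."""
    ax, ay = p[a][0] - p[v][0], p[a][1] - p[v][1]
    bx, by = p[b][0] - p[v][0], p[b][1] - p[v][1]
    return (ax * bx + ay * by) / (math.hypot(ax, ay) * math.hypot(bx, by))

def build_feature_vector(p):
    """Assemble the 18 facial feature values listed in the text."""
    ref = D(p, 1, 15)                       # reference value D(1,15)
    f = [D(p, 8, 27) / ref,                 # Feature1: face length
         D(p, 4, 12) / ref,                 # Feature2: mandible width
         D(p, 5, 11) / ref,                 # Feature3
         D(p, 6, 10) / ref,                 # Feature4
         D(p, 9, 10) / ref,                 # Feature5: mandible length
         D(p, 8, 12) / ref,                 # Feature6
         D(p, 12, 14) / ref]                # Feature7: side face length
    f += [f[2] / f[0], f[6] / f[0], f[6] / f[2]]        # Feature8-10: ratios
    f += [(D(p, 27, a) + D(p, 27, b)) / 2 / ref         # Feature11-14: cheek widths
          for a, b in ((3, 13), (4, 12), (5, 11), (6, 10))]
    f += [cos_angle(p, 8, 5, 11), cos_angle(p, 8, 6, 10),   # Feature15-18:
          cos_angle(p, 8, 7, 9), cos_angle(p, 11, 10, 12)]  # chin angles
    return f
```

Since every length feature is divided by D(1,15), the resulting vector is invariant to uniform scaling of the coordinates, which is the purpose of the reference value.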
Step 150: Input the feature vector into a face-shape feature model.
In this embodiment, the "face-shape feature model" is a face-shape classifier trained by a machine learning algorithm. After the feature vector of the face image is input into the face-shape feature model, the model operates on the input feature vector and outputs a face-shape recognition result corresponding to it. The face-shape feature model may be trained in advance and stored locally on the smart terminal, so that when face-shape recognition is performed, the model can be invoked directly from local storage.
In some embodiments, the face-shape feature model is not pre-stored locally on the smart terminal; in such embodiments, the face-shape feature model must first be obtained before step 150 is performed. The model may be obtained, for example, by downloading it from another device or from the cloud over a network connection, or by training it locally on the smart terminal from face image samples using a machine learning algorithm.
The face-shape feature model may be trained from face image samples using a machine learning algorithm as follows: first, a predetermined number of face image samples are collected; then the feature vector of each face image sample is extracted as a training sample; finally, any suitable machine learning algorithm, such as a neural network, a decision tree, or a support vector machine, is used to train on these training samples, so that the trained model is capable of face-shape classification.
Specifically, FIG. 3 is a schematic flowchart of a method for obtaining the face-shape feature model according to an embodiment of the present application, which may include, but is not limited to, the following steps:
Step 151: Collect a preset number of face image samples, each face image sample being annotated with a face-shape label.
In this embodiment, face shapes are first classified as needed; for example, face shapes may be divided into the five categories shown in FIG. 4: heart-shaped face, oval face, long face, round face, and square face. A predetermined number of face image samples is then collected for each face-shape category, and each face image sample is annotated with a face-shape label indicating the category to which it belongs. For example, 100 face image samples with heart-shaped faces are collected and labeled "1" (indicating a heart-shaped face); 100 samples with oval faces are collected and labeled "2" (indicating an oval face); 100 samples with long faces are collected and labeled "3" (indicating a long face); 100 samples with round faces are collected and labeled "4" (indicating a round face); and 100 samples with square faces are collected and labeled "5" (indicating a square face). In this way 500 face image samples are obtained, each annotated with a face-shape label indicating the category to which it belongs.
Step 152: Extract the face key points in each face image sample.
In this embodiment, the same face key point localization method (for example, the third-party toolkit dlib) is applied to all collected face image samples to locate face key points and extract the face key points at predetermined positions, including but not limited to nasal-bone key points, mandible key points, and chin key points. For a specific implementation of extracting the face key points of each face image sample, reference may be made to step 120 above; details are not repeated here.
It should be understood that, for the sake of a uniform standard, the same face key point localization method must also be used during face-shape recognition to locate the face key points of the acquired face image and to extract the same face key points.
Step 153: Determine a reference value of each face image sample based on its face key points.
In this embodiment, step 153 has the same technical features as step 130 above, and reference may likewise be made to step 130 for its specific implementation; details are therefore not repeated here.
It should be understood that, for the sake of a uniform standard, the reference value of each face image sample or face image must be determined in the same way both when training the face-shape feature model and when performing face-shape recognition, for example, by always taking the distance between the two temporal-fossa key points as the reference value.
Step 154: Construct a feature vector of each face image sample by combining its face key points and its reference value.
In this embodiment, step 154 has the same technical features as step 140 above, and reference may likewise be made to step 140 for its specific implementation; details are therefore not repeated here.
It should be understood that, for the sake of a uniform standard, the feature vector of each face image sample or face image must likewise be constructed in the same way both when training the face-shape feature model and when performing face-shape recognition.
Step 155: Input the feature vector and face-shape label of each face image sample into a support vector machine model, and train to obtain the face-shape feature model.
In this embodiment, a face-shape feature model capable of face-shape classification is trained with a support vector machine (SVM) model based on the face-shape labels and feature vectors of all face image samples. A "support vector machine model" is a supervised machine learning model commonly used for pattern recognition, classification, and regression analysis.
Specifically, in this embodiment, the feature vector and face-shape label of each face image sample are input into the support vector machine model, and the face-shape feature model is obtained by training. Concretely, the feature vector of each face image sample is input as the variable, and the face-shape label of that sample is input as the result; from a large number of training samples, a function describing face shape can be obtained, and this function corresponds to the trained face-shape feature model. Thus, when a variable input (the feature vector of a face image to be tested) is received, a result (namely, the face-shape recognition result of that face image) can be obtained through this function (the face-shape feature model).
In practical applications, the machine learning toolkit sklearn may be called and its svm class imported to create an empty model object; the training data (the feature vector of each face image sample and its face-shape label) are then fed to the model through model.fit; running model.fit trains the face-shape feature model.
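A minimal sketch of this training step with scikit-learn (the kernel choice and function name are assumptions; in practice the inputs would be the 18-dimensional feature vectors and labels 1-5 described above):

```python
from sklearn import svm

def train_face_shape_model(feature_vectors, labels):
    """Fit an SVM classifier on (feature vector, face-shape label) pairs,
    as in step 155: create the model object, then feed the training data
    to model.fit."""
    model = svm.SVC(kernel="linear")  # kernel choice is an assumption
    model.fit(feature_vectors, labels)
    return model
```

Once fitted, `model.predict([feature_vector])` returns the face-shape label for a new feature vector, which is the recognition result of step 160.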
Further, the parameters of the trained face-shape feature model may also be saved locally on the smart terminal, so that the model can be invoked directly for face-shape recognition later.
Step 160: Obtain the face-shape recognition result output by the face-shape feature model.
In this embodiment, the face-shape feature model computes on the input variable, namely the feature vector of the face image, and correspondingly produces a face-shape recognition result, which is the face shape of the face in the face image.
It can be seen from the above technical solutions that the embodiments of the present application are beneficial in the following respects. In the face-shape recognition method provided by the embodiments of the present application, when a face image is acquired, face key points are extracted from the face image; a reference value of the face image is determined based on the face key points, and a feature vector of the face image is constructed by combining the face key points and the reference value, wherein the face key points include nasal-bone key points, mandible key points, and chin key points, and the feature vector includes a face length feature value, a mandible width feature value, and a chin angle feature value, the mandible width feature value being constructed based on the mandible key points and the reference value, and the chin angle feature value being constructed based on the mandible key points and the chin key point; the feature vector is then input into a face-shape feature model; and finally, the face-shape recognition result output by the face-shape feature model is obtained. The method can thus identify the face shape by combining a variety of facial features with face key points beyond the face contour, such as the nasal-bone key points, thereby improving the reliability of the face-shape recognition result.
FIG. 5 is a schematic structural diagram of a face-shape recognition apparatus according to an embodiment of the present application. Referring to FIG. 5, the face-shape recognition apparatus 5 includes, but is not limited to:
an image acquisition unit 51, configured to acquire a face image;
a face key point extraction unit 52, configured to extract face key points from the face image, wherein the face key points include nasal-bone key points, mandible key points, and chin key points;
a reference value determination unit 53, configured to determine a reference value of the face image based on the face key points;
a feature vector construction unit 54, configured to construct a feature vector of the face image by combining the face key points and the reference value, wherein the feature vector includes a face length feature value, a mandible width feature value, and a chin angle feature value, the face length feature value being constructed based on the nasal-bone key points, the chin key point, and the reference value, the mandible width feature value being constructed based on the mandible key points and the reference value, and the chin angle feature value being constructed based on the mandible key points and the chin key point;
a matching unit 55, configured to input the feature vector into a face-shape feature model; and
an output unit 56, configured to obtain the face-shape recognition result output by the face-shape feature model.
In this embodiment of the present application, when the image acquisition unit 51 acquires a face image, the face key point extraction unit 52 extracts the face key points, based on which the reference value determination unit 53 determines the reference value of the face image; the feature vector construction unit 54 then constructs the feature vector of the face image by combining the face key points and the reference value; the matching unit 55 inputs the feature vector into the face-shape feature model; and finally, the output unit 56 obtains the face-shape recognition result output by the face-shape feature model.
In some embodiments, the extracted face key points further include two temporal-fossa key points; in this case, the reference value determination unit 53 is specifically configured to take the distance between the two temporal-fossa key points as the reference value of the face image.
In some embodiments, the face-shape recognition apparatus 5 further includes a model acquisition unit 57, configured to obtain the face-shape feature model.
Specifically, in some of these embodiments, the model acquisition unit 57 includes: a face image sample collection module 571, a face key point extraction module 572, a reference value determination module 573, a feature vector construction module 574, and a training module 575. The face image sample collection module 571 is configured to collect a preset number of face image samples, each face image sample being annotated with a face-shape label; the face key point extraction module 572 is configured to extract the face key points of each face image sample, wherein the face key points include nasal-bone key points, mandible key points, and chin key points; the reference value determination module 573 is configured to determine a reference value of each face image sample based on its face key points; the feature vector construction module 574 is configured to construct a feature vector of each face image sample by combining its face key points and its reference value, wherein the feature vector includes a face length feature value, a mandible width feature value, and a chin angle feature value, the face length feature value being constructed based on the nasal-bone key points, the chin key point, and the reference value, the mandible width feature value being constructed based on the mandible key points and the reference value, and the chin angle feature value being constructed based on the mandible key points and the chin key point; and the training module 575 is configured to input the feature vector and face-shape label of each face image sample into a support vector machine model and train to obtain the face-shape feature model.
In addition, in some embodiments, the face key points may further include cheekbone key points, and the feature vector may further include a cheek width feature value and/or a side face length feature value. The cheek width feature value is constructed based on the nasal-bone key points, the cheekbone key points, and the reference value; the side face length feature value is constructed based on the mandible key points, the cheekbone key points, and the reference value. Further, in still other embodiments, the feature vector may also include the ratio of the side face length feature value to the face length feature value. It should be understood that, in practical applications, a larger number of face-shape-related feature values may be constructed from a larger number of face key points to build the feature vector of the face image; these are not enumerated one by one here.
It should also be noted that, since the face-shape recognition apparatus is based on the same inventive concept as the face-shape recognition method in the above method embodiments, the corresponding content of the above method embodiments applies equally to this apparatus embodiment and is not detailed here.
It can be seen from the above technical solutions that the embodiments of the present application are beneficial in the following respects. In the face-shape recognition apparatus provided by the embodiments of the present application, when the image acquisition unit 51 acquires a face image, the face key point extraction unit 52 extracts the face key points, based on which the reference value of the face image is determined; the feature vector construction unit 54 then constructs the feature vector of the face image by combining the face key points and the reference value, wherein the face key points include nasal-bone key points, mandible key points, and chin key points, and the feature vector includes a face length feature value, a mandible width feature value, and a chin angle feature value, the mandible width feature value being constructed based on the mandible key points and the reference value, and the chin angle feature value being constructed based on the mandible key points and the chin key point; the matching unit 55 then inputs the feature vector into the face-shape feature model; and finally, the output unit 56 obtains the face-shape recognition result output by the face-shape feature model. The apparatus can thus identify the face shape by combining a variety of facial features with face key points beyond the face contour, such as the nasal-bone key points, thereby improving the reliability of the face-shape recognition result.
FIG. 6 is a schematic structural diagram of an intelligent terminal according to an embodiment of the present application. The intelligent terminal 600 may be any type of intelligent terminal, such as a mobile phone, a tablet computer or a beauty assessment instrument, and is capable of performing any of the face shape recognition methods provided in the embodiments of the present application.
Specifically, referring to FIG. 6, the intelligent terminal 600 includes:
one or more processors 601 and a memory 602; in FIG. 6, one processor 601 is taken as an example.
The processor 601 and the memory 602 may be connected by a bus or in other ways; in FIG. 6, a bus connection is taken as an example.
As a non-transitory computer-readable storage medium, the memory 602 can be used to store non-transitory software programs, non-transitory computer-executable programs and modules, such as the program instructions/modules corresponding to the face shape recognition method in the embodiments of the present application (for example, the image acquisition unit 51, face key point extraction unit 52, reference value determination unit 53, feature vector construction unit 54, matching unit 55, output unit 56 and model acquisition unit 57 shown in FIG. 5). By running the non-transitory software programs, instructions and modules stored in the memory 602, the processor 601 executes the various functional applications and data processing of the apparatus, i.e., implements the face shape recognition method of any of the above method embodiments.
The memory 602 may include a program storage area and a data storage area, where the program storage area may store an operating system and an application required by at least one function, and the data storage area may store data created according to the use of the intelligent terminal 600, and the like. In addition, the memory 602 may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid-state storage device. In some embodiments, the memory 602 may optionally include memories located remotely relative to the processor 601, and these remote memories may be connected to the intelligent terminal 600 through a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The one or more modules are stored in the memory 602 and, when executed by the one or more processors 601, perform the face shape recognition method in any of the above method embodiments, for example, performing method steps 110 to 160 in FIG. 1 and method steps 151 to 155 in FIG. 3 described above, and implementing the functions of units 51-57 in FIG. 5.
An embodiment of the present application further provides a storage medium storing executable instructions which, when executed by one or more processors (for example, by one processor 601 in FIG. 6), cause the one or more processors to perform the face shape recognition method in any of the above method embodiments, for example, performing method steps 110 to 160 in FIG. 1 and method steps 151 to 155 in FIG. 3 described above, and implementing the functions of units 51-57 in FIG. 5.
The apparatus embodiments described above are merely illustrative; the units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units, i.e., they may be located in one place or distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
From the description of the above embodiments, a person of ordinary skill in the art can clearly understand that each embodiment may be implemented by software plus a general hardware platform, and of course also by hardware. A person of ordinary skill in the art can understand that all or part of the processes in the methods of the above embodiments may be completed by a computer program instructing the relevant hardware; the program may be stored in a non-transitory computer-readable storage medium, and when executed, the program may include the processes of the embodiments of the above methods. The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), a random access memory (RAM), or the like.
The above products can perform the methods provided by the embodiments of the present application, and have the corresponding functional modules and beneficial effects for performing those methods. For technical details not described exhaustively in this embodiment, refer to the methods provided by the embodiments of the present application.
Finally, it should be noted that the above embodiments are only used to illustrate the technical solutions of the present application, not to limit them. Under the idea of the present application, the technical features of the above embodiments or of different embodiments may also be combined, the steps may be implemented in any order, and there are many other variations of the different aspects of the present application as described above which, for brevity, are not provided in detail. Although the present application has been described in detail with reference to the foregoing embodiments, a person of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some of the technical features may be equivalently replaced; and such modifications or replacements do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present application.

Claims (12)

  1. A face shape recognition method, characterized by comprising:
    acquiring a face image;
    extracting face key points from the face image, wherein the face key points include: a nasal bone key point, a mandible key point and a chin key point;
    determining a reference value of the face image based on the face key points;
    constructing a feature vector of the face image by combining the face key points and the reference value, wherein the feature vector includes: a face length feature value, a mandible width feature value and a chin angle feature value, the face length feature value being constructed based on the nasal bone key point, the chin key point and the reference value, the mandible width feature value being constructed based on the mandible key point and the reference value, and the chin angle feature value being constructed based on the mandible key point and the chin key point;
    inputting the feature vector into a face shape feature model; and
    obtaining a face shape recognition result output by the face shape feature model.
  2. The face shape recognition method according to claim 1, wherein the face key points further include two temporal fossa key points;
    and determining the reference value of the face image based on the face key points includes:
    taking the distance between the two temporal fossa key points as the reference value of the face image.
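Claim 2's reference value is a single Euclidean distance. A minimal sketch follows; the argument names are placeholders for whatever landmark detector supplies the two temporal fossa points:

```python
import math

def reference_value(left_temporal_fossa, right_temporal_fossa):
    """Distance between the two temporal fossa key points, used as the
    face image's reference value for normalizing other distances."""
    dx = right_temporal_fossa[0] - left_temporal_fossa[0]
    dy = right_temporal_fossa[1] - left_temporal_fossa[1]
    return math.hypot(dx, dy)
```

Normalizing every length feature by this per-image value makes the feature vector insensitive to how large the face appears in the frame.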
  3. The face shape recognition method according to claim 1, wherein before the step of inputting the feature vector into the face shape feature model, the face shape recognition method further includes:
    obtaining the face shape feature model.
  4. The face shape recognition method according to claim 2, wherein obtaining the face shape feature model includes:
    collecting a preset number of face image samples, each face image sample being annotated with a face shape label;
    extracting face key points from each face image sample, wherein the face key points include: a nasal bone key point, a mandible key point and a chin key point;
    determining a reference value of each face image sample based on the face key points;
    constructing a feature vector of each face image sample by combining the face key points and the reference value, wherein the feature vector includes: a face length feature value, a mandible width feature value and a chin angle feature value, the face length feature value being constructed based on the nasal bone key point, the chin key point and the reference value, the mandible width feature value based on the mandible key point and the reference value, and the chin angle feature value based on the mandible key point and the chin key point;
    inputting the feature vector and face shape label of each face image sample into a support vector machine model, and training it to obtain the face shape feature model.
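The training step above maps naturally onto an off-the-shelf support vector machine. The sketch below uses scikit-learn with synthetic feature vectors and made-up labels; the patent does not fix a kernel, a label set, or any data, so every concrete value here is an illustrative assumption:

```python
from sklearn.svm import SVC

# Hypothetical training set: one feature vector per labeled face image sample,
# ordered [face_length, mandible_width, chin_angle_degrees]. The values and
# the two face shape labels are invented for illustration.
X = [
    [1.40, 0.95, 100.0],
    [1.10, 1.10, 120.0],
    [1.45, 0.90, 98.0],
    [1.05, 1.15, 125.0],
]
y = ["oval", "round", "oval", "round"]

# Train the face shape feature model; the linear kernel is an assumption,
# the claim only names "a support vector machine model".
model = SVC(kernel="linear")
model.fit(X, y)

# Inference: a new feature vector in, a face shape label out.
result = model.predict([[1.42, 0.93, 99.0]])[0]
print(result)
```

In practice the preset number of samples would be far larger, and the trained model would be serialized and shipped as the face shape feature model that the matching unit queries.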
  5. The face shape recognition method according to any one of claims 1-4, wherein the face key points further include a zygomatic bone key point, and the feature vector further includes a cheek width feature value constructed based on the nasal bone key point, the zygomatic bone key point and the reference value.
  6. The face shape recognition method according to claim 5, wherein the feature vector further includes a side face length feature value constructed based on the mandible key point, the zygomatic bone key point and the reference value.
  7. The face shape recognition method according to claim 6, wherein the feature vector further includes the ratio of the side face length feature value to the face length feature value.
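Claims 5-7 extend the base vector with three more entries. A hedged sketch follows; the claims only name which key points each value is built from, so the doubled midline-to-cheekbone distance and the specific landmark pairing are guesses:

```python
import math

def extended_features(pts, ref, face_length):
    """Claim 5-7 additions: cheek width, side face length, and their ratio.

    pts maps hypothetical landmark names to (x, y); ref is the claim-2
    reference value; face_length is the base face length feature value.
    """
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    # Claim 5: cheek width from the nasal bone and a zygomatic key point;
    # doubling the midline-to-cheekbone distance is a guess at the formula.
    cheek_width = 2.0 * dist(pts["nasal_bone"], pts["zygomatic_l"]) / ref
    # Claim 6: side face length from zygomatic and mandible key points.
    side_face_length = dist(pts["zygomatic_l"], pts["mandible_l"]) / ref
    # Claim 7: ratio of side face length to face length.
    return [cheek_width, side_face_length, side_face_length / face_length]
```

Appending these to the base three-element vector gives the six-element input that claims 5-7 describe.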
  8. A face shape recognition apparatus, characterized by comprising:
    an image acquisition unit, configured to acquire a face image;
    a face key point extraction unit, configured to extract face key points from the face image, wherein the face key points include: a nasal bone key point, a mandible key point and a chin key point;
    a reference value determination unit, configured to determine a reference value of the face image based on the face key points;
    a feature vector construction unit, configured to construct a feature vector of the face image by combining the face key points and the reference value, wherein the feature vector includes: a face length feature value, a mandible width feature value and a chin angle feature value, the face length feature value being constructed based on the nasal bone key point, the chin key point and the reference value, the mandible width feature value based on the mandible key point and the reference value, and the chin angle feature value based on the mandible key point and the chin key point;
    a matching unit, configured to input the feature vector into a face shape feature model; and
    an output unit, configured to obtain a face shape recognition result output by the face shape feature model.
  9. The face shape recognition apparatus according to claim 8, further comprising:
    a model acquisition unit, configured to obtain the face shape feature model.
  10. The face shape recognition apparatus according to claim 9, wherein the model acquisition unit includes:
    a face image sample collection module, configured to collect a preset number of face image samples, each face image sample being annotated with a face shape label;
    a face key point extraction module, configured to extract face key points from each face image sample, wherein the face key points include: a nasal bone key point, a mandible key point and a chin key point;
    a reference value determination module, configured to determine a reference value of each face image sample based on the face key points;
    a feature vector construction module, configured to construct a feature vector of each face image sample by combining the face key points and the reference value, wherein the feature vector includes: a face length feature value, a mandible width feature value and a chin angle feature value, the face length feature value being constructed based on the nasal bone key point, the chin key point and the reference value, the mandible width feature value based on the mandible key point and the reference value, and the chin angle feature value based on the mandible key point and the chin key point; and
    a training module, configured to input the feature vector and face shape label of each face image sample into a support vector machine model, and train it to obtain the face shape feature model.
  11. An intelligent terminal, characterized by comprising:
    at least one processor; and
    a memory communicatively connected to the at least one processor; wherein
    the memory stores instructions executable by the at least one processor, the instructions being executed by the at least one processor to enable the at least one processor to perform the face shape recognition method according to any one of claims 1-7.
  12. A storage medium storing executable instructions which, when executed by an intelligent terminal, cause the intelligent terminal to perform the face shape recognition method according to any one of claims 1-7.
PCT/CN2017/110711 2017-11-13 2017-11-13 Human face shape recognition method and apparatus, and intelligent terminal WO2019090769A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2017/110711 WO2019090769A1 (en) 2017-11-13 2017-11-13 Human face shape recognition method and apparatus, and intelligent terminal
CN201780009011.8A CN108701216B (en) 2017-11-13 2017-11-13 Face recognition method and device and intelligent terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2017/110711 WO2019090769A1 (en) 2017-11-13 2017-11-13 Human face shape recognition method and apparatus, and intelligent terminal

Publications (1)

Publication Number Publication Date
WO2019090769A1 true WO2019090769A1 (en) 2019-05-16

Family

ID=63843832

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/110711 WO2019090769A1 (en) 2017-11-13 2017-11-13 Human face shape recognition method and apparatus, and intelligent terminal

Country Status (2)

Country Link
CN (1) CN108701216B (en)
WO (1) WO2019090769A1 (en)


Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109657655A (en) * 2019-01-30 2019-04-19 吴长汶 A kind of five-element's face classification method and storage equipment
CN110032959B (en) * 2019-03-29 2021-04-06 北京迈格威科技有限公司 Face shape judging method and device
CN110188590B (en) * 2019-04-09 2021-05-11 浙江工业大学 Face shape distinguishing method based on three-dimensional face model
CN111081375B (en) * 2019-12-27 2023-04-18 北京深测科技有限公司 Early warning method and system for health monitoring
CN113536844B (en) * 2020-04-16 2023-10-31 中移(成都)信息通信科技有限公司 Face comparison method, device, equipment and medium
CN111652131A (en) * 2020-06-02 2020-09-11 浙江大华技术股份有限公司 Face recognition device, light supplementing method thereof and readable storage medium
CN113591763B (en) * 2021-08-09 2024-05-28 平安科技(深圳)有限公司 Classification recognition method and device for face shapes, storage medium and computer equipment
CN114445298A (en) * 2022-01-28 2022-05-06 北京大甜绵白糖科技有限公司 Image processing method and device, electronic equipment and storage medium
CN115908260B (en) * 2022-10-20 2023-10-20 北京的卢铭视科技有限公司 Model training method, face image quality evaluation method, equipment and medium
CN117788720B (en) * 2024-02-26 2024-05-17 山东齐鲁壹点传媒有限公司 Method for generating user face model, storage medium and terminal

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101339612A (en) * 2008-08-19 2009-01-07 陈建峰 Face contour checking and classification method
CN102339376A (en) * 2010-07-14 2012-02-01 上海一格信息科技有限公司 Classifying and processing method based on active shape model and K nearest neighbor algorithm for facial forms of human faces
CN103632147A (en) * 2013-12-10 2014-03-12 公安部第三研究所 System and method for implementing standardized semantic description of facial features
KR101382172B1 (en) * 2013-03-12 2014-04-10 건아정보기술 주식회사 System for classifying hierarchical facial feature and method therefor
CN106971164A (en) * 2017-03-28 2017-07-21 北京小米移动软件有限公司 Shape of face matching process and device

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2009122760A1 (en) * 2008-04-04 2009-10-08 富士フイルム株式会社 Image processing device, image processing method, and computer-readable medium
CN106548156A (en) * 2016-10-27 2017-03-29 江西瓷肌电子商务有限公司 A kind of method for providing face-lifting suggestion according to facial image
CN106980840A (en) * 2017-03-31 2017-07-25 北京小米移动软件有限公司 Shape of face matching process, device and storage medium


Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110866970A (en) * 2019-10-21 2020-03-06 西南民族大学 System and method for realizing reconstruction lens matching through face key point identification
CN110717977A (en) * 2019-10-23 2020-01-21 网易(杭州)网络有限公司 Method and device for processing face of game character, computer equipment and storage medium
CN110717977B (en) * 2019-10-23 2023-09-26 网易(杭州)网络有限公司 Method, device, computer equipment and storage medium for processing game character face
CN110991294B (en) * 2019-11-26 2023-06-02 吉林大学 Face action unit recognition method and system capable of being quickly constructed
CN110991294A (en) * 2019-11-26 2020-04-10 吉林大学 Method and system for identifying rapidly-constructed human face action unit
CN111062995A (en) * 2019-11-28 2020-04-24 重庆中星微人工智能芯片技术有限公司 Method and device for generating face image, electronic equipment and computer readable medium
CN111062995B (en) * 2019-11-28 2024-02-23 重庆中星微人工智能芯片技术有限公司 Method, apparatus, electronic device and computer readable medium for generating face image
CN111241961A (en) * 2020-01-03 2020-06-05 精硕科技(北京)股份有限公司 Face detection method and device and electronic equipment
CN111241961B (en) * 2020-01-03 2023-12-08 北京秒针人工智能科技有限公司 Face detection method and device and electronic equipment
CN111813995A (en) * 2020-07-01 2020-10-23 盛视科技股份有限公司 Pedestrian article extraction behavior detection method and system based on space-time relationship
CN111814702A (en) * 2020-07-13 2020-10-23 安徽兰臣信息科技有限公司 Child face recognition method based on adult face and child photo feature space mapping relation
US11881050B2 (en) 2020-07-15 2024-01-23 Beijing Baidu Netcom Science And Technology Co., Ltd. Method for detecting face synthetic image, electronic device, and storage medium
CN111915479A (en) * 2020-07-15 2020-11-10 北京字节跳动网络技术有限公司 Image processing method and device, electronic equipment and computer readable storage medium
CN111862030B (en) * 2020-07-15 2024-02-09 北京百度网讯科技有限公司 Face synthetic image detection method and device, electronic equipment and storage medium
CN111862030A (en) * 2020-07-15 2020-10-30 北京百度网讯科技有限公司 Face synthetic image detection method and device, electronic equipment and storage medium
CN111915479B (en) * 2020-07-15 2024-04-26 抖音视界有限公司 Image processing method and device, electronic equipment and computer readable storage medium
CN112101127A (en) * 2020-08-21 2020-12-18 深圳数联天下智能科技有限公司 Face shape recognition method and device, computing equipment and computer storage medium
CN112101127B (en) * 2020-08-21 2024-04-30 深圳数联天下智能科技有限公司 Face shape recognition method and device, computing equipment and computer storage medium
CN112818772A (en) * 2021-01-19 2021-05-18 网易(杭州)网络有限公司 Facial parameter identification method and device, electronic equipment and storage medium
CN113674139A (en) * 2021-08-17 2021-11-19 北京京东尚科信息技术有限公司 Face image processing method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN108701216B (en) 2021-12-03
CN108701216A (en) 2018-10-23

Similar Documents

Publication Publication Date Title
WO2019090769A1 (en) Human face shape recognition method and apparatus, and intelligent terminal
CN109117808B (en) Face recognition method and device, electronic equipment and computer readable medium
TWI687879B (en) Server, client, user verification method and system
WO2017107957A1 (en) Human face image retrieval method and apparatus
WO2018028546A1 (en) Key point positioning method, terminal, and computer storage medium
WO2019100282A1 (en) Face skin color recognition method, device and intelligent terminal
CN108229376B (en) Method and device for detecting blinking
JP7454105B2 (en) Facial image quality evaluation method and device, computer equipment and computer program
CN106778453B (en) Method and device for detecting glasses wearing in face image
TW201137768A (en) Face recognition apparatus and methods
CN109376604B (en) Age identification method and device based on human body posture
CN112115866A (en) Face recognition method and device, electronic equipment and computer readable storage medium
CN108197592B (en) Information acquisition method and device
CN104143086A (en) Application technology of portrait comparison to mobile terminal operating system
CN105740808B (en) Face identification method and device
CN108197318A (en) Face identification method, device, robot and storage medium
JP6969878B2 (en) Discriminator learning device and discriminator learning method
CN112036284B (en) Image processing method, device, equipment and storage medium
CN110728242A (en) Image matching method and device based on portrait recognition, storage medium and application
US20160217565A1 (en) Health and Fitness Monitoring via Long-Term Temporal Analysis of Biometric Data
CN108197608A (en) Face identification method, device, robot and storage medium
CN110135391A (en) System is matched using the program and spectacle-frame of computer apolegamy spectacle-frame
CN115410240A (en) Intelligent face pockmark and color spot analysis method and device and storage medium
CN111814738A (en) Human face recognition method, human face recognition device, computer equipment and medium based on artificial intelligence
KR100862526B1 (en) Method and system constructing moving image database and face recognition method and system using the same

Legal Events

Code  Title/Description
121   Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 17931341; Country of ref document: EP; Kind code of ref document: A1)
NENP  Non-entry into the national phase (Ref country code: DE)
32PN  Ep: public notification in the ep bulletin as address of the addressee cannot be established (Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 21/09/2020))
122   Ep: pct application non-entry in european phase (Ref document number: 17931341; Country of ref document: EP; Kind code of ref document: A1)