CN111460910A - Face type classification method and device, terminal equipment and storage medium - Google Patents

Face type classification method and device, terminal equipment and storage medium

Info

Publication number
CN111460910A
CN111460910A
Authority
CN
China
Prior art keywords
point data
feature point
curve
facial
face
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010164459.0A
Other languages
Chinese (zh)
Inventor
王心君
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen New Mirror Media Network Co ltd
Original Assignee
Shenzhen New Mirror Media Network Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen New Mirror Media Network Co ltd filed Critical Shenzhen New Mirror Media Network Co ltd
Priority to CN202010164459.0A
Publication of CN111460910A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application is applicable to the technical field of computers and provides a face type classification method, which comprises the following steps: acquiring facial feature point data of a target user, wherein the facial feature point data is used for indicating position information of each facial feature point of the target user; generating a characteristic curve according to the facial feature point data, wherein the characteristic curve is formed by connecting facial feature points corresponding to the facial feature point data; carrying out similarity matching on the characteristic curve and a preset curve corresponding to each human face shape respectively; and taking the face type corresponding to the preset curve with the highest similarity with the characteristic curve as a face type classification result of the target user. The curve curvature corresponding to each local facial feature therefore does not need to be calculated in a complex manner; only the overall similarity between the characteristic curve and the preset curve needs to be calculated, which reduces the data calculation amount in the face type classification process.

Description

Face type classification method and device, terminal equipment and storage medium
Technical Field
The application belongs to the technical field of computers, and particularly relates to a face type classification method and device, a terminal device and a storage medium.
Background
Face shape classification is an important component of face image analysis and is widely applicable to fields such as cosmetics, hairdressing, eyeglass fitting and plastic surgery. At present, face shapes are generally classified in one of two ways, both of which involve a large amount of data calculation. The first classifies the face shape with a machine learning model, which requires a large amount of face shape image data as training samples; the second computes the curvature of each local feature (such as the chin and cheeks) from the face shape curve and then combines all the local features to obtain the classification result, which requires a large amount of statistical calculation and affects classification efficiency.
Disclosure of Invention
Embodiments of the present application provide a face type classification method and device, which can solve the problem that existing face shape classification methods require a large amount of data calculation.
In a first aspect, an embodiment of the present application provides a method for classifying a facial form of a human face, including:
acquiring facial feature point data of a target user, wherein the facial feature point data is used for indicating position information of each facial feature point of the target user;
generating a characteristic curve according to the facial feature point data, wherein the characteristic curve is formed by connecting facial feature points corresponding to the facial feature point data;
carrying out similarity matching on the characteristic curve and a preset curve corresponding to each human face shape respectively;
and taking the face type corresponding to the preset curve with the highest similarity with the characteristic curve as a face type classification result of the target user.
By generating the characteristic curve from the facial feature point data and matching it, as a whole, against the preset curve corresponding to each face shape, there is no need to calculate the curve curvature corresponding to each local facial feature in a complex manner; only the overall similarity between the characteristic curve and the preset curves needs to be calculated, which reduces the data calculation amount in the face type classification process. In addition, the types of standard face shapes and preset curves can be expanded according to actual requirements, so that more face types can be recognized.
In a second aspect, an embodiment of the present application provides a face classification device, including:
the face feature point acquisition module is used for acquiring face feature point data of a target user, and the face feature point data is used for indicating position information of each face feature point of the target user;
the generating module is used for generating a characteristic curve according to the facial feature point data, and the characteristic curve is formed by connecting facial feature points corresponding to the facial feature point data;
the matching module is used for matching the similarity of the characteristic curve and a preset curve corresponding to each face type;
and the classification module is used for taking the face corresponding to the preset curve with the highest similarity with the characteristic curve as a face classification result of the target user.
In a third aspect, an embodiment of the present application provides a terminal device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the above-mentioned face classification method when executing the computer program.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium, where a computer program is stored, and the computer program is executed by a processor to implement the above-mentioned method for classifying a facial form of a human face.
In a fifth aspect, the present application provides a computer program product which, when run on a terminal device, causes the terminal device to execute the method for classifying a facial form of a human face according to any one of the first aspect.
It is understood that the beneficial effects of the second aspect to the fifth aspect can be referred to the related description of the first aspect, and are not described herein again.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the embodiments or the prior art descriptions will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without inventive exercise.
FIG. 1 is a schematic diagram of a spline curve provided in an embodiment of the present application;
FIG. 2 is a schematic diagram of a characteristic curve and a predetermined curve provided by an embodiment of the present application;
fig. 3 is a flowchart illustrating a method for classifying facial shapes according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a local feature point connection according to an embodiment of the present application;
fig. 5 is a schematic diagram of a connection line of facial contour feature points according to an embodiment of the present application;
FIG. 6 is a schematic diagram of a planar coordinate system provided by an embodiment of the present application;
fig. 7 is a flowchart illustrating a method for classifying facial shapes according to another embodiment of the present application;
fig. 8 is an exemplary diagram of a shortest path of a distance matrix provided by an embodiment of the present application;
fig. 9 is a schematic structural diagram of a face classification device according to an embodiment of the present application;
fig. 10 is a schematic structural diagram of a terminal device according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon", "in response to determining" or "in response to detecting". Similarly, the phrase "if it is determined" or "if a [described condition or event] is detected" may be interpreted contextually to mean "upon determining", "in response to determining", "upon detecting [the described condition or event]" or "in response to detecting [the described condition or event]".
Furthermore, in the description of the present application and the appended claims, the terms "first," "second," "third," and the like are used for distinguishing between descriptions and not necessarily for describing or implying relative importance.
Reference throughout this specification to "one embodiment" or "some embodiments," or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," or the like, in various places throughout this specification are not necessarily all referring to the same embodiment, but rather "one or more but not all embodiments" unless specifically stated otherwise. The terms "comprising," "including," "having," and variations thereof mean "including, but not limited to," unless expressly specified otherwise.
As described in the related art, classifying face shapes with a machine learning model requires a large amount of training data, which consumes considerable training time and computing memory; without sufficient training samples, the classification accuracy of the model is low. Classifying the face shape from the curvature of the face shape curve requires first calculating the curvature of each local facial feature at its position on the curve and then combining all the curvatures to obtain a classification result, which also involves a very large amount of data calculation.
Therefore, the embodiments of the present application provide a face type classification method that only calculates the overall similarity between a characteristic curve and a preset curve, without calculating the curve curvature corresponding to each local facial feature, thereby reducing the amount of data calculation in the face shape classification process.
In the embodiments of the present application, the method may involve the Catmull-Rom Spline interpolation method, a time warping distance algorithm (CTW), a dynamic time warping algorithm (DTW), and the like. These algorithms are described below for clarity in understanding the implementation of the present application.
The Catmull-Rom Spline interpolation method is an interpolation algorithm for a spline curve over control points; the curve passes through all control points from the second to the penultimate, so at least four control points are required for interpolation. Taking the spline curve shown in Fig. 1 as an example, the spline curve has four control points P₁, P₂, P₃ and P₄, and an interpolation point Pₓ between P₂ and P₃ can be calculated with the following formula, where P₁ and P₄ act as auxiliary points that influence the position of the interpolation point between P₂ and P₃ so that the spline curve is smoother:
Pₓ = P₁(−0.5t³ + t² − 0.5t) + P₂(1.5t³ − 2.5t² + 1) + P₃(−1.5t³ + 2t² + 0.5t) + P₄(0.5t³ − 0.5t²),
where t ranges over [0, 1]; different values of t give different interpolation point data between P₂ and P₃.
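For illustration, the following is a minimal Python sketch of this interpolation step; the function name and the use of NumPy are assumptions made for the example and are not part of the application itself. It evaluates the interpolation point Pₓ between P₂ and P₃ for a given t using the coefficients above.

import numpy as np

def catmull_rom_point(p1, p2, p3, p4, t):
    """Interpolation point between p2 and p3 on a Catmull-Rom spline; p1 and p4 are auxiliary points, t in [0, 1]."""
    p1, p2, p3, p4 = (np.asarray(p, dtype=float) for p in (p1, p2, p3, p4))
    return (p1 * (-0.5 * t**3 + t**2 - 0.5 * t)
            + p2 * (1.5 * t**3 - 2.5 * t**2 + 1.0)
            + p3 * (-1.5 * t**3 + 2.0 * t**2 + 0.5 * t)
            + p4 * (0.5 * t**3 - 0.5 * t**2))

# t = 0 returns P2 and t = 1 returns P3, so the spline passes through the inner control points.
print(catmull_rom_point((0, 0), (1, 1), (2, 1), (3, 0), 0.5))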
The time warping distance algorithm is an algorithm in which the distance, in the time warping sense, between a point set and the point set obtained by translating, scaling and rotating it is 0; that is, the algorithm has translation invariance, scale invariance and rotation invariance.
The dynamic time warping algorithm measures the similarity of two curves based on the points on the curves. The distance between each pair of points on the two curves is calculated, all the distances are assembled into a distance matrix, and the sum of the minimum elements along a path through the matrix is then sought. As shown in Fig. 2, two normalized curves are given: curve A has four points A1, A2, A3 and A4, and curve B has four points B1, B2, B3 and B4. The distances between A1, A2, A3, A4 and all the points on curve B are calculated from the point coordinates on curves A and B, giving 16 distance values, which form the elements of the distance matrix. The sum of the minimum elements of the distance matrix from the top-left corner to the bottom-right corner is used as the basis for determining the similarity: the smaller the sum of the minimum elements, the higher the similarity; if the sum of the minimum elements is 0, the similarity is 100%.
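The search for the sum of minimum elements can be implemented as a simple dynamic program. The sketch below assumes the standard step pattern of dynamic time warping (each step moves one cell right, down, or diagonally, from the top-left corner to the bottom-right corner); the function name is an assumption for the example.

import numpy as np

def min_path_sum(dist):
    """Smallest possible sum of elements along a monotone path from dist[0][0] to dist[-1][-1]."""
    dist = np.asarray(dist, dtype=float)
    n, m = dist.shape
    acc = np.full((n, m), np.inf)
    acc[0, 0] = dist[0, 0]
    for i in range(n):
        for j in range(m):
            if i == 0 and j == 0:
                continue
            prev = min(acc[i - 1, j] if i > 0 else np.inf,                    # step down
                       acc[i, j - 1] if j > 0 else np.inf,                    # step right
                       acc[i - 1, j - 1] if i > 0 and j > 0 else np.inf)      # diagonal step
            acc[i, j] = dist[i, j] + prev
    return acc[-1, -1]

# Two identical curves give a distance matrix whose diagonal is all zeros, so the sum is 0 (similarity 100%).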
Fig. 3 shows a schematic flowchart of a face type classification method provided in the present application. By way of example and not limitation, the method may be applied to a terminal device, which includes but is not limited to a mobile phone, a tablet computer, a wearable device, an Augmented Reality (AR)/Virtual Reality (VR) device, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a Personal Digital Assistant (PDA), and the like.
S301, obtaining face feature point data of the target user, wherein the face feature point data is used for indicating position information of each face feature point of the target user;
the face feature point data is two-dimensional face feature point data, which may be represented by coordinates of each face feature point of the target user in a predetermined coordinate system, and may include local face feature points such as a nose, eyes, a mouth, eyebrows, ears, a face contour, and may also include only the face contour feature points of the target user. Optionally, the face feature point may be obtained according to a two-dimensional human head image of the target user, or may be obtained by mapping the three-dimensional feature point to a two-dimensional plane after obtaining the three-dimensional feature point according to the three-dimensional human head image of the target user.
In a possible implementation manner, a camera may be disposed on the terminal device, a human head image of the target user is obtained through the camera, and the facial feature point data of the human head image is extracted by a processor of the terminal device. In another possible implementation manner, the terminal device is in communication connection with an external camera component, and the terminal device may acquire face feature point data obtained by performing feature extraction on a head image of a target user acquired by the camera component, or may acquire the head image of the target user acquired by the camera component, and further perform feature extraction on the head image.
S302, generating a characteristic curve according to the facial feature point data, wherein the characteristic curve is formed by connecting facial feature points corresponding to the facial feature point data;
the characteristic curve may be formed by connecting all the partial face feature points as shown in fig. 4, or may be formed by connecting all the contour feature points as shown in fig. 5. The facial form characteristics of the target user can be represented more accurately by connecting all local characteristic points to form a characteristic curve, the calculated amount of the fitting process of the characteristic curve can be reduced by connecting facial contour characteristic points to form the characteristic curve, the facial contour characteristic points serve as main characteristics of the facial form of the face, and the facial form characteristics of the target user can be represented stably.
Specifically, Fig. 6 shows a schematic diagram of a plane coordinate system. The coordinate system is established by taking an axis parallel to the line connecting the left and right pupils of the target user as the X axis, the midpoint of that line as the coordinate origin, and an axis parallel to the line connecting this midpoint with the philtrum ("human middle") point of the target user as the Y axis. The facial feature points are mapped into this plane coordinate system to obtain the coordinates of each facial feature point, the facial feature points are connected to form a facial feature point connection line, and the characteristic curve of the connection line is generated from the coordinates of the facial feature points.
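As an illustration of this step, the sketch below expresses 2D facial feature points in the pupil-based plane coordinate system described above; the landmark arguments (left pupil, right pupil, philtrum point) are assumed to be available from the feature point data, and the function name is chosen only for the example.

import numpy as np

def to_face_plane_coordinates(points, left_pupil, right_pupil, philtrum):
    """Map 2D feature points into the coordinate system whose origin is the pupil midpoint,
    whose X axis is parallel to the pupil line and whose Y axis points towards the philtrum."""
    points = np.asarray(points, dtype=float)
    left_pupil, right_pupil, philtrum = (np.asarray(p, dtype=float) for p in (left_pupil, right_pupil, philtrum))
    origin = (left_pupil + right_pupil) / 2.0
    x_axis = right_pupil - left_pupil
    x_axis /= np.linalg.norm(x_axis)
    y_axis = philtrum - origin
    y_axis -= x_axis * np.dot(y_axis, x_axis)   # keep the Y axis orthogonal to the X axis
    y_axis /= np.linalg.norm(y_axis)
    shifted = points - origin
    return np.stack([shifted @ x_axis, shifted @ y_axis], axis=1)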
It should be understood that the above-mentioned process of establishing a planar coordinate system is only an example, and in other embodiments, the coordinate system may be established with other connecting lines or origins.
Alternatively, to make the face feature points connecting lines smoother, interpolation points may be inserted between the face feature points by a Catmull-Rom Spline interpolation method.
S303, carrying out similarity matching on the characteristic curve and a preset curve corresponding to each face type;
the facial form may include a square facial form, a rectangular facial form, a rhombus facial form, a triangle facial form, a round facial form, an oblong facial form, etc., and it is to be noted that the facial form may be distinguished according to a morphology method, a font method, etc. In one embodiment, the curve of each facial form is fitted in advance, and the curve of each facial form is stored as a preset curve in a memory of the terminal device or a database in communication with the terminal device, so that the preset curve can be called in real time when the terminal device classifies the facial form of the target user.
It should be understood that the preset curves may be expanded according to the actual requirements of the application scenario; for example, in addition to the face shapes listed above, preset curves for face shapes such as an almond-shaped face and an oval face may also be included, which is not repeated herein.
The similarity matching between the characteristic curve and the preset curves may specifically be performed with a curve similarity algorithm, which may include, but is not limited to, the Fréchet distance, the Hausdorff distance, the One Way Distance, the LIP distance (Locality In-between Polylines), LCSS (Longest Common Sub-Sequence), DTW (Dynamic Time Warping), EDR (Edit Distance on Real sequences), and the like.
Performing similarity matching between the characteristic curve as a whole and the preset curve corresponding to each face shape avoids the complex calculation of the curve curvature of each local facial feature; only the overall similarity between the characteristic curve and the preset curves needs to be calculated, which reduces the amount of data calculation in the face shape classification process. Moreover, the set of standard face shapes and preset curves can be expanded according to actual requirements, so that more face shapes can be recognized.
And S304, taking the face corresponding to the preset curve with the highest similarity with the characteristic curve as a face classification result of the target user.
The similarity between the characteristic curve and the preset curve of each face shape is calculated with the curve similarity algorithm described above, all the similarities are compared, and the face shape of the preset curve corresponding to the maximum similarity is taken as the face shape classification result.
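Taken together, S303 and S304 reduce to a single loop over the preset curves. In the sketch below, curve_similarity stands for whichever curve similarity algorithm is chosen, and preset_curves is a dictionary prepared in advance from the fitted face shape curves; both names are assumptions for the example.

def classify_face_shape(feature_curve, preset_curves, curve_similarity):
    """Return the face shape whose preset curve is most similar to the characteristic curve.

    preset_curves: dict mapping a face shape label to the points of its preset curve.
    curve_similarity: callable returning a similarity score, higher meaning more similar.
    """
    best_label, best_score = None, float("-inf")
    for label, preset in preset_curves.items():
        score = curve_similarity(feature_curve, preset)
        if score > best_score:
            best_label, best_score = label, score
    return best_label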
Fig. 7 is a schematic flow chart illustrating another method for classifying a face according to an embodiment of the present application, and it should be understood that steps similar to those in fig. 3 are not repeated herein.
Referring to fig. 7, in a possible implementation manner, the above S301 includes S701 and S702:
s701, acquiring a three-dimensional human head image of a target user, and extracting three-dimensional face feature point data of the three-dimensional human head image;
the three-dimensional head image includes depth information and RGB information of the head of the target user, and the three-dimensional face feature point data includes, but is not limited to, local three-dimensional feature point data of mouth, eyes, nose, eyebrows, ears, and face contour.
In an embodiment, a three-dimensional human head image of a target user can be acquired through a 3D camera based on a 3D structured light technology, and three-dimensional face feature points in the three-dimensional human head image are extracted according to a human face feature extraction algorithm. It should be understood that the 3D camera may also be a camera based on optical time of flight (TOF), Binocular Stereo Vision (Binocular Stereo Vision), etc. techniques.
And S702, projecting the three-dimensional face feature point data towards a preset direction for two-dimensionalization to obtain the face feature point data of the target user.
The projection bidimensionalization is a process of mapping the three-dimensional characteristic points to a two-dimensional plane so as to obtain two-dimensional characteristic points corresponding to the three-dimensional characteristic points on the two-dimensional plane. The predetermined direction is a direction perpendicular to a predetermined plane, such as a direction perpendicular to the XY plane.
Specifically, a Cartesian rectangular coordinate system is established with the center of the three-dimensional head as the origin: the X axis is parallel to the line connecting the left and right pupils of the target user, the Y axis is parallel to the line connecting the midpoint of the pupil line with the philtrum point of the target user, and the Z axis is perpendicular to the XY plane formed by the X and Y axes. After the three-dimensional facial feature point data is projected onto the XY plane along the direction perpendicular to that plane, a three-dimensional facial feature point whose coordinates in the Cartesian coordinate system are (x, y, z) becomes (x, y, 0); that is, the two-dimensional facial feature point data is (x, y).
In this embodiment, the two-dimensional facial feature point data is obtained from a three-dimensional head image, so a stable, standard frontal face is obtained, the accuracy of the facial feature point data is improved, and deviations between the measured face shape and the actual face shape caused by head rotation, such as lowering, raising or tilting the head, are avoided.
Referring to Fig. 7, in a possible implementation, the method further includes S703 before the foregoing S302:
and S703, translating, scaling and/or rotating the face feature point data to enable the left cheek feature point in the face feature point data to coincide with a first preset reference point and enable the right cheek feature point in the face feature point data to coincide with a second preset reference point.
The first preset reference point and the second preset reference point are standardized reference points: the first preset reference point serves as the starting point of the curve and the second preset reference point serves as the end point of the curve, so that the characteristic curve and the preset curve can be compared with the same starting point and the same end point. To place the characteristic curve and the preset curve at the same starting and end points while keeping their shapes unchanged, this embodiment translates, scales and/or rotates the facial feature points on the characteristic curve and on the preset curve using the time warping distance algorithm, so that the characteristic curve and the preset curve satisfy translation invariance, scale invariance and rotation invariance.
Specifically, taking the characteristic curve as an example, all the facial feature points on the characteristic curve are treated as a point cloud, which contains the left cheek feature point located on the left cheek and the right cheek feature point located on the right cheek. The first preset reference point serves as the reference point for the left cheek feature point and the second preset reference point as the reference point for the right cheek feature point. The point cloud is multiplied by a rotation transformation matrix so that the point cloud as a whole is translated, scaled and/or rotated, making the left cheek feature point on the characteristic curve coincide with the first preset reference point and the right cheek feature point coincide with the second preset reference point.
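A minimal sketch of this normalisation step follows. It computes one similarity transform (uniform scaling, rotation and translation) that maps the left cheek feature point onto the first preset reference point and the right cheek feature point onto the second, and applies it to the whole point cloud; the reference point values shown in the comment are placeholders, not values from the application.

import numpy as np

def align_to_reference(points, left_cheek, right_cheek, ref_left, ref_right):
    """Translate, scale and rotate a 2D point cloud so the cheek feature points hit the reference points."""
    def as_complex(p):
        p = np.asarray(p, dtype=float)
        return p[..., 0] + 1j * p[..., 1]

    z = as_complex(points)
    s1, s2 = as_complex(left_cheek), as_complex(right_cheek)
    t1, t2 = as_complex(ref_left), as_complex(ref_right)
    a = (t2 - t1) / (s2 - s1)   # rotation combined with uniform scaling
    b = t1 - a * s1             # translation
    w = a * z + b
    return np.stack([w.real, w.imag], axis=-1)

# e.g. align_to_reference(curve_points, curve_points[0], curve_points[-1], (-1.0, 0.0), (1.0, 0.0))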
Referring to Fig. 7, in a possible implementation, the foregoing S302 specifically includes S704 and S705:
s704, based on a preset interpolation method, inserting a plurality of interpolation point data between every two facial feature point data;
the preset interpolation method can be L agarge interpolation method, Akima interpolation method, Newton interpolation method, Catmull-Rom Spline interpolation method and the like, and the density between discrete feature point data is increased by inserting a plurality of interpolation point data into two face feature point data, so that the connection between the face feature points is smoother, the contour of each local face is further met, and the face classification precision is improved.
Optionally, a plurality of interpolation point data are inserted between every two facial feature points with the Catmull-Rom Spline interpolation method. This method guarantees that every interpolation point lies on the facial feature point connection line; it needs at least four facial feature points to determine the interpolation point data between the middle two facial feature points, and the two facial feature points at the edges influence the positions of the interpolation points, so that the connection line formed by the facial feature points and the interpolation points is smoother.
Specifically, for two adjacent facial feature point data Pᵢ and Pᵢ₊₁, taking four facial feature points as an example, the interpolation point data Pₓ between Pᵢ and Pᵢ₊₁ is obtained according to a preset formula, where the preset formula is:
Pₓ = Pᵢ₋₁(−0.5t³ + t² − 0.5t) + Pᵢ(1.5t³ − 2.5t² + 1) + Pᵢ₊₁(−1.5t³ + 2t² + 0.5t) + Pᵢ₊₂(0.5t³ − 0.5t²),
where i = 1, 2, …, N − 2, N is the total number of facial feature point data, Pᵢ₋₁ is the facial feature point data adjacent to Pᵢ, Pᵢ₊₂ is the facial feature point data adjacent to Pᵢ₊₁, and t takes values in the range [0, 1]. When t takes a plurality of values, a plurality of interpolation point data between Pᵢ and Pᵢ₊₁ are obtained.
Using only the four facial feature points Pᵢ₋₁, Pᵢ, Pᵢ₊₁ and Pᵢ₊₂ to calculate the interpolation point data between Pᵢ and Pᵢ₊₁ reduces the amount of data calculation in the interpolation process while ensuring the stability of the interpolation point data.
In addition, when i = 1, only the three facial feature points P₁, P₂ and P₃ are available. The feature point preceding P₁ can then be simulated from the values of P₁ and P₂ as 2P₁ − P₂, and the interpolation point data between P₁ and P₂ is obtained according to the following formula:
Pₓ = (2P₁ − P₂)(−0.5t³ + t² − 0.5t) + P₁(1.5t³ − 2.5t² + 1) + P₂(−1.5t³ + 2t² + 0.5t) + P₃(0.5t³ − 0.5t²).
When i = N − 2, only the three facial feature points Pₙ₋₂, Pₙ₋₁ and Pₙ are available. The feature point following Pₙ can then be simulated from Pₙ₋₁ and Pₙ as 2Pₙ − Pₙ₋₁, and the interpolation point data between Pₙ₋₁ and Pₙ is obtained according to the following formula:
Pₓ = Pₙ₋₂(−0.5t³ + t² − 0.5t) + Pₙ₋₁(1.5t³ − 2.5t² + 1) + Pₙ(−1.5t³ + 2t² + 0.5t) + (2Pₙ − Pₙ₋₁)(0.5t³ − 0.5t²).
Optionally, the values of t may be a plurality of preset constant values. They may also be determined from the number of interpolation point data to be inserted, that is, t = j/(M + 1) with 0 < j < M + 1, where j is a constant and M is the number of interpolation point data to be inserted, so that the values of t are evenly distributed in [0, 1] and the positions of the resulting interpolation point data are more reasonably distributed.
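The general formula and the two end-point formulas above can be combined into one routine. The sketch below densifies a sequence of facial feature points with Catmull-Rom interpolation, simulating the missing control point before the first point as 2P₁ − P₂ and after the last point as 2Pₙ − Pₙ₋₁, and spacing t as j/(M + 1); the function name and the parameter m are assumptions made for the example.

import numpy as np

def densify_feature_points(points, m):
    """Insert m Catmull-Rom interpolation points between every two adjacent feature points."""
    pts = [np.asarray(p, dtype=float) for p in points]
    padded = [2 * pts[0] - pts[1]] + pts + [2 * pts[-1] - pts[-2]]  # simulated end control points

    def interp(p0, p1, p2, p3, t):
        return (p0 * (-0.5 * t**3 + t**2 - 0.5 * t)
                + p1 * (1.5 * t**3 - 2.5 * t**2 + 1.0)
                + p2 * (-1.5 * t**3 + 2.0 * t**2 + 0.5 * t)
                + p3 * (0.5 * t**3 - 0.5 * t**2))

    out = []
    for i in range(1, len(padded) - 2):             # each segment between original feature points
        out.append(padded[i])
        for j in range(1, m + 1):                   # t = j / (m + 1), evenly spread in (0, 1)
            out.append(interp(padded[i - 1], padded[i], padded[i + 1], padded[i + 2], j / (m + 1)))
    out.append(padded[-2])                          # the last original feature point
    return np.asarray(out)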
S705, a characteristic curve is generated according to the facial feature point data and the interpolation point data, and the characteristic curve further comprises interpolation points corresponding to the interpolation point data.
The interpolation point data are coordinates in the same plane coordinate system as the facial feature points; all the facial feature points and interpolation points are connected to form the facial feature point connection line, and the characteristic curve of the connection line is generated from the coordinates of each facial feature point and each interpolation point.
Referring to fig. 7, in a possible implementation manner, the above S303 specifically includes S706 to S708. It should be noted that the number of the feature points and the interpolation points of the feature curve is the same as that of the preset curve, and the feature curve and the preset curve are normalized through the step S703, that is, the left and right reference points of the feature curve and the preset curve are already overlapped, and at this time, the feature curve and the preset curve satisfy the translation invariance, the rotation invariance and the scale invariance of the time warping distance algorithm, so that the similarity between the two curves can be calculated by using the dynamic time warping algorithm DTW.
S706, calculating distance matrixes between facial feature point data on a feature curve and preset feature point data on a preset curve, wherein each distance matrix corresponds to one preset curve, and the feature curve is obtained by connecting facial feature points obtained by translation, scaling and/or rotation;
the distance matrix is composed of distance values between each face feature point data on the feature curve and each preset feature point data on the preset curve. Specifically, the distance value between the facial feature point and the preset feature point can be calculated according to the coordinates of the facial feature point and the preset feature point in the same coordinate system. Taking the curve a shown in fig. 2 as a characteristic curve and the curve B as a preset curve as an example, assuming that the coordinates of the characteristic points of the normalized curve a and curve B are a1(0, 1), a2(1, 0), A3(2, 0), a4(3, 1), B1(0, 1), B2(1, 2), B3(2, 2), and B4(3, 1) in the same coordinate system, the distance values of the characteristic points on the curve a and curve B are calculated as shown in the following table:
          A1(0,1)  A2(1,0)  A3(2,0)  A4(3,1)
B1(0,1)   0        1.4      2.2      3
B2(1,2)   1.4      2        2.2      2.2
B3(2,2)   2.2      2.2      2        1.4
B4(3,1)   3        2.2      1.4      0
As can be seen from the above table, the distance matrix between curve A and curve B is:
0    1.4  2.2  3
1.4  2    2.2  2.2
2.2  2.2  2    1.4
3    2.2  1.4  0
where each distance value is an element of the distance matrix.
S707, inquiring the sum of minimum elements of the distance matrix from the upper left corner to the lower right corner;
Taking the distance matrix between curve A and curve B as an example, A1 and B1 are the coincident left reference points, i.e., the upper-left corner of the distance matrix, and A4 and B4 are the coincident right reference points, i.e., the lower-right corner of the distance matrix. Therefore, as shown in the shortest-path diagram of the distance matrix in Fig. 8, the sum of the minimum elements of the distance matrix from the upper-left corner to the lower-right corner is 0 + 2 + 2 + 0 = 4.
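This 4 × 4 example can be checked directly from the coordinates given above; the short script below (NumPy is used purely for illustration) rebuilds the distance matrix and the figures in the table.

import numpy as np

A = np.array([(0, 1), (1, 0), (2, 0), (3, 1)], dtype=float)   # A1..A4 on the characteristic curve
B = np.array([(0, 1), (1, 2), (2, 2), (3, 1)], dtype=float)   # B1..B4 on the preset curve

# Element [i, j] is the Euclidean distance between B(i+1) and A(j+1), rounded to one decimal place.
dist = np.round(np.linalg.norm(B[:, None, :] - A[None, :, :], axis=-1), 1)
print(dist)
# The cheapest path from the upper-left to the lower-right corner runs along the diagonal:
# 0 + 2 + 2 + 0 = 4, as shown in Fig. 8.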
And S708, determining the similarity between the characteristic curve and a preset curve according to the sum of the minimum elements.
The smaller the value of the sum of the minimum elements, the higher the similarity between the characteristic curve and the preset curve. In this embodiment, the similarity may be output simply as, for example, "the similarity between characteristic curve A and preset curve B is the highest", without outputting a specific value; alternatively, a correspondence between the sum of the minimum elements and preset similarity values may be set, and the similarity value determined from the value of the sum of the minimum elements.
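One simple way to realise such a correspondence, given purely as an assumed example rather than the mapping used by the application, is a monotonically decreasing function of the sum of the minimum elements:

def similarity_from_min_sum(min_sum):
    """Map the sum of minimum elements to a similarity in (0, 1]; a sum of 0 maps to 1 (i.e. 100%)."""
    return 1.0 / (1.0 + min_sum)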
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
Fig. 9 is a block diagram of a face shape classification device according to an embodiment of the present application, which corresponds to the face shape classification method according to the above embodiment, and only shows portions related to the embodiment of the present application for convenience of description.
Referring to fig. 9, the apparatus includes:
an obtaining module 901, configured to obtain face feature point data of a target user, where the face feature point data is used to indicate position information of each face feature point of the target user;
a generating module 902, configured to generate a characteristic curve according to the facial feature point data, where the characteristic curve is formed by connecting facial feature points corresponding to the facial feature point data;
a matching module 903, configured to perform similarity matching between the feature curve and a preset curve corresponding to each face shape;
and the classifying module 904 is configured to use the face shape corresponding to the preset curve with the highest similarity to the characteristic curve as a face shape classification result of the target user.
It should be noted that, for the information interaction, execution process, and other contents between the above-mentioned devices/units, the specific functions and technical effects thereof are based on the same concept as those of the embodiment of the method of the present application, and specific reference may be made to the part of the embodiment of the method, which is not described herein again.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
Fig. 10 is a schematic structural diagram of a terminal device according to an embodiment of the present application. As shown in fig. 10, the terminal device 10 of this embodiment includes: at least one processor 100 (only one shown in fig. 10), a memory 101, and a computer program 102 stored in the memory 101 and executable on the at least one processor 100, the processor 100 implementing the steps of any of the above-described method embodiments when executing the computer program 102.
The terminal device 10 may be a mobile phone, a desktop computer, a notebook, a palm computer, a cloud server, or other computing devices. The terminal device may include, but is not limited to, a processor 100, a memory 101. Those skilled in the art will appreciate that fig. 10 is merely an example of the terminal device 10, and does not constitute a limitation of the terminal device 10, and may include more or less components than those shown, or combine some of the components, or different components, such as an input-output device, a network access device, etc.
The Processor 100 may be a Central Processing Unit (CPU), and the Processor 100 may be other general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), an off-the-shelf Programmable Gate Array (FPGA) or other Programmable logic device, a discrete Gate or transistor logic device, a discrete hardware component, or the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
In some embodiments, the memory 101 may be an internal storage unit of the terminal device 10, such as a hard disk or memory of the terminal device 10. In other embodiments, the memory 101 may be an external storage device of the terminal device 10, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card or a Flash Card. Further, the memory 101 may include both an internal storage unit and an external storage device of the terminal device 10. The memory 101 is used to store an operating system, applications, a boot loader (BootLoader), data and other programs, such as the program code of the computer program, and may also be used to temporarily store data that has been or will be output.
The embodiments of the present application further provide a computer-readable storage medium, where a computer program is stored, and when the computer program is executed by a processor, the computer program implements the steps in the above-mentioned method embodiments.
The embodiments of the present application provide a computer program product, which when running on a mobile terminal, enables the mobile terminal to implement the steps in the above method embodiments when executed.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, all or part of the processes in the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium and can implement the steps of the embodiments of the methods described above when the computer program is executed by a processor. Wherein the computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form, etc. The computer readable medium may include at least: any entity or device capable of carrying computer program code to a photographing apparatus/terminal apparatus, a recording medium, computer Memory, Read-Only Memory (ROM), random-access Memory (RAM), an electrical carrier signal, a telecommunications signal, and a software distribution medium. Such as a usb-disk, a removable hard disk, a magnetic or optical disk, etc. In certain jurisdictions, computer-readable media may not be an electrical carrier signal or a telecommunications signal in accordance with legislative and patent practice.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/network device and method may be implemented in other ways. For example, the above-described apparatus/network device embodiments are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implementing, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not implemented. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (10)

1. A method for classifying a facial form of a human face, comprising:
acquiring facial feature point data of a target user, wherein the facial feature point data is used for indicating position information of each facial feature point of the target user;
generating a characteristic curve according to the facial feature point data, wherein the characteristic curve is formed by connecting facial feature points corresponding to the facial feature point data;
carrying out similarity matching on the characteristic curve and a preset curve corresponding to each human face shape respectively;
and taking the face type corresponding to the preset curve with the highest similarity to the characteristic curve as the face type classification result of the target user.
2. The method for classifying a facial form of a human face as claimed in claim 1, wherein said obtaining facial form feature point data of a target user comprises:
acquiring a three-dimensional human head image of the target user, and extracting three-dimensional face feature point data of the three-dimensional human head image;
and projecting the three-dimensional face feature point data towards a preset direction for two-dimensionalization to obtain the face feature point data of the target user.
3. The method of classifying a facial form of claim 1, wherein said generating a feature curve from said facial form feature point data comprises:
based on a preset interpolation method, inserting a plurality of interpolation point data between every two facial feature point data;
and generating a characteristic curve according to the face feature point data and the interpolation point data, wherein the characteristic curve also comprises interpolation points corresponding to the interpolation point data.
4. The method of classifying a facial form of claim 3, wherein said interpolating a plurality of interpolation point data between every two facial form feature point data based on a predetermined interpolation method comprises:
for two adjacent facial feature point data Pᵢ and Pᵢ₊₁,
obtaining the interpolation point data Pₓ between Pᵢ and Pᵢ₊₁ according to a preset formula, wherein the preset formula is:
Pₓ = Pᵢ₋₁(−0.5t³ + t² − 0.5t) + Pᵢ(1.5t³ − 2.5t² + 1) + Pᵢ₊₁(−1.5t³ + 2t² + 0.5t) + Pᵢ₊₂(0.5t³ − 0.5t²),
wherein i = 1, 2, …, N − 2, N is the total number of the facial feature point data, Pᵢ₋₁ is the facial feature point data adjacent to said Pᵢ, Pᵢ₊₂ is the facial feature point data adjacent to Pᵢ₊₁, and t takes values in the range [0, 1]; when t takes a plurality of values, a plurality of said interpolation point data between Pᵢ and Pᵢ₊₁ can be obtained;
further, when i = 1, the interpolation point data between P₁ and P₂ is obtained according to the following formula:
Pₓ = (2P₁ − P₂)(−0.5t³ + t² − 0.5t) + P₁(1.5t³ − 2.5t² + 1) + P₂(−1.5t³ + 2t² + 0.5t) + P₃(0.5t³ − 0.5t²),
and when i = N − 2, the interpolation point data between Pₙ₋₁ and Pₙ is obtained according to the following formula:
Pₓ = Pₙ₋₂(−0.5t³ + t² − 0.5t) + Pₙ₋₁(1.5t³ − 2.5t² + 1) + Pₙ(−1.5t³ + 2t² + 0.5t) + (2Pₙ − Pₙ₋₁)(0.5t³ − 0.5t²).
5. The method of classifying a face type according to claim 4, wherein t = j/(M + 1), 0 < j < M + 1, j is a constant, and M is the number of interpolation point data to be inserted.
6. The method of classifying a facial form according to any one of claims 1 to 5, wherein before generating a feature curve from the facial form feature point data, further comprising:
translating, scaling and/or rotating the facial feature point data so that a left cheek feature point in the facial feature point data coincides with a first preset reference point and a right cheek feature point in the facial feature point data coincides with a second preset reference point.
7. The method for classifying facial shapes according to claim 6, wherein the similarity matching of the characteristic curve and the corresponding preset curve for each facial shape comprises:
calculating distance matrixes between the facial feature point data on the feature curve and preset feature point data on the preset curve, wherein each distance matrix corresponds to one preset curve, and the feature curve is obtained by connecting facial feature points obtained by translation, scaling and/or rotation;
querying the sum of minimum elements of the distance matrix from the upper left corner to the lower right corner;
and determining the similarity between the characteristic curve and a preset curve according to the sum of the minimum elements.
8. A face type classification device, comprising:
the face feature point acquisition module is used for acquiring face feature point data of a target user, and the face feature point data is used for indicating position information of each face feature point of the target user;
the generating module is used for generating a characteristic curve according to the facial feature point data, and the characteristic curve is formed by connecting the facial feature points corresponding to the facial feature point data;
the matching module is used for matching the similarity of the characteristic curve and a preset curve corresponding to each face type;
and the classification module is used for taking the face corresponding to the preset curve with the highest similarity with the characteristic curve as the face classification result of the target user.
9. A terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the method according to any of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1 to 7.
CN202010164459.0A 2020-03-11 2020-03-11 Face type classification method and device, terminal equipment and storage medium Pending CN111460910A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010164459.0A CN111460910A (en) 2020-03-11 2020-03-11 Face type classification method and device, terminal equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010164459.0A CN111460910A (en) 2020-03-11 2020-03-11 Face type classification method and device, terminal equipment and storage medium

Publications (1)

Publication Number Publication Date
CN111460910A true CN111460910A (en) 2020-07-28

Family

ID=71682775

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010164459.0A Pending CN111460910A (en) 2020-03-11 2020-03-11 Face type classification method and device, terminal equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111460910A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113190585A (en) * 2021-04-12 2021-07-30 郑州轻工业大学 Big data acquisition and analysis system for clothing design
CN113821574A (en) * 2021-08-31 2021-12-21 北京达佳互联信息技术有限公司 User behavior classification method and device and storage medium
CN113837326A (en) * 2021-11-30 2021-12-24 自然资源部第一海洋研究所 Airborne laser sounding data registration method based on characteristic curve

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105517680A (en) * 2015-04-28 2016-04-20 北京旷视科技有限公司 Device, system and method for recognizing human face, and computer program product
CN106909875A (en) * 2016-09-12 2017-06-30 湖南拓视觉信息技术有限公司 Face shape of face sorting technique and system
CN106971164A (en) * 2017-03-28 2017-07-21 北京小米移动软件有限公司 Shape of face matching process and device
CN106980840A (en) * 2017-03-31 2017-07-25 北京小米移动软件有限公司 Shape of face matching process, device and storage medium
CN107944093A (en) * 2017-11-02 2018-04-20 广东数相智能科技有限公司 A kind of lipstick color matching system of selection, electronic equipment and storage medium
CN108596091A (en) * 2018-04-24 2018-09-28 杭州数为科技有限公司 Figure image cartooning restoring method, system and medium
CN109272473A (en) * 2018-10-26 2019-01-25 维沃移动通信(杭州)有限公司 A kind of image processing method and mobile terminal
CN109410315A (en) * 2018-08-31 2019-03-01 南昌理工学院 Hair styling method, device, readable storage medium storing program for executing and intelligent terminal
CN109446893A (en) * 2018-09-14 2019-03-08 百度在线网络技术(北京)有限公司 Face identification method, device, computer equipment and storage medium
CN109858343A (en) * 2018-12-24 2019-06-07 深圳云天励飞技术有限公司 A kind of control method based on recognition of face, device and storage medium
WO2020019451A1 (en) * 2018-07-27 2020-01-30 平安科技(深圳)有限公司 Face recognition method and apparatus, computer device, and storage medium

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105517680A (en) * 2015-04-28 2016-04-20 北京旷视科技有限公司 Device, system and method for recognizing human face, and computer program product
CN106909875A (en) * 2016-09-12 2017-06-30 湖南拓视觉信息技术有限公司 Face shape of face sorting technique and system
CN106971164A (en) * 2017-03-28 2017-07-21 北京小米移动软件有限公司 Shape of face matching process and device
CN106980840A (en) * 2017-03-31 2017-07-25 北京小米移动软件有限公司 Shape of face matching process, device and storage medium
CN107944093A (en) * 2017-11-02 2018-04-20 广东数相智能科技有限公司 A kind of lipstick color matching system of selection, electronic equipment and storage medium
CN108596091A (en) * 2018-04-24 2018-09-28 杭州数为科技有限公司 Figure image cartooning restoring method, system and medium
WO2020019451A1 (en) * 2018-07-27 2020-01-30 平安科技(深圳)有限公司 Face recognition method and apparatus, computer device, and storage medium
CN109410315A (en) * 2018-08-31 2019-03-01 南昌理工学院 Hair styling method, device, readable storage medium storing program for executing and intelligent terminal
CN109446893A (en) * 2018-09-14 2019-03-08 百度在线网络技术(北京)有限公司 Face identification method, device, computer equipment and storage medium
CN109272473A (en) * 2018-10-26 2019-01-25 维沃移动通信(杭州)有限公司 A kind of image processing method and mobile terminal
CN109858343A (en) * 2018-12-24 2019-06-07 深圳云天励飞技术有限公司 A kind of control method based on recognition of face, device and storage medium

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113190585A (en) * 2021-04-12 2021-07-30 郑州轻工业大学 Big data acquisition and analysis system for clothing design
CN113821574A (en) * 2021-08-31 2021-12-21 北京达佳互联信息技术有限公司 User behavior classification method and device and storage medium
CN113837326A (en) * 2021-11-30 2021-12-24 自然资源部第一海洋研究所 Airborne laser sounding data registration method based on characteristic curve
CN113837326B (en) * 2021-11-30 2022-03-25 自然资源部第一海洋研究所 Airborne laser sounding data registration method based on characteristic curve

Similar Documents

Publication Publication Date Title
CN111815755B (en) Method and device for determining blocked area of virtual object and terminal equipment
CN109859305B (en) Three-dimensional face modeling and recognizing method and device based on multi-angle two-dimensional face
WO2020207190A1 (en) Three-dimensional information determination method, three-dimensional information determination device, and terminal apparatus
US20190311190A1 (en) Methods and apparatuses for determining hand three-dimensional data
CN111460910A (en) Face type classification method and device, terminal equipment and storage medium
CN107463865B (en) Face detection model training method, face detection method and device
CN111723691B (en) Three-dimensional face recognition method and device, electronic equipment and storage medium
CN109948397A (en) A kind of face image correcting method, system and terminal device
CN114187633B (en) Image processing method and device, and training method and device for image generation model
CN112101073B (en) Face image processing method, device, equipment and computer storage medium
CN105096353A (en) Image processing method and device
CN113610958A (en) 3D image construction method and device based on style migration and terminal
WO2024012333A1 (en) Pose estimation method and apparatus, related model training method and apparatus, electronic device, computer readable medium and computer program product
CN108573192B (en) Glasses try-on method and device matched with human face
CN111460937B (en) Facial feature point positioning method and device, terminal equipment and storage medium
CN110032941B (en) Face image detection method, face image detection device and terminal equipment
CN115031635A (en) Measuring method and device, electronic device and storage medium
CN115439733A (en) Image processing method, image processing device, terminal equipment and computer readable storage medium
WO2023109086A1 (en) Character recognition method, apparatus and device, and storage medium
CN115861515A (en) Three-dimensional face reconstruction method, computer program product and electronic device
CN112348069B (en) Data enhancement method, device, computer readable storage medium and terminal equipment
CN112613357B (en) Face measurement method, device, electronic equipment and medium
CN115994944A (en) Three-dimensional key point prediction method, training method and related equipment
CN113781653A (en) Object model generation method and device, electronic equipment and storage medium
CN112464753B (en) Method and device for detecting key points in image and terminal equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination