CN112613357A - Face measurement method, face measurement device, electronic equipment and medium - Google Patents


Info

Publication number
CN112613357A
CN112613357A
Authority
CN
China
Prior art keywords
key point
face
dimensional
face image
coordinates
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011422988.2A
Other languages
Chinese (zh)
Other versions
CN112613357B (en)
Inventor
马啸
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Shuliantianxia Intelligent Technology Co Ltd
Original Assignee
Shenzhen Shuliantianxia Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Shuliantianxia Intelligent Technology Co Ltd filed Critical Shenzhen Shuliantianxia Intelligent Technology Co Ltd
Priority to CN202011422988.2A priority Critical patent/CN112613357B/en
Publication of CN112613357A publication Critical patent/CN112613357A/en
Application granted granted Critical
Publication of CN112613357B publication Critical patent/CN112613357B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/161 Detection; Localisation; Normalisation
    • G06V 40/165 Detection; Localisation; Normalisation using facial parts and geometric relationships
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/60 Analysis of geometric attributes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30196 Human being; Person
    • G06T 2207/30201 Face

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Geometry (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses a face measurement method, a face measurement device, electronic equipment and a medium. The method comprises the following steps: acquiring a face image and performing key point detection on it to obtain first key point coordinates of the face image; performing affine transformation on the first key point coordinates according to the two-dimensional key point coordinates of a preset standard face model to obtain second key point coordinates; acquiring a reference size corresponding to any two key points in the preset standard face model, the reference size representing the standard distance between those two key points on a two-dimensional plane; scaling the face image based on the reference size and the second key point coordinates to obtain third key point coordinates of the face image; combining the third key point coordinates with the third-dimension key point coordinate of the preset standard face model to obtain the three-dimensional coordinates of the face image; and calculating a face measurement result from the three-dimensional coordinates of the face image.

Description

Face measurement method, face measurement device, electronic equipment and medium
Technical Field
The invention relates to the technical field of computer vision, in particular to a face measurement method, a face measurement device, electronic equipment and a medium.
Background
At present, many terminal devices have a camera function, and many of their applications can analyze face pictures shot by the camera to obtain measurement information such as the distances and proportions of the five sense organs, eye shape, eyebrow shape and face shape. However, the face measurement information obtained by current analysis methods is limited: the real size of each part of the face cannot be accurately obtained.
Disclosure of Invention
The application provides a face measurement method, a face measurement device, electronic equipment and a medium.
In a first aspect, a face measurement method is provided, including:
acquiring a face image, and performing key point detection on the face image to acquire a first key point coordinate of the face image, wherein the first key point coordinate is a two-dimensional coordinate;
performing affine transformation on the first key point coordinates of the face image according to two-dimensional key point coordinates of a preset standard face model to obtain second key point coordinates of the face image, wherein the preset standard face model is a three-dimensional face model, and the two-dimensional key point coordinates of the preset standard face model are used for indicating the positions of key points in the preset standard face model on a two-dimensional plane;
acquiring reference sizes corresponding to any two key points in the preset standard face model, wherein the reference sizes are used for representing the standard distance of the any two key points on the two-dimensional plane;
based on the reference size and the second key point coordinates of the face image, carrying out scaling processing on the face image to obtain third key point coordinates of the face image;
combining a third key point coordinate of the preset standard face model with the third key point coordinate to obtain a three-dimensional coordinate of the face image, wherein the third key point coordinate is a one-dimensional key point coordinate except the two-dimensional key point coordinate in the three-dimensional key point coordinate of the preset standard face model, and the three-dimensional key point coordinate of the preset standard face model is used for indicating the position of a key point in the preset standard face model in a three-dimensional space;
and calculating according to the three-dimensional coordinates of the face image to obtain a face measurement result, wherein the face measurement result comprises the distance between two coordinate points in the three-dimensional coordinates or the angle formed by connecting at least three coordinate points.
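The two measurement types named in the final step above, the distance between two coordinate points and the angle formed by connecting at least three coordinate points, can be sketched as follows. This is an illustrative Python sketch, not the disclosed implementation; the example coordinates are invented.

```python
import math

def distance_3d(p, q):
    # Euclidean distance between two 3D keypoints
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def angle_at_vertex(a, v, b):
    # Angle (degrees) at vertex v formed by the rays v->a and v->b
    va = [x - y for x, y in zip(a, v)]
    vb = [x - y for x, y in zip(b, v)]
    dot = sum(x * y for x, y in zip(va, vb))
    cos_t = dot / (distance_3d(a, v) * distance_3d(b, v))
    # Clamp to guard against floating-point drift outside [-1, 1]
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_t))))

# Illustrative coordinates only (units arbitrary)
nasion = (0.0, 0.0, 10.0)
chin = (0.0, -62.0, 8.0)
face_height = distance_3d(nasion, chin)
right_angle = angle_at_vertex((1, 0, 0), (0, 0, 0), (0, 1, 0))
```

A face measurement result such as a jaw angle would connect three detected keypoints through `angle_at_vertex`, with the middle keypoint as the vertex.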
In an optional implementation manner, the performing affine transformation on the first key point coordinates of the face image according to the two-dimensional key point coordinates of the preset standard face model to obtain the second key point coordinates of the face image includes:
selecting N groups of corresponding coordinates from the two-dimensional key point coordinates and the first key point coordinates of the preset standard face model, wherein N is less than the number of the first key point coordinates, and one group of coordinates consists of the two-dimensional key point coordinates and the first key point coordinates corresponding to one key point;
calculating and obtaining a transformation matrix based on the N groups of coordinates; and performing affine transformation on the first key point coordinates by using the transformation matrix to obtain second key point coordinates of the face image.
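A minimal sketch of the transformation-matrix step above, assuming a least-squares fit over the N groups of corresponding coordinates (the patent does not prescribe a particular solver; NumPy is used here):

```python
import numpy as np

def estimate_affine(src, dst):
    """Least-squares 2x3 affine matrix mapping src points to dst points.

    src, dst: (N, 2) arrays of corresponding 2D keypoints, N >= 3
    (fewer groups than the full keypoint set, as in the claim).
    """
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    # Homogeneous design matrix rows [x, y, 1]
    A = np.hstack([src, np.ones((len(src), 1))])
    # Solve A @ M_cols ~= dst in the least-squares sense
    M_cols, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return M_cols.T  # 2x3 transformation matrix

def apply_affine(M, pts):
    # Transform (N, 2) points with the 2x3 matrix M
    pts = np.asarray(pts, dtype=float)
    A = np.hstack([pts, np.ones((len(pts), 1))])
    return A @ M.T
```

When the transform should be restricted to rotation, uniform scale and translation, OpenCV's `cv2.estimateAffinePartial2D` offers a similarity-constrained alternative.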
In an optional implementation manner, the obtaining of the reference sizes corresponding to any two key points in the preset standard face model includes:
acquiring key point coordinates of a plurality of first sample face images;
acquiring key point coordinates corresponding to any two key points from the key point coordinates of each first sample face image, and calculating the distance value of any two key points based on the key point coordinates corresponding to any two key points to obtain a plurality of distance values;
determining the reference dimension from the plurality of distance values.
In an alternative embodiment, the determining the reference dimension according to the plurality of distance values includes:
acquiring the median of the plurality of distance values as the reference size; or,
and acquiring the maximum value and the minimum value in the plurality of distance values, and taking the average value of the maximum value and the minimum value as the reference size.
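The two reference-size choices above (the median, or the mean of the maximum and minimum) can be sketched in a few lines; the sample distance values are invented.

```python
import statistics

def reference_size(distances, method="median"):
    # distances: the distance between the same two keypoints,
    # measured across a plurality of first sample face images
    if method == "median":
        return statistics.median(distances)
    if method == "midrange":
        # mean of the maximum and minimum distance values
        return (max(distances) + min(distances)) / 2
    raise ValueError(method)

samples = [32.0, 30.5, 31.2, 33.8, 29.9]
ref_median = reference_size(samples)
ref_midrange = reference_size(samples, "midrange")
```

The median is the more outlier-robust of the two, which may matter when some sample images have poor keypoint detections.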
In an optional implementation manner, the scaling the face image based on the reference size and the second key point coordinates of the face image to obtain the third key point coordinates of the face image includes:
under the condition that the two-dimensional key point coordinate corresponding to any one of the two arbitrary key points is aligned with the second key point coordinate corresponding to the any one key point, scaling the face image according to the reference size so as to enable the distance between the two arbitrary key points in the face image to be consistent with the reference size, and thus obtaining a third key point coordinate of the face image.
In an optional implementation manner, the number of the reference sizes is multiple, the multiple reference sizes include reference sizes corresponding to multiple groups of key points respectively and/or multiple different reference sizes corresponding to a group of key points, a group of key points is composed of any two key points, and at least one of the two key points corresponding to different groups is different;
the face measurement result comprises a plurality of face measurement results corresponding to the plurality of reference sizes;
after the calculation is performed according to the three-dimensional coordinates of the face image and a face measurement result is obtained, the method further comprises the following steps:
and averaging the plurality of face measurement results to obtain a target face measurement result.
In an optional embodiment, the method further comprises:
acquiring a second sample face image; performing key point detection on the second sample face image to obtain sample face key points;
mapping the sample face key points to a depth point cloud to obtain the depth value of each second sample face key point so as to obtain the three-dimensional key point coordinates of the second sample face image;
moving the three-dimensional key point coordinates of the second sample face image to a coordinate system with a preset point as an original point, and normalizing the three-dimensional key point coordinates of the second sample face image based on a preset maximum value;
grouping the three-dimensional key point coordinates of the second sample face images; and averaging the coordinates of each group of three-dimensional key points in the coordinate system respectively to obtain average key point coordinates, which are used as the three-dimensional key point coordinates of the preset standard face model.
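The sample-based construction above (translate each sample's keypoints to a preset origin, normalize by a preset maximum, then average per keypoint across samples) could be sketched as follows. The array layout and the default nasion index are assumptions for illustration, not taken from the patent.

```python
import numpy as np

def build_standard_model(sample_keypoints, origin_idx=15, max_value=None):
    """Average normalized 3D keypoints over sample faces.

    sample_keypoints: (S, K, 3) array, S sample faces with K keypoints each.
    origin_idx: keypoint moved to the origin (15 = nasion in Fig. 2A, assumed).
    max_value: preset normalization maximum; per-sample magnitude if None.
    """
    pts = np.asarray(sample_keypoints, dtype=float)
    # Move each sample so the chosen keypoint becomes the origin
    pts = pts - pts[:, origin_idx:origin_idx + 1, :]
    # Normalize by the preset (or per-sample) maximum value
    if max_value is None:
        max_value = np.abs(pts).max(axis=(1, 2), keepdims=True)
    pts = pts / max_value
    # Group by keypoint index and average across samples
    return pts.mean(axis=0)
```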
In an optional implementation manner, the scaling the face image according to the reference size includes:
calculating the distance between any two key points;
calculating according to the distance between any two key points and the reference size to obtain a scaling;
and carrying out scaling processing on the face image according to the scaling.
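The three scaling steps above can be condensed into a short sketch: the scaling ratio is the reference size divided by the current distance between the two chosen keypoints, and every keypoint is multiplied by that ratio. The helper name and sample values are illustrative only.

```python
import math

def scale_keypoints(keypoints, idx_a, idx_b, reference_size):
    # Scale 2D keypoints so the distance between keypoints idx_a and
    # idx_b matches the reference size (hypothetical helper)
    (xa, ya), (xb, yb) = keypoints[idx_a], keypoints[idx_b]
    current = math.hypot(xb - xa, yb - ya)
    ratio = reference_size / current  # the claimed scaling ratio
    return [(x * ratio, y * ratio) for x, y in keypoints]

pts = [(0.0, 0.0), (0.0, 100.0), (30.0, 40.0)]
scaled = scale_keypoints(pts, 0, 1, 50.0)  # distance 100 shrinks to 50
```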
In an optional embodiment, before the acquiring the face image, the method includes:
acquiring image data containing a human face;
the acquiring of the face image comprises:
carrying out face detection on the image data to obtain a face detection result of the image data, wherein the face detection result comprises edge coordinates of a face image;
and extracting the face image in the image data according to the edge coordinates of the face image.
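Extracting the face image from the edge coordinates amounts to cropping the detection box out of the image array; a minimal sketch, assuming an H x W x C image and clamping the box to the image bounds:

```python
import numpy as np

def crop_face(image, x1, y1, x2, y2):
    # (x1, y1) / (x2, y2): upper-left / lower-right corners of the
    # face detection frame, clamped to the image bounds
    h, w = image.shape[:2]
    x1, x2 = max(0, int(x1)), min(w, int(x2))
    y1, y2 = max(0, int(y1)), min(h, int(y2))
    return image[y1:y2, x1:x2]

frame = np.zeros((480, 640, 3), dtype=np.uint8)  # dummy image data
face = crop_face(frame, 100, 50, 300, 350)       # 300 x 200 crop
```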
In a second aspect, a face measurement device is provided, which includes:
the system comprises an acquisition module, a processing module and a display module, wherein the acquisition module is used for acquiring a face image, and performing key point detection on the face image to acquire a first key point coordinate of the face image, and the first key point coordinate is a two-dimensional coordinate;
the transformation module is used for carrying out affine transformation on the first key point coordinates of the face image according to the two-dimensional key point coordinates of a preset standard face model to obtain second key point coordinates of the face image, wherein the preset standard face model is a three-dimensional face model, and the two-dimensional key point coordinates of the preset standard face model are used for indicating the positions of key points in the preset standard face model on a two-dimensional plane;
a scaling module to:
acquiring reference sizes corresponding to any two key points in the preset standard face model, wherein the reference sizes are used for representing the standard distance of the any two key points on the two-dimensional plane;
based on the reference size and the second key point coordinates of the face image, carrying out scaling processing on the face image to obtain third key point coordinates of the face image;
a calculation module to:
combining a third key point coordinate of the preset standard face model with the third key point coordinate to obtain a three-dimensional coordinate of the face image, wherein the third key point coordinate is a one-dimensional key point coordinate except the two-dimensional key point coordinate in the three-dimensional key point coordinate of the preset standard face model, and the three-dimensional key point coordinate of the preset standard face model is used for indicating the position of a key point in the preset standard face model in a three-dimensional space;
and calculating according to the three-dimensional coordinates of the face image to obtain a face measurement result, wherein the face measurement result comprises the distance between two coordinate points in the three-dimensional coordinates or the angle formed by connecting at least three coordinate points.
In a third aspect, an electronic device is provided, comprising a memory and a processor, the memory storing a computer program that, when executed by the processor, causes the processor to perform the steps as in the first aspect and any one of its possible implementations.
In a fourth aspect, there is provided a computer storage medium storing one or more instructions adapted to be loaded by a processor and to perform the steps of the first aspect and any possible implementation thereof.
In the embodiments of the application, a face image is acquired and key point detection is performed on it to obtain first key point coordinates, which are two-dimensional coordinates. Affine transformation is performed on the first key point coordinates according to the two-dimensional key point coordinates of a preset standard face model to obtain second key point coordinates, where the preset standard face model is a three-dimensional face model and its two-dimensional key point coordinates indicate the positions of its key points on a two-dimensional plane. A reference size corresponding to any two key points in the preset standard face model is acquired, representing the standard distance between those two key points on the two-dimensional plane, and the face image is scaled based on the reference size and the second key point coordinates to obtain third key point coordinates. The third key point coordinates are combined with the third-dimension key point coordinate of the preset standard face model (the one-dimensional coordinate of its three-dimensional key point coordinates other than the two-dimensional ones) to obtain the three-dimensional coordinates of the face image. Finally, a face measurement result is calculated from the three-dimensional coordinates; it comprises the distance between two coordinate points or the angle formed by connecting at least three coordinate points.
In this way, affine transformation can be carried out on the acquired face key point coordinates based on the preset standard face model, and the face image can be scaled using the selected reference size to adjust the face pose and proportion, so that the processed face image is aligned with the preset standard face model (the pose is a front face and the scale is uniform) and the measurement of each face part is more accurate. By combining the third key point coordinates with the third-dimension coordinate of the preset standard face model, the key points of the face image are mapped into three-dimensional space, and calculation on the resulting three-dimensional coordinates measures the real size of each part of the face.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments or the background art of the present application, the drawings required to be used in the embodiments or the background art of the present application will be described below.
Fig. 1 is a schematic flow chart of a face measurement method according to an embodiment of the present application;
fig. 2A is a schematic diagram illustrating a definition of a face key point according to an embodiment of the present application;
fig. 2B is a schematic diagram of a face detection box according to an embodiment of the present application;
fig. 2C is a schematic diagram of a two-dimensional image of a preset standard face model according to an embodiment of the present application;
fig. 2D is a schematic view of coordinate axes of key points of a human face according to an embodiment of the present application;
fig. 2E is another schematic view of coordinate axes of key points of a human face according to an embodiment of the present application;
fig. 3 is a schematic flow chart of another human face measuring method according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of a face measurement device according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions of the present application better understood, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first," "second," and the like in the description and claims of the present application and in the above-described drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
Neural Networks (NN) referred to in the embodiments of the present application are complex network systems formed by widely interconnecting a large number of simple processing units (called neurons). They reflect many basic features of human brain function and are highly complex nonlinear dynamical learning systems. A neural network has large-scale parallelism, distributed storage and processing, self-organization, self-adaptation and self-learning capabilities, and is particularly suitable for problems involving imprecise and fuzzy information that must consider many factors and conditions simultaneously.
Convolutional Neural Networks (CNN) are a class of feedforward neural networks that contain convolution computations and have a deep structure, and are among the representative algorithms of deep learning.
The embodiments of the present application will be described below with reference to the drawings.
Referring to fig. 1, fig. 1 is a schematic flow chart of a face measurement method according to an embodiment of the present application. The method can comprise the following steps:
101. the method comprises the steps of obtaining a face image, and carrying out key point detection on the face image to obtain a first key point coordinate of the face image, wherein the first key point coordinate is a two-dimensional coordinate.
The execution subject of the embodiments of the present application may be a face measurement apparatus or an electronic device. In specific implementations, the electronic device may be a terminal, also referred to as a terminal device, including but not limited to portable devices such as a mobile phone, a laptop computer, or a tablet computer having a touch-sensitive surface (e.g., a touch screen display and/or a touch pad). It should also be understood that in some embodiments, the device may not be a portable communication device but a desktop computer having a touch-sensitive surface (e.g., a touch screen display and/or a touch pad).
The face image is an image including only the face region and can be extracted from image data containing a face. Key point detection can be performed on the face image to locate a plurality of key points of the face. For example, cascaded regression algorithms based on deep learning, such as MTCNN and Deep Convolutional Network Cascade, can accurately locate the positions of predefined face key points in a face image; that is, the first key point coordinates of the face image can be obtained through key point detection. The face image is a two-dimensional image, so the obtained first key point coordinates are two-dimensional coordinates.
In the embodiment of the application, different key point detection algorithms or models can be used for detecting key points, and key points of each part in the face, such as eyes, a nose, eyebrows, a mouth and the like, can be defined according to the requirement of face measurement, which is not limited herein. For example, fig. 2A is a schematic view illustrating definition of key points of a face according to an embodiment of the present application, and as shown in fig. 2A, 27 key points, which are 0 to 26, are labeled in the face image and are respectively distributed in the contour and the edge position of five sense organs of the face in the face image.
In an optional implementation manner, before the step 101, the method further includes:
acquiring image data containing a human face;
the step 101 specifically includes:
carrying out face detection on the image data to obtain a face detection result of the image data, wherein the face detection result comprises edge coordinates of a face image;
and extracting the face image in the image data according to the edge coordinates of the face image.
The image data containing a face can be an image acquired by any device, such as a picture containing a face shot by a mobile phone camera. Specifically, a face detection model may be used to perform face detection on the input image data to obtain a face detection result, which mainly consists of face position information. In one embodiment, the face detection result may be embodied as a face detection frame detected in the image data, and the edge coordinates of the face image may be the corner coordinates of the face detection frame. That is, the image within the face detection frame can be determined and extracted as the face image.
For example, as shown in fig. 2B, a schematic diagram of a face detection frame, fig. 2B shows a rectangular frame in image data containing a face, labeled with the coordinates (x1, y1) of its upper left corner and the coordinates (x2, y2) of its lower right corner. The rectangular frame is the circumscribed rectangle of the face area; the face area inside it can be taken as the face image of the embodiments of the application, and key point detection is then performed on it. Preliminarily detecting the face area in this way narrows the key point detection range, reduces the amount of data processed, and makes the face key point detection result more accurate.
In the embodiments of the application, a trained target detection model can be adopted for face detection: an object detection model capable of detecting the face position is obtained by learning and training on a large amount of labeled face data. The face detection model or algorithm may be flexibly selected, for example a Multi-Task Convolutional Neural Network (MTCNN), an SSD (Single Shot MultiBox Detector) series model, a YOLO (You Only Look Once) series model, and the like, which is not limited in the embodiments of the present application. SSD detects objects in an image using a single deep neural network; YOLO is an object recognition and positioning algorithm based on a deep neural network whose greatest characteristic is its running speed, so it can be used in real-time systems.
102. Performing affine transformation on the first key point coordinates of the face image according to two-dimensional key point coordinates of a preset standard face model to obtain second key point coordinates of the face image, wherein the preset standard face model is a three-dimensional face model, and the two-dimensional key point coordinates of the preset standard face model are used for indicating the position of a key point in the preset standard face model on a two-dimensional plane.
Affine Transformation (also called Affine mapping) referred to in the embodiments of the present application means that, in geometry, one vector space is subjected to linear Transformation once and then translated into another vector space.
In the embodiment of the present application, a standard face model may be preset as a reference for face transformation processing, that is, the preset standard face model is a three-dimensional face model, where each key point has a corresponding three-dimensional key point coordinate. In the embodiment of the application, the key points of the face image can be more standard through affine transformation, and the first key point of the face image can be transformed to the two-dimensional plane space of the preset standard face model. Since the angle of the face is not always parallel to the camera when the image is captured, a side face condition may be displayed in the face image, and if the size calculation of the face part is performed using the detected first key point in this condition, the measured result may be small or distorted. Therefore, the preset standard face model is established, and the detected first key point is corrected by using the affine transformation principle, so that the subsequent face measurement result is more accurate. This process may be referred to as face alignment.
In practical application, information about the measured person, such as sex and age, can be collected, so that the preset standard face model can be established corresponding to specific standard statistical values, thereby improving measurement accuracy. For example, the head-and-face size table of adult men/women specified in the national standard (Head-face dimensions of adults, GB/T 2428-1998) can be used for adults, and the statistics by gender and age stage in the national standard (Head-face dimensions of Chinese minors, GB/T 26160-2010) can be used for teenagers. A coordinate system can be set to establish the preset standard face model, and the sizes of its parts determined according to the standard statistical values of the face parts, so as to determine the coordinates of its face key points. Of course, a standard face model may also be established using statistical results with high credibility issued by other authorities, which is not limited in the embodiments of the present application. A reliable preset standard face model established from standard statistical data is suitable as a reference standard for every face image, so that affine transformation of the key point coordinates of a face image can be carried out according to the key point coordinates of the preset standard face model.
Optionally, when the standard face model is established, the nasal height part can be used as the reference because it is least affected by the angle of the face (the nasal height is a line perpendicular to the ground and parallel to the lens, so its size in pixels is basically unchanged when the head turns), and the other key points can be placed in proportion to the nasal height according to defined preset standard values (such as the average statistical size of each face part). For example, a two-dimensional image of a preset standard face model established in this way can be seen in fig. 2C. As shown in fig. 2C, the nasion point (key point 15 in the figure) is defined as the origin, and the nasal height is defined as the distance

$$\overline{l_{15}l_{13}} = 51,$$

i.e. l15(0, 0), l13(0, 51); then the distance

$$\overline{l_{15}l_{4}} = 62$$

from the nasion to the key point under the chin gives the standard coordinates of key point 4 as l4(0, -62). The remaining points are placed by analogy, yielding a two-dimensional preset standard face model L_std{l_i(x_i, y_i)} based on the nasal height, where i denotes the i-th key point. The above standard statistical values of the face parts may be values in a two-dimensional image (that is, the face sizes are plane sizes), so a corresponding two-dimensional preset standard face model can be established; preset standard depth values of the face key points can then be taken as the third-dimensional coordinates, i.e. the two-dimensional coordinates of each key point in the two-dimensional preset standard face model are combined with the corresponding preset standard depth values to obtain three-dimensional coordinates, so that a three-dimensional preset standard face model can be established. Optionally, if the standard statistical values of the face parts are values in three dimensions (that is, the face size table gives sizes on the three-dimensional face), a corresponding three-dimensional preset standard face model may be established directly.
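The nasion-anchored construction described above can be sketched as follows. This is an illustrative sketch only: the function names are invented, and apart from key points 15, 13 and 4 the relative-offset table is a placeholder, not a set of GB/T statistical values.

```python
# Sketch: build a 2D standard face model anchored at the nasion (key point 15),
# placing the other key points in proportion to the standard nasal height.
NASION_HEIGHT = 51  # nasal height, key point 15 -> key point 13 (illustrative units)

def build_std_model_2d(relative_offsets):
    """relative_offsets: {index: (dx, dy)} expressed as fractions of the nasal height."""
    return {i: (dx * NASION_HEIGHT, dy * NASION_HEIGHT)
            for i, (dx, dy) in relative_offsets.items()}

# Nasion at the origin, key point 13 one nasal height above it,
# key point 4 (under the chin) 62/51 nasal heights below it.
offsets = {15: (0.0, 0.0), 13: (0.0, 1.0), 4: (0.0, -62 / 51)}
std_2d = build_std_model_2d(offsets)

def lift_to_3d(std_2d, std_depths):
    """Attach a preset standard depth value as the third-dimensional coordinate."""
    return {i: (x, y, std_depths[i]) for i, (x, y) in std_2d.items()}
```

A call such as `lift_to_3d(std_2d, depths)` then yields the three-dimensional preset standard face model from the two-dimensional one.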
In an alternative embodiment, the method further comprises:
acquiring a second sample face image; performing key point detection on the second sample face image to obtain sample face key points;
mapping the sample face key points to a depth point cloud to obtain the depth value of each second sample face key point so as to obtain the three-dimensional key point coordinates of the second sample face image;
moving the three-dimensional key point coordinates of the second sample face image to a coordinate system with a preset point as an original point, and normalizing the three-dimensional key point coordinates of the second sample face image based on a preset maximum value;
grouping the three-dimensional key point coordinates of the second sample face images; and averaging each group of three-dimensional key point coordinates in the coordinate system to obtain average key point coordinates, which are used as the three-dimensional key point coordinates of the preset standard face model.
Specifically, the three-dimensional key point coordinates of the second sample face image can be obtained by mapping the two-dimensional second sample face image onto a depth point cloud. A point cloud is a data set of points in a certain coordinate system; the depth point cloud in the embodiments of the present application may be understood as a data set of three-dimensional coordinates (x, y, z) of face key points, and may be visualized as a three-dimensional face structure in space. The depth point cloud thus provides the third-dimensional coordinate for each key point: every point in the depth point cloud has a depth value that can be used as the z-axis coordinate of the corresponding second sample face key point. In one implementation, the preset standard face model may be obtained by randomly sampling and scanning faces with a high-precision three-dimensional scanner to obtain second sample face images, performing face key point detection, and mapping the detected second sample face key points onto the depth point cloud. The first- and second-dimensional coordinates of the corresponding point in the depth point cloud are replaced by the two-dimensional coordinates of the second sample face key point; that is, the x-axis and y-axis coordinates of the second sample face key point are combined with the z-axis coordinate of the corresponding point in the depth point cloud to obtain three-dimensional coordinates, converting the second sample face key point coordinates into the three-dimensional key point coordinates L{l_i(x_i, y_i, z_i)} of the sample face image, from which a three-dimensional face image can be obtained.
After the three-dimensional key point coordinates of the sample face image are obtained, all key points need to be moved into a coordinate system with a preset point as the origin, and normalized based on a preset maximum value, so as to obtain a more uniform and standard face model. Normalization can be understood as follows: given the preset maximum value and the origin coordinates, the coordinate components in the coordinate system are determined, so that when the coordinates of a sample face image are moved into the coordinate system, its three-dimensional key point coordinates follow the coordinate logic of that coordinate system.
For example, as shown in fig. 2D, which is a schematic diagram of face key point coordinate axes, the key point at the lower-left corner of the bounding rectangle of the three-dimensional key points may be taken as the origin, and the coordinates of all three-dimensional key points may then be normalized using the face height as the maximum value (the height of the bounding rectangle is generally greater than the face width); that is, the coordinate values are updated with reference to the origin, so that each second sample face image expresses its coordinate values in this coordinate system. Then the three-dimensional key point coordinates of all the second sample face images are grouped, for example all three-dimensional key point coordinates corresponding to each label in fig. 2D form one group, and the coordinates of each group are averaged in this coordinate system to obtain the corresponding average key point coordinates of each group; all the resulting average key point coordinates are taken as the three-dimensional key point coordinates of the preset standard face model. For a group of three-dimensional key point coordinates, if the first-dimension (x-axis) value of the i-th sample is recorded as x_i, then the first-dimension (x-axis) value of the group's average key point coordinates is calculated as:

$$\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i$$

Similarly, the second- and third-dimension values of the group's average key point coordinates,

$$\bar{y} = \frac{1}{n}\sum_{i=1}^{n} y_i, \qquad \bar{z} = \frac{1}{n}\sum_{i=1}^{n} z_i,$$

can be determined, thereby obtaining the average key point coordinate corresponding to this group of three-dimensional key point coordinates.
For example, as shown in fig. 2D, in the embodiments of the present application key points 0 to 27 are detected in each sample face image, and the coordinates of these key points may differ from image to image. Specifically, for a plurality of sample face images, the key points with the same number form one group. For example, with 20 sample face images, the key points corresponding to the first inner canthus, number 19, form one group; recording the first-dimension (x-axis) values of the number-19 three-dimensional key point coordinates across the 20 sample face images as x_1, x_2, ..., x_20, the first-dimension (x-axis) value of the group's average key point coordinates is calculated as:

$$\bar{x} = \frac{1}{20}\sum_{i=1}^{20} x_i$$

Similarly, the second- and third-dimension values $\bar{y}$ and $\bar{z}$ of the group's average key point coordinates can be determined, thereby obtaining the average key point coordinate corresponding to this group, i.e. the key point coordinate of the first inner canthus, number 19. In this way the key point coordinates for every number may be obtained.
It should be noted that, since the same depth point cloud may be used for the mapping, the third-dimensional coordinates of key points with the same number are identical across the plurality of second sample face images, so the average of the third-dimensional coordinates need not be calculated, which reduces the amount of data processing.
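The normalization, grouping and averaging steps above can be sketched as follows. This is a minimal illustration; the function names and the toy sample values are assumptions, not taken from the embodiment.

```python
# Sketch: average grouped 3D key points from several sample faces to obtain
# standard-model coordinates after translating to a common origin and
# normalizing by a maximum value (e.g. the face height).

def normalize(points, origin, max_val):
    """Translate each (x, y, z) point to `origin` and divide by `max_val`."""
    ox, oy, oz = origin
    return [((x - ox) / max_val, (y - oy) / max_val, (z - oz) / max_val)
            for x, y, z in points]

def average_keypoints(samples):
    """samples: list of per-image key point lists; returns per-number means."""
    n = len(samples)
    num_kp = len(samples[0])
    return [tuple(sum(img[k][d] for img in samples) / n for d in range(3))
            for k in range(num_kp)]

# Two toy "sample faces" with one key point each, already in a common frame.
s1 = normalize([(10.0, 20.0, 5.0)], origin=(0.0, 0.0, 0.0), max_val=100.0)
s2 = normalize([(30.0, 40.0, 5.0)], origin=(0.0, 0.0, 0.0), max_val=100.0)
std = average_keypoints([s1, s2])
```

Note that the z-components here are equal across samples, matching the remark above that the third-dimensional coordinate need not actually be averaged when a shared depth point cloud is used.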
In an optional implementation manner, the step 102 may specifically include:
selecting N groups of corresponding coordinates from the two-dimensional key point coordinates and the first key point coordinates of the preset standard face model, wherein N is less than the number of the first key point coordinates, and one group of coordinates consists of the two-dimensional key point coordinates and the first key point coordinates corresponding to one key point;
calculating and obtaining a transformation matrix based on the N groups of coordinates; and performing affine transformation on the first key point coordinates by using the transformation matrix to obtain second key point coordinates of the face image.
If the transformation matrix is denoted as A, the affine transformation can be expressed as x' = A · x. Specifically, the affine transformation formula is:

$$\begin{bmatrix} x' \\ y' \end{bmatrix} = \begin{bmatrix} a_1 & a_2 \\ a_3 & a_4 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} + \begin{bmatrix} t_x \\ t_y \end{bmatrix}$$

where (x', y') are the two-dimensional key point coordinates of the preset standard face model and (x, y) are the detected first key point coordinates. This affine transformation has six degrees of freedom (the unknowns a_1, a_2, a_3, a_4, t_x, t_y), so the transformation matrix can be solved from n (n ≥ 3) groups of known coordinate correspondences:

$$\begin{bmatrix} x'_1 & x'_2 & \cdots & x'_n \\ y'_1 & y'_2 & \cdots & y'_n \end{bmatrix} = \begin{bmatrix} a_1 & a_2 & t_x \\ a_3 & a_4 & t_y \end{bmatrix} \begin{bmatrix} x_1 & x_2 & \cdots & x_n \\ y_1 & y_2 & \cdots & y_n \\ 1 & 1 & \cdots & 1 \end{bmatrix}$$

That is, in the embodiments of the present application, N groups of corresponding coordinates may be selected from the two-dimensional key point coordinates of the preset standard face model and the first key point coordinates, where N is less than the number of first key point coordinates and at least 3, and the N groups of coordinates are substituted into the above formula to compute the transformation matrix A (the six parameters a_1, a_2, a_3, a_4, t_x, t_y). If more than three groups of coordinates are used, every three groups yield one set of transformation-matrix parameters, and the final transformation matrix A is determined by averaging the values of each parameter. Once A is known, the same formula is applied: all first key point coordinates (x, y) are transformed by A to obtain the affine-transformed second key point coordinates (x'', y''):

$$\begin{bmatrix} x'' \\ y'' \end{bmatrix} = \begin{bmatrix} a_1 & a_2 \\ a_3 & a_4 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} + \begin{bmatrix} t_x \\ t_y \end{bmatrix}$$
the coordinates after the affine transformation can also be understood as the coordinates of key points of the front face, and the second key points after the affine transformation are basically consistent with the distribution condition of the key points of the preset standard face model, so that the alignment between the second key points and the preset standard face model is realized. Since the angle of the face is not always parallel to the camera when the image is captured, the face image may show a side face condition, and if the face part size calculation is performed in this condition, the measured result may be small or distorted. Through the alignment, some non-standard faces (such as side angles) can be presented as front faces like a preset standard face model, so that the measurement of the size of the subsequent face can be more accurately carried out.
103. And acquiring the reference sizes corresponding to any two key points in the preset standard face model, wherein the reference sizes are used for expressing the standard distance of the any two key points on the two-dimensional plane.
The reference size is a reference size for scaling the face image, and indicates a standard distance between the two key points on the two-dimensional plane. Any two key points can be any two face key points in a preset standard face model. Specifically, a reference size may be selected, and the measurement statistic of the reference size in the crowd is required to have a small variation range, that is, the size of the measurement value of the part of all people is only changed in a small range, so as to improve the reference accuracy for scaling the face image. For example, the reference size may be the width of the eyes, i.e. the distance between key points of the corners of the eyes in the eyes.
In an alternative embodiment, the step 103 may include:
acquiring key point coordinates of a plurality of first sample face images;
acquiring key point coordinates corresponding to the any two key points from the key point coordinates of each first sample face image, and calculating the distance value of the any two key points based on the corresponding key point coordinates, so as to obtain a plurality of distance values;
and determining the reference dimension according to the plurality of distance values.
The selection of the reference size can be determined by statistics over a plurality of sample face images. Specifically, after key point detection is performed on a plurality of first sample face images to obtain their key point coordinates, two reference key points are selected from the key point coordinates of each first sample face image, each reference key point being located at the same position in the different first sample face images (for example, the key point with the same label in the face shown in fig. 2A), and the distance value between the two reference key points in each first sample face image is obtained, specifically by calculation from the key point coordinates corresponding to the reference key points. After a plurality of distance values are obtained through the above steps, the reference size can be determined. Because the reference size is determined by counting the key point distance values of many sample face images, it draws on a large number of samples, so the determined reference size has reference value and suits different face images better.
Further optionally, the determining the reference size according to the plurality of distance values includes:
acquiring the median of the plurality of distance values as the reference size; alternatively, the first and second electrodes may be,
the maximum value and the minimum value of the plurality of distance values are acquired, and the average value of the maximum value and the minimum value is used as the reference size.
The error can be reduced by determining the reference size to which the image is scaled by selecting a median or average value. In the embodiment of the present application, a suitable reference size may be selected in other manners, which is not limited herein.
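The median and mid-range choices above can be sketched as follows. The distance values are illustrative, and `statistics.median` from the Python standard library is used for the median.

```python
# Sketch: derive a reference size from key point distances measured across
# sample images, as either the median or the mean of the min and max.
import math
from statistics import median

def keypoint_distance(p1, p2):
    """Euclidean distance between two 2D key points."""
    return math.hypot(p1[0] - p2[0], p1[1] - p2[1])

# Illustrative intra-ocular widths (mm) from several sample faces.
distances = [33.0, 34.5, 35.0, 36.0, 38.0]

ref_median = median(distances)                        # middle of the sorted values
ref_midrange = (min(distances) + max(distances)) / 2  # mean of extremes
```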
104. And carrying out scaling processing on the face image based on the reference size and the second key point coordinate of the face image to obtain a third key point coordinate of the face image.
Specifically, under the condition that a two-dimensional key point coordinate corresponding to any one of the two arbitrary key points is aligned with a second key point coordinate corresponding to the one arbitrary key point, the face image may be scaled according to the reference size, so that a distance between the two arbitrary key points in the face image matches the reference size, so as to obtain a third key point coordinate of the face image.
Specifically, the two-dimensional key point coordinate corresponding to any key point b1 in any two of the key points (b1, b2) may be aligned with the second key point coordinate corresponding to the key point b1, and then the distance between any two key points b1 and b2 may be obtained through calculation; then, a scaling ratio can be obtained by calculating the distance between the two key points b1 and b2 and the reference size, which can be the ratio m of the reference size and the distance between the two key points b1 and b 2; and performing corresponding scaling processing on the face image according to the scaling ratio, namely, magnifying the face image by m times (actually reducing if m is less than 1), so that the distances of the key points b1 and b2 in the face image can be consistent with the reference size, and then obtaining the third key point coordinate of the face image.
Specifically, referring to another schematic diagram of face key point coordinate axes shown in fig. 2E: the intra-ocular width is used as the reference size, and the face image in fig. 2E is obtained by scaling the original face image as a whole against this reference, so that the intra-ocular width of the face image equals the intra-ocular width of the preset standard face model. Taking the two inner-canthus key points as b1 and b2, the scaling process in this case may be as follows.

First, move all key points into a coordinate system with key point b1 as the origin:

$$l'_i = l_i - l_{b_1}$$

Next, calculate the distance between the two inner-canthus key points:

$$d = \lVert l'_{b_2} - l'_{b_1} \rVert$$

Then calculate the scaling ratio against the reference size $d_{std}$:

$$m = \frac{d_{std}}{d}$$

The newly obtained third key point coordinates of the face image are then:

$$l''_i = m \cdot l'_i, \quad \text{i.e.} \quad x''_i = m \cdot x'_i, \; y''_i = m \cdot y'_i$$
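The scaling steps can be sketched as follows. The key point indices and coordinate values are illustrative assumptions, not values from the embodiment.

```python
# Sketch: scale the face key points so that the distance between two
# reference key points (b1, b2) matches the reference size.
import math

def scale_to_reference(points, b1, b2, ref_size):
    """points: {index: (x, y)}, already translated so that b1 is the origin."""
    d = math.hypot(points[b2][0] - points[b1][0],
                   points[b2][1] - points[b1][1])
    m = ref_size / d  # scaling ratio; m < 1 shrinks the image
    return {i: (m * x, m * y) for i, (x, y) in points.items()}

# Illustrative key points: 19/22 as the two inner canthi, 4 under the chin.
pts = {19: (0.0, 0.0), 22: (40.0, 0.0), 4: (10.0, -60.0)}
scaled = scale_to_reference(pts, b1=19, b2=22, ref_size=35.0)
```

After scaling, the distance between the two reference key points equals the reference size, while every other key point is scaled by the same ratio.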
105. And combining the third-dimensional key point coordinates of the preset standard face model with the third key point coordinates to obtain the three-dimensional coordinates of the face image, wherein the third-dimensional key point coordinates are the one-dimensional key point coordinates, other than the two-dimensional key point coordinates, among the three-dimensional key point coordinates of the preset standard face model, and the three-dimensional key point coordinates of the preset standard face model are used for indicating the positions of the key points in the preset standard face model in three-dimensional space.
In order to measure the size of each part of the three-dimensional face, the third-dimensional key point coordinates of the preset standard face model can be combined with the third key point coordinates, so that the three-dimensional coordinates of the face image are obtained; this can be understood as mapping the key points of the face image to positions in three-dimensional space.
106. And calculating according to the three-dimensional coordinates of the face image to obtain a face measurement result, wherein the face measurement result comprises the distance between two coordinate points or the angle formed by connecting at least three coordinate points in the three-dimensional coordinates.
After the three-dimensional coordinates of the face image are obtained, measurement calculation processing may be performed, which may specifically include distance measurement, angle measurement, and the like for different parts of the face, for example, the distance between the internal corners of two eyes or the angle of the nose tip may be calculated, which is not limited in this application.
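Combining the third-dimensional coordinate and then computing a distance or an angle can be sketched as follows, with illustrative coordinates and function names.

```python
# Sketch: attach the standard model's z coordinate to the scaled 2D key
# points, then measure a distance and an angle on the 3D coordinates.
import math

def to_3d(points_2d, std_z):
    """Combine scaled (x, y) key points with preset standard z values."""
    return {i: (x, y, std_z[i]) for i, (x, y) in points_2d.items()}

def distance(a, b):
    """Euclidean distance between two 3D coordinate points."""
    return math.dist(a, b)

def angle_at(vertex, p1, p2):
    """Angle in degrees at `vertex`, formed by the rays to p1 and p2."""
    v1 = [p1[k] - vertex[k] for k in range(3)]
    v2 = [p2[k] - vertex[k] for k in range(3)]
    dot = sum(a * b for a, b in zip(v1, v2))
    norm = (math.sqrt(sum(a * a for a in v1))
          * math.sqrt(sum(a * a for a in v2)))
    return math.degrees(math.acos(dot / norm))

pts3d = to_3d({0: (0.0, 0.0), 1: (3.0, 4.0)}, std_z={0: 0.0, 1: 12.0})
d = distance(pts3d[0], pts3d[1])
```

For example, `distance` could measure the gap between the two inner eye corners and `angle_at` the angle at the nose tip, as in the measurements described above.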
Many beauty and facial-analysis applications require analyzing a face captured by a camera, including specific measurement information such as the distances and proportions between the five sense organs, eye shape, eyebrow shape and face shape. However, an image captured by the camera of an ordinary terminal device contains only two-dimensional information, so an ordinary algorithm cannot measure or estimate the real size information of the face from the image, because the three-dimensional information of the stereoscopic face is missing.
The method comprises: acquiring a face image and performing key point detection on it to obtain first key point coordinates of the face image, the first key point coordinates being two-dimensional coordinates; performing affine transformation on the first key point coordinates according to the two-dimensional key point coordinates of a preset standard face model to obtain second key point coordinates of the face image, wherein the preset standard face model is a three-dimensional face model and its two-dimensional key point coordinates indicate the positions of the key points of the preset standard face model on a two-dimensional plane; acquiring the reference size corresponding to any two key points in the preset standard face model, the reference size representing the standard distance between the two key points on the two-dimensional plane; scaling the face image based on the reference size and the second key point coordinates to obtain third key point coordinates of the face image; combining the third-dimensional key point coordinates of the preset standard face model with the third key point coordinates to obtain the three-dimensional coordinates of the face image, wherein the third-dimensional key point coordinates are the one-dimensional key point coordinates, other than the two-dimensional key point coordinates, among the three-dimensional key point coordinates of the preset standard face model, which indicate the positions of the key points of the preset standard face model in three-dimensional space; and finally calculating a face measurement result from the three-dimensional coordinates of the face image, the face measurement result including the distance between two coordinate points in the three-dimensional coordinates or the angle formed by connecting at least three coordinate points. In this way, affine transformation can be performed on the acquired face key point coordinates based on the preset standard face model, and the face image can be scaled using the selected reference size to adjust the face pose and proportion, so that the processed face image is aligned with the preset standard face model, i.e. the pose is frontal and the scale is uniform, making the measurement of each face part more accurate. Moreover, by combining the third-dimensional key point coordinates of the preset standard face model with the third key point coordinates, the key points of the face image are mapped into three-dimensional space, and calculation from the three-dimensional coordinates of the face image measures the real size of each face part.
Referring to fig. 3, fig. 3 is a schematic flow chart of another human face measuring method according to an embodiment of the present application. As shown in fig. 3, the method may specifically include:
301. the method comprises the steps of obtaining a face image, and carrying out key point detection on the face image to obtain a first key point coordinate of the face image, wherein the first key point coordinate is a two-dimensional coordinate.
302. Performing affine transformation on the first key point coordinates of the face image according to two-dimensional key point coordinates of a preset standard face model to obtain second key point coordinates of the face image, wherein the preset standard face model is a three-dimensional face model, and the two-dimensional key point coordinates of the preset standard face model are used for indicating the position of a key point in the preset standard face model on a two-dimensional plane.
The execution subject of the embodiments of the present application may be a face measurement apparatus, which may be an electronic device. In specific implementations, the electronic device may be a terminal, also referred to as a terminal device, including but not limited to portable devices such as a mobile phone, a laptop computer, or a tablet computer having a touch-sensitive surface (e.g., a touch-screen display and/or a touch pad). It should also be understood that in some embodiments, the device is not a portable communication device but a desktop computer having a touch-sensitive surface (e.g., a touch-screen display and/or a touch pad).
Step 301 and step 302 may refer to the detailed description of step 101 and step 102 in the embodiment shown in fig. 1, and are not described herein again.
303. Acquiring reference sizes corresponding to any two key points in the preset standard face model, wherein the reference sizes are used for expressing the standard distance of the any two key points on the two-dimensional plane; the number of the reference sizes is multiple, the multiple reference sizes comprise reference sizes corresponding to multiple groups of key points respectively and/or multiple different reference sizes corresponding to one group of key points, one group of key points comprises any two key points, and at least one key point of any two key points corresponding to different groups is different.
The reference size may refer to the description of the reference size in the embodiment shown in fig. 1, and is not described herein again. On this basis, the present embodiment can perform processing with a plurality of reference sizes.
In an implementation manner, the multiple reference sizes include reference sizes corresponding to multiple groups of key points, and the determination of each reference size may refer to specific description in step 103 in the embodiment shown in fig. 1, which is not described herein again.
In another embodiment, the plurality of reference sizes include a plurality of different reference sizes corresponding to one group of key points. It can be understood that, for the same part of the face, the distance between the two key points corresponding to the reference size may take multiple values, and the reference size may take any number of values from the size statistics of that face part. For example, if the reference size is chosen as the intra-ocular width, the statistics of this part for adult men lie in the interval [33, 38] mm, with a median of 35 mm and a variance of 1.35 mm. In the embodiment shown in fig. 1, the true value of this part for the measured person may be assumed to be 35 mm (i.e. the preset size of this part in the standard face model), which may also serve as the reference size; in the embodiment of the present application, two values, the minimum of 33 mm and the maximum of 38 mm, may each be taken as a reference size for image processing.
In the embodiment of the application, the face image is zoomed according to a plurality of different reference sizes to obtain a plurality of different groups of third key point coordinates, so as to obtain a plurality of face measurement results, and then the plurality of face measurement results are averaged to obtain a final target face measurement result, so that errors can be reduced, and the accuracy of the measurement result is further improved. The related calculation methods can refer to the related detailed description in the embodiment shown in fig. 1, and are not described herein again.
304. And respectively carrying out scaling processing on the face image based on the plurality of reference sizes and the second key point coordinates of the face image to obtain a plurality of groups of third key point coordinates of the face image.
305. And combining third-dimensional key point coordinates of the preset standard face model with the multiple groups of third key point coordinates respectively to obtain multiple groups of three-dimensional coordinates of the face image, wherein the third-dimensional key point coordinates are one-dimensional key point coordinates in the three-dimensional key point coordinates of the preset standard face model except the two-dimensional key point coordinates, and the three-dimensional key point coordinates of the preset standard face model are used for indicating the position of the key point in the preset standard face model in a three-dimensional space.
306. And respectively calculating according to a plurality of groups of three-dimensional coordinates of the face image to obtain a plurality of face measurement results, wherein the face measurement results comprise the distance between two coordinate points in the three-dimensional coordinates or the angle formed by connecting at least three coordinate points.
For the above steps 304-306, wherein: a step of scaling the face image based on any one of the reference sizes and the second key point coordinates of the face image; combining the third-dimensional key point coordinates of the preset standard face model with any group of third key point coordinates; and a step of performing face measurement calculation according to any one set of three-dimensional coordinates, which may refer to specific descriptions in step 104, step 105, and step 106 in the embodiment shown in fig. 1, and will not be described herein again.
307. And averaging the plurality of face measurement results to obtain a target face measurement result.
Selecting a plurality of reference sizes, repeating the steps 104 to 106 shown in fig. 1, and finally averaging the obtained face measurement results under each reference size to obtain a final target face measurement result, so that errors can be reduced, and the measurement accuracy can be ensured.
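The repeat-and-average procedure can be sketched as follows. Here `measure_with_ref` is a hypothetical stand-in for the full scale-and-measure pipeline of steps 304 to 306; its linear-scaling behavior is an illustrative assumption.

```python
# Sketch: repeat the scale-and-measure pipeline for several reference sizes
# and average the per-reference results into a target measurement.

def measure_with_ref(base_distance, base_ref, ref_size):
    # Stand-in for steps 304-306: a measured size scales linearly with the
    # ratio of the chosen reference size to the baseline reference size.
    return base_distance * (ref_size / base_ref)

def averaged_measurement(ref_sizes, base_distance=35.0, base_ref=35.0):
    """Average the face measurement results obtained under each reference size."""
    results = [measure_with_ref(base_distance, base_ref, r) for r in ref_sizes]
    return sum(results) / len(results)

# e.g. the minimum and maximum intra-ocular widths from the statistics above.
target = averaged_measurement([33.0, 38.0])
```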
The face measurement method in the embodiments of the present application can acquire a face image and perform key point detection on it to obtain first key point coordinates of the face image, the first key point coordinates being two-dimensional coordinates; perform affine transformation on the first key point coordinates according to the two-dimensional key point coordinates of a preset standard face model to obtain second key point coordinates of the face image, wherein the preset standard face model is a three-dimensional face model and its two-dimensional key point coordinates indicate the positions of the key points of the preset standard face model on a two-dimensional plane; and acquire the reference sizes corresponding to any two key points in the preset standard face model, a reference size representing the standard distance between the two key points on the two-dimensional plane. The number of reference sizes is multiple; the multiple reference sizes include reference sizes corresponding to multiple groups of key points respectively and/or multiple different reference sizes corresponding to one group of key points, where one group of key points consists of any two key points and at least one key point differs between different groups. The face image is scaled based on the multiple reference sizes and the second key point coordinates respectively to obtain multiple groups of third key point coordinates of the face image; the third-dimensional key point coordinates of the preset standard face model are combined with the multiple groups of third key point coordinates respectively to obtain multiple groups of three-dimensional coordinates of the face image, wherein the third-dimensional key point coordinates are the one-dimensional key point coordinates, other than the two-dimensional key point coordinates, among the three-dimensional key point coordinates of the preset standard face model, which indicate the positions of the key points of the preset standard face model in three-dimensional space. Multiple face measurement results are then calculated from the multiple groups of three-dimensional coordinates respectively, each face measurement result including the distance between two coordinate points in the three-dimensional coordinates or the angle formed by connecting at least three coordinate points, and the multiple face measurement results are averaged to obtain the target face measurement result. Adjusting the proportion of the face image and the positions of the key points with multiple reference sizes makes the face image more standard, and corresponding calculation from the three-dimensional coordinates of the face image then yields the real sizes of the face parts; obtaining multiple face measurement results under multiple different reference sizes and averaging them makes the resulting face measurement more accurate.
Based on the description of the embodiment of the face measurement method, the embodiment of the application also discloses a face measurement device. Referring to fig. 4, the face measurement apparatus 400 includes:
an obtaining module 410, configured to obtain a face image, perform key point detection on the face image, and obtain a first key point coordinate of the face image, where the first key point coordinate is a two-dimensional coordinate;
a transformation module 420, configured to perform affine transformation on a first key point coordinate of the face image according to a two-dimensional key point coordinate of a preset standard face model to obtain a second key point coordinate of the face image, where the preset standard face model is a three-dimensional face model, and the two-dimensional key point coordinate of the preset standard face model is used to indicate a position of a key point in the preset standard face model on a two-dimensional plane;
a scaling module 430, configured to:
acquire reference sizes corresponding to any two key points in the preset standard face model, where a reference size is used for representing the standard distance between the two key points on the two-dimensional plane;
based on the reference size and the second key point coordinates of the face image, carrying out scaling processing on the face image to obtain third key point coordinates of the face image;
a calculation module 440, configured to:
combining a third key point coordinate of the preset standard face model with the third key point coordinate to obtain a three-dimensional coordinate of the face image, wherein the third key point coordinate is a one-dimensional key point coordinate except the two-dimensional key point coordinate in the three-dimensional key point coordinate of the preset standard face model, and the three-dimensional key point coordinate of the preset standard face model is used for indicating the position of a key point in the preset standard face model in a three-dimensional space;
and calculating according to the three-dimensional coordinates of the face image to obtain a face measurement result, wherein the face measurement result comprises the distance between two coordinate points or the angle formed by connecting at least three coordinate points in the three-dimensional coordinates.
According to an embodiment of the present application, each step involved in the methods shown in fig. 1 and fig. 3 may be performed by each module in the face measurement apparatus 400 shown in fig. 4, and is not described herein again.
The face measurement device 400 in the embodiment of the application may acquire a face image and perform key point detection on it to obtain first key point coordinates of the face image, where the first key point coordinates are two-dimensional coordinates; perform affine transformation on the first key point coordinates according to the two-dimensional key point coordinates of a preset standard face model to obtain second key point coordinates of the face image, where the preset standard face model is a three-dimensional face model and its two-dimensional key point coordinates indicate the positions of the model's key points on a two-dimensional plane; acquire the reference size corresponding to any two key points in the preset standard face model, the reference size representing the standard distance between those two key points on the two-dimensional plane; scale the face image based on the reference size and the second key point coordinates to obtain third key point coordinates of the face image; combine the third key point coordinate of the preset standard face model with the third key point coordinates of the face image to obtain the three-dimensional coordinates of the face image, where the third key point coordinate of the preset standard face model is the one-dimensional key point coordinate remaining in its three-dimensional key point coordinates after the two-dimensional key point coordinates are removed, and the three-dimensional key point coordinates indicate the positions of the model's key points in three-dimensional space; and finally calculate a face measurement result from the three-dimensional coordinates of the face image, the face measurement result comprising the distance between two coordinate points in the three-dimensional coordinates or the angle formed by connecting at least three coordinate points.
In this way, affine transformation can be performed on the acquired face key point coordinates based on the preset standard face model, and the face image can be scaled with the selected reference size to adjust the face pose and proportion, so that the processed face image is aligned with the preset standard face model, that is, the pose is a frontal face and the scale is uniform, making the measurement of each part of the face more accurate; and by combining the third key point coordinate of the preset standard face model with the third key point coordinates of the face image, the key points of the face image are mapped into three-dimensional space, so that calculation on the three-dimensional coordinates of the face image measures the real size of each part of the face.
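The scaling and depth-combination steps described above can be sketched as follows. This is a minimal illustration under two stated assumptions not spelled out here: scaling is performed about one aligned anchor key point (claim 5), and the standard model supplies the depth (z) value for every key point. All names are hypothetical.

```python
import numpy as np

def scale_to_reference(points2d, idx_a, idx_b, reference_size):
    """Scale 2D key points so the distance between key points idx_a and idx_b
    equals reference_size, keeping key point idx_a fixed (the aligned anchor)."""
    pts = np.asarray(points2d, dtype=float)
    current = np.linalg.norm(pts[idx_b] - pts[idx_a])
    anchor = pts[idx_a]
    # Uniform scaling about the anchor: the anchor stays aligned with the model.
    return anchor + (pts - anchor) * (reference_size / current)

def lift_to_3d(points2d, model_depths):
    """Combine scaled 2D key points with the standard model's depth values
    (its 'third key point coordinate') to form 3D key point coordinates."""
    pts = np.asarray(points2d, dtype=float)
    z = np.asarray(model_depths, dtype=float).reshape(-1, 1)
    return np.hstack([pts, z])  # (N, 3) three-dimensional coordinates

pts = np.array([[0.0, 0.0], [2.0, 0.0], [1.0, 1.0]])
scaled = scale_to_reference(pts, idx_a=0, idx_b=1, reference_size=4.0)
coords3d = lift_to_3d(scaled, model_depths=[0.5, 0.5, 1.2])
print(scaled[1])       # the 0->1 distance is now 4.0
print(coords3d.shape)  # (3, 3)
```

Any later distance or angle computed on `coords3d` is then expressed in the same units as the reference size, which is what lets the method report real-world face dimensions.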
Based on the description of the method embodiment and the device embodiment, the embodiment of the application further provides an electronic device. Referring to fig. 5, the electronic device 500 includes at least a processor 501, an input device 502, an output device 503, and a computer storage medium 504. The processor 501, the input device 502, the output device 503, and the computer storage medium 504 within the electronic device may be connected by a bus or other means.
The computer storage medium 504 may be stored in the memory of the electronic device and is used for storing a computer program comprising program instructions; the processor 501 is used for executing the program instructions stored in the computer storage medium 504. The processor 501 (or CPU, Central Processing Unit) is the computing core and control core of the electronic device, and is adapted to implement one or more instructions, in particular to load and execute the one or more instructions so as to implement the corresponding method flow or function. In one embodiment, the processor 501 described above in the embodiments of the present application may be used to perform a series of processes, including the methods in the embodiments shown in fig. 1 and fig. 3.
An embodiment of the present application further provides a computer storage medium (memory), which is a storage device in an electronic device and is used to store programs and data. It is understood that the computer storage medium here may include both a built-in storage medium of the electronic device and an extended storage medium supported by the electronic device. The computer storage medium provides storage space that stores the operating system of the electronic device. Also stored in this storage space are one or more instructions, which may be one or more computer programs (including program code), suitable for loading and execution by the processor 501. The computer storage medium may be a high-speed RAM, or a non-volatile memory, such as at least one disk memory; optionally, it may also be at least one computer storage medium located remotely from the aforementioned processor.
In one embodiment, one or more instructions stored in a computer storage medium may be loaded and executed by processor 501 to perform the corresponding steps in the above embodiments; in particular implementations, one or more instructions in the computer storage medium may be loaded by processor 501 and executed to perform any step of the method in fig. 1 and/or fig. 3, which is not described herein again.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described apparatuses and modules may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the division of the module is only one logical division, and other divisions may be possible in actual implementation, for example, a plurality of modules or components may be combined or integrated into another system, or some features may be omitted, or not performed. The shown or discussed mutual coupling, direct coupling or communication connection may be an indirect coupling or communication connection of devices or modules through some interfaces, and may be in an electrical, mechanical or other form.
Modules described as separate parts may or may not be physically separate, and parts displayed as modules may or may not be physical modules, may be located in one place, or may be distributed on a plurality of network modules. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, the embodiments may be realized in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the procedures or functions according to the embodiments of the present application are wholly or partially generated. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored on, or transmitted over, a computer-readable storage medium. The computer instructions may be transmitted from one website, computer, server, or data center to another by wire (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wirelessly (e.g., infrared, radio, microwave). The computer-readable storage medium can be any available medium that can be accessed by a computer, or a data storage device, such as a server or data center, that integrates one or more available media. The available medium may be a read-only memory (ROM), a Random Access Memory (RAM), a magnetic medium (such as a floppy disk, hard disk, magnetic tape, or magnetic disk), an optical medium (such as a Digital Versatile Disc (DVD)), or a semiconductor medium (such as a Solid State Disk (SSD)).

Claims (10)

1. A face measurement method, comprising:
acquiring a face image, and performing key point detection on the face image to acquire a first key point coordinate of the face image, wherein the first key point coordinate is a two-dimensional coordinate;
performing affine transformation on the first key point coordinates of the face image according to two-dimensional key point coordinates of a preset standard face model to obtain second key point coordinates of the face image, wherein the preset standard face model is a three-dimensional face model, and the two-dimensional key point coordinates of the preset standard face model are used for indicating the positions of key points in the preset standard face model on a two-dimensional plane;
acquiring reference sizes corresponding to any two key points in the preset standard face model, wherein the reference sizes are used for representing the standard distance of the any two key points on the two-dimensional plane;
based on the reference size and the second key point coordinates of the face image, carrying out scaling processing on the face image to obtain third key point coordinates of the face image;
combining a third key point coordinate of the preset standard face model with the third key point coordinate to obtain a three-dimensional coordinate of the face image, wherein the third key point coordinate is a one-dimensional key point coordinate except the two-dimensional key point coordinate in the three-dimensional key point coordinate of the preset standard face model, and the three-dimensional key point coordinate of the preset standard face model is used for indicating the position of a key point in the preset standard face model in a three-dimensional space;
and calculating according to the three-dimensional coordinates of the face image to obtain a face measurement result, wherein the face measurement result comprises the distance between two coordinate points in the three-dimensional coordinates or the angle formed by connecting at least three coordinate points.
2. The method according to claim 1, wherein the affine transformation is performed on the first key point coordinates of the face image according to the two-dimensional key point coordinates of a preset standard face model to obtain the second key point coordinates of the face image, and the method comprises:
selecting N groups of corresponding coordinates from the two-dimensional key point coordinates and the first key point coordinates of the preset standard face model, wherein N is less than the number of the first key point coordinates, and one group of coordinates consists of the two-dimensional key point coordinates and the first key point coordinates corresponding to one key point;
calculating and obtaining a transformation matrix based on the N groups of coordinates; and performing affine transformation on the first key point coordinates by using the transformation matrix to obtain second key point coordinates of the face image.
3. The method according to claim 1, wherein the obtaining of the reference dimensions corresponding to any two key points in the preset standard face model comprises:
acquiring key point coordinates of a plurality of first sample face images;
acquiring key point coordinates corresponding to any two key points from the key point coordinates of each first sample face image, and calculating the distance value of any two key points based on the key point coordinates corresponding to any two key points to obtain a plurality of distance values;
determining the reference dimension from the plurality of distance values.
4. The method of claim 3, wherein said determining the reference size from the plurality of distance values comprises:
acquiring the median of the plurality of distance values as the reference size; or,
and acquiring the maximum value and the minimum value in the plurality of distance values, and taking the average value of the maximum value and the minimum value as the reference size.
5. The method according to claim 1, wherein the scaling the face image based on the reference size and the second key point coordinates of the face image to obtain third key point coordinates of the face image comprises:
under the condition that the two-dimensional key point coordinate corresponding to any one of the two arbitrary key points is aligned with the second key point coordinate corresponding to the any one key point, scaling the face image according to the reference size so as to enable the distance between the two arbitrary key points in the face image to be consistent with the reference size, and thus obtaining a third key point coordinate of the face image.
6. The method according to claim 4, wherein the number of the reference sizes is plural, the plural reference sizes include reference sizes corresponding to plural groups of key points respectively and/or plural different reference sizes corresponding to one group of key points, one group of key points is composed of the two arbitrary key points, and at least one of the two arbitrary key points corresponding to different groups is different;
the face measurement result comprises a plurality of face measurement results corresponding to the plurality of reference sizes;
after the calculation is performed according to the three-dimensional coordinates of the face image and a face measurement result is obtained, the method further comprises the following steps:
and averaging the plurality of face measurement results to obtain a target face measurement result.
7. The method of any of claims 1-6, further comprising:
acquiring a second sample face image; performing key point detection on the second sample face image to obtain sample face key points;
mapping the sample face key points to a depth point cloud to obtain the depth value of each sample face key point, so as to obtain the three-dimensional key point coordinates of the second sample face image;
moving the three-dimensional key point coordinates of the second sample face image to a coordinate system with a preset point as the origin, and normalizing the three-dimensional key point coordinates of the second sample face image based on a preset maximum value;
grouping the three-dimensional key point coordinates of the second sample face images; and averaging each group of three-dimensional key point coordinates in the coordinate system respectively to obtain average key point coordinates, which are used as the three-dimensional key point coordinates of the preset standard face model.
8. A face measurement device, comprising:
the system comprises an acquisition module, a processing module and a display module, wherein the acquisition module is used for acquiring a face image, and performing key point detection on the face image to acquire a first key point coordinate of the face image, and the first key point coordinate is a two-dimensional coordinate;
the transformation module is used for carrying out affine transformation on the first key point coordinates of the face image according to the two-dimensional key point coordinates of a preset standard face model to obtain second key point coordinates of the face image, wherein the preset standard face model is a three-dimensional face model, and the two-dimensional key point coordinates of the preset standard face model are used for indicating the positions of key points in the preset standard face model on a two-dimensional plane;
a scaling module to:
acquiring reference sizes corresponding to any two key points in the preset standard face model, wherein the reference sizes are used for representing the standard distance of the any two key points on the two-dimensional plane;
based on the reference size and the second key point coordinates of the face image, carrying out scaling processing on the face image to obtain third key point coordinates of the face image;
a calculation module to:
combining a third key point coordinate of the preset standard face model with the third key point coordinate to obtain a three-dimensional coordinate of the face image, wherein the third key point coordinate is a one-dimensional key point coordinate except the two-dimensional key point coordinate in the three-dimensional key point coordinate of the preset standard face model, and the three-dimensional key point coordinate of the preset standard face model is used for indicating the position of a key point in the preset standard face model in a three-dimensional space;
and calculating according to the three-dimensional coordinates of the face image to obtain a face measurement result, wherein the face measurement result comprises the distance between two coordinate points in the three-dimensional coordinates or the angle formed by connecting at least three coordinate points.
9. An electronic device, characterized in that it comprises a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to carry out the steps of the face measurement method according to any one of claims 1 to 7.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, causes the processor to carry out the steps of the face measurement method according to any one of claims 1 to 7.
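To make the measurement step of claims 1 and 6 concrete, the sketch below computes the two kinds of face measurement result the claims name, the distance between two three-dimensional coordinate points and the angle formed by connecting three points, and averages results obtained under several reference sizes. It is an illustrative sketch only; the numeric "nose-width" values are invented for the example.

```python
import math

def distance(p, q):
    """Euclidean distance between two 3D coordinate points."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def angle_at(vertex, p, q):
    """Angle in degrees at `vertex`, formed by connecting vertex->p and vertex->q."""
    v1 = [a - b for a, b in zip(p, vertex)]
    v2 = [a - b for a, b in zip(q, vertex)]
    dot = sum(a * b for a, b in zip(v1, v2))
    n1 = math.sqrt(sum(a * a for a in v1))
    n2 = math.sqrt(sum(a * a for a in v2))
    return math.degrees(math.acos(dot / (n1 * n2)))

# Claim 6: one measurement per reference size, averaged into the target result.
# Hypothetical measurements from three differently scaled copies of the image:
results = [31.8, 32.2, 32.0]
target_result = sum(results) / len(results)

print(distance((0, 0, 0), (3, 4, 0)))             # 5.0
print(angle_at((0, 0, 0), (1, 0, 0), (0, 1, 0)))  # ~90 degrees
print(target_result)
```

Because the third key point coordinates were scaled to physical reference sizes before the depth values were attached, these distances and angles correspond to real face dimensions rather than pixel quantities.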
CN202011422988.2A 2020-12-08 2020-12-08 Face measurement method, device, electronic equipment and medium Active CN112613357B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011422988.2A CN112613357B (en) 2020-12-08 2020-12-08 Face measurement method, device, electronic equipment and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011422988.2A CN112613357B (en) 2020-12-08 2020-12-08 Face measurement method, device, electronic equipment and medium

Publications (2)

Publication Number Publication Date
CN112613357A true CN112613357A (en) 2021-04-06
CN112613357B CN112613357B (en) 2024-04-09

Family

ID=75229534

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011422988.2A Active CN112613357B (en) 2020-12-08 2020-12-08 Face measurement method, device, electronic equipment and medium

Country Status (1)

Country Link
CN (1) CN112613357B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113255511A (en) * 2021-05-21 2021-08-13 北京百度网讯科技有限公司 Method, apparatus, device and storage medium for living body identification

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106503671A (en) * 2016-11-03 2017-03-15 厦门中控生物识别信息技术有限公司 The method and apparatus for determining human face posture
CN109508678A (en) * 2018-11-16 2019-03-22 广州市百果园信息技术有限公司 Training method, the detection method and device of face key point of Face datection model
CN109993021A (en) * 2017-12-29 2019-07-09 浙江宇视科技有限公司 The positive face detecting method of face, device and electronic equipment
WO2020140832A1 (en) * 2019-01-04 2020-07-09 北京达佳互联信息技术有限公司 Three-dimensional facial reconstruction method and apparatus, and electronic device and storage medium
CN111428579A (en) * 2020-03-03 2020-07-17 平安科技(深圳)有限公司 Face image acquisition method and system
WO2020224136A1 (en) * 2019-05-07 2020-11-12 厦门美图之家科技有限公司 Interface interaction method and device


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
宋顶利; 杨炳儒; 于复兴: "Three-dimensional face recognition method based on key point matching", Application Research of Computers, no. 11, 15 November 2010 (2010-11-15) *
赵兴文; 杭丽君; 宫恩来; 叶锋; 丁明旭: "Multi-angle face key point detection based on a deep learning detector", Opto-Electronic Engineering, no. 01, 15 January 2020 (2020-01-15) *
高翔; 黄法秀; 刘春平; 陈虎: "Real-time facial expression transfer method combining 3DMM and GAN", Computer Applications and Software, no. 04, 12 April 2020 (2020-04-12) *


Also Published As

Publication number Publication date
CN112613357B (en) 2024-04-09

Similar Documents

Publication Publication Date Title
US11922646B2 (en) Tracking surgical items with prediction of duplicate imaging of items
US11215845B2 (en) Method, device, and computer program for virtually adjusting a spectacle frame
US11120254B2 (en) Methods and apparatuses for determining hand three-dimensional data
EP3113114B1 (en) Image processing method and device
JP6681729B2 (en) Method for determining 3D pose of object and 3D location of landmark point of object, and system for determining 3D pose of object and 3D location of landmark of object
US20190043199A1 (en) Image Segmentation Method, Image Segmentation System and Storage Medium and Apparatus Including the Same
US8711210B2 (en) Facial recognition using a sphericity metric
JP7327140B2 (en) Image processing method and information processing apparatus
CN107194361A (en) Two-dimentional pose detection method and device
CN111062328B (en) Image processing method and device and intelligent robot
CN113298870B (en) Object posture tracking method and device, terminal equipment and storage medium
CN112613357A (en) Face measurement method, face measurement device, electronic equipment and medium
US9924865B2 (en) Apparatus and method for estimating gaze from un-calibrated eye measurement points
CN113454684A (en) Key point calibration method and device
CN113221812A (en) Training method of face key point detection model and face key point detection method
CN115031635A (en) Measuring method and device, electronic device and storage medium
CN112200002A (en) Body temperature measuring method and device, terminal equipment and storage medium
CN106667496A (en) Face data measuring method and device
CN116612224B (en) Visual management system of digital mapping
CN113409371B (en) Image registration method and related device and equipment
CN117392734B (en) Face data processing method, device, computer equipment and storage medium
CN113947799B (en) Three-dimensional face data preprocessing method and equipment
CN116758589B (en) Cattle face recognition method for processing gesture and visual angle correction
CN113158908A (en) Face recognition method and device, storage medium and electronic equipment
CN114419148A (en) Touch detection method, device, equipment and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant