CN109816628B - Face evaluation method and related product - Google Patents


Info

Publication number
CN109816628B
CN201811560246.9A (application) · CN109816628A (publication) · CN109816628B (grant)
Authority
CN
China
Prior art keywords
target
value
angle value
weight
matching
Prior art date
Legal status
Active
Application number
CN201811560246.9A
Other languages
Chinese (zh)
Other versions
CN109816628A (en)
Inventor
高增辉
万勤锋
曾佐祺
钟斌
Current Assignee
Shenzhen Intellifusion Technologies Co Ltd
Original Assignee
Shenzhen Intellifusion Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Intellifusion Technologies Co Ltd filed Critical Shenzhen Intellifusion Technologies Co Ltd
Priority to CN201811560246.9A priority Critical patent/CN109816628B/en
Publication of CN109816628A publication Critical patent/CN109816628A/en
Application granted granted Critical
Publication of CN109816628B publication Critical patent/CN109816628B/en

Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The embodiments of the present application provide a face evaluation method and related products. The method includes: acquiring a target face image and a three-dimensional angle value of the target face image, wherein the three-dimensional angle value comprises an x-angle value, a y-angle value and a z-angle value; acquiring three weights corresponding to the three-dimensional angle value, wherein the sum of a target first weight corresponding to the x-angle value, a target second weight corresponding to the y-angle value and a target third weight corresponding to the z-angle value is 1; performing a weighted operation on the x-angle value, the y-angle value, the z-angle value, the target first weight, the target second weight and the target third weight to obtain a target angle value; and determining a first target evaluation value corresponding to the target angle value according to a mapping relation between preset angle values and angle quality evaluation values. The method and apparatus can evaluate the face angle, which helps improve face unlocking efficiency.

Description

Face evaluation method and related product
Technical Field
The application relates to the technical field of image processing, in particular to a face evaluation method and a related product.
Background
With the widespread use of electronic devices (such as mobile phones and tablet computers), electronic devices support more and more applications and have increasingly powerful functions. Face unlocking has become a standard feature of electronic devices, but in practice, if the face angle is poor, face unlocking easily fails. The quality of the angle therefore directly affects face unlocking, and how to evaluate the face angle is a problem that urgently needs to be solved.
Disclosure of Invention
The embodiment of the application provides a face evaluation method and a related product, which can provide an evaluation mode for face angles and improve face unlocking efficiency.
In a first aspect, an embodiment of the present application provides a face evaluation method, including:
acquiring a target face image, and acquiring a three-dimensional angle value of the target face image, wherein the three-dimensional angle value comprises an x-angle value, a y-angle value and a z-angle value;
obtaining three weights corresponding to the three-dimensional angle value, wherein the sum of a target first weight corresponding to the x-angle value, a target second weight corresponding to the y-angle value, and a target third weight corresponding to the z-angle value is 1;
performing weighted operation according to the x-angle value, the y-angle value, the z-angle value, the target first weight, the target second weight and the target third weight to obtain a target angle value;
and determining a first target evaluation value corresponding to the target angle value according to a mapping relation between a preset angle value and an angle quality evaluation value.
Optionally, before the acquiring the target face image, the method further includes:
acquiring a target image;
carrying out target detection on the target image;
when the target image contains people, acquiring target environment parameters, and performing image segmentation on the target image to obtain the people area;
determining target shooting parameters according to a mapping relation between preset environment parameters and the shooting parameters;
determining a focus according to the character area;
and shooting the person according to the target shooting parameters and the focus to obtain the target face image, and executing the step of obtaining the target face image.
In a second aspect, an embodiment of the present application provides a face evaluation apparatus, including:
the system comprises an acquisition unit, a processing unit and a processing unit, wherein the acquisition unit is used for acquiring a target face image and acquiring a three-dimensional angle value of the target face image, and the three-dimensional angle value comprises an x angle value, a y angle value and a z angle value; obtaining three weights corresponding to the three-dimensional angle value, wherein the sum of a target first weight corresponding to the x-angle value, a target second weight corresponding to the y-angle value, and a target third weight corresponding to the z-angle value is 1;
the operation unit is used for carrying out weighted operation according to the x angle value, the y angle value, the z angle value, the target first weight, the target second weight and the target third weight to obtain a target angle value;
and the determining unit is used for determining a first target evaluation value corresponding to the target angle value according to a mapping relation between a preset angle value and an angle quality evaluation value.
In a third aspect, an embodiment of the present application provides a face evaluation apparatus, including a processor, a memory, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the processor, and the program includes instructions for executing the steps in the first aspect of the embodiment of the present application.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium, where the computer-readable storage medium stores a computer program for electronic data exchange, where the computer program enables a computer to perform some or all of the steps described in the first aspect of the embodiment of the present application.
In a fifth aspect, embodiments of the present application provide a computer program product, where the computer program product includes a non-transitory computer-readable storage medium storing a computer program, where the computer program is operable to cause a computer to perform some or all of the steps as described in the first aspect of the embodiments of the present application. The computer program product may be a software installation package.
The embodiment of the application has the following beneficial effects:
It can be seen that, with the face evaluation method and related products described in the embodiments of the present application, a target face image is obtained together with its three-dimensional angle value, which includes an x-angle value, a y-angle value and a z-angle value; three weights corresponding to the three-dimensional angle value are obtained, namely a target first weight corresponding to the x-angle value, a target second weight corresponding to the y-angle value and a target third weight corresponding to the z-angle value, whose sum is 1; a weighted operation is performed on the three angle values and the three weights to obtain a target angle value; and a first target evaluation value corresponding to the target angle value is determined according to a mapping relation between preset angle values and angle quality evaluation values. In this way, by evaluating the three-dimensional angle of the face image, a basis is provided, to a certain extent, for judging the quality of the image.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. The drawings in the following description are only some embodiments of the present application, and other drawings can be obtained by those skilled in the art from these drawings without creative effort.
Fig. 1A is a schematic flowchart of an embodiment of a face evaluation method according to an embodiment of the present application;
fig. 1B is a schematic diagram of three-dimensional angle values of a human face according to an embodiment of the present application;
fig. 2 is a schematic flowchart of another embodiment of a face evaluation method according to an embodiment of the present application;
fig. 3A is a schematic structural diagram of an embodiment of a face evaluation apparatus according to an embodiment of the present application;
fig. 3B is a schematic diagram of another structure of the face evaluation apparatus depicted in fig. 3A according to an embodiment of the present application;
fig. 3C is a schematic diagram of another structure of the face evaluation apparatus depicted in fig. 3A according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of an embodiment of a face evaluation device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first," "second," "third," and "fourth," etc. in the description and claims of this application and in the accompanying drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
The face evaluation device described in the embodiments of the present application may include a smartphone (such as an Android phone, an iOS phone, or a Windows phone), a tablet computer, a palmtop computer, a notebook computer, a mobile Internet device (MID), or a wearable device. These are examples rather than an exhaustive list; of course, the face evaluation device may also be a server.
It should be noted that the face evaluation device in the embodiment of the present application may be connected to a plurality of cameras, each camera may be used to capture a video image, and each camera may have a position mark corresponding to the camera, or may have a number corresponding to the camera. Typically, the camera may be located in a public place, such as a school, museum, intersection, pedestrian street, office building, garage, airport, hospital, subway station, bus station, supermarket, hotel, entertainment venue, and the like. After the camera shoots the video image, the video image can be stored in a memory of a system where the face evaluation device is located. The memory may store a plurality of image libraries, each image library may contain different video images of the same person, and of course, each image library may also be used to store video images of an area or video images captured by a specific camera.
Further optionally, in this embodiment of the application, each frame of video image shot by the camera corresponds to one attribute information, where the attribute information is at least one of the following: the shooting time of the video image, the position of the video image, the attribute parameters (format, size, resolution, etc.) of the video image, the number of the video image, and the character feature attributes in the video image. The character attributes in the video image may include, but are not limited to: number of people in the video image, position of people, angle value of people, age, image quality, etc.
It should be further noted that the video image acquired by each camera is usually a dynamic face image, so in the embodiments of the present application the angle information of the face image may be specified; the angle information may include, but is not limited to, the horizontal rotation angle, the pitch angle, and the tilt angle. For example, dynamic face image data may be required to have an interocular distance of not less than 30 pixels, with more than 60 pixels recommended. The horizontal rotation angle should be no more than ±30°, the pitch angle no more than ±20°, and the tilt angle no more than ±45°; it is recommended that the horizontal rotation angle be no more than ±15°, the pitch angle no more than ±10°, and the tilt angle no more than ±15°. Face images can also be screened for occlusion: in general, the main area of the face should not be blocked by accessories such as dark sunglasses, masks, or exaggerated jewelry; of course, dust on the camera may also cause the face image to be partially blocked. The picture format of the video image in the embodiments of the present application may include, but is not limited to, BMP, JPEG, JPEG2000, and PNG. The size of the video image may be 10–30 KB, and each video image may also correspond to information such as the shooting time, the unified serial number of the camera that shot it, and a link to the corresponding panoramic image (a feature correspondence file is established between the face image and the global image).
Fig. 1A is a schematic flowchart of an embodiment of a face evaluation method according to an embodiment of the present application. The face evaluation method described in this embodiment includes the following steps:
101. the method comprises the steps of obtaining a target face image and obtaining a three-dimensional angle value of the target face image, wherein the three-dimensional angle value comprises an x-angle value, a y-angle value and a z-angle value.
In the embodiments of the present application, the face evaluation device may obtain a target face image. The device may include a depth camera, or a depth camera together with a visible-light camera; specifically, the target face image may be captured by the visible-light camera while the depth camera determines the corresponding three-dimensional angle value, that is, in a three-dimensional spatial coordinate system, an x-angle value in the x direction, a y-angle value in the y direction, and a z-angle value in the z direction, so that the angular relation between the camera and the face can be described accurately. Different angles affect recognition accuracy to some extent; for example, the face angle directly affects the number or quality of the feature points. The three-dimensional angle value can be understood as the three-dimensional angle between the face and the camera: as shown in fig. 1B, an angle exists between the camera and the face in each of the x, y, and z directions.
102. And acquiring three weights corresponding to the three-dimensional angle value, wherein the sum of a target first weight corresponding to the x-angle value, a target second weight corresponding to the y-angle value and a target third weight corresponding to the z-angle value is 1.
Each component of the three-dimensional angle value may correspond to one weight; the three weights corresponding to the three-dimensional angle value may be preset or system defaults. Specifically, the face evaluation device may obtain the three weights corresponding to the three-dimensional angle value, namely a target first weight corresponding to the x-angle value, a target second weight corresponding to the y-angle value, and a target third weight corresponding to the z-angle value, where target first weight + target second weight + target third weight = 1.
Optionally, in the step 102, obtaining three weights corresponding to the three-dimensional angle value may include the following steps:
21. acquiring a target environment brightness value;
22. determining a target mapping relation group corresponding to the target environment brightness value according to a mapping relation between a preset environment brightness value and the mapping relation group, wherein each mapping relation group comprises a first mapping relation between an angle value in the x direction and a first weight value and a second mapping relation between an angle value in the y direction and a second weight value;
23. determining the target first weight corresponding to the x-angle value according to a first mapping relation in the target mapping relation group;
24. determining the target second weight corresponding to the y-angle value according to a second mapping relation in the target mapping relation group;
25. and obtaining the target third weight according to the target first weight and the target second weight.
In a specific implementation, the target environment brightness value may be obtained by an ambient light sensor, and a mapping relation between preset environment brightness values and mapping relation groups may be stored in advance. Each mapping relation group includes a first mapping relation between the angle value in the x direction and a first weight, and a second mapping relation between the angle value in the y direction and a second weight. The target mapping relation group corresponding to the target environment brightness value is determined from this mapping; the target first weight corresponding to the x-angle value is determined from the first mapping relation in that group, the target second weight corresponding to the y-angle value from the second mapping relation, and the target third weight = 1 − target first weight − target second weight. Because the range of face angles that can be recognized differs under different ambient light, weights matched to the lighting can be selected, so that the evaluation rule adapts to the environment, which helps evaluate the face angle accurately.
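Steps 21–25 can be sketched as a brightness-keyed lookup. The brightness bands and weight values below are illustrative assumptions — the patent leaves the concrete mapping relation groups unspecified:

```python
# Hypothetical sketch of steps 21-25: choosing the angle weights from the
# ambient brightness. Bands and weight values are invented for illustration.

# Each brightness band is one "mapping relation group": a first weight for
# the x-angle value and a second weight for the y-angle value.
MAPPING_GROUPS = [
    # (min_lux, max_lux, first_weight_x, second_weight_y)
    (0,      50,    0.5, 0.3),   # dim light
    (50,     500,   0.4, 0.4),
    (500, 10**6,    0.3, 0.5),   # bright light
]

def weights_for_brightness(lux):
    """Return (w_x, w_y, w_z) for an ambient brightness value in lux."""
    for lo, hi, w_x, w_y in MAPPING_GROUPS:
        if lo <= lux < hi:
            # Step 25: the third weight is whatever remains, so the sum is 1.
            return w_x, w_y, 1.0 - w_x - w_y
    raise ValueError("brightness out of range")

w_x, w_y, w_z = weights_for_brightness(120)
assert abs(w_x + w_y + w_z - 1.0) < 1e-9
```

The invariant checked by the final assertion mirrors the constraint in step 102: the three weights always sum to 1.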
103. And carrying out weighted operation according to the x angle value, the y angle value, the z angle value, the target first weight, the target second weight and the target third weight to obtain a target angle value.
Specifically, target angle value = x-angle value × target first weight + y-angle value × target second weight + z-angle value × target third weight. In this way the three-dimensional angle value is converted into a one-dimensional angle value that accurately characterizes the angle of the face.
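Steps 103 and 104 together can be sketched as follows. The band boundaries and scores in the mapping relation are illustrative assumptions; the patent only requires that such a preset mapping from angle values to quality evaluation values exists:

```python
import bisect

def target_angle_value(x, y, z, w_x, w_y, w_z):
    """Step 103: weighted combination of the three angle values."""
    assert abs(w_x + w_y + w_z - 1.0) < 1e-9  # weights must sum to 1
    return x * w_x + y * w_y + z * w_z

# Hypothetical preset mapping for step 104: smaller combined angle
# (face closer to frontal) maps to a higher quality evaluation value.
ANGLE_BOUNDS = [10, 20, 30, 45]      # degrees, band edges
ANGLE_SCORES = [95, 80, 60, 40, 10]  # evaluation value for each band

def angle_evaluation(angle):
    """Step 104: look up the evaluation value for a target angle value."""
    return ANGLE_SCORES[bisect.bisect_right(ANGLE_BOUNDS, abs(angle))]

t = target_angle_value(12.0, 8.0, 4.0, 0.4, 0.4, 0.2)
# 12*0.4 + 8*0.4 + 4*0.2 = 8.8, which falls in the first band.
```

A piecewise-constant table is only one possible realization of the "mapping relation between a preset angle value and an angle quality evaluation value"; a continuous function would serve equally well.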
104. And determining a first target evaluation value corresponding to the target angle value according to a mapping relation between a preset angle value and an angle quality evaluation value.
The face evaluation device can pre-store a mapping relation between preset angle values and angle quality evaluation values, and determine from it the first target evaluation value corresponding to the target angle value. Further, if the first target evaluation value is greater than a preset evaluation threshold, the face image can be understood to be easy to recognize and very likely to be recognized successfully.
Optionally, after the step 104, the following steps may be further included:
a1, carrying out image quality evaluation on the target face image to obtain a second target evaluation value;
a2, acquiring a first weight coefficient corresponding to the first target evaluation value and a second weight coefficient corresponding to the second target evaluation value;
a3, performing weighted operation according to the first target evaluation value, the first weight coefficient, the second target evaluation value and the second weight coefficient to obtain a third target evaluation value;
a4, determining a target matching threshold corresponding to the third target evaluation value according to a mapping relation between a preset matching threshold and the evaluation value;
a5, matching the target face image with a preset face template;
and A6, when the matching value between the target face image and the preset face template is larger than the target matching threshold value, confirming that the matching is successful.
The image quality of the target face image may be evaluated to obtain a second target evaluation value. The first target evaluation value corresponds to a first weight coefficient and the second target evaluation value to a second weight coefficient; both coefficients may be set by the user or defaulted by the system, each lies between 0 and 1, and their sum is less than or equal to 1. The third target evaluation value = first target evaluation value × first weight coefficient + second target evaluation value × second weight coefficient. The face evaluation device may store in advance a mapping relation between preset matching thresholds and evaluation values, determine from it the target matching threshold corresponding to the third target evaluation value, and match the target face image against a pre-stored preset face template: when the matching value between them is greater than the target matching threshold, matching is confirmed as successful; otherwise it is confirmed as failed. In other words, the embodiments of the present application use a dynamic matching threshold: when quality is good the matching threshold can be raised, and when quality is poor it can be lowered. The face matching process is thus adjusted dynamically for the specific environment, which improves face recognition efficiency.
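The dynamic-threshold logic of steps A2–A6 can be sketched as below. The default weight coefficients and the threshold bands are illustrative assumptions, not values from the patent:

```python
def combined_evaluation(angle_eval, quality_eval, w1=0.5, w2=0.5):
    """Step A3: weighted combination of the two evaluation values."""
    assert 0 <= w1 <= 1 and 0 <= w2 <= 1 and w1 + w2 <= 1
    return angle_eval * w1 + quality_eval * w2

def matching_threshold(evaluation):
    """Step A4 (hypothetical bands): better evaluation value implies a
    stricter (higher) matching threshold, per the dynamic-threshold idea."""
    if evaluation >= 80:
        return 0.90
    if evaluation >= 50:
        return 0.80
    return 0.70

def match_succeeds(match_value, evaluation):
    """Steps A5-A6: matching succeeds when the matching value exceeds
    the threshold chosen for this evaluation value."""
    return match_value > matching_threshold(evaluation)
```

Note the direction of the adjustment: a high-quality capture must clear a higher bar, while a poor capture is matched leniently, which is how the method trades strictness against unlock success rate.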
Alternatively, in the step a1, the image quality of the target face image is evaluated to obtain the second target evaluation value, which may be implemented as follows:
and performing image quality evaluation on the target face image by adopting at least one image quality evaluation index to obtain a second target evaluation value.
The image quality evaluation index may include at least one of the following: mean gray level, mean square error, entropy, edge preservation, signal-to-noise ratio, and so on, without limitation. It can be defined that the larger the resulting evaluation value, the better the image quality.
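A few of the listed indices have standard definitions that can be computed directly; the sketch below uses the textbook formulas (the patent does not fix any particular normalization), treating an image as a flat list of 0–255 gray levels:

```python
import math

def mean_gray(pixels):
    """Mean gray level of a flat list of 0-255 values."""
    return sum(pixels) / len(pixels)

def variance(pixels):
    """Mean squared deviation from the mean gray level."""
    m = mean_gray(pixels)
    return sum((p - m) ** 2 for p in pixels) / len(pixels)

def entropy(pixels):
    """Shannon entropy of the gray-level histogram, in bits.
    Higher entropy generally indicates richer gray-level content."""
    n = len(pixels)
    hist = {}
    for p in pixels:
        hist[p] = hist.get(p, 0) + 1
    return -sum((c / n) * math.log2(c / n) for c in hist.values())
```

How these individual indices are combined into the single second target evaluation value is left open by the document; a weighted sum over normalized indices would be one consistent choice.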
Further optionally, in the step a5, matching the target face image with a preset face template may include the following steps:
a51, carrying out angle adjustment on the preset face template according to the three-dimensional angle value to obtain a target preset face template;
a52, performing feature extraction on the target face image to obtain a first peripheral outline and a first feature point set;
a53, performing feature extraction on the target preset face template to obtain a second peripheral outline and a second feature point set;
a54, matching the first peripheral outline with the second peripheral outline to obtain a first matching value;
a55, matching the first feature point set with the second feature point set to obtain a second matching value;
a56, when the first matching value is larger than a first preset threshold value and the second matching value is larger than a second preset threshold value, taking the mean value between the first matching value and the second matching value as the matching value between the target face image and the preset face template;
and A57, when the first matching value is smaller than or equal to the first preset threshold value, or the second matching value is smaller than or equal to the second preset threshold value, confirming that the matching between the target face image and the preset face template fails.
In a specific implementation, the first preset threshold and the second preset threshold may be preset or system defaults. The face evaluation device can adjust the angle of the preset face template according to the three-dimensional angle value to obtain a target preset face template whose three-dimensional angle matches that of the target face image, so that the face to be evaluated and the face template are at a consistent angle and are matched in the same state, making the comparison fair. Feature extraction is then performed on the target face image to obtain a first peripheral contour and a first feature point set, and on the target preset face template to obtain a second peripheral contour and a second feature point set. The first peripheral contour is matched with the second peripheral contour to obtain a first matching value, and the first feature point set with the second feature point set to obtain a second matching value. When the first matching value is greater than the first preset threshold and the second matching value is greater than the second preset threshold, the mean of the two matching values is taken as the matching value between the target face image and the preset face template; when the first matching value is less than or equal to the first preset threshold, or the second matching value is less than or equal to the second preset threshold, matching between the target face image and the preset face template is confirmed to have failed.
In addition, the contour extraction algorithm may be at least one of the following: the Hough transform, the Canny operator, and so on; the feature point extraction algorithm may be at least one of the following: Harris corner detection, the Scale-Invariant Feature Transform (SIFT), and so on, without limitation.
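The decision logic of steps A54–A57, once the two matching values have been computed by whichever extraction algorithms are chosen, reduces to a small function. The default thresholds here are illustrative assumptions:

```python
def overall_match(contour_match, feature_match,
                  contour_threshold=0.7, feature_threshold=0.7):
    """Steps A54-A57: combine the contour and feature-point matching values.

    Returns the overall matching value (the mean of the two) when both
    values clear their thresholds, or None when either fails, meaning
    matching between image and template is declared failed.
    """
    if contour_match > contour_threshold and feature_match > feature_threshold:
        return (contour_match + feature_match) / 2
    return None
```

Requiring both the contour and the feature-point channel to pass before averaging makes the scheme conjunctive: a high score on one channel cannot mask a failure on the other.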
Optionally, between the above steps 101 to 102, the following steps may be further included:
b1, detecting whether the x angle value is in a first preset range, whether the y angle value is in a second preset range and whether the z angle value is in a third preset range;
and B2, when the x-angle value is in the first preset range, the y-angle value is in the second preset range, and the z-angle value is in the third preset range, executing the step of obtaining three weights corresponding to the three-dimensional angle value.
The first, second, and third preset ranges can be set by the user or defaulted by the system. In a specific implementation, evaluating the angle of the face image is meaningful only when the x-angle, y-angle, and z-angle values each fall within a certain range; therefore, the face evaluation has practical value only if the x-angle value falls within the first preset range, the y-angle value within the second preset range, and the z-angle value within the third preset range.
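Steps B1–B2 amount to a simple gate before the weight computation. The default ranges below reuse the limits quoted earlier in the document (horizontal rotation ≤ ±30°, pitch ≤ ±20°, tilt ≤ ±45°); which axis corresponds to which limit is an assumption here:

```python
def angles_in_range(x, y, z,
                    x_range=(-30, 30), y_range=(-20, 20), z_range=(-45, 45)):
    """Steps B1-B2: proceed to the weight-acquisition step only when each
    angle value lies within its preset range (inclusive). Default ranges
    are taken from the recommended limits cited earlier; the axis-to-limit
    assignment is an assumption."""
    return (x_range[0] <= x <= x_range[1]
            and y_range[0] <= y <= y_range[1]
            and z_range[0] <= z <= z_range[1])
```

When this gate returns False, the embodiment simply skips the evaluation, since an extreme pose makes the angle score meaningless for unlocking.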
Optionally, before the step 101, the following steps may be further included:
c1, acquiring a target image;
c2, carrying out target detection on the target image;
c3, when the target image contains a person, acquiring target environment parameters, and performing image segmentation on the target image to obtain the person region;
c4, determining target shooting parameters according to the mapping relation between the preset environment parameters and the shooting parameters;
c5, determining a focus according to the human figure region;
and C6, shooting the person according to the target shooting parameters and the focus to obtain the target face image.
In a specific implementation, the face evaluation device can first shoot a target image and then perform target detection on it. When a person is detected in the target image, environment parameters can be obtained through environment sensors, where the environment parameters may be at least one of the following: ambient brightness, temperature, humidity, geographical location, magnetic field interference intensity, and so on, without limitation; the environment sensor may be at least one of the following: an ambient light sensor, a temperature sensor, a humidity sensor, a position sensor, a magnetic field detection sensor, and so on, without limitation. The face evaluation device can pre-store a mapping relation between preset environment parameters and shooting parameters and determine the target shooting parameters from this mapping. The target image can also be segmented to obtain the person region, a face region can be determined from the geometry of the person region, and a geometric center of the face region (such as its centroid, center of gravity, or center) can be taken as the focus. The person is then shot according to the target shooting parameters and the focus to obtain the target face image.
It can be seen that, with the face evaluation method described in the embodiments of the present application, a target face image is acquired together with its three-dimensional angle value, which includes an x-angle value, a y-angle value and a z-angle value. Three weights corresponding to the three-dimensional angle value are acquired: a target first weight corresponding to the x-angle value, a target second weight corresponding to the y-angle value, and a target third weight corresponding to the z-angle value, whose sum is 1. A weighted operation is performed on the three angle values with the three weights to obtain a target angle value, and a first target evaluation value corresponding to the target angle value is determined according to a mapping relation between preset angle values and angle quality evaluation values. In this way, the three-dimensional angle of the face image is evaluated, which to a certain extent provides a basis for judging the quality of the image.
In accordance with the above, please refer to fig. 2, which is a flowchart illustrating an embodiment of a face evaluation method according to an embodiment of the present application. The face evaluation method described in this embodiment includes the following steps:
201. Acquiring a target face image, and acquiring a three-dimensional angle value of the target face image, wherein the three-dimensional angle value comprises an x-angle value, a y-angle value and a z-angle value.
202. Acquiring three weights corresponding to the three-dimensional angle value, wherein the sum of a target first weight corresponding to the x-angle value, a target second weight corresponding to the y-angle value, and a target third weight corresponding to the z-angle value is 1.
203. Performing a weighted operation according to the x-angle value, the y-angle value, the z-angle value, the target first weight, the target second weight and the target third weight to obtain a target angle value.
204. Determining a first target evaluation value corresponding to the target angle value according to a mapping relation between a preset angle value and an angle quality evaluation value.
205. Performing image quality evaluation on the target face image to obtain a second target evaluation value.
206. Acquiring a first weight coefficient corresponding to the first target evaluation value and a second weight coefficient corresponding to the second target evaluation value.
207. Performing a weighted operation according to the first target evaluation value, the first weight coefficient, the second target evaluation value and the second weight coefficient to obtain a third target evaluation value.
208. Determining a target matching threshold corresponding to the third target evaluation value according to a mapping relation between a preset matching threshold and an evaluation value.
209. Matching the target face image with a preset face template.
210. When the matching value between the target face image and the preset face template is greater than the target matching threshold, confirming that the matching is successful.
For the face evaluation method described in steps 201 to 210 above, reference may be made to the corresponding steps of the face evaluation method described in fig. 1A.
It can be seen that, with the face evaluation method described in the embodiments of the present application, a target face image and its three-dimensional angle value (x-angle value, y-angle value and z-angle value) are acquired, and three corresponding weights are acquired: a target first weight for the x-angle value, a target second weight for the y-angle value, and a target third weight for the z-angle value, whose sum is 1. A weighted operation on the three angle values with the three weights yields a target angle value, and a first target evaluation value corresponding to the target angle value is determined according to a mapping relation between preset angle values and angle quality evaluation values. Image quality evaluation of the target face image yields a second target evaluation value; a first weight coefficient corresponding to the first target evaluation value and a second weight coefficient corresponding to the second target evaluation value are acquired, and a weighted operation on the two evaluation values with the two coefficients yields a third target evaluation value. A target matching threshold corresponding to the third target evaluation value is then determined according to a mapping relation between preset matching thresholds and evaluation values, the target face image is matched with a preset face template, and the matching is confirmed to be successful when the matching value between them is greater than the target matching threshold.
In this way, the face matching process is dynamically adjusted to the specific environment, which helps improve face recognition efficiency.
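The overall flow of steps 201 to 210 can be sketched as follows. All mapping tables, weight values and thresholds here are illustrative assumptions; the embodiment only requires that the three angle weights sum to 1 and that the matching threshold be looked up from the combined evaluation value:

```python
# Illustrative sketch of steps 201-210; the concrete mappings are
# hypothetical stand-ins for the patent's pre-stored mapping relations.

def angle_evaluation(x, y, z, wx, wy, wz):
    """Steps 201-204: weighted target angle value, mapped to a quality score."""
    assert abs(wx + wy + wz - 1.0) < 1e-9  # the three weights must sum to 1
    target_angle = wx * abs(x) + wy * abs(y) + wz * abs(z)
    # Assumed preset mapping: smaller deviation from frontal -> higher score.
    return max(0.0, 1.0 - target_angle / 90.0)

def match_threshold(first_eval, second_eval, a=0.6, b=0.4):
    """Steps 205-208: combine angle and image-quality scores, look up a threshold."""
    third_eval = a * first_eval + b * second_eval  # weighted overall evaluation
    # Assumed preset mapping: lower-quality images get a stricter threshold.
    return 0.9 - 0.2 * third_eval

def verify(match_value, threshold):
    """Steps 209-210: matching succeeds only above the dynamic threshold."""
    return match_value > threshold
```

For a frontal face (all angles 0) the angle evaluation is maximal, the combined evaluation is high, and the matching threshold is correspondingly relaxed, which is the dynamic adjustment the embodiment describes.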
In accordance with the above, the following describes an apparatus for implementing the above-described face evaluation method, specifically as follows:
Please refer to fig. 3A, which is a schematic structural diagram of an embodiment of a face evaluation apparatus according to an embodiment of the present application. The face evaluation apparatus described in this embodiment includes an acquiring unit 301, an operation unit 302 and a determining unit 303, which are as follows:
an obtaining unit 301, configured to obtain a target face image, obtain a three-dimensional angle value of the target face image, where the three-dimensional angle value includes an x-angle value, a y-angle value, and a z-angle value, and obtain three weights corresponding to the three-dimensional angle value, where a sum of a target first weight corresponding to the x-angle value, a target second weight corresponding to the y-angle value, and a target third weight corresponding to the z-angle value is 1;
an operation unit 302, configured to perform a weighted operation according to the x-angle value, the y-angle value, the z-angle value, the target first weight, the target second weight, and the target third weight, so as to obtain a target angle value;
the determining unit 303 is configured to determine a first target evaluation value corresponding to the target angle value according to a mapping relationship between a preset angle value and an angle quality evaluation value.
It can be seen that, with the face evaluation apparatus described in the embodiments of the present application, a target face image and its three-dimensional angle value (x-angle value, y-angle value and z-angle value) are acquired, and three corresponding weights are acquired: a target first weight for the x-angle value, a target second weight for the y-angle value, and a target third weight for the z-angle value, whose sum is 1. A weighted operation on the three angle values with the three weights yields a target angle value, and a first target evaluation value corresponding to the target angle value is determined according to a mapping relation between preset angle values and angle quality evaluation values. In this way, the three-dimensional angle of the face image is evaluated, which to a certain extent provides a basis for judging the quality of the image.
The obtaining unit 301 may be configured to implement the methods described in the steps 101 and 102, the operation unit 302 may be configured to implement the method described in the step 103, the determining unit 303 may be configured to implement the method described in the step 104, and so on.
In a possible example, in terms of obtaining three weight values corresponding to the three-dimensional angle value, the obtaining unit 301 is specifically configured to:
acquiring a target environment brightness value;
determining a target mapping relation group corresponding to the target environment brightness value according to a mapping relation between a preset environment brightness value and the mapping relation group, wherein each mapping relation group comprises a first mapping relation between an angle value in the x direction and a first weight value and a second mapping relation between an angle value in the y direction and a second weight value;
determining the target first weight corresponding to the x-angle value according to a first mapping relation in the target mapping relation group;
determining the target second weight corresponding to the y-angle value according to a second mapping relation in the target mapping relation group;
and obtaining the target third weight according to the target first weight and the target second weight.
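These weight-selection steps can be illustrated with a minimal sketch. The brightness buckets and weight values are hypothetical, and for brevity each mapping-relation group is reduced to constant weights, whereas in the embodiment the first and second mapping relations may map the angle values themselves to weights:

```python
# Hypothetical sketch of the weight selection: pick the mapping-relation
# group for the current ambient brightness, read the x- and y-direction
# weights from it, and derive the third weight so that the three sum to 1.

MAPPING_GROUPS = {
    # brightness bucket -> first mapping (x weight), second mapping (y weight)
    "low":  {"w_x": 0.5, "w_y": 0.3},
    "high": {"w_x": 0.4, "w_y": 0.4},
}

def weights_for_brightness(ambient_lux):
    group = MAPPING_GROUPS["low" if ambient_lux < 100 else "high"]
    w_x = group["w_x"]      # target first weight (for the x-angle value)
    w_y = group["w_y"]      # target second weight (for the y-angle value)
    w_z = 1.0 - w_x - w_y   # target third weight (for the z-angle value)
    return w_x, w_y, w_z
```

Deriving the third weight as the remainder is what guarantees the constraint that the three weights sum to 1, whatever the mapping-relation group contains.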
In one possible example, as shown in fig. 3B, fig. 3B shows a modified structure of the face evaluation apparatus depicted in fig. 3A; compared with fig. 3A, it may further include: an evaluation unit 304 and a matching unit 305, wherein,
the evaluation unit 304 is configured to perform image quality evaluation on the target face image to obtain a second target evaluation value;
the acquiring unit 301 is further specifically configured to acquire a first weight coefficient corresponding to the first target evaluation value and a second weight coefficient corresponding to the second target evaluation value;
the operation unit 302 is further specifically configured to perform a weighted operation according to the first target evaluation value, the first weight coefficient, the second target evaluation value, and the second weight coefficient, so as to obtain a third target evaluation value;
the determining unit 303 is further specifically configured to determine a target matching threshold corresponding to the third target evaluation value according to a mapping relationship between a preset matching threshold and an evaluation value;
the matching unit 305 is configured to match the target face image with a preset face template;
the determining unit 303 is further specifically configured to determine that matching is successful when a matching value between the target face image and the preset face template is greater than the target matching threshold.
In a possible example, in the aspect of matching the target face image with a preset face template, the matching unit 305 is specifically configured to:
carrying out angle adjustment on the preset face template according to the three-dimensional angle value to obtain a target preset face template;
extracting the features of the target face image to obtain a first peripheral outline and a first feature point set;
performing feature extraction on the target preset face template to obtain a second peripheral outline and a second feature point set;
matching the first peripheral contour with the second peripheral contour to obtain a first matching value;
matching the first characteristic point set with the second characteristic point set to obtain a second matching value;
when the first matching value is larger than a first preset threshold value and the second matching value is larger than a second preset threshold value, taking the mean value between the first matching value and the second matching value as the matching value between the target face image and the preset face template;
and when the first matching value is smaller than or equal to the first preset threshold value, or the second matching value is smaller than or equal to the second preset threshold value, confirming that the matching between the target face image and the preset face template fails.
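The decision logic of this two-stage matching (peripheral contour plus feature point set) can be sketched as follows; the contour and feature-point similarity scores are assumed to be computed elsewhere, and the threshold values are illustrative:

```python
# Sketch of combining the two matching stages. The first and second
# matching values are placeholder similarity scores in [0, 1]; the
# preset thresholds t1 and t2 are illustrative assumptions.

def combine_match(first_match, second_match, t1=0.5, t2=0.5):
    """Return the overall matching value, or None when matching fails."""
    if first_match > t1 and second_match > t2:
        # Both stages pass: overall value is the mean of the two.
        return (first_match + second_match) / 2.0
    # Either value at or below its threshold: matching between the target
    # face image and the preset face template fails.
    return None
```

Note that both stages must strictly exceed their thresholds; a value equal to the threshold already counts as a failure, matching the "smaller than or equal to" condition in the text.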
In one possible example, as shown in fig. 3C, fig. 3C shows a modified structure of the face evaluation apparatus depicted in fig. 3A; compared with fig. 3A, it may further include a detecting unit 306, which is specifically as follows:
a detecting unit 306, configured to detect whether the x-angle value is within a first preset range, whether the y-angle value is within a second preset range, and whether the z-angle value is within a third preset range;
the step of obtaining three weights corresponding to the three-dimensional angle value is performed by the obtaining unit 301 when the x-angle value is in the first preset range, the y-angle value is in the second preset range, and the z-angle value is in the third preset range.
It can be understood that the functions of each program module of the face evaluation apparatus in this embodiment may be specifically implemented according to the method in the foregoing method embodiment, and the specific implementation process may refer to the related description of the foregoing method embodiment, which is not described herein again.
In accordance with the above, please refer to fig. 4, which is a schematic structural diagram of an embodiment of a face evaluation apparatus according to an embodiment of the present application. The face evaluation device described in this embodiment includes: at least one input device 1000; at least one output device 2000; at least one processor 3000, e.g., a CPU; and a memory 4000, the input device 1000, the output device 2000, the processor 3000, and the memory 4000 being connected by a bus 5000.
The input device 1000 may be a touch panel, a physical button, or a mouse.
The output device 2000 may be a display screen.
The memory 4000 may be a high-speed RAM memory or a non-volatile memory (e.g., a magnetic disk memory). The memory 4000 is used for storing a set of program codes, and the input device 1000, the output device 2000 and the processor 3000 are used for calling the program codes stored in the memory 4000 to execute the following operations:
the processor 3000 is configured to:
acquiring a target face image, and acquiring a three-dimensional angle value of the target face image, wherein the three-dimensional angle value comprises an x-angle value, a y-angle value and a z-angle value;
obtaining three weights corresponding to the three-dimensional angle value, wherein the sum of a target first weight corresponding to the x-angle value, a target second weight corresponding to the y-angle value, and a target third weight corresponding to the z-angle value is 1;
performing weighted operation according to the x-angle value, the y-angle value, the z-angle value, the target first weight, the target second weight and the target third weight to obtain a target angle value;
and determining a first target evaluation value corresponding to the target angle value according to a mapping relation between a preset angle value and an angle quality evaluation value.
It can be seen that, with the face evaluation apparatus described in the embodiments of the present application, a target face image and its three-dimensional angle value (x-angle value, y-angle value and z-angle value) are acquired, and three corresponding weights are acquired: a target first weight for the x-angle value, a target second weight for the y-angle value, and a target third weight for the z-angle value, whose sum is 1. A weighted operation on the three angle values with the three weights yields a target angle value, and a first target evaluation value corresponding to the target angle value is determined according to a mapping relation between preset angle values and angle quality evaluation values. In this way, the three-dimensional angle of the face image is evaluated, which to a certain extent provides a basis for judging the quality of the image.
In one possible example, in the aspect of obtaining three weight values corresponding to the three-dimensional angle value, the processor 3000 is specifically configured to:
acquiring a target environment brightness value;
determining a target mapping relation group corresponding to the target environment brightness value according to a mapping relation between a preset environment brightness value and the mapping relation group, wherein each mapping relation group comprises a first mapping relation between an angle value in the x direction and a first weight value and a second mapping relation between an angle value in the y direction and a second weight value;
determining the target first weight corresponding to the x-angle value according to a first mapping relation in the target mapping relation group;
determining the target second weight corresponding to the y-angle value according to a second mapping relation in the target mapping relation group;
and obtaining the target third weight according to the target first weight and the target second weight.
In one possible example, the processor 3000 is further specifically configured to:
performing image quality evaluation on the target face image to obtain a second target evaluation value;
acquiring a first weight coefficient corresponding to the first target evaluation value and a second weight coefficient corresponding to the second target evaluation value;
performing weighting operation according to the first target evaluation value, the first weight coefficient, the second target evaluation value and the second weight coefficient to obtain a third target evaluation value;
determining a target matching threshold corresponding to the third target evaluation value according to a mapping relation between a preset matching threshold and the evaluation value;
matching the target face image with a preset face template;
and when the matching value between the target face image and the preset face template is greater than the target matching threshold value, confirming that the matching is successful.
In one possible example, in the aspect of matching the target face image with a preset face template, the processor 3000 is specifically configured to:
carrying out angle adjustment on the preset face template according to the three-dimensional angle value to obtain a target preset face template;
extracting the features of the target face image to obtain a first peripheral outline and a first feature point set;
performing feature extraction on the target preset face template to obtain a second peripheral outline and a second feature point set;
matching the first peripheral contour with the second peripheral contour to obtain a first matching value;
matching the first characteristic point set with the second characteristic point set to obtain a second matching value;
when the first matching value is larger than a first preset threshold value and the second matching value is larger than a second preset threshold value, taking the mean value between the first matching value and the second matching value as the matching value between the target face image and the preset face template;
and when the first matching value is smaller than or equal to the first preset threshold value, or the second matching value is smaller than or equal to the second preset threshold value, confirming that the matching between the target face image and the preset face template fails.
In one possible example, the processor 3000 is further specifically configured to:
detecting whether the x-angle value is in a first preset range, whether the y-angle value is in a second preset range and whether the z-angle value is in a third preset range;
and when the x-angle value is in the first preset range, the y-angle value is in the second preset range and the z-angle value is in the third preset range, executing the step of obtaining three weights corresponding to the three-dimensional angle value.
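This range check, which gates the weight-acquisition step, can be sketched as follows; the preset ranges are illustrative assumptions, since the embodiment does not fix their values:

```python
# Hypothetical preset ranges (degrees) for the three angle values.
X_RANGE, Y_RANGE, Z_RANGE = (-30, 30), (-30, 30), (-45, 45)

def angles_in_range(x, y, z):
    """Only when all three angles fall within their preset ranges does
    the flow proceed to acquiring the three weights."""
    return (X_RANGE[0] <= x <= X_RANGE[1]
            and Y_RANGE[0] <= y <= Y_RANGE[1]
            and Z_RANGE[0] <= z <= Z_RANGE[1])
```

Faces turned too far in any direction are thus rejected before any evaluation is computed, which avoids scoring images that could not match reliably anyway.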
The embodiment of the present application further provides a computer storage medium, where the computer storage medium may store a program, and the program includes, when executed, some or all of the steps of any one of the face evaluation methods described in the above method embodiments.
While the present application has been described in connection with various embodiments, other variations to the disclosed embodiments can be understood and effected by those skilled in the art in practicing the claimed application, from a review of the drawings, the disclosure, and the appended claims. In the claims, the word "comprising" does not exclude other elements or steps, and the word "a" or "an" does not exclude a plurality. A single processor or other unit may fulfill the functions of several items recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, apparatus (device), or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein. A computer program stored/distributed on a suitable medium supplied together with or as part of other hardware, may also take other distributed forms, such as via the Internet or other wired or wireless telecommunication systems.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (devices) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Although the present application has been described in conjunction with specific features and embodiments thereof, it will be evident that various modifications and combinations can be made thereto without departing from the spirit and scope of the application. Accordingly, the specification and figures are merely exemplary of the present application as defined in the appended claims and are intended to cover any and all modifications, variations, combinations, or equivalents within the scope of the present application. It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.

Claims (8)

1. A face evaluation method is characterized by comprising the following steps:
acquiring a target face image, and acquiring a three-dimensional angle value of the target face image, wherein the three-dimensional angle value comprises an x-angle value, a y-angle value and a z-angle value;
acquiring three weights corresponding to the three-dimensional angle value, specifically: acquiring a target environment brightness value; determining a target mapping relation group corresponding to the target environment brightness value according to a mapping relation between a preset environment brightness value and the mapping relation group, wherein each mapping relation group comprises a first mapping relation between an angle value in the x direction and a first weight value and a second mapping relation between an angle value in the y direction and a second weight value; determining the target first weight corresponding to the x-angle value according to a first mapping relation in the target mapping relation group; determining the target second weight corresponding to the y-angle value according to a second mapping relation in the target mapping relation group; obtaining a target third weight according to the target first weight and the target second weight, wherein the sum of the target first weight corresponding to the x-angle value, the target second weight corresponding to the y-angle value, and the target third weight corresponding to the z-angle value is 1;
performing weighted operation according to the x-angle value, the y-angle value, the z-angle value, the target first weight, the target second weight and the target third weight to obtain a target angle value;
and determining a first target evaluation value corresponding to the target angle value according to a mapping relation between a preset angle value and an angle quality evaluation value.
2. The method of claim 1, further comprising:
performing image quality evaluation on the target face image to obtain a second target evaluation value;
acquiring a first weight coefficient corresponding to the first target evaluation value and a second weight coefficient corresponding to the second target evaluation value;
performing weighting operation according to the first target evaluation value, the first weight coefficient, the second target evaluation value and the second weight coefficient to obtain a third target evaluation value;
determining a target matching threshold corresponding to the third target evaluation value according to a mapping relation between a preset matching threshold and the evaluation value;
matching the target face image with a preset face template;
and when the matching value between the target face image and the preset face template is greater than the target matching threshold value, confirming that the matching is successful.
3. The method of claim 2, wherein matching the target face image with a preset face template comprises:
carrying out angle adjustment on the preset face template according to the three-dimensional angle value to obtain a target preset face template;
extracting the features of the target face image to obtain a first peripheral outline and a first feature point set;
performing feature extraction on the target preset face template to obtain a second peripheral outline and a second feature point set;
matching the first peripheral contour with the second peripheral contour to obtain a first matching value;
matching the first characteristic point set with the second characteristic point set to obtain a second matching value;
when the first matching value is larger than a first preset threshold value and the second matching value is larger than a second preset threshold value, taking the mean value between the first matching value and the second matching value as the matching value between the target face image and the preset face template;
and when the first matching value is smaller than or equal to the first preset threshold value, or the second matching value is smaller than or equal to the second preset threshold value, confirming that the matching between the target face image and the preset face template fails.
4. The method of claim 1, further comprising:
detecting whether the x-angle value is in a first preset range, whether the y-angle value is in a second preset range and whether the z-angle value is in a third preset range;
and when the x-angle value is in the first preset range, the y-angle value is in the second preset range and the z-angle value is in the third preset range, executing the step of obtaining three weights corresponding to the three-dimensional angle value.
5. A face evaluation apparatus, comprising:
the system comprises an acquisition unit, a processing unit and a processing unit, wherein the acquisition unit is used for acquiring a target face image and acquiring a three-dimensional angle value of the target face image, and the three-dimensional angle value comprises an x angle value, a y angle value and a z angle value; and acquiring three weights corresponding to the three-dimensional angle value, specifically: acquiring a target environment brightness value; determining a target mapping relation group corresponding to the target environment brightness value according to a mapping relation between a preset environment brightness value and the mapping relation group, wherein each mapping relation group comprises a first mapping relation between an angle value in the x direction and a first weight value and a second mapping relation between an angle value in the y direction and a second weight value; determining the target first weight corresponding to the x-angle value according to a first mapping relation in the target mapping relation group; determining the target second weight corresponding to the y-angle value according to a second mapping relation in the target mapping relation group; obtaining a target third weight according to the target first weight and the target second weight, wherein the sum of the target first weight corresponding to the x-angle value, the target second weight corresponding to the y-angle value, and the target third weight corresponding to the z-angle value is 1;
the operation unit is used for carrying out weighted operation according to the x angle value, the y angle value, the z angle value, the target first weight, the target second weight and the target third weight to obtain a target angle value;
and the determining unit is used for determining a first target evaluation value corresponding to the target angle value according to a mapping relation between a preset angle value and an angle quality evaluation value.
6. The apparatus of claim 5, further comprising an evaluation unit and a matching unit, wherein:
the evaluation unit is configured to evaluate the image quality of the target face image to obtain a second target evaluation value;
the acquisition unit is further configured to acquire a first weight coefficient corresponding to the first target evaluation value and a second weight coefficient corresponding to the second target evaluation value;
the operation unit is further configured to perform a weighted operation on the first target evaluation value and the second target evaluation value using the first weight coefficient and the second weight coefficient to obtain a third target evaluation value;
the determining unit is further configured to determine a target matching threshold corresponding to the third target evaluation value according to a preset mapping relation between evaluation values and matching thresholds;
the matching unit is configured to match the target face image against a preset face template; and
the determining unit is further configured to determine that the matching succeeds when a matching value between the target face image and the preset face template is greater than the target matching threshold.
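Claim 6's combined evaluation and quality-adaptive matching can be sketched as follows. The weight coefficients and threshold table are hypothetical placeholders; the patent specifies only that the threshold is looked up from the combined evaluation value.

```python
def face_match(angle_score, image_quality_score, match_value,
               w1=0.6, w2=0.4,
               threshold_table=((0.8, 0.70), (0.5, 0.80), (0.0, 0.90))):
    # Third target evaluation value: weighted combination of the angle
    # quality score and the image quality score (w1, w2 are assumptions).
    combined = w1 * angle_score + w2 * image_quality_score
    # Select the target matching threshold from the preset mapping: a
    # higher-quality image tolerates a lower threshold (values assumed).
    for floor, threshold in threshold_table:
        if combined >= floor:
            break
    # Matching succeeds only if the face-template matching value exceeds
    # the selected target matching threshold.
    return combined, threshold, match_value > threshold
```

The design choice here mirrors the claim: rather than a fixed matching threshold, the threshold adapts to image quality, so blurry or badly angled faces must match more strictly before unlocking.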
7. A face evaluation device, comprising a processor and a memory, the memory storing one or more programs configured to be executed by the processor, the one or more programs comprising instructions for performing the steps of the method of any one of claims 1-4.
8. A computer-readable storage medium storing a computer program which, when executed by a processor, implements the method of any one of claims 1-4.
CN201811560246.9A 2018-12-20 2018-12-20 Face evaluation method and related product Active CN109816628B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811560246.9A CN109816628B (en) 2018-12-20 2018-12-20 Face evaluation method and related product

Publications (2)

Publication Number Publication Date
CN109816628A CN109816628A (en) 2019-05-28
CN109816628B true CN109816628B (en) 2021-09-14

Family

ID=66602221

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811560246.9A Active CN109816628B (en) 2018-12-20 2018-12-20 Face evaluation method and related product

Country Status (1)

Country Link
CN (1) CN109816628B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111160175A (en) * 2019-12-19 2020-05-15 中科寒武纪科技股份有限公司 Intelligent pedestrian violation behavior management method and related product
CN112597911A (en) * 2020-12-25 2021-04-02 百果园技术(新加坡)有限公司 Buffing processing method and device, mobile terminal and storage medium
CN113411355B (en) * 2021-08-19 2021-11-09 深圳百昱达科技有限公司 Internet-based application registration method and related device

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5226570B2 (en) * 2009-03-19 2013-07-03 グローリー株式会社 Image detection apparatus, image detection method, and image detection program
CN106446851A (en) * 2016-09-30 2017-02-22 厦门大图智能科技有限公司 Visible light based human face optimal selection method and system
CN107256388B (en) * 2017-05-25 2020-10-27 武汉普利商用机器有限公司 Method and device for acquiring front face image
CN108230293A (en) * 2017-05-31 2018-06-29 深圳市商汤科技有限公司 Determine method and apparatus, electronic equipment and the computer storage media of quality of human face image
CN107590474B (en) * 2017-09-21 2020-08-14 Oppo广东移动通信有限公司 Unlocking control method and related product
CN107633499B (en) * 2017-09-27 2020-09-01 Oppo广东移动通信有限公司 Image processing method and related product
CN107613550B (en) * 2017-09-27 2020-12-29 Oppo广东移动通信有限公司 Unlocking control method and related product
CN107679482B (en) * 2017-09-27 2021-04-09 Oppo广东移动通信有限公司 Unlocking control method and related product
CN108921795A (en) * 2018-06-04 2018-11-30 腾讯科技(深圳)有限公司 A kind of image interfusion method, device and storage medium

Also Published As

Publication number Publication date
CN109816628A (en) 2019-05-28

Similar Documents

Publication Publication Date Title
CN109961009B (en) Pedestrian detection method, system, device and storage medium based on deep learning
CN111046744B (en) Method and device for detecting attention area, readable storage medium and terminal equipment
CN109815843B (en) Image processing method and related product
CN109816745B (en) Human body thermodynamic diagram display method and related products
CN105637530B (en) A kind of method and system of the 3D model modification using crowdsourcing video
CN109766779B (en) Loitering person identification method and related product
CN110378994B (en) Face modeling method and related product
WO2018210047A1 (en) Data processing method, data processing apparatus, electronic device and storage medium
CN109816628B (en) Face evaluation method and related product
CN110807361A (en) Human body recognition method and device, computer equipment and storage medium
JP6352208B2 (en) 3D model processing apparatus and camera calibration system
JP6609640B2 (en) Managing feature data for environment mapping on electronic devices
CN109840885B (en) Image fusion method and related product
CN112749613B (en) Video data processing method, device, computer equipment and storage medium
KR102337209B1 (en) Method for notifying environmental context information, electronic apparatus and storage medium
US20240071016A1 (en) Mixed reality system, program, mobile terminal device, and method
CN111582240B (en) Method, device, equipment and medium for identifying number of objects
CN109785439B (en) Face sketch image generation method and related products
CN107832598B (en) Unlocking control method and related product
CN110738185A (en) Form object identification method and device and storage medium
WO2022087846A1 (en) Image processing method and apparatus, device, and storage medium
CN106919260B (en) Webpage operation method and device
CN111274602B (en) Image characteristic information replacement method, device, equipment and medium
CN111310595B (en) Method and device for generating information
CN110232417B (en) Image recognition method and device, computer equipment and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant