CN109753886A - Face image evaluation method, device and equipment - Google Patents
Face image evaluation method, device and equipment
- Publication number
- CN109753886A (application CN201811540914.1A)
- Authority
- CN
- China
- Prior art keywords
- face
- determining
- detection area
- image
- evaluation index
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Landscapes
- Image Analysis (AREA)
Abstract
The embodiment of the invention provides a face image evaluation method, device and equipment. The method comprises: acquiring an image to be detected; determining, through a face detection algorithm, a face detection area and face feature points of the image to be detected; determining a proportional relationship between the face feature points and the face detection area and a positional relationship between the face feature points; and determining a face evaluation index according to the proportional relationship between the face feature points and the face detection area and the positional relationship between the face feature points. With the face image evaluation method, device and equipment provided by the embodiment of the invention, a face image can be evaluated and the accuracy of face recognition can be improved.
Description
Technical Field
The invention relates to the technical field of image processing, in particular to a method, a device and equipment for evaluating a face image.
Background
With the development of multimedia technology and the like, face images have become more and more important. In the field of video monitoring, face images are captured to perform face recognition, so that people in a monitored area are monitored; in the field of intelligent shooting, a face image is shot and then subjected to subsequent processing through face recognition, such as adjusting the brightness, chromaticity and the like of the face image.
Therefore, the face image is the basis for face recognition, the quality of the face image directly affects the accuracy of face recognition, and particularly under the condition of poor quality of the face image, the probability of face recognition misjudgment is greatly increased. In this way, it is important to evaluate the quality of the face image.
Disclosure of Invention
The embodiment of the invention aims to provide a method, a device and equipment for evaluating a face image so as to improve the accuracy of face recognition. The specific technical scheme is as follows:
in a first aspect, an embodiment of the present invention provides a method for evaluating a face image, including:
acquiring an image to be detected;
determining a face detection area and face characteristic points of the image to be detected through a face detection algorithm;
determining a proportional relationship between the face characteristic points and the face detection area and a position relationship between the face characteristic points;
and determining a face evaluation index according to the proportional relation between the face characteristic points and the face detection area and the position relation between the face characteristic points.
Optionally, the face feature points include: two eyes, nose tip, left mouth corner and right mouth corner; the two eyes comprise a left eye and a right eye;
the determining the proportional relationship between the face feature points and the face detection area and the position relationship between the face feature points includes:
determining a first ratio of a distance between the two eyes in the horizontal direction to a width of the face detection area, a second ratio of a distance between the two eyes in the vertical direction to a height of the face detection area, and a distance between the nose tip and a perpendicular bisector of a line connecting the two eyes;
determining a face evaluation index according to the proportional relationship between the face characteristic points and the face detection area and the position relationship between the face characteristic points, wherein the determining comprises the following steps:
and determining the face evaluation index according to the first ratio, the second ratio and the distance between the nose tip and the perpendicular bisector of the line connecting the two eyes.
Optionally, the face evaluation index is positively correlated with the first ratio; the face evaluation index is inversely related to the distance between the nose tip and the perpendicular bisector of the line connecting the two eyes.
Optionally, after the face detection region and the face feature point of the image to be detected are determined by the face detection algorithm, the method further includes:
determining the size ratio of the face detection area to the image to be detected;
determining a face evaluation index according to the proportional relationship between the face characteristic points and the face detection area and the position relationship between the face characteristic points, wherein the determining comprises the following steps:
and determining a face evaluation index according to the proportional relation between the face characteristic points and the face detection area, the position relation between the face characteristic points and the size proportion.
Optionally, after determining a face evaluation index according to the proportional relationship between the face feature points and the face detection area, the positional relationship between the face feature points, and the size ratio, the method further includes:
judging whether the face evaluation index is higher than a preset threshold value or not;
and when the face evaluation index is higher than the preset threshold value, determining that the image to be detected corresponding to the face evaluation index is a target image.
Optionally, the face detection algorithm includes a multitask cascaded convolutional neural network (MTCNN) algorithm, and the face detection region includes a rectangular frame region.
In a second aspect, an embodiment of the present invention provides an apparatus for evaluating a face image, including:
the acquisition module is used for acquiring an image to be detected;
the first determining module is used for determining a face detection area and face characteristic points of the image to be detected through a face detection algorithm;
the second determination module is used for determining the proportional relation between the face characteristic points and the face detection area and the position relation between the face characteristic points;
and the third determining module is used for determining a face evaluation index according to the proportional relation between the face characteristic points and the face detection area and the position relation between the face characteristic points.
Optionally, the face feature points include: two eyes, nose tip, left mouth corner and right mouth corner; the two eyes comprise a left eye and a right eye;
the second determining module is specifically configured to determine a first ratio of a distance between the two eyes in the horizontal direction to a width of the face detection area, a second ratio of a distance between the two eyes in the vertical direction to a height of the face detection area, and a distance between the nose tip and a perpendicular bisector of a line connecting the two eyes;
the third determining module is specifically configured to determine the face evaluation index according to the first ratio, the second ratio, and a distance between the nose tip and a perpendicular bisector of the line connecting the two eyes.
Optionally, the face evaluation index is positively correlated with the first ratio; the face evaluation index is inversely related to the distance between the nose tip and the perpendicular bisector of the line connecting the two eyes.
Optionally, the apparatus further comprises:
the fourth determining module is used for determining the size ratio of the face detection area to the image to be detected;
the third determining module is configured to determine a face evaluation index according to a proportional relationship between the face feature points and the face detection area, a positional relationship between the face feature points, and the size ratio.
Optionally, the apparatus further comprises:
the judging module is used for judging whether the face evaluation index is higher than a preset threshold value after the face evaluation index is determined according to the proportional relation between the face characteristic points and the face detection area, the position relation between the face characteristic points and the size proportion;
and the fifth determining module is used for determining that the image to be detected corresponding to the face evaluation index is a target image when the face evaluation index is higher than the preset threshold value.
Optionally, the face detection algorithm includes a multitask cascaded convolutional neural network (MTCNN) algorithm, and the face detection region includes a rectangular frame region.
In a third aspect, an embodiment of the present invention provides an electronic device, including a processor, a communication interface, a memory, and a communication bus, where the processor, the communication interface, and the memory complete mutual communication through the communication bus;
the memory is used for storing a computer program;
the processor is configured to implement the method steps of the first aspect when executing the program stored in the memory.
In yet another aspect of the present invention, there is also provided a computer-readable storage medium having stored therein instructions which, when run on a computer, cause the computer to perform the method steps of the first aspect described above.
In a further aspect of the present invention, there is also provided a computer program product comprising instructions which, when run on a computer, cause the computer to perform the method steps of the first aspect described above.
The evaluation method, the device and the equipment for the face image provided by the embodiment of the invention can acquire the image to be detected; determining a face detection area and face characteristic points of an image to be detected through a face detection algorithm; determining the proportional relation between the face characteristic points and the face detection area and the position relation between the face characteristic points; and determining a face evaluation index according to the proportional relation between the face characteristic points and the face detection area and the position relation between the face characteristic points. In the embodiment of the invention, the face evaluation index for evaluating the face in the image to be detected can be determined according to the proportional relation between the face characteristic points and the face detection area in the image to be detected and the position relation between the face characteristic points. Therefore, the face image can be evaluated, and the accuracy of face recognition is improved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below.
Fig. 1 is a flowchart of a method for evaluating a face image according to an embodiment of the present invention;
FIG. 2(a) is a schematic diagram of different states of a human face and facial feature points according to an embodiment of the present invention;
FIG. 2(b) is another schematic diagram of different states of a human face and facial feature points according to an embodiment of the present invention;
FIG. 2(c) is another schematic diagram of different states of a human face and facial feature points according to an embodiment of the present invention;
FIG. 2(d) is another schematic diagram of different states of a human face and facial feature points according to an embodiment of the present invention;
FIG. 2(e) is another schematic diagram of different states of a human face and facial feature points according to an embodiment of the present invention;
FIG. 2(f) is another schematic diagram of different states of a human face and facial feature points according to an embodiment of the present invention;
FIG. 3(a) is a schematic diagram of a face monitoring area and face feature points under different states of a face according to an embodiment of the present invention;
FIG. 3(b) is another schematic diagram of a face monitoring area and face feature points under different states of a face according to an embodiment of the present invention;
fig. 4 is another flowchart of a method for evaluating a face image according to an embodiment of the present invention;
FIG. 5(a) is a diagram illustrating a scoring result according to an embodiment of the present invention;
FIG. 5(b) is a diagram illustrating another scoring result in an embodiment of the present invention;
FIG. 5(c) is a diagram illustrating another scoring result in an embodiment of the present invention;
FIG. 5(d) is a diagram illustrating another scoring result in an embodiment of the present invention;
FIG. 5(e) is a diagram illustrating another scoring result in an embodiment of the present invention;
FIG. 5(f) is a diagram illustrating another scoring result in an embodiment of the present invention;
fig. 6 is another flowchart of a method for evaluating a face image according to an embodiment of the present invention;
fig. 7 is a schematic structural diagram of an evaluation apparatus for a face image according to an embodiment of the present invention;
fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described below with reference to the drawings in the embodiments of the present invention.
The human face image is the basis for human face recognition, the quality of the human face image directly affects the accuracy of the human face recognition, and particularly under the condition of poor quality of the human face image, the probability of human face recognition misjudgment is greatly increased. In this way, it is important to evaluate the quality of the face image.
The quality evaluation of the face image can be understood as evaluating whether the face included in the face image is correct, clear, and the like.
For example, in portrait shooting, especially in smart shooting, the camera cannot automatically determine whether the face in the current photo is clear and correct, which may cause the final shot photo to contain a side face, a face that is too small, and the like, and this increases the probability of misjudgment in face recognition and other subsequent work. Therefore, it is very important to actively sense whether the face in the currently photographed face image is clear and correct.
In the embodiment of the invention, the proportional relation between the face characteristic points and the face detection area and the position relation between the face characteristic points are determined through the face detection area and the face characteristic points of the image to be detected, and the evaluation index for evaluating the correction degree of the face in the image to be detected is determined according to the proportional relation between the face characteristic points and the face detection area and the position relation between the face characteristic points. Meanwhile, the size ratio of the face detection area to the image to be detected can be determined; and determining an evaluation index for evaluating the correction degree and the definition degree of the face in the image to be detected according to the proportional relation between the face characteristic points and the face detection area, the position relation between the face characteristic points and the size ratio. Therefore, the correction degree and the definition degree of the face in the face image can be objectively evaluated, whether the face in the obtained face image is correct and clear or not is judged, and the accuracy of face recognition is further improved.
The following describes in detail the evaluation method of a face image according to an embodiment of the present invention.
The method for evaluating the face image provided by the embodiment of the invention can be applied to electronic equipment, and specifically, the electronic equipment can be a terminal, a server, a processor and the like.
The embodiment of the invention provides an evaluation method of a face image, which comprises the following steps of:
and S101, acquiring an image to be detected.
The image to be detected can be an image acquired by the electronic equipment from other devices, or the electronic equipment comprises an image acquisition module, and the image to be detected can be an image directly acquired by the electronic equipment through the image acquisition module.
The image to be detected can be an image containing a human face captured in intelligent shooting, an image containing a human face shot in the field of video monitoring, and the like.
S102, determining a face detection area and face characteristic points of the image to be detected through a face detection algorithm.
The face detection algorithm may include: a face detection algorithm based on histogram coarse segmentation and singular value features, a face detection algorithm based on binary wavelet transform, a face detection algorithm based on AdaBoost algorithm, a face detection algorithm based on facial binocular structure features, and the like. In addition, the determination of the face detection area and the determination of the face feature points can be performed simultaneously through a face detection algorithm; or may be performed separately.
The embodiment of the invention does not limit the mode of the face detection algorithm, and any form capable of realizing face detection is within the protection scope of the embodiment of the invention.
In one implementation, the face detection algorithm includes a Multi-tasking cascaded convolutional neural network (MTCNN) algorithm, and the face detection region may include a rectangular box region. The MTCNN can simultaneously detect the face detection area and the face characteristic point, and can improve the detection efficiency.
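As a hedged illustration only (the embodiment does not prescribe any particular implementation), the following Python sketch shows how a face detection frame and the five face feature points could be obtained with the open-source `mtcnn` package; the package, its `detect_faces` interface and the file name are assumptions about tooling, not part of the patent.

```python
# Minimal sketch, assuming the third-party "mtcnn" package (pip install mtcnn)
# and OpenCV for image loading; neither library is mandated by the embodiment.
import cv2
from mtcnn import MTCNN

detector = MTCNN()
image = cv2.cvtColor(cv2.imread("image_to_be_detected.jpg"), cv2.COLOR_BGR2RGB)

faces = detector.detect_faces(image)  # one dict per detected face
if faces:
    face = faces[0]
    x, y, w_f, h_f = face["box"]        # rectangular face detection area
    points = face["keypoints"]          # the five face feature points
    left_eye = points["left_eye"]
    right_eye = points["right_eye"]
    nose = points["nose"]
    mouth_left = points["mouth_left"]
    mouth_right = points["mouth_right"]
```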
The face feature points may include the five sense organs of the face. Specifically, the face feature points may include: the two eyes (the left eye and the right eye), the tip of the nose, the left corner of the mouth and the right corner of the mouth, as indicated by the small black circles in fig. 2(a), 2(b), 2(c), 2(d), 2(e) and 2(f).
S103, determining the proportional relation between the face characteristic points and the face detection area and the position relation between the face characteristic points.
The proportional relationship between the face feature point and the face detection area may include: the ratio of the distance between the eyes to the face detection area, the ratio of the distance between the nose and the mouth to the face detection area, and so on.
The positional relationship between the face feature points may include: the distance between the eyes, the distance of the tip of the nose relative to the perpendicular bisector of the line connecting the eyes, the distance of the mouth relative to the perpendicular bisector of the line connecting the eyes, and the like.
When the face detection region and the face feature points are determined, information such as the position and size of the face detection region, and information such as the position of each face feature point, such as the coordinates of each face feature point, may be determined. Therefore, the proportional relationship between the face characteristic points and the face detection area and the position relationship between the face characteristic points can be determined according to the position information of the face characteristic points and the information of the position, the size and the like of the face detection area. Such as the ratio of the distance between the two eyes to the face detection area, the ratio of the distance between the nose and the mouth to the face detection area; the distance between the eyes, the distance of the tip of the nose relative to the perpendicular bisector of the line connecting the eyes, the distance of the mouth relative to the perpendicular bisector of the line connecting the eyes, and the like.
And S104, determining a face evaluation index according to the proportional relation between the face characteristic points and the face detection area and the position relation between the face characteristic points.
The face evaluation index is an evaluation index for evaluating the correction degree of the face in the image to be detected.
When the face in the face image is in different states, the proportional relationship between the face feature points and the face detection area and the position relationship between the face feature points are different. Therefore, the evaluation index for evaluating the correction degree of the face in the image to be detected can be determined according to the proportional relationship between the face characteristic points and the face detection area and the position relationship between the face characteristic points.
In the embodiment of the invention, the face evaluation index for evaluating the face in the image to be detected can be determined according to the proportional relation between the face characteristic points and the face detection area in the image to be detected and the position relation between the face characteristic points. Therefore, the face image can be evaluated, and the accuracy of face recognition is improved.
As can be seen from analyzing fig. 2(a), 2(b), 2(c), 2(d), 2(e) and 2(f), the face in the face images of fig. 2(a), 2(b) and 2(c) is a front face, while the face in the face images of fig. 2(d), 2(e) and 2(f) is a side face. Among the 5 face feature points, the greatest distinction between a front face and a side face lies in the relative position of the nose tip: when the face is a front face, the nose tip is always near the perpendicular bisector of the line between the two eyes, whereas when the photographed face is a side face, the distance between the nose tip and the perpendicular bisector of the line between the two eyes is larger. Taking fig. 2(f) as an example, the nose tip is on the left side of the perpendicular bisector of the line between the two eyes, so it can be judged that the face is looking to the left; and the farther apart they are, the more the face is turned to the side. Therefore, the relative position of the nose tip with respect to the perpendicular bisector of the line connecting the two eyes can be used as a basis for estimating the deviation degree of the face: the degree to which the face deviates from the front, that is, how far the face is turned sideways, can be estimated through this relative position, and the correction degree of the face can thus be determined.
In an alternative embodiment of the present invention, step S103: determining the proportional relationship between the face feature points and the face detection area and the position relationship between the face feature points may include:
and determining a first ratio of the distance between the two eyes in the horizontal direction to the width of the human face detection area, a second ratio of the distance between the two eyes in the vertical direction to the height of the human face detection area, and a distance between the tip of the nose and the perpendicular bisector of the line connecting the two eyes.
Step S104: determining a face evaluation index according to the proportional relationship between the face feature points and the face detection area and the position relationship between the face feature points, which may include:
and determining a face evaluation index according to the first ratio, the second ratio and the distance between the nose tip and the perpendicular bisector of the line connecting the two eyes.
Specifically, the face evaluation index is positively correlated with the first ratio. Namely, the face evaluation index increases with the increase of the first ratio and decreases with the decrease of the first ratio.
The face evaluation index is inversely related to the distance between the nose tip and the perpendicular bisector of the line connecting the two eyes. Namely, the face evaluation index decreases with increasing distance between the tip of the nose and the perpendicular bisector of the line connecting the two eyes, and increases with decreasing distance between the tip of the nose and the perpendicular bisector of the line connecting the two eyes.
In one embodiment, as shown in fig. 3(a) and 3(b), 301 denotes the image to be detected, the face detection area of the image to be detected is shown as the rectangular frame 302, and the perpendicular bisector of the line connecting the two eyes is shown as the dashed line 303.
Specifically, the width of the image 301 to be detected is W and its height is H; the height of the face detection area of the image 301 to be detected, i.e. the face detection frame 302, is h_f and its width is w_f. Among the face feature points, the coordinates of the left eye are (x1, y1), the coordinates of the right eye are (x2, y2), and the coordinates of the nose tip are (x3, y3).
The more correct the detected face is, the larger the ratio of the distance between the two eyes to the face detection frame; and the two eyes being at the same horizontal level indicates that the face is not rotated. This can be expressed by the following formula (1-1):

r1 = |x1 - x2| / w_f,   r2 = |y1 - y2| / h_f    (1-1)

wherein r1 is the first ratio and r2 is the second ratio.
The tip of the nose should be as close as possible to the perpendicular bisector of the line connecting the two eyes, which can be expressed by the following formula (1-2):

d = |(x3 - (x1 + x2)/2)·(x2 - x1) + (y3 - (y1 + y2)/2)·(y2 - y1)| / sqrt((x2 - x1)^2 + (y2 - y1)^2)    (1-2)

wherein d is the distance between the tip of the nose and the perpendicular bisector of the line between the two eyes.
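Purely for illustration (the helper functions and variable names are assumptions, not the patent's notation), the quantities in formulas (1-1) and (1-2) can be computed from the coordinates defined above as follows:

```python
import math

def eye_frame_ratios(left_eye, right_eye, w_f, h_f):
    """First and second ratios: inter-eye distances relative to the detection frame."""
    x1, y1 = left_eye
    x2, y2 = right_eye
    r1 = abs(x1 - x2) / w_f   # horizontal eye distance / frame width  (first ratio)
    r2 = abs(y1 - y2) / h_f   # vertical eye distance / frame height   (second ratio)
    return r1, r2

def nose_to_bisector_distance(left_eye, right_eye, nose):
    """Distance from the nose tip to the perpendicular bisector of the eye line."""
    x1, y1 = left_eye
    x2, y2 = right_eye
    x3, y3 = nose
    mx, my = (x1 + x2) / 2.0, (y1 + y2) / 2.0   # midpoint of the eye segment
    ex, ey = x2 - x1, y2 - y1                   # eye-line direction
    # Projecting (nose - midpoint) onto the eye-line direction gives the
    # (signed) distance to the perpendicular bisector; take its magnitude.
    return abs((x3 - mx) * ex + (y3 - my) * ey) / math.hypot(ex, ey)
```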
On the basis of the above embodiment, in an alternative embodiment of the present invention, in step S102: after determining the face detection area and the face feature point of the image to be detected by the face detection algorithm, as shown in fig. 4, the method may further include:
and S105, determining the size ratio of the face detection area to the image to be detected.
In general, the larger the size of the face detection area is, the sharper the face image is. Specifically, it can be expressed by the following formulas (1 to 3).
argmax(h_f), argmax(w_f)    (1-3)
Since images to be detected differ in size, judging the face detection area solely by its absolute size may introduce errors. To avoid this, the size ratio of the face detection area to the image to be detected can be determined, so as to determine what proportion of the image to be detected is occupied by the face detection area.
Step S104: determining a face evaluation index according to the proportional relationship between the face feature points and the face detection area and the position relationship between the face feature points, which may include:
and S1041, determining a face evaluation index according to the proportional relation between the face characteristic points and the face detection area, the position relation between the face characteristic points and the size proportion.
The face evaluation index is used for evaluating the correction degree and the definition degree of the face in the image to be detected. Namely, the face evaluation index can be used for evaluating the correcting degree and the definition degree of the face in the image to be detected, and indicating whether the face in the image to be detected is correct and clear.
When the evaluation index for evaluating the correction degree of the face in the image to be detected is determined in the above embodiment, the definition degree of the face in the image to be detected can be considered at the same time. The evaluation index for evaluating the correction degree of the face in the image to be detected can be determined according to the proportional relation between the face characteristic points and the face detection area and the position relation between the face characteristic points, and if factors influencing the definition degree of the face, such as the size proportion between the face detection area and the image to be detected, are considered at the same time, the evaluation index for evaluating the correction degree and the definition degree of the face in the image to be detected can be determined according to the proportional relation between the face characteristic points and the face detection area, the position relation between the face characteristic points and the size proportion between the face detection area and the image to be detected.
Specifically, the face evaluation index may be determined by a preset formula according to a proportional relationship between the face feature points and the face detection area, a positional relationship between the face feature points, and the size ratio.
The preset formula may include a scoring formula, referred to below as formula (1-4). In one implementation manner, the weight a of the size of the face detection frame in formula (1-4) may be set to 0.5.
Formula (1-4) comprehensively considers the definition degree and the correction degree of the face in the face image. Specifically, it considers: the size of the face detection frame area; the relationship between the face detection area and the face feature points, such as the first ratio of the distance between the two eyes in the horizontal direction to the width of the face detection area and the second ratio of the distance between the two eyes in the vertical direction to the height of the face detection area; and the relationship between the face feature points, such as the distance between the tip of the nose and the perpendicular bisector of the line connecting the two eyes.
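Because the concrete form of formula (1-4) is not reproduced in this text, the sketch below is only an assumed illustration of a score that respects the stated behaviour: it rises with the size ratio and the first ratio, falls with the vertical eye offset and the nose-to-bisector distance, and weights the frame-size term with a = 0.5. The particular combination, the normalisation of d by the frame width, and all names are assumptions, not the patented formula.

```python
def face_score(r1, r2, d, w_f, h_f, W, H, a=0.5):
    """Assumed stand-in for formula (1-4), which is not reproduced here.

    The score is positively correlated with the size ratio of the face frame
    to the image and with the first ratio r1, and negatively correlated with
    the vertical eye offset r2 and the nose-to-bisector distance d, matching
    the correlations stated in the text; the exact functional form is a guess.
    """
    size_ratio = (w_f * h_f) / float(W * H)        # share of the image covered by the face frame
    pose_term = r1 * (1.0 - r2) / (1.0 + d / w_f)  # d normalised by frame width (assumption)
    return a * size_ratio + (1.0 - a) * pose_term
```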
The group of pictures in fig. 5(a), 5(b), 5(c), 5(d), 5(e) and 5(f) shows face images of the face at different angles scored with formula (1-4), with the weight a of the size of the face detection frame set to 0.5; the score is shown at the upper left corner of each picture. In fig. 5(a), with the head raised, the score is 0.374; in fig. 5(b), with the head lowered, the score is 0.483; in fig. 5(c), with the face turned to the left, the score is 0.076; in fig. 5(d), with the face turned to the right, the score is 0.008; in fig. 5(e), with the face far away and small, the score is 0.518; in fig. 5(f), with the face upright and close, the score is 0.742. It can be understood that fig. 5(f), with a correct and clear face, is the best picture in the group.
It can be seen that the score is effective: whether the face is tilted up or down, turned left or right, or far away, the score is not high; only when the face is close, clear and correct is the score high.
Therefore, in a face active perception task, the clearest and most correct picture captured during shooting can be selected as the capture result of the task.
In the embodiment of the invention, a scoring mechanism for evaluating the correction degree and the definition degree of the face is determined by using face detection and a few face key points, namely face characteristic points, so that the face image can be evaluated, and the recognition capabilities such as the recognition accuracy of face recognition, face snapshot and the like can be improved. And the evaluation index for evaluating the correction degree of the face in the image to be detected can be automatically determined, and the evaluation index for evaluating the correction degree and the definition degree of the face in the image to be detected can be automatically determined, so that the correction degree and the definition degree of the face can be rapidly and efficiently determined, and the efficiency of face recognition, face snapshot and the like can be improved. Meanwhile, guidance can be provided for active shooting equipment, such as shooting guidance for monitoring cameras, intelligent cameras and the like.
On the basis of the above embodiment, in an alternative embodiment of the present invention, after the face evaluation index is determined in step S106 according to the proportional relationship between the face feature points and the face detection area, the positional relationship between the face feature points, and the size ratio, the method may further include, as shown in fig. 6:
and S107, judging whether the face evaluation index is higher than a preset threshold value.
And S108, when the face evaluation index is higher than a preset threshold value, determining the image to be detected corresponding to the face evaluation index as a target image.
The preset threshold may be determined according to actual applications. Specifically, the corresponding preset threshold may be determined based on different requirements of different applications on the face correction degree and the definition degree. Therefore, the image meeting the requirement can be selected according to the preset threshold value.
In order to select a clearer and more correct face image, an image to be detected whose face evaluation index is higher than the preset threshold value can be selected as the target image.
Therefore, the correct and clear image meeting the requirements can be selected according to different requirements.
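As a small usage sketch under the same assumptions (the threshold value and names are placeholders, not prescribed by the embodiment), filtering target images by the preset threshold could look like this:

```python
def select_target_images(scored_images, threshold=0.5):
    """Keep images whose face evaluation index is higher than the preset threshold.

    scored_images: iterable of (image, score) pairs; the 0.5 threshold is only a
    placeholder, since the real value is chosen per application requirement.
    """
    return [image for image, score in scored_images if score > threshold]
```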
An embodiment of the present invention further provides an apparatus for evaluating a face image, as shown in fig. 7, including:
an obtaining module 701, configured to obtain an image to be detected;
a first determining module 702, configured to determine a face detection area and a face feature point of an image to be detected through a face detection algorithm;
a second determining module 703, configured to determine a proportional relationship between the face feature points and the face detection area and a position relationship between the face feature points;
the third determining module 704 is configured to determine a face evaluation index according to a proportional relationship between the face feature points and the face detection area and a position relationship between the face feature points.
In the embodiment of the invention, the face evaluation index for evaluating the face in the image to be detected can be determined according to the proportional relation between the face characteristic points and the face detection area in the image to be detected and the position relation between the face characteristic points. Therefore, the face image can be evaluated, and the accuracy of face recognition is improved.
Optionally, the face feature points include: two eyes, nose tip, left mouth corner and right mouth corner; both eyes include the left eye and the right eye;
a second determining module 703, configured to specifically determine a first ratio of a distance between two eyes in the horizontal direction to a width of the face detection area, a second ratio of a distance between two eyes in the vertical direction to a height of the face detection area, and a distance between a nose tip and a perpendicular bisector of a line connecting the two eyes;
the third determining module 704 is specifically configured to determine a face evaluation index according to the first ratio, the second ratio, and a distance between the nose tip and a perpendicular bisector of a line connecting the two eyes.
Optionally, the face evaluation index is positively correlated with the first ratio; the face evaluation index is inversely related to the distance between the nose tip and the perpendicular bisector of the line connecting the two eyes.
Optionally, the apparatus further comprises:
the fourth determining module is used for determining the size ratio of the face detection area to the image to be detected;
the third determining module 704 is specifically configured to determine a face evaluation index according to a proportional relationship between the face feature points and the face detection area, a position relationship between the face feature points, and a size ratio.
Optionally, the apparatus further comprises:
the judging module is used for judging whether the face evaluation index is higher than a preset threshold value after the face evaluation index is determined according to the proportional relation between the face characteristic points and the face detection area, the position relation between the face characteristic points and the size proportion;
and the fifth determining module is used for determining the image to be detected corresponding to the face evaluation index as the target image when the face evaluation index is higher than the preset threshold value.
Optionally, the face detection algorithm includes a multitask cascaded convolutional neural network (MTCNN) algorithm, and the face detection area includes a rectangular frame area.
It should be noted that the facial image evaluation device provided in the embodiment of the present invention is a device to which the facial image evaluation method is applied, and all embodiments of the facial image evaluation method are applicable to the device, and can achieve the same or similar beneficial effects.
An embodiment of the present invention further provides an electronic device, as shown in fig. 8, including a processor 801, a communication interface 802, a memory 803, and a communication bus 804, where the processor 801, the communication interface 802, and the memory 803 complete mutual communication through the communication bus 804.
A memory 803 for storing a computer program;
the processor 801 is configured to implement the method steps of the method for evaluating a face image in the above-described embodiment when executing the program stored in the memory 803.
In the embodiment of the invention, the face evaluation index for evaluating the face in the image to be detected can be determined according to the proportional relation between the face characteristic points and the face detection area in the image to be detected and the position relation between the face characteristic points. Therefore, the face image can be evaluated, and the accuracy of face recognition is improved.
The communication bus mentioned in the electronic device may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown, but this does not mean that there is only one bus or one type of bus.
The communication interface is used for communication between the electronic equipment and other equipment.
The Memory may include a Random Access Memory (RAM) or a non-volatile Memory (non-volatile Memory), such as at least one disk Memory. Optionally, the memory may also be at least one memory device located remotely from the processor.
The Processor may be a general-purpose Processor, and includes a Central Processing Unit (CPU), a Network Processor (NP), and the like; the device can also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, a discrete Gate or transistor logic device, or a discrete hardware component.
In a further embodiment of the present invention, a computer-readable storage medium is further provided, in which instructions are stored, which, when run on a computer, cause the computer to perform the method steps of the method for evaluating a face image in the above-mentioned embodiment.
In the embodiment of the invention, the face evaluation index for evaluating the face in the image to be detected can be determined according to the proportional relation between the face characteristic points and the face detection area in the image to be detected and the position relation between the face characteristic points. Therefore, the face image can be evaluated, and the accuracy of face recognition is improved.
In a further embodiment of the present invention, there is also provided a computer program product containing instructions which, when run on a computer, cause the computer to perform the method steps of the method for evaluating a face image according to the above embodiment.
In the embodiment of the invention, the face evaluation index for evaluating the face in the image to be detected can be determined according to the proportional relation between the face characteristic points and the face detection area in the image to be detected and the position relation between the face characteristic points. Therefore, the face image can be evaluated, and the accuracy of face recognition is improved.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When loaded and executed on a computer, cause the processes or functions described in accordance with the embodiments of the invention to occur, in whole or in part. The computer may be a general purpose computer, a special purpose computer, a network of computers, or other programmable device. The computer instructions may be stored in a computer readable storage medium or transmitted from one computer readable storage medium to another, for example, from one website site, computer, server, or data center to another website site, computer, server, or data center via wired (e.g., coaxial cable, fiber optic, Digital Subscriber Line (DSL)) or wireless (e.g., infrared, wireless, microwave, etc.). The computer-readable storage medium can be any available medium that can be accessed by a computer or a data storage device, such as a server, a data center, etc., that incorporates one or more of the available media. The usable medium may be a magnetic medium (e.g., floppy Disk, hard Disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., Solid State Disk (SSD)), among others.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
All the embodiments in the present specification are described in a related manner, and the same and similar parts among the embodiments may be referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the apparatus, device, computer-readable storage medium, and computer program product embodiments, since they are substantially similar to the method embodiments, the description is relatively simple, and for related matters, reference may be made to the partial description of the method embodiments.
The above description is only for the preferred embodiment of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.
Claims (13)
1. A method for evaluating a face image is characterized by comprising the following steps:
acquiring an image to be detected;
determining a face detection area and face characteristic points of the image to be detected through a face detection algorithm;
determining a proportional relationship between the face characteristic points and the face detection area and a position relationship between the face characteristic points;
and determining a face evaluation index according to the proportional relation between the face characteristic points and the face detection area and the position relation between the face characteristic points.
2. The method of claim 1, wherein the human face feature points comprise: two eyes, nose tip, left mouth corner and right mouth corner; the two eyes comprise a left eye and a right eye;
the determining the proportional relationship between the face feature points and the face detection area and the position relationship between the face feature points includes:
determining a first ratio of a distance between the two eyes in the horizontal direction to a width of the face detection area, a second ratio of a distance between the two eyes in the vertical direction to a height of the face detection area, and a distance between the nose tip and a perpendicular bisector of a line connecting the two eyes;
determining a face evaluation index according to the proportional relationship between the face characteristic points and the face detection area and the position relationship between the face characteristic points, wherein the determining comprises the following steps:
and determining the face evaluation index according to the first ratio, the second ratio and the distance between the nose tip and the perpendicular bisector of the line connecting the two eyes.
3. The method according to claim 2, wherein the face evaluation index is positively correlated with the first ratio; the face evaluation index is inversely related to the distance between the nose tip and the perpendicular bisector of the line connecting the two eyes.
4. The method according to any one of claims 1 to 3, wherein after determining the face detection area and the face feature point of the image to be detected by the face detection algorithm, the method further comprises:
determining the size ratio of the face detection area to the image to be detected;
determining a face evaluation index according to the proportional relationship between the face characteristic points and the face detection area and the position relationship between the face characteristic points, wherein the determining comprises the following steps:
and determining a face evaluation index according to the proportional relation between the face characteristic points and the face detection area, the position relation between the face characteristic points and the size proportion.
5. The method according to claim 4, wherein after determining the face evaluation index according to the proportional relationship between the face feature points and the face detection area, the positional relationship between the face feature points, and the size ratio, the method further comprises:
judging whether the face evaluation index is higher than a preset threshold value or not;
and when the face evaluation index is higher than the preset threshold value, determining that the image to be detected corresponding to the face evaluation index is a target image.
6. The method of claim 1, wherein the face detection algorithm comprises a multitasking cascaded convolutional neural network (MTCNN) algorithm, and wherein the face detection region comprises a rectangular box region.
7. An apparatus for evaluating a face image, comprising:
the acquisition module is used for acquiring an image to be detected;
the first determining module is used for determining a face detection area and face characteristic points of the image to be detected through a face detection algorithm;
the second determination module is used for determining the proportional relation between the face characteristic points and the face detection area and the position relation between the face characteristic points;
and the third determining module is used for determining a face evaluation index according to the proportional relation between the face characteristic points and the face detection area and the position relation between the face characteristic points.
8. The apparatus of claim 7, wherein the face feature points comprise: two eyes, nose tip, left mouth corner and right mouth corner; the two eyes comprise a left eye and a right eye;
the second determining module is specifically configured to determine a first ratio of a distance between the two eyes in the horizontal direction to a width of the face detection area, a second ratio of a distance between the two eyes in the vertical direction to a height of the face detection area, and a distance between the nose tip and a perpendicular bisector of a line connecting the two eyes;
the third determining module is specifically configured to determine the face evaluation index according to the first ratio, the second ratio, and a distance between the nose tip and a perpendicular bisector of the line connecting the two eyes.
9. The apparatus of claim 8, wherein the face evaluation indicator is positively correlated with the first ratio; the face evaluation index is inversely related to the distance between the nose tip and the perpendicular bisector of the line connecting the two eyes.
10. The apparatus of any one of claims 7 to 9, further comprising:
the fourth determining module is used for determining the size ratio of the face detection area to the image to be detected;
the third determining module is specifically configured to determine a face evaluation index according to a proportional relationship between the face feature points and the face detection area, a positional relationship between the face feature points, and the size ratio.
11. The apparatus of claim 10, further comprising:
the judging module is used for judging whether the face evaluation index is higher than a preset threshold value after the face evaluation index is determined according to the proportional relation between the face characteristic points and the face detection area, the position relation between the face characteristic points and the size proportion;
and the fifth determining module is used for determining that the image to be detected corresponding to the face evaluation index is a target image when the face evaluation index is higher than the preset threshold value.
12. The apparatus of claim 7, wherein the face detection algorithm comprises a multitasking cascaded convolutional neural network (MTCNN) algorithm, and wherein the face detection region comprises a rectangular box region.
13. An electronic device, comprising a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory communicate with each other via the communication bus;
the memory is used for storing a computer program;
the processor, when executing the program stored in the memory, implementing the method steps of any of claims 1-6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811540914.1A CN109753886B (en) | 2018-12-17 | 2018-12-17 | Face image evaluation method, device and equipment |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811540914.1A CN109753886B (en) | 2018-12-17 | 2018-12-17 | Face image evaluation method, device and equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109753886A true CN109753886A (en) | 2019-05-14 |
CN109753886B CN109753886B (en) | 2024-03-08 |
Family
ID=66403893
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811540914.1A Active CN109753886B (en) | 2018-12-17 | 2018-12-17 | Face image evaluation method, device and equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109753886B (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110276308A (en) * | 2019-06-25 | 2019-09-24 | 上海商汤智能科技有限公司 | Image processing method and device |
CN111081375A (en) * | 2019-12-27 | 2020-04-28 | 北京深测科技有限公司 | Early warning method and system for health monitoring |
CN114155593A (en) * | 2022-02-09 | 2022-03-08 | 深圳市海清视讯科技有限公司 | Face recognition method, face recognition device, recognition terminal and storage medium |
CN116631039A (en) * | 2023-06-07 | 2023-08-22 | 银河互联网电视有限公司 | Face detection method, device and equipment |
Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101383001A (en) * | 2008-10-17 | 2009-03-11 | 中山大学 | A Fast and Accurate Frontal Face Discrimination Method |
CN101615241A (en) * | 2008-06-24 | 2009-12-30 | 上海银晨智能识别科技有限公司 | A kind of screening technique of certificate photograph |
US20130243268A1 (en) * | 2012-03-13 | 2013-09-19 | Honeywell International Inc. | Face image prioritization based on face quality analysis |
CN104834898A (en) * | 2015-04-09 | 2015-08-12 | 华南理工大学 | Quality classification method for portrait photography image |
CN105046246A (en) * | 2015-08-31 | 2015-11-11 | 广州市幸福网络技术有限公司 | Identification photo camera capable of performing human image posture photography prompting and human image posture detection method |
CN105893946A (en) * | 2016-03-29 | 2016-08-24 | 中国科学院上海高等研究院 | Front face image detection method |
CN107122054A (en) * | 2017-04-27 | 2017-09-01 | 青岛海信医疗设备股份有限公司 | A kind of detection method and device of face deflection angle and luffing angle |
CN107590461A (en) * | 2017-09-12 | 2018-01-16 | 广东欧珀移动通信有限公司 | Face identification method and Related product |
CN107958444A (en) * | 2017-12-28 | 2018-04-24 | 江西高创保安服务技术有限公司 | A kind of face super-resolution reconstruction method based on deep learning |
CN108230293A (en) * | 2017-05-31 | 2018-06-29 | 深圳市商汤科技有限公司 | Determine method and apparatus, electronic equipment and the computer storage media of quality of human face image |
CN108960156A (en) * | 2018-07-09 | 2018-12-07 | 苏州浪潮智能软件有限公司 | A kind of Face datection recognition methods and device |
Patent Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101615241A (en) * | 2008-06-24 | 2009-12-30 | 上海银晨智能识别科技有限公司 | A kind of screening technique of certificate photograph |
CN101383001A (en) * | 2008-10-17 | 2009-03-11 | 中山大学 | A Fast and Accurate Frontal Face Discrimination Method |
US20130243268A1 (en) * | 2012-03-13 | 2013-09-19 | Honeywell International Inc. | Face image prioritization based on face quality analysis |
CN104834898A (en) * | 2015-04-09 | 2015-08-12 | 华南理工大学 | Quality classification method for portrait photography image |
CN105046246A (en) * | 2015-08-31 | 2015-11-11 | 广州市幸福网络技术有限公司 | Identification photo camera capable of performing human image posture photography prompting and human image posture detection method |
CN105893946A (en) * | 2016-03-29 | 2016-08-24 | 中国科学院上海高等研究院 | Front face image detection method |
CN107122054A (en) * | 2017-04-27 | 2017-09-01 | 青岛海信医疗设备股份有限公司 | A kind of detection method and device of face deflection angle and luffing angle |
CN108230293A (en) * | 2017-05-31 | 2018-06-29 | 深圳市商汤科技有限公司 | Determine method and apparatus, electronic equipment and the computer storage media of quality of human face image |
CN107590461A (en) * | 2017-09-12 | 2018-01-16 | 广东欧珀移动通信有限公司 | Face identification method and Related product |
CN107958444A (en) * | 2017-12-28 | 2018-04-24 | 江西高创保安服务技术有限公司 | A kind of face super-resolution reconstruction method based on deep learning |
CN108960156A (en) * | 2018-07-09 | 2018-12-07 | 苏州浪潮智能软件有限公司 | A kind of Face datection recognition methods and device |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110276308A (en) * | 2019-06-25 | 2019-09-24 | 上海商汤智能科技有限公司 | Image processing method and device |
CN111081375A (en) * | 2019-12-27 | 2020-04-28 | 北京深测科技有限公司 | Early warning method and system for health monitoring |
CN111081375B (en) * | 2019-12-27 | 2023-04-18 | 北京深测科技有限公司 | Early warning method and system for health monitoring |
CN114155593A (en) * | 2022-02-09 | 2022-03-08 | 深圳市海清视讯科技有限公司 | Face recognition method, face recognition device, recognition terminal and storage medium |
CN116631039A (en) * | 2023-06-07 | 2023-08-22 | 银河互联网电视有限公司 | Face detection method, device and equipment |
Also Published As
Publication number | Publication date |
---|---|
CN109753886B (en) | 2024-03-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113011385B (en) | Face silence living body detection method, face silence living body detection device, computer equipment and storage medium | |
CN109753886B (en) | Face image evaluation method, device and equipment | |
US8320642B2 (en) | Face collation apparatus | |
WO2019033574A1 (en) | Electronic device, dynamic video face recognition method and system, and storage medium | |
JP4824411B2 (en) | Face extraction device, semiconductor integrated circuit | |
CN111027504A (en) | Face key point detection method, device, equipment and storage medium | |
CN111767820B (en) | Method, device, equipment and storage medium for identifying focused object | |
TW201911130A (en) | Method and device for remake image recognition | |
CN110390229B (en) | Face picture screening method and device, electronic equipment and storage medium | |
CN109389018B (en) | Face angle recognition method, device and equipment | |
CN112348778A (en) | Object identification method and device, terminal equipment and storage medium | |
CN109993021A (en) | Face detection method, device and electronic device | |
CN113051978A (en) | Face recognition method, electronic device and readable medium | |
JP7255173B2 (en) | Human detection device and human detection method | |
CN115019364A (en) | Identity authentication method, device, electronic device and medium based on face recognition | |
CN115690892B (en) | A squint recognition method, device, electronic equipment and storage medium | |
CN112507767B (en) | Face recognition method and related computer system | |
CN111767821B (en) | Method, device, equipment and storage medium for identifying focused object | |
CN114332775A (en) | A Smoke Detection Method Based on Object Detection and Disordered Features | |
CN114463781A (en) | Method, device and equipment for determining trigger gesture | |
CN114255493B (en) | Image detection method, face detection method and device, equipment and storage medium | |
CN108737733A (en) | Information prompting method and device, electronic equipment and computer readable storage medium | |
US12307819B2 (en) | Identification model generation apparatus, identification apparatus, identification model generation method, identification method, and storage medium | |
US12198458B2 (en) | Character recognition method for dynamic images | |
TW201804368A (en) | Face recognition method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||