CN111126122B - Face recognition algorithm evaluation method and device
- Publication number: CN111126122B (application CN201811297995.7A)
- Authority: CN (China)
- Prior art keywords: face, small, labeling, recognition algorithm, image
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G—PHYSICS; G06—COMPUTING; CALCULATING OR COUNTING; G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
  - G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
  - G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
  - G06V40/16—Human faces, e.g. facial parts, sketches or expressions
  - G06V40/172—Classification, e.g. identification
- G—PHYSICS; G06—COMPUTING; CALCULATING OR COUNTING; G06F—ELECTRIC DIGITAL DATA PROCESSING
  - G06F18/00—Pattern recognition
  - G06F18/20—Analysing
  - G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
  - G06F18/217—Validation; Performance evaluation; Active pattern learning techniques
Abstract
The embodiments of the present application provide a face recognition algorithm evaluation method and device, relating to the technical field of computer information processing. The method determines whether a face thumbnail produced by a face recognition algorithm matches a labeled face on the frame image where the thumbnail is located, by calculating the distance between a preset thumbnail position point of the thumbnail and a preset labeling position point of the labeled face on the corresponding frame image of the test video sequence. When the distance is less than or equal to a preset value, the face thumbnail is determined to match the labeled face; when the distance is greater than the preset value, it is determined not to match. The capture rate, multi-capture rate, false capture rate, and face preference rate of the face recognition algorithm for the faces in the test video sequence are then calculated, so the method evaluates with higher accuracy and more comprehensively.
Description
Technical Field
The application relates to the technical field of computer information processing, in particular to a face recognition algorithm evaluation method and device.
Background
Face recognition is a biometric technology that performs identity recognition based on facial feature information. It covers a series of related techniques in which a camera or video camera captures images or video streams containing faces, the faces in the images are automatically detected and tracked, and recognition is then performed on the detected faces.
The face recognition algorithm is the core of face recognition technology. At present there are many face recognition algorithms of varying performance, and a poorly performing algorithm leads to a high false detection rate. Therefore, before a face recognition algorithm is applied to face image recognition, its performance should be evaluated; however, the evaluation methods in the prior art are low in accuracy and incomplete in coverage.
Disclosure of Invention
Accordingly, an object of the embodiments of the present application is to provide a face recognition algorithm evaluation method and apparatus that determine the degree of matching between a face thumbnail and the labeled face on the frame image where the thumbnail is located by calculating the distance between a preset thumbnail position point of the face thumbnail and a preset labeling position point of the labeled face on that frame image.
To achieve the above object, the preferred embodiments of the present application adopt the following technical solutions.
A preferred embodiment of the present application provides a face recognition algorithm evaluation method, comprising the following steps:
obtaining the labeled faces produced by labeling the faces in a test video sequence, together with the face feature information of the labeled faces, where the test video sequence includes a plurality of frame images and the face feature information includes the coordinates of the labeled face in the frame image, the number of the frame image, a face coincidence score, and the size of the labeled face on the frame image;
obtaining a plurality of face thumbnails produced by the face recognition algorithm under evaluation from the test video sequence, together with the thumbnail feature information of the face thumbnails, where the thumbnail feature information includes the number of the frame image where the face thumbnail is located and the thumbnail's coordinates on that frame image; and
determining whether the face thumbnail matches the labeled face on the frame image where it is located according to the distance between the preset thumbnail position point of the face thumbnail and the preset labeling position point of the labeled face on that frame image.
In a preferred embodiment of the present application, the step of determining whether the face thumbnail matches the labeled face on the frame image where it is located according to this distance includes:
comparing the distance between the preset thumbnail position point of the face thumbnail and the preset labeling position point of the labeled face in the same frame image against the product of the labeled face size and a preset length coefficient;
if the distance is less than or equal to the product of the labeled face size and the preset length coefficient, determining that the face thumbnail matches the labeled face; and
if the distance is greater than the product of the labeled face size and the preset length coefficient, determining that the face thumbnail does not match the labeled face.
In a preferred embodiment of the present application, the method further comprises:
calculating, according to the number of face thumbnails matched with labeled faces across all frame images, at least one of the following evaluation indices of the face recognition algorithm for the faces in the test video sequence: capture rate, multi-capture rate, false capture rate, and face preference rate.
In a preferred embodiment of the present application, the total number of people corresponding to the test video sequence is obtained from the number of non-repeated faces among the labeled faces,
and the step of calculating the capture rate of the face recognition algorithm for the faces in the test video sequence according to the number of face thumbnails matched with labeled faces across all frame images includes:
determining the total number of people recognized by the face recognition algorithm according to the number of face thumbnails matched with labeled faces across all frame images; and
calculating the ratio of the total number of people recognized by the face recognition algorithm to the total number of people corresponding to the test video sequence to obtain the capture rate of the face recognition algorithm.
In a preferred embodiment of the present application, the step of determining the total number of people recognized by the face recognition algorithm according to the number of face thumbnails matched with labeled faces across all frame images includes:
if the number of face thumbnails matched with a labeled face across all frame images is zero, determining that the labeled face is not recognized by the face recognition algorithm;
if the number of face thumbnails matched with a labeled face across all frame images is not zero, determining that the labeled face is recognized by the face recognition algorithm; and
summing the number of all labeled faces recognized by the face recognition algorithm to obtain the total number of people it has recognized.
In a preferred embodiment of the present application, the step of calculating the multi-capture rate of the face recognition algorithm for the faces in the test video sequence according to the number of face thumbnails matched with labeled faces across all frame images includes:
calculating the ratio of the total number of multi-captured face thumbnails across all frame images to the total number of face thumbnails to obtain the multi-capture rate.
In a preferred embodiment of the present application, the step of calculating the false capture rate of the face recognition algorithm for the faces in the test video sequence according to the number of face thumbnails matched with labeled faces across all frame images includes:
calculating the difference between the total number of face thumbnails and the number of face thumbnails matched with labeled faces to obtain the number of face thumbnails falsely captured by the face recognition algorithm; and
calculating the ratio of the number of falsely captured face thumbnails to the total number of face thumbnails to obtain the false capture rate of the face recognition algorithm.
In a preferred embodiment of the present application, the step of calculating the face preference rate of the face recognition algorithm for the faces in the test video sequence according to the number of face thumbnails matched with labeled faces across all frame images includes:
calculating the total number of non-repeated frontal faces among the face thumbnails, and taking its ratio to the total number of people corresponding to the test video sequence to obtain the face preference rate of the face recognition algorithm for the faces in the test video sequence.
In a preferred embodiment of the present application, the step of calculating the total number of non-repeated frontal faces among the face thumbnails includes:
determining the face coincidence score of the labeled face matched with the face thumbnail according to the thumbnail's coordinates on the frame image where it is located;
if the number of face thumbnails matched with a labeled face is not zero, and the face coincidence score of the matched labeled face is greater than or equal to a preset score, determining that the labeled face has a corresponding non-repeated frontal face;
if the number of face thumbnails matched with a labeled face is zero, or the face coincidence score of the matched labeled face is less than the preset score, determining that the labeled face has no corresponding non-repeated frontal face; and
counting the labeled faces that have a corresponding non-repeated frontal face to obtain the total number of non-repeated frontal faces among the face thumbnails.
A preferred embodiment of the present application also provides a face recognition algorithm evaluation device, comprising:
a labeled face information obtaining module, configured to obtain the labeled faces produced by labeling the faces in a test video sequence together with the face feature information of the labeled faces, where the test video sequence includes a plurality of frame images and the face feature information includes the coordinates of the labeled face in the frame image, the number of the frame image, a face coincidence score, and the size of the labeled face on the frame image;
a face thumbnail information obtaining module, configured to obtain a plurality of face thumbnails produced by the face recognition algorithm under evaluation from the test video sequence together with the thumbnail feature information of the face thumbnails, where the thumbnail feature information includes the number of the frame image where the face thumbnail is located and the thumbnail's coordinates on that frame image; and
a matching judgment module, configured to determine whether the face thumbnail matches the labeled face on the frame image where it is located according to the distance between the preset thumbnail position point of the face thumbnail and the preset labeling position point of the labeled face on that frame image.
Compared with the prior art, the present application has the following beneficial effects:
the embodiments of the present application provide a face recognition algorithm evaluation method and device in which the faces in a test video sequence are first labeled to obtain labeled faces and their face feature information; a plurality of face thumbnails produced by the face recognition algorithm under evaluation, together with their thumbnail feature information, are then obtained; and whether each face thumbnail matches the labeled face on the frame image where it is located is then determined according to the distance between the preset thumbnail position point of the thumbnail and the preset labeling position point of the labeled face on that frame image. The degree of matching between a face thumbnail output by the algorithm and a target face in the test video sequence is thereby converted into the computed distance between the two preset position points, which makes the evaluation more accurate, provides accurate data for the further calculation of the various evaluation indices, and makes the evaluation more comprehensive.
Additional features and advantages of the application will be set forth in the description which follows, and in part will be apparent from the description, or may be learned by practice of the embodiments of the application. The objectives and other advantages of the application will be realized and attained by the structure particularly pointed out in the written description and claims thereof as well as the appended drawings.
Drawings
For a clearer description of the technical solutions of the embodiments of the present application, the drawings needed in the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present application and therefore should not be considered as limiting its scope; those of ordinary skill in the art can obtain other related drawings from these drawings without inventive effort:
fig. 1 is a schematic flow chart of a face recognition algorithm evaluation method according to a preferred embodiment of the present application;
FIG. 2 is a schematic diagram of face recognition by a face recognition algorithm according to a preferred embodiment of the present application;
FIG. 3 is another schematic diagram of face recognition by the face recognition algorithm according to the preferred embodiment of the present application;
fig. 4 is a flowchart illustrating the substeps of step S230 in the face recognition algorithm evaluation method according to the preferred embodiment of the present application;
Fig. 5 is a flowchart illustrating the substeps of step S240 in the face recognition algorithm evaluation method according to the preferred embodiment of the present application;
fig. 6 is another flow chart of the substeps of step S240 in the face recognition algorithm evaluation method according to the preferred embodiment of the present application;
fig. 7 is a schematic flow chart of a substep of step S240 in a face recognition algorithm evaluation method according to a preferred embodiment of the present application;
fig. 8 is a functional block diagram of a face recognition algorithm evaluation device according to a preferred embodiment of the present application.
Reference numerals: 200 - face recognition algorithm evaluation device; 210 - labeled face information obtaining module; 220 - face thumbnail information obtaining module; 230 - matching judgment module.
Detailed Description
The following description of the embodiments of the present application will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present application, but not all embodiments. The components of the embodiments of the present application generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the application, as presented in the figures, is not intended to limit the scope of the application, as claimed, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments according to the application without any inventive effort, are within the scope of the application.
It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further definition or explanation thereof is necessary in the following figures. Meanwhile, in the description of the present application, the terms "first", "second", and the like are used only to distinguish the description, and are not to be construed as indicating or implying relative importance.
Referring to fig. 1, fig. 1 is a flow chart of an evaluation method of a face recognition algorithm according to a preferred embodiment of the present application, and it should be noted that the evaluation method according to the embodiment of the present application is not limited to the specific order shown in fig. 1 and described below. The method may be implemented by the following steps S210 to S240.
Step S210: obtain the labeled faces produced by labeling the faces in the test video sequence, together with the face feature information of the labeled faces.
In this embodiment, the test video sequence may be a set of test videos, close to a real application scene such as a canteen entrance, in which multiple people walk past. The test video sequence may be obtained in various ways: it may be downloaded from a server, imported from an external terminal, or captured in real time; this embodiment places no particular limitation on the source.
After the test video sequence is obtained, the faces in it can be labeled. The sequence can be divided into a plurality of frame images, and labeling can be performed on the faces in each frame image, for example by drawing a rectangular box around each face to obtain a labeled face. After labeling, the face feature information of each labeled face can be collected and recorded in tabular form. In the embodiments of the present application, the face feature information may include at least one of the following parameters: the coordinates of the labeled face in the frame image, the number of the frame image, the face coincidence score, and the size of the labeled face on the frame image. It may be recorded in a table of the following form.
| Labeled face \ Frame | 1 | … | j | … | m |
|---|---|---|---|---|---|
| 1 | (1, 1, x_11, y_11, s_11, l_11) | … | (1, j, x_1j, y_1j, s_1j, l_1j) | … | (1, m, x_1m, y_1m, s_1m, l_1m) |
| … | … | … | … | … | … |
| i | (i, 1, x_i1, y_i1, s_i1, l_i1) | … | (i, j, x_ij, y_ij, s_ij, l_ij) | … | (i, m, x_im, y_im, s_im, l_im) |
| … | … | … | … | … | … |
| n | (n, 1, x_n1, y_n1, s_n1, l_n1) | … | (n, j, x_nj, y_nj, s_nj, l_nj) | … | (n, m, x_nm, y_nm, s_nm, l_nm) |
When recording the face feature information of the labeled faces, each labeled face can be numbered, and the frame images in the test video sequence can be numbered sequentially. In the table above, the header row gives the frame numbers (the 1st, 2nd, …, j-th frames); the total number of frames in the test video sequence is denoted N_frame, and here the total is m. The first column gives the numbers of the labeled faces, that is, of the different people: in the embodiments of the present application, different labeled faces represent different people, so each person has a corresponding number. The rows thus correspond to labeled face 1, labeled face 2, …, labeled face i, and so on; with n people in total in the test video sequence, the total number of labeled faces is also n.

As described above, the face feature information of a labeled face also includes the coordinates of the labeled face, its face coincidence score, and its face size. In the table, x_ij and y_ij are the abscissa and ordinate of the pixel position corresponding to the preset labeling position point of labeled face i on the j-th frame image; the preset labeling position point may be the center point of the labeled face. s_ij is the face coincidence score of labeled face i on the j-th frame image, which can be obtained by scoring the quality of the labeled face on that frame. l_ij is the size of labeled face i on the j-th frame image. Each record of face feature information can thus be written as:

P_ij = (i, j, x_ij, y_ij, s_ij, l_ij).
Step S220: obtain a plurality of face thumbnails produced by the face recognition algorithm under evaluation from the test video sequence, together with the thumbnail feature information of the face thumbnails, where the thumbnail feature information includes at least one of the number of the frame image where the face thumbnail is located and the thumbnail's coordinates on that frame image.
When evaluating the face recognition algorithm, a face recognition camera equipped with the algorithm can be pointed squarely at the playing test video sequence with its face recognition function enabled. As the sequence plays, the algorithm automatically recognizes the faces in it and crops out the recognized face thumbnails, which can be saved to a preset path; the evaluation is then carried out by analyzing the information attached to these thumbnails.
In detail, each face thumbnail obtained by the face recognition algorithm carries corresponding thumbnail feature information, which may include at least one of the number of the frame image where the thumbnail is located and the thumbnail's coordinates on that frame image. The number of face thumbnails produced by the algorithm is denoted N_all. The thumbnail feature information can be written as P'_k = (j, x'_kj, y'_kj), where k is the number of the face thumbnail, an integer between 1 and N_all, and j is the number of the frame image where the thumbnail is located, an integer between 1 and N_frame. x'_kj and y'_kj are the abscissa and ordinate of the pixel position to which the preset thumbnail position point of face thumbnail k maps on the j-th frame image of the test video sequence; the preset thumbnail position point may be the center point of the face thumbnail.
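Viewed as data, the two kinds of records above are simple tuples. The following Python sketch is purely illustrative; the class and field names are assumptions chosen to mirror the notation P_ij = (i, j, x_ij, y_ij, s_ij, l_ij) and P'_k = (j, x'_kj, y'_kj):

```python
from dataclasses import dataclass

@dataclass
class LabeledFace:
    """One annotation record P_ij: labeled face i as it appears on frame j."""
    face_id: int    # i: number of the labeled face (one number per person)
    frame_id: int   # j: number of the frame image
    x: float        # x_ij: abscissa of the preset labeling position point (face center)
    y: float        # y_ij: ordinate of the preset labeling position point
    score: float    # s_ij: face coincidence (quality) score on this frame
    size: float     # l_ij: size of the labeled face on this frame

@dataclass
class FaceThumbnail:
    """One thumbnail record P'_k: face thumbnail k mapped back onto frame j."""
    thumb_id: int   # k: number of the face thumbnail, 1..N_all
    frame_id: int   # j: number of the frame image the thumbnail comes from
    x: float        # x'_kj: abscissa of the preset thumbnail position point
    y: float        # y'_kj: ordinate of the preset thumbnail position point
```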
Step S230: determine whether the face thumbnail matches the labeled face on the frame image where the thumbnail is located, according to the distance between the preset thumbnail position point of the thumbnail and the preset labeling position point of the labeled face on that frame image.
As described above, the preset thumbnail position point may be the center point of the face thumbnail, and the preset labeling position point may be the center point of the labeled face. The distance between the two points can be calculated as

d_ki = sqrt( (x'_kj − x_ij)^2 + (y'_kj − y_ij)^2 ),

where d_ki denotes the distance between the preset thumbnail position point of face thumbnail k and the preset labeling position point of labeled face i on the frame image where the thumbnail is located.
In the embodiments of the present application, whether a face thumbnail matches a labeled face is judged by calculating the distance between the preset thumbnail position point of the thumbnail and the preset labeling position point of the labeled face on the frame image where the thumbnail is located, which gives the evaluation higher accuracy.
In the prior art, the agreement between a recognized face image and the labeled face image is instead measured by the ratio of the area of overlap between the recognized face region and the labeled face region to the area of their union. This approach has serious shortcomings, described below with reference to fig. 2 and fig. 3.
The shortcomings are as follows. Suppose the face regions D1 and D2 detected and output by the face recognition algorithm satisfy

area(D1 ∩ L) / area(D1 ∪ L) = area(D2 ∩ L) / area(D2 ∪ L),

that is, D1 and D2 overlap the face region of labeled area L to an equal degree, as shown in fig. 2 and fig. 3. From fig. 2 it is easy to see that D2 can be regarded as a correctly recognized face image, while D1 has recognized only half a face. The prior-art method considers that the algorithm has captured the face in both cases and cannot compare the two further, yet it is apparent from the figure that recognition region D2 reflects significantly better algorithm performance than D1. From fig. 3, D2 can again be seen as the algorithm capturing the face, whereas D1 recognizes only a very small part of the face and, subjectively, should be counted as a false capture; the prior-art method would still consider the face captured, so misjudgment easily occurs. The embodiments of the present application instead use the distance between the preset thumbnail position point of the face thumbnail and the preset labeling position point of the labeled face on the frame image where the thumbnail is located: the smaller the distance, the better the thumbnail matches the labeled face image. Combined with the face preference rate index, this reflects algorithm performance more accurately.
As shown in fig. 4, determining whether the face thumbnail matches the labeled face on the frame image where it is located according to this distance may include the following substeps S231 to S233.
Substep S231: compare the distance between the preset thumbnail position point of the face thumbnail and the preset labeling position point of the labeled face in the same frame image against the product of the labeled face size and the preset length coefficient.
Substep S232: if the distance is less than or equal to the product of the labeled face size and the preset length coefficient, determine that the face thumbnail matches the labeled face.
Substep S233: if the distance is greater than the product of the labeled face size and the preset length coefficient, determine that the face thumbnail does not match the labeled face.
The matching decision above can be expressed by the following formula:

m_kij = 1 if d_ki ≤ β · l_ij, and m_kij = 0 otherwise,

where m_kij is the face coincidence indicator between face thumbnail k and labeled face i, and β is a preset length coefficient related to the face coincidence score, with a value in the range 0 to 1; the higher the face coincidence score, the smaller its value. In practice, a suitable value can be obtained from the analysis of a large amount of experimental data. When m_kij equals 1, face thumbnail k matches labeled face i on the frame image where the thumbnail is located; when m_kij equals 0, it does not.
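Putting the distance computation and the threshold test together, the matching rule m_kij can be sketched as follows, reusing the record types assumed above (beta is the preset length coefficient):

```python
import math

def matches(thumb: FaceThumbnail, face: LabeledFace, beta: float) -> bool:
    """m_kij: True when thumbnail k matches labeled face i, i.e. both lie on
    the same frame and the center distance d_ki is at most beta * l_ij."""
    if thumb.frame_id != face.frame_id:
        return False
    d_ki = math.hypot(thumb.x - face.x, thumb.y - face.y)  # Euclidean distance
    return d_ki <= beta * face.size
```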
Step S240: according to the number of face thumbnails matched with labeled faces across all frame images, calculate at least one of the following evaluation indices of the face recognition algorithm for the faces in the test video sequence: capture rate, multi-capture rate, false capture rate, and face preference rate.
In detail, as shown in fig. 5, the step of calculating the capture rate of the face recognition algorithm for the faces in the test video sequence according to the number of face thumbnails matched with labeled faces across all frame images may include the following substeps S241 to S242.
Substep S241: determine the total number of people recognized by the face recognition algorithm according to the number of face thumbnails matched with labeled faces across all frame images.
In detail, if the number of face thumbnails matched with a labeled face across all frame images is zero, it is determined that the labeled face is not recognized by the face recognition algorithm; if the number is not zero, it is determined that the labeled face is recognized. Summing over all labeled faces recognized by the algorithm gives the total number of people it has recognized.
The total number of people recognized by the face recognition algorithm can be expressed by the following formula:

N_capture = Σ_{i=1}^{N_face} r_i,

where N_face is the total number of labeled faces in the test video sequence and N_capture is the total number of people recognized by the algorithm. r_i is the recognition indicator of the i-th labeled face, with r_i = 1 if N_i > 0 and r_i = 0 otherwise, where N_i is the number of face thumbnails matched with labeled face i in the test video sequence:

N_i = Σ_{j=1}^{N_frame} Σ_{k=1}^{N_all} m_kij.
Substep S242: calculate the ratio of the total number of people recognized by the face recognition algorithm to the total number of people corresponding to the test video sequence to obtain the capture rate:

R_capture = N_capture / n,

where R_capture is the capture rate of the face recognition algorithm and n is the total number of people corresponding to the test video sequence.
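In code, substeps S241 and S242 might look as follows. This is a minimal sketch under the assumptions above; faces and thumbs are lists of the assumed record types, and matches() is the rule sketched earlier:

```python
def capture_rate(faces, thumbs, beta):
    """R_capture = N_capture / n: the fraction of labeled people for whom at
    least one face thumbnail matches on at least one frame (r_i = 1)."""
    people = {f.face_id for f in faces}            # n distinct labeled people
    captured = {
        f.face_id
        for f in faces
        if any(matches(t, f, beta) for t in thumbs)
    }                                              # people with N_i > 0
    return len(captured) / len(people)
```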
Substep S243: calculate the multi-capture rate of the face recognition algorithm for the faces in the test video sequence according to the number of face thumbnails matched with labeled faces across all frame images. The multi-capture rate, denoted R_more, is the ratio of the total number of multi-captured face thumbnails across all frame images to the total number of face thumbnails:

R_more = N_more / N_all,

where N_more is the total number of multi-captured face thumbnails.
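The patent does not spell out how N_more is counted. One plausible reading, sketched below as an assumption rather than the authoritative definition, is that every matched thumbnail beyond the first for the same labeled person counts as a multi-capture:

```python
def multi_capture_rate(faces, thumbs, beta):
    """R_more = N_more / N_all, counting N_more as matched thumbnails beyond
    the first one per labeled person (assumed interpretation)."""
    thumbs_per_person = {}
    for t in thumbs:
        for f in faces:
            if matches(t, f, beta):
                thumbs_per_person.setdefault(f.face_id, set()).add(t.thumb_id)
                break  # attribute each thumbnail to at most one person
    n_more = sum(len(ids) - 1 for ids in thumbs_per_person.values())
    return n_more / len(thumbs)
```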
as shown in fig. 6, the step of calculating the false beat rate of the face recognition algorithm for the faces in the test video sequence according to the number of face small images matched with the labeled faces in all frame images includes the following substeps S244 to S245.
In the substep S244, the difference between the number of face small images and the number of face small images matched with the labeled face is calculated, so as to obtain the number of face small images that are miscaptured by the face recognition algorithm.
And step S245, calculating the ratio of the number of face small images which are miscaptured by the face recognition algorithm to the number of the face small images to obtain the miscapturing rate corresponding to the face recognition algorithm.
The false capture rate can be expressed by the following formula:

R_mistake = N_mistake / N_all,

where R_mistake is the false capture rate and N_mistake is the number of face thumbnails falsely captured by the face recognition algorithm, i.e. the total number of thumbnails minus the number matched with labeled faces.
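Substeps S244 and S245 can be sketched in the same way, again assuming the record types and matches() helper from above:

```python
def false_capture_rate(faces, thumbs, beta):
    """R_mistake = N_mistake / N_all: thumbnails that match no labeled face
    on their frame are counted as false captures."""
    n_mistake = sum(
        1 for t in thumbs
        if not any(matches(t, f, beta) for f in faces)
    )
    return n_mistake / len(thumbs)
```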
The step of calculating the face preference rate of the face recognition algorithm for the faces in the test video sequence according to the number of face thumbnails matched with labeled faces across all frame images obtains the preference rate as the ratio of the total number of non-repeated frontal faces among the face thumbnails to the total number of people corresponding to the test video sequence.
In detail, as shown in fig. 7, the step of calculating the total number of non-repeated frontal faces among the face thumbnails includes substeps S246 to S249.
Substep S246: determine the face coincidence score of the labeled face matched with the face thumbnail according to the thumbnail's coordinates on the frame image where it is located.
Substep S247: if the number of face thumbnails matched with a labeled face is not zero, and the face coincidence score of the matched labeled face is greater than or equal to a preset score, determine that the labeled face has a corresponding non-repeated frontal face.
Substep S248: if the number of face thumbnails matched with a labeled face is zero, or the face coincidence score of the matched labeled face is less than the preset score, determine that the labeled face has no corresponding non-repeated frontal face.
Substep S249: count the labeled faces that have a corresponding non-repeated frontal face to obtain the total number of non-repeated frontal faces among the face thumbnails.
The face preference rate can be expressed by the following formula:

R_op = (Σ_{i=1}^{n} op_i) / n,

where op_i is the face preference matching indicator of labeled face i: op_i = 1 if there exists a frame j, with j an integer in [1, N_frame], on which some face thumbnail matches labeled face i (m_kij = 1) and the face coincidence score s_ij is greater than or equal to the preset score; otherwise op_i = 0.
When the face recognition algorithm recognizes a face thumbnail matched with the labeled face and the face coincidence score of the matched frame is greater than or equal to the preset score (for example, the preset score may be 90 points), op_i equals 1 and the face thumbnail is classified as a preferred face thumbnail.
When the algorithm recognizes no face thumbnail matched with the labeled face, or it does but the thumbnail is unclear or the face in it is incomplete so that the face coincidence score of the matched labeled face falls below the preset score, op_i equals 0 and the face thumbnail cannot be classified as a preferred face thumbnail.
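The face preference rate can be sketched likewise. The preset score of 90 points is the example value from above; treating the coincidence score lookup as one record per person-frame pair is an assumption of this sketch:

```python
def face_preference_rate(faces, thumbs, beta, preset_score=90.0):
    """R_op = (sum of op_i) / n: op_i = 1 when some matched thumbnail of
    person i falls on a frame whose coincidence score s_ij >= preset_score."""
    people = {f.face_id for f in faces}
    preferred = {
        f.face_id
        for f in faces                 # one LabeledFace record per (person, frame)
        if f.score >= preset_score and any(matches(t, f, beta) for t in thumbs)
    }
    return len(preferred) / len(people)
```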
Further, fig. 8 shows a functional block diagram of the face recognition algorithm evaluation device according to the preferred embodiment of the present application. In this embodiment, the face recognition algorithm evaluation device 200 includes:
the labeled face information obtaining module 210, configured to obtain the labeled faces produced by labeling the faces in a test video sequence together with the face feature information of the labeled faces, where the test video sequence includes a plurality of frame images and the face feature information includes the coordinates of the labeled face in the frame image, the number of the frame image, a face coincidence score, and the size of the labeled face on the frame image;
the face thumbnail information obtaining module 220, configured to obtain a plurality of face thumbnails produced by the face recognition algorithm under evaluation from the test video sequence together with the thumbnail feature information of the face thumbnails, where the thumbnail feature information includes the number of the frame image where the face thumbnail is located and the thumbnail's coordinates on that frame image; and
the matching judgment module 230, configured to determine whether the face thumbnail matches the labeled face on the frame image where it is located according to the distance between the preset thumbnail position point of the thumbnail and the preset labeling position point of the labeled face on that frame image.
It can be understood that, for the specific operation of each functional module in this embodiment, reference may be made to the detailed description of the corresponding steps in the method embodiment above, which is not repeated here.
In summary, the face recognition algorithm evaluation method and device provided by the embodiments of the present application first obtain the labeled faces produced by labeling the faces in a test video sequence together with their face feature information, then obtain the face thumbnails produced by the face recognition algorithm under evaluation together with their thumbnail feature information, and then, based on the distance between the preset thumbnail position point of each face thumbnail and the preset labeling position point of the labeled face on the frame image where the thumbnail is located, calculate at least one of the capture rate, multi-capture rate, false capture rate, and face preference rate of the algorithm for the faces in the test video sequence. The degree of matching between a face thumbnail output by the algorithm and a target face in the test video sequence is thus converted into the computed distance between the thumbnail's preset position point and the labeled face's preset labeling position point, which makes the evaluation more accurate, provides accurate data for the further calculation of the various evaluation indices, and makes the evaluation more comprehensive.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may also be implemented in other manners; the apparatus embodiments described above are merely illustrative. The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods, and computer program products according to various embodiments of the present application. In this regard, each block in a flowchart or block diagram may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that in some alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures: two blocks shown in succession may in fact be executed substantially concurrently, or sometimes in the reverse order, depending on the functionality involved. Each block of the block diagrams and/or flowcharts, and combinations of blocks, can be implemented by special-purpose hardware-based systems that perform the specified functions or actions, or by combinations of special-purpose hardware and computer instructions.
In addition, functional modules in the embodiments of the present application may be integrated together to form a single part, or each module may exist alone, or two or more modules may be integrated to form a single part.
If the functions are implemented in the form of software functional modules and sold or used as a stand-alone product, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the methods according to the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The above description covers only the preferred embodiments of the present application and is not intended to limit it; those skilled in the art may make various modifications and variations. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present application shall be included in its protection scope.
The foregoing is merely illustrative of the present application and does not limit it; anyone skilled in the art will readily conceive of variations or substitutions within the technical scope disclosed herein, and these shall fall within the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
It is noted that relational terms such as first and second are used solely to distinguish one entity or action from another, without necessarily requiring or implying any actual such relationship or order between them. Moreover, the terms "comprises," "comprising," and any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent thereto. Without further limitation, an element preceded by "comprising a …" does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises it.
Claims (8)
1. A face recognition algorithm evaluation method, characterized by comprising the following steps:
obtaining the labeled faces produced by labeling the faces in a test video sequence, together with the face feature information of the labeled faces, where the test video sequence includes a plurality of frame images and the face feature information includes the coordinates of the labeled face in the frame image, the number of the frame image, a face coincidence score, and the size of the labeled face on the frame image;
obtaining a plurality of face thumbnails produced by the face recognition algorithm under evaluation from the test video sequence, together with the thumbnail feature information of the face thumbnails, where the thumbnail feature information includes the number of the frame image where the face thumbnail is located and the thumbnail's coordinates on that frame image;
comparing the distance between the preset thumbnail position point of the face thumbnail and the preset labeling position point of the labeled face in the same frame image against the product of the labeled face size and a preset length coefficient;
if the distance is less than or equal to the product of the labeled face size and the preset length coefficient, determining that the face thumbnail matches the labeled face;
if the distance is greater than the product of the labeled face size and the preset length coefficient, determining that the face thumbnail does not match the labeled face; and
calculating, according to the number of face thumbnails matched with labeled faces across all frame images, at least one of the following evaluation indices of the face recognition algorithm for the faces in the test video sequence: capture rate, multi-capture rate, false capture rate, and face preference rate.
2. The face recognition algorithm evaluation method according to claim 1, wherein the total number of people corresponding to the test video sequence is obtained from the number of non-repeated faces among the labeled faces,
and the step of calculating the capture rate of the face recognition algorithm for the faces in the test video sequence according to the number of face thumbnails matched with labeled faces across all frame images comprises:
determining the total number of people recognized by the face recognition algorithm according to the number of face thumbnails matched with labeled faces across all frame images; and
calculating the ratio of the total number of people recognized by the face recognition algorithm to the total number of people corresponding to the test video sequence to obtain the capture rate of the face recognition algorithm.
3. The face recognition algorithm evaluation method according to claim 2, wherein the step of determining the total number of people recognized by the face recognition algorithm according to the number of face thumbnails matched with labeled faces across all frame images comprises:
if the number of face thumbnails matched with a labeled face across all frame images is zero, determining that the labeled face is not recognized by the face recognition algorithm;
if the number of face thumbnails matched with a labeled face across all frame images is not zero, determining that the labeled face is recognized by the face recognition algorithm; and
summing the number of all labeled faces recognized by the face recognition algorithm to obtain the total number of people it has recognized.
4. The method according to claim 1, wherein the step of calculating the multi-capture rate of the face recognition algorithm for the faces in the test video sequence according to the number of face thumbnails matched with labeled faces across all frame images comprises:
calculating the ratio of the total number of multi-captured face thumbnails across all frame images to the total number of face thumbnails to obtain the multi-capture rate.
5. The method according to claim 1, wherein the step of calculating the false capture rate of the face recognition algorithm for the faces in the test video sequence according to the number of face thumbnails matched with labeled faces across all frame images comprises:
calculating the difference between the total number of face thumbnails and the number of face thumbnails matched with labeled faces to obtain the number of face thumbnails falsely captured by the face recognition algorithm; and
calculating the ratio of the number of falsely captured face thumbnails to the total number of face thumbnails to obtain the false capture rate of the face recognition algorithm.
6. The face recognition algorithm evaluation method according to claim 1, wherein the step of calculating the face preference rate of the face recognition algorithm for the faces in the test video sequence according to the number of face thumbnails matched with labeled faces across all frame images comprises:
calculating the total number of non-repeated frontal faces among the face thumbnails, and taking its ratio to the total number of people corresponding to the test video sequence to obtain the face preference rate of the face recognition algorithm for the faces in the test video sequence.
7. The face recognition algorithm evaluation method of claim 6, wherein the step of calculating the total number of non-repeated frontal faces among the face thumbnails comprises:
determining the face coincidence score of the labeled face matched with the face thumbnail according to the thumbnail's coordinates on the frame image where it is located;
if the number of face thumbnails matched with a labeled face is not zero, and the face coincidence score of the matched labeled face is greater than or equal to a preset score, determining that the labeled face has a corresponding non-repeated frontal face;
if the number of face thumbnails matched with a labeled face is zero, or the face coincidence score of the matched labeled face is less than the preset score, determining that the labeled face has no corresponding non-repeated frontal face; and
counting the labeled faces that have a corresponding non-repeated frontal face to obtain the total number of non-repeated frontal faces among the face thumbnails.
8. A face recognition algorithm evaluation device, characterized in that the face recognition algorithm evaluation device comprises:
the system comprises a labeling face information obtaining module, a labeling face information obtaining module and a labeling module, wherein the labeling face is obtained after labeling a face in a test video sequence, and face characteristic information of the labeling face is obtained, the test video sequence comprises a plurality of frame images, and the face characteristic information comprises coordinates of the labeling face in the frame images, numbers of the frame images, a face coincidence degree score and a face size of the labeling face on the frame images;
the human face small image information obtaining module is used for obtaining a plurality of human face small images obtained by identifying the test video sequence through a human face recognition algorithm to be evaluated and small image characteristic information of the human face small images, wherein the small image characteristic information comprises the number of a frame image where the human face small images are located and the coordinates of the frame image where the human face small images are located;
the matching judgment module is used for comparing the distance between the preset small image position point of the small image of the human face and the preset labeling position point of the labeled human face in the same frame of image and the product of the size of the human face and the preset length coefficient; if the distance between the preset small image position point of the small image of the human face and the preset labeling position point of the labeling human face in the same frame of image is smaller than or equal to the product of the size of the labeling human face and the preset length coefficient, determining that the small image of the human face is matched with the labeling human face; if the distance between the preset small image position point of the small image of the human face and the preset labeling position point of the labeling human face in the same frame of image is larger than the product of the size of the labeling human face and the preset length coefficient, determining that the small image of the human face is not matched with the labeling human face;
the matching judgment module is further configured to calculate, according to the number of face thumbnails matched with the labeled faces in all the frame images, at least one evaluation index of the face recognition algorithm for the faces in the test video sequence from among the capture rate, the multi-capture rate, the false capture rate and the face preference rate.
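The matching test that the matching judgment module applies is a single distance comparison per thumbnail/labeled-face pair within one frame image. Below is a minimal sketch under two explicit assumptions of ours, since the patent leaves both presets open: the preset position points are taken as box centers, and the distance as Euclidean; `length_coeff` stands in for the preset length coefficient.

```python
import math
from typing import Tuple

Box = Tuple[float, float, float, float]  # (x1, y1, x2, y2) in frame coordinates

def center(box: Box) -> Tuple[float, float]:
    # Assumption: the "preset position point" of a box is its center.
    return ((box[0] + box[2]) / 2.0, (box[1] + box[3]) / 2.0)

def thumbnail_matches_label(thumb_box: Box, label_box: Box,
                            label_face_size: float, length_coeff: float) -> bool:
    """Match-test sketch: the thumbnail matches the labeled face iff the
    distance between the two position points is at most
    labeled-face size * preset length coefficient."""
    (tx, ty), (lx, ly) = center(thumb_box), center(label_box)
    return math.hypot(tx - lx, ty - ly) <= label_face_size * length_coeff
```

Counting, per frame, how many thumbnails pass this test against each labeled face yields the match numbers from which the capture rate, multi-capture rate, false capture rate and face preference rate are then derived.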
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811297995.7A | 2018-10-31 | 2018-10-31 | Face recognition algorithm evaluation method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111126122A CN111126122A (en) | 2020-05-08 |
CN111126122B (en) | 2023-10-27
Family ID: 70494546
Family Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811297995.7A (Active) | 2018-10-31 | 2018-10-31 | Face recognition algorithm evaluation method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111126122B (en) |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111639298B (en) * | 2020-05-15 | 2023-06-20 | 圣点世纪科技股份有限公司 | Running lot detection method of biological feature recognition algorithm |
CN111626369B (en) * | 2020-05-29 | 2021-07-30 | 广州云从博衍智能科技有限公司 | Face recognition algorithm effect evaluation method and device, machine readable medium and equipment |
CN111738349B (en) * | 2020-06-29 | 2023-05-02 | 重庆紫光华山智安科技有限公司 | Detection effect evaluation method and device of target detection algorithm, storage medium and equipment |
CN112200217B (en) * | 2020-09-09 | 2023-06-09 | 天津津航技术物理研究所 | Identification algorithm evaluation method and system based on infrared image big data |
CN112631896B (en) * | 2020-12-02 | 2024-04-05 | 武汉旷视金智科技有限公司 | Equipment performance test method and device, storage medium and electronic equipment |
CN112836759B (en) * | 2021-02-09 | 2023-05-30 | 重庆紫光华山智安科技有限公司 | Machine-selected picture evaluation method and device, storage medium and electronic equipment |
CN117636426A (en) * | 2023-11-20 | 2024-03-01 | 北京理工大学珠海学院 | Attention mechanism-based facial and scene emotion recognition method |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10282595B2 (en) * | 2016-06-24 | 2019-05-07 | International Business Machines Corporation | Facial recognition encode analysis |
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102945366A (en) * | 2012-11-23 | 2013-02-27 | 海信集团有限公司 | Method and device for face recognition |
CN107679578A (en) * | 2017-10-12 | 2018-02-09 | 北京旷视科技有限公司 | The method of testing of Target Recognition Algorithms, apparatus and system |
CN108491784A (en) * | 2018-03-16 | 2018-09-04 | 南京邮电大学 | The identification in real time of single feature towards large-scale live scene and automatic screenshot method |
CN108717530A (en) * | 2018-05-21 | 2018-10-30 | Oppo广东移动通信有限公司 | Image processing method, device, computer readable storage medium and electronic equipment |
Also Published As
Publication number | Publication date |
---|---|
CN111126122A (en) | 2020-05-08 |
Similar Documents
Publication | Title | Publication Date |
---|---|---|
CN111126122B (en) | Face recognition algorithm evaluation method and device | |
CN109344787B (en) | Specific target tracking method based on face recognition and pedestrian re-recognition | |
US11017215B2 (en) | Two-stage person searching method combining face and appearance features | |
CN110598535B (en) | Face recognition analysis method used in monitoring video data | |
CN107944427B (en) | Dynamic face recognition method and computer readable storage medium | |
CN108875600A (en) | A kind of information of vehicles detection and tracking method, apparatus and computer storage medium based on YOLO | |
CN107169106B (en) | Video retrieval method, device, storage medium and processor | |
CN109426785B (en) | Human body target identity recognition method and device | |
CN111310662B (en) | Flame detection and identification method and system based on integrated deep network | |
CN109784270B (en) | Processing method for improving face picture recognition integrity | |
CN111968152B (en) | Dynamic identity recognition method and device | |
CN111507232B (en) | Stranger identification method and system based on multi-mode multi-strategy fusion | |
CN111368772A (en) | Identity recognition method, device, equipment and storage medium | |
CN110827432B (en) | Class attendance checking method and system based on face recognition | |
CN110674680B (en) | Living body identification method, living body identification device and storage medium | |
CN111723656B (en) | Smog detection method and device based on YOLO v3 and self-optimization | |
CN111523469A (en) | Pedestrian re-identification method, system, equipment and computer readable storage medium | |
CN111079648A (en) | Data set cleaning method and device and electronic system | |
CN111625687A (en) | Method and system for quickly searching people in media asset video library through human faces | |
TW202042113A (en) | Face recognition system, establishing data method for face recognition, and face recognizing method thereof | |
CN115062186A (en) | Video content retrieval method, device, equipment and storage medium | |
CN112001280B (en) | Real-time and online optimized face recognition system and method | |
CN117079180A (en) | Video detection method and device | |
CN115019152A (en) | Image shooting integrity judgment method and device | |
CN111160263B (en) | Method and system for acquiring face recognition threshold |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |