CN112307852A - Matching method of face detection target and marking, storage medium and processor - Google Patents

Matching method of face detection target and marking, storage medium and processor

Info

Publication number
CN112307852A
Authority
CN
China
Prior art keywords
marking
data
detection
algorithm model
test set
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910711116.9A
Other languages
Chinese (zh)
Inventor
刘若鹏
栾琳
骆钊锋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xi'an Guangqi Intelligent Technology Co ltd
Original Assignee
Xi'an Guangqi Future Technology Research Institute
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xi'an Guangqi Future Technology Research Institute filed Critical Xi'an Guangqi Future Technology Research Institute
Priority to CN201910711116.9A priority Critical patent/CN112307852A/en
Publication of CN112307852A publication Critical patent/CN112307852A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a matching method of a face detection target and marking, a storage medium and a processor, wherein the method comprises the following steps: obtaining test set data; calling a detection algorithm model to detect the test set data to obtain model-detected facial feature pixel coordinate data; obtaining marked facial feature pixel coordinate data produced by marking the test set data; matching the model-detected facial feature pixel coordinate data with the marked facial feature pixel coordinate data and calculating their intersection ratio to obtain an intersection-ratio data set; and judging the detection capability of the algorithm model. The method correctly judges and screens out the head pixel coordinates of correct detections, repeated detections, false detections and missed detections in the algorithm model's results, and objectively evaluates the capability of the algorithm model; it improves the efficiency of the next round of algorithm training and gives the direction for the next step of model training; it can also be applied to a finished algorithm model, providing help and evidence for tuning the optimal threshold value.

Description

Matching method of face detection target and marking, storage medium and processor
[Technical Field]
The invention relates to the technical field of face recognition, in particular to a face detection target and marking matching method, a storage medium and a processor.
[Background of the Invention]
In recent years, artificial intelligence has developed rapidly, and its applications have expanded widely into face recognition, intelligent tracking, image recognition and other fields, achieving great progress and success. A fundamental premise of artificial-intelligence face recognition and face-recognition tracking is that facial features can be extracted and then recognized. Whether an artificial-intelligence algorithm accurately detects the human heads in an image and identifies their facial features therefore plays a vital role. In other words, an accurate human-head detection algorithm is a basic prerequisite for face recognition and recognition tracking. At present, an effective and intuitive way to test the capability of a human-head detection algorithm is to compare the results of a detection algorithm model with the marking results on the same test set. When the human-head detection algorithm model works at a low confidence threshold, a large number of heads are repeatedly detected, and non-heads are falsely detected as heads.
[Summary of the Invention]
The technical problem to be solved by the invention is to provide a matching method of a face detection target and marking, a storage medium and a processor, which can correctly judge and screen out the head pixel coordinates of correct detections, repeated detections, false detections and missed detections in the algorithm model's results. The capability of the algorithm model can be objectively evaluated, and the indexes of average IOU, detection rate, precision rate and recall rate are given. The method improves the efficiency of the next round of algorithm training and gives the direction for the next step of model training; meanwhile, it can also be applied to a finished algorithm model, providing help and evidence for tuning the optimal threshold value.
To solve the foregoing technical problem, in one aspect, an embodiment of the present invention provides a matching method for a face detection target and marking, including: obtaining test set data; calling a detection algorithm model to detect the test set data to obtain model-detected facial feature pixel coordinate data; obtaining marked facial feature pixel coordinate data produced by marking the test set data; matching the model-detected facial feature pixel coordinate data with the marked facial feature pixel coordinate data, and calculating the intersection ratio between them to obtain an intersection-ratio data set; and judging the detection capability of the algorithm model according to the intersection-ratio data set.
Preferably, judging the detection capability of the algorithm model according to the intersection-ratio data set includes: processing the intersection-ratio data, and calculating the number of false detections, the number of missed detections and the number of repeated detections produced when the detection algorithm model is called on the test set data.
Preferably, obtaining test set data comprises: customizing the rules of the test set data.
Preferably, obtaining the marked facial feature pixel coordinate data produced by marking the test set data includes: acquiring the marking information of the test set data, marking each human head in it, and recording the head pixel coordinates.
Preferably, calling a detection algorithm model to detect the test set data to obtain model-detected facial feature pixel coordinate data includes: calling the detection algorithm model to detect the test set data, and marking the facial feature pixel coordinates of each human head.
Preferably, calculating the number of false detections, the number of missed detections and the number of repeated detections of the test set data by calling the detection algorithm model comprises: with n marked human-head rectangular boxes and m human-head rectangular boxes detected by the algorithm model, setting the n × m intersection-ratio queue as:
{A1M1: IOU, A1M2: IOU, ..., A1Mm: IOU,
A2M1: IOU, A2M2: IOU, ..., A2Mm: IOU,
......
AnM1: IOU, AnM2: IOU, ..., AnMm: IOU}
where IOU denotes an intersection ratio and the index pair preceding each IOU is its key value. The maximum intersection-ratio value and its corresponding key value are taken out of the intersection-ratio queue, and it is judged whether that value is greater than a first intersection-ratio threshold; if so, the model detection correctly matches the marking frame, and the key value is placed in the correct-match sequence to obtain the number of correct identifications;
the intersection-ratio data in the correct-match sequence and the corresponding key value AiMj (i ≤ n, j ≤ m) are eliminated from the intersection-ratio data set, and the eliminated key value is split into its marking key value Ai (i ≤ n) and its model-detection key value Mj (j ≤ m). The loop then takes out for judgment, and removes from the intersection-ratio queue, every remaining key value containing this Ai (i ≤ n) or Mj (j ≤ m): if a key value pairs the matched marking key with another model-detection key value Mj (j ≤ m) and its intersection-ratio data is greater than the second intersection-ratio threshold, it is placed in the repeated-detection sequence to obtain the number of repeated detections;
if a key value pairs another marking key value Ai (i ≤ n) with the matched model-detection key and its intersection-ratio data is greater than the second intersection-ratio threshold, it is placed in the false-detection sequence to obtain the number of false detections.
Preferably, the rules for customizing the test set data include: customizing the length and width or resolution of the test data set.
Preferably, after recording the head pixel coordinates, the method further comprises: recording the head pixel coordinate information in an xml file, and outputting one xml file for each picture of the test set data.
Preferably, marking the facial feature pixel coordinates of each human head further comprises: storing the facial feature pixel coordinate data of each human head in a txt document, and outputting the detected pixel coordinates of each head.
Preferably, the first intersection-ratio threshold is 0.3-0.4.
Preferably, the second intersection-ratio threshold is 0.4-1.
In another aspect, an embodiment of the present invention provides a storage medium comprising a stored program, wherein the program, when run, performs the above face detection target and marking matching method.
In another aspect, an embodiment of the present invention provides a processor configured to run a program, wherein the program, when run, performs the above face detection target and marking matching method.
Compared with the prior art, the technical scheme has the following advantages: the marking and algorithm-model detection matching method can correctly judge and screen out the head pixel coordinates of correct detections, repeated detections, false detections and missed detections in the algorithm model's results; the capability of the algorithm model can be objectively evaluated, and the indexes of average IOU, detection rate, precision rate and recall rate are given; the method improves the efficiency of the next round of algorithm training and gives the direction for the next step of model training; meanwhile, it can also be applied to a finished algorithm model, providing help and evidence for tuning the optimal threshold value.
[Description of the Drawings]
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed in the description of the embodiments are briefly introduced below. The drawings in the following description are only some embodiments of the present invention; for those skilled in the art, other drawings can be obtained from these drawings without inventive labor.
FIG. 1 is a schematic diagram of the cross-over ratio principle in target detection in the prior art.
FIG. 2 is a flow chart of a face detection target and marking matching method of the present invention.
[Detailed Description]
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
FIG. 1 is a schematic diagram of the intersection-over-union principle in target detection in the prior art. As shown in FIG. 1, Intersection-over-Union (IOU) is a concept used in target detection: the overlap rate between a generated candidate frame and the original ground-truth frame, i.e. the ratio of their intersection to their union. The optimal situation is complete overlap, i.e. a ratio of 1.
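As a minimal illustration of this definition, the following python sketch computes the IOU of two rectangular boxes in the (xmin, ymin, xmax, ymax) convention used later in this description; the function name and tuple layout are illustrative, not part of the claimed method:

```python
# Minimal IOU sketch; boxes are (xmin, ymin, xmax, ymax) tuples.
def iou(box_a, box_b):
    # Coordinates of the intersection rectangle.
    xmin = max(box_a[0], box_b[0])
    ymin = max(box_a[1], box_b[1])
    xmax = min(box_a[2], box_b[2])
    ymax = min(box_a[3], box_b[3])
    inter = max(0, xmax - xmin) * max(0, ymax - ymin)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    # Complete overlap yields 1; disjoint boxes yield 0.
    return inter / union if union > 0 else 0.0
```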
Example one
FIG. 2 is a flow chart of a face detection target and marking matching method of the present invention. As shown in fig. 2, a matching method of a face detection target and marking is characterized by comprising the following steps:
S11, obtaining test set data;
S12, calling a detection algorithm model to detect the test set data to obtain model-detected facial feature pixel coordinate data;
S13, obtaining marked facial feature pixel coordinate data produced by marking the test set data;
S14, matching the model-detected facial feature pixel coordinate data with the marked facial feature pixel coordinate data, and calculating the intersection ratio between them to obtain an intersection-ratio data set;
S15, judging the detection capability of the algorithm model according to the intersection-ratio data set.
Judging the detection capability of the algorithm model according to the intersection-ratio data set includes: processing the intersection-ratio data, and calculating the number of false detections, the number of missed detections and the number of repeated detections produced when the detection algorithm model is called on the test set data. Obtaining test set data includes: customizing the rules of the test set data. The rules for customizing the test set data include: customizing the length and width or resolution of the test data set. Obtaining the marked facial feature pixel coordinate data produced by marking the test set data includes: acquiring the marking information of the test set data, marking each human head in it, and recording the head pixel coordinates. Calling a detection algorithm model to detect the test set data to obtain model-detected facial feature pixel coordinate data includes: calling the detection algorithm model to detect the test set data, and marking the facial feature pixel coordinates of each human head. Calculating the number of false detections, the number of missed detections and the number of repeated detections by calling the detection algorithm model comprises: with n marked human-head rectangular boxes and m human-head rectangular boxes detected by the algorithm model, setting the n × m intersection-ratio queue as:
{A1M1: IOU, A1M2: IOU, ..., A1Mm: IOU,
A2M1: IOU, A2M2: IOU, ..., A2Mm: IOU,
......
AnM1: IOU, AnM2: IOU, ..., AnMm: IOU}
where IOU denotes an intersection ratio and the index pair preceding each IOU is its key value. The maximum intersection-ratio value and its corresponding key value are taken out of the intersection-ratio queue, and it is judged whether that value is greater than a first intersection-ratio threshold; if so, the model detection correctly matches the marking frame, and the key value is placed in the correct-match sequence to obtain the number of correct identifications. The intersection-ratio data in the correct-match sequence and the corresponding key value AiMj (i ≤ n, j ≤ m) are eliminated from the intersection-ratio data set, and the eliminated key value is split into its marking key value Ai (i ≤ n) and its model-detection key value Mj (j ≤ m). The loop then takes out for judgment, and removes from the intersection-ratio queue, every remaining key value containing this Ai (i ≤ n) or Mj (j ≤ m): if a key value pairs the matched marking key with another model-detection key value Mj (j ≤ m) and its intersection-ratio data is greater than the second intersection-ratio threshold, it is placed in the repeated-detection sequence to obtain the number of repeated detections; if a key value pairs another marking key value Ai (i ≤ n) with the matched model-detection key and its intersection-ratio data is greater than the second intersection-ratio threshold, it is placed in the false-detection sequence to obtain the number of false detections. After recording the head pixel coordinates, the method further comprises: recording the head pixel coordinate information in an xml file, and outputting one xml file for each picture of the test set data. Marking the facial feature pixel coordinates of each human head further comprises: storing the facial feature pixel coordinate data of each human head in a txt document, and outputting the detected pixel coordinates of each head. The first intersection-ratio threshold is 0.3-0.4. The second intersection-ratio threshold is 0.4-1.
Example two
The face detection may be human head detection, facial feature detection, or the like. This embodiment will be described by taking human head detection as an example.
Collecting test data:
In a specific implementation, picture resolution is set to be larger than 640 × 480 pixels, the main objects in the pictures are people, the pictures are grouped by the number of main persons into intervals of 0-20, 20-50, 50-100, 100-200 and so on, and the human heads in the pictures are clearly visible. In a specific implementation, the pictures are collected by extracting frames from camera video.
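A possible frame-extraction sketch using OpenCV is shown below; the sampling interval, paths and output naming are illustrative assumptions, since this description does not specify how the video is cut:

```python
# Hypothetical frame-extraction sketch using OpenCV; the sampling
# interval and output naming are assumptions, not from this description.
import cv2

def extract_frames(video_path, out_dir, every_n=25):
    cap = cv2.VideoCapture(video_path)
    idx = saved = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % every_n == 0:
            h, w = frame.shape[:2]
            if w > 640 and h > 480:  # keep frames larger than 640 x 480
                cv2.imwrite(f"{out_dir}/frame_{saved:06d}.jpg", frame)
                saved += 1
        idx += 1
    cap.release()
    return saved
```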
Data acquisition:
marking information of each test set picture is obtained in a circulating mode, a head in the picture in the information is marked, pixel coordinates of the information are recorded, coordinate information is stored by using an xml file, each picture outputs an xml file, and the file name is named by the picture name. The human head marking is to mark the human head in the picture by using a rectangular frame similar to the screenshot, and the coordinate of the human head is determined only by storing the coordinate point (xmin, ymin) at the upper left corner and the coordinate point (xmax, ymax) at the lower right corner of the rectangular frame.
Each test-set picture is then cycled through the detection algorithm model to detect human heads, and the detected pixel coordinates of each head are output; the coordinate data of each picture are stored in a txt document named after the picture. When the algorithm model detects a head, it likewise marks it with a rectangular box, and only the upper-left corner point (xmin, ymin) and the lower-right corner point (xmax, ymax) of the box need to be stored to determine the head coordinates.
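A loading sketch under stated assumptions follows: the xml is assumed to use a VOC-like <bndbox> layout and the txt one "xmin ymin xmax ymax" line per head, since neither file format is fixed by this description; the A/M key naming mirrors the lists introduced below:

```python
# Assumed file layouts: VOC-like <bndbox> nodes in the marking xml,
# and one "xmin ymin xmax ymax" line per detected head in the txt.
import xml.etree.ElementTree as ET

def load_marks(xml_path):
    marks = {}
    root = ET.parse(xml_path).getroot()
    for i, box in enumerate(root.iter("bndbox"), start=1):
        marks[f"A{i}"] = tuple(
            int(float(box.find(tag).text))
            for tag in ("xmin", "ymin", "xmax", "ymax"))
    return marks

def load_detections(txt_path):
    dets = {}
    with open(txt_path) as f:
        for j, line in enumerate(f, start=1):
            dets[f"M{j}"] = tuple(int(float(v)) for v in line.split()[:4])
    return dets
```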
Data processing:
and reading the marked head coordinate xml file with the same name and the algorithm model detected head coordinate txt file by using a python script tool. The user-defined method processes file data to obtain two lists, and the lists respectively store (xmin, ymin, xmax and ymax) detected by the marking and algorithm model. And each rectangular frame coordinate in the list is endowed with a key value, and each head rectangular coordinate has an independent and unique key value matched with the coordinate value. For example, marking n heads, the marking list is:
{A1: (xmin, ymin, xmax, ymax),
A2: (xmin, ymin, xmax, ymax),
......
An: (xmin, ymin, xmax, ymax)},
For an element of the list such as A1: (xmin, ymin, xmax, ymax), A1 is the key value, separated from the coordinates by a colon. With m heads in the algorithm-model detection data, the detection list is:
{M1: (xmin, ymin, xmax, ymax),
M2: (xmin, ymin, xmax, ymax),
......
Mm: (xmin, ymin, xmax, ymax)}.
and traversing the two lists to call an IOU method to calculate the IOU, if n person head rectangular boxes are marked, detecting m person head rectangular boxes by an algorithm model, and traversing the two lists to calculate n multiplied by m IOU values. And (3) taking the key values of the two lists to be recombined, giving the key value to the calculated IOU, and simultaneously creating a new list to store the IOU, such as:
{A1M1: IOU, A1M2: IOU, ..., A1Mm: IOU,
A2M1: IOU, A2M2: IOU, ..., A2Mm: IOU,
......
AnM1: IOU, AnM2: IOU, ..., AnMm: IOU}
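A sketch of building this n × m keyed queue from the two lists above (using the iou() and loader sketches already given; all names are illustrative):

```python
# Pair every marking key A_i with every detection key M_j and map the
# combined key to (marking key, detection key, IOU).
def build_iou_queue(marks, dets, iou_fn):
    queue = {}
    for a_key, a_box in marks.items():
        for m_key, m_box in dets.items():
            queue[a_key + m_key] = (a_key, m_key, iou_fn(a_box, m_box))
    return queue
```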
the user-defined method comprises the following processing results:
in data processing, the marking result and the algorithm model detection result are matched pairwise to calculate the IOU. And traversing the two lists to call an IOU method to calculate the IOU, if n person head rectangular boxes are marked, detecting m person head rectangular boxes by an algorithm model, and traversing the two lists to calculate n multiplied by m IOU values. And (3) recombining the key values of the two lists, giving the key value to the calculated IOU, creating a new list to store the IOU at the same time, obtaining the number sequence of the total IOU, processing the obtained key value and the IOU according to a user-defined method, and judging the number of people, the number of false detections, the number of rechecks and the number of missed detections correctly detected by the algorithm model.
Combinations with IOU equal to 0 are removed from the total IOU sequence, and the maximum-IOU combination is taken out of the cyclic IOU list. If the maximum corresponds, say, to the key value A1M1 and its IOU is greater than 0.3, the number of correctly detected heads is increased by 1, and then all combinations whose keys contain A1 or M1 are removed from the IOU list. Among the removed combinations, each one containing A1 whose IOU is greater than 0.3 adds 1 to the recheck count; for example, if A1M2 and A1M3 both have IOU greater than 0.3, the recheck count increases by 2. Each removed combination AiMj (i ≤ n, j ≤ m) containing M1 whose IOU is greater than 0.3 adds 1 to the false-detection count; for example, if A2M1 and A3M1 both have IOU greater than 0.3, the false-detection count increases by 2. The number of missed detections equals the total number of marked heads minus the number of correct model detections. The maximum-IOU combination is then taken out of the IOU list again, and the preceding steps are repeated until the maximum IOU is less than 0.3.
In general terms: the intersection-ratio data in the correct-match sequence and the corresponding key value AiMj (i ≤ n, j ≤ m) are eliminated from the intersection-ratio data set, and the eliminated key value is split into its marking key value Ai (i ≤ n) and its model-detection key value Mj (j ≤ m). Every remaining key value containing this Ai (i ≤ n) or Mj (j ≤ m) is taken out for judgment and simultaneously removed from the intersection-ratio queue: if a key value pairs the matched marking key with another model-detection key value Mj (j ≤ m) and its intersection-ratio data is greater than the second intersection-ratio threshold, it is placed in the repeated-detection sequence to obtain the number of repeated detections; if a key value pairs another marking key value Ai (i ≤ n) with the matched model-detection key and its intersection-ratio data is greater than the second intersection-ratio threshold, it is placed in the false-detection sequence to obtain the number of false detections. In a specific implementation, the first intersection-ratio threshold is 0.3-0.4 and the second intersection-ratio threshold is 0.4-1; both may be set according to different scenarios and are not limited here.
After the key value corresponding to the maximum IOU and every key value containing its Ai (i ≤ n) or Mj (j ≤ m) have been removed, a new IOU sequence remains; the key value corresponding to the maximum IOU is taken out again, and the previous step is repeated in a loop until the maximum IOU value is smaller than the first intersection-ratio threshold. The correct-match sequence, the repeated-detection sequence and the false-detection sequence are thus obtained, and the number of missed detections is the total number of marked heads minus the size of the correct-match sequence.
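The following sketch puts the loop together, following the worked A1M1 example above (the same marked head with an extra detection counts as a recheck; the same detection overlapping a further marked head counts as a false detection); thr1 and thr2 stand in for the first and second intersection-ratio thresholds, and all names are illustrative:

```python
# Greedy matching over the keyed IOU queue built above.
def match(queue, n_marks, thr1=0.3, thr2=0.4):
    # Drop zero-IOU combinations up front.
    queue = {k: v for k, v in queue.items() if v[2] > 0}
    correct, rechecks, false_det = [], [], []
    while queue:
        key = max(queue, key=lambda k: queue[k][2])
        a_key, m_key, best = queue.pop(key)
        if best < thr1:            # loop ends below the first threshold
            break
        correct.append((key, best))
        # Remove every remaining combination sharing A_i or M_j.
        for other in [k for k, v in queue.items()
                      if v[0] == a_key or v[1] == m_key]:
            oa, om, val = queue.pop(other)
            if val > thr2:
                if oa == a_key:    # same head, extra detection
                    rechecks.append(other)
                else:              # same detection, extra marked head
                    false_det.append(other)
    missed = n_marks - len(correct)
    return correct, rechecks, false_det, missed
```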
Calculating the indexes:
the test aims to evaluate the detection result of the algorithm model under the condition of marking, and judge whether the detection result of the algorithm model is correct or not according to the marking result. And calculating the average IOU, the detection rate, the precision rate and the recall rate of the obtained correct matching sequence, the repeated detection sequence, the false detection sequence and the missed detection sequence according to the method. The average IOU is the average IOU that the model detects correctly.
And the detection rate is the total number of model detections/total number of marks.
And (4) detecting the correct number of people/total number of model detections.
The recall rate is the number of correct people detected/total number of marks.
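A direct transcription of these four indexes, using the matching results of the sketch above (argument names are illustrative):

```python
# Index computation from the matching results above.
def indexes(correct, n_detected, n_marked):
    # 'correct' holds (key, IOU) pairs for correct matches.
    avg_iou = sum(v for _, v in correct) / len(correct) if correct else 0.0
    detection_rate = n_detected / n_marked
    precision = len(correct) / n_detected
    recall = len(correct) / n_marked
    return avg_iou, detection_rate, precision, recall
```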
Example three
The embodiment of the invention also provides a storage medium comprising a stored program, wherein the program, when run, executes the flow of the above face detection target and marking matching method.
Optionally, in this embodiment, the storage medium may be configured to store program code for executing the following steps of the face detection target and marking matching method:
S11, obtaining test set data;
S12, calling a detection algorithm model to detect the test set data to obtain model-detected facial feature pixel coordinate data;
S13, obtaining marked facial feature pixel coordinate data produced by marking the test set data;
S14, matching the model-detected facial feature pixel coordinate data with the marked facial feature pixel coordinate data, and calculating the intersection ratio between them to obtain an intersection-ratio data set;
S15, judging the detection capability of the algorithm model according to the intersection-ratio data set.
Optionally, in this embodiment, the storage medium may include, but is not limited to: various media capable of storing program codes, such as a usb disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
Therefore, by adopting the storage medium, storage requirements are reduced and the program implementing the face detection target and marking matching method runs faster; the marking is matched with the algorithm-model detection, and the head pixel coordinates of correct detections, repeated detections, false detections and missed detections in the algorithm model's results can be correctly judged and screened out. The capability of the algorithm model can be objectively evaluated, and the indexes of average IOU, detection rate, precision rate and recall rate are given; the efficiency of the next round of algorithm training is improved, and the direction for the next step of model training is given. Meanwhile, the method can also be applied to a finished algorithm model, providing help and evidence for tuning the optimal threshold value.
Example four
The embodiment of the invention also provides a processor for running a program, wherein the program, when run, performs the steps of the above face detection target and marking matching method.
Optionally, in this embodiment, the program is configured to perform the following steps:
S11, obtaining test set data;
S12, calling a detection algorithm model to detect the test set data to obtain model-detected facial feature pixel coordinate data;
S13, obtaining marked facial feature pixel coordinate data produced by marking the test set data;
S14, matching the model-detected facial feature pixel coordinate data with the marked facial feature pixel coordinate data, and calculating the intersection ratio between them to obtain an intersection-ratio data set;
S15, judging the detection capability of the algorithm model according to the intersection-ratio data set.
Optionally, for a specific example in this embodiment, reference may be made to the above-described embodiment and examples described in the specific implementation, and details of this embodiment are not described herein again.
Therefore, by adopting the processor, the amount of data to be processed is reduced and the program implementing the face detection target and marking matching method runs faster; the marking is matched with the algorithm-model detection, and the head pixel coordinates of correct detections, repeated detections, false detections and missed detections in the algorithm model's results can be correctly judged and screened out. The capability of the algorithm model can be objectively evaluated, and the indexes of average IOU, detection rate, precision rate and recall rate are given; the efficiency of the next round of algorithm training is improved, and the direction for the next step of model training is given. Meanwhile, the method can also be applied to a finished algorithm model, providing help and evidence for tuning the optimal threshold value.
From the above description, it can be seen that the matching method of a face detection target and marking, the storage medium and the processor of the invention can correctly judge and screen out the head pixel coordinates of correct detections, repeated detections, false detections and missed detections in the algorithm model's results. The capability of the algorithm model can be objectively evaluated, and the indexes of average IOU, detection rate, precision rate and recall rate are given; the efficiency of the next round of algorithm training is improved, and the direction for the next step of model training is given. Meanwhile, the method can also be applied to a finished algorithm model, providing help and evidence for tuning the optimal threshold value.
The above embodiments of the present invention are described in detail, and the principle and implementation of the invention are explained through specific examples; the description of the embodiments is only intended to help in understanding the method of the invention and its core idea. Meanwhile, for a person skilled in the art, there may be variations in specific implementation and application scope according to the idea of the invention. In summary, the content of this specification should not be construed as a limitation of the invention.

Claims (13)

1. A face detection target and marking matching method is characterized by comprising the following steps:
obtaining test set data;
calling a detection algorithm model to detect the test set data to obtain model-detected facial feature pixel coordinate data;
obtaining marked facial feature pixel coordinate data produced by marking the test set data;
matching the model-detected facial feature pixel coordinate data with the marked facial feature pixel coordinate data, and calculating the intersection ratio between them to obtain an intersection-ratio data set;
and judging the detection capability of the algorithm model according to the intersection-ratio data set.
2. The face detection target and marking matching method according to claim 1, wherein judging the detection capability of the algorithm model according to the intersection-ratio data set comprises:
processing the intersection-ratio data, and calculating the number of false detections, the number of missed detections and the number of repeated detections produced when the detection algorithm model is called on the test set data.
3. The face detection target and marking matching method as claimed in claim 1, wherein obtaining test set data comprises: customizing the rules of the test set data.
4. The face detection target and marking matching method of claim 1, wherein obtaining the marked facial feature pixel coordinate data produced by marking the test set data comprises:
acquiring the marking information of the test set data, marking each human head in it, and recording the head pixel coordinates.
5. The face detection target and marking matching method as claimed in claim 1, wherein calling a detection algorithm model to detect the test set data to obtain model-detected facial feature pixel coordinate data comprises:
calling the detection algorithm model to detect the test set data, and marking the facial feature pixel coordinates of each human head.
6. The face detection target and marking matching method of claim 2, wherein calculating the number of false detections, the number of missed detections and the number of repeated detections of the test set data by calling the detection algorithm model comprises: with n marked human-head rectangular boxes and m human-head rectangular boxes detected by the algorithm model, setting the n × m intersection-ratio queue as:
{A1M1: IOU, A1M2: IOU, ..., A1Mm: IOU,
A2M1: IOU, A2M2: IOU, ..., A2Mm: IOU,
......
AnM1: IOU, AnM2: IOU, ..., AnMm: IOU}
where IOU denotes an intersection ratio and the index pair preceding each IOU is its key value; the maximum intersection-ratio value and its corresponding key value are taken out of the intersection-ratio queue, and it is judged whether that value is greater than a first intersection-ratio threshold; if so, the model detection correctly matches the marking frame, and the key value is placed in the correct-match sequence to obtain the number of correct identifications;
the intersection-ratio data in the correct-match sequence and the corresponding key value AiMj (i ≤ n, j ≤ m) are eliminated from the intersection-ratio data set, and the eliminated key value is split into its marking key value Ai (i ≤ n) and its model-detection key value Mj (j ≤ m); every remaining key value containing this Ai (i ≤ n) or Mj (j ≤ m) is taken out for judgment and simultaneously removed from the intersection-ratio queue; if a key value pairs the matched marking key with another model-detection key value Mj (j ≤ m) and its intersection-ratio data is greater than the second intersection-ratio threshold, it is placed in the repeated-detection sequence to obtain the number of repeated detections;
if a key value pairs another marking key value Ai (i ≤ n) with the matched model-detection key and its intersection-ratio data is greater than the second intersection-ratio threshold, it is placed in the false-detection sequence to obtain the number of false detections.
7. The face detection target and marking matching method of claim 3, wherein the rules for customizing the test set data comprise: customizing the length and width or resolution of the test data set.
8. The face detection target and marking matching method as claimed in claim 4, further comprising, after recording the head pixel coordinates: recording the head pixel coordinate information in an xml file, and outputting one xml file for each picture of the test set data.
9. The face detection target and marking matching method as claimed in claim 5, wherein marking the facial feature pixel coordinates of each human head further comprises:
storing the facial feature pixel coordinate data of each human head in a txt document, and outputting the detected pixel coordinates of each head.
10. The face detection target and marking matching method according to claim 6, wherein the first intersection-ratio threshold is 0.3-0.4.
11. The face detection target and marking matching method according to claim 6, wherein the second intersection-ratio threshold is 0.4-1.
12. A storage medium comprising a stored program, wherein the program when executed performs the face detection target and marking matching method of any one of claims 1 to 11.
13. A processor for running a program, wherein the program is run to perform the face detection target and marking matching method of any one of claims 1 to 11.
CN201910711116.9A 2019-08-02 2019-08-02 Matching method of face detection target and marking, storage medium and processor Pending CN112307852A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910711116.9A CN112307852A (en) 2019-08-02 2019-08-02 Matching method of face detection target and marking, storage medium and processor

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910711116.9A CN112307852A (en) 2019-08-02 2019-08-02 Matching method of face detection target and marking, storage medium and processor

Publications (1)

Publication Number Publication Date
CN112307852A 2021-02-02

Family

ID=74486514

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910711116.9A Pending CN112307852A (en) 2019-08-02 2019-08-02 Matching method of face detection target and marking, storage medium and processor

Country Status (1)

Country Link
CN (1) CN112307852A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113378635A (en) * 2021-05-08 2021-09-10 北京迈格威科技有限公司 Target attribute boundary condition searching method and device of target detection model
CN113435305A (en) * 2021-06-23 2021-09-24 平安国际智慧城市科技股份有限公司 Precision detection method, device and equipment of target object identification algorithm and storage medium
WO2023179133A1 (en) * 2022-03-22 2023-09-28 深圳云天励飞技术股份有限公司 Target algorithm selection method and apparatus, and electronic device and storage medium



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20221207

Address after: 710000 second floor, building B3, yunhuigu, No. 156, Tiangu 8th Road, software new town, high tech Zone, Xi'an, Shaanxi

Applicant after: Xi'an Guangqi Intelligent Technology Co.,Ltd.

Address before: 710003 2nd floor, B3, yunhuigu, 156 Tiangu 8th Road, software new town, Xi'an City, Shaanxi Province

Applicant before: Xi'an Guangqi Future Technology Research Institute