CN110751012A - Target detection evaluation method and device, electronic equipment and storage medium - Google Patents

Target detection evaluation method and device, electronic equipment and storage medium

Info

Publication number
CN110751012A
CN110751012A CN201910436284.1A
Authority
CN
China
Prior art keywords
target
real
information
candidate
parameters
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910436284.1A
Other languages
Chinese (zh)
Other versions
CN110751012B (en)
Inventor
车正平
史雪凤
刘梦瑶
刘燕
叶杰平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Didi Infinity Technology and Development Co Ltd
Original Assignee
Beijing Didi Infinity Technology and Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Didi Infinity Technology and Development Co Ltd filed Critical Beijing Didi Infinity Technology and Development Co Ltd
Priority to CN201910436284.1A priority Critical patent/CN110751012B/en
Publication of CN110751012A publication Critical patent/CN110751012A/en
Application granted granted Critical
Publication of CN110751012B publication Critical patent/CN110751012B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/20Scenes; Scene-specific elements in augmented reality scenes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07Target detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The target detection evaluation method comprises: obtaining the truth-value parameters of a plurality of pictures to be evaluated in a sample image; calculating, from the truth-value parameters, the real size of each real target and the distance information between the real target and the camera; identifying the sample image with the detection model to be evaluated to obtain a plurality of candidate targets of the sample image and the parameters of each candidate target; and calculating the evaluation information of the detection model to be evaluated from the real size of the real target, the distance information to the camera and the parameters of the candidate targets. Because multi-dimensional parameters such as the camera parameters, position and classification are taken into account, the accuracy of target detection evaluation in the prior art is improved.

Description

Target detection evaluation method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a target detection and evaluation method and apparatus, an electronic device, and a storage medium.
Background
Object detection is a fundamental research topic in the field of computer vision, and is widely applied in fields such as face detection, automatic driving, and the understanding and searching of videos and pictures. Target detection mainly uses a target detection model to identify, locate and correctly classify one or more specific object targets in one or more input pictures. In order to perform target detection better, the performance of a target detection model needs to be evaluated more accurately and comprehensively, so that different target detection models can be developed, trained, selected and optimized more effectively, and the performance of the target detection model in practical applications is improved.
In the prior art, drawing on the evaluation methods used for the conventional classification task and the information retrieval task, either all targets to be detected are evaluated indiscriminately, or only a simple limit is placed on the number of pixels that a detected target occupies in the picture.
However, when the performance of a target detection model is evaluated with these prior-art methods, the accuracy of the evaluation result is not high.
Disclosure of Invention
In view of the above, an object of the embodiments of the present application is to provide a method and an apparatus for target detection and evaluation, an electronic device, and a storage medium, which are used to solve the problem in the prior art that the accuracy of target detection and evaluation is not high.
In a first aspect, an embodiment of the present application provides a target detection and evaluation method, where the method includes: acquiring a sample image, wherein the sample image comprises a plurality of pictures to be evaluated, the pictures to be evaluated are marked with truth-valued parameters of a real target, and the truth-valued parameters comprise: camera parameters, size information, position information and classification information of a real target;
calculating the real size of the real target and the distance information between the real target and the camera according to the true value parameters, identifying the sample image by adopting a detection model to be evaluated, and acquiring a plurality of candidate targets and parameters of each candidate target, wherein the parameters of the candidate targets comprise: size information, position information, confidence information and classification information of each candidate target;
and calculating and acquiring evaluation information of the detection model to be evaluated according to the real size of the real target, the distance information between the real target and the camera and the parameters of the candidate targets.
Optionally, the camera parameters include: the camera mounting height at which the picture to be evaluated is shot, the camera horizontal downward depression angle, the camera optical axis horizontal and vertical offsets, and the camera horizontal and vertical focal lengths.
Optionally, the calculating, according to the real size of the real target, the distance information between the real target and the camera, and the parameters of each candidate target, to obtain the evaluation information of the detection model to be evaluated includes:
adding a mark of a real target according to a preset condition, the real size of the real target and distance information between the real target and a camera, wherein the mark indicates that the real target meeting the preset condition is a target to be detected and the real target not meeting the preset condition is a target to be ignored;
and calculating and acquiring the evaluation information of the detection model to be evaluated according to the true value parameters of the marked real target and the parameters of each candidate target.
Optionally, the calculating and obtaining evaluation information of the detection model to be evaluated according to the true value parameter of the marked real target and the parameters of each candidate target includes:
calculating the intersection ratio of each candidate target and each real target in each picture to be evaluated respectively;
obtaining the evaluation information of the detection model to be evaluated according to the cross comparison, the mark of the corresponding real target of the cross comparison and a preset rule, wherein the evaluation information comprises: an evaluation classification for each candidate object.
Optionally, the calculating of the intersection ratio of each candidate target with each real target in each picture to be evaluated respectively includes:
and sequentially calculating the intersection ratio of each candidate target and each real target in each picture to be evaluated from large to small according to the confidence degree information of the candidate targets.
Optionally, the evaluation classification of the candidate target includes any one of: true class TP, false positive class FP, and non-TP and non-FP.
Optionally, after the obtaining of the evaluation information of the detection model to be evaluated is calculated according to the real size of the real target, the distance information between the real target and the camera, and the parameters of each candidate target, the method further includes:
counting classification values corresponding to all candidate targets in the obtained sample image, wherein the classification values comprise TP values and FP values;
and generating an evaluation result of the detection model to be evaluated according to the classification values corresponding to all the candidate targets.
Optionally, the obtaining the evaluation classification of each candidate target according to the cross-over comparison, the mark of the corresponding real target of the cross-over comparison, and a preset rule includes:
and obtaining the evaluation classification of each candidate target according to the intersection comparison, the mark of the corresponding real target of the intersection comparison and a preset threshold value of the intersection comparison.
In a second aspect, an embodiment of the present application provides an object detection and evaluation apparatus, including: the device comprises an acquisition module, a calculation module and an evaluation module;
the obtaining module is used for obtaining a sample image, the sample image comprises a plurality of pictures to be evaluated, the pictures to be evaluated are marked with truth-valued parameters of a real target, and the truth-valued parameters comprise: camera parameters, size information, position information and classification information of a real target;
the calculation module is used for calculating the real size of the real target and the distance information between the real target and the camera according to the true value parameters, identifying the sample image by adopting the detection model to be evaluated, and acquiring a plurality of candidate targets and parameters of each candidate target, wherein the parameters of the candidate targets comprise: size information, position information, confidence information and classification information of each candidate target;
and the evaluation module is used for calculating and acquiring evaluation information of the detection model to be evaluated according to the real size of the real target, the distance information between the real target and the camera and the parameters of the candidate targets.
Optionally, the camera parameters include: the camera mounting height for taking the sample image, the camera horizontal downward depression angle, the camera optical axis horizontal and vertical offsets, and the camera horizontal and vertical focal lengths.
Optionally, the evaluation module is specifically configured to add a mark of the real target according to a preset condition, a real size of the real target, and distance information from the camera, where the mark indicates that the real target meeting the preset condition is a target to be detected and the real target not meeting the preset condition is a target to be ignored;
and calculating and acquiring the evaluation information of the detection model to be evaluated according to the true value parameters of the marked real target and the parameters of each candidate target.
Optionally, the evaluation module is specifically configured to calculate an intersection ratio between each candidate target and each real target in each picture to be evaluated;
acquiring evaluation information of the detection model to be evaluated according to the cross comparison, the mark of the corresponding real target and a preset rule, wherein the evaluation information comprises: an evaluation classification for each candidate object.
Optionally, the evaluation module is specifically configured to sequentially calculate, from large to small, an intersection ratio between each candidate target and each real target in each picture to be evaluated according to the confidence information of the candidate targets.
Optionally, the evaluation classification of the candidate target comprises any one of: true class TP, false positive class FP, and non-TP and non-FP.
Optionally, the apparatus further includes a generation module; the generating module is used for counting and obtaining classification values corresponding to all candidate targets in the sample image, wherein the classification values comprise TP values and FP values; and generating an evaluation result of the detection model to be evaluated according to the classification values corresponding to all the candidate targets.
Optionally, the evaluation module is specifically configured to obtain an evaluation classification of each candidate target according to the cross-over ratio, the mark of the corresponding real target in the cross-over ratio, and a preset threshold of the cross-over ratio.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a storage medium storing a computer program and a processor, where the computer program is read by the processor and executed to implement the method of any one of the first aspect.
In a fourth aspect, an embodiment of the present application provides a storage medium, where a computer program is stored on the storage medium, and when the computer program is read and executed by a processor, the method of any one of the above first aspects is implemented.
Based on the above aspect, the present application has the following beneficial effects:
in the target detection evaluation method provided in the embodiments of the present application, the truth-value parameters of a plurality of pictures to be evaluated in a sample image are obtained, the real size of each real target and the distance information between the real target and the camera are calculated from the truth-value parameters, and the sample image is then identified with the detection model to be evaluated to obtain a plurality of candidate targets of the sample image and the parameters of each candidate target; the evaluation information of the detection model to be evaluated is then calculated from the real size of the real target, the distance information to the camera and the parameters of the plurality of candidate targets. Because multi-dimensional parameters such as the camera parameters, position and classification are taken into account, the accuracy of target detection evaluation in the prior art is improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are required to be used in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present application and therefore should not be considered as limiting the scope, and for those skilled in the art, other related drawings can be obtained from the drawings without inventive effort.
Fig. 1 is a schematic flowchart of a target detection and evaluation method according to an embodiment of the present application;
FIG. 2 is a schematic flow chart diagram illustrating another method for target detection and evaluation according to an embodiment of the present disclosure;
FIG. 3 is a schematic flow chart diagram illustrating another method for target detection and evaluation according to an embodiment of the present disclosure;
FIG. 4 is a schematic flow chart diagram illustrating another method for target detection and evaluation according to an embodiment of the present disclosure;
fig. 5 is a schematic flowchart of an object detection and evaluation apparatus according to an embodiment of the present application;
FIG. 6 is a schematic flow chart of another target detection and evaluation apparatus according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of an electronic device provided in the present disclosure.
Detailed Description
In order to make the purpose, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it should be understood that the drawings in the present application are for illustrative and descriptive purposes only and are not used to limit the scope of protection of the present application. Additionally, it should be understood that the schematic drawings are not necessarily drawn to scale. The flowcharts used in this application illustrate operations implemented according to some embodiments of the present application. It should be understood that the operations of the flow diagrams may be performed out of order, and steps without logical context may be performed in reverse order or simultaneously. One skilled in the art, under the guidance of this application, may add one or more other operations to, or remove one or more operations from, the flowchart.
In addition, the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. The components of the embodiments of the present application, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the following description of the embodiments of the present application, provided in the accompanying drawings, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present application without making any creative effort, shall fall within the protection scope of the present application.
It should also be noted that the term "comprising" will be used in the embodiments of the present application to indicate the presence of the features stated hereinafter, but does not exclude the addition of further features.
Fig. 1 is a schematic flowchart of a target detection and evaluation method according to an embodiment of the present application. As shown in fig. 1, the method includes:
s101, obtaining a sample image, wherein the sample image comprises a plurality of pictures to be evaluated, and the pictures to be evaluated are marked with true value parameters of a real target.
The true parameters include: camera parameters, size information, position information, and classification information of real targets.
The real-valued parameters of one or more real targets on the image to be evaluated can be obtained in advance and labeled.
The camera parameters may be used to represent parameters related to the camera that takes the picture to be evaluated. The size information of a real target represents the size of the real target in the picture to be evaluated. The position information may identify the coordinate position of the real target in the picture to be evaluated. The classification information is used to indicate the category of the real target, which may be specified manually, such as a person, a car, an animal, etc.
It should be noted that the true parameters of the real target may be parameters obtained by artificial measurement.
For example, assuming that a picture to be evaluated includes a plurality of vehicles as real targets, parameter information such as coordinates, sizes, categories (vehicles) and the like of each vehicle on the picture to be evaluated can be marked by measurement respectively.
S102, calculating the real size of the real target and the distance information between the real target and the camera according to the true value parameters, identifying the sample image by adopting the detection model to be evaluated, and acquiring a plurality of candidate targets and the parameters of each candidate target.
The parameters of the candidate target include: size information, position information, confidence information, and classification information of each candidate object.
According to the camera parameters, the size information, the position information and the like of the real target in the picture to be evaluated, the real size information in reality corresponding to each pixel point in the picture to be evaluated can be calculated and obtained, and the distance information between the real target and the camera can also be calculated and obtained.
The real target-to-camera distance information may represent a ground forward distance of the real target from the camera.
In addition, the sample image is detected by the detection model to be evaluated, and a plurality of candidate targets, together with the size information, position information, confidence information, classification information and the like of each candidate target, are obtained from the detection result of the detection model to be evaluated. The size information of a candidate target is its size in the picture to be evaluated, the position information of a candidate target is its position coordinates in the picture to be evaluated, and the confidence information of a candidate target represents its credibility; the confidence is greater than or equal to 0 and less than or equal to 1, and the higher the confidence, the higher the possibility that the candidate target is a real target.
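To make the two parameter sets concrete, the following is a minimal sketch of how the annotated truth-value parameters of a real target and the parameters of a candidate target could be represented in code. It is illustrative only: the class and field names are assumptions and do not come from the original description.

```python
# Illustrative data structures for the parameters described above (names are assumptions).
from dataclasses import dataclass
from typing import Tuple

@dataclass
class TruthTarget:
    # Camera parameters of the picture the real target was annotated on
    camera_height: float                    # mounting height h, e.g. in meters
    camera_pitch: float                     # horizontal downward depression angle, in radians
    principal_point: Tuple[float, float]    # optical-axis horizontal/vertical offsets (c_x, c_y), in pixels
    focal_length: Tuple[float, float]       # horizontal/vertical focal lengths (f_x, f_y), in pixels
    # Annotation of the real target itself
    box: Tuple[float, float, float, float]  # position and size information, e.g. (x_min, y_min, x_max, y_max) in pixels
    label: str                              # classification information, e.g. "car"

@dataclass
class CandidateTarget:
    box: Tuple[float, float, float, float]  # size and position information in pixels
    score: float                            # confidence information, 0 <= score <= 1
    label: str                              # classification information
```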
S103, calculating and obtaining evaluation information of the detection model to be evaluated according to the real size of the real target, the distance information of the camera and the parameters of the candidate targets.
The evaluation information of the detection model to be evaluated can be comprehensively acquired by comparing the true value parameter of the real target with the parameter of the candidate target obtained by the detection model to be evaluated through multi-dimensional analysis.
In view of the above, in the target detection evaluation method provided in the embodiments of the present application, the truth-value parameters of a plurality of pictures to be evaluated in a sample image are obtained, the real size of each real target and the distance information between the real target and the camera are calculated from the truth-value parameters, and the sample image is then identified with the detection model to be evaluated to obtain a plurality of candidate targets of the sample image and the parameters of each candidate target; the evaluation information of the detection model to be evaluated is then calculated from the real size of the real target, the distance information to the camera and the parameters of the plurality of candidate targets. Because multi-dimensional parameters such as the camera parameters, position and classification are taken into account, the accuracy of target detection evaluation in the prior art is improved.
Optionally, the camera parameters include: the camera mounting height h for taking the picture to be evaluated, the camera horizontal downward depression angle θ, the camera optical axis horizontal and vertical offsets (c_x, c_y), and the camera horizontal and vertical focal lengths (f_x, f_y).
Wherein, the camera mounting height can refer to the vertical distance between the camera mounting position and the horizontal ground.
Specifically, if the camera is a vehicle-mounted automobile data recorder, then among the camera parameters the camera mounting height for shooting the sample image is the vertical distance between the automobile data recorder and the horizontal ground, the camera horizontal downward depression angle is the included angle between the camera of the automobile data recorder and the ground, the camera optical axis horizontal and vertical offsets are the horizontal and vertical offsets of the optical axis of the automobile data recorder, and the camera horizontal and vertical focal lengths are the horizontal and vertical focal lengths of the automobile data recorder; the specific type of the camera is not limited herein.
Accordingly, the calculating of the real size of the real target and the distance information from the camera according to the truth-value parameters may include: calculating the ground distance between a pixel point at any height in the picture to be evaluated and the camera, the height position of the pixel point in the picture to be evaluated corresponding to a ground target at any given ground distance, and the horizontal size length represented by a pixel point at any height in the picture to be evaluated.
Specifically, a reference pixel point may be found on the picture to be evaluated, for example, a reference pixel point located on the ground is found, and if the real target is a person, a pixel point of a contact portion between the foot of the person and the ground may be used as the reference pixel point, or the real target is a vehicle, and a pixel point of a contact portion between the wheel of the vehicle and the ground may be used as the reference pixel point. And then, by combining information such as coordinates and dimensions of the reference pixel points and the true value parameters of the real targets, the ground distance between the pixel point at any height in the picture to be evaluated and the camera, the height position of the pixel point of the ground target in the picture to be evaluated corresponding to any given ground distance, and the horizontal dimension length represented by the pixel point at any height in the picture to be evaluated can be calculated.
Further, take a pixel point (x, y) in the picture as an example (assuming that, in the picture coordinate system, the upper-left corner is used as the origin, i.e. the coordinates of the upper-left corner are (0, 0) and the coordinates of the lower-right corner of the picture are (H-1, W-1)). The position p_n = (right, down, forward) of the real target relative to the camera is then calculated, where the three components represent the rightward, downward and forward dimensions of the coordinate system, and forward is used as the distance information between the real target and the camera, i.e. the ground forward distance between the real target and the camera.
(The specific projection formulas appear as formula images in the original publication and are not reproduced here; in them, T denotes a transposed matrix.)
Further, from the position of the real target relative to the camera obtained above, the forward distance information between the real target and the camera can be calculated.
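Since the projection formulas are only given as formula images in the original publication, the sketch below reconstructs one plausible version under standard assumptions: a pinhole camera mounted at height h above a flat ground plane and pitched downward by theta, with the pixel (x, y) taken at the ground-contact point of the target. The function name and the exact derivation are assumptions, not the patent's own formula.

```python
import math

def pixel_to_ground(x, y, h, theta, cx, cy, fx, fy):
    """Back-project a ground-contact pixel (x, y) to a position
    p_n = (right, down, forward) relative to the camera.

    Assumes a pinhole camera at height h above a flat ground, pitched
    downward by theta; this is an illustrative reconstruction, not the
    exact formula of the original publication."""
    # Viewing ray of the pixel in camera coordinates (x right, y down, z forward)
    dx = (x - cx) / fx
    dy = (y - cy) / fy

    # Rotate the ray into the ground-aligned frame (pitch about the horizontal axis)
    down = dy * math.cos(theta) + math.sin(theta)
    forward = math.cos(theta) - dy * math.sin(theta)
    if down <= 0:
        raise ValueError("pixel lies at or above the horizon and does not hit the ground")

    # Scale the ray so that its downward component equals the camera height h
    s = h / down
    return (s * dx, h, s * forward)  # forward is the ground forward distance to the camera
```

For a dash-cam-like setup one might call, for example, pixel_to_ground(640, 520, h=1.3, theta=math.radians(5), cx=640, cy=360, fx=1000, fy=1000); the returned third component is the forward ground distance used as the distance information above (all values in this call are made-up examples).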
Fig. 2 is a schematic flowchart of another target detection and evaluation method according to an embodiment of the present application. As shown in fig. 2, optionally, calculating and obtaining evaluation information of the detection model to be evaluated according to the real size of the real target, the distance information between the real target and the camera, and the parameters of each candidate target, includes:
s201, adding a mark of the real target according to preset conditions, the real size of the real target and distance information between the real target and the camera.
Among the real targets, the targets to be detected that meet the preset condition are screened out according to the real size of each real target and its distance information from the camera: a real target that meets the preset condition is marked as a target to be detected, and a real target that does not meet the preset condition is marked as a target to be ignored. For example, a picture to be evaluated may contain a plurality of real targets, and after conversion some of them may have a real size that is too small or a distance from the camera that is too large, which does not accord with actual conditions; such real targets are marked as targets to be ignored. The preset condition may be a threshold: if the real size of a real target is larger than a preset size and its distance from the camera is smaller than a preset distance, the real target is marked as a target to be detected; if not, the real target is marked as a target to be ignored. It should be noted that the preset size and the preset distance may each be a point value or a range, and are not limited herein.
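A minimal sketch of this marking step, assuming the real size and camera distance of each real target have already been computed in step S102 and that the preset condition is a simple size/distance threshold; the helper name, dictionary keys and threshold form are illustrative assumptions.

```python
def mark_real_targets(real_targets, min_size, max_distance):
    """Attach a mark to every real target: 'detect' if it satisfies the preset
    condition, 'ignore' otherwise (illustrative sketch)."""
    for target in real_targets:
        # real_size and distance are assumed to come from the truth-value parameters (step S102)
        if target["real_size"] > min_size and target["distance"] < max_distance:
            target["mark"] = "detect"   # target to be detected
        else:
            target["mark"] = "ignore"   # target to be ignored
    return real_targets
```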
S202, calculating and obtaining evaluation information of the detection model to be evaluated according to the true value parameters of the marked real target and the parameters of each candidate target.
After the real targets are marked, the true value parameters of the marked real targets are obtained, the parameters of each candidate target are obtained, and then the evaluation information of the evaluation detection model is calculated and obtained according to the true value parameters of the target to be detected and the parameters of each candidate target, so that the accuracy of evaluation can be improved.
Fig. 3 is a schematic flowchart of another target detection and evaluation method according to an embodiment of the present application. As shown in fig. 3, optionally, the step of calculating and obtaining evaluation information of the detection model to be evaluated according to the true value parameter of the marked real target and the parameters of each candidate target includes:
s301, calculating and comparing each candidate target with each real target in each picture to be evaluated respectively.
After the plurality of real targets in each picture to be evaluated are marked, the intersection ratio of each candidate target with the targets to be detected and with the targets to be ignored is calculated respectively. For example, a rectangular frame representing each target to be detected is obtained in the picture to be evaluated, and the ratio of the area of the intersection region between the rectangular frame of each target to be detected and the rectangular frame of each candidate target in the picture to be evaluated to the area of their union region is calculated; this ratio is the intersection ratio of the candidate target and the target to be detected in the picture to be evaluated. It should be noted that the calculation method of the intersection ratio between a candidate target and a target to be detected in a picture to be evaluated is set according to the actual situation and is not specifically limited herein.
An Intersection-over-Union (IoU) is a concept used in target detection, and is the overlapping rate of the generated candidate frame and the original labeled frame, i.e. the ratio of their Intersection to Union. The optimal situation is complete overlap, i.e. a ratio of 1.
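The intersection ratio itself can be computed directly from the two rectangular frames. The sketch below assumes the frames are given as (x_min, y_min, x_max, y_max) boxes, which is an assumption about the representation rather than something fixed by the description.

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two axis-aligned rectangular frames
    given as (x_min, y_min, x_max, y_max)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b

    # Area of the intersection rectangle (zero if the frames do not overlap)
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih

    # Area of the union of the two frames
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0
```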
S302, obtaining evaluation information of the detection model to be evaluated according to the cross comparison, the mark of the corresponding real target of the cross comparison and a preset rule, wherein the evaluation information comprises: an evaluation classification for each candidate object.
Specifically, the intersection ratio of each candidate target to each target to be detected in each picture to be evaluated is calculated, and the evaluation classification of each candidate target is then obtained according to the mark of the corresponding real target and a preset evaluation rule. If the corresponding real target is a target to be ignored, the candidate target is given the evaluation classification corresponding to a target to be ignored.
The preset evaluation rule may be determined comprehensively by combining information such as classification of real targets, classification of each candidate target in the same picture to be evaluated, and the intersection ratio, so as to obtain an evaluation classification of each candidate target. The division may be performed by setting a threshold value of the intersection ratio, which is not particularly limited herein.
Optionally, the calculating and comparing of each candidate target with each real target in each picture to be evaluated respectively includes:
and sequentially calculating the intersection ratio of each candidate target and each real target in each picture to be evaluated from large to small according to the confidence degree information of the candidate targets.
Specifically, when calculating the intersection ratio of each candidate target to each target to be detected in each picture to be evaluated, the candidate targets are sorted in order from large to small according to the confidence of each candidate target, and then the intersection ratio of each candidate target to each target to be detected in each picture to be evaluated is sequentially solved according to the order.
Optionally, the evaluation classification of the candidate target comprises any one of: true class TP, false positive class FP, non-TP and non-FP.
Specifically, each candidate target is evaluated and classified according to the preset evaluation rule, and the specific classification may include a true class TP, a false positive class FP, and a non-TP and non-FP.
Fig. 4 is a schematic flowchart of another target detection and evaluation method according to an embodiment of the present application. As shown in fig. 4, optionally, after calculating and acquiring evaluation information of the detection model to be evaluated according to the real size of the real target, the distance information with the camera, and the parameters of each candidate target, the method further includes:
s401, counting classification values corresponding to all candidate targets in the obtained sample image, wherein the classification values comprise TP values and FP values.
After the intersection ratio of each candidate target and each target to be detected in each picture to be evaluated is obtained, the evaluation classification of each candidate target is sequentially determined, and if the candidate target is classified into TP classes, the existing TP value is increased by one; if the candidate target is divided into FP classes, adding one to the existing FP value; if the candidate target is neither a TP class nor a FP class, the existing TP value and FP value are unchanged. Note that the initial TP value and FP value are both 0.
S402, generating an evaluation result of the detection model to be evaluated according to the classification values corresponding to all the candidate targets.
The evaluation result may be an evaluation score, a score curve, an evaluation grade, etc. generated after counting the classification values corresponding to all the candidate targets, so as to describe the evaluation result more intuitively.
For example, when the evaluation result is an evaluation score, the evaluation score may be equal to the TP value minus the FP value; or, when the evaluation result is an evaluation grade, the grading rule may be that every ten points of evaluation score correspond to one evaluation grade. Further, a curve may also be plotted according to the evaluation score or evaluation grade.
For example, the final TP value and the FP value of all candidate targets are counted, the TP value is 80, the FP value is 30, and the difference between the TP value and the FP value is 50, then the evaluation result of the evaluation detection model is 50 points, specifically, the mode and the type of the evaluation result are set according to actual needs, and are not limited herein.
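A minimal sketch of this counting and scoring step, using the TP-minus-FP score and the ten-points-per-grade rule mentioned above as one illustrative configuration; the function name is hypothetical.

```python
def evaluation_result(classifications):
    """Count TP and FP over all candidate targets and derive an evaluation
    score and grade (illustrative sketch of one possible configuration)."""
    tp = sum(1 for c in classifications if c == "TP")
    fp = sum(1 for c in classifications if c == "FP")
    score = tp - fp          # evaluation score: TP value minus FP value
    grade = score // 10      # e.g. one evaluation grade for every ten points
    return {"TP": tp, "FP": fp, "score": score, "grade": grade}

# Example from the text: TP = 80, FP = 30  ->  score = 50
# evaluation_result(["TP"] * 80 + ["FP"] * 30 + ["neither"] * 5)["score"] == 50
```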
Optionally, the obtaining the evaluation classification of each candidate target according to the cross-over comparison, the mark of the corresponding real target of the cross-over comparison, and the preset rule includes:
and obtaining the evaluation classification of each candidate target according to the cross comparison, the mark of the corresponding real target of the cross comparison and the preset threshold of the cross comparison.
In the specific implementation process, the value of the intersection ratio between a real target and a candidate target is compared with the preset intersection-ratio threshold, the intersection ratios that meet the preset threshold are selected, and the mark of the real target corresponding to each such intersection ratio is obtained. If the real target is a target to be detected, the candidate target is given the evaluation classification corresponding to a target to be detected; if the real target is a target to be ignored, the candidate target is given the evaluation classification corresponding to a target to be ignored.
For example, after the intersection ratio of each candidate target to each real target in each picture to be evaluated has been calculated in descending order of the confidence information of the candidate targets, the evaluation classification of each candidate target is obtained according to the intersection ratio, the mark of the corresponding real target and the preset rule. The judgment on each candidate target may include:
selecting the candidate target with the highest confidence among those not yet judged, and obtaining its intersection ratio with each real target in the same picture to be evaluated, together with the maximum of these intersection ratios. If the maximum intersection ratio is larger than the preset intersection-ratio threshold, it is further judged whether the mark of the corresponding real target is a target to be ignored; if so, the candidate target is classified into the non-FP and non-TP class. If the mark of the real target is a target to be detected, it is judged whether the real target corresponding to the maximum intersection ratio has already been detected; if not, the candidate target is classified into the TP class, i.e. the TP value is increased by 1. If the real target corresponding to the maximum intersection ratio has already been detected, it is judged whether the candidate target meets the candidate target condition according to the pixel distance and the size information of the candidate target; if so, the candidate target is classified into the FP class, and if not, into the non-FP and non-TP class. It should be noted that the pixel distance and the size information of the candidate target may be obtained from the pixel size of the candidate target and the size information of the candidate target on the picture to be evaluated, so as to prevent a candidate target that is too small or shot from too far away from affecting the judgment result.
Or if the maximum intersection ratio is not greater than a preset intersection ratio threshold, judging whether the candidate target meets the candidate target condition or not according to the pixel distance and the size information of the candidate target, if so, classifying the candidate target into an FP class, and if not, classifying the candidate target into a non-FP and non-TP class.
Whether the candidate target meets the candidate target condition is judged according to the pixel distance and the size information of the candidate target; this judgment is similar to judging whether a marked real target is a target to be detected or a target to be ignored, and is not repeated here.
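Putting the above judgment procedure together, the following sketch classifies the candidate targets of one picture to be evaluated in descending order of confidence. It reuses the iou function sketched earlier and the 'mark' field set by the marking step; the candidate_ok callback stands in for the pixel-distance/size check on the candidate, whose exact form is left open in the description, so its default here is an assumption.

```python
def classify_candidates(candidates, real_targets, iou_threshold, candidate_ok=lambda c: True):
    """Greedy evaluation classification of the candidate targets of one picture
    (illustrative sketch).  candidates: dicts with 'box' and 'score';
    real_targets: dicts with 'box' and 'mark' ('detect' or 'ignore').
    Returns a list of 'TP' / 'FP' / 'neither' in processing order."""
    results = []
    already_detected = [False] * len(real_targets)

    # Process candidate targets from the highest confidence to the lowest
    for cand in sorted(candidates, key=lambda c: c["score"], reverse=True):
        ious = [iou(cand["box"], gt["box"]) for gt in real_targets]
        best = max(range(len(real_targets)), key=lambda i: ious[i]) if real_targets else None

        if best is not None and ious[best] > iou_threshold:
            if real_targets[best]["mark"] == "ignore":
                results.append("neither")                     # non-TP and non-FP
            elif not already_detected[best]:
                already_detected[best] = True
                results.append("TP")                          # first match of a target to be detected
            else:
                results.append("FP" if candidate_ok(cand) else "neither")
        else:
            results.append("FP" if candidate_ok(cand) else "neither")
    return results
```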
In the target detection evaluation method provided by the embodiments of the present application, the truth-value parameters of a plurality of pictures to be evaluated in a sample image are obtained, the real size of each real target and the distance information between the real target and the camera are calculated from the truth-value parameters, and the sample image is then identified with the detection model to be evaluated to obtain a plurality of candidate targets of the sample image and the parameters of each candidate target; the evaluation information of the detection model to be evaluated is then calculated from the real size of the real target, the distance information to the camera and the parameters of the plurality of candidate targets. Because multi-dimensional parameters such as the camera parameters, position and classification are taken into account, the accuracy of target detection evaluation in the prior art is improved.
Fig. 5 is a schematic flowchart of an object detection and evaluation apparatus according to an embodiment of the present application, and as shown in fig. 5, an object detection and evaluation apparatus according to an embodiment of the present application includes: an acquisition module 501, a calculation module 502 and an evaluation module 503;
the obtaining module 501 is configured to obtain a sample image, where the sample image includes multiple pictures to be evaluated, and the pictures to be evaluated are labeled with true values of a real target, where the true values include: camera parameters, size information, position information and classification information of a real target;
a calculating module 502, configured to calculate a real size of a real target and distance information between the real target and the camera according to the true value parameters, identify a sample image by using a detection model to be evaluated, and obtain a plurality of candidate targets and parameters of each candidate target, where the parameters of the candidate targets include: size information, position information, confidence information and classification information of each candidate target;
and the evaluation module 503 is configured to calculate and obtain evaluation information of the detection model to be evaluated according to the real size of the real target, the distance information between the real target and the camera, and the parameters of each candidate target.
Optionally, the camera parameters include: the camera mounting height for taking the sample image, the camera horizontal downward depression angle, the camera optical axis horizontal and vertical offsets, and the camera horizontal and vertical focal lengths.
Optionally, the evaluation module 503 is specifically configured to add a mark of the real target according to the preset condition, the real size of the real target, and the distance information from the camera, where the mark indicates that the real target meeting the preset condition is the target to be detected and the real target not meeting the preset condition is the target to be ignored; and calculating and acquiring the evaluation information of the detection model to be evaluated according to the true value parameters of the marked real target and the parameters of each candidate target.
Optionally, the evaluation module 503 is specifically configured to calculate an intersection ratio between each candidate target and each real target in each picture to be evaluated; and obtaining the evaluation classification of each candidate target according to the cross comparison, the mark of the corresponding real target and the preset rule.
Optionally, the evaluation module 503 is specifically configured to calculate, from large to small, an intersection ratio between each candidate target and each real target in each picture to be evaluated in sequence according to the confidence information of the candidate targets.
Optionally, the evaluation classification of the candidate target comprises any one of: true class TP, false positive class FP, non-TP and non-FP.
Fig. 6 is a schematic flowchart of another target detection and evaluation apparatus according to an embodiment of the present application. As shown in fig. 6, optionally, the apparatus further includes a generating module 504, where the generating module 504 is configured to statistically obtain classification values corresponding to all candidate targets in the sample image, where the classification values include TP values and FP values; and generating an evaluation result of the detection model to be evaluated according to the classification values corresponding to all the candidate targets.
Optionally, the evaluation module 503 is specifically configured to obtain evaluation information of the detection model to be evaluated according to the cross-over ratio, the mark of the corresponding real target in the cross-over ratio, and a preset threshold of the cross-over ratio, where the evaluation information includes: an evaluation classification for each candidate object.
The target detection evaluation device provided in the embodiments of the present application obtains the truth-value parameters of a plurality of pictures to be evaluated in a sample image, calculates from the truth-value parameters the real size of each real target and the distance information between the real target and the camera, and then identifies the sample image with the detection model to be evaluated to obtain a plurality of candidate targets of the sample image and the parameters of each candidate target; the evaluation information of the detection model to be evaluated is then calculated from the real size of the real target, the distance information to the camera and the parameters of the plurality of candidate targets. Because multi-dimensional parameters such as the camera parameters, position and classification are taken into account, the accuracy of target detection evaluation in the prior art is improved.
The modules may be connected or in communication with each other via a wired or wireless connection. The wired connection may include a metal cable, an optical cable, a hybrid cable, etc., or any combination thereof. The wireless connection may comprise a connection over a LAN, WAN, Bluetooth, ZigBee, NFC, or the like, or any combination thereof. Two or more modules may be combined into a single module, and any one module may be divided into two or more units.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the system and the apparatus described above may refer to the corresponding processes in the method embodiments, and are not described in detail in this application.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative; for example, the division of modules is merely a division of logical functions, and an actual implementation may have another division; for example, a plurality of modules or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or modules through some communication interfaces, and may be in an electrical, mechanical or other form.
The modules described as separate parts may or may not be physically separate, and parts displayed as modules may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
It should be noted that the above modules may be one or more integrated circuits configured to implement the above methods, for example: one or more Application Specific Integrated Circuits (ASICs), one or more digital signal processors (DSPs), or one or more Field Programmable Gate Arrays (FPGAs), among others. For another example, when one of the above modules is implemented in the form of program code scheduled by a processing element, the processing element may be a general-purpose processor, such as a Central Processing Unit (CPU), or another processor capable of calling program code. For another example, the modules may be integrated together and implemented in the form of a System-on-a-Chip (SoC).
Fig. 7 is a schematic structural diagram of an electronic device provided in the present disclosure. As shown in fig. 7, an electronic device is further provided in an embodiment of the present application, and includes a storage medium 601 storing a computer program and a processor 602, where the computer program is read by the processor and executed to implement the steps of the method in the foregoing method embodiment.
Embodiments of the present application further provide a storage medium, on which a computer program is stored, where the computer program is executed by a processor to perform the steps of the method in the foregoing method embodiments.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (11)

1. A target detection evaluation method, comprising:
obtaining a sample image, wherein the sample image comprises a plurality of pictures to be evaluated, the pictures to be evaluated are marked with truth-valued parameters of a real target, and the truth-valued parameters comprise: camera parameters, size information, position information and classification information of a real target;
calculating the real size of the real target and the distance information between the real target and a camera according to the true value parameters, identifying the sample image by adopting a detection model to be evaluated, and acquiring a plurality of candidate targets and parameters of each candidate target, wherein the parameters of the candidate targets comprise: size information, position information, confidence information and classification information of each candidate target;
and calculating and acquiring the evaluation information of the detection model to be evaluated according to the real size of the real target, the distance information between the real target and the camera and the parameters of each candidate target.
2. The method according to claim 1, wherein the calculating evaluation information of the detection model to be evaluated according to the real size of the real target, the distance information with the camera, and the parameters of each candidate target comprises:
adding a mark of the real target according to a preset condition, the real size of the real target and distance information between the real target and a camera, wherein the mark indicates that the real target meeting the preset condition is a target to be detected and the real target not meeting the preset condition is a target to be ignored;
and calculating and acquiring the evaluation information of the detection model to be evaluated according to the marked true value parameters of the real target and the parameters of each candidate target.
3. The method according to claim 2, wherein the calculating evaluation information of the detection model to be evaluated according to the marked true value parameters of the real target and the parameters of each candidate target includes:
calculating an intersection ratio between each candidate target and each real target in each picture to be evaluated;
obtaining evaluation information of the detection model to be evaluated according to the cross-over comparison, the mark of the real target corresponding to the cross-over comparison and a preset rule, wherein the evaluation information comprises: an evaluation classification for each of the candidate objects.
4. The method according to claim 3, wherein said calculating a cross-over ratio of each candidate target to each real target in each picture to be evaluated comprises:
and sequentially calculating the intersection ratio of each candidate target and each real target in each picture to be evaluated from large to small according to the confidence information of the candidate targets.
5. The method of claim 3 or 4, wherein the evaluation classification of the candidate object comprises any one of: true class TP, false positive class FP, and non-TP and non-FP.
6. The method according to claim 5, wherein after calculating and acquiring evaluation information of the detection model to be evaluated according to the real size of the real target, the distance information with the camera, and the parameters of each candidate target, the method further comprises:
counting and obtaining classification values corresponding to all the candidate targets in the sample image, wherein the classification values comprise TP values and FP values;
and generating an evaluation result of the detection model to be evaluated according to the classification values corresponding to all the candidate targets.
7. An object detection evaluation apparatus, characterized in that the apparatus comprises: the device comprises an acquisition module, a calculation module and an evaluation module;
the obtaining module is configured to obtain a sample image, where the sample image includes multiple pictures to be evaluated, the pictures to be evaluated are labeled with true value parameters of a real target, and the true value parameters include: camera parameters, size information, position information and classification information of a real target;
the calculation module is configured to calculate a real size of the real target and distance information between the real target and a camera according to the true value parameters, identify the sample image by using a detection model to be evaluated, and obtain a plurality of candidate targets and parameters of each candidate target, where the parameters of the candidate targets include: size information, position information, confidence information and classification information of each candidate target;
and the evaluation module is used for calculating and acquiring evaluation information of the detection model to be evaluated according to the real size of the real target, the distance information between the real target and the camera and the parameters of each candidate target.
8. The device according to claim 7, wherein the evaluation module is specifically configured to add a mark of the real target according to a preset condition, a real size of the real target, and distance information from a camera, where the mark indicates that the real target satisfying the preset condition is a target to be detected and the real target not satisfying the preset condition is a target to be ignored;
and calculating and acquiring the evaluation information of the detection model to be evaluated according to the marked true value parameters of the real target and the parameters of each candidate target.
9. The apparatus according to claim 8, wherein the evaluation module is specifically configured to calculate an intersection ratio between each candidate target and each real target in each picture to be evaluated;
obtaining evaluation information of the detection model to be evaluated according to the cross comparison, the mark of the corresponding real target of the cross comparison and a preset rule, wherein the evaluation information comprises: an evaluation classification for each of the candidate objects.
10. An electronic device, comprising a storage medium storing a computer program and a processor, wherein the computer program, when read and executed by the processor, implements the method of any one of claims 1 to 6.
11. A storage medium, characterized in that it has stored thereon a computer program which, when read and executed by a processor, implements the method of any of the preceding claims 1-6.
CN201910436284.1A 2019-05-23 2019-05-23 Target detection evaluation method and device, electronic equipment and storage medium Active CN110751012B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910436284.1A CN110751012B (en) 2019-05-23 2019-05-23 Target detection evaluation method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN110751012A true CN110751012A (en) 2020-02-04
CN110751012B CN110751012B (en) 2021-01-12

Family

ID=69275739

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910436284.1A Active CN110751012B (en) 2019-05-23 2019-05-23 Target detection evaluation method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN110751012B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130034263A1 (en) * 2011-08-04 2013-02-07 Yuanyuan Ding Adaptive Threshold for Object Detection
CN107563372A * 2017-07-20 2018-01-09 济南中维世纪科技有限公司 License plate locating method based on the deep-learning SSD framework
CN108764372A * 2018-06-08 2018-11-06 Oppo广东移动通信有限公司 Data set construction method and apparatus, mobile terminal, and readable storage medium
CN109582793A * 2018-11-23 2019-04-05 深圳前海微众银行股份有限公司 Model training method, customer service system, data labeling system, and readable storage medium

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112329892A (en) * 2020-12-03 2021-02-05 中国第一汽车股份有限公司 Target detection algorithm evaluation method, device, equipment and storage medium
CN112528079A (en) * 2020-12-22 2021-03-19 北京百度网讯科技有限公司 System detection method, apparatus, electronic device, storage medium, and program product
CN112712119A (en) * 2020-12-30 2021-04-27 杭州海康威视数字技术股份有限公司 Method and device for determining detection accuracy of target detection model
CN112712119B (en) * 2020-12-30 2023-10-24 杭州海康威视数字技术股份有限公司 Method and device for determining detection accuracy of target detection model
WO2023273895A1 (en) * 2021-06-29 2023-01-05 苏州一径科技有限公司 Method for evaluating clustering-based target detection model
CN113674315A (en) * 2021-07-21 2021-11-19 浙江大华技术股份有限公司 Object detection method, device and computer readable storage medium
CN115902227A (en) * 2022-12-22 2023-04-04 巴迪泰(广西)生物科技有限公司 Detection evaluation method and system of immunofluorescence kit
CN115902227B (en) * 2022-12-22 2024-05-14 巴迪泰(广西)生物科技有限公司 Detection and evaluation method and system for immunofluorescence kit

Also Published As

Publication number Publication date
CN110751012B (en) 2021-01-12

Similar Documents

Publication Publication Date Title
CN110751012B (en) Target detection evaluation method and device, electronic equipment and storage medium
CN106952303B (en) Vehicle distance detection method, device and system
EP3806064B1 (en) Method and apparatus for detecting parking space usage condition, electronic device, and storage medium
CN107944450B (en) License plate recognition method and device
CN105335955B Object detection method and object detection device
CN110119726B (en) Vehicle brand multi-angle identification method based on YOLOv3 model
CN113822247B (en) Method and system for identifying illegal building based on aerial image
CN106951898B (en) Vehicle candidate area recommendation method and system and electronic equipment
US20120020523A1 (en) Information creation device for estimating object position and information creation method and program for estimating object position
WO2017051480A1 (en) Image processing device and image processing method
CN111274926B (en) Image data screening method, device, computer equipment and storage medium
CN111898491B (en) Identification method and device for reverse driving of vehicle and electronic equipment
CN111008576B (en) Pedestrian detection and model training method, device and readable storage medium
US20210133495A1 (en) Model providing system, method and program
CN110544268B (en) Multi-target tracking method based on structured light and SiamMask network
Matzka et al. Efficient resource allocation for attentive automotive vision systems
CN111931683B (en) Image recognition method, device and computer readable storage medium
US20230236038A1 (en) Position estimation method, position estimation device, and position estimation program
CN111860219B (en) High-speed channel occupation judging method and device and electronic equipment
CN111091023A (en) Vehicle detection method and device and electronic equipment
CN113030990A (en) Fusion ranging method and device for vehicle, ranging equipment and medium
CN114005105B (en) Driving behavior detection method and device and electronic equipment
JP2014016710A (en) Object detection device and program
CN112912892A (en) Automatic driving method and device and distance determining method and device
JP2011081614A (en) Recognition system, recognition method, and program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant