CN111797993B - Evaluation method and device of deep learning model, electronic equipment and storage medium - Google Patents


Info

Publication number
CN111797993B
CN111797993B (application CN202010549757.1A)
Authority
CN
China
Prior art keywords
difference
vertex
determining
index parameter
evaluation index
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010549757.1A
Other languages
Chinese (zh)
Other versions
CN111797993A (en)
Inventor
苏英菲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Neusoft Reach Automotive Technology Shenyang Co Ltd
Original Assignee
Neusoft Reach Automotive Technology Shenyang Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Neusoft Reach Automotive Technology Shenyang Co Ltd filed Critical Neusoft Reach Automotive Technology Shenyang Co Ltd
Priority to CN202010549757.1A priority Critical patent/CN111797993B/en
Publication of CN111797993A publication Critical patent/CN111797993A/en
Application granted granted Critical
Publication of CN111797993B publication Critical patent/CN111797993B/en


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00 Machine learning

Abstract

The invention discloses an evaluation method and apparatus for a deep learning model, an electronic device, and a storage medium. The method comprises: determining first vertex coordinates of a predicted bounding box of a target object in a test image based on the deep learning model; determining an evaluation index parameter based on the first vertex coordinates and second vertex coordinates, where the second vertex coordinates are the vertex coordinates of the actual bounding box of the target object in the test image; and evaluating the deep learning model based on the evaluation index parameter. The invention enriches the ways in which a trained deep learning model can be evaluated, enables accurate evaluation of the model, and helps ensure the accuracy of the trained model.

Description

Evaluation method and device of deep learning model, electronic equipment and storage medium
Technical Field
The present invention relates to the field of automatic driving technologies, and in particular, to a method and apparatus for evaluating a deep learning model, an electronic device, and a storage medium.
Background
The overall framework for deep learning model training includes: acquiring sample images, preprocessing them, inputting the preprocessed images into an existing model (such as a YOLO model) for training, evaluating the trained model, and then deciding, based on the evaluation result, whether to continue or end training.
The model evaluation approach adopted in the related art is limited and cannot accurately evaluate a trained deep learning model, which may affect the accuracy of the trained model.
Disclosure of Invention
In view of the above, the present invention provides an evaluation method, apparatus, electronic device, and storage medium for a deep learning model to solve the above problems.
In order to achieve the above purpose, the technical scheme adopted by the invention is as follows:
according to a first aspect of an embodiment of the present invention, there is provided an evaluation method of a deep learning model, including:
determining first vertex coordinates of a prediction boundary box of a target object in the test image based on the deep learning model;
determining an evaluation index parameter based on the first vertex coordinates and second vertex coordinates, wherein the second vertex coordinates comprise vertex coordinates of an actual bounding box of the target object in the test image;
and evaluating the deep learning model based on the evaluation index parameter.
In an embodiment, the determining the evaluation index parameter based on the first vertex coordinate and the second vertex coordinate includes:
the evaluation index parameter is determined based on a difference between the first vertex coordinates and the second vertex coordinates.
In an embodiment, the determining the evaluation index parameter based on the difference between the first vertex coordinate and the second vertex coordinate comprises:
determining a first difference between the maximum abscissa values of the first vertex and the second vertex, a second difference between the minimum abscissa values, a third difference between the maximum ordinate values, and a fourth difference between the minimum ordinate values;
calculating a fifth difference value between the maximum value and the minimum value of the abscissa of the second vertex and a sixth difference value between the maximum value and the minimum value of the ordinate;
determining a ratio of a sum of absolute values of the first, second, third, and fourth differences to a sum of absolute values of the fifth and sixth differences;
the evaluation index parameter is determined based on the ratio.
In an embodiment, the determining the evaluation index parameter based on the ratio comprises:
the evaluation index parameter is determined based on the difference of 1 and the ratio.
In an embodiment, the determining the evaluation index parameter based on the ratio comprises:
acquiring the intersection ratio of the prediction boundary box and the actual boundary box;
determining an improved intersection ratio based on a product of the ratio and the intersection ratio;
and determining the evaluation index parameter based on the improved intersection ratio.
According to a second aspect of the embodiment of the present invention, there is provided an evaluation device for a deep learning model, including:
the vertex coordinate determining module is used for determining first vertex coordinates of a prediction boundary frame of the target object in the test image based on the deep learning model;
the index parameter determining module is used for determining an evaluation index parameter based on the first vertex coordinates and the second vertex coordinates, wherein the second vertex coordinates comprise vertex coordinates of an actual boundary frame of the target object in the test image;
and the learning model evaluation module is used for evaluating the deep learning model based on the evaluation index parameter.
In an embodiment, the index parameter determination module is further configured to determine the evaluation index parameter based on a difference between the first vertex coordinate and the second vertex coordinate.
In an embodiment, the index parameter determining module includes:
a difference determining unit configured to determine a first difference between maximum values of abscissas of the first vertex and the second vertex, a second difference between minimum values of abscissas, a third difference between maximum values of ordinates, and a fourth difference between minimum values of ordinates;
a difference calculating unit, configured to calculate a fifth difference between a maximum value and a minimum value of an abscissa of the second vertex and a sixth difference between a maximum value and a minimum value of an ordinate;
a ratio determining unit configured to determine a ratio of a sum of absolute values of the first difference, the second difference, the third difference, and the fourth difference to a sum of absolute values of the fifth difference and the sixth difference;
and a parameter determining unit for determining the evaluation index parameter based on the ratio.
In an embodiment, the parameter determining unit is further configured to determine the evaluation index parameter based on a difference between 1 and the ratio.
In an embodiment, the parameter determining unit is further configured to:
acquiring the intersection ratio of the prediction boundary box and the actual boundary box;
determining an improved intersection ratio based on a product of the ratio and the intersection ratio;
and determining the evaluation index parameter based on the improved intersection ratio.
According to a third aspect of an embodiment of the present invention, there is provided an electronic device including:
a processor;
a memory configured to store processor-executable instructions;
wherein the processor is configured to:
determining first vertex coordinates of a prediction boundary box of a target object in the test image based on the deep learning model;
determining an evaluation index parameter based on the first vertex coordinates and second vertex coordinates, wherein the second vertex coordinates comprise vertex coordinates of an actual bounding box of the target object in the test image;
and evaluating the deep learning model based on the evaluation index parameter.
According to a fourth aspect of an embodiment of the present invention, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements:
determining first vertex coordinates of a prediction boundary box of a target object in the test image based on the deep learning model;
determining an evaluation index parameter based on the first vertex coordinates and second vertex coordinates, wherein the second vertex coordinates comprise vertex coordinates of an actual bounding box of the target object in the test image;
and evaluating the deep learning model based on the evaluation index parameter.
Compared with the prior art, the evaluation method of the deep learning model provided by the invention determines the first vertex coordinates of the predicted bounding box of the target object in the test image based on the deep learning model, determines the evaluation index parameter based on the first vertex coordinates and the second vertex coordinates (the latter being the vertex coordinates of the actual bounding box of the target object in the test image), and evaluates the deep learning model based on the evaluation index parameter. This enriches the ways of evaluating the currently trained deep learning model, realizes accurate evaluation of the model, and ensures the accuracy of the trained model.
Drawings
FIG. 1 illustrates a flowchart of a method of evaluating a deep learning model according to an exemplary embodiment of the present invention;
FIG. 2 shows a flowchart of a method of evaluating a deep learning model according to yet another exemplary embodiment of the present invention;
FIG. 3A is a schematic diagram showing how the evaluation index parameter is determined based on the difference between the first vertex coordinates and the second vertex coordinates according to the present invention;
FIG. 3B shows a schematic representation of a predicted bounding box and an actual bounding box of a target object in accordance with the present invention;
FIG. 4 shows a schematic diagram of how the evaluation index parameter is determined based on the ratio according to the present invention;
FIG. 5 shows a block diagram of a deep learning model evaluation apparatus according to an exemplary embodiment of the present invention;
FIG. 6 shows a block diagram of a deep learning model evaluation apparatus according to another exemplary embodiment of the present invention;
fig. 7 shows a block diagram of an electronic device according to an exemplary embodiment of the invention.
Detailed Description
The present invention will be described in detail below with reference to the specific embodiments shown in the drawings. These embodiments are not intended to limit the invention, and structural, methodological, or functional modifications made based on these embodiments fall within the scope of the invention.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in this specification and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any or all possible combinations of one or more of the associated listed items.
It should be understood that although the terms actual, predicted, etc. may be used in this disclosure to describe various structures, these structures should not be limited by these terms. These terms are only used to distinguish one type of structure from another.
Fig. 1 shows a flowchart of an evaluation method of a deep learning model according to an exemplary embodiment of the present invention. The method of this embodiment can be applied to a server (e.g., a single server or a server cluster formed by multiple servers). As shown in fig. 1, the method includes the following steps S101-S103:
in step S101, first vertex coordinates of a prediction boundary box of a target object in a test image are determined based on a deep learning model.
In this embodiment, the server may determine, based on the deep learning model, the first vertex coordinates of the prediction bounding box of the target object in the test image.
For example, after a test image for testing a deep learning model is acquired, the test image may be input to the deep learning model to predict a prediction Bounding Box (e.g., a first Bounding Box) of the target object in the test image, and then each vertex coordinate (i.e., a first vertex coordinate) of the Bounding Box may be acquired.
In this embodiment, the deep learning model may be a deep learning model that is trained by a set model training method by using training sample images in advance. It should be noted that, the set model training method may be set based on actual service requirements, which is not limited in this embodiment.
It will be appreciated that the above-described test images are matched to the use of the deep learning model. For example, if the purpose of the deep learning model is to implement object recognition in the surrounding environment of the vehicle in the autopilot field, the test image may include an image of the surrounding environment of the vehicle, and the target object may be an object such as a vehicle or an obstacle in the image.
In an alternative embodiment, the test image may be acquired by a monocular camera or the like mounted in a set position on the vehicle. The type of the test image may be set by a developer according to actual needs, which is not limited in this embodiment.
In step S102, an evaluation index parameter is determined based on the first vertex coordinates and the second vertex coordinates.
In this embodiment, after determining the first vertex coordinates of the prediction bounding box of the target object in the test image based on the deep learning model, the evaluation index parameter may be determined based on the first vertex coordinates and the second vertex coordinates, where the second vertex coordinates include vertex coordinates of an actual bounding box of the target object in the test image.
For example, after the test image for testing the deep learning model is obtained, an actual Bounding Box (e.g., the second Bounding Box) of the target object may be marked in the test image by using a manual marking or an automatic marking, and then each vertex coordinate (i.e., the second vertex coordinate) of the Bounding Box may be obtained.
It should be noted that, for the manner of labeling the actual bounding box of the target object in the test image, reference may be made to the explanations in the related art; this embodiment is not limited to a specific labeling manner.
Further, after the first vertex coordinates and the second vertex coordinates are determined, the evaluation index parameter may be determined based on the first vertex coordinates and the second vertex coordinates.
In another embodiment, for the manner of determining the evaluation index parameter based on the first vertex coordinates and the second vertex coordinates, reference may be made to the embodiment shown in fig. 2 below, which will not be detailed here.
In step S103, the deep learning model is evaluated based on the evaluation index parameter.
In this embodiment, after determining the evaluation index parameter based on the first vertex coordinate and the second vertex coordinate, the deep learning model may be evaluated based on the evaluation index parameter.
For example, after determining the evaluation index parameter based on the first vertex coordinate and the second vertex coordinate, the evaluation index parameter may be compared with a set threshold value, and the deep learning model may be evaluated based on the obtained comparison result.
It should be noted that, the size of the set threshold may be set by a developer according to actual service needs, which is not limited in this embodiment.
In another embodiment, the evaluation of the deep learning model based on the evaluation index parameter may also refer to the embodiment shown in fig. 3A below, which will not be detailed here.
As can be seen from the above technical solution, in this embodiment, the first vertex coordinates of the predicted bounding box of the target object in the test image are determined based on the deep learning model, the evaluation index parameter is determined based on the first vertex coordinates and the second vertex coordinates (the latter being the vertex coordinates of the actual bounding box of the target object in the test image), and the deep learning model is then evaluated based on the evaluation index parameter. This enriches the ways of evaluating the currently trained deep learning model, realizes accurate evaluation of the model, and ensures the accuracy of the trained model.
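The three steps above can be sketched end to end. The following is a minimal illustration, assuming rectangular boxes given as four (x, y) vertex tuples; `predict`, `evaluate_deep_model`, and the threshold value 0.5 are hypothetical stand-ins, not part of the patent:

```python
from typing import Callable, List, Tuple

Box = List[Tuple[float, float]]  # four (x, y) vertex tuples

def evaluate_deep_model(predict: Callable[[object], Box],
                        test_image: object,
                        actual_vertices: Box,
                        threshold: float = 0.5) -> bool:
    """Sketch of steps S101-S103; `predict` stands in for the trained
    deep learning model's inference call (hypothetical), and the
    threshold value is an illustrative assumption."""
    pred_vertices = predict(test_image)  # S101: first vertex coordinates
    xs_d = [x for x, _ in pred_vertices]
    ys_d = [y for _, y in pred_vertices]
    xs_g = [x for x, _ in actual_vertices]
    ys_g = [y for _, y in actual_vertices]
    # S102: evaluation index parameter alpha = 1 - beta (see fig. 3A).
    beta = ((abs(max(xs_d) - max(xs_g)) + abs(min(xs_d) - min(xs_g)) +
             abs(max(ys_d) - max(ys_g)) + abs(min(ys_d) - min(ys_g))) /
            ((max(xs_g) - min(xs_g)) + (max(ys_g) - min(ys_g))))
    alpha = 1 - beta
    # S103: evaluate by comparing alpha with the set threshold.
    return alpha >= threshold
```

With a perfect prediction, beta is 0 and alpha is 1, so the model passes any threshold below 1; the further the predicted box drifts from the actual one, the smaller alpha becomes.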
FIG. 2 shows a flowchart of a method of evaluating a deep learning model according to yet another exemplary embodiment of the present invention. The method of this embodiment can be applied to a server (e.g., a single server or a server cluster formed by multiple servers). As shown in fig. 2, the method includes the following steps S201-S203:
in step S201, first vertex coordinates of a prediction boundary box of a target object in a test image are determined based on a deep learning model.
In step S202, the evaluation index parameter is determined based on the difference between the first vertex coordinates and the second vertex coordinates.
In this embodiment, after determining the first vertex coordinates of the prediction bounding box of the target object in the test image based on the deep learning model, the evaluation index parameter may be determined based on a difference between the first vertex coordinates and the second vertex coordinates, where the second vertex coordinates include vertex coordinates of an actual bounding box of the target object in the test image.
For example, after the test image for testing the deep learning model is obtained, an actual Bounding Box (e.g., the second Bounding Box) of the target object may be marked in the test image by using a manual marking or an automatic marking, and then each vertex coordinate (i.e., the second vertex coordinate) of the Bounding Box may be obtained.
It should be noted that, for the manner of labeling the actual bounding box of the target object in the test image, reference may be made to the explanations in the related art; this embodiment is not limited to a specific labeling manner.
Further, after the first vertex coordinates and the second vertex coordinates are determined, the evaluation index parameter may be determined based on a difference value between the first vertex coordinates and the second vertex coordinates.
In another embodiment, for the manner of determining the evaluation index parameter based on the difference between the first vertex coordinates and the second vertex coordinates, reference may be made to the embodiment shown in fig. 3A below, which will not be detailed here.
In step S203, the deep learning model is evaluated based on the evaluation index parameter.
The explanation and explanation of steps S201 and S203 may be referred to the above embodiments, and are not repeated here.
As can be seen from the above technical solution, in this embodiment, the first vertex coordinates of the predicted bounding box of the target object in the test image are determined based on the deep learning model, and the evaluation index parameter is determined based on the difference between the first vertex coordinates and the second vertex coordinates, where the second vertex coordinates are the vertex coordinates of the actual bounding box of the target object in the test image. The deep learning model is then evaluated based on the evaluation index parameter. This enriches the ways of evaluating the currently trained deep learning model, realizes accurate evaluation of the model, and ensures the accuracy of the trained model.
FIG. 3A is a schematic diagram showing how the evaluation index parameter is determined based on the difference between the first vertex coordinates and the second vertex coordinates according to the present invention. On the basis of the above embodiments, this embodiment illustrates how that determination is made. As shown in fig. 3A, determining the evaluation index parameter based on the difference between the first vertex coordinates and the second vertex coordinates in step S202 may include the following steps S301-S304:
in step S301, a first difference between the maximum values of the abscissas of the first vertex and the second vertex, a second difference between the minimum values of the abscissas, a third difference between the maximum values of the ordinates, and a fourth difference between the minimum values of the ordinates are determined.
In this embodiment, after the first vertex coordinates and the second vertex coordinates are determined, a first difference between the maximum abscissa values of the first vertex and the second vertex, a second difference between the minimum abscissa values, a third difference between the maximum ordinate values, and a fourth difference between the minimum ordinate values may be determined.
For example, fig. 3B shows a schematic diagram of a predicted bounding box and an actual bounding box of a target object according to the present invention. As shown in fig. 3B, once the predicted bounding box (the solid-line box 100 in the figure) and the actual bounding box (the dashed-line box 200 in the figure) are determined, the vertex coordinates D1(x_d1, y_d1), D2(x_d2, y_d2), D3(x_d3, y_d3), and D4(x_d4, y_d4) of the predicted bounding box and the vertex coordinates G1(x_g1, y_g1), G2(x_g2, y_g2), G3(x_g3, y_g3), and G4(x_g4, y_g4) of the actual bounding box can be obtained.
As can be seen from fig. 3B: x_d1 = x_d3 = x_dmin, x_d2 = x_d4 = x_dmax, x_g1 = x_g3 = x_gmin, x_g2 = x_g4 = x_gmax; y_d1 = y_d2 = y_dmin, y_d3 = y_d4 = y_dmax, y_g1 = y_g2 = y_gmin, y_g3 = y_g4 = y_gmax.
Here x_dmin and x_dmax are the minimum and maximum abscissa values of the first vertices; x_gmin and x_gmax are the minimum and maximum abscissa values of the second vertices; y_dmin and y_dmax are the minimum and maximum ordinate values of the first vertices; and y_gmin and y_gmax are the minimum and maximum ordinate values of the second vertices.
On this basis, the first difference diff_1 between the maximum abscissa values of the first and second vertices, the second difference diff_2 between the minimum abscissa values, the third difference diff_3 between the maximum ordinate values, and the fourth difference diff_4 between the minimum ordinate values can be calculated based on the following equations (1)-(4):

diff_1 = x_dmax - x_gmax; (1)
diff_2 = x_dmin - x_gmin; (2)
diff_3 = y_dmax - y_gmax; (3)
diff_4 = y_dmin - y_gmin; (4)
In step S302, a fifth difference between the maximum value and the minimum value of the abscissa of the second vertex and a sixth difference between the maximum value and the minimum value of the ordinate are calculated.
Still referring to the embodiment shown in FIG. 3B, the fifth difference diff_5 between the maximum and minimum abscissa values of the second vertices and the sixth difference diff_6 between the maximum and minimum ordinate values can be calculated based on the following equations (5)-(6):

diff_5 = x_gmax - x_gmin; (5)
diff_6 = y_gmax - y_gmin. (6)
In step S303, a ratio of a sum of absolute values of the first difference, the second difference, the third difference, and the fourth difference to a sum of absolute values of the fifth difference and the sixth difference is determined.
On this basis, the ratio β of the sum of the absolute values of the first difference diff_1, the second difference diff_2, the third difference diff_3, and the fourth difference diff_4 to the sum of the absolute values of the fifth difference diff_5 and the sixth difference diff_6 can be determined based on the following equation (7):

β = (|diff_1| + |diff_2| + |diff_3| + |diff_4|) / (|diff_5| + |diff_6|); (7)
in step S304, the evaluation index parameter is determined based on the ratio.
In this embodiment, after the above-mentioned ratio β is calculated, the evaluation index parameter may be determined based on the ratio.
In this embodiment, the evaluation index parameter may be determined based on a difference between 1 and the above ratio.
Namely, the evaluation index parameter α is calculated based on the following equation (8):
α = 1 - β. (8)
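Equations (1)-(8) can be collected into a single helper. The sketch below assumes each box is given as four (x, y) vertex tuples; the function and variable names are illustrative, not from the patent:

```python
def evaluation_index(pred_box, actual_box):
    """Compute the ratio beta (equation (7)) and the evaluation index
    parameter alpha = 1 - beta (equation (8)) from the vertex
    coordinates of the predicted and actual bounding boxes."""
    xs_d = [x for x, _ in pred_box]
    ys_d = [y for _, y in pred_box]
    xs_g = [x for x, _ in actual_box]
    ys_g = [y for _, y in actual_box]

    # Equations (1)-(4): differences between corresponding extrema.
    diff1 = max(xs_d) - max(xs_g)
    diff2 = min(xs_d) - min(xs_g)
    diff3 = max(ys_d) - max(ys_g)
    diff4 = min(ys_d) - min(ys_g)

    # Equations (5)-(6): width and height of the actual bounding box.
    diff5 = max(xs_g) - min(xs_g)
    diff6 = max(ys_g) - min(ys_g)

    # Equation (7): total deviation normalized by the actual box's size.
    beta = (abs(diff1) + abs(diff2) + abs(diff3) + abs(diff4)) / (abs(diff5) + abs(diff6))
    # Equation (8): evaluation index parameter.
    alpha = 1 - beta
    return beta, alpha
```

With identical boxes beta is 0 and alpha is 1; the further the predicted extrema drift from the actual ones, the smaller alpha becomes.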
It follows that 0 < α < 1. The more the area of the actual bounding box exceeds the area of the predicted bounding box, the closer α is to 0; conversely, the more the area of the actual bounding box falls below the area of the predicted bounding box, the closer α is to 1.
It is worth noting that the greater the value of the intersection ratio IOU of the predicted bounding box and the actual bounding box, the higher the accuracy of the current deep learning model. However, when the IOU is small (e.g., below a set value such as 0.5), the safety of the vehicle can still be ensured if the area of the predicted bounding box is greater than or equal to the area of the actual bounding box; conversely, if the predicted bounding box is smaller than the actual bounding box, the safety of the vehicle may not be ensured. Therefore, in this embodiment, the deep learning model may be evaluated by the magnitude of the evaluation index parameter α: the closer α is to 0, the lower the safety of the results predicted by the current deep learning model; the closer α is to 1, the higher the safety of the predicted results. This improves the rationality and accuracy of model evaluation.
In another embodiment, for the manner of determining the evaluation index parameter based on the ratio, reference may also be made to the embodiment shown in fig. 4 below, which will not be detailed here.
As is apparent from the above description, in this embodiment, the first to fourth differences between the corresponding abscissa and ordinate extrema of the first and second vertices are determined, the fifth and sixth differences between the abscissa and ordinate extrema of the second vertices are calculated, the ratio of the sum of the absolute values of the first four differences to the sum of the absolute values of the fifth and sixth differences is determined, and the evaluation index parameter is then determined based on this ratio. This allows the evaluation index parameter to be determined accurately, enables the subsequent evaluation of the deep learning model based on that parameter, enriches the ways of evaluating the currently trained deep learning model, realizes accurate evaluation, and ensures the accuracy of the trained model.
Fig. 4 shows a schematic diagram of how the evaluation index parameter is determined based on the ratio according to the invention. On the basis of the above embodiments, this embodiment illustrates how that determination is made. As shown in fig. 4, determining the evaluation index parameter based on the ratio in step S304 may include the following steps S401-S403:
in step S401, an intersection ratio of the prediction bounding box and the actual bounding box is acquired.
In this embodiment, after determining the prediction bounding box and the actual bounding box of the target object, the intersection ratio of the prediction bounding box and the actual bounding box may be obtained.
Specifically, the intersection ratio of the predicted bounding box and the actual bounding box may be expressed by the following formula (9):
IOU=(A∩B)/(A∪B) (9)
where IOU represents the intersection ratio of the prediction boundary box and the actual boundary box, A represents the prediction boundary box, and B represents the actual boundary box.
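For axis-aligned boxes, the intersection ratio in formula (9) can be computed directly from corner coordinates. A minimal sketch, assuming boxes given as (x_min, y_min, x_max, y_max); the function name is illustrative:

```python
def iou(box_a, box_b):
    """Intersection over union (formula (9)) of two axis-aligned
    boxes given as (x_min, y_min, x_max, y_max)."""
    ix = max(0.0, min(box_a[2], box_b[2]) - max(box_a[0], box_b[0]))
    iy = max(0.0, min(box_a[3], box_b[3]) - max(box_a[1], box_b[1]))
    inter = ix * iy  # area of A ∩ B
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter  # area of A ∪ B
    return inter / union if union else 0.0
```

Clamping the intersection width and height at zero handles disjoint boxes, for which the IOU is 0.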
In step S402, an improved intersection ratio is determined based on the product of the ratio and the intersection ratio.
In this embodiment, after the intersection ratio of the predicted bounding box and the actual bounding box is obtained, an improved intersection ratio may be determined based on the product of the ratio and the intersection ratio.
Specifically, the above-mentioned improved intersection ratio can be expressed by the following formula (10):

IOU_improved = β·IOU (10)
In step S403, the evaluation index parameter is determined based on the improvement cross ratio.
It will be appreciated that the greater the value of the intersection ratio IOU of the predicted bounding box and the actual bounding box, the higher the accuracy of the current deep learning model. However, when the IOU is small (e.g., below a set value such as 0.5), the safety of the vehicle can still be ensured if the area of the predicted bounding box is greater than or equal to the area of the actual bounding box; conversely, if the predicted bounding box is smaller than the actual bounding box, the safety of the vehicle may not be ensured. Therefore, in this embodiment, the deep learning model is evaluated by combining the intersection ratio and the ratio β: the larger the value of the improved intersection ratio IOU_improved, the higher the accuracy of the current deep learning model and the higher the safety of its predicted results; conversely, the lower the accuracy of the current deep learning model, the lower the safety of the predicted results.
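Combining formulas (7), (9), and (10), the improved intersection ratio can be sketched as below, assuming corner-format boxes (x_min, y_min, x_max, y_max); the function and variable names are illustrative, not from the patent:

```python
def improved_iou(pred, actual):
    """Sketch of formula (10): IOU_improved = beta * IOU, for boxes
    given as (x_min, y_min, x_max, y_max)."""
    # Ratio beta from equation (7), using the box extrema directly.
    num = (abs(pred[2] - actual[2]) + abs(pred[0] - actual[0]) +
           abs(pred[3] - actual[3]) + abs(pred[1] - actual[1]))
    den = (actual[2] - actual[0]) + (actual[3] - actual[1])
    beta = num / den

    # Plain intersection over union from formula (9).
    ix = max(0.0, min(pred[2], actual[2]) - max(pred[0], actual[0]))
    iy = max(0.0, min(pred[3], actual[3]) - max(pred[1], actual[1]))
    inter = ix * iy
    union = ((pred[2] - pred[0]) * (pred[3] - pred[1]) +
             (actual[2] - actual[0]) * (actual[3] - actual[1]) - inter)
    iou = inter / union if union else 0.0

    # Formula (10): product of the ratio and the intersection ratio.
    return beta * iou
```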
As can be seen from the foregoing description, in this embodiment, by obtaining the intersection ratio of the prediction bounding box and the actual bounding box, determining an improved intersection ratio based on the product of the ratio and the intersection ratio, and determining the evaluation index parameter based on the improved intersection ratio, the evaluation index parameter can be determined accurately, so that the deep learning model can subsequently be evaluated based on the evaluation index parameter, the evaluation manners for the currently trained deep learning model are enriched, the deep learning model can be evaluated accurately, and the accuracy of the trained deep learning model is ensured.
FIG. 5 shows a block diagram of a deep learning model evaluation apparatus according to an exemplary embodiment of the present invention; the device of the embodiment can be applied to a server (for example, a server or a server cluster formed by a plurality of servers). As shown in fig. 5, the apparatus includes: a vertex coordinates determination module 110, an index parameter determination module 120, and a learning model evaluation module 130, wherein:
a vertex coordinate determining module 110 for determining a first vertex coordinate of a prediction boundary box of the target object in the test image based on the deep learning model;
the index parameter determining module 120 is configured to determine an evaluation index parameter based on the first vertex coordinate and the second vertex coordinate.
Wherein the second vertex coordinates include vertex coordinates of an actual bounding box of the target object in the test image.
And a learning model evaluation module 130, configured to evaluate the deep learning model based on the evaluation index parameter.
As can be seen from the foregoing description, in this embodiment, by determining, based on a deep learning model, first vertex coordinates of a prediction bounding box of a target object in a test image, determining an evaluation index parameter based on the first vertex coordinates and second vertex coordinates, where the second vertex coordinates include the vertex coordinates of an actual bounding box of the target object in the test image, and further evaluating the deep learning model based on the evaluation index parameter, the evaluation manners for the currently trained deep learning model are enriched, the deep learning model can be evaluated accurately, and the accuracy of the trained deep learning model is ensured.
FIG. 6 shows a block diagram of a deep learning model evaluation apparatus according to another exemplary embodiment of the present invention; the device of the embodiment can be applied to a server (for example, a server or a server cluster formed by a plurality of servers). The functions of the vertex coordinate determining module 210, the index parameter determining module 220, and the learning model evaluating module 230 are the same as those of the vertex coordinate determining module 110, the index parameter determining module 120, and the learning model evaluating module 130 in the embodiment shown in fig. 5, and are not described in detail herein.
As shown in fig. 6, the index parameter determining module 220 may also be configured to determine the evaluation index parameter based on a difference between the first vertex coordinates and the second vertex coordinates.
In an embodiment, the index parameter determining module 220 may include:
a difference determining unit 221 for determining a first difference between the maximum values of the abscissas of the first vertex and the second vertex, a second difference between the minimum values of the abscissas, a third difference between the maximum values of the ordinates, and a fourth difference between the minimum values of the ordinates;
a difference calculating unit 222, configured to calculate a fifth difference between a maximum value and a minimum value of an abscissa of the second vertex and a sixth difference between a maximum value and a minimum value of an ordinate;
a ratio determining unit 223 for determining a ratio of a sum of absolute values of the first difference, the second difference, the third difference, and the fourth difference to a sum of absolute values of the fifth difference and the sixth difference;
a parameter determination unit 224 for determining the evaluation index parameter based on the ratio.
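The four units above, taken together, can be sketched as a single function. This is a sketch under the assumption that boxes are (x_min, y_min, x_max, y_max) tuples and that each difference is taken as predicted minus actual; the function name is illustrative:

```python
def vertex_difference_ratio(pred, actual):
    """Ratio computed by units 221-223 (Fig. 6).

    pred, actual: (x_min, y_min, x_max, y_max) tuples for the predicted
    and actual bounding boxes (representation assumed).
    """
    d1 = pred[2] - actual[2]    # first difference: maximum abscissas
    d2 = pred[0] - actual[0]    # second difference: minimum abscissas
    d3 = pred[3] - actual[3]    # third difference: maximum ordinates
    d4 = pred[1] - actual[1]    # fourth difference: minimum ordinates
    d5 = actual[2] - actual[0]  # fifth difference: actual box width
    d6 = actual[3] - actual[1]  # sixth difference: actual box height
    return (abs(d1) + abs(d2) + abs(d3) + abs(d4)) / (abs(d5) + abs(d6))
```

Per claim 2, one evaluation index parameter may then be obtained as the difference between 1 and this ratio, so that a perfect match (ratio 0) yields the maximum parameter value.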
In an embodiment, the parameter determining unit 224 may be further configured to determine the evaluation index parameter based on a difference between 1 and the ratio.
In another embodiment, the parameter determination unit 224 may be further configured to:
acquiring the intersection ratio of the prediction bounding box and the actual bounding box;
determining an improved intersection ratio based on a product of the ratio and the intersection ratio;
and determining the evaluation index parameter based on the improved intersection ratio.
For the device embodiments, since they substantially correspond to the method embodiments, reference may be made to the description of the method embodiments for the relevant points. The device embodiments described above are merely illustrative: the units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purposes of the present invention. Those of ordinary skill in the art will understand and implement the present invention without undue burden.
The embodiment of the evaluation device of the deep learning model can be applied to network equipment. The apparatus embodiments may be implemented by software, or may be implemented by hardware or a combination of hardware and software. Taking a software implementation as an example, the device in a logic sense is formed by reading corresponding computer program instructions in a nonvolatile memory into a memory by a processor of a device where the device is located for operation. From the hardware level, as shown in fig. 7, a hardware structure diagram of an electronic device where the evaluation device of the deep learning model of the present invention is located is shown, where in addition to the processor, the network interface, the memory and the nonvolatile memory shown in fig. 7, the device where the device is located may generally include other hardware, such as a forwarding chip responsible for processing a message, etc.; the device may also be a distributed device in terms of hardware architecture, possibly comprising a plurality of interface cards, for the extension of the message processing at the hardware level.
The embodiment of the invention also provides a computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the following method:
determining first vertex coordinates of a prediction boundary box of a target object in the test image based on the deep learning model;
determining an evaluation index parameter based on the first vertex coordinates and second vertex coordinates, wherein the second vertex coordinates comprise vertex coordinates of an actual bounding box of the target object in the test image;
and evaluating the deep learning model based on the evaluation index parameter.
Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This invention is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the invention and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims.
It is to be understood that the invention is not limited to the precise arrangements and instrumentalities shown in the drawings, which have been described above, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the invention is limited only by the appended claims.

Claims (8)

1. A method for evaluating a deep learning model, comprising:
determining first vertex coordinates of a prediction boundary box of a target object in the test image based on the deep learning model;
determining an evaluation index parameter based on the first vertex coordinates and second vertex coordinates, wherein the second vertex coordinates comprise vertex coordinates of an actual bounding box of the target object in the test image;
evaluating the deep learning model based on the evaluation index parameter;
the determining the evaluation index parameter based on the first vertex coordinate and the second vertex coordinate includes:
determining the evaluation index parameter based on a difference between the first vertex coordinates and the second vertex coordinates; the determining the evaluation index parameter based on the difference between the first vertex coordinate and the second vertex coordinate includes:
determining a first difference between the maximum values of the abscissas of the first vertex and the second vertex, a second difference between the minimum values of the abscissas, a third difference between the maximum values of the ordinates, and a fourth difference between the minimum values of the ordinates;
calculating a fifth difference value between the maximum value and the minimum value of the abscissa of the second vertex and a sixth difference value between the maximum value and the minimum value of the ordinate;
determining a ratio of a sum of absolute values of the first, second, third, and fourth differences to a sum of absolute values of the fifth and sixth differences;
the evaluation index parameter is determined based on the ratio.
2. The method of claim 1, wherein the determining the evaluation index parameter based on the ratio comprises:
the evaluation index parameter is determined based on the difference of 1 and the ratio.
3. The method of claim 1, wherein the determining the evaluation index parameter based on the ratio comprises:
acquiring the intersection ratio of the prediction bounding box and the actual bounding box;
determining an improved intersection ratio based on a product of the ratio and the intersection ratio;
and determining the evaluation index parameter based on the improved intersection ratio.
4. An evaluation device for a deep learning model, comprising:
the vertex coordinate determining module is used for determining first vertex coordinates of a prediction boundary frame of the target object in the test image based on the deep learning model;
the index parameter determining module is used for determining an evaluation index parameter based on the first vertex coordinates and the second vertex coordinates, wherein the second vertex coordinates comprise vertex coordinates of an actual boundary frame of the target object in the test image;
the learning model evaluation module is used for evaluating the deep learning model based on the evaluation index parameters;
the index parameter determination module is further configured to determine the evaluation index parameter based on a difference between the first vertex coordinate and the second vertex coordinate;
the index parameter determining module comprises:
a difference determining unit configured to determine a first difference between maximum values of abscissas of the first vertex and the second vertex, a second difference between minimum values of abscissas, a third difference between maximum values of ordinates, and a fourth difference between minimum values of ordinates;
a difference calculating unit, configured to calculate a fifth difference between a maximum value and a minimum value of an abscissa of the second vertex and a sixth difference between a maximum value and a minimum value of an ordinate;
a ratio determining unit configured to determine a ratio of a sum of absolute values of the first difference, the second difference, the third difference, and the fourth difference to a sum of absolute values of the fifth difference and the sixth difference;
and a parameter determining unit for determining the evaluation index parameter based on the ratio.
5. The apparatus of claim 4, wherein the parameter determination unit is further configured to determine the evaluation index parameter based on a difference between 1 and the ratio.
6. The apparatus of claim 4, wherein the parameter determination unit is further configured to:
acquiring the intersection ratio of the prediction bounding box and the actual bounding box;
determining an improved intersection ratio based on a product of the ratio and the intersection ratio;
and determining the evaluation index parameter based on the improved intersection ratio.
7. An electronic device, the electronic device comprising:
a processor;
a memory configured to store processor-executable instructions;
wherein the processor is configured to:
determining first vertex coordinates of a prediction boundary box of a target object in the test image based on the deep learning model;
determining an evaluation index parameter based on the first vertex coordinates and second vertex coordinates, wherein the second vertex coordinates comprise vertex coordinates of an actual bounding box of the target object in the test image;
evaluating the deep learning model based on the evaluation index parameter;
the determining the evaluation index parameter based on the first vertex coordinate and the second vertex coordinate includes:
determining the evaluation index parameter based on a difference between the first vertex coordinates and the second vertex coordinates;
the determining the evaluation index parameter based on the difference between the first vertex coordinate and the second vertex coordinate includes:
determining a first difference between the maximum values of the abscissas of the first vertex and the second vertex, a second difference between the minimum values of the abscissas, a third difference between the maximum values of the ordinates, and a fourth difference between the minimum values of the ordinates;
calculating a fifth difference value between the maximum value and the minimum value of the abscissa of the second vertex and a sixth difference value between the maximum value and the minimum value of the ordinate;
determining a ratio of a sum of absolute values of the first, second, third, and fourth differences to a sum of absolute values of the fifth and sixth differences;
the evaluation index parameter is determined based on the ratio.
8. A computer-readable storage medium having stored thereon a computer program, characterized in that the program, when executed by a processor, implements:
determining first vertex coordinates of a prediction boundary box of a target object in the test image based on the deep learning model;
determining an evaluation index parameter based on the first vertex coordinates and second vertex coordinates, wherein the second vertex coordinates comprise vertex coordinates of an actual bounding box of the target object in the test image;
evaluating the deep learning model based on the evaluation index parameter;
the determining the evaluation index parameter based on the first vertex coordinate and the second vertex coordinate includes:
determining the evaluation index parameter based on a difference between the first vertex coordinates and the second vertex coordinates;
the determining the evaluation index parameter based on the difference between the first vertex coordinate and the second vertex coordinate includes:
determining a first difference between the maximum values of the abscissas of the first vertex and the second vertex, a second difference between the minimum values of the abscissas, a third difference between the maximum values of the ordinates, and a fourth difference between the minimum values of the ordinates;
calculating a fifth difference value between the maximum value and the minimum value of the abscissa of the second vertex and a sixth difference value between the maximum value and the minimum value of the ordinate;
determining a ratio of a sum of absolute values of the first, second, third, and fourth differences to a sum of absolute values of the fifth and sixth differences;
the evaluation index parameter is determined based on the ratio.
CN202010549757.1A 2020-06-16 2020-06-16 Evaluation method and device of deep learning model, electronic equipment and storage medium Active CN111797993B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010549757.1A CN111797993B (en) 2020-06-16 2020-06-16 Evaluation method and device of deep learning model, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111797993A CN111797993A (en) 2020-10-20
CN111797993B true CN111797993B (en) 2024-02-27

Family

ID=72803043


Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112184700B (en) * 2020-10-21 2022-03-18 西北民族大学 Monocular camera-based agricultural unmanned vehicle obstacle sensing method and device
CN113642521B (en) * 2021-09-01 2024-02-09 东软睿驰汽车技术(沈阳)有限公司 Traffic light identification quality evaluation method and device and electronic equipment

Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0814860A (en) * 1994-06-30 1996-01-19 Toshiba Corp Model creating device
KR20160131621A (en) * 2015-05-08 2016-11-16 (주)케이사인 Parallel processing system
WO2017101292A1 (en) * 2015-12-16 2017-06-22 深圳市汇顶科技股份有限公司 Autofocusing method, device and system
CN108805093A (en) * 2018-06-19 2018-11-13 华南理工大学 Escalator passenger based on deep learning falls down detection algorithm
CN109523503A (en) * 2018-09-11 2019-03-26 北京三快在线科技有限公司 A kind of method and apparatus of image cropping
CN109919072A (en) * 2019-02-28 2019-06-21 桂林电子科技大学 Fine vehicle type recognition and flow statistics method based on deep learning and trajectory tracking
CN110097091A (en) * 2019-04-10 2019-08-06 东南大学 Fine-grained image recognition method under inconsistent distributions of training and inference data
CN110263939A (en) * 2019-06-24 2019-09-20 腾讯科技(深圳)有限公司 Evaluation method, apparatus, device and medium for a representation learning model
CN110298298A (en) * 2019-06-26 2019-10-01 北京市商汤科技开发有限公司 Target detection method, and training method, apparatus and device for a target detection network
CN110503095A (en) * 2019-08-27 2019-11-26 中国人民公安大学 Alignment quality evaluation method, localization method and the equipment of target detection model
CN110598751A (en) * 2019-08-14 2019-12-20 安徽师范大学 Anchor point generating method based on geometric attributes
CN110688994A (en) * 2019-12-10 2020-01-14 南京甄视智能科技有限公司 Human face detection method and device based on cross-over ratio and multi-model fusion and computer readable storage medium
CN110765951A (en) * 2019-10-24 2020-02-07 西安电子科技大学 Remote sensing image airplane target detection method based on bounding box correction algorithm
CN110796168A (en) * 2019-09-26 2020-02-14 江苏大学 Improved YOLOv 3-based vehicle detection method
CN111160527A (en) * 2019-12-27 2020-05-15 歌尔股份有限公司 Target identification method and device based on MASK RCNN network model

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Distance-IoU Loss: Faster and Better Learning for Bounding Box Regression; Zhaohui Zheng et al.; arXiv:1911.08287; 1-8 *
Vehicle information detection based on improved RetinaNet (基于RetinaNet改进的车辆信息检测); Liu Ge et al.; Journal of Computer Applications (计算机应用); Vol. 40, No. 03; 854-858 *
Research on recognition algorithms for license plates with large tilt angles (大角度倾斜的车牌识别算法研究); Zhou Wenting; China Master's Theses Full-text Database, Engineering Science and Technology II (中国优秀硕士学位论文全文数据库 (工程科技Ⅱ辑)); No. 2019(07); C034-337 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant