CN111797993A - Evaluation method and device for deep learning model, electronic equipment and storage medium - Google Patents


Publication number
CN111797993A
Authority
CN
China
Prior art keywords
difference
determining
vertex coordinates
learning model
vertex
Prior art date
Legal status
Granted
Application number
CN202010549757.1A
Other languages
Chinese (zh)
Other versions
CN111797993B (en)
Inventor
苏英菲 (Su Yingfei)
Current Assignee
Neusoft Reach Automotive Technology Shenyang Co Ltd
Original Assignee
Neusoft Reach Automotive Technology Shenyang Co Ltd
Priority date
Filing date
Publication date
Application filed by Neusoft Reach Automotive Technology Shenyang Co Ltd
Priority to CN202010549757.1A
Publication of CN111797993A
Application granted
Publication of CN111797993B
Legal status: Active

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00 - Machine learning

Abstract

The invention discloses an evaluation method and apparatus for a deep learning model, an electronic device, and a storage medium. The method comprises: determining first vertex coordinates of a predicted bounding box of a target object in a test image based on a deep learning model; determining an evaluation index parameter based on the first vertex coordinates and second vertex coordinates, where the second vertex coordinates are the vertex coordinates of the actual bounding box of the target object in the test image; and evaluating the deep learning model based on the evaluation index parameter. The method enriches the ways in which a trained deep learning model can be evaluated, enables accurate evaluation of the model, and helps ensure the accuracy of the trained model.

Description

Evaluation method and device for deep learning model, electronic equipment and storage medium
Technical Field
The invention relates to the technical field of automatic driving, in particular to an evaluation method and device of a deep learning model, electronic equipment and a storage medium.
Background
The overall pipeline of deep learning model training comprises: acquiring sample images, preprocessing them, inputting the preprocessed images into an existing model (such as a YOLO model) for training, evaluating the trained model, and deciding whether to continue or finish training based on the evaluation result.
The model evaluation approach adopted in the related art relies on a single metric and cannot accurately evaluate and train the deep learning model, which affects the accuracy of the trained model.
Disclosure of Invention
In view of the above, the present invention provides an evaluation method and apparatus for a deep learning model, an electronic device, and a storage medium to solve the above technical problems.
To this end, the technical solution adopted by the invention is as follows:
according to a first aspect of the embodiments of the present invention, there is provided an evaluation method of a deep learning model, including:
determining first vertex coordinates of a prediction bounding box of a target object in a test image based on a deep learning model;
determining an evaluation index parameter based on the first vertex coordinates and second vertex coordinates, the second vertex coordinates including vertex coordinates of an actual bounding box of the target object in the test image;
and evaluating the deep learning model based on the evaluation index parameters.
In one embodiment, the determining an evaluation index parameter based on the first vertex coordinates and the second vertex coordinates includes:
determining the evaluation index parameter based on a difference between the first vertex coordinates and the second vertex coordinates.
In one embodiment, the determining the evaluation index parameter based on the difference between the first vertex coordinate and the second vertex coordinate includes:
determining a first difference between the maxima of the abscissas, a second difference between the minima of the abscissas, a third difference between the maxima of the ordinates and a fourth difference between the minima of the ordinates of the first vertex and the second vertex;
calculating a fifth difference value between the maximum value and the minimum value of the abscissa and a sixth difference value between the maximum value and the minimum value of the ordinate of the second vertex;
determining a ratio of a sum of absolute values of the first difference, the second difference, the third difference, and the fourth difference to a sum of absolute values of the fifth difference and the sixth difference;
determining the evaluation index parameter based on the ratio.
In an embodiment, the determining the evaluation index parameter based on the ratio includes:
determining the evaluation index parameter based on a difference between 1 and the ratio.
In an embodiment, the determining the evaluation index parameter based on the ratio includes:
acquiring the intersection ratio of the prediction boundary box and the actual boundary box;
determining an improved intersection ratio based on a product of the ratio and the intersection ratio;
determining the evaluation index parameter based on the improved intersection ratio.
According to a second aspect of the embodiments of the present invention, there is provided an evaluation apparatus of a deep learning model, including:
the vertex coordinate determination module is used for determining first vertex coordinates of a prediction boundary box of a target object in the test image based on the deep learning model;
an index parameter determination module, configured to determine an evaluation index parameter based on the first vertex coordinates and second vertex coordinates, where the second vertex coordinates include vertex coordinates of an actual bounding box of the target object in the test image;
and the learning model evaluation module is used for evaluating the deep learning model based on the evaluation index parameters.
In an embodiment, the index parameter determination module is further configured to determine the evaluation index parameter based on a difference between the first vertex coordinates and the second vertex coordinates.
In one embodiment, the metric parameter determination module includes:
a difference determining unit for determining a first difference between the maximum abscissa values of the first vertex and the second vertex, a second difference between their minimum abscissa values, a third difference between their maximum ordinate values, and a fourth difference between their minimum ordinate values;
a difference calculation unit configured to calculate a fifth difference between a maximum value and a minimum value of the abscissa of the second vertex, and a sixth difference between a maximum value and a minimum value of the ordinate;
a ratio determination unit configured to determine a ratio of a sum of absolute values of the first difference, the second difference, the third difference, and the fourth difference to a sum of absolute values of the fifth difference and the sixth difference;
a parameter determination unit configured to determine the evaluation index parameter based on the ratio.
In an embodiment, the parameter determining unit is further configured to determine the evaluation index parameter based on a difference between 1 and the ratio.
In an embodiment, the parameter determining unit is further configured to:
acquiring the intersection ratio of the prediction boundary box and the actual boundary box;
determining an improved intersection ratio based on a product of the ratio and the intersection ratio;
determining the evaluation index parameter based on the improved intersection ratio.
According to a third aspect of embodiments of the present invention, there is provided an electronic apparatus, including:
a processor;
a memory configured to store processor-executable instructions;
wherein the processor is configured to:
determining first vertex coordinates of a prediction bounding box of a target object in a test image based on a deep learning model;
determining an evaluation index parameter based on the first vertex coordinates and second vertex coordinates, the second vertex coordinates including vertex coordinates of an actual bounding box of the target object in the test image;
and evaluating the deep learning model based on the evaluation index parameters.
According to a fourth aspect of embodiments of the present invention, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements:
determining first vertex coordinates of a prediction bounding box of a target object in a test image based on a deep learning model;
determining an evaluation index parameter based on the first vertex coordinates and second vertex coordinates, the second vertex coordinates including vertex coordinates of an actual bounding box of the target object in the test image;
and evaluating the deep learning model based on the evaluation index parameters.
Compared with the prior art, the evaluation method of the deep learning model determines the first vertex coordinates of the predicted bounding box of the target object in the test image based on the deep learning model, determines the evaluation index parameter based on the first vertex coordinates and the second vertex coordinates (the second vertex coordinates being the vertex coordinates of the actual bounding box of the target object in the test image), and then evaluates the deep learning model based on the evaluation index parameter. This enriches the ways in which the currently trained deep learning model can be evaluated, enables its accurate evaluation, and helps ensure the accuracy of the trained model.
Drawings
FIG. 1 shows a flow diagram of a method of evaluating a deep learning model according to an exemplary embodiment of the invention;
FIG. 2 shows a flow chart of a method of evaluation of a deep learning model according to a further exemplary embodiment of the present invention;
FIG. 3A shows a schematic diagram of how the evaluation index parameter is determined based on the difference of the first vertex coordinates and the second vertex coordinates, according to the present invention;
FIG. 3B is a schematic diagram illustrating a predicted bounding box and an actual bounding box of a target object in accordance with the present invention;
FIG. 4 shows a schematic diagram of how the evaluation index parameter is determined based on the ratio according to the present invention;
FIG. 5 is a block diagram illustrating an evaluation apparatus of a deep learning model according to an exemplary embodiment of the present invention;
FIG. 6 is a block diagram showing the configuration of an evaluation apparatus of a deep learning model according to another exemplary embodiment of the present invention;
FIG. 7 shows a block diagram of an electronic device according to an exemplary embodiment of the present invention.
Detailed Description
The invention will be described in detail hereinafter with reference to specific embodiments shown in the drawings. These embodiments are not intended to limit the present invention, and structural, methodological, or functional changes made by those of ordinary skill in the art in light of these embodiments are intended to be within the scope of the present invention.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in this specification and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It is to be understood that although the terms actual, predicted, etc. may be used herein to describe various structures, these structures should not be limited by these terms. These terms are only used to distinguish one type of structure from another.
FIG. 1 shows a flowchart of an evaluation method of a deep learning model according to an exemplary embodiment of the present invention. The method of this embodiment may be applied to a server (e.g., a single server or a server cluster composed of multiple servers). As shown in FIG. 1, the method comprises the following steps S101-S103:
in step S101, first vertex coordinates of a predicted bounding box of a target object in a test image are determined based on a deep learning model.
In this embodiment, the server may determine the first vertex coordinates of the predicted bounding box of the target object in the test image based on the deep learning model.
For example, after obtaining a test image for testing a deep learning model, the test image may be input to the deep learning model to predict a predicted bounding box (e.g., a first bounding box) of the target object in the test image, and vertex coordinates (i.e., first vertex coordinates) of the bounding box may be obtained.
In this embodiment, the deep learning model may be a deep learning model that is trained by using a training sample image and a set model training method in advance. It should be noted that the set model training method may be set based on actual business needs, which is not limited in this embodiment.
It will be appreciated that the above test images match the use of the deep learning model. For example, if the purpose of the deep learning model is to realize object recognition in the surrounding environment of the vehicle in the field of automatic driving, the test image may include an image of the surrounding environment of the vehicle, and the target object may be an object such as a vehicle or an obstacle in the image.
In an alternative embodiment, the test image may be captured by a monocular camera or the like mounted at a set position on the vehicle. The type of the test image may be set by a developer according to actual needs, which is not limited in this embodiment.
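The patent does not fix the detector's output format. As a hedged illustration only, YOLO-style detectors commonly emit a box as (center_x, center_y, width, height); such a prediction can be converted to the four vertex coordinates used in the following steps. The function name, box format, and vertex ordering (matching the D1-D4 convention of FIG. 3B's description) are assumptions:

```python
def box_to_vertices(cx, cy, w, h):
    """Convert a center/size box (a common YOLO-style output format)
    to four corner vertices.

    Returns vertices ordered (top-left, top-right, bottom-left,
    bottom-right), matching the D1..D4 convention of FIG. 3B, where
    D1 = (x_min, y_min) and D4 = (x_max, y_max).
    """
    x_min, x_max = cx - w / 2.0, cx + w / 2.0
    y_min, y_max = cy - h / 2.0, cy + h / 2.0
    return [(x_min, y_min), (x_max, y_min), (x_min, y_max), (x_max, y_max)]
```
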
In step S102, an evaluation index parameter is determined based on the first vertex coordinates and the second vertex coordinates.
In this embodiment, after determining the first vertex coordinates of the predicted bounding box of the target object in the test image based on the deep learning model, the evaluation index parameter may be determined based on the first vertex coordinates and the second vertex coordinates, where the second vertex coordinates include the vertex coordinates of the actual bounding box of the target object in the test image.
For example, after obtaining a test image for testing the deep learning model, an actual bounding box (e.g., a second bounding box) of the target object may be labeled in the test image by manual or automatic labeling, and the vertex coordinates of that bounding box (i.e., the second vertex coordinates) may then be obtained.
It should be noted that, for the manner of labeling the actual bounding box of the target object in the test image, reference may be made to explanations in the related art; this embodiment does not limit the specific labeling manner.
Further, after the first vertex coordinates and the second vertex coordinates are determined, the evaluation index parameter may be determined based on the first vertex coordinates and the second vertex coordinates.
In another embodiment, the above-mentioned manner of determining the evaluation index parameter based on the first vertex coordinates and the second vertex coordinates may be referred to the following embodiment shown in fig. 2, and will not be described in detail herein.
In step S103, the deep learning model is evaluated based on the evaluation index parameter.
In this embodiment, after determining an evaluation index parameter based on the first vertex coordinate and the second vertex coordinate, the deep learning model may be evaluated based on the evaluation index parameter.
For example, after determining an evaluation index parameter based on the first vertex coordinates and the second vertex coordinates, the evaluation index parameter may be compared with a set threshold, and the deep learning model may be evaluated based on the obtained comparison result.
It should be noted that the size of the set threshold may be set by a developer according to actual business needs, which is not limited in this embodiment.
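The threshold comparison described above can be sketched minimally as follows; the threshold value 0.5 and the pass/fail labels are placeholders, since the patent leaves the threshold to the developer's actual business needs:

```python
def evaluate_model(alpha, threshold=0.5):
    """Compare the evaluation index parameter alpha against a set
    threshold and return a coarse verdict.

    The default threshold of 0.5 and the "pass"/"fail" labels are
    illustrative assumptions, not values mandated by the text.
    """
    return "pass" if alpha >= threshold else "fail"
```
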
In another embodiment, the above-mentioned evaluation of the deep learning model based on the evaluation index parameter can also refer to the following embodiment shown in fig. 3A, which is not described in detail herein.
According to this technical scheme, the first vertex coordinates of the predicted bounding box of the target object in the test image are determined based on the deep learning model; the evaluation index parameter is determined based on the first vertex coordinates and the second vertex coordinates, where the second vertex coordinates are the vertex coordinates of the actual bounding box of the target object in the test image; and the deep learning model is evaluated based on the evaluation index parameter. This enriches the ways in which the currently trained deep learning model can be evaluated, enables its accurate evaluation, and helps ensure the accuracy of the trained model.
FIG. 2 shows a flowchart of an evaluation method of a deep learning model according to a further exemplary embodiment of the present invention. The method of this embodiment may be applied to a server (e.g., a single server or a server cluster composed of multiple servers). As shown in FIG. 2, the method comprises the following steps S201-S203:
in step S201, first vertex coordinates of a predicted bounding box of a target object in a test image are determined based on a deep learning model.
In step S202, the evaluation index parameter is determined based on a difference between the first vertex coordinates and the second vertex coordinates.
In this embodiment, after determining the first vertex coordinates of the predicted bounding box of the target object in the test image based on the deep learning model, the evaluation index parameter may be determined based on a difference between the first vertex coordinates and the second vertex coordinates, where the second vertex coordinates include the vertex coordinates of the actual bounding box of the target object in the test image.
For example, after obtaining a test image for testing the deep learning model, an actual bounding box (e.g., a second bounding box) of the target object may be labeled in the test image by manual or automatic labeling, and the vertex coordinates of that bounding box (i.e., the second vertex coordinates) may then be obtained.
It should be noted that, for the manner of labeling the actual bounding box of the target object in the test image, reference may be made to explanations in the related art; this embodiment does not limit the specific labeling manner.
Further, after the first vertex coordinates and the second vertex coordinates are determined, the evaluation index parameter may be determined based on a difference between the first vertex coordinates and the second vertex coordinates.
In another embodiment, the manner of determining the evaluation index parameter based on the difference between the first vertex coordinate and the second vertex coordinate may be referred to the following embodiment shown in fig. 3A, and will not be described in detail here.
In step S203, the deep learning model is evaluated based on the evaluation index parameter.
For explanations of steps S201 and S203, reference may be made to the above embodiments, which are not repeated here.
According to this technical scheme, the first vertex coordinates of the predicted bounding box of the target object in the test image are determined based on the deep learning model; the evaluation index parameter is determined based on the difference between the first vertex coordinates and the second vertex coordinates, where the second vertex coordinates are the vertex coordinates of the actual bounding box of the target object in the test image; and the deep learning model is evaluated based on the evaluation index parameter. This enriches the ways in which the currently trained deep learning model can be evaluated, enables its accurate evaluation, and helps ensure the accuracy of the trained model.
FIG. 3A shows a schematic diagram of how the evaluation index parameter is determined based on the difference of the first vertex coordinates and the second vertex coordinates, according to the present invention; the present embodiment exemplifies how to determine the evaluation index parameter based on the difference between the first vertex coordinates and the second vertex coordinates on the basis of the above embodiments. As shown in fig. 3A, the determining the evaluation index parameter based on the difference between the first vertex coordinate and the second vertex coordinate in step S202 may include the following steps S301 to S304:
in step S301, a first difference between the maximum values of the abscissas, a second difference between the minimum values of the abscissas, a third difference between the maximum values of the ordinates and a fourth difference between the minimum values of the ordinates of the first vertex and the second vertex are determined.
In this embodiment, after determining the first vertex coordinate and the second vertex coordinate, a first difference between maximum values of abscissas of the first vertex and the second vertex, a second difference between minimum values of abscissas, a third difference between maximum values of ordinates, and a fourth difference between minimum values of ordinates may be determined.
For example, FIG. 3B shows a schematic diagram of a predicted bounding box and an actual bounding box of a target object according to the present invention. As shown in FIG. 3B, once the predicted bounding box (the solid-line box 100) and the actual bounding box (the dashed-line box 200) of the target object are determined, the vertex coordinates D1(x_d1, y_d1), D2(x_d2, y_d2), D3(x_d3, y_d3) and D4(x_d4, y_d4) of the predicted bounding box, and the vertex coordinates G1(x_g1, y_g1), G2(x_g2, y_g2), G3(x_g3, y_g3) and G4(x_g4, y_g4) of the actual bounding box, can be obtained.
As can be seen from FIG. 3B: x_d1 = x_d3 = x_dmin, x_d2 = x_d4 = x_dmax, x_g1 = x_g3 = x_gmin, x_g2 = x_g4 = x_gmax; y_d1 = y_d2 = y_dmin, y_d3 = y_d4 = y_dmax, y_g1 = y_g2 = y_gmin, y_g3 = y_g4 = y_gmax.
Here, x_dmin is the minimum abscissa of the first vertices; x_dmax is the maximum abscissa of the first vertices; x_gmin is the minimum abscissa of the second vertices; x_gmax is the maximum abscissa of the second vertices; y_dmin is the minimum ordinate of the first vertices; y_dmax is the maximum ordinate of the first vertices; y_gmin is the minimum ordinate of the second vertices; y_gmax is the maximum ordinate of the second vertices.
On this basis, the first difference diff1 between the maximum abscissa values of the first vertex and the second vertex, the second difference diff2 between the minimum abscissa values, the third difference diff3 between the maximum ordinate values, and the fourth difference diff4 between the minimum ordinate values can be calculated based on the following equations (1)-(4):
diff1 = x_dmax - x_gmax;  (1)
diff2 = x_dmin - x_gmin;  (2)
diff3 = y_dmax - y_gmax;  (3)
diff4 = y_dmin - y_gmin;  (4)
In step S302, a fifth difference between the maximum value and the minimum value of the abscissa and a sixth difference between the maximum value and the minimum value of the ordinate of the second vertex are calculated.
Still referring to the embodiment shown in FIG. 3B, the fifth difference diff5 between the maximum and minimum abscissa values of the second vertex and the sixth difference diff6 between the maximum and minimum ordinate values may be calculated based on the following equations (5) and (6):
diff5 = x_gmax - x_gmin;  (5)
diff6 = y_gmax - y_gmin.  (6)
In step S303, a ratio of a sum of absolute values of the first difference, the second difference, the third difference, and the fourth difference to a sum of absolute values of the fifth difference and the sixth difference is determined.
On this basis, the ratio β of the sum of the absolute values of the first difference diff1, the second difference diff2, the third difference diff3, and the fourth difference diff4 to the sum of the absolute values of the fifth difference diff5 and the sixth difference diff6 may be determined based on the following equation (7):
β = (|diff1| + |diff2| + |diff3| + |diff4|) / (|diff5| + |diff6|);  (7)
in step S304, the evaluation index parameter is determined based on the ratio.
In this embodiment, after the ratio β is calculated, the evaluation index parameter may be determined based on the ratio.
In this embodiment, the evaluation index parameter may be determined based on a difference between 1 and the above ratio.
That is, the evaluation index parameter α is calculated based on the following equation (8):
α = 1 - β.  (8)
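The computation of equations (1)-(8) can be collected into a single routine. A minimal sketch, assuming each box is supplied as its coordinate extrema; the function name and the (x_min, x_max, y_min, y_max) tuple layout are illustrative:

```python
def evaluation_index(pred, actual):
    """Evaluation index parameter alpha of equations (1)-(8).

    pred and actual are (x_min, x_max, y_min, y_max) tuples for the
    predicted and the actual bounding box, respectively.
    """
    xd_min, xd_max, yd_min, yd_max = pred
    xg_min, xg_max, yg_min, yg_max = actual
    diff1 = xd_max - xg_max  # (1)
    diff2 = xd_min - xg_min  # (2)
    diff3 = yd_max - yg_max  # (3)
    diff4 = yd_min - yg_min  # (4)
    diff5 = xg_max - xg_min  # (5): width of the actual box
    diff6 = yg_max - yg_min  # (6): height of the actual box
    beta = (abs(diff1) + abs(diff2) + abs(diff3) + abs(diff4)) / (
        abs(diff5) + abs(diff6)
    )  # (7)
    return 1.0 - beta  # (8)
```

For identical boxes β is 0 and α is 1; the larger the total vertex mismatch relative to the actual box's width plus height, the smaller α becomes.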
It follows that 0 < α < 1: the more the area of the actual bounding box exceeds that of the predicted bounding box, the closer α is to 0; conversely, the more the area of the actual bounding box falls below that of the predicted bounding box, the closer α is to 1.
It is worth noting that the larger the intersection-over-union (IOU) value of the predicted bounding box and the actual bounding box, the higher the accuracy of the current deep learning model. However, when the IOU is small (e.g., below a set value such as 0.5), vehicle safety can still be ensured if the area of the predicted bounding box is greater than or equal to the area of the actual bounding box; conversely, if the area of the predicted bounding box is smaller than that of the actual bounding box, vehicle safety may not be ensured. Therefore, in this embodiment, the deep learning model may be evaluated according to the magnitude of the evaluation index parameter α: the closer α is to 0, the lower the safety of the results predicted by the current deep learning model; the closer α is to 1, the higher the safety of the predicted results. This improves the rationality and accuracy of the model evaluation.
In another embodiment, the above-mentioned manner for determining the evaluation index parameter based on the ratio can also be referred to the following embodiment shown in fig. 4, which is not described in detail herein.
As can be seen from the above description, in this embodiment the evaluation index parameter can be determined accurately by: determining the first difference between the maximum abscissa values of the first vertex and the second vertex, the second difference between the minimum abscissa values, the third difference between the maximum ordinate values, and the fourth difference between the minimum ordinate values; calculating the fifth difference between the maximum and minimum abscissa values of the second vertex and the sixth difference between the maximum and minimum ordinate values; determining the ratio of the sum of the absolute values of the first, second, third, and fourth differences to the sum of the absolute values of the fifth and sixth differences; and determining the evaluation index parameter based on that ratio. The deep learning model can then be evaluated based on the evaluation index parameter, which enriches the ways of evaluating the currently trained deep learning model, enables its accurate evaluation, and helps ensure the accuracy of the trained model.
Fig. 4 shows a schematic diagram of how the evaluation index parameter is determined based on the ratio according to the present invention. The present embodiment exemplifies how to determine the evaluation index parameter based on the ratio on the basis of the above-described embodiments. As shown in fig. 4, the determining the evaluation index parameter based on the ratio in step S304 may include the following steps S401 to S403:
in step S401, the intersection ratio of the predicted bounding box and the actual bounding box is acquired.
In this embodiment, after determining the predicted bounding box and the actual bounding box of the target object, the intersection ratio of the predicted bounding box and the actual bounding box may be obtained.
Specifically, the intersection ratio of the prediction bounding box and the actual bounding box can be expressed by the following formula (9):
IOU=(A∩B)/(A∪B) (9)
in the formula, IOU represents the intersection ratio between the prediction bounding box and the actual bounding box, a represents the prediction bounding box, and B represents the actual bounding box.
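For axis-aligned bounding boxes, the set operations in formula (9) reduce to interval arithmetic on the x and y extents. A minimal sketch; the (x_min, x_max, y_min, y_max) tuple layout is an assumption for illustration:

```python
def iou(box_a, box_b):
    """Plain intersection-over-union of formula (9) for axis-aligned
    boxes given as (x_min, x_max, y_min, y_max) tuples."""
    ax_min, ax_max, ay_min, ay_max = box_a
    bx_min, bx_max, by_min, by_max = box_b
    # Overlap of the x and y intervals; zero when the boxes do not intersect.
    inter_w = max(0.0, min(ax_max, bx_max) - max(ax_min, bx_min))
    inter_h = max(0.0, min(ay_max, by_max) - max(ay_min, by_min))
    inter = inter_w * inter_h
    area_a = (ax_max - ax_min) * (ay_max - ay_min)
    area_b = (bx_max - bx_min) * (by_max - by_min)
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```
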
In step S402, an improved intersection ratio is determined based on a product of the ratio and the intersection ratio.
In this embodiment, after obtaining the intersection ratio between the prediction bounding box and the actual bounding box, an improved intersection ratio may be determined based on a product of the ratio and the intersection ratio.
Specifically, the above improved intersection ratio can be expressed by the following formula (10):
IOU_improved = β·IOU  (10)
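As written, formula (10) is a plain product of the ratio β from formula (7) and the IOU from formula (9). A one-line sketch; the argument names are illustrative and both inputs are assumed to have been computed already:

```python
def improved_iou(beta, iou_value):
    """Improved intersection ratio of formula (10): the product of the
    ratio beta (formula (7)) and the plain IOU (formula (9))."""
    return beta * iou_value
```
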
In step S403, the evaluation index parameter is determined based on the improved intersection ratio.
It will be appreciated that a larger IOU value between the predicted bounding box and the actual bounding box indicates a higher accuracy of the current deep learning model. However, when the IOU is small (e.g., below a set value such as 0.5), vehicle safety can still be ensured if the area of the predicted bounding box is greater than or equal to the area of the actual bounding box; conversely, if the area of the predicted bounding box is smaller than that of the actual bounding box, vehicle safety may not be ensured. Therefore, in this embodiment, the deep learning model is evaluated by combining the intersection ratio with the ratio β: the larger the improved intersection ratio IOU_improved, the higher the accuracy of the current deep learning model and the higher the safety of its predictions; conversely, the lower the accuracy and the lower the safety of its predictions.
As can be seen from the above description, in this embodiment, the evaluation index parameter can be determined accurately by obtaining the intersection ratio between the prediction bounding box and the actual bounding box, determining an improved intersection ratio based on the product of the ratio and the intersection ratio, and determining the evaluation index parameter based on the improved intersection ratio. The deep learning model can then be evaluated based on this parameter, which enriches the ways in which the currently trained deep learning model can be evaluated, enables an accurate evaluation of the deep learning model, and ensures the accuracy of the trained deep learning model.
FIG. 5 is a block diagram illustrating an evaluation apparatus of a deep learning model according to an exemplary embodiment of the present invention. The apparatus of this embodiment can be applied to a server (e.g., a single server or a server cluster composed of a plurality of servers). As shown in FIG. 5, the apparatus includes: a vertex coordinate determination module 110, an index parameter determination module 120, and a learning model evaluation module 130, wherein:
a vertex coordinate determination module 110, configured to determine first vertex coordinates of a predicted bounding box of a target object in a test image based on a deep learning model;
an index parameter determination module 120, configured to determine an evaluation index parameter based on the first vertex coordinates and the second vertex coordinates.
Wherein the second vertex coordinates comprise vertex coordinates of an actual bounding box of the target object in the test image.
A learning model evaluation module 130, configured to evaluate the deep learning model based on the evaluation index parameter.
As can be seen from the above description, in this embodiment, the first vertex coordinates of the prediction bounding box of the target object in the test image are determined based on the deep learning model; the evaluation index parameter is determined based on the first vertex coordinates and the second vertex coordinates, the second vertex coordinates comprising the vertex coordinates of the actual bounding box of the target object in the test image; and the deep learning model is evaluated based on the evaluation index parameter. This enriches the ways in which the currently trained deep learning model can be evaluated, enables an accurate evaluation of the deep learning model, and ensures the accuracy of the trained deep learning model.
Fig. 6 is a block diagram showing a configuration of an evaluation apparatus of a deep learning model according to another exemplary embodiment of the present invention. The apparatus of this embodiment can be applied to a server (e.g., a single server or a server cluster composed of a plurality of servers). The vertex coordinate determination module 210, the index parameter determination module 220, and the learning model evaluation module 230 have the same functions as the vertex coordinate determination module 110, the index parameter determination module 120, and the learning model evaluation module 130 in the embodiment shown in Fig. 5, and are not described here again.
As shown in fig. 6, the index parameter determination module 220 may be further configured to determine the evaluation index parameter based on a difference between the first vertex coordinates and the second vertex coordinates.
In an embodiment, the index parameter determination module 220 may include:
a difference determining unit 221, configured to determine a first difference between the maximum values of the abscissas of the first vertex and the second vertex, a second difference between the minimum values of the abscissas, a third difference between the maximum values of the ordinates, and a fourth difference between the minimum values of the ordinates;
a difference calculation unit 222, configured to calculate a fifth difference between the maximum value and the minimum value of the abscissa of the second vertex, and a sixth difference between the maximum value and the minimum value of the ordinate of the second vertex;
a ratio determining unit 223, configured to determine a ratio of a sum of absolute values of the first difference, the second difference, the third difference, and the fourth difference to a sum of absolute values of the fifth difference and the sixth difference;
a parameter determination unit 224 configured to determine the evaluation index parameter based on the ratio.
In an embodiment, the parameter determining unit 224 may be further configured to determine the evaluation index parameter based on a difference between 1 and the ratio.
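The behavior of units 221 to 224 described above (the ratio of the summed absolute extreme-coordinate differences to the actual box's width plus height, with the evaluation index parameter taken as the difference between 1 and that ratio) may be sketched as follows. This is a minimal illustration; the (x_min, y_min, x_max, y_max) tuple layout and function names are assumptions:

```python
def vertex_difference_ratio(pred_box, actual_box):
    """Ratio of the summed absolute differences between the coordinate
    extremes of the predicted and actual boxes (first to fourth
    differences) to the actual box's width plus height (fifth and
    sixth differences)."""
    px1, py1, px2, py2 = pred_box
    ax1, ay1, ax2, ay2 = actual_box
    numerator = (abs(px2 - ax2) + abs(px1 - ax1)
                 + abs(py2 - ay2) + abs(py1 - ay1))
    denominator = abs(ax2 - ax1) + abs(ay2 - ay1)
    return numerator / denominator


def evaluation_index_parameter(pred_box, actual_box):
    """Evaluation index parameter as the difference between 1 and the
    ratio: 1.0 when the boxes coincide, smaller as they diverge."""
    return 1.0 - vertex_difference_ratio(pred_box, actual_box)
```

For example, a prediction that overshoots a 2x2 actual box by one unit on one side gives a ratio of 1/4 and hence an evaluation index parameter of 0.75.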
In another embodiment, the parameter determining unit 224 may be further configured to:
acquiring the intersection ratio of the prediction boundary box and the actual boundary box;
determining an improved intersection ratio based on a product of the ratio and the intersection ratio;
determining the evaluation index parameter based on the improved intersection ratio.
For the device embodiments, since they substantially correspond to the method embodiments, reference may be made to the relevant description of the method embodiments. The device embodiments described above are merely illustrative: the units described as separate parts may or may not be physically separate, and parts shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the invention. Those of ordinary skill in the art can understand and implement the embodiments without inventive effort.
The embodiments of the evaluation device of the deep learning model can be applied to network equipment. The device embodiments may be implemented by software, by hardware, or by a combination of hardware and software. Taking a software implementation as an example, the device is formed, as a logical means, by the processor of the equipment in which it is located reading the corresponding computer program instructions from a nonvolatile memory into memory for execution. In terms of hardware, Fig. 7 shows a hardware structure diagram of the electronic device in which an evaluation apparatus of a deep learning model of the present invention is located. In addition to the processor, network interface, memory, and nonvolatile memory shown in Fig. 7, the equipment in which the apparatus is located may generally include other hardware, such as a forwarding chip responsible for processing packets. In terms of hardware structure, the equipment may also be a distributed device and may include a plurality of interface cards, so as to expand packet processing at the hardware level.
An embodiment of the present invention further provides a computer-readable storage medium on which a computer program is stored, where the computer program, when executed by a processor, implements the following method:
determining first vertex coordinates of a prediction bounding box of a target object in a test image based on a deep learning model;
determining an evaluation index parameter based on the first vertex coordinates and second vertex coordinates, the second vertex coordinates including vertex coordinates of an actual bounding box of the target object in the test image;
and evaluating the deep learning model based on the evaluation index parameters.
Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This invention is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the invention and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims.
It will be understood that the invention is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the invention is limited only by the appended claims.

Claims (12)

1. A method for evaluating a deep learning model is characterized by comprising the following steps:
determining first vertex coordinates of a prediction bounding box of a target object in a test image based on a deep learning model;
determining an evaluation index parameter based on the first vertex coordinates and second vertex coordinates, the second vertex coordinates including vertex coordinates of an actual bounding box of the target object in the test image;
and evaluating the deep learning model based on the evaluation index parameters.
2. The method of claim 1, wherein determining an evaluation index parameter based on the first and second vertex coordinates comprises:
determining the evaluation index parameter based on a difference between the first vertex coordinates and the second vertex coordinates.
3. The method of claim 2, wherein determining the evaluation index parameter based on the difference between the first vertex coordinate and the second vertex coordinate comprises:
determining a first difference between the maxima of the abscissas, a second difference between the minima of the abscissas, a third difference between the maxima of the ordinates and a fourth difference between the minima of the ordinates of the first vertex and the second vertex;
calculating a fifth difference value between the maximum value and the minimum value of the abscissa and a sixth difference value between the maximum value and the minimum value of the ordinate of the second vertex;
determining a ratio of a sum of absolute values of the first difference, the second difference, the third difference, and the fourth difference to a sum of absolute values of the fifth difference and the sixth difference;
determining the evaluation index parameter based on the ratio.
4. The method of claim 3, wherein said determining the evaluation index parameter based on the ratio comprises:
determining the evaluation index parameter based on a difference between 1 and the ratio.
5. The method of claim 3, wherein said determining the evaluation index parameter based on the ratio comprises:
acquiring the intersection ratio of the prediction boundary box and the actual boundary box;
determining an improved intersection ratio based on a product of the ratio and the intersection ratio;
determining the evaluation index parameter based on the improved intersection ratio.
6. An evaluation device for a deep learning model, comprising:
the vertex coordinate determination module is used for determining first vertex coordinates of a prediction boundary box of a target object in the test image based on the deep learning model;
an index parameter determination module, configured to determine an evaluation index parameter based on the first vertex coordinates and second vertex coordinates, where the second vertex coordinates include vertex coordinates of an actual bounding box of the target object in the test image;
and the learning model evaluation module is used for evaluating the deep learning model based on the evaluation index parameters.
7. The apparatus of claim 6, wherein the index parameter determination module is further configured to determine the evaluation index parameter based on a difference between the first vertex coordinates and the second vertex coordinates.
8. The apparatus of claim 7, wherein the index parameter determination module comprises:
a difference determining unit for determining a first difference between the maximum values of the abscissas of the first vertex and the second vertex, a second difference between the minimum values of the abscissas, a third difference between the maximum values of the ordinates, and a fourth difference between the minimum values of the ordinates;
a difference calculation unit configured to calculate a fifth difference between a maximum value and a minimum value of the abscissa of the second vertex, and a sixth difference between a maximum value and a minimum value of the ordinate;
a ratio determination unit configured to determine a ratio of a sum of absolute values of the first difference, the second difference, the third difference, and the fourth difference to a sum of absolute values of the fifth difference and the sixth difference;
a parameter determination unit configured to determine the evaluation index parameter based on the ratio.
9. The apparatus according to claim 8, wherein the parameter determination unit is further configured to determine the evaluation index parameter based on a difference between 1 and the ratio.
10. The apparatus of claim 8, wherein the parameter determining unit is further configured to:
acquiring the intersection ratio of the prediction boundary box and the actual boundary box;
determining an improved intersection ratio based on a product of the ratio and the intersection ratio;
determining the evaluation index parameter based on the improved intersection ratio.
11. An electronic device, characterized in that the electronic device comprises:
a processor;
a memory configured to store processor-executable instructions;
wherein the processor is configured to:
determining first vertex coordinates of a prediction bounding box of a target object in a test image based on a deep learning model;
determining an evaluation index parameter based on the first vertex coordinates and second vertex coordinates, the second vertex coordinates including vertex coordinates of an actual bounding box of the target object in the test image;
and evaluating the deep learning model based on the evaluation index parameters.
12. A computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements:
determining first vertex coordinates of a prediction bounding box of a target object in a test image based on a deep learning model;
determining an evaluation index parameter based on the first vertex coordinates and second vertex coordinates, the second vertex coordinates including vertex coordinates of an actual bounding box of the target object in the test image;
and evaluating the deep learning model based on the evaluation index parameters.
CN202010549757.1A 2020-06-16 2020-06-16 Evaluation method and device of deep learning model, electronic equipment and storage medium Active CN111797993B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010549757.1A CN111797993B (en) 2020-06-16 2020-06-16 Evaluation method and device of deep learning model, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111797993A true CN111797993A (en) 2020-10-20
CN111797993B CN111797993B (en) 2024-02-27

Family

ID=72803043


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112184700A (en) * 2020-10-21 2021-01-05 西北民族大学 Monocular camera-based agricultural unmanned vehicle obstacle sensing method and device
CN113642521A (en) * 2021-09-01 2021-11-12 东软睿驰汽车技术(沈阳)有限公司 Traffic light identification quality evaluation method and device and electronic equipment

Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0814860A (en) * 1994-06-30 1996-01-19 Toshiba Corp Model creating device
KR20160131621A (en) * 2015-05-08 2016-11-16 (주)케이사인 Parallel processing system
WO2017101292A1 (en) * 2015-12-16 2017-06-22 深圳市汇顶科技股份有限公司 Autofocusing method, device and system
CN108805093A (en) * 2018-06-19 2018-11-13 华南理工大学 Escalator passenger based on deep learning falls down detection algorithm
CN109523503A (en) * 2018-09-11 2019-03-26 北京三快在线科技有限公司 A kind of method and apparatus of image cropping
CN109919072A (en) * 2019-02-28 2019-06-21 桂林电子科技大学 Fine vehicle type recognition and flow statistics method based on deep learning and trajectory tracking
CN110097091A (en) * 2019-04-10 2019-08-06 东南大学 It is trained be distributed with inference data it is inconsistent under the conditions of image fine granularity recognition methods
CN110263939A (en) * 2019-06-24 2019-09-20 腾讯科技(深圳)有限公司 A kind of appraisal procedure, device, equipment and medium indicating learning model
CN110298298A (en) * 2019-06-26 2019-10-01 北京市商汤科技开发有限公司 Target detection and the training method of target detection network, device and equipment
CN110503095A (en) * 2019-08-27 2019-11-26 中国人民公安大学 Alignment quality evaluation method, localization method and the equipment of target detection model
CN110598751A (en) * 2019-08-14 2019-12-20 安徽师范大学 Anchor point generating method based on geometric attributes
CN110688994A (en) * 2019-12-10 2020-01-14 南京甄视智能科技有限公司 Human face detection method and device based on cross-over ratio and multi-model fusion and computer readable storage medium
CN110765951A (en) * 2019-10-24 2020-02-07 西安电子科技大学 Remote sensing image airplane target detection method based on bounding box correction algorithm
CN110796168A (en) * 2019-09-26 2020-02-14 江苏大学 Improved YOLOv 3-based vehicle detection method
CN111160527A (en) * 2019-12-27 2020-05-15 歌尔股份有限公司 Target identification method and device based on MASK RCNN network model

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
ZHAOHUI ZHENG et al.: "Distance-IoU Loss: Faster and Better Learning for Bounding Box Regression", arXiv:1911.08287, pages 1-8
LIU Ge et al.: "Improved vehicle information detection based on RetinaNet", Journal of Computer Applications, vol. 40, no. 03, pages 854-858
ZHOU Wenting: "Research on license plate recognition algorithms for large tilt angles", China Masters' Theses Full-text Database (Engineering Science and Technology II), no. 2019, pages 034-337



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant