CN111402335B - Evaluation method and device of deep learning model, electronic equipment and storage medium - Google Patents

Evaluation method and device of deep learning model, electronic equipment and storage medium

Info

Publication number
CN111402335B
CN111402335B
Authority
CN
China
Prior art keywords
learning model
deep learning
reference plane
set reference
distance
Prior art date
Legal status
Active
Application number
CN202010191641.5A
Other languages
Chinese (zh)
Other versions
CN111402335A (en)
Inventor
苏英菲
Current Assignee
Neusoft Reach Automotive Technology Shenyang Co Ltd
Original Assignee
Neusoft Reach Automotive Technology Shenyang Co Ltd
Priority date
Filing date
Publication date
Application filed by Neusoft Reach Automotive Technology Shenyang Co Ltd
Priority to CN202010191641.5A
Publication of CN111402335A
Application granted
Publication of CN111402335B

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/75Determining position or orientation of objects or cameras using feature-based methods involving models
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method and apparatus for evaluating a deep learning model, an electronic device, and a storage medium. The method includes: acquiring the actual distance between a target object and a set reference plane in test image information; determining, based on the deep learning model, the predicted distance between the target object and the set reference plane in the test image; and evaluating the deep learning model based on a comparison of the actual distance and the predicted distance. The invention enriches the ways in which the currently trained deep learning model can be evaluated, enables accurate evaluation of the model, and thereby helps ensure the accuracy of the trained deep learning model.

Description

Evaluation method and device of deep learning model, electronic equipment and storage medium
Technical Field
The present invention relates to the field of automatic driving technologies, and in particular, to a method and apparatus for evaluating a deep learning model, an electronic device, and a storage medium.
Background
The overall framework for training a deep learning model includes: acquiring sample images, preprocessing the sample images, inputting the preprocessed images into an existing model (such as a YOLO model) for training, evaluating the trained model, and then deciding whether to continue or end training based on the evaluation result.
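For illustration only, the train-evaluate-decide loop described above could be organized as in the following Python sketch; the callables preprocess, train_one_round and evaluate are hypothetical placeholders supplied by the caller and are not defined by this disclosure.

```python
from typing import Any, Callable, Iterable, List


def train_until_good_enough(
    model: Any,
    sample_images: Iterable[Any],
    test_set: Any,
    preprocess: Callable[[Any], Any],
    train_one_round: Callable[[Any, List[Any]], Any],
    evaluate: Callable[[Any, Any], float],
    target_score: float,
    max_rounds: int = 50,
) -> Any:
    """Sketch of the train-evaluate-decide loop: preprocess the sample
    images, train the existing model on them, evaluate the trained model,
    and continue or stop based on the evaluation result."""
    images = list(sample_images)
    for _ in range(max_rounds):
        batch = [preprocess(img) for img in images]   # preprocess sample images
        model = train_one_round(model, batch)         # continue training the model
        if evaluate(model, test_set) >= target_score: # evaluate, then decide
            break
    return model
```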
The model evaluation approach adopted in the related art relies on a single metric and cannot accurately evaluate the trained deep learning model, which in turn affects the accuracy of the trained model.
Disclosure of Invention
In view of the above, the present invention provides a method, apparatus, electronic device and storage medium for evaluating a deep learning model to solve the above-mentioned problems.
To achieve the above purpose, the technical solution adopted by the present invention is as follows:
according to a first aspect of an embodiment of the present invention, there is provided an evaluation method of a deep learning model, including:
acquiring the actual distance between a target object and a set reference plane in the test image information;
determining a predicted distance between the target object and the set reference plane in the test image based on a deep learning model;
and evaluating the deep learning model based on a comparison result of the actual distance and the predicted distance.
In an embodiment, the target object comprises a contact point of a wheel of the target vehicle with the set reference plane or a contact point of a Bounding box of the target vehicle with the set reference plane, the set reference plane comprising the ground.
In an embodiment, the evaluating the deep learning model based on a comparison of the actual distance and the predicted distance includes:
determining a difference between the actual distance and the predicted distance;
and evaluating the deep learning model based on the difference value.
In an embodiment, the evaluating the deep learning model based on the difference value includes:
acquiring an average accuracy MAP index of the deep learning model;
the deep learning model is evaluated based on a weighted sum of the inverse of the difference and the MAP index.
In an embodiment, the evaluating the deep learning model based on a weighted sum of the inverse of the difference and the MAP index includes:
normalizing the reciprocal of the difference value and the MAP index to obtain the reciprocal of the difference value and the MAP index after normalization;
and evaluating the deep learning model based on the inverse of the difference value after normalization processing and the weighted sum of the MAP indexes.
According to a second aspect of the embodiment of the present invention, there is provided an evaluation device for a deep learning model, including:
the actual distance acquisition module is used for acquiring the actual distance between the target object and the set reference plane in the test image information;
the predicted distance determining module is used for determining the predicted distance between the target object and the set reference plane in the test image based on a deep learning model;
and the learning model evaluation module is used for evaluating the deep learning model based on the comparison result of the actual distance and the predicted distance.
In an embodiment, the target object comprises a contact point of a wheel of the target vehicle with the set reference plane or a contact point of a Bounding box of the target vehicle with the set reference plane, the set reference plane comprising the ground.
In one embodiment, the learning model evaluation module includes:
a distance difference value determining unit configured to determine a difference value between the actual distance and the predicted distance;
and the learning model evaluation unit is used for evaluating the deep learning model based on the difference value.
In an embodiment, the learning model evaluation unit is further configured to:
acquiring an average accuracy MAP index of the deep learning model;
the deep learning model is evaluated based on a weighted sum of the inverse of the difference and the MAP index.
In an embodiment, the learning model evaluation unit is further configured to:
normalizing the reciprocal of the difference value and the MAP index to obtain the reciprocal of the difference value and the MAP index after normalization;
and evaluating the deep learning model based on the inverse of the difference value after normalization processing and the weighted sum of the MAP indexes.
According to a third aspect of an embodiment of the present invention, there is provided an electronic device including:
a processor;
a memory configured to store processor-executable instructions;
wherein the processor is configured to:
acquiring the actual distance between a target object and a set reference plane in the test image information;
determining a predicted distance between the target object and the set reference plane in the test image based on a deep learning model;
and evaluating the deep learning model based on a comparison result of the actual distance and the predicted distance.
According to a fourth aspect of an embodiment of the present invention, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements:
acquiring the actual distance between a target object and a set reference plane in the test image information;
determining a predicted distance between the target object and the set reference plane in the test image based on a deep learning model;
and evaluating the deep learning model based on a comparison result of the actual distance and the predicted distance.
Compared with the prior art, the evaluation method of the deep learning model provided by the invention acquires the actual distance between the target object and the set reference plane in the test image information, determines the predicted distance between the target object and the set reference plane in the test image based on the deep learning model, and then evaluates the deep learning model based on a comparison of the actual distance and the predicted distance. This enriches the ways in which the currently trained deep learning model can be evaluated, enables accurate evaluation of the model, and thereby helps ensure the accuracy of the trained deep learning model.
Drawings
FIG. 1 illustrates a flowchart of a method of evaluating a deep learning model according to an exemplary embodiment of the present invention;
FIG. 2 shows a schematic diagram of how a deep learning model is evaluated based on a comparison of an actual distance to the predicted distance, according to the present invention;
FIG. 3 shows a schematic diagram of how a deep learning model is evaluated based on differences in accordance with the present invention;
FIG. 4 shows a schematic diagram of how a deep learning model is evaluated based on a weighted sum of the inverse of the difference and the MAP index, in accordance with the present invention;
FIG. 5 shows a block diagram of a deep learning model evaluation apparatus according to an exemplary embodiment of the present invention;
FIG. 6 shows a block diagram of a deep learning model evaluation apparatus according to another exemplary embodiment of the present invention;
fig. 7 shows a block diagram of an electronic device according to an exemplary embodiment of the invention.
Detailed Description
The present invention will be described in detail below with reference to the specific embodiments shown in the drawings. The embodiments are not intended to limit the invention, and structural, methodological, or functional modifications made on the basis of these embodiments fall within the scope of the invention.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in this specification and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any or all possible combinations of one or more of the associated listed items.
It should be understood that, although the terms first, second, etc. may be used herein to describe various structures, these structures should not be limited by these terms. These terms are only used to distinguish one type of structure from another.
Fig. 1 shows a flowchart of an evaluation method of a deep learning model according to an exemplary embodiment of the present invention. The method of this embodiment can be applied to a server (such as a single server or a server cluster formed by a plurality of servers). As shown in fig. 1, the method includes the following steps S101-S103:
in step S101, the actual distance between the target object and the set reference plane in the test image information is acquired.
In this embodiment, the server may obtain the actual distance between the target object and the set reference plane in the test image information.
For example, after the test image information for training the deep learning model is obtained, a manual labeling or automatic labeling mode may be adopted to label the target object and the set reference plane in the test image information, so as to calculate the actual distance between the target object and the set reference plane in the test image information.
It should be noted that, for the manner of labeling the target object and the set reference plane in the test image information, reference may be made to the explanations in the related art; this embodiment does not limit the specific labeling manner.
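Purely as an illustrative sketch, and assuming (this is not specified by the disclosure) that the set reference plane is annotated in the image as a 2D line a·x + b·y + c = 0 and the target object as a labeled point, the actual distance could be computed with the standard point-to-line formula:

```python
import math


def point_to_line_distance(px: float, py: float,
                           a: float, b: float, c: float) -> float:
    """Distance from the labeled target point (px, py) to the labeled
    reference line a*x + b*y + c = 0 (the set reference plane as it
    appears in the image). Illustrative only; the disclosure does not
    fix a particular annotation format or distance formula."""
    return abs(a * px + b * py + c) / math.hypot(a, b)
```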
In this embodiment, the deep learning model may be a deep learning model that is trained by a set model training method by using training sample images in advance. It should be noted that, the set model training method may be set based on actual service requirements, which is not limited in this embodiment.
It will be appreciated that the above-described test image information matches the purpose of the deep learning model. For example, if the purpose of the deep learning model is to recognize objects in the vehicle's surroundings in the field of autonomous driving, the test image may include an image of the vehicle's surroundings.
In an alternative embodiment, the test image information may be collected by a monocular camera or the like mounted at a set position on the vehicle. The type of the test image may be set by a developer according to actual needs, which is not limited in this embodiment.
In step S102, a predicted distance between the target object and the set reference plane in the test image is determined based on a deep learning model.
In this embodiment, after or while the actual distance between the target object and the set reference plane in the test image information is obtained, the predicted distance between the target object and the set reference plane in the test image may be determined based on the deep learning model.
For example, after training the deep learning model, a test image may be input to the deep learning model to predict the positions of the target object and the set reference plane in the test image, and then a predicted distance between the target object and the set reference plane may be calculated based on the prediction result.
In an alternative embodiment, the target object may be a contact point between a wheel of the target vehicle and the set reference plane, or may be a contact point between a Bounding box of the target vehicle and the set reference plane, where the set reference plane includes the ground.
For example, after the test image information for training the deep learning model is obtained, the wheels of the target vehicle and the set reference plane may be marked in the test image information by means of manual marking or automatic marking, or the Bounding box of the target vehicle and the set reference plane may be marked, so that the actual distance between the target object and the set reference plane may be calculated.
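As a sketch only, assuming a model interface that returns a predicted contact point together with predicted coefficients of the reference line (an assumed interface, not one defined by this disclosure), the predicted distance can be computed with the same point-to-line formula used for the labeled ground truth:

```python
import math
from typing import Any, Callable, Tuple

# Assumed model output: ((px, py), (a, b, c)) -- a predicted contact point
# and the predicted coefficients of the reference line in the image.
PointAndLine = Tuple[Tuple[float, float], Tuple[float, float, float]]


def predicted_distance(model: Callable[[Any], PointAndLine],
                       test_image: Any) -> float:
    """Run the trained model on the test image and compute the predicted
    point-to-line distance from its output."""
    (px, py), (a, b, c) = model(test_image)
    return abs(a * px + b * py + c) / math.hypot(a, b)
```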
In step S103, the deep learning model is evaluated based on the result of the comparison of the actual distance and the predicted distance.
In this embodiment, after the actual distance between the target object and the set reference plane in the test image information is obtained and the predicted distance between the target object and the set reference plane in the test image is determined based on the deep learning model, the deep learning model may be evaluated based on the comparison result of the actual distance and the predicted distance.
In an alternative embodiment, the form of the comparison result between the actual distance and the predicted distance may be set by developers according to actual service requirements, so long as it is an index that can represent the deviation between the actual distance and the predicted distance; this embodiment is not limited in this respect.
It will be appreciated that the greater the difference between the actual distance and the predicted distance, the poorer the accuracy of the currently trained deep learning model, and therefore, the difference may be used as an evaluation index for evaluating the deep learning model.
In an alternative embodiment, the comparison result may be combined with the model evaluation mode in the related art to obtain the comprehensive evaluation index of the deep learning model.
In another embodiment, the above manner of evaluating the deep learning model based on the comparison result between the actual distance and the predicted distance may also refer to the embodiment shown in fig. 2 described below, which will not be described in detail herein.
According to the technical scheme, in the evaluation method of the deep learning model, the actual distance between the target object and the set reference plane in the test image information is obtained, the predicted distance between the target object and the set reference plane in the test image is determined based on the deep learning model, and then the deep learning model is evaluated based on the comparison result of the actual distance and the predicted distance.
FIG. 2 shows a schematic diagram of how a deep learning model is evaluated based on a comparison of an actual distance to the predicted distance, according to the present invention; the present embodiment exemplifies how the deep learning model is evaluated based on the comparison result of the actual distance and the predicted distance on the basis of the above-described embodiments. As shown in fig. 2, the evaluation of the deep learning model based on the comparison result between the actual distance and the predicted distance in the above step S103 may include the following steps S201 to S202:
in step S201, a difference between the actual distance and the predicted distance is determined.
In this embodiment, after the actual distance between the target object and the set reference plane in the test image information is obtained, and the predicted distance between the target object and the set reference plane in the test image is determined based on the deep learning model, the difference between the actual distance and the predicted distance may be calculated.
In an alternative embodiment, the difference between the actual distance and the predicted distance may be a difference between the actual distance and the predicted distance, or may be a difference between the predicted distance and the actual distance, or may be an absolute value of a difference between the two, which is not limited in this embodiment.
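A minimal sketch of one of these variants, the absolute value of the difference:

```python
def distance_error(actual_distance: float, predicted_distance: float) -> float:
    """Absolute difference between the actual and the predicted distance,
    one of the variants described above."""
    return abs(actual_distance - predicted_distance)
```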
In step S202, the deep learning model is evaluated based on the difference value.
In this embodiment, after determining the difference between the actual distance and the predicted distance, the deep learning model may be evaluated based on the difference.
It will be appreciated that the greater the difference between the actual distance and the predicted distance, the poorer the accuracy of the currently trained deep learning model, and therefore, the difference may be used as an evaluation index for evaluating the deep learning model.
In an alternative embodiment, the above difference may be combined with the model evaluation mode in the related art to obtain the comprehensive evaluation index of the deep learning model.
According to the technical scheme, the difference between the actual distance and the predicted distance is determined, and the deep learning model is evaluated based on the difference.
FIG. 3 shows a schematic diagram of how a deep learning model is evaluated based on differences in accordance with the present invention; the present embodiment exemplifies how the deep learning model is evaluated based on the difference value on the basis of the above-described embodiments. As shown in fig. 3, the evaluating the deep learning model based on the difference in the step S202 may include the following steps S301 to S302:
in step S301, an average accuracy MAP index of the deep learning model is acquired.
In this embodiment, after the current deep learning model to be evaluated is obtained, the average accuracy MAP index of the deep learning model may be calculated based on the model evaluation method in the related art.
It is worth noting that mean average precision (MAP) is a performance metric for algorithms that predict both target location and category, and it is very useful for evaluating target localization models, target detection models, and instance segmentation models. In this embodiment, for the manner of calculating the MAP index of the deep learning model, reference may be made to the explanations in the related art; this embodiment is not limited in this respect.
In step S302, the deep learning model is evaluated based on a weighted sum of the inverse of the difference and the MAP index.
In this embodiment, after the average accuracy MAP index of the deep learning model is obtained, the deep learning model may be evaluated based on a weighted sum of the inverse of the difference and the MAP index.
It should be noted that the difference between the actual distance and the predicted distance is inversely related to the accuracy of the currently trained deep learning model: the greater the difference between the actual distance and the predicted distance, the lower the accuracy of the currently trained deep learning model; the smaller the difference, the higher the accuracy.
Conversely, the average accuracy MAP index of the deep learning model is positively correlated with the accuracy of the currently trained deep learning model, i.e. the larger the value of the average accuracy MAP index is, the higher the accuracy of the currently trained deep learning model is; the smaller the value of the average accuracy MAP index, the lower the accuracy of the currently trained deep learning model.
To combine the difference between the actual distance and the predicted distance with the MAP index when evaluating the currently trained deep learning model, this embodiment calculates the reciprocal of the difference, then calculates a weighted sum of that reciprocal and the MAP index, and uses the weighted sum as a comprehensive evaluation index for the deep learning model.
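A minimal sketch of this combination is given below; the weights and the small constant guarding against division by zero are illustrative choices, not values fixed by this disclosure:

```python
def composite_score(distance_error: float, map_index: float,
                    w_error: float = 0.5, w_map: float = 0.5,
                    eps: float = 1e-6) -> float:
    """Weighted sum of the reciprocal of the distance error and the MAP
    index; a larger score indicates a more accurate model."""
    return w_error * (1.0 / (distance_error + eps)) + w_map * map_index
```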
As can be seen from the above technical solution, in this embodiment, by obtaining the average accuracy MAP index of the deep learning model and evaluating the deep learning model based on the weighted sum of the inverse of the difference value and the MAP index, the difference value between the actual distance and the predicted distance and the MAP index can be combined to evaluate the currently trained deep learning model.
FIG. 4 shows a schematic diagram of how a deep learning model is evaluated based on a weighted sum of the inverse of the difference and the MAP index, in accordance with the present invention; the present embodiment exemplifies how the deep learning model is evaluated based on a weighted sum of the reciprocal of the difference and the MAP index on the basis of the above-described embodiments. As shown in fig. 4, the evaluating the deep learning model based on the weighted sum of the inverse of the difference and the MAP index in the above step S302 may include the following steps S401 to S402:
in step S401, normalization processing is performed on the inverse of the difference value and the MAP index, so as to obtain the inverse of the difference value and the MAP index after normalization processing.
In this embodiment, after calculating the reciprocal of the difference between the actual distance and the predicted distance, normalization processing may be performed on the reciprocal of the difference and the MAP index, to obtain the reciprocal of the difference and the MAP index after normalization processing.
It will be appreciated that normalizing the inverse of the difference and the MAP index ensures that the factors for subsequent weighted sums are of the same order of magnitude.
In step S402, the deep learning model is evaluated based on the inverse of the difference value after the normalization processing and a weighted sum of the MAP indices.
In this embodiment, after the inverse of the difference value and the MAP index are normalized to obtain the inverse of the difference value and the MAP index after the normalization, a weighted sum operation may be performed based on the inverse of the difference value and the MAP index after the normalization to obtain an operation result, and then the operation result may be used as a comprehensive evaluation index to evaluate the currently trained deep learning model.
It can be understood that the value of the comprehensive evaluation index is positively correlated with the accuracy of the currently trained deep learning model, that is, the greater the value of the comprehensive evaluation index is, the higher the accuracy of the currently trained deep learning model is. Conversely, the smaller the value of the above-mentioned comprehensive evaluation index, the lower the accuracy of the currently trained deep learning model.
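One possible sketch, assuming min-max normalization over a set of candidate models or evaluation runs (the disclosure does not prescribe a particular normalization method):

```python
from typing import List, Sequence


def min_max_normalize(values: Sequence[float]) -> List[float]:
    """Scale a set of values into [0, 1] so that both factors of the
    weighted sum share the same order of magnitude."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0
    return [(v - lo) / span for v in values]


def normalized_composite_scores(reciprocal_errors: Sequence[float],
                                map_indices: Sequence[float],
                                w_error: float = 0.5,
                                w_map: float = 0.5) -> List[float]:
    """Normalize the reciprocals of the distance differences and the MAP
    indices, then take their weighted sum as the comprehensive evaluation
    index; a larger value indicates a more accurate model."""
    errs = min_max_normalize(reciprocal_errors)
    maps = min_max_normalize(map_indices)
    return [w_error * e + w_map * m for e, m in zip(errs, maps)]
```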
As can be seen from the above technical solution, in this embodiment, by obtaining the average accuracy MAP index of the deep learning model, normalizing the inverse of the difference value and the MAP index to obtain the inverse of the difference value and the MAP index after normalization, and evaluating the deep learning model based on the weighted sum of the inverse of the difference value and the MAP index after normalization, the difference value between the actual distance and the predicted distance and the MAP index can be combined to evaluate the currently trained deep learning model.
FIG. 5 shows a block diagram of a deep learning model evaluation apparatus according to an exemplary embodiment of the present invention; as shown in fig. 5, the apparatus includes: an actual distance acquisition module 110, a predicted distance determination module 120, and a learning model evaluation module 130, wherein:
an actual distance acquiring module 110, configured to acquire an actual distance between a target object and a set reference plane in the test image information;
a predicted distance determining module 120, configured to determine a predicted distance between the target object and the set reference plane in the test image based on a deep learning model;
and a learning model evaluation module 130, configured to evaluate the deep learning model based on a comparison result of the actual distance and the predicted distance.
As can be seen from the above technical solution, this embodiment obtains the actual distance between the target object and the set reference plane in the test image information and determines, based on the deep learning model, the predicted distance between the target object and the set reference plane in the test image, and then evaluates the deep learning model based on the comparison of the two. This enriches the ways in which the currently trained deep learning model can be evaluated, enables accurate evaluation of the model, and thereby helps ensure the accuracy of the trained deep learning model.
FIG. 6 shows a block diagram of a deep learning model evaluation apparatus according to another exemplary embodiment of the present invention; the actual distance acquiring module 210, the predicted distance determining module 220, and the learning model evaluating module 230 have the same functions as the actual distance acquiring module 110, the predicted distance determining module 120, and the learning model evaluating module 130 in the embodiment shown in fig. 5, and are not described herein. As shown in fig. 6, the target object includes a contact point of a wheel of the target vehicle with the set reference plane or a contact point of a Bounding box of the target vehicle with the set reference plane, the set reference plane including the ground.
In an alternative embodiment, the learning model evaluation module 230 may include:
a distance difference value determining unit 231 for determining a difference value between the actual distance and the predicted distance;
and a learning model evaluation unit 232 configured to evaluate the deep learning model based on the difference value.
In an alternative embodiment, the learning model evaluation unit 232 may also be configured to:
acquiring an average accuracy MAP index of the deep learning model;
the deep learning model is evaluated based on a weighted sum of the inverse of the difference and the MAP index.
In an alternative embodiment, the learning model evaluation unit 232 may also be configured to:
normalizing the reciprocal of the difference value and the MAP index to obtain the reciprocal of the difference value and the MAP index after normalization;
and evaluating the deep learning model based on the inverse of the difference value after normalization processing and the weighted sum of the MAP indexes.
Since the device embodiments substantially correspond to the method embodiments, reference may be made to the description of the method embodiments for relevant points. The apparatus embodiments described above are merely illustrative: the units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purposes of the present invention. Those of ordinary skill in the art can understand and implement the invention without undue effort.
The embodiment of the evaluation apparatus of the deep learning model can be applied to network equipment. The apparatus embodiments may be implemented by software, by hardware, or by a combination of hardware and software. Taking a software implementation as an example, the apparatus in a logical sense is formed by the processor of the device in which it is located reading corresponding computer program instructions from a nonvolatile memory into memory and running them. At the hardware level, fig. 7 shows a hardware structure diagram of an electronic device in which the evaluation apparatus of the deep learning model of the present invention is located. In addition to the processor, network interface, memory, and nonvolatile memory shown in fig. 7, the device may generally include other hardware, such as a forwarding chip responsible for processing packets; in terms of hardware architecture, the device may also be a distributed device, possibly comprising a plurality of interface cards, so that packet processing can be extended at the hardware level.
An embodiment of the invention also provides a computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, it implements the following method:
acquiring the actual distance between a target object and a set reference plane in the test image information;
determining a predicted distance between the target object and the set reference plane in the test image based on a deep learning model;
and evaluating the deep learning model based on a comparison result of the actual distance and the predicted distance.
Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This invention is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the invention and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims.
It is to be understood that the invention is not limited to the precise arrangements and instrumentalities shown in the drawings, which have been described above, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the invention is limited only by the appended claims.

Claims (10)

1. A method for evaluating a deep learning model, comprising:
acquiring the actual distance between a target object and a set reference plane in test image information, wherein the target object comprises a contact point of a wheel of a target vehicle and the set reference plane or a contact point of a boundary frame of the target vehicle and the set reference plane, and the set reference plane comprises the ground;
determining a predicted distance between the target object and the set reference plane in the test image based on a deep learning model;
and evaluating the deep learning model based on a comparison result of the actual distance and the predicted distance.
2. The method of claim 1, wherein evaluating the deep learning model based on the comparison of the actual distance and the predicted distance comprises:
determining a difference between the actual distance and the predicted distance;
and evaluating the deep learning model based on the difference value.
3. The method of claim 2, wherein evaluating the deep learning model based on the difference comprises:
acquiring an average accuracy MAP index of the deep learning model;
the deep learning model is evaluated based on a weighted sum of the inverse of the difference and the MAP index.
4. The method of claim 3, wherein evaluating the deep learning model based on a weighted sum of the inverse of the difference and the MAP index comprises:
normalizing the reciprocal of the difference value and the MAP index to obtain the reciprocal of the difference value and the MAP index after normalization;
and evaluating the deep learning model based on the inverse of the difference value after normalization processing and the weighted sum of the MAP indexes.
5. An evaluation device for a deep learning model, comprising:
the device comprises an actual distance acquisition module, a test image acquisition module and a test image acquisition module, wherein the actual distance acquisition module is used for acquiring the actual distance between a target object and a set reference plane in the test image information, the target object comprises a contact point of a wheel of a target vehicle and the set reference plane or a contact point of a boundary frame of the target vehicle and the set reference plane, and the set reference plane comprises the ground;
the predicted distance determining module is used for determining the predicted distance between the target object and the set reference plane in the test image based on a deep learning model;
and the learning model evaluation module is used for evaluating the deep learning model based on the comparison result of the actual distance and the predicted distance.
6. The apparatus of claim 5, wherein the learning model evaluation module comprises:
a distance difference value determining unit configured to determine a difference value between the actual distance and the predicted distance;
and the learning model evaluation unit is used for evaluating the deep learning model based on the difference value.
7. The apparatus of claim 6, wherein the learning model evaluation unit is further configured to:
acquiring an average accuracy MAP index of the deep learning model;
the deep learning model is evaluated based on a weighted sum of the inverse of the difference and the MAP index.
8. The apparatus of claim 7, wherein the learning model evaluation unit is further configured to:
normalizing the reciprocal of the difference value and the MAP index to obtain the reciprocal of the difference value and the MAP index after normalization;
and evaluating the deep learning model based on the inverse of the difference value after normalization processing and the weighted sum of the MAP indexes.
9. An electronic device, the electronic device comprising:
a processor;
a memory configured to store processor-executable instructions;
wherein the processor is configured to:
acquiring the actual distance between a target object and a set reference plane in test image information, wherein the target object comprises a contact point of a wheel of a target vehicle and the set reference plane or a contact point of a boundary frame of the target vehicle and the set reference plane, and the set reference plane comprises the ground;
determining a predicted distance between the target object and the set reference plane in the test image based on a deep learning model;
and evaluating the deep learning model based on a comparison result of the actual distance and the predicted distance.
10. A computer-readable storage medium having stored thereon a computer program, characterized in that the program, when processed by a processor, implements:
acquiring the actual distance between a target object and a set reference plane in test image information, wherein the target object comprises a contact point of a wheel of a target vehicle and the set reference plane or a contact point of a boundary frame of the target vehicle and the set reference plane, and the set reference plane comprises the ground;
determining a predicted distance between the target object and the set reference plane in the test image based on a deep learning model;
and evaluating the deep learning model based on a comparison result of the actual distance and the predicted distance.
CN202010191641.5A 2020-03-18 2020-03-18 Evaluation method and device of deep learning model, electronic equipment and storage medium Active CN111402335B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010191641.5A CN111402335B (en) 2020-03-18 2020-03-18 Evaluation method and device of deep learning model, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010191641.5A CN111402335B (en) 2020-03-18 2020-03-18 Evaluation method and device of deep learning model, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111402335A (en) 2020-07-10
CN111402335B (en) 2023-07-28

Family

ID=71432597

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010191641.5A Active CN111402335B (en) 2020-03-18 2020-03-18 Evaluation method and device of deep learning model, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111402335B (en)

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101226637B (en) * 2007-01-18 2010-05-19 中国科学院自动化研究所 Method for detecting automatically contact point of vehicle wheel and ground
DE102012024957A1 (en) * 2012-12-20 2014-06-26 GM Global Technology Operations LLC (n. d. Ges. d. Staates Delaware) Method for evaluating images of camera in driver assistance system of motor car, involves identifying probable point of contact of structure corresponding to object with image detected in floor surface
CN105678221B (en) * 2015-12-29 2020-03-24 大连楼兰科技股份有限公司 Pedestrian detection method and system in rainy and snowy weather
CN107817018B (en) * 2016-09-12 2020-03-03 上海沃尔沃汽车研发有限公司 Test system and test method for lane line deviation alarm system
US11062461B2 (en) * 2017-11-16 2021-07-13 Zoox, Inc. Pose determination from contact points
US11210537B2 (en) * 2018-02-18 2021-12-28 Nvidia Corporation Object detection and detection confidence suitable for autonomous driving
CN110148148A (en) * 2019-03-01 2019-08-20 北京纵目安驰智能科技有限公司 A kind of training method, model and the storage medium of the lower edge detection model based on target detection
CN110210363B (en) * 2019-05-27 2022-09-06 中国科学技术大学 Vehicle-mounted image-based target vehicle line pressing detection method
CN110264520B (en) * 2019-06-14 2021-06-08 北京百度网讯科技有限公司 Vehicle-mounted sensor and vehicle pose relation calibration method, device, equipment and medium
CN110176151A (en) * 2019-06-17 2019-08-27 北京精英路通科技有限公司 A kind of method, apparatus, medium and the equipment of determining parking behavior

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2000005677A1 (en) * 1998-07-23 2000-02-03 Lockheed Martin Corporation System for automated detection of cancerous masses in mammograms
CN107909585A (en) * 2017-11-14 2018-04-13 华南理工大学 Inner membrance dividing method in a kind of blood vessel of intravascular ultrasound image
CN108596058A (en) * 2018-04-11 2018-09-28 西安电子科技大学 Running disorder object distance measuring method based on computer vision
CN110321815A (en) * 2019-06-18 2019-10-11 中国计量大学 A kind of crack on road recognition methods based on deep learning
CN110598764A (en) * 2019-08-28 2019-12-20 杭州飞步科技有限公司 Training method and device of target detection model and electronic equipment
CN110826476A (en) * 2019-11-02 2020-02-21 国网浙江省电力有限公司杭州供电公司 Image detection method and device for identifying target object, electronic equipment and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Vehicle global 6-DoF pose estimation under traffic surveillance camera; Zhang S et al.; ISPRS Journal of Photogrammetry and Remote Sensing; Vol. 159; 114-128 *
Range-spread target detector based on dynamic threshold under non-Gaussian background; Jian Tao et al.; Acta Electronica Sinica; Vol. 39, No. 1; 59-63 *

Also Published As

Publication number Publication date
CN111402335A (en) 2020-07-10

Similar Documents

Publication Publication Date Title
EP3581890B1 (en) Method and device for positioning
CN109657716B (en) Vehicle appearance damage identification method based on deep learning
CN107481292B (en) Attitude error estimation method and device for vehicle-mounted camera
CN110569856B (en) Sample labeling method and device, and damage category identification method and device
CN110874583A (en) Passenger flow statistics method and device, storage medium and electronic equipment
CN111027381A (en) Method, device, equipment and storage medium for recognizing obstacle by monocular camera
CN110660102B (en) Speaker recognition method, device and system based on artificial intelligence
CN111121797B (en) Road screening method, device, server and storage medium
CN114913197B (en) Vehicle track prediction method and device, electronic equipment and storage medium
CN116964588A (en) Target detection method, target detection model training method and device
CN111488883A (en) Vehicle frame number identification method and device, computer equipment and storage medium
CN111797993B (en) Evaluation method and device of deep learning model, electronic equipment and storage medium
CN111274852A (en) Target object key point detection method and device
CN111428858A (en) Method and device for determining number of samples, electronic equipment and storage medium
CN113505720A (en) Image processing method and device, storage medium and electronic device
CN111402335B (en) Evaluation method and device of deep learning model, electronic equipment and storage medium
CN111126493B (en) Training method and device for deep learning model, electronic equipment and storage medium
CN109766799B (en) Parking space recognition model training method and device and parking space recognition method and device
CN114821513A (en) Image processing method and device based on multilayer network and electronic equipment
CN111126336B (en) Sample collection method, device and equipment
EP4264569A1 (en) Systems and methods for nose-based pet identification
CN113066100A (en) Target tracking method, device, equipment and storage medium
CN111044035B (en) Vehicle positioning method and device
CN116541713B (en) Bearing fault diagnosis model training method based on local time-frequency characteristic transfer learning
CN111753625B (en) Pedestrian detection method, device, equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant