CN114021705A - Model accuracy determination method, related device and equipment - Google Patents

Model accuracy determination method, related device and equipment

Info

Publication number
CN114021705A
Authority
CN
China
Prior art keywords
output result
node
predictor
target model
accuracy
Prior art date
Legal status
Pending
Application number
CN202210000700.5A
Other languages
Chinese (zh)
Inventor
汪照
陈波扬
孙伶君
Current Assignee
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd filed Critical Zhejiang Dahua Technology Co Ltd
Priority to CN202210000700.5A
Publication of CN114021705A

Classifications

    • G: Physics
    • G06: Computing; Calculating or Counting
    • G06N: Computing arrangements based on specific computational models
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology


Abstract

The application discloses a model accuracy determination method and a related apparatus and device. The method comprises: inputting the same object to be predicted into a target model in a preset platform and into the target model in a landing platform respectively; sequentially acquiring a first output result of the target model in the preset platform processing the object to be predicted at each prediction sub-node, and a second output result of the target model in the landing platform processing the object to be predicted at each prediction sub-node; and comparing the first output result with the output result of the corresponding prediction sub-node in the second output result, and determining the precision of each prediction sub-node of the target model in the landing platform based on the comparison results. The prediction sub-nodes of the target model are obtained by dividing the model based on the type differences of its output results. This scheme improves the efficiency of both judging and locating the model's accuracy loss.

Description

Model accuracy determination method, related device and equipment
Technical Field
The present application relates to the technical field of model precision, and in particular, to a method for determining model precision, and a related apparatus and device.
Background
Since the advent of the neural network model, a mathematical method that simulates the human neural network, it has found wide application and shows considerable promise in fields such as system identification, pattern recognition and intelligent control.
However, accuracy problems may arise when a neural network model trained with current algorithms is deployed to an actual platform. The problem is mainly caused by the quantization step in the model conversion for the actual platform; quantization can introduce a certain accuracy loss when the neural network model is applied.
After the neural network model is deployed to the actual platform, judging whether accuracy has been lost and locating where the loss occurred are usually done manually, which is inefficient.
Disclosure of Invention
The application provides a model accuracy determination method and a related apparatus and device, to solve the prior-art problems of inefficient judgment and inefficient localization of a model's accuracy loss.
The application provides a model accuracy determination method, comprising: inputting the same object to be predicted into a target model in a preset platform and into the target model in a landing platform respectively; sequentially acquiring a first output result of the target model in the preset platform processing the object to be predicted at each prediction sub-node, and a second output result of the target model in the landing platform processing the object to be predicted at each prediction sub-node; and comparing the first output result with the output result of the corresponding prediction sub-node in the second output result, and determining the precision of each prediction sub-node of the target model in the landing platform based on the comparison results; wherein the prediction sub-nodes of the target model are obtained by dividing the model based on the type differences of its output results.
The step of comparing the first output result with the output result of the corresponding same predictor node in the second output result respectively, and determining the precision of each predictor node of the target model in the landing platform based on the comparison result comprises the following steps: comparing the first output result with the output results of the corresponding same predictor nodes in the second output result respectively to obtain the accuracy of the second output result in each predictor node relative to the first output result; and in response to the accuracy rate being smaller than the accuracy rate threshold of the corresponding prediction sub-node, determining that the accuracy of the prediction result of the target model in the landing platform at the prediction sub-node is unqualified.
The step of sequentially acquiring a first output result of the target model in the preset platform processing the object to be predicted at each prediction sub-node and a second output result of the target model in the landing platform processing the object to be predicted at each prediction sub-node comprises: when there are a plurality of objects to be predicted, respectively acquiring a plurality of first output results of the target model in the preset platform after processing by each prediction sub-node and a plurality of second output results of the target model in the landing platform after processing by each prediction sub-node. The step of comparing the first output result with the output result of the corresponding prediction sub-node in the second output result and determining the precision of each prediction sub-node of the target model in the landing platform based on the comparison result further comprises: for each object to be predicted in turn, comparing its first output result with the output result of the corresponding prediction sub-node in its second output result to obtain the accuracy of that object at each prediction sub-node; and obtaining the average accuracy of the plurality of objects to be predicted at the same prediction sub-node, and obtaining the precision of each prediction sub-node of the target model in the landing platform based on the average accuracy.
After the steps of sequentially acquiring the first output result and the second output result at each prediction sub-node, comparing the first output result with the output result of the corresponding prediction sub-node in the second output result, and determining the precision of each prediction sub-node of the target model in the landing platform based on the comparison result, the method further comprises: sequentially acquiring a third output result of the target model in the preset platform at each network layer of a prediction sub-node and a fourth output result of the target model in the landing platform at each network layer of that prediction sub-node; and comparing the third output result with the output result of the corresponding network layer in the fourth output result, and determining the precision of each network layer of the prediction sub-node of the target model in the landing platform, so as to determine the precision of each prediction sub-node.
Before the step of sequentially acquiring the third output results of the target model in the preset platform at each network layer of a prediction sub-node and the fourth output results of the target model in the landing platform at each network layer of the prediction sub-node, the method further comprises: performing parameter configuration on the target model to add an output node to each network layer of the target model; and converting the target model through a platform conversion tool so that the target model can be applied to the landing platform and the preset platform.
The outputs of the same prediction sub-node in the first output result and in the second output result each comprise at least one type of output result. The step of comparing the first output result with the output result of the corresponding prediction sub-node in the second output result to obtain the accuracy of the second output result at each prediction sub-node relative to the first output result further comprises: performing a comprehensive calculation with the at least one type of output result of each prediction sub-node according to a preset rule, to obtain the accuracy of the second output result at each prediction sub-node relative to the first output result.
When the output result of a prediction sub-node comprises a classification category and a confidence, the step of performing a comprehensive calculation with the at least one type of output result of each prediction sub-node according to a preset rule to obtain the accuracy of the second output result at each prediction sub-node relative to the first output result comprises: in response to the classification categories of the same-type outputs of the first output result and the second output result at the same prediction sub-node being the same, determining the accuracy of the second output result at the prediction sub-node relative to the first output result by using the confidences. And/or, when the output result of a prediction sub-node comprises a detection category, a confidence and a coordinate frame, the step comprises: associating the coordinate frames with the same coordinates in the first output result and the second output result; and in response to the detection categories of the associated coordinate frames being the same, determining the accuracy of the second output result at each prediction sub-node relative to the first output result based on the number of coordinate frames in the first output result, the number of coordinate frames in the second output result, and the number of coordinate frames whose categories are the same. And/or, when the output result of a prediction sub-node comprises feature data, the step comprises: determining the accuracy of the second output result at each prediction sub-node relative to the first output result by using the feature data of the first output result and the feature data of the second output result.
When the output result of a prediction sub-node comprises a classification category and a confidence, the step of determining the accuracy of the second output result at the prediction sub-node relative to the first output result by using the confidences comprises: with the confidence ref_conf of the second output result at the prediction sub-node and the confidence self_conf of the first output result at the prediction sub-node, calculating the accuracy of the second output result at the prediction sub-node relative to the first output result as accuracy = 1 - abs(ref_conf - self_conf)/ref_conf, where abs() denotes the absolute value. When the output result of a prediction sub-node comprises a detection category, a confidence and a coordinate frame, the step of determining the accuracy based on the numbers of coordinate frames comprises: with the number n of coordinate frames in the first output result, the number m of coordinate frames in the second output result and the number c of coordinate frames with the same category, determining the accuracy of the second output result at each prediction sub-node relative to the first output result as accuracy = Tp/(Tp + Fn + Fp) = c/(m + n - c), where Tp = c, Fn = n - c and Fp = m - c. When the output result of a prediction sub-node comprises feature data, the step of determining the accuracy by using the feature data comprises: with the feature data b of the first output result and the feature data a of the second output result, determining the accuracy of the second output result at each prediction sub-node relative to the first output result as accuracy = (a · b)/(|a| · |b|).
After the step of inputting the same object to be predicted into the target model in the preset platform and the target model in the landing platform respectively, the method further comprises: acquiring a first prediction result obtained after the target model in the preset platform performs prediction processing on the object to be predicted, and a second prediction result obtained after the target model in the landing platform performs prediction processing on the object to be predicted; determining the precision of the target model in the landing platform based on the first prediction result and the second prediction result; and in response to that precision being unqualified, executing the step of sequentially acquiring the first output result of the target model in the preset platform and the second output result of the target model in the landing platform at each prediction sub-node.
Each prediction sub-node comprises one or more of a preprocessing sub-node, an inference sub-node, a classification sub-node and a detection sub-node. The target model comprises an image recognition model; the step of inputting the same object to be predicted into the target model in the preset platform and the target model in the landing platform respectively comprises: inputting the same image to be recognized into the image recognition model in the preset platform and the image recognition model in the landing platform respectively. The step of sequentially acquiring the first output result and the second output result at each prediction sub-node comprises: sequentially acquiring a first output result of the image recognition model in the preset platform processing the image to be recognized at each prediction sub-node and a second output result of the image recognition model in the landing platform processing the image to be recognized at each prediction sub-node. The step of comparing and determining the precision comprises: comparing the first output result with the output result of the corresponding prediction sub-node in the second output result, and determining the precision of each prediction sub-node of the image recognition model in the landing platform based on the comparison results.
The present application further provides an electronic device, comprising a memory and a processor coupled to each other, wherein the processor is configured to execute program instructions stored in the memory to implement the method for determining the accuracy of any one of the models described above.
The present application also provides a computer readable storage medium having stored thereon program instructions that, when executed by a processor, implement the method of accuracy determination of any of the models described above.
According to the above scheme, the target model is divided based on the type differences of its output results to obtain the prediction sub-nodes of the target model. When determining accuracy, the same object to be predicted is input into the target models of the two platforms, the first output result and the second output result are acquired in sequence, the first output result is compared with the output result of the corresponding prediction sub-node in the second output result, and the precision of each prediction sub-node of the target model in the landing platform is determined based on the comparison results. This improves the efficiency of judging the model's accuracy loss. Moreover, while the comparison results of the prediction sub-nodes determine whether the model's accuracy is damaged, the position of the damage within the target model can be located directly from the prediction sub-node corresponding to the failing comparison result, which facilitates subsequent accuracy repair of the target model on the landing platform, i.e. improves the efficiency of locating the model's accuracy loss.
Drawings
FIG. 1 is a schematic flow chart diagram of an embodiment of a method for determining accuracy of a model of the present application;
FIG. 2 is a schematic flow chart diagram illustrating another embodiment of a method for determining accuracy of a model of the present application;
FIG. 3 is a schematic diagram of an embodiment of a target model in the embodiment of FIG. 2;
FIG. 4 is a block diagram of an embodiment of an apparatus for evaluating accuracy of a model of the present application;
FIG. 5 is a block diagram of an embodiment of an electronic device of the present application;
FIG. 6 is a block diagram of an embodiment of a computer-readable storage medium according to the present application.
Detailed Description
The embodiments of the present invention will be described in detail below with reference to the drawings.
In the following description, for purposes of explanation and not limitation, specific details are set forth such as particular system structures, interfaces, techniques, etc. in order to provide a thorough understanding of the present application.
The terms "system" and "network" are often used interchangeably herein. The term "and/or" herein is merely an association describing an associated object, and there may be three relationships, e.g., a and/or B, and: a exists alone, A and B exist simultaneously, and B exists alone. In addition, in this document, the character "/", generally, the former and latter related objects are in an "or" relationship. Further, herein, "more" than two or more than two.
Referring to fig. 1, fig. 1 is a schematic flow chart illustrating an embodiment of a method for determining accuracy of a model according to the present application.
Step S11: and respectively inputting the same objects to be predicted into a target model in a preset platform and a target model in a landing platform.
The same object to be predicted is input into the target model in the preset platform and into the target model in the landing platform respectively. The preset platform is a platform without a quantization process, so the target model is not affected when running on it and its accuracy is guaranteed. The landing platform is a platform with a quantization process, and may be the platform on which the target model is actually deployed.
In a specific application scenario, after training of the target model is completed, the target model can be loaded into a preset platform and a landing platform respectively for operation.
In this embodiment, the processing of the object to be predicted by the target model in the preset platform is used as the reference, and the processing of the same object by the target model in the landing platform is judged against it, so as to determine the precision of the target model in the landing platform.
The target model may be a convolutional network model, a recurrent network model, a feedforward network model, an ONNX or Caffe model, or another neural network model or deep learning framework, and it may be applied to various fields such as image recognition, target object classification, target object detection, speech recognition and face recognition, which are not limited herein.
Step S12: sequentially obtaining a first output result of the target model in the preset platform for processing the object to be predicted at each prediction sub-node and a second output result of the target model in the landing platform for processing the object to be predicted at each prediction sub-node.
After the same objects to be predicted are respectively input into the target models of the two platforms, a first output result of the target model in the preset platform for processing the objects to be predicted at each prediction sub-node and a second output result of the target model in the landing platform for processing the objects to be predicted at each prediction sub-node are sequentially obtained.
The output results of the prediction sub-nodes of the target model can be determined based on the type of the target model. In a specific application scenario, when the target model is a classification model applied to garbage classification, the output results of its prediction sub-nodes may include a target image, feature data, and a classification result with its confidence, and the classification model may be divided into an image processing sub-node, an inference sub-node and a classification sub-node. In another specific application scenario, when the target model is a speech detection model, the output results of its prediction sub-nodes may include target speech, feature data, and a detection result with its confidence, and the speech detection model may be divided into a speech extraction sub-node, an inference sub-node and a detection sub-node.
The first output result is the set of all results output after the target model in the preset platform processes the object to be predicted at each prediction sub-node, and the second output result is the set of all results output after the target model in the landing platform processes the object to be predicted at each prediction sub-node. In a specific application scenario, when the target model has three prediction sub-nodes, the first output result and the second output result each include the outputs of those three prediction sub-nodes.
Step S13: and comparing the first output result with the output results of the corresponding same prediction sub-nodes in the second output result respectively, and determining the precision of each prediction sub-node of the target model in the landing platform based on the comparison results.
And after a first output result and a second output result of the target models of the two platforms after being processed by each prediction sub-node are obtained, comparing the output results of the prediction sub-nodes corresponding to the same first output result and the same second output result respectively, and determining the precision of each prediction sub-node of the target model in the landing platform based on the comparison results.
In a specific application scenario, when the target model includes three prediction sub-nodes A, B and C, the first output result includes the outputs a1, b1 and c1 corresponding to those sub-nodes, and the second output result includes the outputs a2, b2 and c2 corresponding to the same sub-nodes. Then a1 in the first output result is compared with a2 in the second output result, b1 with b2, and c1 with c2, and the precision of each prediction sub-node of the target model in the landing platform is determined based on the comparison result of each sub-node.
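For illustration only (not part of the claimed method; the function and variable names below are hypothetical), the per-sub-node comparison described above can be sketched in Python roughly as follows:

# Sketch of the per-sub-node comparison described above. `first_outputs` and
# `second_outputs` map each prediction sub-node name to its output on the
# preset platform and the landing platform respectively; `accuracy_fns` maps
# each sub-node to a function turning the two outputs into an accuracy value,
# and `thresholds` holds the per-sub-node accuracy thresholds.
def compare_sub_nodes(first_outputs, second_outputs, accuracy_fns, thresholds):
    results = {}
    for node, ref_out in first_outputs.items():
        test_out = second_outputs[node]
        acc = accuracy_fns[node](ref_out, test_out)
        results[node] = {"accuracy": acc, "qualified": acc >= thresholds[node]}
    return results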
When the precision of every prediction sub-node of the target model in the landing platform is determined to be qualified based on the comparison results, the accuracy of the whole target model is not damaged after it is applied to the landing platform. When the precision of one or more prediction sub-nodes is determined to be unqualified based on the comparison results, the accuracy of the target model is damaged after it is applied to the landing platform, and the position of the damage is the corresponding prediction sub-node.
Through the above steps, the model accuracy determination method of this embodiment divides the target model based on the type differences of its output results to obtain the prediction sub-nodes of the target model. When determining accuracy, the same object to be predicted is input into the target models of the two platforms, the first output result and the second output result of the two platforms are acquired in sequence, the first output result is compared with the output result of the corresponding prediction sub-node in the second output result, and the precision of each prediction sub-node of the target model in the landing platform is determined based on the comparison results. This improves the efficiency of judging the model's accuracy loss; and because the loss is determined from the comparison result of each prediction sub-node, the damaged position within the target model can be located directly from the prediction sub-node whose comparison fails, which facilitates subsequent accuracy repair of the target model on the landing platform and improves the efficiency of locating the accuracy loss.
Referring to fig. 2, fig. 2 is a schematic flow chart of another embodiment of the model accuracy determination method of the present application.
Step S21: and respectively inputting the same objects to be predicted into a target model in a preset platform and a target model in a landing platform.
And respectively inputting the same objects to be predicted into a target model in a preset platform and a target model in a landing platform.
This step is the same as step S11 of the previous embodiment, please refer to the foregoing, and will not be described herein again.
When a plurality of objects to be predicted are provided, the same plurality of objects to be predicted can be respectively input into the target model in the preset platform and the target model in the landing platform.
In a specific embodiment, after the same object to be predicted is input into the target model in the preset platform and the target model in the landing platform respectively, a first prediction result obtained after the target model in the preset platform performs prediction processing on the object and a second prediction result obtained after the target model in the landing platform performs prediction processing on the object are acquired, the precision of the target model in the landing platform is determined based on the first prediction result and the second prediction result, and step S22 is executed when that precision is unqualified. That is, the accuracy determination method of this embodiment may be performed when the accuracy of the target model as a whole is damaged in the landing platform, so as to further determine where the accuracy is damaged, thereby locating the damage and improving the efficiency of repairing the model.
In a specific application scenario, the step of determining the precision based on the first prediction result and the second prediction result may include: comparing the second prediction result with the first prediction result, and determining the precision of the target model in the landing platform based on the comparison between them. In other application scenarios, the precision of the target model in the landing platform may be determined by calculating the mean square error between the second prediction result and the first prediction result. The specific manner is not limited herein.
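As a minimal sketch of the mean-square-error alternative mentioned above (the function name and the use of NumPy are assumptions, not specified by the text):

import numpy as np

def prediction_mse(first_prediction, second_prediction):
    # Mean square error between the two platforms' final prediction vectors;
    # a larger value indicates a larger precision loss on the landing platform.
    a = np.asarray(first_prediction, dtype=np.float64)
    b = np.asarray(second_prediction, dtype=np.float64)
    return float(np.mean((a - b) ** 2))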
Step S22: sequentially obtaining a first output result of the target model in the preset platform for processing the object to be predicted at each prediction sub-node and a second output result of the target model in the landing platform for processing the object to be predicted at each prediction sub-node.
The output results of the prediction sub-nodes of the target model in this step can be determined based on the type of the target model. In particular, each prediction sub-node may be one or more of a preprocessing sub-node, an inference sub-node, a classification sub-node and a detection sub-node. The preprocessing, inference, classification and/or detection sub-nodes can cover the concrete prediction steps of most neural network models. Therefore, dividing the target model according to the type differences of its output results refines the function of each processing step applied to the object to be predicted, particularly for convolutional network models, and yields prediction sub-nodes that fit most neural network models, which widens the applicable range of the model accuracy determination method. In still other embodiments, the division may further include prediction sub-nodes with other functions, such as a global parameter configuration sub-node or a function configuration sub-node, which is not limited in this embodiment.
That is, a first output result of the target model in the preset platform processing the object to be predicted at each prediction sub-node and a second output result of the target model in the landing platform processing the object to be predicted at each prediction sub-node are obtained.
Referring to fig. 3, fig. 3 is a schematic structural diagram of an embodiment of a target model in the embodiment of fig. 2.
Here, the target model 30 of the present embodiment is an image classification model for image recognition, and the target model 30 includes a preprocessing sub-node 31, an inference sub-node 32, and a classification sub-node 33, which are arranged in cascade with each other.
The preprocessing sub-node 31 is configured to perform at least one of framing, clipping and gray level adjustment on the image to obtain a preprocessed image; the inference sub-node 32 is configured to perform inference processing such as feature extraction, encoding and/or decoding on the preprocessed image to obtain feature vectors and/or feature data; and the classification sub-node 33 is configured to classify the image based on the feature vector and/or feature data output by the inference sub-node 32, obtaining a classification category and its confidence.
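Purely as an illustration of how a model such as the one in FIG. 3 could expose the output of every prediction sub-node for later comparison (the class and callback names are hypothetical and not taken from the patent):

# Hypothetical wrapper that runs the three cascaded sub-nodes of FIG. 3 and
# records one output per prediction sub-node.
class ImageClassificationPipeline:
    def __init__(self, preprocess_fn, infer_fn, classify_fn):
        self.preprocess_fn = preprocess_fn  # framing / clipping / gray adjustment
        self.infer_fn = infer_fn            # feature extraction (inference sub-node)
        self.classify_fn = classify_fn      # classification category + confidence

    def run(self, image):
        outputs = {}
        outputs["preprocess"] = self.preprocess_fn(image)
        outputs["inference"] = self.infer_fn(outputs["preprocess"])
        outputs["classification"] = self.classify_fn(outputs["inference"])
        return outputs  # one entry per prediction sub-node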
The present embodiment is only an example of the output of each child node of the target model, and is not limited thereto, and when the target model is a neural network model of another type, the division of each prediction child node may be determined based on the type of its own output result.
After the same object to be predicted is input into the target models of the two platforms, a first output result of the target model in the preset platform processing the object at each prediction sub-node and a second output result of the target model in the landing platform processing the object at each prediction sub-node are acquired in sequence. The first output result comprises the outputs of all prediction sub-nodes of the target model in the preset platform after processing the object to be predicted; the second output result comprises the outputs of all prediction sub-nodes of the target model in the landing platform after processing the object to be predicted. The types of the first output result and the second output result correspond to each other.
The types of the first output result and the second output result may at least include a classification category and a confidence thereof, a detection category and a confidence thereof, a coordinate frame and feature data, and the like, which are specifically determined based on a specific prediction sub-step of the target model and are not limited herein.
Step S23: and comparing the first output result with the output results of the corresponding same predictor nodes in the second output result respectively to obtain the accuracy of the second output result in each predictor node relative to the first output result.
After the first output result and the second output result are acquired, the outputs of the corresponding prediction sub-nodes in the first and second output results are compared to obtain the accuracy of the second output result at each prediction sub-node relative to the first output result. Since the first output result comes from the target model in the preset platform, where no quantization process exists, it is used as the reference against which the second output result is compared, thereby obtaining the accuracy of the target model in the landing platform.
In a specific application scenario, the step of sequentially acquiring the first output result and the second output result at each prediction sub-node may include: first acquiring the output of the target model in the preset platform and the output of the target model in the landing platform at the first prediction sub-node, comparing them, and determining the precision of the target model in the landing platform at the first prediction sub-node based on the comparison result. If the precision is damaged, no subsequent comparison is performed, and it is directly determined that the accuracy of the target model in the landing platform is damaged at the first prediction sub-node; because the inputs of the subsequent prediction sub-nodes depend on the output of the first prediction sub-node, damage at the first sub-node may make the comparison results of the subsequent sub-nodes inaccurate even though those sub-nodes themselves are not damaged, so no further, invalid comparisons are performed at this point. If the precision is not damaged, the outputs of the two platforms at the second prediction sub-node are acquired and compared in the same way, and so on, until the prediction sub-node with damaged precision is found or all prediction sub-nodes have been traversed. Acquiring and comparing in the order in which the prediction sub-nodes of the target model are arranged can further improve the efficiency of judging and locating the model's accuracy loss, reduce invalid comparisons and reduce wasted resources.
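A variant of the earlier comparison sketch that reflects this early-stopping behaviour (again illustrative; the names are hypothetical):

# Compare sub-nodes in their execution order and stop at the first one whose
# accuracy falls below its threshold, skipping the invalid later comparisons.
def locate_first_damaged_node(node_order, first_outputs, second_outputs,
                              accuracy_fns, thresholds):
    for node in node_order:
        acc = accuracy_fns[node](first_outputs[node], second_outputs[node])
        if acc < thresholds[node]:
            return node, acc   # accuracy damaged at this prediction sub-node
    return None, None          # all prediction sub-nodes qualified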
In another specific application scenario, when at least two prediction sub-stages exist in the target model and need to be performed synchronously, the accuracy determination can be performed by comparing after obtaining all the prediction sub-stage output results of the target models of the two platforms.
In a specific application scenario, an error may also be calculated based on output results of the same corresponding predictor nodes in the first output result and the second output result, and the precision of each predictor node of the target model in the ground platform may be determined based on the calculation results.
In a specific application scenario, when there are a plurality of objects to be predicted, after the objects are input into the preset platform and the landing platform respectively, a plurality of first output results of the target model in the preset platform after processing by each prediction sub-node and a plurality of second output results of the target model in the landing platform after processing by each prediction sub-node are acquired. The comparison process may then include: for each object to be predicted in turn, comparing its first output result with the output of the corresponding prediction sub-node in its second output result to obtain the accuracy of that object at each prediction sub-node; then obtaining the average accuracy of the plurality of objects at the same prediction sub-node, and obtaining the precision of each prediction sub-node of the target model in the landing platform based on the average accuracy. Specifically, the accuracies of the individual objects at a prediction sub-node are averaged to obtain the average accuracy at that sub-node. Determining the precision of a prediction sub-node from the average accuracy avoids drawing a general conclusion from a single case and improves the accuracy and reliability of the subsequent precision conclusion.
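A minimal sketch of the averaging step described above, assuming the per-object accuracies have already been computed (the names are illustrative):

# `per_object_accuracies` is a list with one dict per object to be predicted,
# each mapping a prediction sub-node name to that object's accuracy at the node.
def average_accuracy_per_node(per_object_accuracies):
    nodes = per_object_accuracies[0].keys()
    count = len(per_object_accuracies)
    return {node: sum(acc[node] for acc in per_object_accuracies) / count
            for node in nodes}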
And performing comprehensive calculation by using the at least one type of output result of each predictor node based on a preset rule to obtain the accuracy of the second output result in each predictor node relative to the first output result. The preset rules may be set based on the specific types of the corresponding predictor nodes and the output results thereof, and the preset rules of each predictor node may be different. The step integrates all output results of each predictor node to judge the accuracy of each predictor node, and can improve the reliability and comprehensiveness of accuracy judgment.
In a specific application scenario, when the target model is an image detection model, it may include a preprocessing sub-node, an inference sub-node and a detection sub-node. The output result of the preprocessing sub-node includes the preprocessed image; the specific preprocessing steps can be set according to the actual situation, and the types of output results of the preprocessing sub-node are not limited here, this application scenario being only an example. The output result of the inference sub-node includes feature data, and the output result of the detection sub-node includes a detection category, a confidence and a coordinate frame. The accuracy of the preprocessing sub-node is judged based on the preprocessed image, the accuracy of the inference sub-node is judged based on the feature data, and the accuracy of the detection sub-node is judged comprehensively based on the detection category, the confidence and the coordinate frame.
When the output result of a prediction sub-node comprises a classification category and a confidence, the prediction sub-node is a classification sub-node, and a comprehensive calculation is performed on these two types of output results of the classification sub-node according to a preset rule to obtain the accuracy of the second output result at the classification sub-node relative to the first output result.
In a specific application scenario, the preset rule for classifying child nodes may include: and in response to the same classification category of the output result of the same type of the first output result and the second output result in the same predictor node, determining the accuracy of the second output result in the classification child node relative to the first output result by using the confidence coefficient. And responding to the same type of output results of the first output result and the second output result in the same predictor node, wherein the classification category of the output results is different, and the accuracy of the second output result in the classification child node relative to the first output result is 0. In another specific application scenario, the preset rule for classifying child nodes may also include: and setting corresponding weights for the classification category and the confidence level respectively, and adding the product of the classification category and the corresponding weight thereof to the product of the confidence level and the corresponding weight thereof to obtain the accuracy of the second output result in the classification child node relative to the first output result, wherein the setting of the weights can be set based on actual conditions, and is not limited herein.
In a specific application scenario, the accuracy of the second output result at the classification sub-node relative to the first output result may be determined from the confidences as follows: with the confidence ref_conf of the second output result at the prediction sub-node and the confidence self_conf of the first output result at the prediction sub-node, the accuracy of the second output result at the prediction sub-node relative to the first output result is calculated as accuracy = 1 - abs(ref_conf - self_conf)/ref_conf, where abs() denotes the absolute value. In other application scenarios, other ways of comparing the confidences may also be used, which is not limited herein.
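The confidence-based rule above, combined with the category check of the preset rule (accuracy 0 when the classification categories differ), can be written as the following sketch (the function name is an assumption):

# accuracy = 1 - abs(ref_conf - self_conf) / ref_conf, applied only when the
# classification categories of the two outputs agree; 0 otherwise. Following
# the text, ref_conf is the confidence in the second output result and
# self_conf the confidence in the first output result.
def classification_accuracy(ref_category, ref_conf, self_category, self_conf):
    if ref_category != self_category:
        return 0.0
    return 1.0 - abs(ref_conf - self_conf) / ref_conf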
And when the output result of the prediction child node comprises the detection category, the confidence coefficient and the coordinate frame, the prediction child node is a detection child node, comprehensive calculation is carried out by utilizing various types of output results of the detection child node through a preset rule, and the accuracy of the second output result in the detection child node relative to the first output result is obtained.
In a specific application scenario, the preset rule for detecting the child node may include: and establishing association between the first output result and the coordinate frames corresponding to the same coordinates in the second output result, and determining the accuracy of the second output result in each predictor node relative to the first output result based on the number of the coordinate frames in the first output result, the number of the coordinate frames in the second output result and the number of the coordinate frames with the same category in response to the detection category of the coordinate frames of which the association is established between the first output result and the second output result being the same. In another specific application scenario, the preset rule for detecting the child node may also include: setting corresponding weights for the detection category, the confidence coefficient and the coordinate frame respectively, and adding the product of the detection category and the corresponding weight thereof, the product of the confidence coefficient and the corresponding weight thereof and the product of the associated coordinate frame and the corresponding weight thereof to obtain the accuracy of the second output result in the detection child node relative to the first output result, wherein the setting of the weights can be set based on actual conditions, and is not limited herein.
In a specific application scenario, the step of determining the accuracy of the second output result at each prediction sub-node relative to the first output result based on the number of coordinate frames in the first output result, the number of coordinate frames in the second output result and the number of coordinate frames with the same category may include: with the number n of coordinate frames in the first output result, the number m of coordinate frames in the second output result and the number c of coordinate frames with the same category, determining the accuracy as accuracy = Tp/(Tp + Fn + Fp) = c/(m + n - c), where the number of correct detections Tp = c, the number of missed detections Fn = n - c and the number of false detections Fp = m - c. In other application scenarios, other ways of comparing these numbers may also be used, which is not limited herein.
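The box-count formula above, assuming the coordinate-frame association has already been established, can be sketched as follows (the function name is an assumption):

# n reference boxes (first output result), m boxes from the landing platform
# (second output result), c associated boxes whose detection categories match:
# accuracy = Tp / (Tp + Fn + Fp) = c / (m + n - c).
def detection_accuracy(n, m, c):
    tp = c          # correct detections
    fn = n - c      # missed detections
    fp = m - c      # false detections
    return tp / (tp + fn + fp)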
When the output result of a prediction sub-node comprises feature data, the prediction sub-node is an inference sub-node, and a comprehensive calculation is performed on the output result of the inference sub-node according to a preset rule to obtain the accuracy of the second output result at the inference sub-node relative to the first output result.
In a specific application scenario, the preset rule for reasoning the child node may include: and determining the accuracy of the second output result at each predictor node relative to the first output result by using the characteristic data of the first output result and the characteristic data of the second output result.
In a specific application scenario, the step of determining the accuracy of the second output result at each prediction sub-node relative to the first output result by using the feature data may specifically include: with the feature data b of the first output result and the feature data a of the second output result, determining the accuracy as accuracy = (a · b)/(|a| · |b|). In other application scenarios, other ways of comparing the feature data may also be used, which is not limited herein.
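The feature-data formula above is the cosine similarity between the two feature vectors; a sketch (the use of NumPy is an assumption):

import numpy as np

def feature_accuracy(a, b):
    # a: feature data of the second output result, b: feature data of the
    # first output result; accuracy = (a . b) / (|a| * |b|).
    a = np.asarray(a, dtype=np.float64).ravel()
    b = np.asarray(b, dtype=np.float64).ravel()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))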
When the predictor node is a preprocessing node, because the preprocessing modes of different target models are different, the outputs of the preprocessing child nodes are also different, so that when the accuracy of the second output result in the preprocessing child node relative to the first output result is calculated, specific setting can be performed based on the output type of the preprocessing child node, and the method is not limited herein. In a specific application scenario, when the output of the preprocessing sub-node is a clipped picture, the accuracy of the clipped picture can be determined by comparing the clipped picture processed by the second output result in the preprocessing sub-node with the clipped picture processed by the first output result in the preprocessing sub-node.
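One possible way (an assumption, since the text leaves the preprocessing comparison open) to compare the clipped pictures produced by the preprocessing sub-node on the two platforms:

import numpy as np

def preprocess_outputs_match(img_preset, img_landing, tol=1.0):
    # Same shape and a mean absolute pixel difference within a small tolerance.
    a = np.asarray(img_preset, dtype=np.float64)
    b = np.asarray(img_landing, dtype=np.float64)
    if a.shape != b.shape:
        return False
    return float(np.mean(np.abs(a - b))) <= tol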
When the prediction sub-node is of another type, the comparison is performed based on the type of its output result, so as to determine its accuracy.
Step S24: and in response to the accuracy rate being smaller than the accuracy rate threshold of the corresponding prediction sub-node, determining that the accuracy of the prediction result of the target model in the landing platform at the prediction sub-node is unqualified.
And when the accuracy of the second output result in each predicting sub-node relative to the first output result is calculated, comparing the accuracy of the second output result in each predicting sub-node relative to the first output result with the accuracy threshold corresponding to each predicting sub-node, and when the accuracy is smaller than the accuracy threshold of the corresponding predicting sub-node, determining that the accuracy of the target model in the landing platform in the predicting sub-node is unqualified. And when the accuracy is greater than or equal to the accuracy threshold of the corresponding prediction sub-node, determining that the accuracy of the prediction result of the target model in the landing platform at the prediction sub-node is qualified.
In a specific application scenario, when there are a plurality of objects to be predicted, the average accuracy of the objects at the same prediction sub-node is obtained, and the precision of each prediction sub-node of the target model in the landing platform is obtained based on the average accuracy. Specifically, it is judged whether the average accuracy of each prediction sub-node is smaller than the corresponding accuracy threshold; if so, the precision of that prediction sub-node is unqualified, and if not, it is qualified.
In a specific application scenario, after it is determined that the precision of the target model in the landing platform at a prediction sub-node is unqualified, a third output result of the target model in the preset platform at each network layer of that prediction sub-node and a fourth output result of the target model in the landing platform at each network layer of that prediction sub-node are further acquired in sequence; the third output result is compared with the output of the corresponding network layer in the fourth output result, and the precision of each network layer of the prediction sub-node of the target model in the landing platform is determined, so as to identify the unqualified network layer. This narrows the range in which the accuracy damage is located and improves the positioning precision: the damaged network layer is determined by comparing the additionally acquired outputs of the network layers, and this application scenario refines the damage localization step by step through hierarchical comparison, which reduces invalid calculations and improves the efficiency of judging the model's accuracy loss. The third output result is the set of all results output after the target model in the preset platform processes the object to be predicted at each network layer of each prediction sub-node, and the fourth output result is the set of all results output after the target model in the landing platform processes the object to be predicted at each network layer of each prediction sub-node.
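A sketch of the layer-level drill-down described above (the choice of mean absolute error and the names are assumptions; the text does not fix the per-layer comparison metric):

import numpy as np

# Compare the third/fourth output results layer by layer, in execution order,
# and report the first network layer whose error exceeds a tolerance.
def locate_damaged_layer(third_outputs, fourth_outputs, layer_order, tol=1e-3):
    for layer in layer_order:
        ref = np.asarray(third_outputs[layer], dtype=np.float64)
        test = np.asarray(fourth_outputs[layer], dtype=np.float64)
        if float(np.mean(np.abs(ref - test))) > tol:
            return layer
    return None  # no layer exceeded the tolerance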
In a specific application scenario, after the step of inputting the same object to be predicted into the target model in the preset platform and the target model in the landing platform respectively, the third output results of the target model in the preset platform at each network layer of a prediction sub-node and the fourth output results of the target model in the landing platform at each network layer of that prediction sub-node may be acquired directly in sequence; the third output result is compared with the output of the corresponding network layer in the fourth output result, and the precision of each network layer of the prediction sub-node of the target model in the landing platform is determined, so as to determine the precision of each prediction sub-node. In this way, by directly comparing network-layer outputs, the range in which the accuracy damage is located is refined to the network layer, which improves the efficiency of locating the accuracy loss.
The third output result and the fourth output result may be compared or a corresponding error may be calculated based on the specific output result types of the third output result and the fourth output result, which is not limited herein.
In a specific application scenario, when the prediction sub-node determined to have unqualified precision in step S24 is the inference sub-node, the third output result and the fourth output result obtained after the inference sub-node of the target model processes the same object to be predicted on the preset platform and on the landing platform respectively are further acquired. When the inference sub-node includes a convolution layer and a pooling layer cascaded with each other, the output types of the third output result and the fourth output result both include a convolution result and a pooling result. The third output result is compared with the outputs of the corresponding convolution layer and pooling layer in the fourth output result, the difference of the fourth output result relative to the third output result at each convolution layer and pooling layer is obtained, and it is further judged whether the damaged precision lies in the convolution layer or the pooling layer. This application scenario only describes a method of precision localization at the network-layer level and does not limit the structure of the inference sub-node or the like.
In practice, most platforms on which the model is deployed do not support outputting the data of intermediate network layers, so the network layer at which the inference accuracy fails to meet the standard cannot be further located. To solve this problem, in this embodiment, before the third output result and the fourth output result are obtained, parameter configuration is performed on the source model of the target model to add an output node to each network layer of the target model; the target model is then obtained by converting the source model with a platform conversion tool, so that it can be applied to the landing platform and the preset platform. The types of the output nodes include a Split node, a Pooling node, a Slice node, and the like, and the specific types are not limited herein. The type of output node added for each network layer is determined based on the types of output nodes supported by the platform to which the target model is to be applied.
In a specific application scenario, a specific method of parameter configuration may be as follows: when the network contains two consecutively connected convolutional layers conv1 and conv2, and an output node of the Split type is added between convolutional layer conv1 and convolutional layer conv2, the corresponding parameter configuration may include:
layer {
name: "split1"
type: "Split"
bottom: "conv1"
top: "conv2 "
}
the output node may be added before a specified network layer name, a specified network layer type, or a specified network layer index, and the like; the specific insertion position is not limited herein. In practical application, the specific method of parameter configuration may be set based on the type of the output node, the types of output nodes supported by the platform on which the target model is deployed, or other conditions.
After the configuration is finished, the source model is converted by the platform conversion tool to obtain the target model, which is then applied to the preset platform and the landing platform respectively.
Based on the intermediate-layer data export capability supported by each platform, the configuration file of the source model is automatically modified and the data-export nodes are inserted before the source model is converted into the target model during the operation of the whole service process. In this way, intermediate network-layer data can be exported while the target model runs, the processing results of each network layer of each predictor node can be output automatically, and the positioning of accuracy damage is facilitated.
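As a hedged illustration of such automatic configuration, the following Python sketch appends a layer block of the same form as the example above to a Caffe-style prototxt text before platform conversion; the helper name add_output_node, the append-at-end placement, and the assumption that the conversion tool accepts the resulting configuration are all illustrative.

def add_output_node(config_text, node_name, bottom_layer, top_layer, node_type="Split"):
    # Build a layer block in the same form as the example above and append it to the
    # source-model configuration text before the platform conversion step.
    block = (
        f'layer {{\n'
        f'  name: "{node_name}"\n'
        f'  type: "{node_type}"\n'
        f'  bottom: "{bottom_layer}"\n'
        f'  top: "{top_layer}"\n'
        f'}}\n'
    )
    return config_text + '\n' + block

# For the example above: add_output_node(prototxt_text, "split1", "conv1", "conv2")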
When a certain predictor node contains no network layer, there is no need to further refine the positioning to the network layer. Alternatively, the predictor node may be further divided based on the output type of each of its sub-steps so as to obtain an output result for each sub-step; the fourth output result of each sub-step of that predictor node of the target model in the landing platform is then compared with the third output result of the corresponding sub-step of the target model in the preset platform, so as to determine the sub-step in which the precision is damaged.
Through the above steps, the model accuracy determination method of this embodiment divides the target model based on the type differences of the output results to obtain the predictor nodes of the target model. When determining accuracy, the same object to be predicted is input into the target models of the two platforms respectively, and the first output result and the second output result of the two target models at each predictor node are obtained in sequence. The first output result is compared with the output result of the corresponding same predictor node in the second output result to obtain the accuracy of the second output result at each predictor node relative to the first output result, and the precision of each predictor node of the target model in the landing platform is determined based on the comparison results, which improves the efficiency of judging the accuracy loss of the model. When it is determined, based on the comparison result of a predictor node, that the accuracy of the model is damaged, the damaged position in the target model can be located directly from the predictor node corresponding to that comparison result, so that the target model of the landing platform can subsequently be repaired, which improves the efficiency of locating the accuracy loss of the model. Further, in this embodiment, when the accuracy of a certain predictor node of the model is determined to be unqualified, the third output result and the fourth output result of each network layer under that predictor node are further obtained and compared to determine the damaged network layer, narrowing the positioning range of the accuracy damage to the network layer, which further improves the positioning accuracy and the efficiency and reliability of accuracy repair.
In a specific application scenario, the target model of this embodiment includes an image recognition model. The same image to be recognized is input into the image recognition model in the preset platform and the image recognition model in the landing platform respectively; a first output result of the image recognition model in the preset platform processing the image to be recognized at each predictor node and a second output result of the image recognition model in the landing platform processing the image to be recognized at each predictor node are obtained in sequence; the first output result is compared with the output result of the corresponding same predictor node in the second output result, and the precision of each predictor node of the image recognition model in the landing platform is determined based on the comparison results.
Specifically, the image recognition model includes at least a preprocessing sub-node, an inference sub-node, and a detection sub-node. The preprocessing sub-node may be used for framing, cropping, grayscale processing, and the like of the image to be recognized to obtain a preprocessed image, and is configured according to the recognition requirements. The inference sub-node performs inference processing based on the preprocessed image to obtain the feature data of the preprocessed image. The detection sub-node performs detection and recognition based on the feature data to obtain detection objects, confidences, and detection categories. The outputs of each sub-node are compared separately to determine the precision of each sub-node.
In a specific application scenario, when there are a plurality of images to be recognized, it is first determined whether all of the images to be recognized have been recognized by the target models on the preset platform and the landing platform. If recognition is not finished, a new image to be recognized is input into the target models on the preset platform and the landing platform respectively to obtain a first output result of each predictor node of the target model of the preset platform and a second output result of each predictor node of the target model on the landing platform. That is, the first output results include the preprocessed image and the feature data output by each predictor node of the target model of the preset platform, as well as the detection objects, confidences, and detection categories; the second output results include the preprocessed image and the feature data output by each predictor node of the target model of the landing platform, as well as the detection objects, confidences, and detection categories.
The preprocessed image in the first output result is compared with the preprocessed image in the second output result, the feature data in the first output result is compared with the feature data in the second output result, and the detection objects, confidences, and detection categories in the first output result are compared with those in the second output result, so as to obtain the accuracy of the preprocessing sub-node, the inference sub-node, and the detection sub-node for this image to be recognized. After these accuracies are obtained, it is again judged whether all of the images to be recognized have been recognized by the target models on the preset platform and the landing platform.
If it is judged that all of the images to be recognized have been recognized by the target models on the preset platform and the landing platform, the accuracies obtained for each sub-node over the images to be recognized are averaged, so as to obtain the average accuracy of the preprocessing sub-node, the average accuracy of the inference sub-node, and the average accuracy of the detection sub-node. Finally, these average accuracies are compared with the corresponding accuracy thresholds respectively to determine whether the precision is damaged and, if so, the position of the accuracy damage.
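The multi-image flow above can be sketched as follows; the run() interface of the two platform models, the sub-node names, and the dictionary of comparison functions are assumptions made purely for illustration.

def evaluate_sub_nodes(images, preset_model, landing_model, comparators, thresholds):
    # comparators: dict mapping sub-node name ("preprocess", "inference", "detection")
    # to a function(first_output, second_output) returning an accuracy in [0, 1].
    sums = {name: 0.0 for name in comparators}
    for image in images:
        first = preset_model.run(image)    # per-sub-node outputs on the preset platform
        second = landing_model.run(image)  # per-sub-node outputs on the landing platform
        for name, compare in comparators.items():
            sums[name] += compare(first[name], second[name])
    report = {}
    for name, total in sums.items():
        average = total / len(images)      # average accuracy over all images
        report[name] = (average, average >= thresholds[name])  # (value, qualified?)
    return report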
Referring to fig. 4, fig. 4 is a schematic frame diagram of an embodiment of a model accuracy evaluation device of the present application. The model accuracy evaluation device 40 includes an input module 41, an acquisition module 42, and a comparison module 43. The input module 41 is configured to input the same object to be predicted into a target model in a preset platform and a target model in a landing platform respectively; the obtaining module 42 is configured to sequentially obtain a first output result of the target model in the preset platform processing the object to be predicted at each predictor node and a second output result of the target model in the landing platform processing the object to be predicted at each predictor node; the comparison module 43 is configured to compare the first output result with the output result of the corresponding same predictor node in the second output result, and determine the precision of each predictor node of the target model in the landing platform based on the comparison results. Each predictor node of the target model is obtained by division based on the type differences of the output results.
The comparison module 43 is further configured to compare the first output result with the output results of the corresponding same predictor nodes in the second output result, respectively, to obtain the accuracy of the second output result at each predictor node relative to the first output result; and in response to the accuracy rate being smaller than the accuracy rate threshold of the corresponding prediction sub-node, determining that the accuracy of the prediction result of the target model in the landing platform at the prediction sub-node is unqualified.
The obtaining module 42 is further configured to, when there are multiple objects to be predicted, respectively obtain multiple first output results of the target model in the preset platform after processing by each predictor node, and multiple second output results of the target model in the landing platform after processing by each predictor node; the comparison module 43 is further configured to sequentially and respectively compare the first output result of the same object to be predicted with the output result of the corresponding same predictor node in the second output result, so as to obtain the accuracy of each object to be predicted at each predictor node; obtain the average accuracy of the plurality of objects to be predicted at the same predictor node; and obtain the precision of each predictor node of the target model in the landing platform based on the average accuracy.
The comparison module 43 is further configured to sequentially obtain third output results of the target model of the preset platform in each network layer of the predictor, and fourth output results of the target model of the landing platform in each network layer of the predictor; and respectively comparing the third output result with the output results of the corresponding same network layers in the fourth output result, and determining the precision of each network layer of the prediction sub-node of the target model of the landing platform so as to determine the unqualified network layer.
The obtaining module 42 is further configured to perform parameter configuration on the target model, so as to add an output node to each network layer of the target model; and converting the target model through a platform conversion tool so as to apply the target model to the landing platform and the preset platform.
The comparison module 43 is further configured to perform comprehensive calculation by using at least one type of output result of each predictor node based on a preset rule, so as to obtain an accuracy of the second output result at each predictor node relative to the first output result.
The comparison module 43 is further configured as follows. When the output results of the predictor node include a classification category and a confidence, the step of performing comprehensive calculation using at least one type of output result of each predictor node according to the preset rule to obtain the accuracy of the second output result at each predictor node relative to the first output result includes: in response to the classification categories of the output results of the same type of the first output result and the second output result at the same predictor node being the same, determining the accuracy of the second output result at that predictor node relative to the first output result by using the confidences. And/or, when the output results of the predictor node include a detection category, a confidence, and coordinate frames, the step includes: associating the coordinate frames corresponding to the same coordinates in the first output result and the second output result; and in response to the detection categories of the associated coordinate frames in the first output result and the second output result being the same, determining the accuracy of the second output result at each predictor node relative to the first output result based on the number of coordinate frames in the first output result, the number of coordinate frames in the second output result, and the number of coordinate frames with the same category. And/or, when the output results of the predictor node include feature data, the step includes: determining the accuracy of the second output result at each predictor node relative to the first output result by using the feature data of the first output result and the feature data of the second output result.
The comparison module 43 is further configured to calculate the accuracy of the second output result at the predictor node relative to the first output result based on the formula accuracy = 1 - abs(ref_conf - self_conf)/ref_conf, using the confidence ref_conf of the second output result at the predictor node and the confidence self_conf of the first output result at the predictor node, where abs() denotes taking the absolute value. The comparison module 43 is further configured to determine the accuracy of the second output result at each predictor node relative to the first output result based on the number n of coordinate frames in the first output result, the number m of coordinate frames in the second output result, and the number c of coordinate frames with the same category, using the formula accuracy = Tp/(Tp + Fn + Fp) = c/(m + n - c), where Tp = c, Fn = n - c, and Fp = m - c. The comparison module 43 is further configured to determine the accuracy of the second output result at each predictor node relative to the first output result based on the formula accuracy = a·b/(|a|·|b|), using the feature data b of the first output result and the feature data a of the second output result.
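For illustration, the three formulas above can be written directly in Python as below; the input formats (scalar confidences, box counts, and feature vectors) are assumptions, and the feature comparison is simply the cosine similarity of the two feature vectors.

import numpy as np

def confidence_accuracy(ref_conf, self_conf):
    # accuracy = 1 - abs(ref_conf - self_conf) / ref_conf
    return 1.0 - abs(ref_conf - self_conf) / ref_conf

def detection_accuracy(n, m, c):
    # n: coordinate frames in the first output result, m: in the second output result,
    # c: associated frames whose detection categories are the same.
    # accuracy = Tp / (Tp + Fn + Fp) = c / (m + n - c)
    return c / float(m + n - c)

def feature_accuracy(a, b):
    # accuracy = a·b / (|a|·|b|), with a the feature data of the second output result
    # and b the feature data of the first output result.
    a = np.asarray(a, dtype=np.float64).ravel()
    b = np.asarray(b, dtype=np.float64).ravel()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))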
The input module 41 is further configured to input the same image to be recognized into an image recognition model in the preset platform and an image recognition model in the landing platform, respectively; the obtaining module 42 is further configured to sequentially obtain a first output result of the image recognition model in the preset platform processing the image to be recognized at each predictor node, and a second output result of the image recognition model in the landing platform processing the image to be recognized at each predictor node; the comparison module 43 is further configured to compare the first output result with the output result of the corresponding same predictor node in the second output result, and determine the precision of each predictor node of the image recognition model in the landing platform based on the comparison result.
The obtaining module 42 is further configured to obtain a first prediction result obtained by performing prediction processing on the object to be predicted by using the target model in the preset platform, and a second prediction result obtained by performing prediction processing on the object to be predicted by using the target model in the landing platform; determining the precision of a target model in a preset platform based on the first prediction result and the second prediction result; and responding to the unqualified precision of the target model in the preset platform, and executing the step of sequentially acquiring a first output result of the target model in the preset platform for processing the object to be predicted at each prediction sub-node and a second output result of the target model in the landing platform for processing the object to be predicted at each prediction sub-node.
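A minimal sketch of this two-stage check might look as follows; the predict() and run_per_node() interfaces, the comparison callbacks, and the end-to-end threshold are hypothetical placeholders rather than part of the disclosed device.

def check_model_accuracy(obj, preset_model, landing_model,
                         compare_final, compare_per_node, threshold=0.99):
    # Stage 1: compare only the final prediction results of the two platforms.
    first_pred = preset_model.predict(obj)
    second_pred = landing_model.predict(obj)
    if compare_final(first_pred, second_pred) >= threshold:
        return {"qualified": True}          # precision acceptable, no drill-down needed
    # Stage 2: precision unqualified, so obtain the per-predictor-node outputs of both
    # platforms and compare them to locate where the accuracy is damaged.
    first_outputs = preset_model.run_per_node(obj)
    second_outputs = landing_model.run_per_node(obj)
    return {"qualified": False,
            "per_node": compare_per_node(first_outputs, second_outputs)}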
According to the scheme, the accuracy loss judgment efficiency of the model and the accuracy loss positioning efficiency can be improved.
Referring to fig. 5, fig. 5 is a schematic diagram of a frame of an embodiment of an electronic device according to the present application. The electronic device 50 comprises a memory 51 and a processor 52 coupled to each other, and the processor 52 is configured to execute program instructions stored in the memory 51 to implement the steps of the model accuracy determination method according to any of the embodiments described above. In one specific implementation scenario, the electronic device 50 may include, but is not limited to, a microcomputer and a server; the electronic device 50 may also include a mobile device such as a notebook computer or a tablet computer, which is not limited herein.
In particular, the processor 52 is configured to control itself and the memory 51 to implement the steps of any of the above-described model accuracy determination method embodiments. The processor 52 may also be referred to as a CPU (Central Processing Unit). The processor 52 may be an integrated circuit chip having signal processing capabilities. The processor 52 may also be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or discrete hardware components. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. In addition, the processor 52 may be jointly implemented by a plurality of integrated circuit chips.
According to the scheme, the accuracy loss judgment efficiency of the model and the accuracy loss positioning efficiency can be improved.
Referring to fig. 6, fig. 6 is a block diagram illustrating a computer-readable storage medium according to an embodiment of the present disclosure. The computer-readable storage medium 60 stores program instructions 601 executable by the processor, the program instructions 601 for implementing the steps of the method for determining the accuracy of a model according to any of the embodiments described above.
According to the scheme, the accuracy loss judgment efficiency of the model and the accuracy loss positioning efficiency can be improved.
In the several embodiments provided in the present application, it should be understood that the disclosed method and apparatus may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, a division of a module or a unit is merely one type of logical division, and an actual implementation may have another division, for example, a unit or a component may be combined or integrated with another system, or some features may be omitted, or not implemented. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some interfaces, and may be in an electrical, mechanical or other form.
Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on network elements. Some or all of the units can be selected according to actual needs to achieve the purpose of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application may be substantially implemented or contributed to by the prior art, or all or part of the technical solution may be embodied in a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor (processor) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.

Claims (13)

1. A method for determining the accuracy of a model is characterized by comprising the following steps:
respectively inputting the same objects to be predicted into a target model in a preset platform and a target model in a landing platform;
sequentially obtaining a first output result of the target model in the preset platform for processing the object to be predicted at each predicting sub-node and a second output result of the target model in the landing platform for processing the object to be predicted at each predicting sub-node;
comparing the first output result with the output results of the corresponding same predictor nodes in the second output result respectively, and determining the precision of each predictor node of the target model in the landing platform based on the comparison results;
and each predication child node of the target model is obtained by dividing based on the type difference of the output result.
2. The method of claim 1, wherein the step of comparing the first output result with the output result of the corresponding same predictor node in the second output result, and determining the accuracy of each predictor node of the target model in the landing platform based on the comparison result comprises:
comparing the first output result with the output results of the corresponding same predictor nodes in the second output result respectively to obtain the accuracy of the second output result in each predictor node relative to the first output result;
and in response to the accuracy rate being smaller than the accuracy rate threshold of the corresponding predictor node, determining that the accuracy of the prediction result of the target model in the landing platform at the predictor node is unqualified.
3. The method for determining the accuracy of the model according to claim 1 or 2, wherein the step of sequentially obtaining a first output result of the target model in the preset platform processing the object to be predicted at each predicting sub-node and a second output result of the target model in the landing platform processing the object to be predicted at each predicting sub-node comprises:
when a plurality of objects to be predicted are available, a plurality of first output results of the target model in the preset platform after being processed by each prediction sub-node and a plurality of second output results of the target model in the landing platform after being processed by each prediction sub-node are obtained respectively;
the step of comparing the first output result with the output result of the corresponding same predictor node in the second output result, and determining the precision of each predictor node of the target model in the landing platform based on the comparison result further includes:
sequentially and respectively comparing the first output result of the same object to be predicted with the output result of the corresponding same predictor node in the second output result to obtain the accuracy of each object to be predicted at each predictor node;
and acquiring the average accuracy of a plurality of objects to be predicted at the same prediction sub-node, and obtaining the precision of each prediction sub-node of the target model in the landing platform based on the average accuracy.
4. The method according to claim 1 or 2, wherein the step of sequentially obtaining a first output result of the target model in the preset platform processing the object to be predicted at each of the predicting sub-nodes and a second output result of the target model in the landing platform processing the object to be predicted at each of the predicting sub-nodes, comparing the first output result with an output result of a corresponding same predicting sub-node in the second output result, and determining the accuracy of each predicting sub-node of the target model in the landing platform based on the comparison result further comprises:
sequentially obtaining third output results of the target model of the preset platform on each network layer of the predictor node and fourth output results of the target model of the landing platform on each network layer of the predictor node;
and comparing the third output result with the output result of the same corresponding network layer in the fourth output result respectively, and determining the precision of each network layer of the predictor node of the target model of the landing platform so as to determine the precision of each predictor node.
5. The method of claim 4, wherein the step of sequentially obtaining a third output result of the target model of the default platform in each network layer of the predictor node and a fourth output result of the target model of the landing platform in each network layer of the predictor node further comprises:
performing parameter configuration on a target model to add output nodes to each network layer of the target model;
and converting the target model through a platform conversion tool so as to apply the target model to the landing platform and the preset platform.
6. The method for determining the accuracy of the model according to claim 2, wherein the predictor nodes corresponding to the same first output result and the same second output result each include at least one output result of the same type;
the step of comparing the first output result with the output results of the corresponding same predictor nodes in the second output result to obtain the accuracy of the second output result at each predictor node relative to the first output result further includes:
and performing comprehensive calculation by using at least one type of output result of each predictor node based on a preset rule to obtain the accuracy of the second output result in each predictor node relative to the first output result.
7. The method of determining the accuracy of a model according to claim 6,
when the output results of the predictor nodes include classification categories and confidence degrees, the step of performing comprehensive calculation by using at least one type of output results of each predictor node according to a preset rule to obtain the accuracy of the second output result in each predictor node relative to the first output result comprises the following steps:
in response to the classification category of the output result of the same type of the first output result and the second output result in the same predictor node is the same, determining the accuracy of the second output result in the predictor node relative to the first output result by using the confidence coefficient; and/or
When the output results of the predictor nodes include the detection type, the confidence degree and the coordinate frame, the step of performing comprehensive calculation by using at least one type of output results of each predictor node according to a preset rule to obtain the accuracy of the second output result in each predictor node relative to the first output result comprises the following steps:
establishing association between the first output result and a coordinate frame corresponding to the same coordinate in the second output result;
in response to the first output result and the second output result establishing association, determining the accuracy of the second output result at each of the predicting sub-nodes relative to the first output result based on the number of coordinate frames in the first output result, the number of coordinate frames in the second output result and the number of coordinate frames with the same category; and/or
When the output results of the predictor nodes include feature data, the step of performing comprehensive calculation by using at least one type of output results of each predictor node according to a preset rule to obtain the accuracy of the second output result in each predictor node relative to the first output result includes:
and determining the accuracy of the second output result in each predictor node relative to the first output result by using the characteristic data of the first output result and the characteristic data of the second output result.
8. The method of claim 7, wherein when the output of the predictor node includes a classification category and a confidence level, the step of determining the accuracy of the second output at the predictor node relative to the first output using the confidence level comprises:
calculating the accuracy rate accuracy of the second output result at the predictor node relative to the first output result based on the formula accuracy = 1 - abs(ref_conf - self_conf)/ref_conf, using the confidence rate ref_conf of the second output result at the predictor node and the confidence rate self_conf of the first output result at the predictor node;
wherein abs () refers to calculating an absolute value;
when the output result of the predictor node includes a detection category, a confidence level and coordinate frames, the step of determining the accuracy of the second output result at each predictor node relative to the first output result based on the number of coordinate frames in the first output result, the number of coordinate frames in the second output result and the number of coordinate frames with the same category includes:
determining the accuracy rate accuracy of the second output result at each predictor node relative to the first output result based on the number n of coordinate frames in the first output result, the number m of coordinate frames in the second output result and the number c of coordinate frames with the same category by using the formula accuracy = Tp/(Tp + Fn + Fp) = c/(m + n - c);
wherein Tp = c, Fn = n-c and Fp = m-c;
when the output result of the predictor node includes feature data, the step of determining the accuracy of the second output result at each predictor node relative to the first output result by using the feature data of the first output result and the feature data of the second output result includes:
and determining the accuracy rate accuracy of the second output result at each predictor node relative to the first output result based on the formula accuracy = a·b/(|a|·|b|), by using the feature data b of the first output result and the feature data a of the second output result.
9. The method for determining the accuracy of the model according to claim 1, wherein the step of inputting the same object to be predicted into the target model in the preset platform and the target model in the landing platform respectively further comprises the following steps:
acquiring a first prediction result obtained after the target model in the preset platform performs prediction processing on the object to be predicted, and a second prediction result obtained after the target model in the landing platform performs prediction processing on the object to be predicted;
determining the precision of a target model in the preset platform based on the first prediction result and the second prediction result;
and responding to the unqualified precision of the target model in the preset platform, and executing the step of sequentially obtaining a first output result of the target model in the preset platform for processing the object to be predicted at each predicting sub-node and a second output result of the target model in the landing platform for processing the object to be predicted at each predicting sub-node.
10. The method of claim 1, wherein each predictor node comprises one or more of a preprocessing sub-node, an inference sub-node, a classification sub-node, and a detection sub-node.
11. The method of accuracy determination of a model of claim 1, wherein the target model comprises an image recognition model;
the step of inputting the same object to be predicted into the target model in the preset platform and the target model in the landing platform respectively comprises the following steps:
respectively inputting the same image to be recognized into an image recognition model in a preset platform and an image recognition model in a landing platform;
the step of sequentially obtaining a first output result of the target model in the preset platform for processing the object to be predicted at each prediction sub-node and a second output result of the target model in the landing platform for processing the object to be predicted at each prediction sub-node comprises:
sequentially obtaining a first output result of the image recognition model in the preset platform for processing the image to be recognized at each prediction sub-node and a second output result of the image recognition model in the landing platform for processing the image to be recognized at each prediction sub-node;
the step of comparing the first output result with the output result of the corresponding same predictor node in the second output result respectively, and determining the precision of each predictor node of the target model in the landing platform based on the comparison result comprises:
and comparing the first output result with the output result of the corresponding same predictor node in the second output result respectively, and determining the precision of each predictor node of the image recognition model in the landing platform based on the comparison result.
12. An electronic device comprising a memory and a processor coupled to each other, the processor being configured to execute program instructions stored in the memory to implement a method of accuracy determination of a model according to any one of claims 1 to 11.
13. A computer-readable storage medium on which program instructions are stored, which program instructions, when executed by a processor, implement a method of accuracy determination of a model according to any one of claims 1 to 11.
CN202210000700.5A 2022-01-04 2022-01-04 Model accuracy determination method, related device and equipment Pending CN114021705A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210000700.5A CN114021705A (en) 2022-01-04 2022-01-04 Model accuracy determination method, related device and equipment

Publications (1)

Publication Number Publication Date
CN114021705A true CN114021705A (en) 2022-02-08

Family

ID=80069544

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210000700.5A Pending CN114021705A (en) 2022-01-04 2022-01-04 Model accuracy determination method, related device and equipment

Country Status (1)

Country Link
CN (1) CN114021705A (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102215563A (en) * 2011-04-21 2011-10-12 为一智联(北京)科技有限公司 Multistage positioning method and device for mobile terminal
CN105426882A (en) * 2015-12-24 2016-03-23 上海交通大学 Method for rapidly positioning human eyes in human face image
CN111382808A (en) * 2020-05-29 2020-07-07 浙江大华技术股份有限公司 Vehicle detection processing method and device
CN113469345A (en) * 2021-07-28 2021-10-01 浙江大华技术股份有限公司 Method and device for optimizing quantization model, storage medium and electronic device
CN113660345A (en) * 2021-08-23 2021-11-16 上海微盟企业发展有限公司 Rapid positioning method, system, device and computer readable storage medium

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117056238A (en) * 2023-10-11 2023-11-14 深圳鲲云信息科技有限公司 Method and computing device for verifying correctness of model conversion under deployment framework
CN117056238B (en) * 2023-10-11 2024-01-30 深圳鲲云信息科技有限公司 Method and computing device for verifying correctness of model conversion under deployment framework


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (Application publication date: 20220208)