CN115731435A - Method, device and equipment for verifying label and storage medium - Google Patents


Info

Publication number
CN115731435A
Authority
CN
China
Prior art keywords
target
information
comparison
identification information
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211352144.4A
Other languages
Chinese (zh)
Inventor
丁建鹏
黄刚
谢钱昆
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Zero Run Technology Co Ltd
Original Assignee
Zhejiang Zero Run Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Zero Run Technology Co Ltd filed Critical Zhejiang Zero Run Technology Co Ltd
Priority to CN202211352144.4A
Publication of CN115731435A

Landscapes

  • Image Analysis (AREA)

Abstract

The application discloses a method, a device, equipment and a storage medium for annotation verification. The method comprises the following steps: acquiring an image to be verified that contains a plurality of targets, the image to be verified being annotated with first annotation information related to the targets; processing the image to be verified with an image processing model to obtain first identification information of the image to be verified, the first identification information also relating to the targets; and determining a verification result for the first annotation information based on the consistency result of the first annotation information and the first identification information. In this way, annotated data can be verified automatically, reducing labor consumption.

Description

Method, device and equipment for verifying label and storage medium
Technical Field
The present application relates to the field of data processing technologies, and in particular, to a method, an apparatus, a device, and a storage medium for label verification.
Background
The development of deep learning technology is inseparable from massive data: the performance of many deep-learning-based network models depends mainly on the quality and quantity of the data used for model training. Most of this training data is labeled data, and verifying the correctness of the labels is a key step of the data processing pipeline. In a common verification approach, after labeling personnel complete a labeling task, they cross-validate one another's work, which consumes a large amount of labor.
Therefore, automatically checking and verifying labeled data is of great significance for reducing labor consumption.
Disclosure of Invention
The technical problem mainly solved by the present application is to provide a method, an apparatus, a device and a storage medium for label verification, which can automatically verify labeled data and reduce labor consumption.
In order to solve the above technical problem, the application adopts a technical scheme of providing a label verification method, which comprises the following steps: acquiring an image to be verified that contains a plurality of targets, the image to be verified being annotated with first labeling information related to the targets; processing the image to be verified with an image processing model to obtain first identification information of the image to be verified relating to the targets; and determining a verification result for the first labeling information based on the consistency result of the first labeling information and the first identification information.
In order to solve the technical problem, the other technical scheme adopted by the application is as follows: an electronic device is provided, comprising a memory and a processor coupled to each other, the memory storing program instructions; the processor is configured to execute program instructions stored in the memory to implement the above-described method.
In order to solve the above technical problem, the present application adopts another technical solution: a computer readable storage medium is provided for storing program instructions that can be executed to implement the above-described method.
The beneficial effects of this application are as follows: the image to be verified, carrying the first annotation information, is processed by the image processing model to obtain first identification information about the targets; the verification result of the first annotation information can then be determined based on the consistency result of the first annotation information and the first identification information. Compared with manually verifying the annotation data in the image to be verified, the first annotation information can be verified automatically, which reduces labor consumption.
Drawings
FIG. 1 is a schematic flow chart diagram illustrating an embodiment of a label verification method provided in the present application;
FIG. 2 is a schematic flow chart diagram illustrating another embodiment of a label verification method provided in the present application;
FIG. 3 is a flowchart illustrating an embodiment of step S21 shown in FIG. 2;
FIG. 4 is a schematic diagram of the position information of a target A and a target B in a set of aligned target pairs provided herein;
FIG. 5 is a schematic flow chart of determining a matching target pair provided herein;
FIG. 6 is a schematic diagram of a case where the matching requirement is not satisfied and the corresponding error type provided by the present application;
FIG. 7 is a schematic flowchart illustrating another embodiment of a label verification method according to the present application;
FIG. 8 is a block diagram of an embodiment of a callout verification device provided herein;
FIG. 9 is a schematic structural diagram of an embodiment of an electronic device provided in the present application;
fig. 10 is a schematic structural diagram of a computer-readable storage medium provided in the present application.
Detailed Description
In order to make the purpose, technical solution and effect of the present application clearer, the present application is described in further detail below with reference to the accompanying drawings and embodiments.
In addition, if descriptions of "first", "second", etc. appear in the embodiments of the present application, they are for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In addition, the technical solutions of the various embodiments may be combined with one another, provided the combination can be realized by a person skilled in the art; when technical solutions are contradictory or cannot be realized, the combination should be considered not to exist and falls outside the protection scope of the present application.
It should be noted that some targets in the image to be verified may be difficult to recognize, for example because of blur or occlusion, and the image processing model may not recognize such targets accurately enough. In order to ensure the accuracy of the identification information (each piece of second identification information) that the image processing model gives for recognized targets, and to avoid the model giving wrong identification information about such difficult targets, which would affect the accuracy of the subsequent verification result of the first labeling information, a relatively high confidence threshold (for example, 0.9 or 0.95) may be set in advance for the identification information of the image processing model, so as to filter out identification information whose confidence does not satisfy the threshold. For example, suppose target a in the image to be verified is blurred, and when the image processing model recognizes target a, the confidence of its identification information is 0.38, lower than the preset confidence threshold; the image processing model then does not give second identification information about target a. It can be understood that every piece of second identification information the model does give satisfies the confidence threshold, so the first identification information obtained by processing the image to be verified with the image processing model can be regarded as sufficiently accurate.
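The confidence filtering described above does not depend on any particular model. A minimal Python sketch, assuming the model's raw output is a list of (category, bounding box, confidence) tuples — this data shape and the function name are illustrative, not part of the application:

```python
CONFIDENCE_THRESHOLD = 0.9  # example threshold from the text (0.9 or 0.95)

def filter_identification_info(detections, threshold=CONFIDENCE_THRESHOLD):
    """Drop identification information whose confidence is below the preset
    threshold, so only high-confidence second identification information
    is kept in the first identification information."""
    return [d for d in detections if d[2] >= threshold]

# Target a is blurred: its confidence of 0.38 is below the threshold,
# so no second identification information is given for it.
raw_output = [
    ("car", (10, 20, 110, 80), 0.97),
    ("person", (150, 30, 190, 120), 0.38),
]
first_identification = filter_identification_info(raw_output)
```

A difficult target filtered out here simply contributes no second identification information; it does not produce a wrong one.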
Referring to fig. 1, fig. 1 is a schematic flow chart illustrating an embodiment of a label verification method provided in the present application. It should be noted that, if the result is substantially the same, the flow sequence shown in fig. 1 is not limited in this embodiment. As shown in fig. 1, the present embodiment includes:
s11: the method comprises the steps of obtaining an image to be verified containing a plurality of targets, and labeling the image to be verified with first labeling information related to the targets.
The method is used for verifying the first annotation information marked in the image to be verified by using the first identification information to obtain a verification result of the first annotation information, namely whether the first annotation information in the image to be verified is marked correctly or not.
The number of targets in the image to be verified may be one or more, and the targets may be of any type, such as a dog, a cat, a building or a car. Where there are multiple targets, their types may be the same or different; the number of targets contained in a specific image and their corresponding types are not specifically limited here.
It should be noted that, this embodiment aims to verify whether the first annotation information is correctly annotated, and does not limit the origin of the first annotation information annotated in the image to be verified.
In an embodiment, the first annotation information includes second annotation information corresponding to at least one annotated target, the at least one annotated target being at least part of the targets in the image to be verified. In this embodiment, an annotated target is a target that has been annotated in the image to be verified; the annotation information corresponding to any single annotated target is second annotation information, and the annotation information corresponding to all annotated targets together is the first annotation information (the sum of the second annotation information of each annotated target). "At least part of the targets" means the number of annotated targets is less than or equal to the number of all targets in the image to be verified; that is, the annotated targets in the first annotation information may be some or all of the targets in the image. It should be noted that, due to negligence of annotation personnel or limitations of an annotation model, some target or targets in the image may be missed during annotation, which is why the annotated targets corresponding to the first annotation information are only at least part of the targets in the image to be verified.
For example, the image to be verified contains 3 targets, which are respectively a car, a person and a cat, wherein the cat in the image is not labeled due to negligence of labeling personnel or labeling models, and finally the first labeling information includes second labeling information corresponding to 2 targets of the car and the person.
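The application does not prescribe a data format for the annotation information; the following sketch merely illustrates the relationship between first and second annotation information for the example above (all names are hypothetical):

```python
from dataclasses import dataclass

@dataclass
class SecondAnnotation:
    """Annotation information for a single annotated target: a category
    plus a bounding box (upper-left and lower-right corners)."""
    category: str
    x1: float
    y1: float
    x2: float
    y2: float

# First annotation information = the sum of the second annotation
# information of each annotated target. The cat was missed during
# annotation, so only 2 of the 3 targets are represented.
first_annotation = [
    SecondAnnotation("car", 10, 20, 110, 80),
    SecondAnnotation("person", 150, 30, 190, 120),
]
```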
S12: and processing the image to be verified by using the image processing model to obtain first identification information of the image to be verified, which relates to the target.
After the image to be verified is obtained, processing each target in the image to be verified by using the image processing model to obtain first identification information of the target of the image to be verified.
In an embodiment, the first identification information includes second identification information corresponding to at least one identified target, the at least one identified target being at least part of the targets in the image to be verified; "at least part" means the number of identified targets is less than or equal to the number of all targets in the image. In this embodiment, an identified target is a target in the image to be verified that the image processing model can recognize and for which it can provide corresponding second identification information; the identification information corresponding to any single identified target is second identification information, and the identification information corresponding to all identified targets together is the first identification information (the sum of the second identification information of each identified target). Because targets in the image to be verified may be blurred or occluded, or because of the processing performance of the image processing model, some targets may be missed, which is why the identified targets corresponding to the first identification information are only at least part of the targets in the image to be verified.
S13: and determining a verification result of the first labeling information based on the consistency result of the first labeling information and the first identification information.
In this embodiment, if the first annotation information is consistent with the first identification information, it is determined that the first annotation information is correct; otherwise, if the first labeling information is inconsistent with the first identification information, determining that the first labeling information has an error.
In an embodiment, when at least one annotated target included in the first annotation information and at least one identified target included in the first identification information both have a corresponding matching target pair, it is determined that the first annotation information is consistent with the first identification information, and it is further determined that the first annotation information is correct.
If a certain marked target and a certain identified target can form a matching target pair, the second marking information of the marked target in the matching target pair is consistent with the second identification information of the identified target, which indicates that the second marking information of the marked target is correct. It can be understood that, if there is a matching target pair between all labeled targets contained in the first labeling information and all identified targets contained in the first identification information, it indicates that the number of labeled targets in the first labeling information is the same as the number of identified targets in the first identification information, and all labeled targets in the first labeling information are labeled correctly, and it is determined that the first labeling information is consistent with the first identification information, and the first labeling information is correct.
In an embodiment, when there is no matching target pair in a certain identified target in the first identification information, it is determined that the first annotation information is inconsistent with the first identification information, and it is further determined that the first annotation information is incorrect. The second identification information of the identified target represents the actual information of the target in the image to be verified, and if a certain identified target does not have a matching target pair, it indicates that the second labeling information of any labeled target is inconsistent with the second identification information of the identified target, that is, the first labeling information is inconsistent with the first identification information.
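The consistency outcomes described above reduce to counting matched targets once matching has been performed. A sketch under that assumption (the function name and string labels are illustrative):

```python
def verify_first_annotation(n_annotated, n_identified, n_matched):
    """Classify the consistency of the first annotation information
    against the first identification information, given the numbers of
    annotated targets, identified targets, and matching target pairs."""
    if n_matched < n_identified:
        # some identified target has no matching pair:
        # the first annotation information contains an error
        return "inconsistent"
    if n_matched == n_annotated:
        # every annotated target and every identified target is matched
        return "consistent"
    # all identified targets matched, but some annotated targets are not
    return "partially consistent"
```

The third branch corresponds to the partial-consistency case, where the correctness of the unmatched annotations cannot yet be decided.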
It should be noted that, in an embodiment, step S11 (acquiring an image to be verified containing a plurality of targets), step S12 (processing the image to be verified with the image processing model to obtain the first identification information of the image to be verified relating to the targets) and step S13 (determining the verification result of the first annotation information based on the consistency result of the first annotation information and the first identification information) can all be executed by the image processing model.
In another embodiment, the steps S11, S12 and S13 are all performed by the related device or apparatus.
According to the scheme, after the image to be verified with the first annotation information is processed by the image processing model to obtain the first identification information of the image to be verified, which is related to the target, the verification result of the first annotation information can be determined based on the consistency result of the first annotation information and the first identification information.
In some embodiments, after determining the verification result based on the consistency result of the first annotation information and the first identification information, if the consistency result is partial consistency, only part of the annotated targets in the first annotation information can be confirmed as correctly annotated, while no verification result is obtained for the remaining annotated targets. That is, whether the second annotation information of the remaining annotated targets is correct is unknown: the annotation information of part of the annotated targets is confirmed correct, and the verification result for the annotation information of the other annotated targets temporarily cannot be determined.
When all identified targets contained in the first identification information have corresponding matching target pairs, but some annotated targets in the first annotation information do not, the first annotation information is determined to be partially consistent with the first identification information. Illustratively, the number of annotated targets in the image to be verified is 3 (A1, A2 and A3) and the number of identified targets is 2 (B1 and B2), where A1 and B1 form a matching target pair, as do A2 and B2, but A3 has no corresponding matching target pair. This indicates that the second annotation information corresponding to A1 is consistent with the second identification information corresponding to B1, and likewise for A2 and B2, but A3 has no corresponding second identification information, so whether the second annotation information corresponding to A3 is correct is unknown. In this case, the first annotation information is considered partially consistent with the first identification information.
In an embodiment, after determining the verification result of the labeling information based on the consistency result of the first labeling information and the first identification information, when the consistency result is complete or partial consistency, the first labeling information and the first identification information may be used to adjust the parameters of the image processing model, so as to improve its processing capability, for example its ability to recognize blurred or partially occluded targets.
In an embodiment, because the image to be verified may contain targets that are difficult to identify, the consistency result of the first annotation information and the first identification information may be only partial. If the consistency result is partial consistency, the image to be verified is processed again with the image processing model to obtain new first identification information about the targets, and the subsequent steps are repeated; that is, the verification result of the first annotation information may be determined from the consistency result of the new first identification information and the first annotation information in the image to be verified.
When the consistency result of the first annotation information and the first identification information is partial consistency, the image processing model used to re-process the image to be verified is obtained by adjusting the parameters of the original image processing model using the second annotation information and second identification information of the partially consistent targets. That is to say, when the consistency result is partial consistency, the annotation and identification information of the consistent targets may first be used to adjust the model parameters; the parameter-adjusted image processing model is then used to re-process the image to be verified, obtain new first identification information about the targets, and perform the subsequent steps.
It can be understood that, when the adjusted image processing model is used to re-identify an image to be verified containing a difficult target, the confidence of the identification information for that target is increased. If, after re-identification, the confidence satisfies the preset confidence threshold, second identification information about the target can be given, so that the verification result of the first annotation information is determined based on the consistency result of the newly obtained first identification information and the first annotation information.
It should be noted that, for the case in which the consistency result of the first labeling information and the first identification information is partial consistency, if the labeling result of some targets still cannot be determined after the image to be verified is re-processed with the image processing model and the subsequent steps are performed, the image to be verified is flagged so that it can subsequently be verified manually; alternatively, a result that the first labeling information is incorrectly labeled is given and fed back to the labeling personnel for re-labeling.
Referring to fig. 2, fig. 2 is a schematic flowchart illustrating a label verification method according to another embodiment of the present application. It should be noted that, if the result is substantially the same, the flow sequence shown in fig. 2 is not limited in this embodiment. In this embodiment, before determining the verification result of the labeling information based on the consistency result of the first labeling information and the first identification information, the method further includes:
s21: and determining a target matching relationship between the at least one marked target and the at least one identified target by using the second marking information of each marked target and the second identification information of each identified target.
The embodiment is used for determining the target matching relationship between each labeled target and each identified target, and then determining the consistency result of the first labeling information and the first identification information according to the target matching relationship.
In some embodiments, the image to be verified may contain multiple targets of different categories. To facilitate determining the consistency result of the first labeling information and the first identification information, the second labeling information of each labeled target and the second identification information of each identified target may be used to determine a target matching relationship between each labeled target and each identified target; the consistency result is subsequently determined based on that matching relationship.
Specifically, referring to fig. 3, fig. 3 is a schematic flowchart of an embodiment of step S21 shown in fig. 2. It should be noted that, if the result is substantially the same, the flow sequence shown in fig. 3 is not limited in this embodiment. In this embodiment, determining a target matching relationship between the labeled target and the identified target includes:
s31: and comparing the second labeling information of each labeled target with the second identification information of each identified target to obtain comparison results of each group of comparison target pairs, wherein each group of comparison target pairs comprises a labeled target and an identified target.
The embodiment is used for determining the matching target pairs with matching relations based on the comparison results of each group of comparison target pairs.
In this embodiment, any marked target and any identified target may form a set of comparison target pairs, for example, the marked targets in the image to be verified are A1 and A2, the identified targets are B1 and B2, and four sets of comparison target pairs, i.e., A1 and B1, A1 and B2, A2 and B1, and A2 and B2, may be formed. After each group of comparison target pairs is determined, for each group of comparison target pairs, comparing the second labeling information of the labeled target with the second identification information of the identified target, so as to obtain a comparison result of the group of comparison target pairs.
In an embodiment, the second labeling information and the second identification information both include the category and position information of the corresponding target, and the comparison result of each comparison target pair can be determined from whether the categories of the labeled and identified targets are the same together with their position coincidence information; that is, the comparison result of each comparison target pair comprises whether the categories are the same and the position coincidence information. The position information may be, but is not limited to, the coordinates of the center point of a bounding box enclosing the labeled or identified target together with the width and height of the bounding box, or the coordinates of the upper-left and lower-right corners of the bounding box. The position coincidence information of the labeled and identified targets in each comparison target pair can be used to characterize whether the two occupy the same position in the image to be verified.
As shown in fig. 4, fig. 4 is a schematic diagram of the position information of a target A and a target B in a comparison target pair provided by the present application. In this embodiment, the position information in both the second labeling information and the second identification information comprises the coordinates of the upper-left and lower-right corners of the target bounding box, and the position coincidence information of each comparison target pair may be determined from the intersection over union (IOU value) of the areas of the labeled target's bounding box and the identified target's bounding box. Specifically, the position coincidence information (IOU value) of each comparison target pair may be calculated with the following formulas:
S_A = |X_A1 - X_A2| × |Y_A1 - Y_A2|
S_B = |X_B1 - X_B2| × |Y_B1 - Y_B2|
S_A ∩ S_B = |max(X_A1, X_B1) - min(X_A2, X_B2)| × |max(Y_A1, Y_B1) - min(Y_A2, Y_B2)|
IOU_AB = (S_A ∩ S_B) / (S_A ∪ S_B) = (S_A ∩ S_B) / (S_A + S_B - S_A ∩ S_B)
wherein A and B are a comparison target pair; IOU_AB denotes the position coincidence of the bounding box of labeled target A and the bounding box of identified target B; S_A denotes the area of the bounding box of labeled target A, whose upper-left and lower-right corner coordinates are (X_A1, Y_A1) and (X_A2, Y_A2); S_B denotes the area of the bounding box of identified target B, whose upper-left and lower-right corner coordinates are (X_B1, Y_B1) and (X_B2, Y_B2); S_A ∩ S_B denotes the area of the intersection of the two bounding boxes; and S_A ∪ S_B denotes the area of their union.
It will be appreciated that the above calculation method is merely exemplary and that the location overlap information may be calculated in other ways, for example, using the difference between the center points of the positions of the marked object and the identified object to determine the location overlap information.
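The IOU computation above is standard; a Python sketch using the corner-coordinate representation described in the text, clamping the intersection to zero when the boxes do not overlap rather than relying on absolute values:

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned bounding boxes,
    each given as (x1, y1, x2, y2): upper-left and lower-right corners."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    inter_w = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h                # area of the intersection
    area_a = (ax2 - ax1) * (ay2 - ay1)       # area of box A
    area_b = (bx2 - bx1) * (by2 - by1)       # area of box B
    union = area_a + area_b - inter          # area of the union
    return inter / union if union > 0 else 0.0
```

Identical boxes give 1.0, disjoint boxes give 0.0, and partial overlap gives a value in between, which is then compared against the preset threshold.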
S32: and taking the comparison target pair with the comparison result meeting the matching requirement as a matching target pair with a matching relationship.
The matching target pair with the matching relation indicates that the second labeling information of the labeled target in the matching target pair is consistent with the second identification information of the identified target, and the second labeling information of the labeled target in the matching target pair is labeled correctly.
In an embodiment, where the second labeling information and the second identification information both include the category and position information of the corresponding target, the matching requirement is that the categories in the comparison target pair are the same and the position coincidence satisfies the coincidence requirement. The coincidence requirement is that the position coincidence is the largest among all currently existing comparison target pairs and is greater than a preset threshold. It should be noted that the largest position coincidence indicates that, of all currently existing comparison target pairs, this labeled target and identified target are the most likely to be the same target in the image; if that largest position coincidence is also greater than a preset threshold (for example, 0.95), the labeled target and identified target in the corresponding comparison target pair are considered to be the same target in the image. The specific preset threshold may be chosen according to the accuracy required of the verification result of the first labeling information and is not specifically limited here.
It can be understood that, if the position coincidence degree of the labeled target and the identified target in a comparison target pair is greater than the preset threshold and their categories are the same, the identified target and the labeled target in that pair occupy the same position in the image and belong to the same category (i.e., they form a matching target pair having a matching relationship). The second identification information and the second labeling information of that target are therefore consistent, and the second labeling information of the target is labeled correctly.
Specifically, taking the comparison target pair whose comparison result meets the matching requirement as the matching target pair having the matching relationship includes the following steps:
First, from all currently existing comparison target pairs, select one comparison target pair whose comparison result meets the matching requirement as a matching target pair.
For example, referring to fig. 5, fig. 5 is a schematic flowchart of determining a matching target pair provided in the present application. In fig. 5, A indicates the first labeling information (including second labeling information of 5 targets), B indicates the first identification information (including second identification information of 3 targets), cls indicates the category, and "correct" indicates that the labeled target in the currently determined matching target pair is correctly labeled. Between A and B there are 15 comparison target pairs: A1B1, A1B2, A1B3, A2B1, A2B2, A2B3, ..., A5B1, A5B2, and A5B3. The position coincidence degree (IOU) of each comparison target pair is calculated, and the comparison target pair whose comparison result satisfies the matching requirement (the position coincidence degree is the largest among the currently existing comparison target pairs, is greater than the preset threshold, and the categories are the same) is selected as the matching target pair; as shown in fig. 5, the comparison target pair A2B1 is the current matching target pair. It can be understood that A2 and B1 in the matching target pair A2B1 correspond to the same target in the image, A2 and B1 are consistent, and A2 is labeled correctly.
Second, delete every comparison target pair that contains either target of the matching target pair, and then repeat, over the remaining comparison target pairs, the step of selecting one comparison target pair whose comparison result meets the matching requirement as a matching target pair, together with the subsequent steps, until no comparison target pairs remain or none of the remaining comparison target pairs meets the matching requirement.
As shown in fig. 5, it is determined through the IOU value that A2 and B1 are the same target in the image and that A2 and B1 are consistent. The two targets in any other comparison target pair containing A2 or B1 therefore cannot be the same target in the image, and their information cannot be consistent. Accordingly, to facilitate determining each subsequent matching target pair, the comparison target pairs containing either target of the matching target pair (A2 or B1) may be deleted, and the step of selecting, from the remaining comparison target pairs, one comparison target pair whose comparison result meets the matching requirement as a matching target pair, together with the subsequent steps, may be repeated until no comparison target pairs remain or none of the remaining comparison target pairs meets the matching requirement.
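The greedy selection-and-deletion loop illustrated by fig. 5 can be sketched as follows (a minimal illustration; the dictionary structure, the 0.95 threshold, and all names are assumptions, not part of the patent):

```python
def greedy_match(labeled, identified, iou_fn, threshold=0.95):
    """Greedy matching sketch: repeatedly take the comparison target pair
    with the largest position coincidence degree; accept it as a matching
    target pair if the overlap exceeds the threshold and the categories
    agree, then delete every remaining pair that reuses either target.

    `labeled` / `identified` are dicts like {"box": ..., "cls": ...};
    `iou_fn` computes the overlap of two boxes. All illustrative.
    """
    # Build every comparison target pair (one labeled target x one identified target).
    pairs = [(a, b, iou_fn(a["box"], b["box"]))
             for a in labeled for b in identified]
    matches = []
    while pairs:
        a, b, overlap = max(pairs, key=lambda p: p[2])  # largest overlap first
        if overlap <= threshold or a["cls"] != b["cls"]:
            break  # no remaining pair can satisfy the matching requirement
        matches.append((a, b))
        # Delete every comparison pair containing either matched target.
        pairs = [p for p in pairs if p[0] is not a and p[1] is not b]
    return matches
```

Because the pair with the globally largest overlap is taken first and both of its targets are then removed, each labeled target and each identified target appears in at most one matching target pair, mirroring the deletion step described above.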
Referring to fig. 6, fig. 6 is a schematic diagram of cases in which the matching requirement is not satisfied, and the corresponding error types, provided by the present application. In fig. 6, the matching requirement can fail in 3 cases. First, the position coincidence degree of the comparison target pair with the largest position coincidence degree among the currently existing comparison target pairs is greater than the preset threshold, but the categories are different. Second, that largest position coincidence degree is smaller than the preset threshold, but the categories are the same. Third, that largest position coincidence degree is smaller than the preset threshold, and the categories are different.
S22: determining a consistency result of the first labeling information and the first identification information based on the target matching relationship.
In an embodiment, when corresponding matching target pairs exist for all labeled targets and all identified targets in the image to be verified, the second identification information and the second labeling information of each target in the image are consistent, and it is determined that the first labeling information is consistent with the first identification information.
In another embodiment, if the comparison target pair with the largest position coincidence degree among the currently existing comparison target pairs does not meet the matching requirement, that is, a certain identified target has no matching target pair, the labeled target of that comparison target pair is labeled incorrectly: the first labeling information is inconsistent with the first identification information and is therefore erroneous. The erroneous first labeling information can be fed back to the labeling party for re-labeling, without verifying the remaining labeling information.
Referring to fig. 6, in some embodiments, after the verification result of the first labeling information is determined in step S13 based on the consistency result of the first labeling information and the first identification information, if the verification result is that the first labeling information contains an error, the error type of the first labeling information may be determined based on the comparison result of each comparison target pair. The error type includes at least one of: a category error, a missing target label, and a mismatch between the labeled position of a target and its actual position; the error type corresponds to the kind of error in the first labeling information. A category error indicates that a target belonging to one category in the image to be verified is wrongly labeled as another category; for example, target A in the image to be verified is actually a "dog" but is labeled as a "cat". A missing target label indicates that a target present in the image to be verified has not been labeled (i.e., has no corresponding second labeling information); for example, if the image to be verified contains targets A and B but the first labeling information contains labeling information only for target A, then target B is missing a label. A mismatch between the labeled position and the actual position indicates that the labeled position of the target does not fully cover the actual position of the target in the image; for example, the labeled position information of target A in the image to be verified covers only part of the area of target A.
Determining the error type of the first labeling information based on the comparison result of each comparison target pair includes:
if the maximum position coincidence degree of each currently existing group of comparison target pairs is greater than the preset threshold value and the types of the comparison target pairs corresponding to the maximum position coincidence degree are different, it is determined that the first labeling information has a type error (for example, "wrong label" as shown in fig. 6). The maximum position coincidence degree indicates that the marked target and the identified target in each current group of comparison target pairs are most likely to be the same target in the image, the maximum position coincidence degree in the comparison target pairs is greater than a preset threshold value, it can be considered that the position of the marked target and the position of the identified target in the comparison target pairs are the same in the image, the marked target and the identified target correspond to the same target in the image, but for the same target at the same position, the type of the marked target is different from the type of the identified target, and a type marking error of the second marking information of the target is described, that is, the first marking information has a type error.
If the largest position coincidence degree among the currently existing comparison target pairs is smaller than the preset threshold and the categories of the corresponding comparison target pair are the same, it is determined that the labeled position of a target in the first labeling information does not match its actual position (for example, "the bounding box does not fit", as shown in fig. 6). The largest position coincidence degree being smaller than the preset threshold while the categories are the same indicates that the pair most likely to correspond to a single target in the image has differing labeled and identified positions, yet the categories agree. It can then be determined that the labeled target and the identified target in this pair are the same target in the image, but the labeled position information and the identified position information do not coincide sufficiently (the preset coincidence degree is not reached); that is, the labeled position information of the target does not match its actual position information, so the labeled position of the target in the first labeling information does not match the actual position.
If the largest position coincidence degree among the currently existing comparison target pairs is smaller than the preset threshold and the categories of the corresponding comparison target pair are different, it is determined that the first labeling information has a missing target label (for example, "missing label" as shown in fig. 6). The largest position coincidence degree being smaller than the preset threshold while the categories are different indicates that even the pair most likely to correspond to a single target has differing positions in the image, and the differing categories indicate that the labeled target and the identified target in this pair are not the same target in the image. The identified target therefore has no corresponding second labeling information, and the first labeling information has a missing target label.
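The three failure cases above can be summarized in a small decision helper (an illustrative sketch; the function name, the return strings, and the threshold default are assumptions, not terms used by the patent):

```python
def error_type(max_overlap, same_category, threshold=0.95):
    """Classify why the comparison target pair with the largest position
    coincidence degree failed the matching requirement, following the
    three cases described above. Illustrative sketch only."""
    if max_overlap > threshold and not same_category:
        return "category error"      # same position in the image, wrong class label
    if max_overlap <= threshold and same_category:
        return "position mismatch"   # labeled box does not fit the target
    if max_overlap <= threshold and not same_category:
        return "missing label"       # identified target was never labeled
    return None  # the pair actually satisfies the matching requirement
```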
In an embodiment, as shown in fig. 6, after any error type of the first labeling information is determined, prompt information containing the error type of the first labeling information may be generated and fed back to the labeling party of the image to be verified, where the prompt information is used to remind the labeling party to re-label the image to be verified based on the error type.
Referring to fig. 7, fig. 7 is a schematic flowchart of a label verification method according to another embodiment of the present application. In this embodiment, after an image to be verified labeled with first labeling information (second labeling information of each labeled target) is acquired, the image to be verified is processed by using an image processing model to obtain first identification information (second identification information of each identified target) of the identified targets in the image to be verified; a verification result of the first labeling information is then determined according to the consistency result of the first labeling information and the first identification information. After an error in the first labeling information and its error type are determined, prompt information containing the error type of the first labeling information is generated and fed back to the labeling party of the image to be verified, prompting the labeling party to re-label the image to be verified based on the error type. When the first labeling information is determined to be fully or partially correctly labeled, the parameters of the image processing model are adjusted using the corresponding fully or partially correct first labeling information and first identification information.
Referring to fig. 8, fig. 8 is a schematic frame diagram of an embodiment of a mark verification apparatus provided in the present application. In this embodiment, the annotation checking device 80 includes an obtaining module 81, a processing module 82, and a result determining module 83. The obtaining module 81 is configured to obtain an image to be verified including a plurality of targets, where the image to be verified is labeled with first labeling information related to the targets; the processing module 82 is configured to process the image to be verified by using the image processing model to obtain first identification information of the image to be verified, the first identification information being related to the target; the result determining module 83 is configured to determine a verification result of the first annotation information based on a consistency result of the first annotation information and the first identification information.
In some embodiments, the first annotation information in the image to be verified acquired by the acquiring module 81 includes second annotation information corresponding to at least one annotated target, where the at least one annotated target is at least a partial target in the image to be verified, the first identification information obtained by the processing module 82 includes second identification information corresponding to at least one identified target, and the at least one identified target is at least a partial target in the image to be verified; before the result determining module 83 determines the verification result of the annotation information based on the consistency result of the first annotation information and the first identification information, the method further includes: the result determining module 83 determines a target matching relationship between at least one labeled target and at least one identified target by using the second labeling information of each labeled target and the second identification information of each identified target; and determining a consistency result of the first labeling information and the first identification information based on the target matching relationship.
In some embodiments, the result determining module 83 determines the object matching relationship between at least one labeled object and at least one identified object by using the second labeling information of each labeled object and the second identification information of each identified object, including: comparing the second labeling information of each labeled target with the second identification information of each identified target to obtain comparison results of each group of comparison target pairs, wherein each group of comparison target pairs comprises a labeled target and an identified target; taking the comparison target pair with the comparison result meeting the matching requirement as a matching target pair with a matching relationship; determining a consistency result of the first labeling information and the first identification information based on the target matching relationship, wherein the consistency result comprises the following steps: and responding to the at least one marked target and the at least one identified target to have a corresponding matching target pair, and determining that the first marking information is consistent with the first identification information.
In some embodiments, the second labeling information and the second identification information each include the category and position information of the corresponding target; the comparison result includes whether the categories of the comparison target pair are the same and a position coincidence degree obtained based on the corresponding position information of the comparison target pair; the matching requirement is that the categories of the comparison target pair are the same and the position coincidence degree satisfies a coincidence degree requirement, where the coincidence degree requirement includes at least one of: the position coincidence degree is the largest among the currently existing comparison target pairs, and the position coincidence degree is greater than a preset threshold. And/or, taking the comparison target pair whose comparison result meets the matching requirement as the matching target pair having the matching relationship includes: selecting, from the currently existing comparison target pairs, one comparison target pair whose comparison result meets the matching requirement as a matching target pair; deleting the comparison target pairs containing either target of the matching target pair, and repeating the step of selecting, from the currently existing comparison target pairs, one comparison target pair whose comparison result meets the matching requirement as a matching target pair, together with the subsequent steps, until no comparison target pairs remain or none of the remaining comparison target pairs meets the matching requirement.
In some embodiments, after the result determining module 83 determines the verification result of the first labeling information based on the consistency result of the first labeling information and the first identification information, the method further includes: in response to the verification result being that the first labeling information contains an error, determining the error type of the first labeling information based on the comparison result, where the error type includes at least one of: a category error, a missing target label, and a mismatch between the labeled position of the target and the actual position.
In some embodiments, the second labeling information and the second identification information each include the category and position information of the corresponding target, and the comparison result includes whether the categories of the comparison target pair are the same and a position coincidence degree obtained based on the corresponding position information of the comparison target pair. Determining the error type of the first labeling information based on the comparison result includes any one or more of the following: determining that the first labeling information contains a category error in response to the largest position coincidence degree among the currently existing comparison target pairs being greater than a preset threshold while the categories of the corresponding comparison target pair are different; determining that the labeled position of a target in the first labeling information does not match its actual position in response to the largest position coincidence degree being smaller than the preset threshold while the categories of the corresponding comparison target pair are the same; and determining that the first labeling information has a missing target label in response to the largest position coincidence degree being smaller than the preset threshold while the categories of the corresponding comparison target pair are different.
In some embodiments, the result determining module 83, after determining the error type of the first labeling information, is further configured to: generate prompt information containing the error type of the first labeling information, and feed the prompt information back to the labeling party of the image to be verified, where the prompt information is used to remind the labeling party to re-label the image to be verified based on the error type.
In some embodiments, the result determining module 83 determines the verification result of the first labeling information based on the consistency result of the first labeling information and the first identification information, including: determining that the first labeling information is correct in response to the first labeling information being consistent with the first identification information; in response to the inconsistency between the first annotation information and the first identification information, determining that the first annotation information has an error; and/or after determining the verification result of the first labeling information based on the consistency result of the first labeling information and the first identification information, any one or more steps of the following steps are included: in response to the partial consistency of the first labeling information and the first identification information, processing the image to be verified by using the image processing model again to obtain first identification information of the image to be verified, wherein the image processing model used in the re-execution is the original image processing model, or parameter adjustment is carried out on the original image processing model by using the partial consistency of the first labeling information and the first identification information; and adjusting the parameters of the image processing model by using the corresponding first labeling information and the first identification information of which the consistency results are completely consistent or partially consistent.
Referring to fig. 9, fig. 9 is a schematic structural diagram of an embodiment of an electronic device provided in the present application. In this embodiment, the electronic device 90 includes a processor 91 and a memory 92.
The processor 91 may also be referred to as a CPU (Central Processing Unit). The processor 91 may be an integrated circuit chip having signal processing capabilities. The processor 91 may also be a general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, or discrete hardware components. A general-purpose processor may be a microprocessor, or the processor 91 may be any conventional processor or the like.
Memory 92 in electronic device 90 is used to store program instructions that are needed for processor 91 to operate.
The processor 91 is configured to execute program instructions to implement the methods provided by any of the embodiments described above, and any non-conflicting combinations.
Referring to fig. 10, fig. 10 is a schematic structural diagram of a computer-readable storage medium provided in the present application. The computer-readable storage medium 100 of the embodiments of the present application stores program instructions 101, and the program instructions 101, when executed, implement the method provided by any of the embodiments and any non-conflicting combination thereof. The program instructions 101 may form a program file stored in the computer-readable storage medium 100 in the form of a software product, so that a computer device (which may be a personal computer, a server, or a network device) executes all or part of the steps of the method according to the embodiments of the present application. The aforementioned computer-readable storage medium 100 includes various media capable of storing program code, such as a USB flash drive, a portable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk, as well as terminal devices such as a computer, a server, a mobile phone, and a tablet.
According to the scheme, after the image to be verified with the first annotation information is processed by the image processing model to obtain the first identification information of the target of the image to be verified, the verification result of the first annotation information can be determined based on the consistency result of the first annotation information and the first identification information.
In some embodiments, functions of or modules included in the apparatus provided in the embodiments of the present disclosure may be used to execute the method described in the above method embodiments, and specific implementation thereof may refer to the description of the above method embodiments, and for brevity, will not be described again here.
The foregoing description of the various embodiments is intended to highlight various differences between the embodiments, and the same or similar parts may be referred to each other, and for brevity, will not be described again herein.
In the several embodiments provided in the present application, it should be understood that the disclosed method and apparatus may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, a division of a module or a unit is merely a logical division, and an actual implementation may have another division, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some interfaces, and may be in an electrical, mechanical or other form.
Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit may be implemented in the form of hardware, or may also be implemented in the form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application may be substantially implemented or contributed to by the prior art, or all or part of the technical solution may be embodied in a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor (processor) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The above description is only an embodiment of the present application, and is not intended to limit the scope of the present application, and all equivalent structures or equivalent processes performed by the present application and the contents of the attached drawings, which are directly or indirectly applied to other related technical fields, are also included in the scope of the present application.

Claims (10)

1. A method for verifying a label, the method comprising:
acquiring an image to be verified containing a plurality of targets, wherein the image to be verified is marked with first marking information related to the targets;
processing the image to be verified by using an image processing model to obtain first identification information of the image to be verified, wherein the first identification information is related to the target;
and determining a verification result of the first labeling information based on a consistency result of the first labeling information and the first identification information.
2. The method according to claim 1, wherein the first labeling information comprises second labeling information respectively corresponding to at least one labeled target, the at least one labeled target being at least some of the targets in the image to be verified, and the first identification information comprises second identification information respectively corresponding to at least one identified target, the at least one identified target being at least some of the targets in the image to be verified;
before the determining a verification result of the annotation information based on the consistency result of the first annotation information and the first identification information, the method further includes:
determining a target matching relationship between the at least one labeled target and the at least one identified target by using second labeling information of each labeled target and second identification information of each identified target;
and determining a consistency result of the first labeling information and the first identification information based on the target matching relationship.
3. The method of claim 2, wherein the determining the object matching relationship between the at least one labeled object and the at least one identified object by using the second labeling information of each labeled object and the second identification information of each identified object comprises:
comparing the second labeling information of each labeled target with the second identification information of each identified target to obtain comparison results of each group of comparison target pairs, wherein each group of comparison target pairs comprises one labeled target and one identified target;
taking the comparison target pair with the comparison result meeting the matching requirement as a matching target pair with a matching relationship;
the determining a consistency result of the first labeling information and the first identification information based on the target matching relationship includes:
and responding to the at least one marked target and the at least one identified target to have the corresponding matching target pair, and determining that the first marking information is consistent with the first identification information.
4. The method according to claim 3, wherein the second labeling information and the second identification information each include the category and position information of the corresponding target, the comparison result includes whether the categories of the comparison target pair are the same and a position coincidence degree obtained from the position information of the comparison target pair, and the matching requirement is that the categories of the comparison target pair are the same and the position coincidence degree satisfies a coincidence degree requirement, the coincidence degree requirement including at least one of: the position coincidence degree is the largest among the currently existing groups of comparison target pairs, and the position coincidence degree is greater than a preset threshold value;
and/or, the step of taking the comparison target pair with the comparison result meeting the matching requirement as a matching target pair with a matching relationship comprises:
selecting, from the currently existing groups of comparison target pairs, a group of comparison target pairs whose comparison result meets the matching requirement as the matching target pair;
deleting the comparison target pairs containing either target of the matching target pair, and repeatedly executing the selecting step and the subsequent steps until no comparison target pair remains or none of the remaining comparison target pairs meets the matching requirement.
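Claims 3 and 4 describe a greedy matching procedure over category and positional overlap. Below is a minimal sketch of such a procedure, assuming axis-aligned bounding boxes and intersection-over-union (IoU) as the "position coincidence degree"; all names (`Target`, `iou`, `greedy_match`) and the threshold value are illustrative assumptions, not taken from the patent.

```python
from dataclasses import dataclass
from itertools import product

@dataclass
class Target:
    category: str
    box: tuple  # (x1, y1, x2, y2)

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def greedy_match(labeled, identified, threshold=0.5):
    """Return matched (labeled_index, identified_index) pairs."""
    # Form every comparison target pair: one labeled target, one identified target.
    pairs = [(i, j, iou(labeled[i].box, identified[j].box))
             for i, j in product(range(len(labeled)), range(len(identified)))]
    matches = []
    used_l, used_i = set(), set()
    # Repeatedly take the remaining pair with the largest overlap that also
    # satisfies the category and threshold requirements; pairs containing an
    # already-matched target are skipped (the "deleting" step of claim 4).
    for i, j, score in sorted(pairs, key=lambda p: -p[2]):
        if i in used_l or j in used_i:
            continue
        if score > threshold and labeled[i].category == identified[j].category:
            matches.append((i, j))
            used_l.add(i)
            used_i.add(j)
    return matches
```

Sorting all pairs once by descending overlap and skipping consumed targets is equivalent to the repeated select-then-delete loop in claim 4, since each iteration always picks the best surviving pair.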
5. The method of claim 3, wherein after the determining the verification result of the first label information based on the consistency result of the first label information and the first identification information, the method further comprises:
in response to the verification result of the first labeling information being that the first labeling information has an error, determining the error type of the first labeling information based on the comparison result, wherein the error type includes at least one of: a category error, a missed target, and a mismatch between the labeled position of a target and its actual position.
6. The method according to claim 5, wherein the second labeling information and the second identification information each include the category and position information of the corresponding target, and the comparison result includes whether the categories of the comparison target pair are the same and a position coincidence degree obtained from the position information of the comparison target pair;
the determining the error type of the first labeling information based on the comparison result includes any one or more of the following steps:
in response to the maximum position coincidence degree among the currently existing groups of comparison target pairs being greater than a preset threshold value and the categories of the comparison target pair corresponding to the maximum position coincidence degree being different, determining that the first labeling information has a category error;
in response to the maximum position coincidence degree among the currently existing groups of comparison target pairs being smaller than the preset threshold value and the categories of the comparison target pair corresponding to the maximum position coincidence degree being the same, determining that the labeled position of a target in the first labeling information does not match its actual position;
and in response to the maximum position coincidence degree among the currently existing groups of comparison target pairs being smaller than the preset threshold value and the categories of the comparison target pair corresponding to the maximum position coincidence degree being different, determining that the first labeling information has a missed target.
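The decision rules of claim 6 can be sketched as a small classifier over the best remaining comparison pair. The constant strings, function name, and default threshold below are illustrative assumptions, not taken from the patent.

```python
CATEGORY_ERROR = "category error"
POSITION_MISMATCH = "labeled position does not match actual position"
MISSED_TARGET = "target missed"

def classify_error(max_coincidence, categories_same, threshold=0.5):
    """Map the best remaining comparison pair to an error type per claim 6."""
    if max_coincidence > threshold and not categories_same:
        return CATEGORY_ERROR      # boxes agree, category labels disagree
    if max_coincidence < threshold and categories_same:
        return POSITION_MISMATCH   # category labels agree, boxes disagree
    if max_coincidence < threshold and not categories_same:
        return MISSED_TARGET       # neither agrees: likely an unlabeled target
    return None  # pair satisfies the matching requirement; no error
```

Note the rules partition on only two booleans (overlap above/below threshold, categories same/different), so the fourth combination is a valid match rather than an error.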
7. The method of claim 5, wherein after the determining the error type of the first annotation information, the method further comprises:
and generating prompt information containing the error type of the first labeling information, and feeding the prompt information back to an annotator of the image to be verified, wherein the prompt information is used to remind the annotator to re-label the image to be verified based on the error type.
8. The method of claim 1, wherein the determining a verification result of the first labeling information based on a result of the consistency between the first labeling information and the first identification information comprises:
in response to the first labeling information and the first identification information being consistent, determining that the first labeling information is correct;
in response to the first labeling information and the first identification information being inconsistent, determining that the first labeling information has an error;
and/or, the first labeling information comprises second labeling information respectively corresponding to at least one labeled target, the at least one labeled target being at least part of the targets in the image to be verified; after the determining the verification result of the first labeling information based on the consistency result of the first labeling information and the first identification information, the method further includes any one or more of the following steps:
in response to the first labeling information and the first identification information being partially consistent, re-executing the step of processing the image to be verified by using the image processing model to obtain the first identification information of the image to be verified, wherein the image processing model used in the re-executed step is obtained by adjusting the parameters of the original image processing model with the second labeling information and the first identification information of the partially consistent targets;
and adjusting the parameters of the image processing model by using the second labeling information and the first identification information corresponding to the targets whose consistency result is complete consistency or partial consistency.
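Claim 8's top-level decision maps the consistency result to a verdict and, for partial consistency, to a flag indicating that recognition should be re-run with a fine-tuned model. The enum values and function name below are illustrative assumptions.

```python
from enum import Enum

class Consistency(Enum):
    CONSISTENT = "consistent"
    PARTIAL = "partially consistent"
    INCONSISTENT = "inconsistent"

def verify(consistency):
    """Return (verdict, rerun_model) per claim 8's branches."""
    if consistency is Consistency.CONSISTENT:
        return "labeling correct", False
    if consistency is Consistency.PARTIAL:
        # Re-run recognition with a model fine-tuned on the consistent subset.
        return "labeling has errors", True
    return "labeling has errors", False
```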
9. An electronic device comprising a memory and a processor coupled to each other,
the memory stores program instructions;
the processor is configured to execute program instructions stored in the memory to implement the method of any of claims 1-8.
10. A computer-readable storage medium for storing program instructions executable to implement the method of any one of claims 1-8.
CN202211352144.4A 2022-10-31 2022-10-31 Method, device and equipment for verifying label and storage medium Pending CN115731435A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211352144.4A CN115731435A (en) 2022-10-31 2022-10-31 Method, device and equipment for verifying label and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211352144.4A CN115731435A (en) 2022-10-31 2022-10-31 Method, device and equipment for verifying label and storage medium

Publications (1)

Publication Number Publication Date
CN115731435A true CN115731435A (en) 2023-03-03

Family

ID=85294407

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211352144.4A Pending CN115731435A (en) 2022-10-31 2022-10-31 Method, device and equipment for verifying label and storage medium

Country Status (1)

Country Link
CN (1) CN115731435A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116503830A (en) * 2023-06-25 2023-07-28 小米汽车科技有限公司 Method and device for testing target detection algorithm and server
CN116503830B (en) * 2023-06-25 2023-10-13 小米汽车科技有限公司 Method and device for testing target detection algorithm and server

Similar Documents

Publication Publication Date Title
CN107798299B (en) Bill information identification method, electronic device and readable storage medium
CN110795482B (en) Data benchmarking method, device and storage device
CN110675546B (en) Invoice picture identification and verification method, system, equipment and readable storage medium
WO2019071662A1 (en) Electronic device, bill information identification method, and computer readable storage medium
CN111078908A (en) Data annotation detection method and device
CN111639648B (en) Certificate identification method, device, computing equipment and storage medium
WO2019100613A1 (en) Electronic insurance policy signing method and apparatus, computer device and storage medium
CN111553251B (en) Certificate four-corner defect detection method, device, equipment and storage medium
CN110245087B (en) State checking method and device of manual client for sample auditing
CN113837151B (en) Table image processing method and device, computer equipment and readable storage medium
CN105279525A (en) Image processing method and device
WO2020253741A1 (en) Method and device for checking status of manual client by using error samples
CN110288755A (en) The invoice method of inspection, server and storage medium based on text identification
CN115731435A (en) Method, device and equipment for verifying label and storage medium
CN112396122A (en) Method and system for multiple optimization of target detector based on vertex distance and cross-over ratio
CN111858977A (en) Bill information acquisition method and device, computer equipment and storage medium
CN112286780A (en) Method, device and equipment for testing recognition algorithm and storage medium
CN112487270A (en) Method and device for asset classification and accuracy verification based on picture identification
CN113284141A (en) Model determination method, device and equipment for defect detection
CN114373188A (en) Drawing identification method and device, electronic equipment and storage medium
CN112287828A (en) Financial statement generation method and device based on machine learning
CN110751110A (en) Identity image information verification method, device, equipment and storage medium
CN114357007A (en) Method and device for verifying label and electronic equipment
CN112529038B (en) Method and device for identifying main board material and storage medium
CN112529039B (en) Method and device for checking material information of main board and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination