CN113221999B - Method, apparatus and electronic device for obtaining picture annotation accuracy

Info

Publication number
CN113221999B
CN113221999B
Authority
CN
China
Prior art keywords
labeling
granularity
picture
membership
result
Prior art date
Legal status
Active
Application number
CN202110491590.2A
Other languages
Chinese (zh)
Other versions
CN113221999A (en)
Inventor
Yang Xue (杨雪)
Current Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202110491590.2A
Publication of CN113221999A
Application granted
Publication of CN113221999B

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/40Software arrangements specially adapted for pattern recognition, e.g. user interfaces or toolboxes therefor

Abstract

The invention discloses a method, an apparatus and an electronic device for obtaining the accuracy of picture annotation, and relates to the field of artificial intelligence, in particular to the fields of computer vision and image processing. The specific implementation scheme is as follows: obtain a labeling result of a picture and the labeling granularity of the labeling result; identify the judgment result of the labeling result; obtain, from the labeling granularities, at least one membership labeling granularity belonging to a target picture feature dimension, and obtain the labeling results of the membership labeling granularity; and obtain the labeling accuracy of the picture under the target picture feature dimension according to the judgment results of those labeling results. In this way, the labeling accuracy of a picture under different feature dimensions can be obtained based on labeling granularity, which is highly flexible and enriches the measures of picture labeling accuracy.

Description

Method, apparatus and electronic device for obtaining picture annotation accuracy
Technical Field
The present disclosure relates to the field of computer technologies, and in particular to a method, an apparatus, an electronic device, a storage medium and a computer program product for obtaining the accuracy of picture annotation.
Background
At present, with the development of artificial intelligence technology, picture annotation is widely applied in fields such as obstacle annotation in driverless-vehicle scenes and skeleton key point annotation. For example, in obstacle annotation in driverless-vehicle scenes, attributes such as the shooting time, type, position, occlusion and truncation of pictures captured by the driverless vehicle can be annotated. However, existing methods for obtaining the accuracy of picture annotation are singular and insufficiently flexible, and cannot comprehensively reflect the accuracy of picture annotation.
Disclosure of Invention
Provided are a method, an apparatus, an electronic device, a storage medium, and a computer program product for obtaining the accuracy of picture annotation.
According to a first aspect, there is provided a method for obtaining the accuracy of picture annotation, including: obtaining a labeling result of a picture and a labeling granularity of the labeling result; identifying a judgment result of the labeling result; obtaining, from the labeling granularity, at least one membership labeling granularity belonging to a target picture feature dimension, and obtaining a labeling result of the membership labeling granularity; and obtaining the labeling accuracy of the picture under the target picture feature dimension according to the judgment result of the labeling result of the membership labeling granularity.
According to a second aspect, there is provided an apparatus for obtaining the accuracy of picture annotation, including: a first acquisition module configured to acquire a labeling result of a picture and a labeling granularity of the labeling result; a first identification module configured to identify a judgment result of the labeling result; a second acquisition module configured to acquire, from the labeling granularity, at least one membership labeling granularity belonging to a target picture feature dimension and to acquire a labeling result of the membership labeling granularity; and a third acquisition module configured to acquire the labeling accuracy of the picture under the target picture feature dimension according to the judgment result of the labeling result of the membership labeling granularity.
According to a third aspect, there is provided an electronic device including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method for obtaining the accuracy of picture annotation according to the first aspect of the present disclosure.
According to a fourth aspect, there is provided a non-transitory computer-readable storage medium storing computer instructions for causing the computer to perform the method of obtaining accuracy of picture annotation according to the first aspect of the present disclosure.
According to a fifth aspect, there is provided a computer program product comprising a computer program, wherein the computer program, when executed by a processor, implements the method for obtaining the accuracy of the picture annotation according to the first aspect of the disclosure.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following specification.
Drawings
The drawings are for a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
fig. 1 is a flowchart of a method for obtaining the accuracy of picture annotation according to a first embodiment of the present disclosure;
fig. 2 is a flowchart of identifying the judgment result of the labeling result in the method for obtaining the accuracy of picture annotation according to a second embodiment of the present disclosure;
fig. 3 is a flowchart of obtaining the labeling accuracy of the picture under the target picture feature dimension in the method for obtaining the accuracy of picture annotation according to a third embodiment of the present disclosure;
fig. 4 is a flowchart of obtaining the weight of at least one membership labeling granularity in the method for obtaining the accuracy of picture annotation according to a fourth embodiment of the present disclosure;
fig. 5 is a flowchart of obtaining the labeling accuracy of the picture under a picture feature dimension in the method for obtaining the accuracy of picture annotation according to a fifth embodiment of the present disclosure;
fig. 6 is a block diagram of an apparatus for obtaining the accuracy of picture annotation according to a first embodiment of the present disclosure;
fig. 7 is a block diagram of an electronic device for implementing the method for obtaining the accuracy of picture annotation according to an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present disclosure to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
AI (Artificial Intelligence) is a technical science that studies and develops theories, methods, techniques and application systems for simulating, extending and expanding human intelligence. At present, AI technology has the advantages of a high degree of automation, high accuracy and low cost, and is widely applied.
Computer Vision refers to machine vision in which a camera and a computer are used in place of human eyes to recognize, track and measure targets, followed by further graphic processing so that the processed result becomes an image more suitable for human observation or for transmission to an instrument for detection. Computer vision is a comprehensive discipline spanning computer science and engineering, signal processing, physics, applied mathematics and statistics, neurophysiology and cognitive science, among others.
Image Processing refers to techniques for analyzing an image with a computer to achieve a desired result. Image processing generally means digital image processing; a digital image is a large two-dimensional array obtained by shooting with equipment such as industrial cameras, video cameras and scanners, in which the elements of the array are called pixels and their values are called gray values. Image processing techniques generally comprise three parts: image compression; enhancement and restoration; and matching, description and recognition.
Fig. 1 is a flowchart illustrating a method for obtaining accuracy of picture annotation according to a first embodiment of the present disclosure.
As shown in fig. 1, a method for obtaining a correctness of a picture annotation according to a first embodiment of the present disclosure includes:
s101, obtaining a labeling result of the picture and a labeling granularity of the labeling result.
It should be noted that, the execution body of the method for obtaining the accuracy of the picture marking in the embodiment of the disclosure may be a hardware device with data information processing capability and/or software necessary for driving the hardware device to work. Alternatively, the execution body may include a workstation, a server, a computer, a user terminal, and other intelligent devices. The user terminal comprises, but is not limited to, a mobile phone, a computer, intelligent voice interaction equipment, intelligent household appliances, vehicle-mounted terminals and the like.
In the embodiment of the disclosure, the labeling result of the picture and the labeling granularity of the labeling result can be obtained. It should be noted that, in the embodiment of the present disclosure, the labeling result and the labeling granularity of the picture are not excessively limited, the labeling granularity may be preset according to the actual situation, and different labeling results may correspond to different labeling granularities.
In one embodiment, a labeling result for the entire picture and/or a labeling result for elements in the picture may be obtained.
The labeling results for the whole picture include, but are not limited to, labeling results for whether the whole picture is clear, whether the shooting scene of the whole picture meets the requirement, whether the whole picture is markable, whether the shooting time of the whole picture is daytime or evening, and the like.
Wherein the elements include, but are not limited to, point elements, line elements, area elements, box elements, and the like.
The labeling results for the elements in the picture include, but are not limited to, labeling results for the position, type and other attributes of the elements. For example, the labeling results for the point elements in the picture include, but are not limited to, labeling results for whether the positions of the point elements are accurate, whether the objects to be labeled corresponding to the point elements are front wheels or rear wheels, and the like; labeling results for the region elements in the picture include, but are not limited to labeling results for the degree of fit of the region elements with the actual picture, whether the region elements are sky or land, and the like; the labeling results for the frame elements in the picture include, but are not limited to, labeling results for whether the frame elements completely wrap the object to be labeled, whether the object to be labeled in the frame elements is a car or a large truck, and the like.
In one embodiment, obtaining the annotation granularity of the annotation result may include identifying an annotation object of the annotation result, and determining the annotation granularity of the annotation result based on the annotation object, wherein the annotation object includes the entire picture and/or elements in the picture.
The determining the labeling granularity of the labeling result based on the labeling object may include obtaining candidate labeling granularities corresponding to the labeling object according to a correspondence between labeling objects and labeling granularities, and determining the labeling granularity of the labeling result from the candidate labeling granularities based on the content of the labeling result. For example, if the labeling result is a labeling result for whether the whole picture is clear, it may be determined that the labeling object of the labeling result is the whole picture; the candidate labeling granularities obtained for the whole picture include whether the whole picture is clear, whether the shooting scene of the whole picture meets the requirement, whether the whole picture is markable, whether the shooting time of the whole picture is daytime or evening, and the like; and the labeling granularity of the labeling result is then determined, from these candidates and based on the content of the labeling result, to be whether the whole picture is clear.
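By way of illustration, this correspondence can be kept as a lookup table from labeling objects to candidate labeling granularities. The following minimal Python sketch uses assumed names (OBJECT_TO_GRANULARITIES, determine_granularity) and a simple keyword match; it is an illustration under those assumptions, not the disclosed implementation.

```python
# Hypothetical lookup table: labeling object -> candidate labeling granularities.
# All names are assumptions for illustration only.
OBJECT_TO_GRANULARITIES = {
    "whole_picture": [
        "whole_picture_clear",
        "shooting_scene_meets_requirement",
        "whole_picture_markable",
        "shooting_time_daytime_or_evening",
    ],
    "frame_element": [
        "frame_wraps_object_completely",
        "object_is_car_or_large_truck",
    ],
}

def determine_granularity(labeling_object: str, result_content: str) -> str:
    """Pick, from the candidates of the labeling object, the granularity
    whose trailing keyword appears in the content of the labeling result."""
    for granularity in OBJECT_TO_GRANULARITIES.get(labeling_object, []):
        if granularity.split("_")[-1] in result_content:
            return granularity
    raise ValueError(f"no candidate granularity matches {result_content!r}")

# e.g. determine_granularity("whole_picture", "the picture is clear")
# -> "whole_picture_clear"
```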
S102, identifying a judgment result of the labeling result.
In the embodiment of the disclosure, the judgment result of the labeling result may be correct or incorrect.
In one embodiment, the judgment result of the labeling result may be identified manually. For example, suppose the labeling result indicates whether the object to be labeled in a frame element is a car or a large truck. If the labeling result is a car but the manual judgment disagrees, the judgment result of the labeling result can be identified as an error; conversely, if the labeling result is a car and the manual judgment agrees, the judgment result of the labeling result can be identified as correct.
S103, at least one membership labeling granularity belonging to the feature dimension of the target picture is obtained from the labeling granularity, and a labeling result of the membership labeling granularity is obtained.
In the embodiment of the disclosure, the picture feature dimensions may be preset according to the actual situation, which is not limited here. For example, picture feature dimensions include, but are not limited to, position, type, attribute, element, data, topic, page and batch. The attribute feature dimension is obtained by aggregating a position feature dimension and a type feature dimension, the element feature dimension is obtained by aggregating a plurality of attribute feature dimensions, the data feature dimension is obtained by aggregating a plurality of element feature dimensions, the topic feature dimension is obtained by aggregating a plurality of data feature dimensions, the page feature dimension is obtained by aggregating a plurality of topic feature dimensions, and the batch feature dimension is obtained by aggregating a plurality of page feature dimensions.
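For concreteness, this aggregation chain can be written down as a small table. The sketch below is illustrative only; the English dimension names and the Python encoding are assumptions, not part of the disclosure.

```python
# Assumed encoding of the aggregation chain of picture feature dimensions:
# each dimension is aggregated from instances of the dimensions it lists.
FEATURE_DIMENSION_HIERARCHY = {
    "attribute": ["position", "type"],  # position + type -> attribute
    "element":   ["attribute"],         # several attributes -> element
    "data":      ["element"],           # several elements  -> data
    "topic":     ["data"],              # several data      -> topic
    "page":      ["topic"],             # several topics    -> page
    "batch":     ["page"],              # several pages     -> batch
}
```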
In embodiments of the present disclosure, the annotation granularity has a membership relationship with the picture feature dimension, and one or more annotation granularities may be affiliated with one picture feature dimension.
For example, labeling granularities for whether the position of a point element is accurate, the degree of fit between an area element and the actual picture, whether a frame element completely wraps the object to be labeled, and the like belong to the position feature dimension.
For example, labeling granularities for whether the object to be labeled corresponding to a point element is a front wheel or a rear wheel, whether an area element is sky or land, whether the object to be labeled in a frame element is a car or a large truck, and the like belong to the type feature dimension.
For example, labeling granularities for whether the position of a point element is accurate, whether the object to be labeled corresponding to a point element is a front wheel or a rear wheel, whether an area element is sky or land, whether a frame element completely wraps the object to be labeled, whether the object to be labeled in a frame element is a car or a large truck, and the like belong to the attribute feature dimension.
For example, labeling granularities for whether the position of a point element is accurate, whether the object to be labeled corresponding to a point element is a front wheel or a rear wheel, and the like belong to the point element feature dimension.
For example, labeling granularities for the degree of fit between an area element and the actual picture, whether an area element is sky or land, and the like belong to the area element feature dimension.
For example, labeling granularities for whether a frame element completely wraps the object to be labeled, whether the object to be labeled in a frame element is a car or a large truck, and the like belong to the frame element feature dimension.
It should be noted that, the membership relationship between the labeling granularity and the feature dimension of the picture may also include other embodiments, which are not limited herein.
In the embodiment of the disclosure, at least one membership labeling granularity belonging to the feature dimension of the target picture can be obtained from the labeling granularity, and a labeling result of the membership labeling granularity is obtained. The number of the feature dimensions of the target picture can be one or more, and at least one membership annotation granularity belonging to different feature dimensions of the target picture can be obtained respectively.
For example, when the feature dimension of the target picture is the feature dimension of the position, the membership labeling granularity obtained from the labeling granularity includes labeling granularity for whether the position of the point element is accurate, the fitting degree of the area element and the actual picture, whether the frame element completely wraps the object to be labeled, and the like.
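A minimal sketch of this filtering step, assuming a hypothetical membership table (DIMENSION_MEMBERSHIP) that records which labeling granularities belong to which picture feature dimension; the names are illustrative, not the disclosed implementation:

```python
# Hypothetical membership table: picture feature dimension -> the labeling
# granularities that belong to it (names are illustrative only).
DIMENSION_MEMBERSHIP = {
    "position": [
        "point_position_accurate",
        "area_fits_actual_picture",
        "frame_wraps_object_completely",
    ],
    "type": [
        "point_front_or_rear_wheel",
        "area_sky_or_land",
        "object_is_car_or_large_truck",
    ],
}

def membership_granularities(target_dimension, labeled_granularities):
    """Return the labeling granularities present in this picture's results
    that belong to the target picture feature dimension."""
    labeled = set(labeled_granularities)
    members = DIMENSION_MEMBERSHIP.get(target_dimension, [])
    return [g for g in members if g in labeled]
```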
S104, obtaining the labeling accuracy of the picture under the target picture feature dimension according to the judgment result of the labeling result of the membership labeling granularity.
In the embodiment of the disclosure, the labeling accuracy of the picture under the target picture feature dimension can be obtained according to the judgment result of the labeling result of the membership labeling granularity.
For example, when the target picture feature dimension is the position feature dimension, the membership labeling granularities obtained from the labeling granularities include those for whether the position of the point element is accurate, the degree of fit between the area element and the actual picture, whether the frame element completely wraps the object to be labeled, and the like; the labeling accuracy of the picture under the position feature dimension is then obtained according to the judgment results of the labeling results of these membership labeling granularities.
In summary, according to the method for obtaining the accuracy of the image annotation according to the embodiment of the disclosure, at least one membership annotation granularity belonging to the feature dimension of the target image is obtained from the annotation granularity, the annotation result of the membership annotation granularity is obtained, and the accuracy of the image annotation under the feature dimension of the target image is obtained according to the judgment result of the annotation result of the membership annotation granularity. Therefore, the marking accuracy of the picture under different feature dimensions can be obtained based on marking granularity, the flexibility is high, and the diversity of the accuracy of the picture marking is improved.
On the basis of any of the above embodiments, as shown in fig. 2, identifying the judgment result of the labeling result in step S102 includes:
S201, obtaining a reference marking result corresponding to the marking result.
In the embodiment of the disclosure, the reference marking result corresponding to the marking result can be obtained.
In one embodiment, the pictures can be marked manually, and the marking result of the pictures can be used as a reference marking result.
S202, comparing the labeling result with the reference labeling result.
S203, in response to the labeling result being consistent with the reference labeling result, identifying that the judgment result of the labeling result is correct.
S204, in response to the labeling result being inconsistent with the reference labeling result, identifying that the judgment result of the labeling result is an error.
In embodiments of the present disclosure, the labeling results may be compared to reference labeling results.
In one embodiment, the judgment result of the labeling result may be identified as correct in response to the labeling result being consistent with the reference labeling result. For example, suppose the labeling result indicates whether the object to be labeled in a frame element is a car or a large truck, and the labeling result is a car. If the reference labeling result is also a car, then in response to the labeling result being consistent with the reference labeling result, the judgment result of the labeling result can be identified as correct.
In one embodiment, the judgment result of the labeling result may be identified as an error in response to the labeling result being inconsistent with the reference labeling result. For example, suppose again that the labeling result is a car. If the reference labeling result is a large truck, then in response to the labeling result being inconsistent with the reference labeling result, the judgment result of the labeling result can be identified as an error.
Therefore, the method can compare the labeling result with the reference labeling result, and identify the judgment result of the labeling result according to whether the labeling result is consistent with the reference labeling result.
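As an illustrative sketch of this comparison (function and value names are assumptions; a real labeling result may carry several fields), the judgment reduces to an equality check against the reference:

```python
def judge_labeling_result(labeling_result: str, reference_result: str) -> bool:
    """Identify the judgment result: True (correct) when the labeling result
    is consistent with the reference labeling result, False (error) otherwise."""
    return labeling_result == reference_result

# car vs. car -> correct; car vs. large truck -> error
assert judge_labeling_result("car", "car") is True
assert judge_labeling_result("car", "large truck") is False
```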
On the basis of any of the above embodiments, as shown in fig. 3, in step S104, according to the determination result of the labeling result of the membership labeling granularity, obtaining the labeling accuracy of the picture in the feature dimension of the target picture includes:
s301, obtaining the weight of at least one membership labeling granularity.
In the embodiment of the disclosure, weights may be set for the labeling granularities in advance, and different labeling granularities may correspond to different weights. For example, the weight of the labeling granularity for the position attribute of an element may be set to 50%, and the weight of the labeling granularity for the type attribute of an element may be set to 10%.
In embodiments of the present disclosure, a weight of at least one membership labeling granularity may be obtained.
In one embodiment, a mapping relationship or mapping table between the labeling granularity and the weight may be established in advance, and after the membership labeling granularity is obtained, the weight of the membership labeling granularity may be obtained by querying the mapping relationship or mapping table. It should be noted that, the mapping relationship or the mapping table may be set according to the actual situation.
S302, according to the judgment result of the labeling result of any membership labeling granularity, the labeling accuracy of any membership labeling granularity is obtained.
In the embodiment of the disclosure, the labeling accuracy of any membership labeling granularity can be obtained according to the judgment result of the labeling result of any membership labeling granularity.
In one embodiment, according to the judgment result of the labeling result of any membership labeling granularity, obtaining the labeling accuracy of any membership labeling granularity may include obtaining a first number of labeling results of any membership labeling granularity, obtaining a second number of labeling results of which the judgment result is correct in the labeling results of any membership labeling granularity, and obtaining a ratio of the second number to the first number as the labeling accuracy of any membership labeling granularity.
For example, if the first number of labeling results of a membership labeling granularity is 10, and the second number of those labeling results whose judgment result is correct is 4, the labeling accuracy of that membership labeling granularity is 40%.
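A minimal sketch of this ratio, with assumed names; the judgment results are taken as booleans where True means correct:

```python
def granularity_accuracy(judgment_results) -> float:
    """Labeling accuracy of one membership labeling granularity: the second
    number (results judged correct) divided by the first number (all results)."""
    first_number = len(judgment_results)
    if first_number == 0:
        return 0.0
    second_number = sum(1 for correct in judgment_results if correct)
    return second_number / first_number

# 10 labeling results, 4 judged correct -> 40%, as in the example above.
assert granularity_accuracy([True] * 4 + [False] * 6) == 0.4
```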
S303, obtaining the labeling accuracy of the picture under the characteristic dimension of the target picture according to the labeling accuracy of the membership labeling granularity and the weight of the membership labeling granularity.
In the embodiment of the disclosure, the labeling accuracy of the picture under the target picture feature dimension can be obtained according to the labeling accuracy of the membership labeling granularity and the weight of the membership labeling granularity.
In one embodiment, obtaining the labeling accuracy of the picture under the target picture feature dimension according to the labeling accuracies and weights of the membership labeling granularities may include identifying that the membership labeling granularities include at least one target labeling granularity, and, in response to the judgment result of the labeling result of any target labeling granularity being an error, obtaining 0 as the labeling accuracy of the picture under the target picture feature dimension.
The target marking granularity can be set according to practical situations, for example, the target marking granularity includes, but is not limited to, marking granularity for whether the whole picture is clear, whether the shooting scene of the whole picture meets the requirement, whether the whole picture is markable, and the like.
For example, if the membership labeling granularities include the labeling granularity for whether the whole picture is markable, and the judgment result of the corresponding labeling result is an error, the labeling accuracy of the picture under the target picture feature dimension can be obtained as 0.
Therefore, when the membership labeling granularity comprises the target labeling granularity and the judgment result of the labeling result of any target labeling granularity is wrong, the labeling accuracy of the picture under the characteristic dimension of the target picture can be directly obtained to be 0.
In one embodiment, obtaining the labeling accuracy of the picture under the target picture feature dimension according to the labeling accuracies and weights of the membership labeling granularities may include identifying that the membership labeling granularities include at least one target labeling granularity, and, in response to the judgment results of the labeling results of all target labeling granularities being correct, obtaining the sum of the products of the labeling accuracies and weights of the remaining membership labeling granularities as the labeling accuracy of the picture under the target picture feature dimension.
For example, if the membership labeling granularities include the labeling granularity for whether the whole picture is markable, and the judgment result of the corresponding labeling result is correct, and the labeling accuracies of the remaining membership labeling granularities are 80%, 50% and 60% with weights of 10%, 50% and 40% respectively, then the labeling accuracy of the picture under the target picture feature dimension is 80% × 10% + 50% × 50% + 60% × 40% = 57%.
Therefore, when the membership labeling granularity contains the target labeling granularity and the judgment result of the labeling results of all the target labeling granularity is correct, the method can acquire the sum of products of labeling accuracy and weights of the rest membership labeling granularity as the labeling accuracy of the picture under the characteristic dimension of the target picture.
In one embodiment, obtaining the labeling accuracy of the picture under the target picture feature dimension according to the labeling accuracies and weights of the membership labeling granularities may include identifying that the membership labeling granularities do not include a target labeling granularity, and obtaining the sum of the products of the labeling accuracies and weights of the membership labeling granularities as the labeling accuracy of the picture under the target picture feature dimension. Therefore, when the membership labeling granularities do not contain a target labeling granularity, the method can directly obtain the sum of the products of the labeling accuracies and weights of the membership labeling granularities as the labeling accuracy of the picture under the target picture feature dimension.
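The three cases above can be collected into one routine. The following sketch assumes each membership labeling granularity carries its labeling accuracy, its weight, and whether all of its labeling results were judged correct; the names and data layout are assumptions for illustration, not the disclosed implementation.

```python
def dimension_accuracy(granularities, target_granularities):
    """Labeling accuracy of the picture under the target picture feature
    dimension, per the three cases above.

    granularities: dict mapping granularity name ->
        (accuracy, weight, all_results_judged_correct)
    target_granularities: set of names treated as target labeling granularities.
    """
    # Case 1: any target labeling granularity judged wrong -> accuracy is 0.
    for name, (_, _, all_correct) in granularities.items():
        if name in target_granularities and not all_correct:
            return 0.0
    # Cases 2 and 3: weighted sum over the remaining (non-target) granularities;
    # if no target granularity is present, this covers every granularity.
    return sum(acc * weight
               for name, (acc, weight, _) in granularities.items()
               if name not in target_granularities)

# Matches the worked example: 80%*10% + 50%*50% + 60%*40% = 57%.
example = {
    "whole_picture_markable": (1.0, 0.0, True),  # target granularity, correct
    "g1": (0.8, 0.1, True),
    "g2": (0.5, 0.5, True),
    "g3": (0.6, 0.4, True),
}
assert abs(dimension_accuracy(example, {"whole_picture_markable"}) - 0.57) < 1e-9
```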
It should be noted that, according to the labeling accuracy of the membership labeling granularity and the weight of the membership labeling granularity, the labeling accuracy of the image under the characteristic dimension of the target image is obtained, and other possible embodiments may also be included, which are not limited too much herein.
Therefore, the method can obtain the labeling accuracy of any membership labeling granularity according to the judgment result of the labeling result of any membership labeling granularity, and obtain the labeling accuracy of the picture under the characteristic dimension of the target picture according to the labeling accuracy and the weight of the membership labeling granularity.
On the basis of any of the above embodiments, as shown in fig. 4, the obtaining the weight of the at least one membership label granularity in step S301 may include:
s401, historical weights of the membership annotation granularity, first attention parameters and/or second attention parameters are obtained, wherein the first attention parameters are used for representing the attention of a user to the membership annotation granularity, and the second attention parameters are used for representing the attention of a server to the membership annotation granularity.
In embodiments of the present disclosure, a historical weight, a first attention parameter, and/or a second attention parameter of a membership annotation granularity may be obtained.
In one embodiment, the larger the first attention parameter, the higher the attention of the characterization user to the membership labeling granularity, and the larger the second attention parameter, the higher the attention of the characterization server to the membership labeling granularity.
In one embodiment, the weight previously set for the membership labeling granularity may be used as the historical weight. For example, the set weights of labeling granularities may be saved in a storage space of the server, and the previously set weight of the membership labeling granularity may then be obtained from that storage space as its historical weight. As another example, the average of the weights set in the previous N settings of the membership labeling granularity can be obtained as the historical weight, which provides good timeliness, where N is a positive integer that can be set according to the actual situation.
In one embodiment, a mapping relation or a mapping table between the membership labeling granularity and the historical weight, the first attention parameter and the second attention parameter can be established in advance, and the historical weight, the first attention parameter and the second attention parameter corresponding to the membership labeling granularity are obtained by inquiring the mapping relation or the mapping table. It should be noted that, the mapping relationship or the mapping table may be set according to the actual situation.
S402, according to the first attention degree parameter and/or the second attention degree parameter, the adjustment parameter of the historical weight is determined.
In embodiments of the present disclosure, the adjustment parameters of the historical weights may be determined according to the first and/or second attention parameters.
In one embodiment, the adjustment direction and the adjustment value of the historical weight may be determined according to the first attention parameter and/or the second attention parameter. For example, the larger the first attention degree parameter and/or the second attention degree parameter, the higher the attention degree of the user and/or the server side to the membership labeling granularity is represented, the adjusting direction of the historical weight can be determined to be an increasing direction, and the adjusting value of the historical weight can be determined according to the first attention degree parameter and/or the second attention degree parameter.
S403, adjusting the historical weight based on the adjustment parameter, and taking the adjusted historical weight as the weight of the membership labeling granularity.
In the embodiment of the disclosure, the historical weight can be adjusted based on the adjustment parameter, and the adjusted historical weight is used as the weight of the membership labeling granularity. For example, if the adjustment direction of the historical weight of the membership labeling granularity is the increasing direction, the adjustment value is 10% and the historical weight is 20%, then the adjusted historical weight is 30%, and the weight of the membership labeling granularity is 30%.
Therefore, the method can determine the adjustment parameters of the historical weights according to the first attention degree parameters and/or the second attention degree parameters, adjust the historical weights based on the adjustment parameters, and take the adjusted historical weights as weights of membership labeling granularity.
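A minimal sketch of this adjustment, assuming a simple rule in which a high attention parameter raises the historical weight and a low one lowers it; the threshold, step size and clamping are illustrative assumptions, not the disclosed scheme.

```python
def adjust_history_weight(history_weight: float, attention: float,
                          step: float = 0.10, threshold: float = 0.5) -> float:
    """Adjust the historical weight of a membership labeling granularity:
    a high attention parameter raises it, a low one lowers it (assumed rule)."""
    direction = 1.0 if attention >= threshold else -1.0  # adjustment direction
    adjusted = history_weight + direction * step          # apply adjustment value
    return min(max(adjusted, 0.0), 1.0)                   # keep a valid weight

# Matches the example above: 20% historical weight raised by 10% -> 30%.
assert abs(adjust_history_weight(0.20, attention=0.9) - 0.30) < 1e-9
```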
On the basis of any of the above embodiments, as shown in fig. 5, obtaining the labeling accuracy of the picture in the picture feature dimension may include:
s501, identifying hierarchical relationships among picture feature dimensions.
In the embodiment of the disclosure, the picture feature dimensions have a hierarchical relationship, and the hierarchical relationship between the picture feature dimensions can be identified.
S502, for any first picture feature dimension of the picture belonging to a first level, obtaining each second picture feature dimension belonging to a second level corresponding to the first picture feature dimension, and the labeling accuracy of the picture under each such second picture feature dimension, where the second picture feature dimensions belonging to the second level are used for aggregation into the first picture feature dimension of the first level.
In an embodiment of the present disclosure, the picture feature dimensions include a first picture feature dimension belonging to a first hierarchy and a second picture feature dimension belonging to a second hierarchy. The second image feature dimensions belonging to the second level are used for being aggregated into first image feature dimensions of the first level, the first image feature dimensions and the second image feature dimensions have corresponding relations, and different first image feature dimensions can correspond to different second image feature dimensions.
For example, picture feature dimensions include, but are not limited to, position, type, attribute, element, data, topic, page and batch. The position feature dimension and the type feature dimension are used for being aggregated into an attribute feature dimension, the attribute feature dimensions are used for being aggregated into an element feature dimension, the element feature dimensions are used for being aggregated into a data feature dimension, the data feature dimensions are used for being aggregated into a topic feature dimension, the topic feature dimensions are used for being aggregated into a page feature dimension, and the page feature dimensions are used for being aggregated into a batch feature dimension.
In the embodiment of the disclosure, each second picture feature dimension belonging to the second hierarchy corresponding to the first picture feature dimension and the labeling accuracy of the picture under the second picture feature dimension can be obtained for any one of the first picture feature dimensions belonging to the first hierarchy.
For example, for an element feature dimension belonging to a first hierarchy, the acquired second picture feature dimension belonging to a second hierarchy includes an attribute feature dimension, and a labeling accuracy of the picture under the attribute feature dimension may be acquired.
S503, obtaining the labeling accuracy of the picture under the first picture feature dimension according to the labeling accuracy of the picture under each second picture feature dimension.
In the embodiment of the disclosure, the labeling accuracy of the picture in the first picture feature dimension can be obtained according to the labeling accuracy of the picture in each second picture feature dimension.
In one embodiment, according to the labeling accuracy of the picture in each second picture feature dimension, obtaining the labeling accuracy of the picture in the first picture feature dimension may include obtaining an average value of the labeling accuracy of the picture in all the second picture feature dimensions as the labeling accuracy of the picture in the first picture feature dimension.
For example, for the topic feature dimension belonging to the first level, the acquired second picture feature dimensions belonging to the second level include data feature dimensions; if the labeling accuracies of the picture under the data feature dimensions are 80%, 50% and 60% respectively, the average of the labeling accuracies of the picture under all the data feature dimensions is 63.3%, and the labeling accuracy of the picture under the topic feature dimension is 63.3%.
Therefore, the method can obtain the labeling accuracy of the picture in the first picture feature dimension of the first hierarchy according to the labeling accuracy of the picture in the second picture feature dimension of the second hierarchy, and can obtain the labeling accuracy by utilizing the hierarchical relationship among the picture feature dimensions.
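As a sketch with assumed names, the first-level labeling accuracy in this embodiment is the mean of the second-level labeling accuracies:

```python
def first_level_accuracy(second_level_accuracies) -> float:
    """Labeling accuracy of the picture under a first-level picture feature
    dimension: the average of its labeling accuracies under the corresponding
    second-level picture feature dimensions."""
    if not second_level_accuracies:
        return 0.0
    return sum(second_level_accuracies) / len(second_level_accuracies)

# Matches the example above: (80% + 50% + 60%) / 3 ≈ 63.3%.
print(round(first_level_accuracy([0.8, 0.5, 0.6]) * 100, 1))  # 63.3
```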
Fig. 6 is a block diagram of a picture marking accuracy obtaining apparatus according to a first embodiment of the present disclosure.
As shown in fig. 6, a device 600 for obtaining accuracy of picture annotation according to an embodiment of the disclosure includes: a first acquisition module 601, a first identification module 602, a second acquisition module 603, and a third acquisition module 604.
The first obtaining module 601 is configured to obtain a labeling result of a picture and a labeling granularity of the labeling result;
a first identifying module 602, configured to identify a judgment result of the labeling result;
a second obtaining module 603, configured to obtain at least one membership annotation granularity belonging to the feature dimension of the target picture from the annotation granularity, and obtain an annotation result of the membership annotation granularity;
and a third obtaining module 604, configured to obtain, according to a determination result of the labeling result of the membership labeling granularity, a labeling accuracy of the picture in the feature dimension of the target picture.
In one embodiment of the present disclosure, the first identification module 602 is specifically configured to: obtaining a reference marking result corresponding to the marking result; comparing the labeling result with the reference labeling result; responding to the labeling result being consistent with the reference labeling result, and identifying that the judging result of the labeling result is correct; and in response to the inconsistent labeling result and the reference labeling result, recognizing that the judgment result of the labeling result is an error.
In one embodiment of the present disclosure, the third obtaining module 604 includes: the first acquisition unit is used for acquiring the weight of the at least one membership labeling granularity; the second acquisition unit is used for acquiring the labeling accuracy of any membership labeling granularity according to the judgment result of the labeling result of any membership labeling granularity; and the third acquisition unit is used for acquiring the labeling accuracy of the picture under the characteristic dimension of the target picture according to the labeling accuracy of the membership labeling granularity and the weight of the membership labeling granularity.
In one embodiment of the disclosure, the second obtaining unit is specifically configured to: acquiring a first number of labeling results of any membership labeling granularity; obtaining a second number of labeling results with correct judgment results in the labeling results of any membership labeling granularity; and obtaining the ratio of the second quantity to the first quantity as the labeling accuracy of any membership labeling granularity.
In one embodiment of the disclosure, the third obtaining unit is specifically configured to: identify that the membership labeling granularities include at least one target labeling granularity; in response to the judgment result of the labeling result of any target labeling granularity being an error, obtain 0 as the labeling accuracy of the picture under the target picture feature dimension; or, in response to the judgment results of the labeling results of all target labeling granularities being correct, obtain the sum of the products of the labeling accuracies and weights of the remaining membership labeling granularities as the labeling accuracy of the picture under the target picture feature dimension.
In one embodiment of the disclosure, the third obtaining unit is specifically configured to: and identifying that the membership labeling granularity does not contain the target labeling granularity, and acquiring the sum of products of the labeling accuracy rate and the weight of the membership labeling granularity as the labeling accuracy rate of the picture under the characteristic dimension of the target picture.
In one embodiment of the disclosure, the first obtaining unit is specifically configured to: acquiring historical weights of the membership labeling granularity, a first attention degree parameter and/or a second attention degree parameter, wherein the first attention degree parameter is used for representing the attention degree of a user to the membership labeling granularity, and the second attention degree parameter is used for representing the attention degree of a server to the membership labeling granularity; determining an adjustment parameter of the historical weight according to the first attention parameter and/or the second attention parameter; and adjusting the historical weight based on the adjustment parameter, and taking the adjusted historical weight as the weight of the membership labeling granularity.
In one embodiment of the present disclosure, the apparatus further comprises: the second identification module is used for identifying the hierarchical relationship among the feature dimensions of the picture; a fourth obtaining module, configured to obtain, for a first image feature dimension of any one of the images belonging to a first hierarchy, each second image feature dimension of the images belonging to a second hierarchy corresponding to the first image feature dimension, and a labeling accuracy of the images under the second image feature dimension, where the second image feature dimension of the images belonging to the second hierarchy is used to aggregate into the first image feature dimension of the first hierarchy; and a fifth obtaining module, configured to obtain the labeling accuracy of the picture in the first picture feature dimension according to the labeling accuracy of the picture in each second picture feature dimension.
In summary, the device for obtaining the accuracy of the image annotation according to the embodiment of the present disclosure obtains at least one membership annotation granularity belonging to the feature dimension of the target image from the annotation granularity, obtains the annotation result of the membership annotation granularity, and obtains the accuracy of the image annotation under the feature dimension of the target image according to the judgment result of the annotation result of the membership annotation granularity. Therefore, the marking accuracy of the picture under different feature dimensions can be obtained based on marking granularity, the flexibility is high, and the diversity of the accuracy of the picture marking is improved.
According to embodiments of the present disclosure, the present disclosure also provides an electronic device, a readable storage medium and a computer program product.
Fig. 7 illustrates a schematic block diagram of an example electronic device 700 that may be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 7, the electronic device 700 includes a computing unit 701 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 702 or a computer program loaded from a storage unit 708 into a Random Access Memory (RAM) 703. In the RAM 703, various programs and data required for the operation of the electronic device 700 may also be stored. The computing unit 701, the ROM 702, and the RAM 703 are connected to each other through a bus 704. An input/output (I/O) interface 705 is also connected to bus 704.
Various components in the electronic device 700 are connected to the I/O interface 705, including: an input unit 706 such as a keyboard, a mouse, etc.; an output unit 707 such as various types of displays, speakers, and the like; a storage unit 708 such as a magnetic disk, an optical disk, or the like; and a communication unit 709 such as a network card, modem, wireless communication transceiver, etc. The communication unit 709 allows the electronic device 700 to exchange information/data with other devices through a computer network, such as the internet, and/or various telecommunication networks.
The computing unit 701 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of computing unit 701 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The calculation unit 701 performs the respective methods and processes described above, for example, the accuracy rate acquisition method of the picture marking described in fig. 1 to 5. For example, in some embodiments, the accuracy rate acquisition method of picture annotation may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as the storage unit 708. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 700 via the ROM 702 and/or the communication unit 709. When the computer program is loaded into the RAM 703 and executed by the computing unit 701, one or more steps of the above-described accuracy rate acquisition method of picture annotation may be performed. Alternatively, in other embodiments, the computing unit 701 may be configured to perform the accuracy acquisition method of the picture annotation by any other suitable means (e.g. by means of firmware).
Various implementations of the systems and techniques described here above may be implemented in digital electronic circuitry, integrated circuit systems, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that may be executed and/or interpreted on a programmable system including at least one programmable processor, which may be a special purpose or general-purpose programmable processor, that may receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out methods of the present disclosure may be written in any combination of one or more programming languages. These program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowchart and/or block diagram to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a background component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such background, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), and the internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server can be a cloud server, also called a cloud computing server or cloud host, which is a host product in the cloud computing service system and overcomes the defects of difficult management and weak service scalability in traditional physical hosts and VPS ("Virtual Private Server") services. The server may also be a server of a distributed system or a server that incorporates a blockchain.
According to an embodiment of the disclosure, the disclosure further provides a computer program product comprising a computer program which, when executed by a processor, implements the method for obtaining the accuracy of picture annotation according to the above embodiments of the disclosure.
It should be appreciated that steps may be reordered, added, or deleted using the various forms of flow shown above. For example, the steps recited in the present disclosure may be performed in parallel, sequentially, or in a different order, as long as the desired results of the technical solutions of the present disclosure are achieved; no limitation is imposed herein.
The above detailed description should not be taken as limiting the scope of the present disclosure. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present disclosure are intended to be included within the scope of the present disclosure.

Claims (14)

1. A method for obtaining the accuracy of picture annotation, comprising the following steps:
obtaining a labeling result of a picture and a labeling granularity of the labeling result;
identifying the judgment result of the labeling result;
obtaining at least one membership labeling granularity belonging to the feature dimension of the target picture from the labeling granularity, and obtaining a labeling result of the membership labeling granularity;
obtaining the labeling accuracy of the picture under the characteristic dimension of the target picture according to the judgment result of the labeling result of the membership labeling granularity; wherein,
the obtaining the labeling accuracy of the picture under the characteristic dimension of the target picture according to the judgment result of the labeling result of the membership labeling granularity comprises the following steps:
acquiring the weight of the at least one membership labeling granularity;
obtaining the labeling accuracy of any membership labeling granularity according to the judgment result of the labeling result of any membership labeling granularity;
obtaining the labeling accuracy of the picture under the characteristic dimension of the target picture according to the labeling accuracy of the membership labeling granularity and the weight of the membership labeling granularity;
the obtaining the weight of the at least one membership labeling granularity comprises the following steps:
acquiring historical weights of the membership labeling granularity, a first attention degree parameter and/or a second attention degree parameter, wherein the first attention degree parameter is used for representing the attention degree of a user to the membership labeling granularity, and the second attention degree parameter is used for representing the attention degree of a server to the membership labeling granularity;
determining an adjustment parameter of the historical weight according to the first attention parameter and/or the second attention parameter;
and adjusting the historical weight based on the adjustment parameter, and taking the adjusted historical weight as the weight of the membership labeling granularity.
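For illustration only, the following is a minimal Python sketch of the computation recited in claim 1. Every name in it (MembershipGranularity, dimension_accuracy, and so on) is hypothetical, and the averaging of the two attention-degree parameters, the multiplicative weight adjustment, and the weight normalization are assumptions, since the claim does not fix concrete formulas:

from dataclasses import dataclass
from typing import List

@dataclass
class MembershipGranularity:
    # one membership labeling granularity under the target picture feature dimension
    name: str
    judgments: List[bool]          # True = labeling result judged correct
    history_weight: float          # historical weight of this granularity
    user_attention: float = 1.0    # first attention-degree parameter (user)
    server_attention: float = 1.0  # second attention-degree parameter (server)

    def accuracy(self) -> float:
        # correct labeling results / all labeling results (see claim 3)
        return sum(self.judgments) / len(self.judgments) if self.judgments else 0.0

    def weight(self) -> float:
        # adjust the historical weight by a parameter derived from the two
        # attention degrees; a simple mean is assumed here
        adjustment = (self.user_attention + self.server_attention) / 2.0
        return self.history_weight * adjustment

def dimension_accuracy(granularities: List[MembershipGranularity]) -> float:
    # weighted combination of the per-granularity accuracies; the adjusted
    # weights are normalized here so they behave like proportions
    total = sum(g.weight() for g in granularities)
    if total == 0.0:
        return 0.0
    return sum(g.accuracy() * g.weight() / total for g in granularities)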
2. The method of claim 1, wherein the identifying the labeling result comprises:
obtaining a reference labeling result corresponding to the labeling result;
comparing the labeling result with the reference labeling result;
in response to the labeling result being consistent with the reference labeling result, identifying that the judgment result of the labeling result is correct;
and in response to the labeling result being inconsistent with the reference labeling result, identifying that the judgment result of the labeling result is an error.
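The comparison in claim 2 amounts to an equality check against a reference (gold) answer. A sketch, assuming labeling results are values that support equality comparison (the claim does not prescribe a representation):

def judge(labeling_result, reference_result) -> bool:
    # correct iff the labeling result matches its reference labeling result
    return labeling_result == reference_result

# e.g. category labels produced by annotators vs. vetted reference answers
judgments = [judge(r, ref) for r, ref in zip(["cat", "dog"], ["cat", "cat"])]
# -> [True, False]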
3. The method of claim 1, wherein the obtaining the labeling accuracy of any membership labeling granularity according to the judgment result of the labeling result of any membership labeling granularity comprises:
acquiring a first number of labeling results of any membership labeling granularity;
obtaining a second number of labeling results with correct judgment results in the labeling results of any membership labeling granularity;
and obtaining the ratio of the second number to the first number as the labeling accuracy of any membership labeling granularity.
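Claim 3 reduces to a simple ratio; a sketch with a hypothetical granularity_accuracy helper:

def granularity_accuracy(judgments: list) -> float:
    first_number = len(judgments)                   # all labeling results
    second_number = sum(1 for j in judgments if j)  # those judged correct
    return second_number / first_number if first_number else 0.0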
4. The method of claim 1, wherein the obtaining the labeling accuracy of the picture in the target picture feature dimension according to the labeling accuracy of the membership labeling granularity and the weight of the membership labeling granularity comprises:
identifying that the membership labeling granularity comprises at least one target labeling granularity;
in response to the judgment result of a labeling result of any target labeling granularity being an error, acquiring the labeling accuracy of the picture under the characteristic dimension of the target picture as 0; or,
in response to the judgment results of the labeling results of all target labeling granularities being correct, obtaining the sum of the products of the labeling accuracies and the weights of the remaining membership labeling granularities as the labeling accuracy of the picture under the characteristic dimension of the target picture.
5. The method of claim 4, wherein the obtaining the labeling accuracy of the picture in the target picture feature dimension according to the labeling accuracy of the membership labeling granularity and the weight of the membership labeling granularity further comprises:
and identifying that the membership labeling granularity does not contain the target labeling granularity, and obtaining the sum of the products of the labeling accuracies and the weights of the membership labeling granularities as the labeling accuracy of the picture under the characteristic dimension of the target picture.
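Claims 4 and 5 gate the weighted aggregation on the target labeling granularities: one erroneous target result zeroes the dimension accuracy; otherwise the targets are excluded and the remaining granularities are combined; and when no target granularity exists, all granularities are combined. A sketch, reusing the hypothetical MembershipGranularity class above and assuming the weights are already normalized:

def gated_dimension_accuracy(granularities, target_names):
    targets = [g for g in granularities if g.name in target_names]
    rest = [g for g in granularities if g.name not in target_names]
    if any(not all(g.judgments) for g in targets):
        return 0.0                              # claim 4: any wrong target result -> 0
    pool = rest if targets else granularities   # claim 5: no target granularity present
    return sum(g.accuracy() * g.weight() for g in pool)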
6. The method of any of claims 1-5, wherein the method further comprises:
identifying hierarchical relationships between picture feature dimensions;
for any first picture feature dimension belonging to a first level of the picture, acquiring each second picture feature dimension belonging to a second level corresponding to the first picture feature dimension, and the labeling accuracy of the picture under each second picture feature dimension, wherein the second picture feature dimensions of the second level are aggregated into the first picture feature dimension of the first level;
and obtaining the labeling accuracy of the picture in the first picture feature dimension according to the labeling accuracy of the picture in each second picture feature dimension.
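Claim 6 rolls the accuracies of second-level picture feature dimensions up into a first-level dimension; the claim leaves the aggregation function open, so an unweighted mean is assumed in this sketch:

def first_level_accuracy(second_level_accuracies: list) -> float:
    # aggregate the accuracies of the second-level picture feature
    # dimensions that roll up into one first-level dimension
    if not second_level_accuracies:
        return 0.0
    return sum(second_level_accuracies) / len(second_level_accuracies)

# e.g. a "vehicle" first-level dimension aggregating "body", "wheel", "plate"
print(first_level_accuracy([0.9, 0.8, 1.0]))  # ~0.9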
7. A device for obtaining the accuracy of picture annotation, comprising:
the first acquisition module is used for acquiring a labeling result of the picture and a labeling granularity of the labeling result;
the first identification module is used for identifying the judgment result of the labeling result;
the second acquisition module is used for acquiring at least one membership labeling granularity belonging to the characteristic dimension of the target picture from the labeling granularity and acquiring a labeling result of the membership labeling granularity;
the third acquisition module is used for acquiring the labeling accuracy of the picture under the characteristic dimension of the target picture according to the judgment result of the labeling result of the membership labeling granularity; wherein,
the third acquisition module includes:
the first acquisition unit is used for acquiring the weight of the at least one membership labeling granularity;
the second acquisition unit is used for acquiring the labeling accuracy of any membership labeling granularity according to the judgment result of the labeling result of any membership labeling granularity;
the third acquisition unit is used for obtaining the labeling accuracy of the picture under the characteristic dimension of the target picture according to the labeling accuracy of the membership labeling granularity and the weight of the membership labeling granularity;
the first acquisition unit is specifically configured to:
acquiring historical weights of the membership labeling granularity, a first attention degree parameter and/or a second attention degree parameter, wherein the first attention degree parameter is used for representing the attention degree of a user to the membership labeling granularity, and the second attention degree parameter is used for representing the attention degree of a server to the membership labeling granularity;
determining an adjustment parameter of the historical weight according to the first attention parameter and/or the second attention parameter;
and adjusting the historical weight based on the adjustment parameter, and taking the adjusted historical weight as the weight of the membership labeling granularity.
8. The apparatus of claim 7, wherein the first identification module is specifically configured to:
obtaining a reference labeling result corresponding to the labeling result;
comparing the labeling result with the reference labeling result;
in response to the labeling result being consistent with the reference labeling result, identifying that the judgment result of the labeling result is correct;
and in response to the labeling result being inconsistent with the reference labeling result, identifying that the judgment result of the labeling result is an error.
9. The apparatus of claim 7, wherein the second acquisition unit is specifically configured to:
acquiring a first number of labeling results of any membership labeling granularity;
obtaining a second number of labeling results with correct judgment results in the labeling results of any membership labeling granularity;
and obtaining the ratio of the second number to the first number as the labeling accuracy of any membership labeling granularity.
10. The apparatus of claim 7, wherein the third acquisition unit is specifically configured to:
identifying that the membership labeling granularity comprises at least one target labeling granularity;
in response to the judgment result of a labeling result of any target labeling granularity being an error, acquiring the labeling accuracy of the picture under the characteristic dimension of the target picture as 0; or,
in response to the judgment results of the labeling results of all target labeling granularities being correct, acquiring the sum of the products of the labeling accuracies and the weights of the remaining membership labeling granularities as the labeling accuracy of the picture under the characteristic dimension of the target picture.
11. The apparatus of claim 10, wherein the third acquisition unit is further configured to:
and identifying that the membership labeling granularity does not contain the target labeling granularity, and acquiring the sum of the products of the labeling accuracies and the weights of the membership labeling granularities as the labeling accuracy of the picture under the characteristic dimension of the target picture.
12. The apparatus according to any one of claims 7-11, wherein the apparatus further comprises:
the second identification module is used for identifying the hierarchical relationship among the feature dimensions of the picture;
a fourth acquisition module, configured to acquire, for any first picture feature dimension belonging to a first level of the picture, each second picture feature dimension belonging to a second level corresponding to the first picture feature dimension, and the labeling accuracy of the picture under each second picture feature dimension, wherein the second picture feature dimensions of the second level are aggregated into the first picture feature dimension of the first level;
and a fifth acquisition module, configured to acquire the labeling accuracy of the picture under the first picture feature dimension according to the labeling accuracy of the picture under each second picture feature dimension.
13. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method for obtaining the accuracy of picture annotation according to any one of claims 1-6.
14. A non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the method for obtaining the accuracy of picture annotation according to any one of claims 1-6.
CN202110491590.2A 2021-05-06 2021-05-06 Picture annotation accuracy obtaining method and device and electronic equipment Active CN113221999B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110491590.2A CN113221999B (en) 2021-05-06 2021-05-06 Picture annotation accuracy obtaining method and device and electronic equipment


Publications (2)

Publication Number Publication Date
CN113221999A (en) 2021-08-06
CN113221999B (en) 2024-01-12

Family

ID=77091068

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110491590.2A Active CN113221999B (en) 2021-05-06 2021-05-06 Picture annotation accuracy obtaining method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN113221999B (en)


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019137196A1 (en) * 2018-01-11 2019-07-18 阿里巴巴集团控股有限公司 Image annotation information processing method and device, server and system
CN109473160A (en) * 2018-10-26 2019-03-15 广州金域医学检验中心有限公司 The mark qualification of medical image determines method and device
CN110245716A (en) * 2019-06-20 2019-09-17 杭州睿琪软件有限公司 Sample labeling auditing method and device
CN111080092A (en) * 2019-11-29 2020-04-28 北京云聚智慧科技有限公司 Data annotation management method and device, electronic equipment and readable storage medium
KR102246266B1 (en) * 2020-06-08 2021-04-29 주식회사 루닛 Method and system for providing feedback on label information
CN112270532A (en) * 2020-11-12 2021-01-26 北京百度网讯科技有限公司 Data processing method and device, electronic equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Design and Implementation of Quality Management in a Crowd-Intelligence Annotation System; Hu Ping; Chen Jingdong; Zeng Zhen; Ship Electronic Engineering (05); full text *

Also Published As

Publication number Publication date
CN113221999A (en) 2021-08-06

Similar Documents

Publication Publication Date Title
US20210272306A1 (en) Method for training image depth estimation model and method for processing image depth information
CN112560684B (en) Lane line detection method, lane line detection device, electronic equipment, storage medium and vehicle
CN113392794B (en) Vehicle line crossing identification method and device, electronic equipment and storage medium
CN113239928A (en) Method, apparatus and program product for image difference detection and model training
CN116245193A (en) Training method and device of target detection model, electronic equipment and medium
CN112634366B (en) Method for generating position information, related device and computer program product
CN114299242A (en) Method, device and equipment for processing images in high-precision map and storage medium
CN113221999B (en) Picture annotation accuracy obtaining method and device and electronic equipment
CN114119990B (en) Method, apparatus and computer program product for image feature point matching
CN116485645A (en) Image stitching method, device, equipment and storage medium
CN113592981B (en) Picture labeling method and device, electronic equipment and storage medium
CN114581523A (en) Method and device for determining labeling data for monocular 3D target detection
CN114429631A (en) Three-dimensional object detection method, device, equipment and storage medium
CN113888635A (en) Visual positioning method, related device and computer program product
CN113284509B (en) Method and device for obtaining accuracy of voice annotation and electronic equipment
CN114419199B (en) Picture marking method and device, electronic equipment and storage medium
CN114612544B (en) Image processing method, device, equipment and storage medium
CN116229209B (en) Training method of target model, target detection method and device
CN113870142B (en) Method and device for enhancing image contrast
CN112507951B (en) Indicating lamp identification method, indicating lamp identification device, indicating lamp identification equipment, road side equipment and cloud control platform
CN112966606B (en) Image recognition method, related device and computer program product
CN113359669B (en) Method, device, electronic equipment and medium for generating test data
CN113658277B (en) Stereo matching method, model training method, related device and electronic equipment
CN117238129A (en) Method, device, equipment and storage medium for estimating uncertainty of detection result
CN118134856A (en) Data evaluation method, device, equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant