CN111860302A - Image annotation method and device, electronic equipment and storage medium


Info

Publication number
CN111860302A
Authority
CN
China
Prior art keywords: image, labeling, annotation, result, annotated
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010694095.7A
Other languages
Chinese (zh)
Other versions
CN111860302B (en)
Inventor
杨雪 (Yang Xue)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202010694095.7A
Publication of CN111860302A
Application granted
Publication of CN111860302B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/40: Scenes; Scene-specific elements in video content
    • G06V20/46: Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/22: Matching criteria, e.g. proximity measures

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Multimedia (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The application discloses an image annotation method and device, an electronic device, and a storage medium, relating to the technical field of computer vision, in particular to image processing technology in the field of automatic driving. The method includes the following steps: acquiring multiple frames of images to be annotated; dividing the images to be annotated into multiple segments of segmented images to be annotated, where each segmented image to be annotated includes at least two frames of images to be annotated; and annotating the annotation objects in the segmented images to be annotated in parallel. According to the embodiments of the application, the annotation efficiency and the annotation capacity for images can be improved.

Description

Image annotation method and device, electronic equipment and storage medium
Technical Field
The application relates to the field of image processing, in particular to an image processing technology for automatic driving.
Background
Image annotation is the labeling of objects in an image according to a set annotation rule. For example, a vehicle in the image may be enclosed in a box, or the key points of a face may be marked with dots. Image annotation can be applied to static single-frame images and also to video. For example, during video preview or playback, objects are annotated directly on the frame images of the video, enabling more targeted video processing. Image annotation is used in many fields: for example, in the field of automatic driving to locate obstacles, or in the field of video tracking to lock onto important video cue information.
Disclosure of Invention
The embodiment of the application provides an image annotation method, an image annotation device, electronic equipment and a storage medium, so as to improve the annotation efficiency and the annotation capacity of an image.
In a first aspect, an embodiment of the present application provides an image annotation method, including:
acquiring multiple frames of images to be annotated;
dividing the images to be annotated into multiple segments of segmented images to be annotated, where each segmented image to be annotated includes at least two frames of images to be annotated;
and annotating the annotation objects in the segmented images to be annotated in parallel.
In a second aspect, an embodiment of the present application provides an image annotation device, including:
an image to be annotated acquisition module, configured to acquire multiple frames of images to be annotated;
a segmented image to be annotated division module, configured to divide the images to be annotated into multiple segments of segmented images to be annotated, where each segmented image to be annotated includes at least two frames of images to be annotated;
and an annotation object parallel annotation module, configured to annotate the annotation objects in the segmented images to be annotated in parallel.
In a third aspect, an embodiment of the present application provides an electronic device, including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to execute the image annotation method provided by the embodiment of the first aspect.
In a fourth aspect, the present application further provides a non-transitory computer-readable storage medium storing computer instructions for causing a computer to execute the image annotation method provided in the first aspect.
According to the embodiments of the application, the acquired multiple frames of images to be annotated are divided into multiple segments of segmented images to be annotated, so that the annotation objects in each segment can be annotated in parallel. This solves the problems of low annotation efficiency and insufficient annotation capacity in existing image annotation methods, and improves both the annotation efficiency and the annotation capacity for images.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not intended to limit the present application. Wherein:
Fig. 1 is a flowchart of an image annotation method provided in an embodiment of the present application;
Fig. 2 is a flowchart of another image annotation method provided in an embodiment of the present application;
Fig. 3 is a schematic diagram illustrating an effect of an image annotation method according to an embodiment of the present application;
Fig. 4 is a schematic diagram illustrating another effect of an image annotation method according to an embodiment of the present application;
Fig. 5 is a structural block diagram of an image annotation device according to an embodiment of the present application;
Fig. 6 is a schematic structural diagram of an electronic device for implementing an image annotation method according to an embodiment of the present application.
Detailed Description
The following description of exemplary embodiments of the present application, taken in conjunction with the accompanying drawings, includes various details of the embodiments to aid understanding, and these details are to be considered exemplary only. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications can be made to the embodiments described herein without departing from the scope and spirit of the present application. Likewise, descriptions of well-known functions and constructions are omitted from the following description for clarity and conciseness.
Target tracking is a key technology in the field of computer vision. To solve the tracking problem, a large amount of continuously annotated image data is required to train tracking algorithms. Currently, images are mainly annotated in two ways:
(1) Directly annotate the key frames in the captured video, and automatically assign labels to the non-key frames by a frame difference method.
This image annotation method specifically comprises: first, annotate the first frame of the video; second, select a frame from the subsequent video segment as a key frame and annotate it. The intermediate non-key frames are then automatically annotated according to the annotation results of the first frame and the subsequent key frames, following a principle of continuity and averaging, and the process is repeated until a complete video segment is annotated. This method cannot accurately handle the annotation of every frame, so annotation precision cannot be guaranteed and the range of application is limited. It is suitable only for scenarios without high precision requirements, such as traffic statistics or video monitoring, and cannot be applied to scenarios where training data requires high annotation precision, such as the fields of automatic driving or semantic recognition.
(2) Extract frames from the captured video, then annotate frame by frame.
This image annotation method specifically comprises: first, perform frame extraction on the video, then annotate the extracted frames one by one, from the first frame to the last. If the computer handling the annotation has limited performance, all of the extracted images must be divided into several segments. During annotation, the first segment must be annotated first, then the second segment is annotated serially based on the annotation results of the first segment, and so on. This method can accurately produce an annotation result for every frame, but the serial annotation imposes a strong ordering requirement between frames: a subsequent frame can only be annotated after its preceding frames have been annotated, so annotation efficiency is low. When the number of frames is large, the resulting large annotation output is also a significant challenge for computer performance.
In an example, Fig. 1 is a flowchart of an image annotation method provided in an embodiment of the present application. This embodiment is applicable to scenarios requiring fast image annotation, and the method may be executed by an image annotation device, which may be implemented by software and/or hardware and generally integrated in an electronic device. The electronic device may be a computer device or the like. Accordingly, as shown in Fig. 1, the method comprises the following operations:
and S110, acquiring multiple frames of images to be annotated.
The image to be labeled can be an image which needs to label the labeled object.
It can be understood that before the image is labeled, the image to be labeled needs to be acquired first. In the embodiment of the application, multiple frames of images to be annotated can be acquired.
S120, dividing the images to be annotated into multiple segments of segmented images to be annotated, where each segmented image to be annotated includes at least two frames of images to be annotated.
A segmented image to be annotated is a group of frames obtained by dividing the multiple frames of images to be annotated. That is, each segmented image to be annotated may include multiple frames of images to be annotated, and the image counts of all segments add up to the total number of images to be annotated.
Correspondingly, after the multiple frames of images to be annotated are obtained, they can be divided into multiple segments, yielding multiple segments of segmented images to be annotated. Optionally, the number of images included in each segment may be the same or different; the embodiment of the present application does not limit the number of images included in a segmented image to be annotated. Optionally, each segmented image to be annotated may include at least two frames of images to be annotated. It should be noted that, to improve the efficiency of image annotation, the number of images included in a segmented image to be annotated is generally between 20 and 50 frames.
S130, annotating the annotation objects in each segmented image to be annotated in parallel.
An annotation object may be an obstacle such as a car, a railing, a pedestrian, a tree, or a billboard, or a feature object such as a face key point or a pupil; the embodiment of the present application does not limit the specific type of annotation object. That is, the images to be annotated may be images that need annotation in various application scenarios, such as the field of automatic driving or the field of face recognition, and the embodiment of the present application does not limit the specific application scenario of the images to be annotated.
In the embodiment of the application, once the images to be annotated are divided into multiple segments of segmented images to be annotated, the annotation objects in each segment can be annotated in parallel. Parallel annotation means that the annotation objects in all segmented images to be annotated are annotated simultaneously, without serially annotating the segments one after another in segment order.
It should be noted that, when annotating the annotation objects in the segmented images in parallel, each segment may be annotated using annotation rules that are independent of the other segments. Illustratively, when numbers are used to label the annotation objects, suppose the first segmented image to be annotated includes 3 annotation objects; the numbers "1, 2, and 3" may then be used to label its annotation objects in sequence. Suppose the second segmented image to be annotated includes 4 annotation objects; its annotation objects may be labeled in sequence with the numbers "1, 2, 3, and 4", or equally with the numbers "5, 6, 7, and 8". That is, the annotation behavior for each segmented image to be annotated is not affected by the annotation behavior for the other segments.
It can be understood that, when labeling the annotation objects, in order to uniquely identify each one, the same annotation object in each image to be annotated should be labeled in the same way, for example with a uniform number, while different annotation objects require different labels. If an annotation object is newly added, a new number is obtained by incrementing the current maximum number, and the newly added annotation object is labeled with it.
In addition, although the segmented images to be annotated can be annotated in parallel under mutually independent annotation rules, in order to process the annotation results of all segments uniformly, the segments need to be associated during parallel annotation through content that they share in common. Illustratively, to unify the numbers of the same annotation objects across the segmented images to be annotated, the segments can be associated by setting overlapped frames. Alternatively, when two adjacent frames differ little and contain essentially the same annotation objects, for example when continuous frames are annotated directly without frame extraction, the segments can be associated according to the times of the preceding and subsequent images in the video from which each segment was taken.
Therefore, annotating the annotation objects in the segmented images to be annotated in parallel can greatly shorten the time required to annotate the images, improving both the annotation efficiency and the annotation capacity.
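For illustration only, the following is a minimal Python sketch of such parallel dispatch; the annotate_segment() worker is hypothetical (it could stand for an annotator client or an automatic pre-labeling model), and the data layout is an assumption rather than part of the application.

```python
# Minimal sketch: each segment is annotated at the same time, and no
# segment waits for the annotation result of its predecessor.
from concurrent.futures import ProcessPoolExecutor

def annotate_segment(segment):
    # Hypothetical worker: label every object in every frame of the
    # segment, numbering objects independently of the other segments.
    return [{"frame": f, "objects": []} for f in segment]

def annotate_in_parallel(segments):
    with ProcessPoolExecutor() as pool:
        return list(pool.map(annotate_segment, segments))
```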
According to the embodiments of the application, the acquired multiple frames of images to be annotated are divided into multiple segments of segmented images to be annotated, so that the annotation objects in each segment can be annotated in parallel. This solves the problems of low annotation efficiency and insufficient annotation capacity in existing image annotation methods, and improves both the annotation efficiency and the annotation capacity for images.
In an example, Fig. 2 is a flowchart of an image annotation method provided in an embodiment of the present application, and Fig. 3 and Fig. 4 are schematic diagrams illustrating effects of the image annotation method provided in embodiments of the present application.
As shown in Fig. 2, Fig. 3, and Fig. 4, the image annotation method includes:
s210, performing frame extraction processing on the continuous frame image according to the set frame extraction frequency to obtain the image to be annotated.
The set frame extraction frequency may be set according to actual requirements, for example, 1 frame is extracted every 10 frames, or 5 frames are extracted every 1 second, and the specific value of the set frame extraction frequency is not limited in the embodiment of the present application. A continuous frame image is also all the images included in a video segment.
Optionally, a manner of performing frame extraction processing on the continuous frame image according to the set frame extraction frequency may be adopted to obtain multiple frames of images to be labeled. Alternatively, a segment of continuous frame image may also be directly used as an image to be annotated, which is not limited in the embodiment of the present application.
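For illustration, a minimal frame-extraction sketch follows, assuming Python and OpenCV (a choice of library the application does not prescribe):

```python
import cv2  # assumed dependency; any video-decoding library would do

def extract_frames(video_path, every_n=10):
    """Extract 1 frame every `every_n` frames (the set extraction frequency)."""
    cap = cv2.VideoCapture(video_path)
    frames, index = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:  # end of video
            break
        if index % every_n == 0:
            frames.append(frame)
        index += 1
    cap.release()
    return frames
```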
S220, dividing the images to be annotated into multiple segments of segmented images to be annotated, where each segmented image to be annotated includes at least two frames of images to be annotated.
Accordingly, S220 may specifically include the following operations:
and S221, dividing the current segmented image to be annotated according to the time sequence of the image to be annotated and the set image quantity of the segmented image to be annotated.
The number of the set images can be set according to requirements, and optionally, the number of the set images can be between 20 and 50. Meanwhile, the number of the set images corresponding to the to-be-labeled images of different segments may be the same or different, which is not limited in the embodiment of the present application. And the current segmented image to be annotated is also the segmented image to be annotated obtained by current division.
In the embodiment of the application, each segmented image to be annotated can be sequentially divided. Optionally, when the image to be annotated in the first segment of the segment is divided, the image to be annotated in the current segment may be divided according to the time sequence of the image to be annotated and the set image quantity of the image to be annotated in the segment, and the divided image to be annotated in the current segment is used as the image to be annotated in the first segment of the segment. For example, the first 20 frames of images of the continuous frames of images are taken as the first segment to be annotated image.
S222, determining the current overlapped frames of the current segmented image to be annotated according to an overlapped frame setting rule.
The overlapped frame setting rule is used to set the overlapped frame images between segmented images to be annotated. The current overlapped frames are the image frames of the current segmented image to be annotated that it shares with another segment.
Correspondingly, after the first segmented image to be annotated has been divided and taken as the current segment, its current overlapped frames can be determined according to the overlapped frame setting rule. Optionally, the numbers of current overlapped frames for different segments may be the same or different, which the embodiment of the present application does not limit.
It will be appreciated that, when determining the current overlapped frames of the current segmented image to be annotated, only the overlap with the subsequent segment needs to be determined. For example, as shown in Fig. 3, the last frame of the current segmented image to be annotated is taken as the current overlapped frame. Correspondingly, the first frame of the next segmented image to be annotated is then the last frame of the current segment; that is, one identical image exists between the current segmented image to be annotated and the next.
In an optional embodiment of the present application, the overlapped frame setting rule includes: taking, as the number of overlapped frames, the number of frames used to judge that an image object has disappeared; or taking a default number of frames as the number of overlapped frames.
The overlapped frame setting rule may take several forms. Optionally, the number of frames used to judge that an image object has disappeared may serve as the number of overlapped frames. For example, an unmanned-vehicle obstacle labeling rule may require an obstacle to be absent for 5 frames before a reappearing object is assigned a new number; in this case, the number of overlapped frames may be set to 5. Alternatively, a default number of frames may be used as the number of overlapped frames, for example 1 frame by default. The default-based rule is suitable for application scenarios without special labeling requirements.
In this scheme, setting the overlapped frames between segmented images to be annotated through multiple types of overlapped frame setting rules allows the image annotation method to meet the needs of multiple application scenarios.
S223, taking the current overlapped frames as part of the next segmented image to be annotated, and determining the remaining images of the next segment according to the set image quantity.
Correspondingly, after the current segmented image to be annotated has been divided and its current overlapped frames determined, the current overlapped frames can be used as part of the next segmented image to be annotated, with the remaining images of the next segment determined according to the set image quantity.
In an illustrative example, assume the first segmented image to be annotated has been divided and the current overlapped frames are determined to be 2 frames; the last 2 frames of the first segment can then be used as the first two frames of the second segment. Assuming the set image quantity for each segment is 20 frames, the images from the 21st frame to the 38th frame can be taken from the continuous frames in time order as the remaining images of the second segment, completing its division. That is, the first segmented image to be annotated includes frames 1 to 20, the second segmented image to be annotated includes frames 19 to 38, and the two segments share the two overlapped images of frames 19 and 20.
S224, taking the next segmented image to be annotated as the current segmented image to be annotated.
S225, judging whether the current segmented image to be annotated is the last one; if so, executing S226; otherwise, returning to S222.
S226, the division of all the images to be annotated is complete.
After the next segment of the current segmented image to be annotated has been divided, it can be taken as the new current segment, and the operation of determining the current overlapped frames according to the overlapped frame setting rule is performed again, until the division of all the images to be annotated is complete. As shown in Fig. 3, after all the images to be annotated have been divided and the overlapped frames set, the segmented images to be annotated can be annotated in parallel. For example, each annotation object is enclosed in a bounding box, and the annotation objects are numbered in sequence (i.e., given label IDs).
In the above scheme, setting overlapped frames for each segmented image to be annotated using the overlapped frame setting rule establishes the association between the segments, which is used for the unified processing of each segment's annotation results.
When establishing the overlapped frames between segmented images to be annotated, besides directly designating unannotated image frames as overlapped frames as described above, many alternative schemes are possible. As shown in Fig. 4, the segmented images to be annotated may first be annotated in parallel, with each annotation object enclosed in a bounding box and the annotation objects numbered in sequence (i.e., given label IDs). After each segment has been annotated, the frames with identical annotations (that is, the images whose annotation content is the same) are taken as the overlapped frames.
It should be noted that, besides using overlapped frames, there are other ways of establishing the association between segmented images to be annotated. If a stretch of continuous frames is used directly as the images to be annotated, the segments can be associated through image identifiers such as the frame numbers or time points of the preceding and subsequent images between segments. For example, the first segmented image to be annotated includes frames 1 to 20, and the second includes frames 21 to 40. Because two adjacent frames differ little and contain essentially the same annotation objects, the first and second segments can be associated through the images of frame 20 and frame 21.
It should be noted that, if every segmented image to be annotated contains the same number of images and the number of overlapped frames between segments is also the same, the division of all the segmented images to be annotated can be completed at once, without dividing the segments one by one in time order. For example, assuming the total number of extracted frames is 50, each segment includes 20 frames of images to be annotated, and adjacent segments share 5 overlapped frames, then all segments can be determined simultaneously: the first segmented image to be annotated includes frames 1 to 20, the second includes frames 16 to 35, and the third includes frames 31 to 50.
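A minimal sketch of this one-shot division, assuming Python and 0-indexed, half-open frame ranges (the function name is illustrative):

```python
def split_with_overlap(num_frames, seg_len=20, overlap=5):
    """Return (start, end) ranges of equal-length segments sharing
    `overlap` frames with their neighbors; the last may be shorter."""
    segments, start = [], 0
    while True:
        end = min(start + seg_len, num_frames)
        segments.append((start, end))  # frames start..end-1 (0-indexed)
        if end >= num_frames:
            return segments
        start += seg_len - overlap  # step back by the overlap

split_with_overlap(50)  # [(0, 20), (15, 35), (30, 50)] = frames 1-20, 16-35, 31-50
```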
S230, for each segmented image to be annotated, labeling the annotation objects simultaneously using the matched annotation rules, to obtain the original parallel annotation result corresponding to each segmented image to be annotated.
The original parallel annotation result can be a preliminary annotation result obtained by simultaneously annotating the images to be annotated of the segments.
Correspondingly, after the image to be annotated of each segment is divided, the image to be annotated of each segment can be annotated in parallel. The parallel annotation can comprise two links, wherein the first link is to adopt a matched annotation rule to label the annotation object simultaneously aiming at each segmented image to be annotated, so as to obtain an original parallel annotation result corresponding to each segmented image to be annotated. The second link is to perform normalization processing on each original parallel labeling result so as to unify each original parallel labeling result.
For example, as shown in Fig. 3, annotating the annotation objects in the first segmented image to be annotated under the annotation rule may proceed as follows: a bounding box is used as the annotation tool in the first segment, each annotation object is enclosed in a box, and the original parallel annotation results of the annotation objects are labeled: truck-1, pedestrian-2, and car-10. Besides these three annotation objects there are others, numbered 3 to 9, which are not shown in the figure. Annotating the annotation objects in the second segmented image to be annotated may proceed likewise: a bounding box is used as the annotation tool in the second segment, each annotation object is enclosed in a box, and the original parallel annotation results of the annotation objects are labeled: truck-1, car-2, and electric bicycle-5. Besides these three annotation objects there are others, numbered 3 to 4, which are not shown in the figure.
S240, carrying out normalization check on each original parallel labeling result.
It should be noted that, to further guarantee the quality of the annotation results and improve annotation efficiency, a normalization check may be performed on each original parallel annotation result before the normalization processing. The normalization check examines whether the original parallel annotation results contain human annotation errors, for example an attribute of an annotation object labeled incorrectly, or the wrong annotation tool used for an annotation object. The annotation tool may be a box, a line, a point, an area, or the like; the embodiment of the present application does not limit the type of annotation tool.
The normalization check of the original parallel annotation results ensures their consistency, that is, it improves their quality. For example, when annotating obstacles, consistency refers to whether attributes such as the type and occlusion of the same obstacle are annotated consistently; it does not cover whether the ID numbers given to the obstacles are consistent. The normalization check thus safeguards the accuracy of the subsequent normalization processing and avoids normalization failures caused by non-uniform annotation, which would require the checking and normalization to be redone, thereby further improving annotation efficiency.
Optionally, the normalization check may include a generic normalization check and a customized normalization check, wherein the generic normalization check may cover: a wrong number of annotation objects, a wrong annotation object type, and wrongly annotated key attributes of an annotation object; and the customized normalization check may cover: annotation errors under a customized annotation rule.
The generic normalization check verifies the annotation objects against check rules common to all annotation objects. The customized normalization check verifies the annotation objects against rules formed from special annotation requirements.
In the embodiment of the present application, the normalization check may optionally take two forms. The first form may be a generic normalization check, used to check whether the original parallel annotation results suffer from a wrong number of annotation objects, wrong annotation object types, wrongly annotated key attributes, and so on; that is, whether there are extra labels, missing labels, or wrong types and key attributes. The type of an annotation object may be an obstacle type, a face key point type, a tracking object type, or the like. When an obstacle is the annotation object, its attributes may include, for example, its specific orientation, or whether it is occluded or moving.
The second form can be a customized normalization check, which checks whether the original parallel annotation results violate a special annotation rule. As an example, suppose the annotation objects are the people in a supermarket, that is, the image annotation method is applied to the scenario of supermarket foot-traffic statistics, and the annotation rule requires all security personnel to be uniformly labeled "0". During the customized normalization check, if the original parallel annotation results for the same security staff member in two images are "0" and "1" respectively, the original parallel annotation result contains an annotation error.
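A hedged sketch of the two check forms follows; the record layout ("type", "attrs", "role") and the concrete rules are illustrative assumptions, not the application's schema:

```python
REQUIRED_ATTRS = {"orientation", "occluded", "moving"}  # assumed key attributes

def generic_check(objects, valid_types, expected_count=None):
    """Generic checks: object count, object type, key attributes."""
    errors = []
    if expected_count is not None and len(objects) != expected_count:
        errors.append("wrong number of annotation objects")
    for obj in objects:
        if obj["type"] not in valid_types:
            errors.append(f"object {obj['id']}: wrong type {obj['type']!r}")
        if not REQUIRED_ATTRS <= obj["attrs"].keys():
            errors.append(f"object {obj['id']}: missing key attribute")
    return errors

def customized_check(objects):
    """Custom rule from the text above: security personnel must be labeled 0."""
    return [f"object {obj['id']}: security staff must be labeled 0"
            for obj in objects
            if obj.get("role") == "security" and obj["id"] != 0]
```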
S250, normalizing the original parallel annotation results to obtain the target parallel annotation result corresponding to each segmented image to be annotated.
The target parallel annotation result is the final annotation result.
The second link of parallel annotation for each segmented image to be annotated is the normalization of the original parallel annotation results. Normalization means unifying the annotation results of the same annotation object across the original parallel annotation results. By normalizing the original parallel annotation results, each annotation object can be uniquely identified, and a target parallel annotation result meeting the annotation requirements is obtained.
In an optional embodiment of the present application, normalizing the original parallel annotation results may include: determining, in time order, a reference original parallel annotation result and the next segment's original parallel annotation result from among the original parallel annotation results; taking the next segment's original parallel annotation result as the currently processed parallel annotation result; and, when the same target annotation object exists in both the reference original parallel annotation result and the currently processed parallel annotation result, taking the annotation result of the target annotation object in the reference result as its annotation result in the currently processed result.
The reference original parallel annotation result is the original parallel annotation result used as the unifying reference. The currently processed parallel annotation result is the original parallel annotation result whose annotations of the same annotation objects need to be unified with the reference. A target annotation object is an annotation object that is the same in the reference original parallel annotation result and the currently processed parallel annotation result.
Optionally, when normalizing the original parallel annotation results, the reference original parallel annotation result and the currently processed parallel annotation result may be determined in sequence from the original parallel annotation results. The two are then compared, and if the same target annotation object exists in both, the annotation result of the target annotation object in the reference result is used as its annotation result in the currently processed result. In general, every annotation object included in an overlapped frame image appears in both the reference original parallel annotation result and the currently processed parallel annotation result, and is therefore a target annotation object. For example, a car and a bicycle in the overlapped frame images would both be target annotation objects.
Illustratively, a first section of original parallel annotation result is taken as a reference original parallel annotation result, and a second section of original parallel annotation result is taken as a current processing parallel annotation result. And comparing the reference original parallel annotation result with the annotation result corresponding to the overlapped frame image in the current processing parallel annotation result, wherein if the reference original parallel annotation result is different from the annotation result aiming at the same target annotation object in the overlapped frame image in the current processing parallel annotation result, the current processing parallel annotation result can automatically use the annotation result of the target annotation object in the reference original parallel annotation result. After the normalization processing of the first-stage original parallel annotation result and the second-stage original parallel annotation result is completed, the second-stage original parallel annotation result can be used as a reference original parallel annotation result, the third-stage original parallel annotation result can be used as a current processing parallel annotation result, and so on until the normalization processing of all the original parallel annotation results is completed.
In the above scheme, the annotation result of the target annotation object in the reference original parallel annotation result is reused in the currently processed parallel annotation result, so that the annotation results for the same target annotation object in the overlapped frame images remain consistent across the two, achieving uniform annotation of the same annotation object.
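A minimal normalization sketch under assumed data structures: annotations stored as {frame_index: [{"id": ..., "box": ...}, ...]}, with objects in an overlapped frame matched by identical boxes (a simplifying assumption; a real matcher might use IoU):

```python
def unify_ids(ref_results, cur_results, overlap_frames):
    """Propagate label IDs from the reference segment into the current one."""
    id_map = {}
    for f in overlap_frames:
        ref_by_box = {tuple(o["box"]): o["id"] for o in ref_results[f]}
        for obj in cur_results[f]:
            ref_id = ref_by_box.get(tuple(obj["box"]))
            if ref_id is not None:  # same target annotation object
                id_map[obj["id"]] = ref_id
    for objects in cur_results.values():
        for obj in objects:
            obj["id"] = id_map.get(obj["id"], obj["id"])
    return id_map
```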
In an optional embodiment of the present application, normalizing the original parallel annotation results may further include: when the same target annotation object exists in both the reference original parallel annotation result and the currently processed parallel annotation result, and a newly added annotation object exists in the currently processed parallel annotation result, re-labeling the newly added annotation object according to the labeling order.
In an optional embodiment of the present application, re-labeling the newly added annotation object according to the labeling order may include: determining the last annotation result in the currently processed parallel annotation results; continuing the last annotation result to obtain a continued annotation result; and taking the continued annotation result as the target annotation result of the newly added annotation object.
In an optional embodiment of the present application, determining that a newly added annotation object exists in the currently processed parallel annotation result may include: in the currently processed parallel annotation result, taking a non-target annotation object whose partial annotation result is the same as a target annotation object's as the newly added annotation object.
A newly added annotation object is an annotation object that newly appears in the currently processed parallel annotation result, that is, one that does not exist in the reference original parallel annotation result. The definition of a newly added annotation object may be set according to a specific annotation rule. For example, assume the annotation rule requires that an object be absent for 5 frames before a new annotation result is given. If the first frame contains a car as an annotation object, the car does not appear in frames 2 through 8, and the same car reappears in frame 9, then even though the car in frame 9 is the same car as in frame 1, the car in frame 9 is treated as a newly added annotation object and re-labeled according to the labeling order. The last annotation result is the result assigned to the most recently labeled annotation object. The continued annotation result is the annotation result obtained by continuing from the last annotation result; for example, if the last annotation result is "2", the continued annotation result may be "3". The target annotation result is the result of re-labeling the newly added annotation object. A non-target annotation object whose partial annotation result is the same as a target annotation object's is one whose label number (i.e., label ID) collides with the label number of a target annotation object.
Correspondingly, if the same target annotation object exists in both the reference original parallel annotation result and the currently processed parallel annotation result, and a newly added annotation object exists in the currently processed parallel annotation result, the newly added annotation object is re-labeled according to the labeling order. In general, newly added annotation objects appear in the currently processed parallel annotation result in frames other than the overlapped frames.
For example, suppose the same target annotation object "car" exists in the reference original parallel annotation result and the currently processed parallel annotation result; after normalization, its label IDs are unified to "10". If the label ID of the newly added annotation object "motor coach" in frame 8 of the currently processed parallel annotation result is also "10", the maximum ID already used in the currently processed parallel annotation result, i.e., the last annotation result, can be determined. If the maximum ID already used is "15", the last annotation result is continued to obtain the continued annotation result "16", and the label ID of the newly added annotation object "motor coach" is re-labeled as "16".
In this scheme, re-labeling the newly added annotation objects in the currently processed parallel annotation result according to the labeling order avoids conflicts between the newly added annotation objects and the annotation results of the target annotation objects.
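A sketch of this re-labeling step, continuing the example above (maximum used ID 15, so the new object receives 16); the data layout is as in the earlier sketch, and the function name is illustrative:

```python
def relabel_new_object(cur_results, new_obj):
    """Give a newly added object the next ID after the segment's maximum."""
    last_id = max(o["id"] for objs in cur_results.values() for o in objs)
    new_obj["id"] = last_id + 1  # the continued annotation result
    return new_obj["id"]
```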
S260, deleting the redundant overlapped frames in the annotated images.
The annotated images are the images that have undergone parallel annotation. A redundant overlapped frame is a duplicate copy of an overlapped frame. For example, assume the first segmented image to be annotated and the second segmented image to be annotated share the overlapped frame 20; then either the copy of frame 20 in the first segment or the copy in the second segment can be regarded as redundant.
Because overlapped frames were set in the preceding steps, after the parallel annotation of all the images to be annotated is complete and the annotated images are obtained, the redundant overlapped frames in the annotated images can be deleted, i.e., de-duplication is performed, to ensure that a complete continuous sequence of frames is obtained. It can be understood that the segmented images to be annotated are processed in parallel into segmented annotated images. Since the annotation results of the overlapped frames are consistent after normalization, deleting the overlapped frames at either the front or the rear of a segmented annotated image is feasible, as long as a continuous sequence of frames is ultimately obtained. That is, it should be noted that not all copies of the overlapped frames may be deleted, or image frames would go missing.
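A sketch of this de-duplicating merge under the same assumptions (equal overlap between adjacent segments; each segment's results given in frame order):

```python
def merge_segments(segment_results, overlap):
    """Merge per-segment results, keeping exactly one copy of each frame."""
    merged = list(segment_results[0])
    for seg in segment_results[1:]:
        # The leading `overlap` frames duplicate the previous segment's tail,
        # and their annotations match after normalization, so drop them.
        merged.extend(seg[overlap:])
    return merged
```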
In the above technical scheme, overlapped frames are set on the segmented images to be annotated, and normalization is performed according to the annotation results of the overlapped frames, so that the final annotation results can be unified. By annotating images in parallel, the annotation efficiency can be greatly improved: for example, when a sequence of 1000 images is divided into segments of 50 frames, the annotation time can be shortened to roughly 1/20 of the original. Meanwhile, because the normalization check adds an intermediate checking node, the annotation quality is markedly improved. Second, in terms of annotation capacity, the method extends fine annotation from supporting only videos of tens of seconds to having no limit on video duration, and such long-video ground-truth data can greatly assist the further optimization of visual tracking algorithms. Third, because previous long-continuous-frame annotation schemes placed strong demands on the performance of the annotation equipment, this splitting scheme largely relieves the performance pressure that large data volumes place on the equipment.
In an example, Fig. 5 is a structural diagram of an image annotation device provided in an embodiment of the present application; the embodiment is applicable to scenarios requiring fast image annotation, and the device is implemented by software and/or hardware and specifically configured in an electronic device. The electronic device may be a computer device or the like.
An image annotation apparatus 300 as shown in Fig. 5 includes: a to-be-annotated image acquisition module 310, a segmented to-be-annotated image division module 320, and an annotation object parallel annotation module 330, wherein:
the to-be-annotated image acquisition module 310 is configured to acquire multiple frames of images to be annotated;
the segmented to-be-annotated image division module 320 is configured to divide the images to be annotated into multiple segments of segmented images to be annotated, where each segmented image to be annotated includes at least two frames of images to be annotated;
and the annotation object parallel annotation module 330 is configured to annotate the annotation objects in each segmented image to be annotated in parallel.
According to the embodiments of the application, the acquired multiple frames of images to be annotated are divided into multiple segments of segmented images to be annotated, so that the annotation objects in each segment can be annotated in parallel. This solves the problems of low annotation efficiency and insufficient annotation capacity in existing image annotation methods, and improves both the annotation efficiency and the annotation capacity for images.
Optionally, the to-be-annotated image acquisition module 310 is specifically configured to: perform frame extraction on the continuous frames at the set frame extraction frequency to obtain the images to be annotated.
Optionally, the segmentation module 320 for dividing the image to be annotated is specifically configured to: dividing the current segmented image to be annotated according to the time sequence of the image to be annotated and the set image quantity of the segmented image to be annotated; determining the current overlapped frame of the current subsection image to be marked according to an overlapped frame setting rule; taking the current overlapped frame as a part of the image to be labeled of the next segment of the image to be labeled, and determining the residual image to be labeled of the next segment of the image to be labeled according to the set image quantity; and taking the image to be annotated of the next segment as the image to be annotated of the current segment, and returning to execute the operation of determining the current overlapped frame of the image to be annotated of the current segment according to the overlapped frame setting rule until the division of all the images to be annotated is completed.
Optionally, the overlap frame setting rule includes: taking the number of frame images for judging that the image object disappears as the number of overlapped frames; or, the number of frame images set by default is taken as the number of overlapped frames.
Optionally, the parallel annotation module 330 for annotation objects is specifically configured to: marking the marked objects by adopting matched marking rules aiming at the images to be marked of the segments at the same time to obtain the original parallel marking results corresponding to the images to be marked of the segments; and carrying out normalization processing on the original parallel annotation result to obtain a target parallel annotation result corresponding to each segmented image to be annotated.
Optionally, the parallel annotation module 330 for annotation objects is specifically configured to: according to the time sequence, sequentially determining a reference original parallel annotation result and a next section of original parallel annotation result of the reference original parallel annotation result from the original parallel annotation result; taking the next section of original parallel marking result as a current processing parallel marking result; and under the condition that the same target labeling object exists in the reference original parallel labeling result and the current processing parallel labeling result, taking the labeling result of the target labeling object in the reference original parallel labeling result as the labeling result of the target labeling object in the current processing parallel labeling result.
Optionally, the parallel annotation module 330 for annotation objects is specifically configured to: and under the condition that the same target labeling object exists in the reference original parallel labeling result and the current processing parallel labeling result and a new labeling object exists in the current processing parallel labeling result, labeling the new labeling object again according to a labeling sequence.
Optionally, the parallel annotation module 330 for annotation objects is specifically configured to: determining a last labeling result in the currently processed parallel labeling results; continuing the last marking result to obtain a continued marking result; and taking the continuous labeling result as a target labeling result of the newly added labeling object.
Optionally, the parallel annotation module 330 for annotation objects is specifically configured to: and in the current processing parallel annotation result, taking a non-target annotation object which is the same as the partial annotation result of the target annotation object as the newly-added annotation object.
Optionally, the annotation object parallel annotation module 330 is specifically configured to: perform a normalization check on each original parallel annotation result, the normalization check including a generic normalization check and a customized normalization check, wherein the generic normalization check covers a wrong number of annotation objects, a wrong annotation object type, and wrongly annotated key attributes of an annotation object, and the customized normalization check covers annotation errors under a customized annotation rule.
Optionally, the parallel annotation module 330 for annotation objects is specifically configured to: and deleting redundant overlapped frames in the annotation image.
The image annotation device can execute the image annotation method provided by any embodiment of the application, and has corresponding functional modules and beneficial effects of the execution method. For details of the image annotation, reference may be made to the image annotation method provided in any embodiment of the present application.
Since the image annotation apparatus described above is an apparatus capable of executing the image annotation method in the embodiment of the present application, based on the image annotation method described herein, a person skilled in the art can understand the specific implementation of the image annotation apparatus of this embodiment and its various variations, so how the apparatus implements the image annotation method is not described in detail here. Any apparatus with which a person skilled in the art implements the image annotation method in the embodiments of the present application falls within the scope intended to be protected by the claims of the present application.
In one example, the present application also provides an electronic device and a readable storage medium.
Fig. 6 is a schematic structural diagram of an electronic device for implementing an image annotation method according to an embodiment of the present application. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. Electronic devices may also represent various forms of mobile devices, such as cellular telephones, smart phones, wearable devices, and other similar computing devices. The components shown here, their connections and relationships, and their functions are meant as examples only and are not meant to limit the implementations of the present application described and/or claimed herein.
As shown in Fig. 6, the electronic device includes: one or more processors 601, a memory 602, and interfaces for connecting the various components, including a high-speed interface and a low-speed interface. The components are interconnected by different buses and may be mounted on a common motherboard or in other manners as desired. The processor may process instructions executed within the electronic device, including instructions stored in or on the memory to display graphical information of a GUI on an external input/output apparatus (such as a display device coupled to an interface). In other embodiments, multiple processors and/or multiple buses may be used together with multiple memories, as desired. Likewise, multiple electronic devices may be connected, with each device providing part of the necessary operations (e.g., as a server array, a group of blade servers, or a multi-processor system). Fig. 6 takes one processor 601 as an example.
The memory 602 is a non-transitory computer readable storage medium as provided herein. The memory stores instructions executable by at least one processor to cause the at least one processor to perform the image annotation methods provided herein. The non-transitory computer readable storage medium of the present application stores computer instructions for causing a computer to perform the image annotation method provided herein.
The memory 602, as a non-transitory computer readable storage medium, can be used to store non-transitory software programs, non-transitory computer executable programs, and modules, such as the program instructions/modules corresponding to the image annotation method in the embodiments of the present application (for example, the to-be-annotated image acquisition module 310, the segmented to-be-annotated image division module 320, and the annotation-object parallel annotation module 330 shown in Fig. 5). By running the non-transitory software programs, instructions, and modules stored in the memory 602, the processor 601 executes the various functional applications and data processing of the server, that is, implements the image annotation method of the above method embodiments.
The memory 602 may include a program storage area and a data storage area, wherein the program storage area may store an operating system and an application program required by at least one function, and the data storage area may store data created through the use of the electronic device implementing the image annotation method, and the like. Further, the memory 602 may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, a flash memory device, or another non-transitory solid-state storage device. In some embodiments, the memory 602 optionally includes memory located remotely from the processor 601; such remote memory may be connected over a network to the electronic device implementing the image annotation method. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The electronic device for implementing the image annotation method may further include an input device 603 and an output device 604. The processor 601, the memory 602, the input device 603, and the output device 604 may be connected by a bus or in other ways; Fig. 6 takes a bus connection as an example.
The input device 603 may receive input numeric or character information and generate key signal inputs related to user settings and function control of the electronic device implementing the image annotation method; examples include a touch screen, a keypad, a mouse, a trackpad, a touchpad, a pointing stick, one or more mouse buttons, a trackball, and a joystick. The output device 604 may include a display device, auxiliary lighting devices (e.g., LEDs), tactile feedback devices (e.g., vibration motors), and the like. The display device may include, but is not limited to, a liquid crystal display (LCD), a light-emitting diode (LED) display, and a plasma display. In some implementations, the display device can be a touch screen.
Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, application-specific integrated circuits (ASICs), computer hardware, firmware, software, and/or combinations thereof. These various implementations may include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special-purpose or general-purpose, and which receives data and instructions from, and transmits data and instructions to, a storage system, at least one input device, and at least one output device.
These computer programs (also known as programs, software applications, or code) include machine instructions for a programmable processor, and may be implemented using high-level procedural and/or object-oriented programming languages, and/or assembly/machine languages. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic discs, optical disks, memory, Programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client may be a smart phone, a notebook computer, a desktop computer, a tablet computer, a smart speaker, or the like, but is not limited thereto. The server may be an independent physical server, a server cluster or distributed system formed by multiple physical servers, or a cloud server providing basic cloud computing services such as cloud computing, cloud services, cloud databases, and cloud storage. A client and a server are generally remote from each other and typically interact through a communication network; their relationship arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
According to the image annotation method and apparatus of the present application, the acquired multi-frame images to be annotated are divided into multiple segmented images to be annotated, so that the annotation objects in each segmented image are annotated in parallel. This addresses the low annotation efficiency and insufficient annotation capacity of conventional image annotation methods, improving both the annotation efficiency and the annotation capacity.
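As a minimal end-to-end sketch of this division step (assuming time-ordered frames, a fixed frame-extraction stride, a fixed segment size, and a fixed overlap; all function and parameter names are illustrative, not taken from the patent):

    def extract_frames(continuous_frames, stride):
        """Frame extraction: keep every `stride`-th frame of the stream."""
        return continuous_frames[::stride]

    def split_with_overlap(frames, seg_size, overlap):
        """Divide time-ordered frames into segments of `seg_size` frames;
        each new segment starts with the last `overlap` frames of the
        previous one, so objects can be matched across the boundary."""
        assert 0 < overlap < seg_size
        segments, start = [], 0
        while start + overlap < len(frames) or start == 0:
            segments.append(frames[start:start + seg_size])
            start += seg_size - overlap
        return segments

For example, ten frames (0-9) with seg_size=4 and overlap=1 yield [0,1,2,3], [3,4,5,6], [6,7,8,9]; frames 3 and 6 are the overlapped frames that are deduplicated after annotation.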
It should be understood that the various forms of flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present application may be executed in parallel, sequentially, or in different orders; this is not limited herein, as long as the desired results of the technical solutions disclosed in the present application can be achieved.
The above-described embodiments should not be construed as limiting the scope of the present application. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (24)

1. An image annotation method, comprising:
acquiring multiple frames of images to be annotated;
dividing the images to be annotated into a plurality of segmented images to be annotated, wherein each segmented image to be annotated comprises at least two frames of the images to be annotated;
and annotating the annotation objects in the segmented images to be annotated in parallel.
2. The method according to claim 1, wherein the acquiring multiple frames of images to be annotated comprises:
performing frame extraction on continuous frame images at a set frame extraction frequency to obtain the images to be annotated.
3. The method of claim 1, wherein the dividing the image to be annotated into a plurality of segmented images to be annotated comprises:
dividing a current segmented image to be annotated according to the time sequence of the images to be annotated and a set number of images per segmented image to be annotated;
determining current overlapped frames of the current segmented image to be annotated according to an overlapped-frame setting rule;
taking the current overlapped frames as part of a next segmented image to be annotated, and determining the remaining images of the next segmented image to be annotated according to the set number of images;
and taking the next segmented image to be annotated as the current segmented image to be annotated, and returning to the operation of determining the current overlapped frames of the current segmented image to be annotated according to the overlapped-frame setting rule, until the division of all images to be annotated is completed.
4. The method of claim 3, wherein the overlapped-frame setting rule comprises:
taking, as the number of overlapped frames, the number of frame images required to determine that an image object has disappeared; or,
taking a default number of frame images as the number of overlapped frames.
5. The method of claim 1, wherein the parallel annotation of the annotation objects in the segmented images to be annotated comprises:
annotating, simultaneously for each segmented image to be annotated, the annotation objects therein using matched annotation rules, to obtain an original parallel annotation result corresponding to each segmented image to be annotated;
and normalizing the original parallel annotation results to obtain a target parallel annotation result corresponding to each segmented image to be annotated.
6. The method of claim 5, wherein normalizing the original parallel annotation results comprises:
determining, in chronological order from the original parallel annotation results, a reference original parallel annotation result and the next original parallel annotation result following it;
taking that next original parallel annotation result as a currently processed parallel annotation result;
and, when the same target annotation object exists in both the reference original parallel annotation result and the currently processed parallel annotation result, taking the annotation result of the target annotation object in the reference original parallel annotation result as the annotation result of the target annotation object in the currently processed parallel annotation result.
7. The method of claim 6, wherein normalizing the original parallel annotation results further comprises:
when the same target annotation object exists in both the reference original parallel annotation result and the currently processed parallel annotation result, and a newly added annotation object exists in the currently processed parallel annotation result, annotating the newly added annotation object again according to an annotation sequence.
8. The method of claim 7, wherein annotating the newly added annotation object again according to the annotation sequence comprises:
determining the last annotation result in the currently processed parallel annotation result;
continuing the last annotation result to obtain a continued annotation result;
and taking the continued annotation result as a target annotation result of the newly added annotation object.
9. The method of claim 7, wherein determining that a newly added annotation object exists in the currently processed parallel annotation result comprises:
in the currently processed parallel annotation result, taking a non-target annotation object whose partial annotation result is identical to that of a target annotation object as the newly added annotation object.
10. The method according to any one of claims 5-9, further comprising, before normalizing the original parallel annotation results:
performing a normalization check on each original parallel annotation result, the normalization check comprising a general normalization check and a customized normalization check, wherein:
the general normalization check covers errors in the number of annotated objects, errors in the annotation type of an annotation object, and errors in the annotation of an annotation object's key attributes;
and the customized normalization check covers annotation objects annotated in violation of customized annotation rules.
11. The method according to any one of claims 5-9, further comprising, after normalizing the original parallel annotation results:
deleting the redundant overlapped frames from the annotated images.
12. An image annotation apparatus comprising:
a to-be-annotated image acquisition module, configured to acquire multiple frames of images to be annotated;
a segmented to-be-annotated image division module, configured to divide the images to be annotated into a plurality of segmented images to be annotated, wherein each segmented image to be annotated comprises at least two frames of the images to be annotated;
and an annotation-object parallel annotation module, configured to annotate the annotation objects in the segmented images to be annotated in parallel.
13. The apparatus according to claim 12, wherein the to-be-annotated image acquisition module is specifically configured to:
perform frame extraction on continuous frame images at a set frame extraction frequency to obtain the images to be annotated.
14. The apparatus according to claim 12, wherein the segmented to-be-annotated image division module is specifically configured to:
divide a current segmented image to be annotated according to the time sequence of the images to be annotated and a set number of images per segmented image to be annotated;
determine current overlapped frames of the current segmented image to be annotated according to an overlapped-frame setting rule;
take the current overlapped frames as part of a next segmented image to be annotated, and determine the remaining images of the next segmented image to be annotated according to the set number of images;
and take the next segmented image to be annotated as the current segmented image to be annotated, and return to the operation of determining the current overlapped frames of the current segmented image to be annotated according to the overlapped-frame setting rule, until the division of all images to be annotated is completed.
15. The apparatus of claim 14, wherein the overlapped-frame setting rule comprises:
taking, as the number of overlapped frames, the number of frame images required to determine that an image object has disappeared; or,
taking a default number of frame images as the number of overlapped frames.
16. The apparatus according to claim 12, wherein the parallel annotation module is specifically configured to:
annotate, simultaneously for each segmented image to be annotated, the annotation objects therein using matched annotation rules, to obtain an original parallel annotation result corresponding to each segmented image to be annotated;
and normalize the original parallel annotation results to obtain a target parallel annotation result corresponding to each segmented image to be annotated.
17. The apparatus according to claim 16, wherein the parallel annotation module is specifically configured to:
determine, in chronological order from the original parallel annotation results, a reference original parallel annotation result and the next original parallel annotation result following it;
take that next original parallel annotation result as a currently processed parallel annotation result;
and, when the same target annotation object exists in both the reference original parallel annotation result and the currently processed parallel annotation result, take the annotation result of the target annotation object in the reference original parallel annotation result as the annotation result of the target annotation object in the currently processed parallel annotation result.
18. The apparatus of claim 17, wherein the parallel annotation module is specifically configured to:
when the same target annotation object exists in both the reference original parallel annotation result and the currently processed parallel annotation result, and a newly added annotation object exists in the currently processed parallel annotation result, annotate the newly added annotation object again according to an annotation sequence.
19. The apparatus of claim 18, wherein the parallel annotation module is specifically configured to:
determine the last annotation result in the currently processed parallel annotation result;
continue the last annotation result to obtain a continued annotation result;
and take the continued annotation result as a target annotation result of the newly added annotation object.
20. The apparatus of claim 18, wherein the parallel annotation module is specifically configured to:
in the currently processed parallel annotation result, take a non-target annotation object whose partial annotation result is identical to that of a target annotation object as the newly added annotation object.
21. The apparatus according to any one of claims 16 to 20, wherein the parallel annotation module is specifically configured to:
perform a normalization check on each original parallel annotation result, the normalization check comprising a general normalization check and a customized normalization check, wherein:
the general normalization check covers errors in the number of annotated objects, errors in the annotation type of an annotation object, and errors in the annotation of an annotation object's key attributes;
and the customized normalization check covers annotation objects annotated in violation of customized annotation rules.
22. The apparatus according to any one of claims 16 to 20, wherein the parallel annotation module is specifically configured to:
delete the redundant overlapped frames from the annotated images.
23. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the image annotation method of any one of claims 1-11.
24. A non-transitory computer readable storage medium storing computer instructions for causing a computer to perform the image annotation method of any one of claims 1-11.
CN202010694095.7A 2020-07-17 2020-07-17 Image labeling method and device, electronic equipment and storage medium Active CN111860302B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010694095.7A CN111860302B (en) 2020-07-17 2020-07-17 Image labeling method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111860302A 2020-10-30
CN111860302B 2024-03-01

Family ID: 73001638


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113591580A (en) * 2021-06-30 2021-11-02 北京百度网讯科技有限公司 Image annotation method and device, electronic equipment and storage medium
CN114973063A (en) * 2022-04-25 2022-08-30 浙江大华技术股份有限公司 Target labeling method, electronic device and computer-readable storage medium
WO2023103329A1 (en) * 2021-12-08 2023-06-15 北京百度网讯科技有限公司 Data labeling method, apparatus, and system, device, and storage medium
CN118070108A (en) * 2024-04-22 2024-05-24 山东中翰软件有限公司 Intelligent labeling and classifying method for government affair data management


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080063287A1 (en) * 2006-09-13 2008-03-13 Paul Klamer Method And Apparatus For Providing Lossless Data Compression And Editing Media Content
CN103262632A * 2010-06-04 2013-08-21 Board of Regents, The University of Texas System Wireless communication method, system and computer program product
WO2019137196A1 (en) * 2018-01-11 2019-07-18 阿里巴巴集团控股有限公司 Image annotation information processing method and device, server and system
CN108491774A (en) * 2018-03-12 2018-09-04 北京地平线机器人技术研发有限公司 The method and apparatus that multiple targets in video are marked into line trace
CN109819325A (en) * 2019-01-11 2019-05-28 平安科技(深圳)有限公司 Hot video marks processing method, device, computer equipment and storage medium
CN111367445A (en) * 2020-03-31 2020-07-03 中国建设银行股份有限公司 Image annotation method and device

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
TIANHAN GAO: "Multi-frame Prediction Load Balancing Algorithm for Sort-first Parallel Rendering", CGDIP '17: Proceedings of the 2017 International Conference on Computer Graphics and Digital Image Processing, 2 July 2017 (2017-07-02) *
QIU Cheng; GE Di; HOU Qun: "Design and Implementation of a Manual Annotation System Based on Remote Sensing Images", Computer Knowledge and Technology, no. 23, 15 August 2018 (2018-08-15) *
CHEN Guojun; CHEN Wei; YU Hanqi; WANG Hanli: "Research on an Autonomous Navigation Method for Mobile Robots Based on the Semantic ORB-SLAM2 Algorithm", Machine Tool & Hydraulics, no. 09, 15 May 2020 (2020-05-15) *


Also Published As

Publication number Publication date
CN111860302B (en) 2024-03-01


Legal Events

PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant