CN111860302B - Image labeling method and device, electronic equipment and storage medium - Google Patents

Image labeling method and device, electronic equipment and storage medium

Info

Publication number
CN111860302B
CN111860302B (application CN202010694095.7A)
Authority
CN
China
Prior art keywords
labeling
image
result
marked
images
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010694095.7A
Other languages
Chinese (zh)
Other versions
CN111860302A (en)
Inventor
杨雪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202010694095.7A priority Critical patent/CN111860302B/en
Publication of CN111860302A publication Critical patent/CN111860302A/en
Application granted granted Critical
Publication of CN111860302B publication Critical patent/CN111860302B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/40 Scenes; Scene-specific elements in video content
    • G06V 20/46 Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/22 Matching criteria, e.g. proximity measures

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Multimedia (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The application discloses an image labeling method and apparatus, an electronic device and a storage medium, relating to the technical field of computer vision and in particular to image processing technology in the field of autonomous driving. The method comprises the following steps: obtaining multiple frames of images to be labeled; dividing the images to be labeled into multiple segments of segmented images to be labeled, where each segment comprises at least two frames of images to be labeled; and labeling the objects in the segmented images in parallel. The method and apparatus can improve the labeling efficiency and labeling capacity for images.

Description

Image labeling method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of image processing, and in particular to image processing techniques for autonomous driving.
Background
Image annotation refers to annotating objects in an image according to a set annotation rule. For example, a vehicle in an image may be enclosed in a bounding box, or the key points of a face may be marked with dots. Image annotation can be applied to static single-frame images as well as to video. For example, during video preview or playback, objects can be annotated directly on the frame images of the video, enabling more targeted video processing. Image annotation is used in many fields, for example to locate obstacles in autonomous driving or to lock onto important visual cue information in video tracking.
Disclosure of Invention
The embodiments of the present application provide an image labeling method and apparatus, an electronic device and a storage medium, to improve the labeling efficiency and labeling capacity for images.
In a first aspect, an embodiment of the present application provides an image labeling method, including:
obtaining a plurality of frames of images to be marked;
dividing the image to be marked into a plurality of sections of segmented images to be marked; the segmented image to be marked comprises at least two frames of images to be marked;
and labeling the labeling objects in the segmented images to be labeled in parallel.
In a second aspect, an embodiment of the present application provides an image labeling apparatus, including:
the image to be marked acquisition module is used for acquiring a plurality of frames of images to be marked;
the segmented image dividing module is used for dividing the images to be marked into a plurality of segments of segmented images to be marked; the segmented image to be marked comprises at least two frames of images to be marked;
and the labeling object parallel labeling module is used for labeling the labeling objects in the segmented images to be labeled in parallel.
In a third aspect, an embodiment of the present application provides an electronic device, including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
The memory stores instructions executable by the at least one processor to enable the at least one processor to perform the image annotation method provided by the embodiment of the first aspect.
In a fourth aspect, embodiments of the present application further provide a non-transitory computer-readable storage medium storing computer instructions for causing the computer to perform the image labeling method provided by the embodiments of the first aspect.
According to the present application, the obtained multiple frames of images to be marked are divided into multiple segments of segmented images to be marked, so that the objects in the segmented images can be labeled in parallel. This solves problems of existing image labeling methods such as low labeling efficiency and insufficient labeling capacity, thereby improving the labeling efficiency and labeling capacity for images.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following specification.
Drawings
The drawings are for better understanding of the present solution and do not constitute a limitation of the present application. Wherein:
FIG. 1 is a flowchart of an image labeling method according to an embodiment of the present application;
FIG. 2 is a flowchart of an image labeling method according to an embodiment of the present application;
FIG. 3 is a schematic diagram illustrating the effect of an image labeling method according to an embodiment of the present disclosure;
fig. 4 is an effect schematic diagram of an image labeling method provided in an embodiment of the present application;
fig. 5 is a block diagram of an image labeling apparatus according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of an electronic device for implementing the image labeling method according to the embodiment of the present application.
Detailed Description
Exemplary embodiments of the present application are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present application to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present application. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
Target tracking is a key technology in the field of computer vision. To solve the tracking problem, a large amount of continuous image annotation data is required to train the algorithm. Currently, labeling an image is mainly performed in two ways:
(1) And directly marking the key frame images in the acquired video, and automatically assigning values to the non-key frame images according to a frame difference method.
The image labeling method specifically comprises the following steps: first, the first frame image of the video is labeled; second, a frame image in a later video segment is selected as a key frame and labeled. Then, according to the labeling results of the first frame image and the subsequent key frame images, the intermediate non-key frame images are labeled automatically on a continuous-and-average principle, and the process is repeated until the whole video segment is labeled. Because the annotation of each frame image is not processed accurately, this image annotation approach cannot guarantee annotation precision. Its application range is therefore limited: it is only suitable for scenes without high requirements on annotation precision, and not for fields such as autonomous driving or semantic recognition, where the training data must be annotated with high precision.
(2) And marking the acquired video frame by frame after frame extraction.
The image labeling method specifically comprises the following steps: first, frame extraction is performed on the video; then, the extracted video frame images are labeled one by one, from the first frame image until the last frame image has been labeled. If the performance of the computer processing the image annotation is limited, all the extracted images need to be divided into several segments. In the labeling process, the images of the first segment must be labeled first, then the second segment is labeled serially according to the labeling result of the first segment, and so on. This labeling method can accurately process the labeling result of each frame image, but it is limited by serial labeling and imposes a strict temporal ordering between images. That is, a subsequent frame image cannot be labeled until the preceding frame images have been labeled, so the labeling efficiency is low. When the number of frame images is large, the large volume of labeling results is also a great challenge for computer performance.
In an example, fig. 1 is a flowchart of an image labeling method provided in an embodiment of the present application, where the embodiment may be applicable to a case of fast image labeling, and the method may be performed by an image labeling apparatus, where the apparatus may be implemented by software and/or hardware, and may be generally integrated in an electronic device. The electronic device may be a computer device or the like. Accordingly, as shown in fig. 1, the method includes the following operations:
s110, acquiring a plurality of frames of images to be marked.
The image to be marked is an image in which the labeling objects are yet to be marked.
It will be appreciated that prior to annotating an image, it is first necessary to obtain the image to be annotated. In the embodiment of the application, multiple frames of images to be marked can be acquired.
S120, dividing the image to be annotated into a plurality of sections of segmented images to be annotated; the segmented image to be marked comprises at least two frames of images to be marked.
The segmented images to be marked are obtained by dividing the multiple frames of images to be marked into segments. That is, each segmented image to be marked may include multiple frames of images to be marked, and the image counts of all segments sum to the total number of images to be marked.
Correspondingly, after the multi-frame image to be marked is obtained, the multi-frame image to be marked can be divided into a plurality of segments to obtain multi-segment segmented images to be marked. Optionally, the number of images included in the segmented to-be-annotated image of each segment may be the same or different, and in this embodiment of the present application, the number of images included in the segmented to-be-annotated image is not limited. Alternatively, each segmented image to be annotated may include at least two frames of images to be annotated. It should be noted that, in order to improve the efficiency of image labeling, the number of images included in the segmented image to be labeled may be between 20 frames and 50 frames in general.
S130, labeling objects in the segmented images to be labeled in parallel.
The labeling object may include an obstacle, such as an automobile, a railing, a pedestrian, a tree, a billboard, or the like, and may further include a feature object, such as a face key point or a pupil, or the like. That is, the image to be annotated may be an image to be annotated of various application scenes, such as an automatic driving field or a face recognition field, and the embodiment of the present application does not limit a specific application scene of the image to be annotated.
In the embodiment of the application, after the images to be marked are divided into multiple segments of segmented images to be marked, the labeling objects in the segments can be labeled in parallel. Parallel labeling means labeling the objects in all segments at the same time, without serially labeling the segments in the order of the segments.
When the labeling objects in the segmented images to be labeled are labeled in parallel, mutually independent labeling rules can be adopted for the segments. For example, when objects are labeled with numbers, assuming the first segmented image to be labeled includes 3 labeling objects, they may be labeled sequentially with the numbers "1, 2 and 3". Assuming the second segmented image to be labeled includes 4 labeling objects, they may be labeled sequentially with the numbers "1, 2, 3 and 4", or with the numbers "5, 6, 7 and 8". That is, the labeling behavior of each segment is not affected by the labeling behavior of the other segments.
It can be appreciated that, in order to uniquely identify each labeling object, the same labeling object should be labeled in the same way in every image to be marked, for example with a uniform number. Different labeling objects need different labels; when a labeling object is newly added, the current maximum number is incremented to obtain the new number, and the newly added object is labeled with it.
In addition, although the segments can be labeled in parallel under mutually independent labeling rules, the labeling results of the segments must afterwards be processed uniformly, so the segments need to be associated during parallel labeling through shared, invariant content. For example, to unify the numbers of the same labeling objects across segments, the segments can be associated by setting overlapping frames. Alternatively, when two adjacent images differ little and contain essentially the same labeling objects, for instance when continuous frame images are labeled directly without frame extraction, the segments can be associated based on the positions in the video of each segment's preceding and following images. The embodiments of the present invention do not limit the manner of associating the segments.
Therefore, the labeling objects in the images to be labeled in each segment are labeled in a parallel labeling mode, so that the image labeling time can be greatly shortened, and the labeling efficiency and the labeling capacity of the images are improved.
According to the present application, the obtained multiple frames of images to be marked are divided into multiple segments of segmented images to be marked, so that the objects in the segmented images can be labeled in parallel. This solves problems of existing image labeling methods such as low labeling efficiency and insufficient labeling capacity, thereby improving the labeling efficiency and labeling capacity for images.
In an example, fig. 2 is a flowchart of an image labeling method provided by an embodiment of the present application, and fig. 3 and fig. 4 are effect schematic diagrams of the image labeling method. This embodiment optimizes and improves on the technical solutions of the embodiments described above, and provides various specific alternative implementations of obtaining multiple frames of images to be labeled, dividing them into multiple segments of segmented images to be labeled, and labeling the objects in the segments in parallel.
An image labeling method as shown in fig. 2 and fig. 3 and fig. 4, comprising:
and S210, performing frame extraction processing on the continuous frame images according to the set frame extraction frequency to obtain the image to be marked.
The set frame extraction frequency may be chosen according to actual requirements, for example extracting 1 frame every 10 frames, or 5 frames every second; the embodiments of the present application do not limit the specific value of the set frame extraction frequency. The continuous frame images are all the images comprised by a segment of video.
Optionally, a mode of performing frame extraction processing on continuous frame images according to a set frame extraction frequency can be adopted to obtain multiple frames of images to be marked. Alternatively, a section of continuous frame image may also be directly used as the image to be annotated, which is not limited in the embodiment of the present application.
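As a minimal illustrative sketch (not part of the patent text), frame extraction at a set frequency could look as follows in Python, assuming OpenCV (cv2) is available; the function name and the default interval of 10 frames are assumptions for illustration:

```python
import cv2

def extract_frames(video_path: str, every_n: int = 10) -> list:
    """Extract one frame every `every_n` frames from a video.

    A sketch of frame extraction at a set frequency; the interval
    of 10 is an illustrative default, not a value from the patent.
    """
    capture = cv2.VideoCapture(video_path)
    frames = []
    index = 0
    while True:
        ok, frame = capture.read()
        if not ok:  # end of video or read failure
            break
        if index % every_n == 0:
            frames.append(frame)
        index += 1
    capture.release()
    return frames
```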
S220, dividing the image to be annotated into a plurality of sections of segmented images to be annotated; the segmented image to be marked comprises at least two frames of images to be marked.
Accordingly, S220 may specifically include the following operations:
s221, dividing the current segmented image to be marked according to the time sequence of the images to be marked and the set image quantity of the segmented images to be marked.
The number of the set images can be set according to the requirement, and optionally, the number of the set images can be between 20 and 50. Meanwhile, the number of the set images corresponding to the images to be marked in different segments can be the same or different, and the embodiment of the application does not limit the number of the set images. The current segmented image to be annotated is the segmented image to be annotated obtained by current division.
In the embodiment of the application, the images to be marked of each segment can be divided in sequence. Optionally, when dividing the first segment of the images to be marked, the current segment of the images to be marked may be divided according to the time sequence of the images to be marked and the set number of images of the segments of the images to be marked, and the divided images are used as the first segment of the images to be marked. For example, the first 20 frames of images of the continuous frames are taken as the first segment of the image to be annotated.
S222, determining the current overlapped frame of the image to be annotated of the current segment according to the overlapped frame setting rule.
Wherein, the overlapped frame setting rule can be used for setting overlapped frame images among the segmented images to be marked. The current overlapping frame may be an image frame included in the current segmented image to be annotated, and overlapping with other segmented images to be annotated.
Correspondingly, after the first segment of the image to be marked is used as the current segment of the image to be marked, the current overlapping frame of the current segment of the image to be marked can be determined according to the overlapping frame setting rule. Alternatively, the number of the current overlapped frames of the images to be annotated of different segments may be the same or different, which is not limited in the embodiment of the present application.
It will be appreciated that when determining the current overlapping frame of the current segmented image to be annotated, an overlapping frame with the subsequent images may be determined for the current segment only. For example, as shown in fig. 3, the last 1 frame of the current segmented image to be annotated is taken as the current overlapping frame. Correspondingly, the first frame of the next segment is then the last frame of the current segment. That is, 1 frame of the same image is shared between the current segmented image to be annotated and the next segmented image to be annotated.
In an alternative embodiment of the present application, the overlapping frame setting rule includes: the number of frame images for judging the disappearance of the image object is taken as the number of overlapped frames; or, the default number of frame images is set as the number of overlapped frames.
The overlapping frame setting rules may be of several types. Optionally, the number of frame images used for judging that an image object has disappeared may be taken as the number of overlapping frames. For example, the obstacle labeling rule for unmanned vehicles may require that an obstacle that has disappeared for 5 frames be given a new number when it reappears; in that case the number of overlapping frames may be set to 5. Alternatively, a default number of frame images may be set as the number of overlapping frames, for example 1 frame by default. Taking the default number of frame images as the number of overlapping frames is suitable for application scenes without special labeling requirements.
In the scheme, the overlapping frames among the segmented images to be marked are set through the overlapping frame setting rules of various types, so that the application requirements of the image marking method on various application scenes can be met.
S223, taking the current overlapped frame as a part of images to be annotated of the next segment of images to be annotated, and determining the rest images to be annotated of the next segment of images to be annotated according to the set number of images.
Correspondingly, after the current segmentation to-be-annotated image is divided and the current overlapped frame is determined, the current overlapped frame can be used as part of to-be-annotated images of the next segmentation to-be-annotated image, and the rest to-be-annotated images of the next segmentation to-be-annotated image are determined according to the set image quantity.
In an exemplary example, assume that the division of the first segmented image to be marked is completed and the current overlapping frame is determined to be 2 frames; the last 2 frames of the first segment may then be used as the first two frames of the second segment. Assuming that each segment is set to contain 20 frames, the 21st to 38th frame images can then be taken from the continuous frame images in time order as the remaining images of the second segment, completing its division. That is, the first segmented image to be annotated comprises the 1st to 20th frame images, the second segmented image to be annotated comprises the 19th to 38th frame images, and the two segments share the overlapping 19th and 20th frame images.
S224, taking the image to be annotated of the next segment as the image to be annotated of the current segment.
S225, judging whether the current segmented image to be marked is the last segmented image to be marked; if so, executing S226, otherwise returning to execute S222.
S226, the division of all the images to be marked is completed.
After the next segment of the current segmented image to be annotated is divided, the next segment can be taken as the new current segment, and the operation of determining the current overlapping frame of the current segment according to the overlapping frame setting rule is executed again, until the division of all the images to be annotated is completed. As shown in fig. 3, after all the images to be marked have been divided and the overlapping frames have been set, parallel labeling of the segments can begin. For example, each labeling object is selected with a bounding box, and the labeling objects are numbered in turn (i.e., given labeling IDs).
In the scheme, the overlapping frames are set for the images to be marked of the segments by using the overlapping frame setting rule, so that the association between the images to be marked of the segments can be established, and the marking results of the images to be marked of the segments can be uniformly processed.
When establishing the overlapping frames between the segments, there are several alternatives to the scheme above, which sets the overlapping frames directly on image frames that have not yet been marked. As shown in fig. 4, the segments may first be labeled in parallel, with each labeling object selected by a bounding box and the objects numbered in turn (i.e., given labeling IDs). After the segments have been labeled, the images bearing the same labeling boxes (i.e., the same labeling content) are used as the overlapping frames.
It should be noted that, besides establishing the association between the segments through overlapping frames as above, other ways of associating the segments may exist. If a section of continuous frame images is directly used as the images to be annotated, the segments can be associated through the image identifications or time points of each segment's preceding and following images. For example, the first segmented image to be annotated comprises the 1st to 20th frame images, and the second comprises the 21st to 40th frame images. Because two adjacent frame images differ little and contain essentially the same labeling objects, the first and second segments can be associated through the 20th and 21st frame images.
It should also be noted that if every segment contains the same number of images to be annotated and the number of overlapping frames between segments is also the same, the division of all segments can be completed at once, without dividing the segments one after another in time order. For example, assuming the extracted frames total 50 frames, each segment includes 20 frames of images to be annotated, and adjacent segments share 5 overlapping frames, all segments can be determined simultaneously: the first segmented image to be marked comprises the 1st to 20th frame images, the second comprises the 16th to 35th frame images, and the third comprises the 31st to 50th frame images.
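A minimal Python sketch of this one-shot division, assuming equal segment sizes and equal overlap; the helper name is hypothetical. With the numbers from the example above it reproduces the three segments 1-20, 16-35 and 31-50:

```python
def split_with_overlap(total_frames: int, segment_size: int = 20,
                       overlap: int = 5) -> list:
    """Return (start, end) frame indices (1-based, inclusive) for each
    segment, where consecutive segments share `overlap` frames.

    split_with_overlap(50, 20, 5) -> [(1, 20), (16, 35), (31, 50)]
    """
    if overlap >= segment_size:
        raise ValueError("overlap must be smaller than the segment size")
    segments = []
    start = 1
    while start <= total_frames:
        end = min(start + segment_size - 1, total_frames)
        segments.append((start, end))
        if end == total_frames:
            break
        # the next segment begins `overlap` frames before this one ends
        start = end - overlap + 1
    return segments
```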
And S230, for each segmented image to be marked, marking the marked object by adopting a matched marking rule at the same time, and obtaining an original parallel marking result corresponding to each segmented image to be marked.
The original parallel labeling result can be a preliminary labeling result obtained after the images to be labeled of each segment are labeled simultaneously.
Correspondingly, after the division of the images to be marked of each segment is completed, the images to be marked of each segment can be marked in parallel. The parallel labeling can comprise two links, wherein the first link is to label the labeling object at the same time by adopting a matched labeling rule aiming at each segmented image to be labeled, so as to obtain an original parallel labeling result corresponding to each segmented image to be labeled. And the second link is to normalize each original parallel labeling result so as to unify each original parallel labeling result.
For example, as shown in fig. 3, labeling the objects in the first segmented image to be labeled according to its labeling rule may proceed as follows: using bounding boxes as the annotation tool in the first segment, each object selected by a box is marked, giving the original parallel labeling results: large truck-1, pedestrian-2 and car-10. Besides these three labeling objects there are others, numbered 3-9 respectively, which are not shown in the figure. Labeling the objects in the second segmented image to be labeled may proceed similarly: using bounding boxes as the annotation tool in the second segment, each selected object is marked, giving the original parallel labeling results: large truck-1, car-2 and electric car-5. Besides these three labeling objects there are others, numbered 3-4 respectively, which are not shown in the figure.
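Because the segments carry no ordering dependency, this first link can be executed concurrently. A minimal sketch, assuming a hypothetical label_segment callable that labels one segment under its own independent numbering:

```python
from concurrent.futures import ProcessPoolExecutor

def label_all_segments(segments, label_segment):
    """Label every segmented image sequence concurrently.

    `label_segment` is a hypothetical callable (e.g. a wrapper around a
    manual or model-assisted labeling step) that labels a single segment
    with its own independent numbering and returns that segment's
    original parallel labeling result.
    """
    with ProcessPoolExecutor() as pool:
        # All segments are submitted at once; no segment waits for the
        # labeling result of a preceding segment.
        return list(pool.map(label_segment, segments))
```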
S240, carrying out normalized inspection on each original parallel labeling result.
In order to further ensure the quality of the labeling results and improve the image labeling efficiency, before normalizing the original parallel labeling results, the normalization inspection can be performed on the original parallel labeling results. The normalization check is to check whether there is a human labeling error in the original parallel labeling result, for example, labeling an attribute of the labeling object by mistake, or labeling the labeling object by using a wrong labeling tool. The marking tool may be a box, a line, a point, or an area, and the embodiment of the present application does not limit the type of the marking tool.
The normalization check of each original parallel labeling result can ensure the consistency of the original parallel labeling results, i.e., improve their quality. For example, when marking obstacles, consistency concerns whether the same obstacle is marked consistently with respect to its type, occlusion or other attributes; whether the ID numbers marked for the respective obstacles are consistent is not included. By checking each original parallel labeling result, the accuracy of the subsequent normalization processing can be ensured, and normalization failures caused by inconsistent labeling, which would force the check and normalization to be carried out again, can be avoided, thereby further improving labeling efficiency.
Alternatively, the normalization check may include a generic normalization check and a custom normalization check; wherein: the generic normalization check may include: the labeling quantity of the labeling objects is wrong, the labeling types of the labeling objects are wrong, and the labeling key attributes of the labeling objects are wrong; the custom normalization check may include: labeling errors of labeling objects labeled according to the customized labeling rules.
The universal normalization inspection can inspect the marked objects according to inspection rules which are universal to all the marked objects. Custom normalization inspection can inspect the marked object according to rules formed by special marking requirements.
In embodiments of the present application, the normalization check may optionally take two forms. The first form is the general normalization check, which can be used to check whether the original parallel labeling results contain errors in the number of labeled objects, the labeled object types, or the labeled key attributes, i.e., whether there are extra labels, missing labels, wrong types, or wrong key attributes. The type of a labeling object may be an obstacle type, a face key point type, a tracking object type, or the like. The attributes of a labeling object may include, for example, its specific orientation, whether it is occluded, or whether it is moving.
The second form is the custom normalization check, which can be used to check whether the original parallel labeling results violate special labeling rules. For example, suppose images of customer flow in a supermarket are being labeled, and the rule requires that security personnel be uniformly labeled "0". During the custom normalization check, if the original parallel labeling results for the same security guard in two pictures are "0" and "1" respectively, the original parallel labeling results contain a labeling error.
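A minimal sketch of part of the generic normalization check (type and key-attribute errors); the result layout, one list of label dicts per frame with "type" and "attributes" fields, and the choice of key attributes are assumptions for illustration, not structures specified by the patent:

```python
def generic_normalization_check(segment_result, expected_types,
                                key_attributes=("orientation", "occluded")):
    """Flag unknown object types and missing key attributes in one
    segment's original parallel labeling result. Quantity errors
    (extra/missing labels) would additionally need per-frame expected
    counts and are omitted from this sketch."""
    errors = []
    for frame_index, frame_labels in enumerate(segment_result):
        for label in frame_labels:
            if label["type"] not in expected_types:
                errors.append((frame_index, "wrong type", label["type"]))
            for attribute in key_attributes:
                if attribute not in label.get("attributes", {}):
                    errors.append((frame_index, "missing attribute", attribute))
    return errors
```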
S250, carrying out normalization processing on the original parallel labeling results to obtain target parallel labeling results corresponding to the segmented images to be labeled.
The target parallel labeling result is the final labeling result.
The second link of parallel labeling of the segmented images to be labeled is normalization processing of the original parallel labeling result. The normalization processing is to perform unified processing on the labeling results of the same labeling object in the original parallel labeling results. The normalization processing of each original parallel labeling result can realize the unique identification of each labeling object, thereby obtaining the target parallel labeling result meeting the labeling requirement.
In an optional embodiment of the present application, normalizing the original parallel labeling result may include: determining a reference original parallel labeling result and a next original parallel labeling result of the reference original parallel labeling result in sequence from the original parallel labeling results according to a time sequence; taking the next-section original parallel labeling result as a current processing parallel labeling result; and under the condition that the same target annotation object exists in the reference original parallel annotation result and the current processing parallel annotation result, taking the annotation result of the target annotation object in the reference original parallel annotation result as the annotation result of the target annotation object in the current processing parallel annotation result.
The reference original parallel labeling result is an original parallel labeling result used as the unified standard. The currently processed parallel labeling result is an original parallel labeling result whose labeling of objects shared with the reference result needs to be processed uniformly. The target labeling object is a labeling object that appears in both the reference original parallel labeling result and the currently processed parallel labeling result.
Optionally, when normalizing the original parallel labeling results, the reference result and the currently processed result can be determined in sequence among the original parallel labeling results. The two are compared; if the same target labeling object is found in both, the labeling result of the target object in the reference result is taken as its labeling result in the currently processed result. In general, the labeling objects included in the overlapping frame images are exactly the target objects shared by the reference result and the currently processed result. For example, a car and a bicycle appearing in the overlapping frame images may both be target labeling objects.
Illustratively, the first segment's original parallel labeling result is taken as the reference result and the second segment's as the currently processed result. The labeling results corresponding to the overlapping frame images are compared; if they differ for the same target labeling object, the currently processed result automatically adopts the reference result's labeling of that target object. After the normalization of the first and second segments is completed, the second segment's original parallel labeling result is taken as the new reference result and the third segment's as the currently processed result, and so on, until the normalization of all original parallel labeling results is completed.
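A minimal sketch of this unification step, assuming each segment's result is a list of per-frame dicts mapping a stable object key (e.g. a track identity) to a labeling ID (an illustrative data layout, not one specified by the patent):

```python
def unify_overlap_ids(reference_result, current_result, overlap):
    """Force the current segment to reuse the reference segment's IDs
    for objects that appear in the shared overlap frames."""
    # IDs assigned in the reference segment's trailing overlap frames
    reference_ids = {}
    for frame in reference_result[-overlap:]:
        reference_ids.update(frame)
    # Map the current segment's IDs onto the reference IDs via the
    # objects shared in the current segment's leading overlap frames
    id_map = {}
    for frame in current_result[:overlap]:
        for obj_key, current_id in frame.items():
            if obj_key in reference_ids:
                id_map[current_id] = reference_ids[obj_key]
    # Rewrite every frame of the current segment with the unified IDs;
    # IDs without a mapping are left unchanged
    for frame in current_result:
        for obj_key, current_id in list(frame.items()):
            frame[obj_key] = id_map.get(current_id, current_id)
    return current_result
```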
In this scheme, the currently processed parallel labeling result adopts the reference result's labeling of the target objects, so that the two results label the same target objects in the overlapping frame images consistently, achieving uniform labeling of the same labeling objects.
In an optional embodiment of the present application, normalizing the original parallel labeling result may further include: and under the condition that the same target labeling object exists in the reference original parallel labeling result and the current processing parallel labeling result and a newly added labeling object exists in the current processing parallel labeling result, labeling the newly added labeling object again according to a labeling sequence.
In an optional embodiment of the present application, the re-labeling the newly added labeling object according to the labeling sequence may include: determining the last labeling result in the currently processed parallel labeling results; carrying out continuous processing on the last labeling result to obtain a continuous labeling result; and taking the continuation marking result as a target marking result of the newly added marking object.
In an optional embodiment of the present application, determining that there is an added labeling object in the current processing parallel labeling result may include: and in the current processing parallel labeling result, taking the non-target labeling object which is the same as the partial labeling result of the target labeling object as the newly added labeling object.
The newly added labeling object is a labeling object appearing for the first time in the currently processed parallel labeling result, i.e., one that does not exist in the reference original parallel labeling result. It should be noted that what counts as a newly added labeling object may be defined by the specific labeling rule. Illustratively, assume the labeling rule requires that an object which has disappeared for 5 frames be assigned a new labeling result. If the first frame image contains the labeling object "car", the "car" does not appear in the 2nd to 8th frame images, and the same "car" reappears in the 9th frame image, then the "car" in the 9th frame is treated as a newly added labeling object and labeled anew in labeling order, even though it is the same "car" as in the 1st frame. The last labeling result is the result assigned to the most recently labeled object. The continuation labeling result is obtained by continuing the last labeling result in sequence; illustratively, if the last labeling result is "2", the continuation labeling result may be "3". The target labeling result is the result of re-labeling the newly added labeling object. A non-target labeling object whose partial labeling result is the same as a target labeling object's is a non-target object whose labeling number (i.e., labeling ID) coincides with the labeling number of a target object.
Correspondingly, if the same target labeling object exists in both the reference and the currently processed original parallel labeling results, and a newly added labeling object exists in the currently processed result, the newly added object is labeled anew in labeling order. In general, newly added labeling objects may appear in the currently processed result in frame images other than the overlapping frame images.
For example, assume the same target labeling object "car" exists in the reference and currently processed results, and after normalization its labeling ID is unified to "10". If the labeling ID of a newly added labeling object "bus" in the 8th frame image of the currently processed result is also "10", the maximum ID already assigned in the currently processed result, i.e., the last labeling result, is determined. If the maximum assigned ID is 15, continuation processing yields the continuation labeling result 16, and the labeling ID of the newly added object "bus" is re-set to 16.
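Continuing the same assumed data layout, a sketch of re-labeling a newly added object whose ID collides with a unified target object's ID, by continuing from the largest ID assigned so far:

```python
def relabel_new_objects(current_result, unified):
    """Re-label newly added objects that collide with unified target IDs.

    `unified` maps the target objects' keys to their normalized IDs
    (an assumption carried over from the previous sketch). In the
    patent's example, "bus" carrying the colliding ID 10 would be
    re-labeled 16 when the maximum assigned ID is 15.
    """
    used_ids = set(unified.values())
    next_id = max(used_ids | {
        i for frame in current_result for i in frame.values()
    }) + 1
    reassigned = {}
    for frame in current_result:
        for obj_key, label_id in list(frame.items()):
            if obj_key in unified:
                continue  # target objects keep their unified ID
            if label_id in used_ids:  # collision: a newly added object
                if obj_key not in reassigned:
                    reassigned[obj_key] = next_id
                    next_id += 1
                frame[obj_key] = reassigned[obj_key]
    return current_result
```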
In the scheme, the newly added labeling objects in the parallel labeling results processed at present are labeled again according to the labeling sequence, so that the problem of conflict between the newly added labeling objects and the labeling results of the target labeling objects can be avoided.
And S260, deleting redundant overlapped frames in the marked image.
The marked images are the images that have been labeled in parallel. The redundant overlapping frames are the superfluous copies of the overlapping frames. Illustratively, assume the first and second segmented images to be annotated share the 20th frame image as an overlapping frame; then either the 20th frame in the first segment or the 20th frame in the second segment can be regarded as the redundant overlapping frame.
Because overlapping frames were set in the preceding steps, the redundant overlapping frames in the marked images can be deleted; that is, after the parallel labeling of the images to be marked is completed and the marked images are obtained, de-duplication is performed, ensuring that a complete continuous frame sequence is obtained. It can be understood that the segmented images to be marked become segmented marked images after the parallel processing. Since the labeling results of the overlapping frames are consistent after normalization, deleting the overlapping frames at either the front end or the rear end of a segmented marked image is feasible, as long as continuous frames are obtained in the end. It should be noted, however, that not all copies of the overlapping frames may be deleted, as that would delete image frames outright.
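A sketch of the de-duplication step under the same assumptions; it keeps the first segment intact and drops each later segment's leading overlap frames, so the merged sequence stays continuous and no frame is deleted outright:

```python
def merge_segments(segment_results, overlap):
    """Merge segmented marked results back into one continuous sequence.

    After normalization the overlap frames of adjacent segments carry
    identical labels, so the leading `overlap` frames of every segment
    after the first are the redundant copies and can be dropped.
    """
    merged = list(segment_results[0])
    for segment in segment_results[1:]:
        merged.extend(segment[overlap:])
    return merged
```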
According to the technical solution above, overlapping frames are set for the segmented images to be marked, and normalization is performed on the labeling results of the overlapping frames, so that the final labeling results can be unified. By labeling images in parallel, the embodiments of the present application can greatly improve labeling efficiency; for example, by dividing a continuous sequence of 1000 frames into segments of 50 frames, the labeling time can be shortened to about 1/20 of the original. Meanwhile, since intermediate check nodes for the normalization check are added, the labeling quality can be markedly improved. In terms of labeling capacity, fine labeling originally supported only videos of tens of seconds; the method extends this to videos of unlimited duration, and such long-video base data can greatly assist the further optimization of visual tracking algorithms. Finally, since previous long-continuous-frame labeling schemes placed heavy demands on the performance of the labeling equipment, the splitting scheme greatly relieves the equipment performance pressure caused by large data volumes.
In an example, fig. 5 is a block diagram of an image labeling apparatus provided in an embodiment of the present application, where the embodiment of the present application may be applicable to a case of fast image labeling, where the apparatus is implemented by software and/or hardware, and is specifically configured in an electronic device. The electronic device may be a computer device or the like.
An image annotation device 300 as shown in fig. 5, comprising: the image to be annotated acquisition module 310, the segmented image to be annotated partitioning module 320 and the annotation object parallel annotation module 330. Wherein,
the image to be annotated acquisition module 310 is configured to acquire multiple frames of images to be annotated;
the segmented image to be annotated dividing module 320 is configured to divide the image to be annotated into a plurality of segments of segmented images to be annotated; the segmented image to be marked comprises at least two frames of images to be marked;
the labeling object parallel labeling module 330 is configured to label the labeling objects in the segmented images to be labeled in parallel.
According to the present application, the obtained multiple frames of images to be marked are divided into multiple segments of segmented images to be marked, so that the objects in the segmented images can be labeled in parallel. This solves problems of existing image labeling methods such as low labeling efficiency and insufficient labeling capacity, thereby improving the labeling efficiency and labeling capacity for images.
Optionally, the image to be annotated acquisition module 310 is specifically configured to: and performing frame extraction processing on the continuous frame images according to the set frame extraction frequency to obtain the image to be marked.
Optionally, the segmentation to-be-annotated image partitioning module 320 is specifically configured to: dividing the current segmented image to be marked according to the time sequence of the images to be marked and the set image quantity of the segmented image to be marked; determining a current overlapped frame of the current segmented image to be annotated according to an overlapped frame setting rule; taking the current overlapped frame as a part of images to be annotated of the next segment of images to be annotated, and determining the rest images to be annotated of the next segment of images to be annotated according to the set image quantity; and taking the next segment of the image to be marked as the current segment of the image to be marked, and returning to execute the operation of determining the current overlapping frame of the current segment of the image to be marked according to the overlapping frame setting rule until the division of all the images to be marked is completed.
Optionally, the overlapping frame setting rule includes: the number of frame images for judging the disappearance of the image object is taken as the number of overlapped frames; or, the default number of frame images is set as the number of overlapped frames.
Optionally, the labeling object parallel labeling module 330 is specifically configured to: for each segmented image to be marked, marking the marked object by adopting a matched marking rule at the same time to obtain an original parallel marking result corresponding to each segmented image to be marked; and normalizing the original parallel labeling results to obtain target parallel labeling results corresponding to the segmented images to be labeled.
Optionally, the labeling object parallel labeling module 330 is specifically configured to: determining a reference original parallel labeling result and a next original parallel labeling result of the reference original parallel labeling result in sequence from the original parallel labeling results according to a time sequence; taking the next-section original parallel labeling result as a current processing parallel labeling result; and under the condition that the same target annotation object exists in the reference original parallel annotation result and the current processing parallel annotation result, taking the annotation result of the target annotation object in the reference original parallel annotation result as the annotation result of the target annotation object in the current processing parallel annotation result.
Optionally, the labeling object parallel labeling module 330 is specifically configured to: and under the condition that the same target labeling object exists in the reference original parallel labeling result and the current processing parallel labeling result and a newly added labeling object exists in the current processing parallel labeling result, labeling the newly added labeling object again according to a labeling sequence.
Optionally, the labeling object parallel labeling module 330 is specifically configured to: determining the last labeling result in the currently processed parallel labeling results; carrying out continuous processing on the last labeling result to obtain a continuous labeling result; and taking the continuation marking result as a target marking result of the newly added marking object.
Optionally, the labeling object parallel labeling module 330 is specifically configured to: and in the current processing parallel labeling result, taking the non-target labeling object which is the same as the partial labeling result of the target labeling object as the newly added labeling object.
Optionally, the labeling object parallel labeling module 330 is specifically configured to: carrying out normalization inspection on each original parallel labeling result; the normalization check comprises a general normalization check and a custom normalization check; wherein: the universal normalization check includes: the labeling quantity of the labeling objects is wrong, the labeling types of the labeling objects are wrong, and the labeling key attributes of the labeling objects are wrong; the custom normalization check includes: labeling errors of labeling objects labeled according to the customized labeling rules.
Optionally, the labeling object parallel labeling module 330 is specifically configured to: and deleting redundant overlapped frames in the marked image.
The image marking device can execute the image marking method provided by any embodiment of the application, and has the corresponding functional modules and beneficial effects of the execution method. Technical details which are not described in detail in this embodiment can be referred to the image labeling method provided in any embodiment of the present application.
Since the image labeling device described above is a device capable of executing the image labeling method of the embodiments of the present application, those skilled in the art can, based on the image labeling method described herein, understand the specific implementation of the image labeling device and its various modifications; how the device implements the method will therefore not be described in detail here. Any apparatus used by those skilled in the art to implement the image labeling method of the embodiments of the present application falls within the intended scope of protection of the present application.
In one example, the present application also provides an electronic device and a readable storage medium.
Fig. 6 is a schematic structural diagram and block diagram of an electronic device for implementing the image labeling method according to the embodiments of the present application, as shown in fig. 6. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. Electronic devices may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown here, their connections and relationships, and their functions are meant to be exemplary only, and are not meant to limit implementations of the application described and/or claimed herein.
As shown in fig. 6, the electronic device includes: one or more processors 601, a memory 602, and interfaces for connecting the components, including high-speed and low-speed interfaces. The components are interconnected by different buses and may be mounted on a common motherboard or in other manners as desired. The processor may process instructions executed within the electronic device, including instructions stored in or on the memory to display graphical information of a GUI on an external input/output device, such as a display device coupled to an interface. In other implementations, multiple processors and/or multiple buses may be used with multiple memories, if desired. Likewise, multiple electronic devices may be connected, with each device providing a portion of the necessary operations (e.g., as a server array, a set of blade servers, or a multiprocessor system). One processor 601 is taken as an example in fig. 6.
The memory 602 is a non-transitory computer-readable storage medium provided herein. The memory stores instructions executable by the at least one processor, so that the at least one processor performs the image labeling method provided herein. The non-transitory computer-readable storage medium of the present application stores computer instructions for causing a computer to perform the image labeling method provided herein.
As a non-transitory computer-readable storage medium, the memory 602 may be used to store non-transitory software programs, non-transitory computer-executable programs, and modules, such as the program instructions/modules corresponding to the image labeling method in the embodiments of the present application (e.g., the to-be-labeled image acquisition module 310, the segmented to-be-labeled image division module 320, and the labeling object parallel labeling module 330 shown in fig. 5). The processor 601 executes the various functional applications and data processing of the server by running the non-transitory software programs, instructions, and modules stored in the memory 602, thereby implementing the image labeling method in the above method embodiments.
The memory 602 may include a program storage area and a data storage area, where the program storage area may store an operating system and the application program required by at least one function, and the data storage area may store data created by the use of the electronic device implementing the image labeling method, and the like. In addition, the memory 602 may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid-state storage device. In some embodiments, the memory 602 may optionally include memory located remotely from the processor 601, and such remote memory may be connected over a network to the electronic device implementing the image labeling method. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The electronic device for implementing the image labeling method may further include an input device 603 and an output device 604. The processor 601, the memory 602, the input device 603, and the output device 604 may be connected by a bus or in other ways; connection by a bus is taken as an example in fig. 6.
The input device 603 may receive input numeric or character information and generate key signal inputs related to user settings and function control of the electronic device implementing the image labeling method; examples include a touch screen, a keypad, a mouse, a trackpad, a touchpad, a pointing stick, one or more mouse buttons, a trackball, and a joystick. The output device 604 may include a display device, auxiliary lighting devices (e.g., LEDs), tactile feedback devices (e.g., vibration motors), and the like. The display device may include, but is not limited to, a liquid crystal display (LCD), a light-emitting diode (LED) display, and a plasma display. In some implementations, the display device may be a touch screen.
Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, application-specific integrated circuits (ASICs), computer hardware, firmware, software, and/or combinations thereof. These various implementations may include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special-purpose or general-purpose, and which can receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
These computer programs (also referred to as programs, software, software applications, or code) include machine instructions for a programmable processor and can be implemented in a high-level procedural and/or object-oriented programming language, and/or in assembly/machine language. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic discs, optical disks, memory, or programmable logic devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., a data server), or a middleware component (e.g., an application server), or a front-end component (e.g., a user computer with a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, and front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include local area networks (LANs), wide area networks (WANs), and the internet.
The computer system may include a client and a server. The client may be, but is not limited to, a smart phone, a notebook computer, a desktop computer, a tablet computer, a smart speaker, etc. The server may be an independent physical server, a server cluster or a distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud computing, cloud service, cloud database, cloud storage and the like. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
According to the technical solution of the present application, the acquired multi-frame image to be labeled is divided into multiple segments of images to be labeled so that the labeling objects in them can be labeled in parallel. This addresses the low labeling efficiency and insufficient labeling capacity of existing image labeling methods, thereby improving both the labeling efficiency and the labeling capacity of images.
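For orientation, a minimal end-to-end sketch of this flow in Python might look as follows. The segment length, the overlap size, and the label_segment stub are illustrative assumptions; in practice each segment would be handled by a different annotator or worker, and the raw results would then go through the normalization step described above.

```python
# Hedged end-to-end sketch: divide frames into overlapping segments,
# label the segments in parallel, then hand the raw results onward.
from concurrent.futures import ProcessPoolExecutor

def divide_into_segments(frames, seg_len, overlap):
    segments, start = [], 0
    step = seg_len - overlap              # each next segment reuses the overlapped frames
    while start < len(frames):
        segments.append(frames[start:start + seg_len])
        start += step
    return segments

def label_segment(segment):
    # Placeholder: one worker labels one segmented image to be labeled.
    return [{"frame": f, "objects": []} for f in segment]

def annotate(frames, seg_len=100, overlap=5):
    segments = divide_into_segments(frames, seg_len, overlap)
    with ProcessPoolExecutor() as pool:   # parallel labeling of all segments
        raw_results = list(pool.map(label_segment, segments))
    return raw_results                    # input to the normalization step
```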
It should be appreciated that steps may be reordered, added, or deleted using the various forms of flows shown above. For example, the steps described in the present application may be performed in parallel, sequentially, or in a different order, as long as the desired results of the technical solutions disclosed in the present application can be achieved; no limitation is imposed herein.
The above embodiments do not limit the scope of the application. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present application are intended to be included within the scope of the present application.

Claims (20)

1. An image labeling method, comprising:
obtaining a plurality of frames of images to be labeled;
dividing a current segmented image to be labeled according to the time sequence of the images to be labeled and a set image quantity of the segmented images to be labeled, wherein each segmented image to be labeled comprises at least two frames of images to be labeled;
determining a current overlapped frame of the current segmented image to be labeled according to an overlapped frame setting rule;
taking the current overlapped frame as part of the images to be labeled of the next segmented image to be labeled, and determining the remaining images to be labeled of the next segmented image to be labeled according to the set image quantity;
taking the next segmented image to be labeled as the current segmented image to be labeled, and returning to the operation of determining the current overlapped frame of the current segmented image to be labeled according to the overlapped frame setting rule, until the division of all the images to be labeled is completed;
for each segmented image to be labeled, labeling the labeling objects simultaneously by adopting a matched labeling rule, to obtain an original parallel labeling result corresponding to each segmented image to be labeled; and
normalizing the original parallel labeling results to obtain target parallel labeling results corresponding to the segmented images to be labeled.
2. The method of claim 1, wherein the obtaining a plurality of frames of images to be labeled comprises:
performing frame extraction processing on continuous frame images according to a set frame extraction frequency to obtain the images to be labeled.
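By way of illustration only, and not as part of the claim language, frame extraction at a set frequency reduces to a stride over the continuous frames; deriving the stride from source and target frame rates, as below, is an assumed convention.

```python
# Hedged sketch of the frame extraction in claim 2: keep one image to be
# labeled per stride over the continuous frames. The fps-based stride is
# an illustrative assumption.

def extract_frames(frames, source_fps=30.0, extract_fps=5.0):
    stride = max(1, round(source_fps / extract_fps))
    return frames[::stride]
```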
3. The method of claim 1, wherein the overlapped frame setting rule comprises:
taking the number of frame images used for judging the disappearance of an image object as the number of overlapped frames; or
taking a default set number of frame images as the number of overlapped frames.
4. The method of claim 1, wherein the normalizing the original parallel labeling results comprises:
determining, from the original parallel labeling results in time order, a reference original parallel labeling result and the next original parallel labeling result following it;
taking the next original parallel labeling result as the currently processed parallel labeling result; and
in the case that the same target labeling object exists in both the reference original parallel labeling result and the currently processed parallel labeling result, taking the labeling result of the target labeling object in the reference original parallel labeling result as the labeling result of the target labeling object in the currently processed parallel labeling result.
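Illustratively (again outside the claim language), carrying a reference labeling result over to the currently processed result requires recognizing "the same target labeling object" across segments; the claim does not fix a matcher, so the IoU-based matching on boxes below is purely an assumption for the sketch.

```python
# Hedged sketch of claim 4: reuse the reference segment's labeling result
# for target objects that also appear in the currently processed segment.
# Boxes are assumed to be (x1, y1, x2, y2) tuples.

def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def propagate_labels(reference, current, thresh=0.5):
    for cur in current:
        for ref in reference:
            if iou(ref["box"], cur["box"]) >= thresh:  # same target object
                cur["track_id"] = ref["track_id"]      # reuse the reference result
                break
    return current
```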
5. The method of claim 4, wherein the normalizing the original parallel labeling results further comprises:
in the case that the same target labeling object exists in both the reference original parallel labeling result and the currently processed parallel labeling result, and a newly added labeling object exists in the currently processed parallel labeling result, labeling the newly added labeling object again according to the labeling order.
6. The method of claim 5, wherein the labeling the newly added labeling object again according to the labeling order comprises:
determining the last labeling result in the currently processed parallel labeling result;
performing continuation processing on the last labeling result to obtain a continuation labeling result; and
taking the continuation labeling result as the target labeling result of the newly added labeling object.
7. The method of claim 5, wherein determining that a newly added labeling object exists in the currently processed parallel labeling result comprises:
in the currently processed parallel labeling result, taking a non-target labeling object whose partial labeling result is the same as that of a target labeling object as the newly added labeling object.
8. The method according to any one of claims 1-7, further comprising, before the normalizing the original parallel labeling results:
performing a normalization check on each original parallel labeling result, the normalization check comprising a general normalization check and a custom normalization check, wherein:
the general normalization check covers errors in the labeling quantity of the labeling objects, errors in the labeling types of the labeling objects, and errors in the key labeling attributes of the labeling objects; and
the custom normalization check covers labeling errors of labeling objects labeled according to customized labeling rules.
9. The method according to any one of claims 1-7, further comprising, after the normalizing the original parallel labeling results:
deleting redundant overlapped frames from the labeled images.
10. An image labeling device, comprising:
a to-be-labeled image acquisition module, configured to obtain a plurality of frames of images to be labeled;
a segmented to-be-labeled image division module, configured to:
divide a current segmented image to be labeled according to the time sequence of the images to be labeled and a set image quantity of the segmented images to be labeled, wherein each segmented image to be labeled comprises at least two frames of images to be labeled;
determine a current overlapped frame of the current segmented image to be labeled according to an overlapped frame setting rule;
take the current overlapped frame as part of the images to be labeled of the next segmented image to be labeled, and determine the remaining images to be labeled of the next segmented image to be labeled according to the set image quantity; and
take the next segmented image to be labeled as the current segmented image to be labeled, and return to the operation of determining the current overlapped frame of the current segmented image to be labeled according to the overlapped frame setting rule, until the division of all the images to be labeled is completed; and
a labeling object parallel labeling module, configured to:
for each segmented image to be labeled, label the labeling objects simultaneously by adopting a matched labeling rule, to obtain an original parallel labeling result corresponding to each segmented image to be labeled; and
normalize the original parallel labeling results to obtain target parallel labeling results corresponding to the segmented images to be labeled.
11. The device of claim 10, wherein the to-be-labeled image acquisition module is specifically configured to:
perform frame extraction processing on continuous frame images according to a set frame extraction frequency to obtain the images to be labeled.
12. The device of claim 10, wherein the overlapped frame setting rule comprises:
taking the number of frame images used for judging the disappearance of an image object as the number of overlapped frames; or
taking a default set number of frame images as the number of overlapped frames.
13. The device of claim 10, wherein the labeling object parallel labeling module is specifically configured to:
determine, from the original parallel labeling results in time order, a reference original parallel labeling result and the next original parallel labeling result following it;
take the next original parallel labeling result as the currently processed parallel labeling result; and
in the case that the same target labeling object exists in both the reference original parallel labeling result and the currently processed parallel labeling result, take the labeling result of the target labeling object in the reference original parallel labeling result as the labeling result of the target labeling object in the currently processed parallel labeling result.
14. The device of claim 13, wherein the labeling object parallel labeling module is specifically configured to:
in the case that the same target labeling object exists in both the reference original parallel labeling result and the currently processed parallel labeling result, and a newly added labeling object exists in the currently processed parallel labeling result, label the newly added labeling object again according to the labeling order.
15. The device of claim 14, wherein the labeling object parallel labeling module is specifically configured to:
determine the last labeling result in the currently processed parallel labeling result;
perform continuation processing on the last labeling result to obtain a continuation labeling result; and
take the continuation labeling result as the target labeling result of the newly added labeling object.
16. The device of claim 14, wherein the labeling object parallel labeling module is specifically configured to:
in the currently processed parallel labeling result, take a non-target labeling object whose partial labeling result is the same as that of a target labeling object as the newly added labeling object.
17. The device according to any one of claims 10-16, wherein the labeling object parallel labeling module is specifically configured to:
perform a normalization check on each original parallel labeling result, the normalization check comprising a general normalization check and a custom normalization check, wherein:
the general normalization check covers errors in the labeling quantity of the labeling objects, errors in the labeling types of the labeling objects, and errors in the key labeling attributes of the labeling objects; and
the custom normalization check covers labeling errors of labeling objects labeled according to customized labeling rules.
18. The device according to any one of claims 10-16, wherein the labeling object parallel labeling module is specifically configured to:
delete redundant overlapped frames from the labeled images.
19. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the image labeling method of any one of claims 1-9.
20. A non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the image labeling method of any one of claims 1-9.
CN202010694095.7A 2020-07-17 2020-07-17 Image labeling method and device, electronic equipment and storage medium Active CN111860302B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010694095.7A CN111860302B (en) 2020-07-17 2020-07-17 Image labeling method and device, electronic equipment and storage medium


Publications (2)

Publication Number Publication Date
CN111860302A CN111860302A (en) 2020-10-30
CN111860302B true CN111860302B (en) 2024-03-01

Family

ID=73001638

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010694095.7A Active CN111860302B (en) 2020-07-17 2020-07-17 Image labeling method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111860302B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113591580B (en) * 2021-06-30 2022-10-14 北京百度网讯科技有限公司 Image annotation method and device, electronic equipment and storage medium
CN114168767A (en) * 2021-12-08 2022-03-11 北京百度网讯科技有限公司 Data labeling method, device, system, equipment and storage medium


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7805011B2 (en) * 2006-09-13 2010-09-28 Warner Bros. Entertainment Inc. Method and apparatus for providing lossless data compression and editing media content

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103262632A (en) * 2010-06-04 2013-08-21 Board of Regents, The University of Texas System Wireless communication methods, systems, and computer program products
WO2019137196A1 (en) * 2018-01-11 2019-07-18 阿里巴巴集团控股有限公司 Image annotation information processing method and device, server and system
CN108491774A (en) * 2018-03-12 2018-09-04 北京地平线机器人技术研发有限公司 The method and apparatus that multiple targets in video are marked into line trace
CN109819325A (en) * 2019-01-11 2019-05-28 平安科技(深圳)有限公司 Hot video marks processing method, device, computer equipment and storage medium
CN111367445A (en) * 2020-03-31 2020-07-03 中国建设银行股份有限公司 Image annotation method and device

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Tianhan Gao. Multi-frame Prediction Load Balancing Algorithm for Sort-first Parallel Rendering. CGDIP '17: Proceedings of the 2017 International Conference on Computer Graphics and Digital Image Processing, 2017, full text. *
Research on Autonomous Navigation of Mobile Robots Based on the Semantic ORB-SLAM2 Algorithm; Chen Guojun; Chen Wei; Yu Hanqi; Wang Hanli; Machine Tool & Hydraulics; 2020-05-15 (No. 09); full text *
Design and Implementation of a Manual Annotation System Based on Remote Sensing Images; Qiu Cheng; Ge Di; Hou Qun; Computer Knowledge and Technology; 2018-08-15 (No. 23); full text *

Also Published As

Publication number Publication date
CN111860302A (en) 2020-10-30


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant