CN113378958A - Automatic labeling method, device, equipment, storage medium and computer program product


Info

Publication number
CN113378958A
Authority
CN
China
Prior art keywords
marked
frame
labeled
matching degree
objects
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110701350.0A
Other languages
Chinese (zh)
Inventor
杨雪 (Yang Xue)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202110701350.0A
Publication of CN113378958A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The disclosure provides an automatic labeling method, an automatic labeling apparatus, an electronic device, a computer-readable storage medium, and a computer program product, relating to technical fields such as cloud platforms, data identification, and data labeling. The method comprises the following steps: determining a frame to be labeled in consecutive image frames, and a labeled frame whose time sequence precedes the frame to be labeled; calculating the actual matching degree between each object to be labeled in the frame to be labeled and each labeled object in the labeled frame; labeling each object to be labeled whose actual matching degree with some labeled object exceeds a preset matching degree with the same identity number as the corresponding labeled object; and labeling each object to be labeled whose actual matching degree with every labeled object does not exceed the preset matching degree with a new identity number different from that of any labeled object. By abstracting out the repeatedly executed logic, this implementation enables an electronic device to complete the automatic labeling of the same labeled object across consecutive image frames, improving labeling efficiency and reducing labeling cost.

Description

Automatic labeling method, device, equipment, storage medium and computer program product
Technical Field
The present disclosure relates to the field of data processing technologies, in particular to fields such as cloud platforms, data identification, and data labeling, and specifically to an automatic labeling method and apparatus, an electronic device, a computer-readable storage medium, and a computer program product.
Background
With the development and application of artificial intelligence, neural networks, deep learning, and related concepts, a large amount of labeled data is required as training samples during the training stage of a model.
Owing to the diversity and complexity of sample types, labeling modes, and labeling targets, manual labeling is still the common practice in current data labeling.
Disclosure of Invention
The embodiment of the disclosure provides an automatic labeling method, an automatic labeling device, electronic equipment, a computer readable storage medium and a computer program product.
In a first aspect, an embodiment of the present disclosure provides an automatic labeling method, including: determining a frame to be labeled in consecutive image frames, and a labeled frame whose time sequence precedes the frame to be labeled; calculating the actual matching degree between each object to be labeled in the frame to be labeled and each labeled object in the labeled frame; labeling each object to be labeled whose actual matching degree with some labeled object exceeds a preset matching degree with the same identity number as the corresponding labeled object; and labeling each object to be labeled whose actual matching degree with every labeled object does not exceed the preset matching degree with a new identity number different from that of any labeled object.
In a second aspect, an embodiment of the present disclosure provides an automatic labeling apparatus, including: a frame-to-be-labeled and labeled-frame determining unit configured to determine a frame to be labeled in consecutive image frames and a labeled frame whose time sequence precedes the frame to be labeled; a labeled-object matching degree calculating unit configured to calculate the actual matching degree between each object to be labeled in the frame to be labeled and each labeled object in the labeled frame; a same-identity-number labeling unit configured to label each object to be labeled whose actual matching degree with some labeled object exceeds a preset matching degree with the same identity number as the corresponding labeled object; and a new-identity-number labeling unit configured to label each object to be labeled whose actual matching degree with every labeled object does not exceed the preset matching degree with a new identity number different from that of any labeled object.
In a third aspect, an embodiment of the present disclosure provides an electronic device, including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor, the instructions, when executed, enabling the at least one processor to implement the automatic labeling method described in any implementation of the first aspect.
In a fourth aspect, an embodiment of the present disclosure provides a non-transitory computer-readable storage medium storing computer instructions that, when executed, enable a computer to implement the automatic labeling method described in any implementation of the first aspect.
In a fifth aspect, an embodiment of the present disclosure provides a computer program product including a computer program that, when executed by a processor, implements the automatic labeling method described in any implementation of the first aspect.
The automatic labeling method provided by the embodiments of the present disclosure first determines a frame to be labeled in consecutive image frames and a labeled frame whose time sequence precedes it; then calculates the actual matching degree between each object to be labeled in the frame to be labeled and each labeled object in the labeled frame; then labels each object to be labeled whose actual matching degree with some labeled object exceeds the preset matching degree with the same identity number as the corresponding labeled object; and finally labels each object to be labeled whose actual matching degree with every labeled object does not exceed the preset matching degree with a new identity number different from that of any labeled object.
The automatic labeling method provided by the disclosure matches the objects to be labeled in each frame to be labeled of the consecutive image frames against the labeled objects of the preceding labeled frame, determines from the matching result which objects to be labeled and which labeled objects are in fact the same labeled object, and accordingly carries the identity number of the labeled object over to the object to be labeled in the frame to be labeled; otherwise it gives a new identity number to the newly appearing labeled object. By abstracting out this repeatedly executed logic, an electronic device can complete the automatic labeling of the same labeled object across consecutive image frames, improving labeling efficiency and reducing labeling cost.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
Other features, objects and advantages of the disclosure will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIG. 1 is an exemplary system architecture to which the present disclosure may be applied;
FIG. 2 is a flowchart of an automatic labeling method provided by an embodiment of the present disclosure;
FIG. 3 is a flowchart of another automatic labeling method provided by an embodiment of the present disclosure;
FIG. 4 is a flowchart of a method for determining the actual matching degree between labeled objects provided by an embodiment of the present disclosure;
FIG. 5 is a flowchart of another method for determining the actual matching degree between labeled objects provided by an embodiment of the present disclosure;
FIG. 6 is a flowchart of yet another method for determining the actual matching degree between labeled objects provided by an embodiment of the present disclosure;
FIG. 7 is a block diagram of an automatic labeling apparatus provided by an embodiment of the present disclosure;
FIG. 8 is a schematic structural diagram of an electronic device adapted to execute the automatic labeling method provided by an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings; various details of the embodiments of the disclosure are included to assist understanding and should be considered merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications can be made to the embodiments described herein without departing from the scope and spirit of the present disclosure. Likewise, descriptions of well-known functions and constructions are omitted from the following description for clarity and conciseness. It should be noted that, in the present disclosure, the embodiments and the features of the embodiments may be combined with each other where no conflict arises.
In the technical solution of the present disclosure, the acquisition, storage, and application of users' personal information comply with the provisions of relevant laws and regulations, necessary security measures are taken, and public order and good customs are not violated.
Fig. 1 illustrates an exemplary system architecture 100 to which embodiments of the automatic labeling methods, apparatuses, electronic devices, and computer-readable storage media of the present disclosure may be applied.
As shown in fig. 1, the system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 serves as a medium for providing communication links between the terminal devices 101, 102, 103 and the server 105. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
The user may use the terminal devices 101, 102, 103 to interact with the server 105 via the network 104, to receive or send messages and the like. Various applications for information communication between the two ends may be installed on the terminal devices 101, 102, 103 and the server 105, such as labeling-task transceiving applications, automatic labeling applications, and information interaction applications.
The terminal apparatuses 101, 102, 103 and the server 105 may be hardware or software. When the terminal devices 101, 102, 103 are hardware, they may be various electronic devices with display screens, including but not limited to smart phones, tablet computers, laptop portable computers, desktop computers, and the like; when the terminal devices 101, 102, and 103 are software, they may be installed in the electronic devices listed above, and they may be implemented as multiple software or software modules, or may be implemented as a single software or software module, and are not limited in this respect. When the server 105 is hardware, it may be implemented as a distributed server cluster composed of multiple servers, or may be implemented as a single server; when the server is software, the server may be implemented as a plurality of software or software modules, or may be implemented as a single software or software module, which is not limited herein.
The server 105 may provide various services through various built-in applications. Taking an automatic labeling application as an example, which provides the service of labeling the same labeled object across consecutive image frames, the server 105 may achieve the following effects when running it: first, receiving the consecutive image frames to be labeled from the terminal devices 101, 102, 103 through the network 104; then determining a frame to be labeled in the consecutive image frames and a labeled frame preceding it in time sequence (for example, the frame immediately before the frame to be labeled); then calculating the actual matching degree between each object to be labeled in the frame to be labeled and each labeled object in the labeled frame; next, labeling each object to be labeled whose actual matching degree with some labeled object exceeds the preset matching degree with the same identity number as that labeled object; and finally, labeling each object to be labeled whose actual matching degree with every labeled object does not exceed the preset matching degree with a new identity number different from that of any labeled object.
The server 105 performs the above processing on each frame to be labeled of the consecutive image frames in a frame-by-frame manner, and ends the labeling task once all image frames in the sequence have become labeled frames.
It should be noted that the consecutive image frames to be annotated, which constitute the annotation task, may also be pre-stored locally on the server 105 in various ways, rather than being acquired from the terminal devices 101, 102, 103 through the network 104. When the server 105 detects that such data is already stored locally (for example, a pending annotation task left over from before processing started), it may choose to retrieve the data directly from local storage; in that case the exemplary system architecture 100 need not include the terminal devices 101, 102, 103 and the network 104.
In addition, the continuous image frames may be obtained by shooting or generating by a single terminal device, or may be obtained by shooting or generating by a plurality of terminal devices, respectively, and then combining the images in time sequence.
Since matching and comparing labeled objects occupies considerable computing resources and requires strong computing capability, the automatic labeling method provided in the following embodiments of the present disclosure is generally executed by the server 105, which has the stronger computing capability and the more computing resources; accordingly, the automatic labeling apparatus is generally also disposed in the server 105. It should be noted, however, that when the terminal devices 101, 102, 103 also have computing capability and resources meeting the requirements, they may complete, through the automatic labeling application installed on them, the computations otherwise delegated to the server 105, and output the same result as the server 105. In particular, when several types of terminal devices with different computing capabilities exist at the same time, and the automatic labeling application determines that its terminal device has strong computing capability and ample idle computing resources, the terminal device may execute the computation itself, appropriately relieving the computation load of the server 105; accordingly, the automatic labeling apparatus may be disposed in the terminal devices 101, 102, 103. In such a case, the exemplary system architecture 100 need not include the server 105 and the network 104.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
Referring to fig. 2, fig. 2 is a flowchart of an automatic labeling method according to an embodiment of the present disclosure, wherein the process 200 includes the following steps:
step 201: determining a frame to be labeled in the consecutive image frames, and a labeled frame whose time sequence precedes the frame to be labeled;
This step is intended for the execution subject of the automatic labeling method (for example, the server 105 shown in fig. 1) to determine the frame currently to be labeled in the consecutive image frames and the labeled frame serving as the labeling reference. Because labeling is performed on consecutive frames arranged in time sequence, the labeled frame necessarily precedes the frame to be labeled.
In order to maximize the reference value of the labeled frame serving as the labeling reference, that is, to keep its labeled objects as consistent as possible with, and minimally different from, the objects in the frame to be labeled, the frame immediately before the frame to be labeled is usually used as the labeled frame. Of course, several labeled frames preceding the frame to be labeled may also be selected together as labeling references, so as to eliminate the problem of some labeled objects being lost in a particular frame owing to a sporadic sensing anomaly.
Specifically, if none of the consecutive image frames representing the labeling task has been labeled yet, then under the frame-by-frame labeling manner adopted in the present application the first frame to be labeled is the first image frame of the sequence, and at that moment no labeled frame exists. Because of this particularity of the first frame, each object to be labeled in the first frame must first be assigned a different identity number; on the basis of the completed first-frame labeling, the matching degrees between the objects to be labeled in the second frame and the labeled objects in the first frame then determine which are the same labeled objects, and the same identity numbers are assigned accordingly.
step 202: calculating the actual matching degree between each object to be labeled in the frame to be labeled and each labeled object in the labeled frame;
On the basis of step 201, this step is intended for the execution subject to calculate the actual matching degree between each object to be labeled in the frame to be labeled and each labeled object in the labeled frame, so as to determine, by means of these matching degrees, which objects to be labeled are in fact the same as which labeled objects.
The calculated matching degree characterizes whether two objects are the same labeled object; in different scenarios this computation may also be called similarity, sameness, or consistency. Any computation achieving this purpose can be used in this step, for example the intersection-over-union of the image areas occupied by the two objects, the similarity between the pixels in the image areas covered by the two objects, or the overlap of their attribute labels. An appropriate matching degree computation can be chosen flexibly according to the specific requirements of the actual application scenario on factors such as computation time and matching precision, and no specific limitation is made here.
step 203: labeling each object to be labeled whose actual matching degree with some labeled object exceeds the preset matching degree with the same identity number as that labeled object;
Based on step 202, this step is intended to give the identity number of the matched labeled object to the object to be labeled when the actual matching degree between the two exceeds the preset matching degree, that is, to determine that the two are in fact the same labeled object.
The preset matching degree serves as the critical value dividing two objects into the same object or different objects. Its specific value may be a fixed value summarized from historical experience by experienced technicians or experts, or a dynamic value determined by some selection criterion from the actual matching degrees calculated between the object to be labeled and all labeled objects; for example, it may be set to the maximum of all actual matching degrees (that is, the labeled object with the largest actual matching degree is directly selected as the match), or to the second largest of all actual matching degrees, and so on.
It should be understood that the preset matching degree only needs to meet the requirement, in the actual application scenario, of identifying the labeled object that is the same as the object to be labeled; the manner of determining it can therefore be chosen flexibly according to that scenario, and no specific limitation is made here.
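For illustration only, the two ways of setting the preset matching degree mentioned above can be sketched in Python as small selector functions; the function names and the 0.5 placeholder are assumptions of this sketch, not values given by the disclosure:

```python
def fixed_threshold(_scores, value=0.5):
    """Fixed preset matching degree summarized from historical
    experience (0.5 is only a placeholder, not a disclosed value)."""
    return value

def dynamic_threshold(scores, rank=2):
    """Dynamic preset matching degree: just below the rank-th largest
    actual matching degree, so exactly the top `rank` candidates exceed
    it. rank=1 selects only the best-matching labeled object."""
    ordered = sorted(scores, reverse=True)
    if len(ordered) < rank:
        return float("inf")  # nothing can exceed the threshold
    return ordered[rank - 1] - 1e-9
```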
Step 204: and marking new identity numbers different from any marked object for the object to be marked, wherein the actual matching degree between the object to be marked and any marked object does not exceed the preset matching degree.
On the basis of step 202, different from step 203, the execution subject in this step determines that the object to be labeled is a new labeling object different from any labeled object in the labeled frame according to the condition that the actual matching degree between the object to be labeled and any labeled object does not exceed the preset matching degree, so that a new identity number different from any labeled object is assigned to the new labeling object.
If, for example, the current frame to be labeled is the 10th frame of consecutive image frames totaling 20 frames, and the first 9 frames contain altogether 25 different labeled objects labeled with identity numbers 01 to 25, then when comparison reveals that the 10th frame contains a new labeled object that never appeared in the first 9 frames, identity number 26 can be assigned to that new object.
The automatic labeling method provided by the embodiment of the present disclosure matches the objects to be labeled in each frame to be labeled of the consecutive image frames against the labeled objects of the preceding labeled frame, determines from the matching result which objects to be labeled and which labeled objects are in fact the same labeled object, and accordingly carries the identity number of the labeled object over to the object to be labeled in the frame to be labeled; otherwise it assigns a new identity number to the newly appearing labeled object. By abstracting out this repeatedly executed logic, an electronic device can complete the automatic labeling of the same labeled object across consecutive image frames, which improves labeling efficiency and reduces labeling cost.
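To make the repeated execution logic concrete, the following is a minimal Python sketch of the per-frame labeling loop described above, under assumed names (LabeledObject, compute_match) and an assumed box representation; it is a simplified per-object variant, whereas the concrete scenario later in the description (steps (3.1) to (3.4)) pairs objects greedily over all intersection-over-union values at once:

```python
from dataclasses import dataclass

@dataclass
class LabeledObject:
    box: tuple          # selected frame (x1, y1, x2, y2); an assumed representation
    identity: int = -1  # -1 means no identity number assigned yet

def label_frame(objects_to_label, labeled_objects, compute_match,
                preset_matching_degree, next_new_id):
    """Inherit or assign identity numbers for one frame to be labeled.

    compute_match is any actual-matching-degree function returning a
    value in [0, 1], e.g. the intersection-over-union of the later
    embodiments. Returns the next unused identity number.
    """
    for obj in objects_to_label:
        # Find the labeled object with the largest actual matching degree.
        best, best_degree = None, 0.0
        for prev in labeled_objects:
            degree = compute_match(obj, prev)
            if degree > best_degree:
                best, best_degree = prev, degree
        if best is not None and best_degree > preset_matching_degree:
            obj.identity = best.identity   # same labeled object: inherit
        else:
            obj.identity = next_new_id     # new labeled object: new number
            next_new_id += 1
    return next_new_id
```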
Considering that each object to be labeled in the first frame of the consecutive image frames should be given a different identity number without omission, the objects to be labeled can be labeled with different identity numbers in a preset order.
One specific ordering and numbering scheme, including but not limited to the following, may be (a code sketch follows the list):
according to the distance between the center point of each object to be labeled in the first frame and the top-left vertex, sorting the objects to be labeled from near to far to obtain a first sorting result;
and sorting a second time any at least two objects to be labeled at the same distance in the first sorting result, in ascending order of the X-axis coordinate values of their center points, to obtain a second sorting result.
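This two-level ordering can be sketched as follows, assuming the image's top-left vertex is the origin (0, 0) and each object is represented by its center point; the example points are chosen so that all three distances tie and the X-coordinate tie-break decides the order:

```python
import math

def first_frame_order(center_points):
    """Sort center points (x, y) by distance from the top-left vertex
    (0, 0), breaking ties by ascending X coordinate."""
    return sorted(center_points, key=lambda p: (math.hypot(p[0], p[1]), p[0]))

# All three points below are at distance 50, so the tie is broken by X:
# identity numbers would be assigned in the order (30, 40), (40, 30), (50, 0).
for identity, point in enumerate(first_frame_order([(40, 30), (30, 40), (50, 0)])):
    print(identity, point)
```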
Referring to fig. 3, fig. 3 is a flowchart of another automatic labeling method according to an embodiment of the present disclosure, wherein the process 300 includes the following steps:
step 301: determining a frame to be labeled in the consecutive image frames, and a labeled frame whose time sequence precedes the frame to be labeled;
This step is the same as step 201 shown in fig. 2; for the identical content, please refer to the corresponding parts of the previous embodiment, which are not repeated here.
Step 302: determining an alternative image area in the marked frame according to the first position information and the estimated displacement range of the object to be marked in the frame to be marked;
the step aims to estimate the alternative image area in the marked frame inversely through the position information of the object to be marked in the frame to be marked and the estimated displacement range expected to be generated by the time neutral between different image frames by the execution main body.
The estimated displacement range is generally proportional to the time length of the time neutral gear, is also associated with the type of the object to which the marked object belongs, the historical position transformation condition and the motion state, and can be comprehensively considered based on the influence factors when the estimated displacement range is determined.
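As a rough illustration only, the candidate image area can be obtained by expanding the object's selected frame by the estimated maximum displacement; the scalar displacement bound and axis-aligned boxes are assumptions of this sketch, not requirements of the disclosure:

```python
def candidate_region(box, estimated_displacement):
    """Expand the object's selected frame (x1, y1, x2, y2) in the frame
    to be labeled by the estimated maximum displacement; the result is
    the candidate image area to search in the labeled frame."""
    x1, y1, x2, y2 = box
    d = estimated_displacement
    return (x1 - d, y1 - d, x2 + d, y2 + d)

def overlaps(box, region):
    """True if a labeled object's selected frame at least partially
    overlaps the candidate image area."""
    x1, y1, x2, y2 = box
    rx1, ry1, rx2, ry2 = region
    return x1 < rx2 and rx1 < x2 and y1 < ry2 and ry1 < y2
```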
Step 303: respectively calculating the actual matching degree between the object to be labeled and each labeled object in the alternative image area;
on the basis of step 302, this step is intended to calculate only the actual matching degree between the object to be labeled and each labeled object in the candidate image region of the labeled frame by the executing entity, that is, the number of labeled objects to be subjected to actual matching degree calculation is reduced by the scheme provided in step 302, and all labeled objects in the labeled frame are reduced to the labeled objects contained in the candidate image region, which is contained in both full and partial terms.
Step 304: marking the object to be marked with the actual matching degree between the object to be marked and any marked object exceeding the preset matching degree with the same identity number as the corresponding marked object;
step 305: marking new identity numbers different from any marked object for the object to be marked, the actual matching degree of which with any marked object does not exceed the preset matching degree;
the above step 304-305 is the same as the step 203-204 shown in fig. 2, and the contents of the same portions refer to the corresponding portions of the previous embodiment, which are not repeated herein.
Step 306: marking the frame to be marked as a new marked frame in response to the fact that all objects to be marked in the frame to be marked finish marking the identity numbers;
on the basis of the inheriting of the identity numbers of the same annotation objects and the appending of the new identity numbers of the new annotation objects in the steps 304 and 305, respectively, in this step, the execution subject is to determine that the frame to be annotated at this time has already been annotated and can be determined as a new annotated frame, in the case that the annotation operation of the identity numbers is completed for all the objects to be annotated in the frame to be annotated.
It should be understood that, after the labeling of the current frame to be labeled is completed, the above labeling operation will be repeated on the next frame to be labeled in the following sequence.
Step 307: and in response to the fact that no new frame to be marked exists in the continuous image frames, determining position transformation information of the object with the same identity number according to each marked frame in the continuous image frames.
On the basis of step 306, in this step, the execution subject performs labeling on all image frames in the continuous image frames, and all the image frames belong to labeled frames, at this time, the continuous image frames can be considered to have all completed labeling, and position transformation information of objects with the same identity number can be further determined according to each labeled frame in the continuous image frames, that is, the position information of the objects with the same identity number is integrated in time sequence to form position transformation information of the object, so as to facilitate subsequent processing or other analysis.
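A hedged sketch of this integration, assuming each labeled frame is summarized as a mapping from identity number to selected frame (the data layout is an assumption of the sketch):

```python
from collections import defaultdict

def position_transformation_info(labeled_frames):
    """labeled_frames: list, in temporal order, of {identity: box}
    dicts, one per labeled frame. Returns
    {identity: [(frame_index, box), ...]}, i.e. the position
    transformation information of each identity number."""
    tracks = defaultdict(list)
    for index, frame in enumerate(labeled_frames):
        for identity, box in frame.items():
            tracks[identity].append((index, box))
    return dict(tracks)
```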
On the basis of the process 200, this embodiment provides through steps 302-303 a concrete way of computing the matching degree: combining the position information of the object to be labeled in the frame to be labeled with the displacement expected during the time gap between frames reduces the number of labeled objects entering the matching computation, cutting the amount of computation while maintaining accuracy and improving overall efficiency. In addition, step 306 specifies how the current frame to be labeled is handled once its labeling is complete, and step 307 provides the processing after all of the consecutive image frames are labeled, namely determining the position transformation information of objects with the same identity number from each labeled frame, so that the automatic labeling result can serve the actual need.
It should be understood that the implementation provided in steps 302-303 and the one provided in steps 306-307 have no causal or dependency relationship; each could separately be combined with the embodiment shown in the flow 200 to form a distinct embodiment, and the present embodiment merely exists as a preferred embodiment embodying both at once.
On the basis of step 303 in the flow 300, the present application further provides, with reference to fig. 4 to fig. 6 respectively, three different ways of calculating the actual matching degree for use in different application scenarios.
The implementation provided by the flow 400 shown in fig. 4 is:
step 401: calculating the intersection-over-union of the selected frames of the object to be labeled and of each labeled object in the candidate image area respectively;
step 402: taking the intersection-over-union as the actual matching degree.
In this embodiment, the intersection-over-union between the selected frame of each object to be labeled and that of each labeled object is calculated as the actual matching degree between the two. The intersection-over-union is the ratio between the intersection and the union of the areas enclosed by the selected frames of the two labeled objects. In general, the selected frame is a frame body that encloses every edge of the labeled object, usually a rectangular box approximating the object's outline, and its shape can be adjusted to actual requirements.
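A standard computation of the intersection-over-union for axis-aligned rectangular selected frames, as a sketch:

```python
def intersection_over_union(a, b):
    """IoU of two axis-aligned selected frames given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0
```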
The implementation provided by the flow 500 shown in fig. 5 is:
step 501: calculating the similarity of the boundary pixels of the selected frames of the object to be labeled and of each labeled object in the candidate image area respectively;
step 502: taking the selected-frame boundary pixel similarity as the actual matching degree.
That is, this embodiment calculates the similarity of the boundary pixels of the selected frames of each object to be labeled and of each labeled object as the actual matching degree between the two. The selected-frame boundary pixel similarity refers to the similarity between the pixels in the border region of the image areas framed by the two selected frames, rather than the similarity over all framed pixels, which is the amount of computation that can be saved in the consecutive-image-frame scenario. The similarity can be embodied by comparing the color values or the brightness distributions of pixels at corresponding positions.
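One possible reading of this embodiment, sketched below: only a thin ring of pixels along the border of each selected frame is compared, and similarity is derived from the mean absolute difference of color values. The ring width, the normalized difference measure, and the equal-size assumption are all assumptions of the sketch:

```python
import numpy as np

def boundary_pixels(image, box, width=2):
    """Pixels within `width` pixels of the border of the selected frame."""
    x1, y1, x2, y2 = box
    patch = image[y1:y2, x1:x2].astype(np.float32)
    mask = np.zeros(patch.shape[:2], dtype=bool)
    mask[:width, :] = mask[-width:, :] = True
    mask[:, :width] = mask[:, -width:] = True
    return patch[mask]

def boundary_pixel_similarity(image_a, box_a, image_b, box_b):
    """Similarity in [0, 1] from the mean absolute difference of the
    color values of boundary pixels. Assumes the two selected frames
    have the same size; a real implementation would resample one patch
    to the other's size first."""
    pa = boundary_pixels(image_a, box_a)
    pb = boundary_pixels(image_b, box_b)
    return 1.0 - float(np.abs(pa - pb).mean()) / 255.0
```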
The implementation provided by the flow 600 shown in fig. 6 is:
step 601: obtaining the attribute labels of the object to be labeled and of each labeled object in the candidate image area respectively;
An attribute label of a labeled object is identification information other than its identity number, such as the type of the object (e.g., car, pedestrian, tree), its color, model, size, gender, and the like.
step 602: determining the attribute categories and the attribute values under each attribute category according to the attribute labels;
step 603: determining the actual matching degree between the object to be labeled and each labeled object in the candidate image area according to the similarity of the attribute categories and attribute values.
That is, this embodiment calculates the similarity between the attribute categories and attribute values of each object to be labeled and of each labeled object as the actual matching degree between the two; the attribute categories and attribute values recorded in the attribute labels assist in distinguishing the identities of the labeled objects.
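Assuming, for illustration, that attribute labels are given as mappings from attribute category to attribute value (the disclosure does not fix a format), the matching could look like:

```python
def attribute_similarity(tags_a, tags_b):
    """Fraction of shared attribute categories whose attribute values
    agree, e.g. {'type': 'car', 'color': 'red'} versus
    {'type': 'car', 'color': 'blue'} gives 0.5; returns 0.0 when no
    categories are shared."""
    shared = set(tags_a) & set(tags_b)
    if not shared:
        return 0.0
    return sum(tags_a[k] == tags_b[k] for k in shared) / len(shared)
```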
Among the three implementations provided in fig. 4 to fig. 6, the intersection-over-union implementation of fig. 4 involves the smallest amount of computation, takes the shortest time, and yields a result fastest; the boundary-pixel similarity comparison of fig. 5 has high accuracy; and the implementation of fig. 6, based on the similarity of the other attribute information recorded in attribute labels, can be used in scenarios where such attribute information exists.
To deepen understanding, the present disclosure further provides a specific implementation scheme in combination with a specific application scenario:
1. Automatically assign identity numbers to all objects to be detected in the first frame of the consecutive image frames.
Consecutive image frames generally consist of tens to hundreds of frames. Under the usual labeling rule, the identity numbers (hereinafter, IDs) of the same object must be kept consistent across all frames, but no specific numeric value is prescribed for a particular object; it is only required that different objects use different IDs. In this embodiment, therefore, all objects in the labeled first-frame picture are automatically assigned IDs under the following rule: (1.1) sort all objects from near to far by the distance of the object's center point from the top-left corner of the picture; when the center points of two objects are at the same distance from the top-left corner, the object with the smaller center-point X coordinate is placed first. (1.2) assign values to all objects in sequence according to the order specified in step (1.1).
2. Record the maximum ID value n in the currently labeled frame.
3. Automatically match the objects in the second-frame picture against the objects in the first-frame picture.
When the second-frame picture is loaded, its objects are matched against the objects of the first-frame picture; the matching process is as follows (a code sketch of the pairing follows the list):
(3.1) Define an intersection-over-union threshold A. The larger the intersection-over-union, the higher the matching degree of two objects, i.e., the more likely they are the same object; the lower it is, the lower the matching degree of the two objects in the preceding and following frames is considered, i.e., the less likely they are the same object. The threshold may take different values for different frame-extraction frequencies.
(3.2) Calculate in turn the intersection-over-union between each object of the second frame and each object of the first frame.
(3.3) Sort the intersection-over-union values from (3.2) from large to small: if the current maximum value is greater than A, the two objects producing it are automatically paired and withdraw from the matching queue; if the current maximum value is less than A, all remaining objects of the second frame and all remaining objects of the first frame are considered to contain no further identical object, and the matching terminates.
(3.4) Repeat steps (3.2) and (3.3) until the matching terminates.
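A sketch of the pairing in steps (3.1) to (3.4), reusing the intersection_over_union function sketched earlier: all cross-frame values are ranked once from large to small, pairs are accepted while the best remaining value exceeds A, and paired objects are withdrawn from the queue, which is equivalent to re-finding the maximum at each iteration:

```python
def greedy_match(second_frame_boxes, first_frame_boxes, threshold_a):
    """Pair second-frame objects with first-frame objects greedily by
    descending intersection-over-union; returns
    {second-frame index: first-frame index}."""
    ranked = sorted(
        ((intersection_over_union(c, p), i, j)
         for i, c in enumerate(second_frame_boxes)
         for j, p in enumerate(first_frame_boxes)),
        reverse=True)
    pairs, used_curr, used_prev = {}, set(), set()
    for value, i, j in ranked:
        if value <= threshold_a:
            break                      # matching terminates
        if i in used_curr or j in used_prev:
            continue                   # one of the two already withdrawn
        pairs[i] = j                   # automatically paired
        used_curr.add(i)
        used_prev.add(j)
    return pairs
```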
4. Automatically label IDs for successfully paired objects.
For each object in the second frame that was successfully matched, the ID value of the corresponding object in the first frame is automatically given to it.
5. Automatically label IDs for objects that were not successfully paired.
Objects in the second frame that were not successfully paired chiefly correspond to the scenario of newly appearing objects; following the idea of step 1, each such object is assigned a different number in sequence, starting from n+1.
6. The annotator revises the automatically labeled numbers.
Owing to the accuracy limits of the automatic labeling algorithm and the choice of the threshold A in step 3, a small number of ID values will be labeled inaccurately; they need manual adjustment after inspection by an annotator.
With further reference to fig. 7, as an implementation of the method shown in the above-mentioned figures, the present disclosure provides an embodiment of an automatic labeling apparatus, which corresponds to the embodiment of the method shown in fig. 2, and which can be applied to various electronic devices.
As shown in fig. 7, the automatic labeling apparatus 700 of this embodiment may include: a frame-to-be-labeled and labeled-frame determining unit 701, a labeled-object matching degree calculating unit 702, a same-identity-number labeling unit 703, and a new-identity-number labeling unit 704. The frame-to-be-labeled and labeled-frame determining unit 701 is configured to determine a frame to be labeled in consecutive image frames and a labeled frame whose time sequence precedes the frame to be labeled; the labeled-object matching degree calculating unit 702 is configured to calculate the actual matching degree between each object to be labeled in the frame to be labeled and each labeled object in the labeled frame; the same-identity-number labeling unit 703 is configured to label each object to be labeled whose actual matching degree with some labeled object exceeds a preset matching degree with the same identity number as that labeled object; and the new-identity-number labeling unit 704 is configured to label each object to be labeled whose actual matching degree with every labeled object does not exceed the preset matching degree with a new identity number different from that of any labeled object.
In the present embodiment, for the specific processing and technical effects of the frame-to-be-labeled and labeled-frame determining unit 701, the labeled-object matching degree calculating unit 702, the same-identity-number labeling unit 703, and the new-identity-number labeling unit 704 of the automatic labeling apparatus 700, reference may be made to the descriptions of steps 201-204 in the embodiment corresponding to fig. 2, which are not repeated here.
In some optional implementations of this embodiment, the labeled-object matching degree calculating unit 702 may include:
a candidate image area determining subunit configured to determine a candidate image area in the labeled frame according to first position information of the object to be labeled in the frame to be labeled and an estimated displacement range; and
an actual matching degree calculating subunit configured to calculate the actual matching degree between the object to be labeled and each labeled object in the candidate image area respectively.
In some optional implementations of this embodiment, the actual matching degree calculating subunit may be further configured to:
calculate the intersection-over-union of the selected frames of the object to be labeled and of each labeled object in the candidate image area respectively; and
take the intersection-over-union as the actual matching degree.
In some optional implementations of this embodiment, the actual matching degree calculating subunit may be further configured to:
calculate the similarity of the boundary pixels of the selected frames of the object to be labeled and of each labeled object in the candidate image area respectively; and
take the selected-frame boundary pixel similarity as the actual matching degree.
In some optional implementations of this embodiment, the actual matching degree calculating subunit may be further configured to:
obtain the attribute labels of the object to be labeled and of each labeled object in the candidate image area respectively;
determine the attribute categories and the attribute values under each attribute category according to the attribute labels; and
determine the actual matching degree between the object to be labeled and each labeled object in the candidate image area according to the similarity of the attribute categories and attribute values.
In some optional implementations of this embodiment, the automatic labeling apparatus 700 may further include:
a first frame labeling unit configured to label each object to be labeled in the first frame of the consecutive image frames with a different identity number, respectively, in a preset order.
In some optional implementations of this embodiment, the first frame labeling unit may be further configured to:
sort the objects to be labeled in the first frame from near to far by the distance between each object's center point and the top-left vertex, obtaining a first sorting result; and
sort a second time any objects to be labeled at the same distance in the first sorting result, in ascending order of the X-axis coordinate values of their center points, obtaining a second sorting result.
In some optional implementations of this embodiment, the automatic labeling apparatus 700 may further include:
a labeling completion processing unit configured to, in response to all objects to be labeled in the frame to be labeled having been labeled with identity numbers, mark the frame to be labeled as a new labeled frame; and
a position transformation information determining unit configured to, in response to no new frame to be labeled existing in the consecutive image frames, determine position transformation information of objects with the same identity number according to each labeled frame in the consecutive image frames.
This embodiment exists as the apparatus embodiment corresponding to the method embodiment above. For consecutive image frames, the automatic labeling apparatus provided by this embodiment matches the objects to be labeled in each frame to be labeled against the labeled objects of the preceding labeled frame, determines from the matching result which objects to be labeled and which labeled objects are in fact the same labeled object, and accordingly carries the identity number of the labeled object over to the object to be labeled in the frame to be labeled; otherwise it gives a new identity number to the newly appearing labeled object. By abstracting out this repeatedly executed logic, an electronic device can complete the automatic labeling of the same labeled object across consecutive image frames, improving labeling efficiency and reducing labeling cost.
According to an embodiment of the present disclosure, the present disclosure also provides an electronic device including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor, the instructions being executable by the at least one processor to enable the at least one processor to implement the automatic labeling method described in any of the above embodiments when executed.
According to an embodiment of the present disclosure, the present disclosure further provides a readable storage medium, which stores computer instructions for enabling a computer to implement the automatic labeling method described in any of the above embodiments when executed.
According to an embodiment of the present disclosure, there is also provided a computer program product, which when executed by a processor is capable of implementing the automatic labeling method described in any of the above embodiments.
FIG. 8 illustrates a schematic block diagram of an example electronic device 800 that can be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. Electronic devices may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant as examples only and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 8, the apparatus 800 includes a computing unit 801 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM)802 or a computer program loaded from a storage unit 808 into a Random Access Memory (RAM) 803. In the RAM 803, various programs and data required for the operation of the device 800 can also be stored. The calculation unit 801, the ROM 802, and the RAM 803 are connected to each other by a bus 804. An input/output (I/O) interface 805 is also connected to bus 804.
A number of components in the device 800 are connected to the I/O interface 805, including: an input unit 806, such as a keyboard, a mouse, or the like; an output unit 807 such as various types of displays, speakers, and the like; a storage unit 808, such as a magnetic disk, optical disk, or the like; and a communication unit 809 such as a network card, modem, wireless communication transceiver, etc. The communication unit 809 allows the device 800 to exchange information/data with other devices via a computer network such as the internet and/or various telecommunication networks.
Computing unit 801 may be a variety of general and/or special purpose processing components with processing and computing capabilities. Some examples of the computing unit 801 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various dedicated Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, and the like. The calculation unit 801 executes the respective methods and processes described above, such as the automatic labeling method. For example, in some embodiments, the automatic annotation method can be implemented as a computer software program tangibly embodied in a machine-readable medium, such as storage unit 808. In some embodiments, part or all of the computer program can be loaded and/or installed onto device 800 via ROM 802 and/or communications unit 809. When loaded into RAM 803 and executed by the computing unit 801, a computer program may perform one or more of the steps of the automatic annotation method described above. Alternatively, in other embodiments, the computing unit 801 may be configured to perform the auto-annotation method by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be realized in digital electronic circuitry, integrated circuitry, field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on a chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include being implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and a server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, also called a cloud computing server or cloud host, a host product in the cloud computing service system that remedies the defects of high management difficulty and weak service extensibility in conventional physical hosts and virtual private server (VPS) services.
According to the technical solution of the embodiments of the present disclosure, the objects to be labeled in each frame to be labeled of the consecutive image frames are matched against the labeled objects of the preceding labeled frame, and the matching result determines which objects to be labeled and which labeled objects are in fact the same labeled object, so that the identity number of the labeled object is carried over to the object to be labeled in the frame to be labeled; otherwise a new identity number is given to the newly appearing labeled object. By abstracting out this repeatedly executed logic, an electronic device can complete the automatic labeling of the same labeled object across consecutive image frames, improving labeling efficiency and reducing labeling cost.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be executed in parallel, sequentially, or in different orders, as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved, and the present disclosure is not limited herein.
The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure should be included in the scope of protection of the present disclosure.

Claims (19)

1. An automatic labeling method, comprising:
determining a frame to be labeled in continuous image frames and a labeled frame preceding the frame to be labeled in time sequence;
calculating an actual matching degree between each object to be labeled in the frame to be labeled and each labeled object in the labeled frame;
labeling each object to be labeled whose actual matching degree with any labeled object exceeds a preset matching degree with the identity number of the corresponding labeled object; and
labeling each object to be labeled whose actual matching degree with every labeled object does not exceed the preset matching degree with a new identity number different from that of any labeled object.
2. The method of claim 1, wherein calculating the actual matching degree between each object to be labeled in the frame to be labeled and each labeled object in the labeled frame comprises:
determining a candidate image area in the labeled frame according to first position information and an estimated displacement range of the object to be labeled in the frame to be labeled; and
calculating the actual matching degree between the object to be labeled and each labeled object in the candidate image area, respectively.
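As an illustration of how such a candidate image area might be formed, the sketch below expands the object's box by the estimated displacement range and keeps only labeled objects whose center points fall inside it; the box format (x1, y1, x2, y2), the constant reach, and the center-point membership test are all assumptions, since the claim does not fix them.

    # Sketch of restricting matching to a candidate image area (illustrative).
    def candidate_area(box, reach):
        """Expand a box (x1, y1, x2, y2) by the estimated displacement range."""
        x1, y1, x2, y2 = box
        return (x1 - reach, y1 - reach, x2 + reach, y2 + reach)

    def in_area(box, area):
        """True if the box's center point lies inside the candidate area."""
        cx, cy = (box[0] + box[2]) / 2, (box[1] + box[3]) / 2
        ax1, ay1, ax2, ay2 = area
        return ax1 <= cx <= ax2 and ay1 <= cy <= ay2

Matching then iterates only over labeled objects for which in_area(...) holds, which bounds the per-object comparison cost.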
3. The method according to claim 2, wherein calculating the actual matching degree between the object to be labeled and each labeled object in the candidate image area comprises:
calculating an intersection-over-union ratio between the selection frame of the object to be labeled and the selection frame of each labeled object in the candidate image area, respectively; and
taking the intersection-over-union ratio as the actual matching degree.
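Intersection-over-union is a standard measure, so claim 3 admits a direct implementation; a sketch for (x1, y1, x2, y2) boxes:

    def iou(a, b):
        """Intersection-over-union ratio of two selection frames."""
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
        area_a = (a[2] - a[0]) * (a[3] - a[1])
        area_b = (b[2] - b[0]) * (b[3] - b[1])
        union = area_a + area_b - inter
        return inter / union if union else 0.0  # in [0, 1]; 1 = perfect overlap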
4. The method according to claim 2, wherein calculating the actual matching degree between the object to be labeled and each labeled object in the candidate image area comprises:
calculating a selection-frame boundary pixel similarity between the object to be labeled and each labeled object in the candidate image area, respectively; and
taking the selection-frame boundary pixel similarity as the actual matching degree.
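The claim does not fix how selection-frame boundary pixel similarity is computed. One plausible reading, sketched under explicit assumptions (grayscale images as 2-D NumPy arrays, boxes as (x1, y1, x2, y2), resampling both boundaries to a common length, mean absolute difference as the distance), is to compare the pixels lying along the two selection-frame boundaries:

    import numpy as np

    def boundary_pixel_similarity(img_a, box_a, img_b, box_b, size=64):
        """Similarity of the pixels on two selection-frame boundaries;
        the whole scheme here is an assumption, not the patented method."""
        def boundary(img, box):
            x1, y1, x2, y2 = box
            top    = img[y1, x1:x2]
            bottom = img[y2 - 1, x1:x2]
            left   = img[y1:y2, x1]
            right  = img[y1:y2, x2 - 1]
            strip = np.concatenate([top, bottom, left, right]).astype(float)
            # Resample to a fixed length so differently sized boxes compare.
            idx = np.linspace(0, len(strip) - 1, size).astype(int)
            return strip[idx]
        a = boundary(img_a, box_a)
        b = boundary(img_b, box_b)
        return 1.0 - float(np.mean(np.abs(a - b))) / 255.0  # 1.0 = identical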
5. The method according to claim 2, wherein calculating the actual matching degree between the object to be labeled and each labeled object in the candidate image area comprises:
acquiring attribute tags of the object to be labeled and of each labeled object in the candidate image area, respectively;
determining attribute categories and the attribute value under each attribute category according to the attribute tags; and
determining the actual matching degree between the object to be labeled and each labeled object in the candidate image area according to the similarity of the attribute categories and attribute values.
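A minimal sketch of attribute-tag matching, assuming tags are dictionaries mapping an attribute category to its value and scoring by the fraction of shared categories whose values agree; both are assumptions, since the claim only requires some similarity over categories and values:

    def attribute_match_degree(tags_a, tags_b):
        """Fraction of shared attribute categories whose values agree.
        Example tags: {"type": "car", "color": "red"} (hypothetical)."""
        shared = set(tags_a) & set(tags_b)
        if not shared:
            return 0.0
        agreeing = sum(1 for cat in shared if tags_a[cat] == tags_b[cat])
        return agreeing / len(shared)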
6. The method of claim 1, further comprising:
labeling each object to be labeled in the first frame of the continuous image frames with a different identity number according to a preset order.
7. The method according to claim 6, wherein labeling each object to be labeled in the first frame of the continuous image frames with a different identity number according to a preset order comprises:
sorting the objects to be labeled from near to far according to the distance between the center point of each object to be labeled in the first frame and the top-left vertex, to obtain a first sorting result; and
re-sorting at least two objects to be labeled at the same distance in the first sorting result in ascending order of the X-axis coordinates of their center points, to obtain a second sorting result.
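A sketch of this two-level ordering, taking the top-left vertex as the image origin (0, 0) and boxes as (x1, y1, x2, y2); both are assumptions:

    import math

    def first_frame_order(objects):
        """Sort by center-point distance from the top-left vertex, breaking
        ties (equal distances) by ascending center X coordinate."""
        def key(obj):
            x1, y1, x2, y2 = obj["box"]
            cx, cy = (x1 + x2) / 2, (y1 + y2) / 2
            return (math.hypot(cx, cy), cx)  # (primary, secondary) sort keys
        return sorted(objects, key=key)

Identity numbers for the first frame then follow this order, for example id = position in the sorted list plus one.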
8. The method of any of claims 1-7, further comprising:
marking the frame to be labeled as a new labeled frame in response to all objects to be labeled in the frame to be labeled having completed identity number labeling; and
determining position transformation information of objects having the same identity number according to each labeled frame in the continuous image frames, in response to there being no new frame to be labeled in the continuous image frames.
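Once no frame to be labeled remains, position transformation information per identity number falls out of grouping boxes by id across the labeled frames. A minimal sketch, with frames as lists of {"id", "box"} dicts (an assumed layout):

    from collections import defaultdict

    def position_tracks(labeled_frames):
        """Map each identity number to its (frame_index, box) sequence;
        consecutive entries describe the object's position transformation."""
        tracks = defaultdict(list)
        for t, frame in enumerate(labeled_frames):
            for obj in frame:
                tracks[obj["id"]].append((t, obj["box"]))
        return tracks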
9. An automatic labeling apparatus comprising:
the device comprises a frame to be marked and marked frame determining unit, a marking unit and a marking unit, wherein the frame to be marked and the marked frame determining unit are configured to determine a frame to be marked in continuous image frames and a marked frame with a time sequence before the frame to be marked;
the marked object matching degree calculating unit is configured to calculate actual matching degrees between each object to be marked in the frame to be marked and each marked object in the marked frame;
the same identity number marking unit is configured to mark an identity number which is the same as that of a corresponding marked object as a to-be-marked object of which the actual matching degree with any marked object exceeds a preset matching degree;
and the new identity number marking unit is configured to mark new identity numbers different from any marked objects as the objects to be marked, of which the actual matching degree with any marked objects does not exceed the preset matching degree.
10. The apparatus of claim 9, wherein the matching degree calculating unit comprises:
a candidate image area determining subunit configured to determine a candidate image area in the labeled frame according to first position information and an estimated displacement range of the object to be labeled in the frame to be labeled; and
an actual matching degree calculating subunit configured to calculate the actual matching degree between the object to be labeled and each labeled object in the candidate image area, respectively.
11. The apparatus of claim 10, wherein the actual matching degree calculating subunit is further configured to:
calculate an intersection-over-union ratio between the selection frame of the object to be labeled and the selection frame of each labeled object in the candidate image area, respectively; and
take the intersection-over-union ratio as the actual matching degree.
12. The apparatus of claim 10, wherein the actual matching degree calculating subunit is further configured to:
calculate a selection-frame boundary pixel similarity between the object to be labeled and each labeled object in the candidate image area, respectively; and
take the selection-frame boundary pixel similarity as the actual matching degree.
13. The apparatus of claim 10, wherein the actual matching degree calculating subunit is further configured to:
acquire attribute tags of the object to be labeled and of each labeled object in the candidate image area, respectively;
determine attribute categories and the attribute value under each attribute category according to the attribute tags; and
determine the actual matching degree between the object to be labeled and each labeled object in the candidate image area according to the similarity of the attribute categories and attribute values.
14. The apparatus of claim 9, further comprising:
a first frame labeling unit configured to label each object to be labeled in the first frame of the continuous image frames with a different identity number according to a preset order.
15. The apparatus of claim 14, wherein the first frame labeling unit is further configured to:
sort the objects to be labeled from near to far according to the distance between the center point of each object to be labeled in the first frame and the top-left vertex, to obtain a first sorting result; and
re-sort at least two objects to be labeled at the same distance in the first sorting result in ascending order of the X-axis coordinates of their center points, to obtain a second sorting result.
16. The apparatus according to any one of claims 9-15, further comprising:
a labeling completion processing unit configured to mark the frame to be labeled as a new labeled frame in response to all objects to be labeled in the frame to be labeled having completed identity number labeling; and
a position transformation information determining unit configured to determine, in response to there being no new frame to be labeled in the continuous image frames, position transformation information of objects having the same identity number according to each labeled frame in the continuous image frames.
17. An electronic device, comprising:
at least one processor; and
a memory communicatively connected to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the automatic labeling method of any one of claims 1-8.
18. A non-transitory computer-readable storage medium having stored thereon computer instructions for causing a computer to perform the automatic labeling method of any one of claims 1-8.
19. A computer program product comprising a computer program which, when executed by a processor, implements an automatic labeling method according to any of claims 1-8.
CN202110701350.0A, filed 2021-06-24 (priority date 2021-06-24): Automatic labeling method, device, equipment, storage medium and computer program product. Published as CN113378958A; status: pending.

Priority Applications (1)

Application Number: CN202110701350.0A
Priority Date / Filing Date: 2021-06-24
Title: Automatic labeling method, device, equipment, storage medium and computer program product


Publications (1)

Publication Number: CN113378958A
Publication Date: 2021-09-10

Family

ID: 77578657

Family Applications (1)

Application Number: CN202110701350.0A
Title: Automatic labeling method, device, equipment, storage medium and computer program product
Priority Date / Filing Date: 2021-06-24
Status: Pending

Country Status (1)

Country: CN
Publication: CN113378958A

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023103329A1 (en) * 2021-12-08 2023-06-15 北京百度网讯科技有限公司 Data labeling method, apparatus, and system, device, and storage medium
WO2024104239A1 (en) * 2022-11-15 2024-05-23 北京字跳网络技术有限公司 Video labeling method and apparatus, and device, medium and product

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170228940A1 (en) * 2016-02-09 2017-08-10 Intel Corporation Recognition-based object segmentation of a 3-dimensional image
CN109145752A (en) * 2018-07-23 2019-01-04 北京百度网讯科技有限公司 For assessing the method, apparatus, equipment and medium of object detection and track algorithm
CN109409364A (en) * 2018-10-16 2019-03-01 北京百度网讯科技有限公司 Image labeling method and device
CN109753975A (en) * 2019-02-02 2019-05-14 杭州睿琪软件有限公司 Training sample obtaining method and device, electronic equipment and storage medium
CN110457680A (en) * 2019-07-02 2019-11-15 平安科技(深圳)有限公司 Entity disambiguation method, device, computer equipment and storage medium
CN110705405A (en) * 2019-09-20 2020-01-17 阿里巴巴集团控股有限公司 Target labeling method and device
CN111145214A (en) * 2019-12-17 2020-05-12 深圳云天励飞技术有限公司 Target tracking method, device, terminal equipment and medium
CN111815595A (en) * 2020-06-29 2020-10-23 北京百度网讯科技有限公司 Image semantic segmentation method, device, equipment and readable storage medium
CN111860559A (en) * 2019-12-31 2020-10-30 滴图(北京)科技有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN112101223A (en) * 2020-09-16 2020-12-18 北京百度网讯科技有限公司 Detection method, device, equipment and computer storage medium
CN112215205A (en) * 2020-11-06 2021-01-12 腾讯科技(深圳)有限公司 Target identification method and device, computer equipment and storage medium
CN112347810A (en) * 2019-08-07 2021-02-09 杭州萤石软件有限公司 Method and device for detecting moving target object and storage medium
CN112528079A (en) * 2020-12-22 2021-03-19 北京百度网讯科技有限公司 System detection method, apparatus, electronic device, storage medium, and program product
WO2021082692A1 (en) * 2019-10-30 2021-05-06 平安科技(深圳)有限公司 Pedestrian picture labeling method and device, storage medium, and intelligent apparatus



Similar Documents

Publication Publication Date Title
CN113095336B (en) Method for training key point detection model and method for detecting key points of target object
CN112560862B (en) Text recognition method and device and electronic equipment
CN113436100B (en) Method, apparatus, device, medium, and article for repairing video
CN111860522B (en) Identity card picture processing method, device, terminal and storage medium
CN114550177A (en) Image processing method, text recognition method and text recognition device
CN113378958A (en) Automatic labeling method, device, equipment, storage medium and computer program product
CN113780098A (en) Character recognition method, character recognition device, electronic equipment and storage medium
CN113378712A (en) Training method of object detection model, image detection method and device thereof
CN113643260A (en) Method, apparatus, device, medium and product for detecting image quality
CN113361468A (en) Business quality inspection method, device, equipment and storage medium
CN113378855A (en) Method for processing multitask, related device and computer program product
CN114359932B (en) Text detection method, text recognition method and device
CN115311469A (en) Image labeling method, training method, image processing method and electronic equipment
CN112508005B (en) Method, apparatus, device and storage medium for processing image
CN113963186A (en) Training method of target detection model, target detection method and related device
CN113326766A (en) Training method and device of text detection model and text detection method and device
CN112990042A (en) Image annotation auditing method, device, equipment, storage medium and program product
CN114882313B (en) Method, device, electronic equipment and storage medium for generating image annotation information
CN115019057A (en) Image feature extraction model determining method and device and image identification method and device
CN114445682A (en) Method, device, electronic equipment, storage medium and product for training model
CN114120305A (en) Training method of text classification model, and recognition method and device of text content
CN114119990A (en) Method, apparatus and computer program product for image feature point matching
CN113936158A (en) Label matching method and device
CN114093006A (en) Training method, device and equipment of living human face detection model and storage medium
CN113435257A (en) Method, device and equipment for identifying form image and storage medium

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
RJ01: Rejection of invention patent application after publication (application publication date: 20210910)