WO2022247628A1 - Data annotation method and related product - Google Patents

Data annotation method and related product

Info

Publication number
WO2022247628A1
WO2022247628A1 (PCT application PCT/CN2022/091951; CN2022091951W)
Authority
WO
WIPO (PCT)
Prior art keywords
edge
target object
label
edge line
point
Prior art date
Application number
PCT/CN2022/091951
Other languages
French (fr)
Chinese (zh)
Inventor
晋周南 (Jin Zhounan)
陈浩 (Chen Hao)
李在旺 (Li Zaiwang)
Original Assignee
Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd. (华为技术有限公司)
Publication of WO2022247628A1 publication Critical patent/WO2022247628A1/en

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/10 - Segmentation; Edge detection
    • G06T7/12 - Edge-based segmentation
    • G06T7/13 - Edge detection

Definitions

  • the present application relates to the technical field of data processing, in particular to a data labeling method and related products.
  • a tag is a data form used to describe the characteristics of a business entity.
  • the tag can be used to describe the business entity and reflect the characteristics of the business entity from multiple perspectives.
  • Labels can be obtained from data labeling, which is the act of labeling data by data labelers with the help of labeling tools.
  • the types of data annotation include image annotation, voice annotation, text annotation, and so on; among them, image annotation can be applied to scenarios such as face recognition and vehicle recognition for autonomous driving.
  • data labelers need to mark the contours of different targets in different colors, attach labels to the corresponding contours, and use the labels to delineate the targets within the contours, so that a model can automatically identify the different targets in an image.
  • labels therefore need to be obtained.
  • the method of interactive image segmentation and labeling is usually used for data labeling.
  • an initial area is given manually, a segmentation algorithm is used to segment the target object within the initial area to obtain the target object's edge points, and the segmentation result of the target object is then corrected by correcting those edge points.
  • the embodiment of the present application provides a data labeling method and related products.
  • a deep-learning method based on preselection boxes is proposed.
  • edge detection is performed on the target object in the target image to obtain the first contour of the target object, and weak semantic information is then introduced.
  • the correction of the edge points of the target object's first contour is thereby converted into a correction of the edge lines of the first contour, and the correction yields the second contour of the target object, which can greatly improve the accuracy and efficiency of the labeling results.
  • the embodiment of the present application provides a data labeling method, which includes:
  • different edge lines of the target object included in the target image can be obtained, such as the first edge line and the second edge line.
  • the target object is initially marked based on the first edge line to obtain the first outline of the target object.
  • the second edge line is used to correct the edge lines of the first contour obtained from the initial marking, so as to obtain the second contour of the target object.
  • a segmentation algorithm usually segments the target object to obtain its edge points and then corrects the target object's contour by correcting those edge points; the edge points must therefore be corrected manually one by one, which is time-consuming and labor-intensive, and the accuracy and efficiency of the labeling results are low.
  • the data labeling method provided by the embodiment of the present application uses the first edge line to initially mark the target object and then uses the second edge line to correct the edge lines of the first contour obtained from that initial marking. The correction of the edge points of the target object's first contour is thus converted into a correction of the edge lines of the first contour, and the correction yields the second contour of the target object, which can greatly improve the accuracy and efficiency of the labeling results.
  • the correcting the edge line of the first outline by using the second edge line includes:
  • a contour formed by the second edge line and the edge line of the first contour is determined as the second contour.
  • a possible implementation manner of using the second edge line to correct the edge line of the first contour is provided.
  • the contour formed by the second edge line and the edge line of the first contour is determined as the second contour of the target object, and the correction method improves the accuracy and efficiency of the labeling result.
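The edge-line-level correction described above can be sketched as follows. The representation (contours and edge lines as ordered lists of (x, y) points) and the assumption that the correcting second edge line starts and ends on the first contour are illustrative simplifications, not details from the patent:

```python
def correct_contour(first_contour, second_edge_line):
    """Form the second contour from the second edge line and the retained
    edge lines of the first contour: the stretch of the first contour
    between the two points where the correcting line meets it is replaced
    by the correcting line itself."""
    start, end = second_edge_line[0], second_edge_line[-1]
    i, j = first_contour.index(start), first_contour.index(end)
    if i > j:  # normalise orientation so the splice runs forward
        i, j = j, i
        second_edge_line = second_edge_line[::-1]
    return first_contour[:i] + list(second_edge_line) + first_contour[j + 1:]
```

Correcting a whole edge line at once, rather than dragging individual edge points, is what the claimed method credits for the gain in accuracy and efficiency.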
  • before obtaining the first edge line and the second edge line of the target object contained in the target image, the method further includes:
  • the segmentation processing is used to separate the target object included in the target image;
  • the edge detection is used to separate the edge of the target object
  • the obtaining the first edge line and the second edge line of the target object contained in the target image includes:
  • the first edge line and the second edge line are determined according to the first edge point and the second edge point.
  • a possible implementation manner of obtaining the first edge line and the second edge line of the target object included in the target image is provided. That is, before determining the first edge line and the second edge line of the target object, the target object in the target image is separated by segmenting the target image to obtain the first edge point of the target object.
  • the segmentation processing here includes image segmentation methods not based on deep learning (for example, methods based on the watershed algorithm or on graph theory) and image segmentation methods based on deep learning; the main difference between the two is that deep-learning-based image segmentation requires a convolutional neural network.
  • an edge detection is performed on the target object, and the edge of the target object is separated to obtain a second edge point of the target object.
  • the edge detection here includes deep-learning edge detection without preselection boxes and deep-learning edge detection with preselection boxes; the latter does not detect whether each pixel is an edge point, but instead detects the position and category of the edge point corresponding to each preselection box.
  • although the above-mentioned first edge point and second edge point are both edge points of the target object, they are obtained by different methods; therefore, the first edge point and the second edge point are not completely the same.
  • the first edge line and the second edge line of the above-mentioned target object can then be determined; since the first edge point and the second edge point are not completely the same, the resulting first edge line and second edge line are likewise not completely the same.
  • the accuracy of the acquired edge lines of the target object can be improved, and the subsequent correction of the edge points of the target object's first contour can be converted into a correction of the edge lines of the first contour, which improves the accuracy and efficiency of the labeling result.
  • the determining the first edge line and the second edge line according to the first edge point and the second edge point includes:
  • the first edge line is obtained;
  • the points on the first edge line are edge points where the first edge point and the second edge point coincide;
  • the second edge line is obtained according to the first edge point, the second edge point, and the first edge line.
  • a possible specific implementation manner of determining the first edge line and the second edge line according to the first edge point and the second edge point is provided.
  • the line formed by the edge points where the first edge point and the second edge point coincide is determined as the first edge line; that is, the first edge line lies on both the edge result obtained by the above-mentioned segmentation processing and the edge result obtained by the above-mentioned edge detection, so the first edge line can be obtained through the segmentation processing and the edge detection together.
  • the second edge line is determined according to the first edge point, the second edge point and the above-mentioned first edge line.
  • the accuracy of the first edge line and the second edge line obtained in the embodiment of the present application is higher, and thus the accuracy and efficiency of the labeling results are also higher.
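Under a point-list representation of edge lines (an assumption for illustration), the first edge line of this implementation manner is simply the sequence of edge points present in both the segmentation result and the edge-detection result:

```python
def coinciding_points(seg_points, det_points):
    """First edge line: edge points where the segmentation-derived points
    and the detection-derived points coincide, in segmentation order."""
    det = set(det_points)
    return [p for p in seg_points if p in det]
```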
  • the obtaining the second edge line according to the first edge point, the second edge point, and the first edge line includes:
  • a third edge line is obtained; the points on the third edge line are the first edge point or the second edge point, and are not edge points at which the first edge point coincides with the second edge point;
  • the third edge line having two or more intersection points with the first edge line is determined as the second edge line.
  • a possible specific implementation manner of determining the second edge line according to the first edge point, the second edge point, and the first edge line is provided.
  • the line formed by the union of the first edge point and the second edge point, after removing the edge points where they coincide, is determined as the third edge line; that is, the third edge line lies on the edge result obtained by the segmentation processing or on the edge result obtained by the edge detection, but not on both. A third edge line having two or more intersection points with the first edge line is then determined as the second edge line.
  • the accuracy of the second edge line obtained in the embodiment of the present application is higher, and by using the second edge line the correction of the edge points of the first contour of the target object can be converted into a correction of the edge lines of the first contour, which is beneficial to the accuracy and efficiency of the correction.
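A minimal sketch of this selection rule, with edge lines modelled as lists of (x, y) points (an illustrative assumption): the third edge lines consist of points found by exactly one of the two methods, and a third edge line qualifies as a second edge line when it meets the first edge line in at least two points. In this sketch a third edge line may carry its two anchor points, which lie on the first edge line:

```python
def non_coinciding_points(seg_points, det_points):
    """Points belonging to third edge lines: on the segmentation result or
    the detection result, but not on both (symmetric difference)."""
    seg, det = set(seg_points), set(det_points)
    return (seg | det) - (seg & det)

def select_second_edge_lines(third_edge_lines, first_edge_line):
    """Keep only the third edge lines that intersect the first edge line
    in two or more points; these become second edge lines."""
    first = set(first_edge_line)
    return [line for line in third_edge_lines
            if len(set(line) & first) >= 2]
```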
  • the performing edge detection on the target object to obtain the second edge point of the target object includes:
  • edge points on the edge line of the target object are adjusted to obtain the second edge point.
  • Edge detection is performed on the target object, and the edge of the target object is separated to obtain a second edge point of the target object.
  • the edge detection based on the deep learning of the preselection box is not to detect whether the pixel point on the target image is an edge point, but to detect the position and category of the edge point corresponding to each preselection box.
  • the category label of the target object is determined from the edge point corresponding to each preselection box on the edge line; the edge points corresponding to the preselection boxes are then adjusted according to that category label: each preselection box's edge point is assigned an edge direction and an edge distance, and each edge point is moved by its edge distance along its edge direction to obtain the above-mentioned second edge point.
  • the obtained second edge point can have higher positional accuracy, and the edge line of the target object obtained from it is accordingly more accurate, which is conducive to improving the accuracy and efficiency of the labeling results.
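The per-preselection-box adjustment described above (assign each edge point an edge direction and an edge distance, then move the point) can be sketched as follows; treating directions as 2D unit vectors and passing parallel lists are illustrative assumptions:

```python
def adjust_edge_points(edge_points, directions, distances):
    """Move each preselection box's edge point by its assigned edge
    distance along its assigned edge direction, producing the second
    edge points. edge_points, directions, and distances are parallel
    lists; directions are (dx, dy) unit vectors."""
    adjusted = []
    for (x, y), (dx, dy), dist in zip(edge_points, directions, distances):
        adjusted.append((x + dx * dist, y + dy * dist))
    return adjusted
```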
  • the determining the label of the target object according to the position of the edge point on the edge line of the target object includes:
  • a possible implementation manner of determining the category label of the target object is provided.
  • the category label of the target object is further determined according to the criticality of the labels.
  • the category label of the target object is determined by assigning categories to the preselection boxes, which can improve the accuracy and efficiency of determining the target object's category; the edge points on the edge line of the target object are adjusted accordingly, so that the positional accuracy of the obtained second edge point is higher.
  • the performing segmentation processing on the target image to obtain the first edge point of the target object includes:
  • the image segmentation method based on deep learning is used: the target image is input into a convolutional neural network for segmentation processing.
  • the edge points of the target object obtained by this method are more accurate, and the labeling is accordingly more efficient.
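The convolutional neural network itself is outside the scope of a short sketch; assuming the network has already produced a binary segmentation mask, the first edge points can be read off as the foreground pixels that touch the background. The 4-neighbour convention below is a common choice, not mandated by the patent:

```python
import numpy as np

def mask_edge_points(mask):
    """First edge points from a binary segmentation mask: a foreground
    pixel is an edge point if any of its 4-neighbours is background."""
    mask = np.asarray(mask, dtype=bool)
    padded = np.pad(mask, 1, constant_values=False)
    # A pixel is interior when all four 4-neighbours are foreground.
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                padded[1:-1, :-2] & padded[1:-1, 2:])
    edge = mask & ~interior
    return [(int(r), int(c)) for r, c in np.argwhere(edge)]
```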
  • the determining the tag of the target object according to the criticality of the first tag and the criticality of the second tag includes:
  • when the criticality of the second tag is higher than that of the first tag, the second tag is determined as the tag of the target object.
  • when the number of edge points on the first label is equal to the number of edge points on the second label, the category label of the target object is further determined according to label criticality. For example, when the criticality of the first label is higher than that of the second label, the first label is determined as the category label of the target object; similarly, when the criticality of the second label is higher than that of the first label, the second label is determined as the category label of the target object.
  • the category label of the target object is determined by further comparing label criticality, which can improve the accuracy and efficiency of determining the target object's category; the edge points on the edge line of the target object are adjusted accordingly, so that the positional accuracy of the obtained second edge point is higher.
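Both rules above (majority of edge points, then criticality as tie-break) can be folded into one comparison key. The criticality mapping is a hypothetical input, since the patent does not specify how criticality is represented:

```python
from collections import Counter

def vote_label(point_labels, criticality):
    """Category label of the target object: the label carrying the most
    edge points wins, and an exact tie is broken by the higher label
    criticality. point_labels holds one label per edge point;
    criticality maps label -> importance."""
    counts = Counter(point_labels)
    return max(counts, key=lambda lab: (counts[lab], criticality[lab]))
```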
  • the method also includes:
  • according to the target data of the target radar, a first frame and a second frame of the target object are obtained; the first frame is different from the second frame, and the target data includes data obtained by the target radar detecting the target object;
  • different frames of the target object, such as the first frame and the second frame, can be obtained.
  • an initial marking is performed on the target object based on the first frame to obtain a first contour of the target object.
  • the second frame is used to correct the frame of the first contour obtained from the initial marking, so as to obtain the second contour of the target object.
  • the data labeling method provided in the embodiment of this application is applicable not only to labeling two-dimensional image data but also to labeling three-dimensional radar data. Converting the correction of the edge points of the target object's first contour into a correction of the frame of the first contour, and correcting it to obtain the second contour, can greatly improve the accuracy and efficiency of the labeling result.
  • the modifying the frame of the first outline by using the second frame includes:
  • a contour formed by the second frame and the frame of the first contour is determined as the second contour.
  • before obtaining the first frame and the second frame of the target object according to the target data of the target radar, the method further includes:
  • segmentation processing is used to separate the target object from the detection area of the target radar;
  • the edge detection is used to separate the edge of the target object
  • the first frame and the second frame of the target object are obtained, including:
  • the first frame and the second frame are determined according to the first edge surface and the second edge surface.
  • determining the first frame and the second frame according to the first edge surface and the second edge surface includes:
  • the first frame is obtained;
  • the edge surface on the first frame is an edge surface where the first edge surface and the second edge surface overlap;
  • the second frame is obtained according to the first edge surface, the second edge surface and the first frame.
  • obtaining the second frame according to the first edge surface, the second edge surface, and the first frame includes:
  • a third frame is obtained; the surfaces on the third frame are the first edge surface or the second edge surface, and are not edge surfaces where the first edge surface coincides with the second edge surface;
  • the third frame having two or more intersecting surfaces with the first frame is determined as the second frame.
  • the performing edge detection on the target object to obtain the second edge surface of the target object includes:
  • the edge surface on the frame of the target object is adjusted to obtain the second edge surface.
  • determining the label of the target object according to the position of the edge face on the frame of the target object includes:
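The three-dimensional branch above mirrors the two-dimensional one, with edge surfaces in place of edge points and frames in place of edge lines. A sketch of the selection rule for the second frame, modelling a frame as a collection of hashable face identifiers, is given below; reading "two or more intersecting surfaces" as two or more shared faces is an interpretive simplification of the 3D geometry, not a detail from the patent:

```python
def select_second_frames(third_frames, first_frame):
    """A third frame is determined to be a second frame when it has two
    or more faces in common with the first frame. Frames are modelled
    as collections of face identifiers."""
    first = set(first_frame)
    return [frame for frame in third_frames
            if len(set(frame) & first) >= 2]
```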
  • the embodiment of the present application provides a data labeling device, which includes:
  • a determining unit configured to obtain a first edge line and a second edge line of a target object included in the target image; the first edge line and the second edge line are different;
  • the determining unit is further configured to obtain a first contour of the target object according to the first edge line;
  • a correction unit configured to use the second edge line to correct the edge line of the first contour to obtain the second contour of the target object.
  • the correction unit is specifically configured to determine a contour formed by the second edge line and edge lines of the first contour as the second contour.
  • the device also includes:
  • a segmentation unit configured to perform segmentation processing on the target image to obtain a first edge point of the target object; the segmentation processing is used to separate the target object included in the target image;
  • An edge detection unit configured to perform edge detection on the target object to obtain a second edge point of the target object; the edge detection is used to separate the edge of the target object;
  • the determining unit is specifically configured to determine the first edge line and the second edge line according to the first edge point and the second edge point.
  • the determining unit is specifically further configured to obtain the first edge line according to the first edge point and the second edge point; the points on the first edge line are edge points where the first edge point and the second edge point coincide;
  • the determining unit is specifically further configured to obtain the second edge line according to the first edge point, the second edge point, and the first edge line.
  • the determining unit is further configured to obtain a third edge line according to the first edge point and the second edge point; the points on the third edge line are the first edge point or the second edge point, and are not edge points where the first edge point and the second edge point coincide;
  • the determining unit is further configured to determine the third edge line having two or more intersection points with the first edge line as the second edge line.
  • the edge detection unit is specifically configured to determine the label of the target object according to the position of the edge point on the edge line of the target object;
  • the edge detection unit is further configured to adjust edge points on the edge line of the target object according to the label of the target object to obtain the second edge point.
  • the edge detection unit is further configured to determine the first label as the label of the target object when the number of edge points on the first label is greater than the number of edge points on the second label;
  • the edge detection unit is further configured to determine the second label as the label of the target object when the number of edge points on the second label is greater than the number of edge points on the first label;
  • the edge detection unit is specifically further configured to determine the label of the target object according to the criticality of the first label and the criticality of the second label when the number of edge points on the first label is equal to the number of edge points on the second label.
  • the segmentation unit is specifically configured to input the target image into a convolutional neural network for the segmentation process to obtain the first edge point.
  • the determining unit is specifically further configured to determine the first tag as the tag of the target object when the criticality of the first tag is higher than that of the second tag;
  • the determining unit is specifically further configured to determine the second tag as the tag of the target object when the criticality of the second tag is higher than that of the first tag.
  • the determining unit is configured to obtain the first frame and the second frame of the target object according to the target data of the target radar; the first frame and the second frame are different, and the The target data includes data obtained by the target radar detecting the target object;
  • the determining unit is further configured to obtain a first contour of the target object according to the first frame
  • the correction unit is configured to use the second frame to correct the frame of the first contour to obtain the second contour of the target object.
  • the correction unit is specifically configured to determine a contour formed by the second frame and a frame of the first contour as the second contour.
  • the segmentation unit is configured to perform segmentation processing on the target object to obtain a first edge surface of the target object; the segmentation processing is used to separate the target object from the detection area of the target radar;
  • the edge detection unit is configured to perform edge detection on the target object to obtain a second edge surface of the target object; the edge detection is used to separate the edge of the target object;
  • the determining unit is specifically configured to determine the first frame and the second frame according to the first edge surface and the second edge surface.
  • the determining unit is further configured to obtain the first frame according to the first edge surface and the second edge surface; the edge surface on the first frame is an edge face where the first edge face and the second edge face coincide;
  • the determining unit is specifically further configured to obtain the second frame according to the first edge surface, the second edge surface, and the first frame.
  • the determining unit is further configured to obtain a third frame according to the first edge surface and the second edge surface; the surfaces on the third frame are the first edge surface or the second edge surface, and are not edge surfaces where the first edge surface and the second edge surface coincide;
  • the determining unit is further configured to determine, as the second frame, the third frame that has two or more intersecting surfaces with the first frame.
  • the edge detection unit is specifically configured to determine the label of the target object according to the position of the edge face on the frame of the target object;
  • the edge detection unit is specifically further configured to adjust an edge surface on a border of the target object according to the label of the target object to obtain the second edge surface.
  • the edge detection unit is further configured to determine the first label as the label of the target object when the number of edge faces on the first label is greater than the number of edge faces on the second label;
  • the edge detection unit is further configured to determine the second label as the label of the target object when the number of edge faces on the second label is greater than the number of edge faces on the first label;
  • the edge detection unit is specifically further configured to determine the label of the target object according to the criticality of the first label and the criticality of the second label when the number of edge faces on the first label is equal to the number of edge faces on the second label.
  • the embodiment of the present application provides a data labeling device, which includes a processor and a memory; the memory is used to store computer programs or instructions, and the processor is used to execute the computer programs or instructions stored in the memory, so that the data labeling device executes the method according to the above first aspect and any possible implementation manner.
  • the data tagging device further includes a transceiver, configured to receive data or send data.
  • an embodiment of the present application provides a computer-readable storage medium, which is used to store a computer program; when the computer program is executed, the method described in the first aspect and any possible implementation manner is implemented.
  • the embodiment of the present application provides a computer program product, which includes a computer program; when the computer program is executed, the method described in the first aspect and any possible implementation manner is carried out.
  • the embodiment of the present application provides a chip, the chip includes a processor, and the processor is used to execute instructions; when the processor executes the instructions, the chip performs the method described in the first aspect and any possible implementation manner.
  • the chip further includes a communication interface for inputting data or outputting data.
  • an embodiment of the present application provides a terminal, the terminal comprising at least one data tagging device according to the second aspect or the third aspect, or the chip according to the sixth aspect.
  • the embodiment of the present application provides a server, the server comprising at least one data tagging device according to the second aspect or the third aspect, or the chip according to the sixth aspect.
  • the above-mentioned processor may be a processor specially designed to execute these methods, or a processor, such as a general-purpose processor, that executes them by running computer instructions stored in the memory.
  • the above-mentioned memory can be a non-transitory memory, such as a read-only memory (ROM); the memory can be integrated with the processor on the same chip or arranged on different chips.
  • the embodiment does not limit the type of the memory and the arrangement of the memory and the processor.
  • the above at least one memory is located outside the device.
  • the at least one memory is located within the device.
  • part of the memory of the at least one memory is located inside the device, and another part of the memory is located outside the device.
  • the processor and the memory may also be integrated into one device; that is, the processor and the memory may be integrated together.
  • the edge detection of the target object in the target image is carried out by a proposed deep-learning method based on preselection boxes to obtain the first contour of the target object; weak semantic information is then introduced, so that the correction of the edge points of the first contour is converted into a correction of the edge lines of the first contour, and the second contour of the target object is obtained through correction, which can greatly improve the accuracy and efficiency of the labeling result.
  • Fig. 1 is a schematic diagram of the effect of a data annotation provided by the embodiment of the present application.
  • FIG. 2 is a schematic diagram of the architecture of a data labeling system provided by an embodiment of the present application
  • FIG. 3 is a schematic diagram of an application scenario of data labeling provided by an embodiment of the present application.
  • FIG. 4 is a schematic flow chart of a data labeling method provided by an embodiment of the present application.
  • Fig. 5a is a schematic diagram of the effect of data labeling provided by the embodiment of the present application.
  • Fig. 5b is a schematic diagram of the effect of data labeling provided by the embodiment of the present application.
  • Fig. 5c is a schematic diagram of the effect of data labeling provided by the embodiment of the present application.
  • FIG. 6a is a schematic diagram of the architecture of a data labeling system provided by an embodiment of the present application.
  • FIG. 6b is a schematic diagram of the architecture of a data labeling system provided by an embodiment of the present application.
  • FIG. 6c is a schematic diagram of the architecture of a data labeling system provided by the embodiment of the present application.
  • FIG. 7 is a schematic flowchart of another data labeling method provided by the embodiment of the present application.
  • FIG. 8 is a schematic structural diagram of a data labeling device provided by an embodiment of the present application.
  • FIG. 9 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
  • reference to “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the present application.
  • the occurrences of this phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is understood explicitly and implicitly by those skilled in the art that the embodiments described herein can be combined with other embodiments.
  • “at least one (item)” means one or more;
  • “multiple” means two or more;
  • “at least two (items)” means two or more;
  • "and/or” is used to describe the association relationship of associated objects, indicating that there can be three types of relationships, for example, "A and/or B” can mean: only A exists, only B exists, and A exists at the same time and B, where A and B can be singular or plural.
  • the character “/” generally indicates that the contextual objects are an "or” relationship.
  • “At least one of the following” or similar expressions refer to any combination of these items, including any combination of single or plural items.
  • "At least one item (piece) of a, b, or c" can mean: a, b, c, "a and b", "a and c", "b and c", or "a and b and c", where a, b, and c can each be single or multiple.
  • FIG. 1 is a schematic diagram of an effect of data labeling provided by an embodiment of the present application.
  • Edge surface: (A) in Figure 1 is a three-dimensional model of the target object (a target vehicle) obtained by a vehicle-mounted radar from the collected target data. The three-dimensional model of the target vehicle is composed of several edge surfaces, which can be obtained by labeling the collected target data; (B) in Figure 1 shows an edge surface.
  • Edge line: if (B) in Figure 1 is taken as the two-dimensional image model of the target object (the target vehicle), then the two-dimensional image model of the target vehicle is composed of several edge lines, which can be obtained by labeling the collected two-dimensional image data; (C) in Figure 1 shows an edge line.
  • Edge point: several edge points are obtained by labeling the collected two-dimensional image data, and the edge line shown in (C) of Figure 1 can be derived from these edge points; the black points marked on the edge line are edge points.
  • Image annotation refers to the process of adding textual feature information that reflects the content of an image to the image, by applying machine learning to the image's visual content.
  • The basic idea is to use a labeled image set or other available information to automatically learn the potential association or mapping relationship between the semantic concept space and the visual feature space, and to add text keywords to unknown images.
  • Through automatic image labeling, the problem of processing image information can be transformed into the relatively mature problem of processing text information.
  • Image data annotation is widely used in fields such as autonomous driving, face recognition, and medical image recognition.
  • There are two main methods of labeling image data in the field of autonomous driving: one is bounding-box labeling, and the other is fine segmentation labeling.
  • In bounding-box labeling, when labeling the image, the edges of the drawn box are made to fit closely to the edges of the target object, and the attributes of each box are indicated at the same time.
  • Each box is a small image, and each small image corresponds to an object category.
  • Image data labeling requires a large amount of labeled data.
  • Machine learning algorithms summarize the high-dimensional features of these objects by themselves; when identifying new images, they use the summarized high-dimensional features to recognize them and assign a probability to each possible outcome.
  • The segmentation labeling method uses interactive image segmentation: for a given initial area of the original image, a segmentation algorithm is used to segment the target object in the initial area, thereby generating a series of edge points, and the segmentation result is then refined by correcting those edge points.
  • Assuming that the original image (a) contains various types of objects such as people, cars, street trees, and buildings, segmentation labeling needs to identify the different types of objects in the image, detect their positions, and segment them.
  • The first step is semantic segmentation of the original image (a): each pixel in the original image (a) is classified and its category determined (such as belonging to the background, a person, or a car), so that each object in the original image (a) is marked with a category label; the objects contained in the image are assigned person, street-tree, car, and building labels and divided into regions to obtain image (b).
  • instance segmentation is performed on the image (b) obtained by semantic segmentation.
  • Instance segmentation is a combination of target detection and semantic segmentation: the target object is detected in the image, and then, combined with the per-pixel categories determined by the semantic segmentation above, the category label of the target object is determined, resulting in image (c). For example, taking cars as the target objects, semantic segmentation does not distinguish different instances belonging to the same car category (for example, all car objects are marked in red), while instance segmentation combines the results of semantic segmentation and target detection to distinguish different instances of the same type (for example, different colors are used to distinguish different cars).
  • panoramic segmentation is performed on the image (c) obtained by instance segmentation.
  • Panoramic segmentation is a combination of semantic segmentation and instance segmentation, that is, all objects in the image are detected, and at the same time, different instances in the same category must be distinguished.
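To illustrate how the three segmentation levels relate, the following sketch combines a per-pixel semantic map with an instance map into panoptic ids. The toy label maps, class codes, and the `class * 1000 + instance` encoding are illustrative assumptions, not data or conventions from the present application:

```python
# Hypothetical 4x4 label maps: semantic_map gives each pixel's class
# (0 = background, 1 = person, 2 = car); instance_map separates different
# objects of the same class (0 = no instance). Values are invented.
semantic_map = [[0, 1, 1, 0],
                [0, 1, 1, 2],
                [2, 2, 0, 2],
                [2, 2, 0, 0]]
instance_map = [[0, 1, 1, 0],
                [0, 1, 1, 2],
                [1, 1, 0, 2],
                [1, 1, 0, 0]]

# Panoptic segmentation assigns every pixel one id that encodes both class
# and instance: id = class * 1000 + instance (an assumed encoding).
panoptic = [[c * 1000 + i for c, i in zip(crow, irow)]
            for crow, irow in zip(semantic_map, instance_map)]

# The two cars receive distinct ids even though they share the "car" class.
car_ids = sorted({panoptic[y][x]
                  for y in range(4) for x in range(4)
                  if semantic_map[y][x] == 2})
print(car_ids)  # [2001, 2002]
```

Semantic segmentation alone would label all six car pixels identically; the panoptic ids keep the two car instances distinct while still recording the class of every pixel.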
  • However, the edge points of the generated target objects ultimately need to be corrected, and correcting the edge points one by one is time-consuming and labor-intensive, so the accuracy and efficiency of the labeling results are low.
  • the embodiment of the present application provides a data labeling system and a new data labeling method based on the data labeling system.
  • This method detects the edge of the target object in the target image through a proposed deep learning method based on preselection boxes to obtain the first contour of the target object, and then introduces weak semantic information to convert the correction of the edge points of the first contour into the correction of the edge lines of the first contour, obtaining the second contour of the target object by correction, which can greatly improve the accuracy and efficiency of the labeling results.
  • FIG. 2 is a schematic diagram of an architecture of a data labeling system provided by an embodiment of the present application.
  • the system architecture mainly includes three aspects: a data acquisition module, a storage module, and a data labeling module.
  • the data collection module is used to collect the data to be marked, specifically, the image data can be collected through the camera, or the target data can be collected through the radar.
  • the storage module is used to store the data to be marked collected by the data acquisition module.
  • the storage module can be a cloud or a local server, and realizes data communication with the data acquisition module through a mobile network.
  • The data labeling module can be an independent module; an intelligent driving platform, such as a mobile data center (Mobile Data Center, MDC); an intelligent cockpit platform, such as a cockpit domain controller (Cockpit Domain Controller, CDC); or a vehicle control platform, such as a vehicle domain controller (Vehicle Domain Controller, VDC). It is used to obtain the data to be labeled from the cloud or server, label it, obtain the labeling results, and store the labeling results in the cloud or server so that they can be used in subsequent data identification.
  • The data annotation system architecture in FIG. 2 is only an exemplary implementation in the embodiment of the present application, and the data annotation system architecture in the embodiment of the application includes but is not limited to the above data annotation system architecture.
  • a new data labeling method is also provided, and the data labeling process of the method is completed by the data labeling module in the above data labeling system.
  • This method can be applied to various scenarios such as automatic driving, portrait recognition, medical image recognition, etc. The following will take the data labeling in the application scenario of automatic driving as an example to illustrate.
  • FIG. 3 is a schematic diagram of an application scenario of data labeling provided by an embodiment of the present application.
  • the application scenario may include a cloud network and multiple vehicle driving control devices (test vehicle, user vehicle a, user vehicle b, and user vehicle c).
  • The above-mentioned multiple vehicle driving control devices can communicate with the cloud network, thereby realizing data interaction between the test vehicle and the user vehicles, as well as among different user vehicles (user vehicle a, user vehicle b, and user vehicle c).
  • The above-mentioned vehicle driving control device is a smart car that senses the road environment through a camera or an on-board sensor system, automatically plans a driving route, and controls the vehicle to reach a predetermined target; smart cars integrate computer, modern sensing, information fusion, communication, artificial intelligence, and automatic control technologies.
  • The vehicles in this application may be vehicles that mainly rely on an in-vehicle, computer-based intelligent driving system to achieve unmanned driving, i.e., intelligent vehicles with an assisted driving system or a fully automatic driving system, and may also be wheeled mobile robots.
  • the data labeling method provided in the embodiment of the present application is completed by a test vehicle.
  • The test vehicle collects image data of the target road section during driving through its camera, relies on the in-vehicle computer system to complete the labeling of the image data, obtains the corresponding data labels, and transmits the data labels to the cloud network.
  • A user vehicle collects image data of the road section it is driving through its camera, obtains the data labels produced by the test vehicle from the cloud network, and combines the data labels with an image recognition algorithm to automatically identify the objects in the collected images, so as to sense the road environment, automatically plan the driving route, control the vehicle to avoid obstacles, and reach the predetermined address.
  • In another implementation, the data labeling method provided in the embodiment of the present application is completed by the user vehicle. Because road conditions during actual driving are complex and changeable, it is difficult to recognize all the collected image data relying only on the data labels obtained by the test vehicle on the target road section. In this case, the user vehicle needs to rely on its in-vehicle computer system to complete the labeling of the image data, obtain the corresponding data labels, and combine the data labels with an image recognition algorithm to automatically identify the objects in the collected images, so as to sense the road environment, automatically plan the driving route, control the vehicle to avoid obstacles, and reach the predetermined address. In addition, the user vehicle also transmits the data labels to the cloud network so that they can be used in subsequent image recognition by other user vehicles.
  • Figure 4 is a schematic flow chart of a data labeling method provided by the embodiment of the present application, which includes but is not limited to the following steps:
  • Step 401 The electronic device obtains the first edge line and the second edge line of the target object included in the target image.
  • the electronic device in the embodiment of the present application is a device equipped with a processor capable of executing instructions executed by a computer.
  • the electronic device may be a terminal such as a computer or a controller, or may be a server or the like.
  • It can be the vehicle-mounted device equipped with a computer system in Figure 3 above, which is used to label the collected image data, obtain the corresponding data labels, and combine the data labels with an image recognition algorithm to automatically identify the objects in the collected images, so as to sense the road environment, automatically plan the driving route, control the vehicle to avoid obstacles, and reach the predetermined address.
  • the electronic device obtains the first edge line and the second edge line of the target object included in the target image by performing different processes on the target image, and the first edge line and the second edge line are different.
  • the electronic device performs different processing on the target image.
  • the target image may be segmented to separate the target object in the target image, so as to obtain the first edge point of the target object.
  • The segmentation processing here includes image segmentation methods based on non-deep learning (such as segmentation methods based on the watershed algorithm, graph theory, etc.) and image segmentation methods based on deep learning.
  • Image segmentation methods based on deep learning use a convolutional neural network: the target image is input to the convolutional neural network for segmentation.
  • an edge detection is performed on the target object, and the edge of the target object is separated to obtain a second edge point of the target object.
  • The edge detection here includes edge detection based on deep learning without preselection boxes and edge detection based on deep learning with preselection boxes; the latter does not detect whether each pixel is an edge point, but detects the position and category of the edge point corresponding to each preselection box.
  • the above-mentioned first edge point and the second edge point are both edge points of the target object, they are edge points obtained by adopting different methods for the target object. Therefore, the above-mentioned first edge point and the second edge point are not completely the same. According to the first edge point and the second edge point, the first edge line and the second edge line of the above-mentioned target object can be determined. Since the first edge point and the second edge point are not completely the same, correspondingly, the first edge line and the second edge line obtained accordingly are also not completely the same.
  • The above-mentioned determination of the first edge line and the second edge line of the target object according to the first edge point and the second edge point may specifically be as follows: the line formed by the edge points where the first edge point and the second edge point coincide is determined as the first edge line; that is, the first edge line lies on both the edge result obtained by the above segmentation processing and the edge result obtained by the above edge detection, and both processes can yield the first edge line.
  • The line formed by the union of the first edge point and the second edge point, after removing the coincident edge points, is determined as the third edge line; that is, the third edge line lies on the edge result obtained by the above segmentation processing, or on the edge result obtained by the above edge detection, but not on both.
  • the third edge line having two or more intersection points with the first edge line is determined as the second edge line.
  • In this way, the accuracy of the acquired edge lines of the target object can be improved, and it is beneficial to subsequently convert the correction of the edge points of the first contour of the target object into the correction of the edge lines of the first contour, which improves the accuracy and efficiency of the labeling results.
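The set relationships above (coincident points forming the first edge line, the remainder forming the third edge line, and the two-or-more-intersections test promoting a third edge line to a second edge line) can be sketched as follows. The point coordinates, the adjacency-based stand-in for the geometric intersection test, and all names are illustrative assumptions, not the patent's implementation:

```python
# Invented edge-point sets from the two different processes:
seg_points = {(1, 1), (2, 1), (3, 1), (4, 1), (3, 2)}   # from segmentation
det_points = {(1, 1), (2, 1), (3, 1), (4, 2), (2, 2)}   # from edge detection

# First edge line: edge points found by BOTH methods (their intersection).
first_edge = seg_points & det_points

# Third edge line: points found by only one method (symmetric difference of
# the union, i.e. the union with the coincident points removed).
third_edge = seg_points ^ det_points

def touches(p, line):
    # 4-adjacency stands in for "intersects the first edge line" here.
    return any(abs(p[0] - q[0]) + abs(p[1] - q[1]) == 1 for q in line)

# The third edge line is kept as a second edge line only if it meets the
# first edge line in two or more points.
contacts = sum(1 for p in third_edge if touches(p, first_edge))
is_second_edge = contacts >= 2
print(first_edge == {(1, 1), (2, 1), (3, 1)}, is_second_edge)  # True True
```

In this toy data, three of the non-coincident points sit adjacent to the first edge line, so the third edge line passes the two-intersection test and qualifies as a second edge line.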
  • the above implementation manner of obtaining the first edge line and the second edge line of the target object included in the target image can also be further described through a schematic diagram of the effect of data annotation.
  • FIG. 5a is a schematic diagram of an effect of data labeling provided by the embodiment of the present application.
  • the target object in the target image is separated to obtain the first edge point of the target object.
  • Edge detection is performed on the target object to separate the edge of the target object and obtain the second edge point of the target object.
  • the line formed by the coincident edge points of the first edge point and the second edge point is determined as the first edge line of the target object (ie, the edge line a in FIG. 5 a ). It can be seen that the first edge lines are located on the edge results obtained by the above segmentation processing and edge detection.
  • the third edge line is located on the edge result obtained by the above-mentioned segmentation processing, or on the edge result obtained by the above-mentioned edge detection, but not all of them are located on the edge results obtained by the segmentation processing and edge detection.
  • the third edge line having two or more intersection points with the first edge line is determined as the second edge line (ie, edge line b in FIG. 5 a ).
  • the implementation manner of the above-mentioned edge detection based on the deep learning of the pre-selection box can also be further described through the schematic diagram of the effect of the data labeling.
  • FIG. 5b is a schematic diagram of an effect of data labeling provided by the embodiment of the present application.
  • The hexagon is the actual outline of the target object, and the closed area within the hexagon is the internal area of the target object; the ellipse area containing the hexagon is the sensitive area of the target object's outline, and the target object needs to be labeled within this sensitive area to obtain its precise contour.
  • Through edge detection of the target object based on deep learning with preselection boxes, the edge points of the target object can be separated out.
  • FIG. 5c is a schematic diagram of an effect of data labeling provided by the embodiment of the present application.
  • In the process of labeling the target image, the target object needs to be divided into several grid units; the preselected content of each preselection box here is the horizontal-axis segments of the grid into which the target image is divided, such as the line segment marked by the oval area in Figure 5c.
  • That is, edge detection based on deep learning with preselection boxes does not detect whether a pixel on the target image is an edge point, but detects the position and category of the edge point on the horizontal-axis segment corresponding to each preselection box.
  • The category label of the target object is determined, and the edge points on the horizontal-axis segment corresponding to each preselection box are labeled on the edge line of the target object.
  • The label carrying more edge points is determined as the category label of the target object. For example, when the number of edge points on the first label is greater than the number of edge points on the second label, the first label is determined as the category label of the target object; similarly, when the number of edge points on the first label is less than the number of edge points on the second label, the second label is determined as the category label of the target object.
  • When the numbers are equal, the category label of the target object is further determined according to the criticality of the labels. For example, when the criticality of the first label is higher than that of the second label, the first label is determined as the category label of the target object; similarly, when the criticality of the second label is higher than that of the first label, the second label is determined as the category label of the target object.
  • The criticality of labels is not a fixed value and can vary with the application scenario. In automatic driving scenarios, by default, people are more important than cars, and cars are more important than street trees; therefore, the criticality of people is higher than that of cars, and the criticality of cars is higher than that of street trees.
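This label-selection rule (majority of edge points, with criticality breaking ties) can be sketched minimally as follows; the label names, criticality values, and the helper `category_label` are invented for illustration:

```python
from collections import Counter

# Assumed criticality ranking for a driving scene (higher = more critical);
# the numeric values themselves are illustrative.
CRITICALITY = {"person": 3, "car": 2, "street_tree": 1}

def category_label(edge_point_labels):
    """Pick the label carrying the most edge points; break ties with the
    label's criticality."""
    counts = Counter(edge_point_labels)
    # Sort key (count, criticality): more edge points wins, and an equal
    # count goes to the more critical label.
    return max(counts, key=lambda lbl: (counts[lbl], CRITICALITY[lbl]))

print(category_label(["car", "car", "person"]))            # car
print(category_label(["car", "person", "car", "person"]))  # person
```

In the second call the counts are equal (two edge points each), so the tie is resolved by criticality and "person" is chosen, matching the default ranking described above.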
  • Finally, the edge points corresponding to each preselection box on the edge line are adjusted according to the category label: an edge direction and an edge distance are assigned to the edge points corresponding to each preselection box, and each edge point is moved by its edge distance along its edge direction, thereby obtaining the edge points of the target object, that is, the above-mentioned second edge points.
  • In this way, each preselection box is assigned a category label, and the criticality of the labels is further compared to determine the category label of the target object, which can improve the accuracy and efficiency of determining the target object's category. Adjusting the edge points on the edge line of the target object on this basis makes the positions of the obtained second edge points more accurate, and the edge line of the target object obtained accordingly is also more accurate, which helps improve the accuracy and efficiency of the labeling results.
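The per-box adjustment (moving each edge point by an edge distance along an edge direction) might be sketched as below. The assumption that the detector outputs a (direction, distance) pair per preselection box, and all numeric values, are illustrative:

```python
import math

def adjust_edge_point(point, direction_deg, distance):
    """Move an edge point by `distance` along `direction_deg` (degrees);
    this is the assumed per-preselection-box correction."""
    x, y = point
    rad = math.radians(direction_deg)
    return (x + distance * math.cos(rad), y + distance * math.sin(rad))

# Raw edge points from three preselection boxes and their predicted
# (edge direction, edge distance) corrections -- invented values.
raw_points = [(10.0, 5.0), (11.0, 5.0), (12.0, 5.0)]
corrections = [(90.0, 0.5), (90.0, 0.2), (270.0, 0.3)]

# Apply each box's correction to obtain the second edge points.
second_edge_points = [adjust_edge_point(p, d, s)
                      for p, (d, s) in zip(raw_points, corrections)]
```

Here the first point moves 0.5 upward (direction 90°) to roughly (10.0, 5.5) and the third moves 0.3 downward (direction 270°) to roughly (12.0, 4.7), nudging the detected edge toward the true object boundary.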
  • Step 402 Obtain a first contour of the target object according to the first edge line.
  • the electronic device After obtaining the first edge line of the target object, the electronic device initially marks the target object based on the first edge line to obtain the first outline of the target object.
  • the first contour can be used as the final labeled contour result of the target object, or as the candidate contour result of the target object label, and on this basis, the accuracy of the labeling result can be improved by correcting the edge of the candidate contour.
  • The obtained edge line a is used to initially mark the target object to obtain the first contour of the target object. It can be seen that the actual contour of the target object differs slightly from the obtained first contour: the first contour neither completely covers the target object nor is tangent to the edge of the target object. It is necessary to correct the edge of the first contour by using edge line b on the basis of the initial mark to improve the accuracy of the labeling result.
  • Step 403 Using the second edge line to correct the edge line of the first contour to obtain a second contour of the target object.
  • The electronic device uses the second edge line to correct the edge line of the first contour obtained from the initial marking, to obtain the second contour of the target object.
  • the contour formed by the second edge line and the edge line of the first contour may be determined as the second contour of the target object. This correction method improves the accuracy and efficiency of the labeling result.
  • the edge line a to initially mark the target object to obtain the first contour, and then using the edge line b to correct the edge of the first contour to improve the accuracy of the labeling result
  • the second contour of the target object obtained after correction completely covers the target object and is tangent to the edge of the target object.
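The correction step, in which the second edge line and the remaining edge lines of the first contour jointly form the second contour, might be sketched as splicing one polyline into another. The vertices, the intersection indices, and the helper `splice_contour` are invented for illustration:

```python
def splice_contour(first_contour, second_edge, i, j):
    """Replace the stretch of `first_contour` between intersection
    indices i and j (i < j) with `second_edge`, yielding the corrected
    (second) contour. Vertex lists are ordered along the contour."""
    return first_contour[:i + 1] + second_edge + first_contour[j:]

contour = [(0, 0), (4, 0), (4, 4), (0, 4)]        # first contour (a square)
bulge = [(5, 1), (5, 3)]                          # second edge line
corrected = splice_contour(contour, bulge, 1, 2)  # replace the right edge
print(corrected)  # [(0, 0), (4, 0), (5, 1), (5, 3), (4, 4), (0, 4)]
```

A single splice replaces a whole run of edge points at once, which is the sense in which correcting edge lines is cheaper than correcting each edge point individually.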
  • In conventional approaches, a segmentation algorithm usually segments the target object to obtain its edge points, and the contour of the target object is then corrected by correcting the edge points; the edge points must be corrected manually one by one, which is time-consuming and labor-intensive, and the accuracy and efficiency of the labeling results are low.
  • In contrast, the data labeling method provided by the embodiment of the present application uses the first edge line to initially mark the target object, and then uses the second edge line to correct the edge line of the first contour obtained from the initial mark, so that the correction of the edge points of the first contour is converted into the correction of the edge lines of the first contour; the second contour of the target object obtained by this correction can greatly improve the accuracy and efficiency of the labeling results.
  • the data labeling method provided in FIG. 4 above may correspond to the data labeling system architecture shown in FIGS. 6a to 6c.
  • the architecture of the data labeling system in FIGS. 6 a to 6 c will be described below in combination with the data labeling method in FIG. 4 .
  • FIG. 6a is a schematic structural diagram of a data labeling system provided by an embodiment of the present application.
  • The image data of the target image is first obtained, and then image segmentation based on non-deep learning, image segmentation based on deep learning, edge detection based on deep learning with preselection boxes, and edge detection based on deep learning without preselection boxes are respectively performed on the image data to obtain different edge-information results about the target object, and strong semantic information and weak semantic information are generated from the results of these different processes.
  • The strong semantic information is edge line a in Figure 5a above or the first edge line in Figure 4 above; that is, it lies on all of the edge-information results obtained by the different processes. In other words, the overlapping part of the edge information obtained by the different processes is the strong semantic information.
  • The weak semantic information is edge line b in Figure 5a above or the second edge line in Figure 4 above; that is, it lies on the edge-information result of only one of the different processes. In other words, the non-overlapping part of the edge information obtained by the different processes is the weak semantic information.
  • the target object is initially marked based on the generated strong semantic information, and the result of the initial mark is corrected based on the generated weak semantic information.
  • In terms of the method in Figure 4, the initial marking corresponds to step 402 (obtaining the first contour of the target object according to the first edge line), and the correction corresponds to step 403 (using the second edge line to correct the edge line of the first contour to obtain the second contour of the target object).
  • In terms of Figure 5a, edge line a is used to initially mark the target object to obtain the first contour of the target object, and edge line b is then used to correct the edge line of the first contour to obtain the second contour of the target object.
  • FIG. 6b is a schematic diagram of the architecture of a data labeling system provided by an embodiment of the present application.
  • In FIG. 6b, image segmentation based on deep learning and image segmentation based on non-deep learning are respectively performed on the image data of the target image, and the edge-information results about the target object obtained by the two are fused to generate the edge points of the target object.
  • Edge detection based on deep learning with preselection boxes and edge detection based on deep learning without preselection boxes are then performed on the image data of the target image; the edge detection results obtained by the two are fused, and the fused edge detection result is used to correct the edge points of the target object generated above, thereby obtaining the strong semantic information, which represents the first edge line of the target object.
  • FIG. 6c is a schematic diagram of the architecture of a data labeling system provided by the embodiment of the present application.
  • In FIG. 6c, image segmentation based on non-deep learning, image segmentation based on deep learning, edge detection based on deep learning with preselection boxes, and edge detection based on deep learning without preselection boxes are respectively performed on the image data, which can generate the weak semantic information.
  • Figure 7 is a schematic flowchart of another data labeling method provided by the embodiment of the present application, which includes but is not limited to the following steps:
  • Step 701 The electronic device obtains the first frame and the second frame of the target object according to the target data of the target radar.
  • the electronic device in the embodiment of the present application is a device equipped with a processor capable of executing instructions executed by a computer.
  • the electronic device may be a terminal such as a computer or a controller, or may be a server or the like.
  • It can be the vehicle-mounted device equipped with a computer system in Figure 3 above, which is used to label the target data collected by the vehicle-mounted target radar, obtain the corresponding data labels, and combine the data labels with a recognition algorithm to automatically identify the objects in the collected target data, so as to sense the road environment, automatically plan the driving route, control the vehicle to avoid obstacles, and reach the predetermined address.
  • the electronic device obtains the first frame and the second frame of the target object by performing different processes on the target data collected by the target radar to detect the target object, and the first frame and the second frame are different.
  • the electronic equipment performs different processing on the target data collected by the target radar to detect the target object.
  • Specifically, the target object can be separated from the detection area of the target radar by performing segmentation processing on the target object, so as to obtain the first edge surface of the target object. Edge detection is then performed on the target object to separate the edge of the target object and obtain the second edge surface of the target object.
  • first edge surface and the second edge surface are both edge surfaces of the target object, they are edge surfaces obtained by adopting different methods for the target object. Therefore, the above-mentioned first edge surface and the second edge surface are not completely the same. According to the first edge surface and the second edge surface, the first frame and the second frame of the target object can be determined. Since the above-mentioned first edge surface and the second edge surface are not completely the same, correspondingly, the first frame and the second frame obtained accordingly are also not completely the same.
  • The above-mentioned determination of the first frame and the second frame of the target object according to the first edge surface and the second edge surface may specifically be as follows: the frame formed by the edge surfaces where the first edge surface and the second edge surface overlap is determined as the first frame; that is, the first frame lies on both the edge result obtained by the above segmentation processing and the edge result obtained by the above edge detection, and both processes can yield the first frame.
  • The frame formed by the union of the first edge surface and the second edge surface, after removing the overlapping edge surfaces, is determined as the third frame; that is, the third frame lies on the edge result obtained by the above segmentation processing, or on the edge result obtained by the above edge detection, but not on both.
  • the third frame that has two or more intersecting faces with the first frame is determined as the second frame.
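The frame-combination logic above (first frame from the coincident edge faces, third-frame candidates from the faces found by only one method, second frame from candidates intersecting the first frame in two or more places) can be sketched with plain set operations. Everything here is illustrative: the face identifiers and the `intersects` predicate are assumptions, since the text does not fix a data representation, and for simplicity each candidate face is tested individually rather than grouping faces into whole third frames.

```python
def combine_edge_faces(seg_faces, det_faces, intersects):
    """Illustrative sketch of the frame-combination rule described above.

    seg_faces / det_faces: sets of hashable face identifiers produced by
    the segmentation processing and the edge detection, respectively.
    intersects(a, b): caller-supplied predicate telling whether two faces
    intersect (how intersection is tested is not specified by the text).
    Returns (first_frame, second_frame).
    """
    # First frame: faces found by BOTH methods (the coincident edge faces).
    first_frame = seg_faces & det_faces
    # Third-frame candidates: faces found by exactly one method
    # (union minus the overlap, i.e. the symmetric difference).
    third_faces = seg_faces ^ det_faces
    # Second frame: candidate faces intersecting two or more faces
    # of the first frame.
    second_frame = {
        f for f in third_faces
        if sum(intersects(f, g) for g in first_frame) >= 2
    }
    return first_frame, second_frame
```

For example, with faces `'a'` and `'b'` found by both methods, a candidate `'c'` intersecting both of them is kept as part of the second frame, while a candidate intersecting only one is discarded.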
  • the above-mentioned edge detection is performed on the target object to obtain the second edge surface of the target object.
  • the label of the target object may be determined according to the position of the edge surface on the border of the target object. For example, when the number of edge faces on the first label is greater than the number of edge faces on the second label, the first label is determined as the label of the target object; similarly, when the number of edge faces on the first label is smaller than the number of edge faces on the second label, the second label is determined as the label of the target object.
  • when the number of edge faces on the first label is equal to the number of edge faces on the second label, the label of the target object is further determined according to the criticality of the first label and the criticality of the second label.
  • the criticality of a label is not a fixed value and can vary with the application scenario. In automatic driving scenarios, by default, people matter more than cars and cars matter more than street trees; therefore, the criticality of a person is higher than that of a car, and the criticality of a car is higher than that of a street tree.
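The decision rule in the preceding points can be captured in a few lines. This is only a sketch, not the patented implementation; in particular, the text does not say how a tie in both count and criticality is resolved, so this version arbitrarily prefers the first label in that case.

```python
def choose_label(count_first, count_second, crit_first, crit_second,
                 first_label, second_label):
    """Sketch of the label decision rule described above.

    count_*: number of edge faces (or edge points, in the 2D case) that
    fall on each candidate label's edge result.
    crit_*: application-defined criticality scores (e.g., in a driving
    scenario a person could rank above a car, and a car above a tree).
    """
    if count_first > count_second:
        return first_label
    if count_first < count_second:
        return second_label
    # Counts are equal: fall back on criticality, the higher one wins
    # (ties broken in favour of the first label -- an assumption).
    return first_label if crit_first >= crit_second else second_label
```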
  • after the category label of the target object is determined, the position of the edge surface on the border of the target object is adjusted according to the category label to obtain the second edge surface.
  • Step 702: Obtain a first contour of the target object according to the first frame.
  • after obtaining the first frame of the target object, the electronic device initially marks the target object based on the first frame to obtain the first contour of the target object.
  • the first contour can be used as the final labeled contour of the target object, or as a candidate contour of the target object's label; on this basis, the accuracy of the labeling result can be improved by correcting the frame of the candidate contour.
  • Step 703: Use the second frame to correct the frame of the first contour to obtain a second contour of the target object.
  • the electronic device uses the second frame to correct the frame of the initially marked first contour to obtain the second contour of the target object.
  • the contour formed by the second frame and the frame of the first contour may be determined as the second contour of the target object. This correction method improves the accuracy and efficiency of the labeling result.
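A minimal sketch of this combination rule follows, under the assumption that a frame or contour can be modelled simply as a set of hashable face identifiers; how duplicate or conflicting faces would be resolved is not specified by the text.

```python
def correct_contour(first_contour_faces, second_frame_faces):
    """Hypothetical sketch: the second (corrected) contour is taken to be
    the contour formed by the faces of the first contour together with
    the faces of the second frame, as described above."""
    return set(first_contour_faces) | set(second_frame_faces)
```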
  • different frames of the target object can be obtained by performing different processing on the target data collected by the target radar when detecting the target object. The target object is then initially marked based on the first frame to obtain the first contour of the target object, and the second frame is used to correct the frame of the initially marked first contour to obtain the second contour of the target object.
  • the data labeling method provided in the embodiment of the present application is not only suitable for labeling two-dimensional image data, but also suitable for labeling three-dimensional radar data. Converting the correction of the edge points of the first contour of the target object to the correction of the frame of the first contour of the target object, and correcting to obtain the second contour of the target object can greatly improve the accuracy and efficiency of the labeling result.
  • FIG. 8 is a schematic structural diagram of a data labeling device provided by an embodiment of the present application.
  • the data labeling device 80 may include a determination unit 801 and a correction unit 802, where each unit is described as follows:
  • a determining unit 801 configured to obtain a first edge line and a second edge line of a target object included in the target image; the first edge line and the second edge line are different;
  • the determining unit 801 is further configured to obtain a first contour of the target object according to the first edge line;
  • the correction unit 802 is configured to use the second edge line to correct the edge line of the first contour to obtain the second contour of the target object.
  • the correction unit 802 is specifically configured to determine a contour formed by the second edge line and edge lines of the first contour as the second contour.
  • the device also includes:
  • a segmentation unit 803 configured to perform segmentation processing on the target image to obtain a first edge point of the target object; the segmentation processing is used to separate the target object contained in the target image;
  • An edge detection unit 804 configured to perform edge detection on the target object to obtain a second edge point of the target object; the edge detection is used to separate the edge of the target object;
  • the determining unit 801 is specifically configured to determine the first edge line and the second edge line according to the first edge point and the second edge point.
  • the determining unit 801 is specifically configured to obtain the first edge line according to the first edge point and the second edge point; the points on the first edge line are edge points where the first edge point and the second edge point coincide;
  • the determining unit 801 is specifically further configured to obtain the second edge line according to the first edge point, the second edge point, and the first edge line.
  • the determining unit 801 is specifically further configured to obtain a third edge line according to the first edge point and the second edge point; the points on the third edge line are the first edge points or the second edge points, and are not edge points where the first edge point and the second edge point coincide;
  • the determining unit 801 is further configured to determine the third edge line having two or more intersection points with the first edge line as the second edge line.
  • the edge detection unit 804 is specifically configured to determine the label of the target object according to the position of the edge point on the edge line of the target object;
  • the edge detection unit 804 is further configured to adjust edge points on the edge line of the target object according to the label of the target object to obtain the second edge point.
  • the edge detection unit 804 is specifically further configured to determine the first label as the label of the target object when the number of edge points on the first label is greater than the number of edge points on the second label;
  • the edge detection unit 804 is specifically further configured to determine the second label as the label of the target object when the number of edge points on the first label is less than the number of edge points on the second label;
  • the edge detection unit 804 is specifically further configured to determine the label of the target object according to the criticality of the first label and the criticality of the second label when the number of edge points on the first label is equal to the number of edge points on the second label.
  • the segmentation unit 803 is specifically configured to input the target image into a convolutional neural network for the segmentation process to obtain the first edge point.
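As an illustration of the step above, suppose the convolutional neural network has already produced a binary foreground mask for the target image (the network itself is omitted here); the first edge points can then be read off the mask boundary. The 4-neighbour boundary test below is an assumption, one common way of defining mask edges.

```python
def mask_edge_points(mask):
    """Extract edge points from a binary segmentation mask.

    `mask` is assumed to be the output of the segmentation step (e.g. a
    CNN's per-pixel foreground mask), given here as a list of lists of
    0/1.  A foreground pixel counts as an edge point if any 4-neighbour
    is background or lies outside the image.
    """
    h, w = len(mask), len(mask[0])
    edges = set()
    for y in range(h):
        for x in range(w):
            if not mask[y][x]:
                continue
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if not (0 <= ny < h and 0 <= nx < w) or not mask[ny][nx]:
                    edges.add((x, y))   # (x, y) pixel coordinates
                    break
    return edges
```

On a fully foreground 3x3 mask, for instance, the eight border pixels are edge points while the centre pixel is not.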
  • the determining unit 801 is specifically further configured to determine the first label as the label of the target object in the case that the criticality of the first label is higher than that of the second label;
  • the determining unit 801 is specifically further configured to determine the second label as the label of the target object in the case that the criticality of the second label is higher than that of the first label.
  • the determining unit 801 is configured to obtain the first frame and the second frame of the target object according to the target data of the target radar; the first frame and the second frame are different, and the target data includes data obtained by the target radar detecting the target object;
  • the determining unit 801 is further configured to obtain a first contour of the target object according to the first frame;
  • the correction unit 802 is configured to use the second frame to correct the frame of the first contour to obtain the second contour of the target object.
  • the correction unit 802 is specifically configured to determine the contour formed by the second frame and the frame of the first contour as the second contour.
  • the segmentation unit 803 is configured to perform segmentation processing on the target object to obtain the first edge surface of the target object; the segmentation processing is used to separate the target object from the detection area of the target radar;
  • the edge detection unit 804 is configured to perform edge detection on the target object to obtain a second edge surface of the target object; the edge detection is used to separate the edge of the target object;
  • the determining unit 801 is specifically configured to determine the first frame and the second frame according to the first edge surface and the second edge surface.
  • the determining unit 801 is specifically configured to obtain the first frame according to the first edge face and the second edge face; the edge faces on the first frame are edge faces where the first edge surface and the second edge surface coincide;
  • the determining unit 801 is specifically further configured to obtain the second frame according to the first edge surface, the second edge surface, and the first frame.
  • the determining unit 801 is specifically further configured to obtain a third frame according to the first edge surface and the second edge surface; the faces on the third frame are the first edge faces or the second edge faces, and are not edge faces where the first edge face and the second edge face coincide;
  • the determining unit 801 is further configured to determine, as the second frame, the third frame that has two or more intersecting faces with the first frame.
  • the edge detection unit 804 is specifically configured to determine the label of the target object according to the position of the edge plane on the frame of the target object;
  • the edge detection unit 804 is further configured to adjust the edge plane on the frame of the target object according to the label of the target object to obtain the second edge plane.
  • the edge detection unit 804 is specifically further configured to determine the first label as the label of the target object when the number of edge faces on the first label is greater than the number of edge faces on the second label;
  • the edge detection unit 804 is specifically further configured to determine the second label as the label of the target object when the number of edge faces on the first label is less than the number of edge faces on the second label;
  • the edge detection unit 804 is specifically further configured to determine the label of the target object according to the criticality of the first label and the criticality of the second label when the number of edge faces on the first label is equal to the number of edge faces on the second label.
  • each unit in the device shown in FIG. 8 can be separately or jointly combined into one or several other units, or one (or some) of the units can be further divided into multiple functionally smaller units; this can achieve the same operations without affecting the realization of the technical effects of the embodiments of the present application.
  • the above units are divided based on logical functions. In practical applications, the function of one unit may be realized by multiple units, or the functions of multiple units may be realized by one unit. In other embodiments of the present application, the above device may also include other units. In practical applications, these functions may also be realized with the assistance of other units, and may be implemented cooperatively by multiple units.
  • edge detection is performed on the target object in the target image by the proposed deep learning method based on preselection boxes to obtain the first contour of the target object; weak semantic information is then introduced to transform the correction of the edge points of the first contour into the correction of the edge lines of the first contour, and the second contour of the target object is obtained through correction, which can greatly improve the accuracy and efficiency of the labeling result.
  • FIG. 9 is a schematic structural diagram of an electronic device 90 provided in an embodiment of the present application.
  • the electronic device 90 may include a memory 901 and a processor 902 .
  • a communication interface 903 and a bus 904 may also be included, wherein the memory 901 , the processor 902 and the communication interface 903 are connected to each other through the bus 904 .
  • the communication interface 903 is used for data interaction with the above-mentioned data tagging device 80 .
  • the memory 901 is used to provide a storage space, in which data such as operating systems and computer programs can be stored.
  • Memory 901 includes, but is not limited to, random access memory (random access memory, RAM), read-only memory (read-only memory, ROM), erasable programmable read-only memory (erasable programmable read only memory, EPROM), or Portable read-only memory (compact disc read-only memory, CD-ROM).
  • the processor 902 is a module for performing arithmetic and logic operations, and may be one or a combination of processing modules such as a central processing unit (central processing unit, CPU), a graphics processing unit (graphics processing unit, GPU), or a microprocessor (microprocessor unit, MPU).
  • a computer program is stored in the memory 901, and the processor 902 calls the computer program stored in the memory 901 to execute the data labeling method shown in FIG. 4 above:
  • the processor 902 invokes the computer program stored in the memory 901 to execute the data labeling method shown in FIG. 7 above:
  • according to the target data of the target radar, a first frame and a second frame of the target object are obtained; the first frame is different from the second frame, and the target data includes data obtained by the target radar detecting the target object;
  • the processor 902 calls the computer program stored in the memory 901, and can also be used to execute the method steps performed by the various units in the data labeling device 80 shown in FIG. 8, which will not be repeated here.
  • edge detection is performed on the target object in the target image by the proposed deep learning method based on preselection boxes to obtain the first contour of the target object; weak semantic information is then introduced to transform the correction of the edge points of the first contour into the correction of the edge lines of the first contour, and the second contour of the target object is obtained through correction, which can greatly improve the accuracy and efficiency of the labeling results.
  • the embodiment of the present application also provides a computer-readable storage medium.
  • the above-mentioned computer-readable storage medium stores a computer program.
  • when the above computer program runs on one or more processors, the methods shown in FIG. 4 and FIG. 7 above can be implemented.
  • An embodiment of the present application further provides a computer program product, where the computer program product includes a computer program, and when the computer program runs on a processor, the methods shown in FIG. 4 and FIG. 7 above can be implemented.
  • the embodiment of the present application also provides a chip, the chip includes a processor, and the processor is configured to execute instructions, and when the processor executes the instructions, the above methods shown in FIG. 4 and FIG. 7 can be implemented.
  • the chip also includes a communication interface, which is used to input data or output data.
  • the embodiment of the present application also provides a terminal, which includes at least one of the above-mentioned data labeling apparatus 80, or the electronic device 90, or the above-mentioned chip.
  • the embodiment of the present application also provides a server, which includes at least one of the above-mentioned data labeling apparatus 80, or the electronic device 90, or the above-mentioned chip.
  • the first contour of the target object is obtained; then, by introducing weak semantic information, the correction of the edge points of the first contour of the target object is transformed into the correction of the edge lines of the first contour, and the second contour of the target object is obtained through correction, which can greatly improve the accuracy and efficiency of the labeling result.
  • the processes can be completed by hardware related to computer programs, and the computer programs can be stored in computer-readable storage media.
  • when the computer programs are executed, the processes of the foregoing method embodiments may be included.
  • the aforementioned storage media include various media capable of storing computer program code, such as read-only memory (ROM), random access memory (RAM), magnetic disks, or optical discs.


Abstract

A data annotation method and a related product, which belong to the technical field of data processing. The method comprises: obtaining a first edge line and a second edge line of a target object, which are contained in a target image, wherein the first edge line is different from the second edge line; obtaining a first contour of the target object according to the first edge line; and correcting an edge line of the first contour by using the second edge line, so as to obtain a second contour of the target object. By means of the method, edge detection is performed on a target object in a target image by means of proposing a deep learning method based on a pre-selected box, so as to obtain a first contour of the target object, and then, weak semantic information is introduced to convert the correction of an edge point of the first contour of the target object into the correction of an edge line of the first contour of the target object, so as to obtain a second contour of the target object by means of the correction, such that the accuracy and efficiency of an annotation result can be greatly improved.

Description

A data labeling method and related products
This application claims priority to the Chinese patent application with application number 202110564875.4, entitled "A data labeling method and related products", filed with the China Patent Office on May 24, 2021, the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the technical field of data processing, and in particular to a data labeling method and related products.
Background
A label is a data form used to describe the characteristics of a business entity; through labels, a business entity can be characterized and its features reflected from multiple perspectives. Labels can be obtained through data labeling, which is the act of annotating data by data labelers with the help of labeling tools. Types of data labeling include image labeling, voice labeling, text labeling, and so on. Among them, image labeling can be applied in application scenarios such as face recognition and autonomous-vehicle recognition. Data labelers need to mark the contours of different target markers in different colors and then label the corresponding contours, using the labels to summarize the target markers within the contours, so that a model can automatically identify the different markers in an image. As data-driven image processing techniques become widely used in various fields, more and more application scenarios require labels to automatically identify the different markers contained in images. Since labels need to be obtained through data labeling, how to optimize the accuracy and efficiency of data labeling is crucial.
At present, interactive image segmentation and labeling is usually used for data labeling. Specifically, an initial area is manually given, a segmentation algorithm is used to segment the target object within the initial area to obtain the edge points of the target object, and the segmentation result of the target object is then corrected by correcting the edge points.
However, the above labeling method requires manually correcting the edge points one by one, which is time-consuming and labor-intensive, and the accuracy and efficiency of the labeling results are low.
Summary
The embodiments of the present application provide a data labeling method and related products. By proposing a deep learning method based on preselection boxes, edge detection is performed on the target object in the target image to obtain the first contour of the target object; weak semantic information is then introduced to transform the correction of the edge points of the first contour of the target object into the correction of the edge lines of the first contour, and the second contour of the target object is obtained through correction, which can greatly improve the accuracy and efficiency of the labeling results.
In a first aspect, an embodiment of the present application provides a data labeling method, which includes:
obtaining a first edge line and a second edge line of a target object contained in a target image, where the first edge line and the second edge line are different;
obtaining a first contour of the target object according to the first edge line; and
correcting the edge line of the first contour by using the second edge line to obtain a second contour of the target object.
In the embodiment of the present application, by performing different processing on the target image, different edge lines of the target object contained in the target image can be obtained, such as the first edge line and the second edge line. The target object is then initially marked based on the first edge line to obtain the first contour of the target object, and the second edge line is used to correct the edge line of the initially marked first contour to obtain the second contour of the target object.
In current data labeling methods, a segmentation algorithm usually segments the target object to obtain its edge points, and the contour of the target object is then corrected by correcting the edge points. The edge points therefore need to be corrected manually one by one, which is time-consuming and labor-intensive, and the accuracy and efficiency of the labeling results are low.
Compared with currently common data labeling methods, the data labeling method provided by the embodiment of the present application uses the first edge line to initially mark the target object and then uses the second edge line to correct the edge line of the initially marked first contour, thereby transforming the correction of the edge points of the first contour of the target object into the correction of the edge lines of the first contour. The second contour of the target object is obtained through this correction, which can greatly improve the accuracy and efficiency of the labeling results.
In a possible implementation manner, the correcting the edge line of the first contour by using the second edge line includes:
determining a contour formed by the second edge line and the edge line of the first contour as the second contour.
In the embodiment of the present application, a possible implementation of correcting the edge line of the first contour by using the second edge line is provided. The contour formed by the second edge line and the edge line of the first contour is determined as the second contour of the target object; this correction method improves the accuracy and efficiency of the labeling results.
In a possible implementation manner, before the obtaining the first edge line and the second edge line of the target object contained in the target image, the method further includes:
performing segmentation processing on the target image to obtain a first edge point of the target object, where the segmentation processing is used to separate the target object contained in the target image; and
performing edge detection on the target object to obtain a second edge point of the target object, where the edge detection is used to separate the edge of the target object;
and the obtaining the first edge line and the second edge line of the target object contained in the target image includes:
determining the first edge line and the second edge line according to the first edge point and the second edge point.
In the embodiment of the present application, a possible implementation of obtaining the first edge line and the second edge line of the target object contained in the target image is provided. That is, before the first edge line and the second edge line of the target object are determined, the target image is segmented to separate out the target object contained in it, so as to obtain the first edge point of the target object. The segmentation processing here includes image segmentation methods not based on deep learning (such as segmentation methods based on the watershed algorithm or graph theory) and image segmentation methods based on deep learning; the main difference between the two is that the latter uses a convolutional neural network. After the target object is separated out of the target image, edge detection is performed on the target object to separate out its edge and obtain the second edge point of the target object. The edge detection here includes deep-learning edge detection without preselection boxes and deep-learning edge detection based on preselection boxes; the difference between the two lies in whether preselection boxes are used. Edge detection based on preselection-box deep learning does not detect whether a pixel on the target image is an edge point, but detects the position and category of the edge point corresponding to each preselection box. Although the first edge point and the second edge point are both edge points of the target object, they are obtained by applying different methods to the target object; therefore, the first edge point and the second edge point are not completely the same. According to the first edge point and the second edge point, the first edge line and the second edge line of the target object can be determined. Since the first edge point and the second edge point are not completely the same, the first edge line and the second edge line obtained from them are correspondingly not completely the same either.
Through the implementation of obtaining the first edge line and the second edge line provided by the embodiment of the present application, the precision of the obtained edge lines of the target object can be improved, and it is conducive to subsequently transforming the correction of the edge points of the first contour of the target object into the correction of the edge lines of the first contour, improving the accuracy and efficiency of the labeling results.
In a possible implementation manner, the determining the first edge line and the second edge line according to the first edge point and the second edge point includes:
obtaining the first edge line according to the first edge point and the second edge point, where the points on the first edge line are edge points at which the first edge point and the second edge point coincide; and
obtaining the second edge line according to the first edge point, the second edge point, and the first edge line.
In the embodiment of the present application, a possible specific implementation of determining the first edge line and the second edge line according to the first edge point and the second edge point is provided. The line formed by the edge points at which the first edge point and the second edge point coincide is determined as the first edge line; that is, the first edge line lies on the edge results obtained by both the above segmentation processing and the above edge detection, and both can yield this first edge line. The second edge line is then determined according to the first edge point, the second edge point, and the first edge line. The first edge line and the second edge line obtained in the embodiment of the present application have higher precision, and the labeling results based on them accordingly have higher accuracy and efficiency.
In a possible implementation manner, the obtaining the second edge line according to the first edge point, the second edge point, and the first edge line includes:
obtaining a third edge line according to the first edge point and the second edge point, where the points on the third edge line are first edge points or second edge points, but are not edge points at which the first edge point and the second edge point coincide; and
determining, as the second edge line, a third edge line that has two or more intersection points with the first edge line.
In the embodiments of the present application, a possible specific implementation of determining the second edge line according to the first edge points, the second edge points, and the first edge line is provided. The line formed by taking the union of the first edge points and the second edge points and removing the edge points at which they coincide is determined as a third edge line; that is, a third edge line lies on the edge result obtained by the segmentation processing or on the edge result obtained by the edge detection, but not on both. A third edge line that has two or more intersection points with the first edge line is then determined as the second edge line. The second edge line obtained in the embodiments of the present application has higher accuracy; using it, the correction of the edge points of the first contour of the target object can be converted into a correction of the edge lines of the first contour, which helps improve the accuracy and efficiency of the correction.
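The selection criterion can be sketched as follows. The data model (each candidate third edge line as a list of points) and the 8-neighbourhood test used to count contact points with the first edge line are assumptions for illustration; the embodiments do not specify how intersection points are detected:

```python
# Minimal sketch (assumed data model): each candidate "third edge line" is a
# list of (x, y) points from the symmetric difference of the two edge-point
# sets; it is promoted to a "second edge line" when it meets the first edge
# line at two or more points.

def touches(p, q):
    """8-neighbourhood adjacency; exact coincidence also counts."""
    return abs(p[0] - q[0]) <= 1 and abs(p[1] - q[1]) <= 1

def second_edge_lines(third_lines, first_line):
    first = set(first_line)
    selected = []
    for line in third_lines:
        contacts = sum(1 for p in line if any(touches(p, q) for q in first))
        if contacts >= 2:
            selected.append(line)
    return selected

first_line = [(0, 0), (1, 0), (2, 0), (3, 0)]
candidates = [
    [(0, 1), (1, 2), (2, 2), (3, 1)],  # leaves and re-joins the first line
    [(0, 1), (1, 3), (2, 4)],          # only one contact point: discarded
]
print(second_edge_lines(candidates, first_line))
```

The two-contact requirement keeps only alternative edge segments that branch off the trusted first edge line and return to it, which is what makes them usable as replacement segments during correction.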
In a possible implementation manner, the performing edge detection on the target object to obtain the second edge points of the target object includes:
determining the label of the target object according to the positions of the edge points on the edge line of the target object; and
adjusting the edge points on the edge line of the target object according to the label of the target object, to obtain the second edge points.
In the embodiments of the present application, a possible implementation of edge detection is provided. Edge detection is performed on the target object to separate the edge of the target object, obtaining the second edge points of the target object. Edge detection based on preselection-box deep learning does not check whether each pixel of the target image is an edge point; instead, it detects the position and category of the edge point corresponding to each preselection box. That is, the category label of the target object is determined according to the positions of the edge points corresponding to the preselection boxes on the edge line, and those edge points are then adjusted according to the category label: each edge point corresponding to a preselection box is assigned an edge direction and an edge distance, and is moved by its edge distance along its edge direction to obtain the second edge points. Through edge detection based on preselection-box deep learning, the embodiments of the present application obtain second edge points with higher positional accuracy, so the resulting edge line of the target object is also more accurate, which helps improve the accuracy and efficiency of the labeling result.
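The "move each point by its edge distance along its edge direction" step can be sketched as below. The encoding of the direction as an angle in radians and the tuple layout are assumptions for illustration; the embodiments do not fix how the network parameterizes the regression targets:

```python
import math

# Minimal sketch (assumed encoding): each preselection box yields an edge
# point plus a regressed (direction, distance) refinement; the refined
# "second edge point" is the original point moved by `distance` along
# `direction`.

def refine(points):
    """points: list of ((x, y), direction_radians, distance)."""
    refined = []
    for (x, y), direction, distance in points:
        refined.append((x + distance * math.cos(direction),
                        y + distance * math.sin(direction)))
    return refined

raw = [((10.0, 5.0), 0.0, 2.0),          # move 2 px along +x
       ((4.0, 4.0), math.pi / 2, 1.0)]   # move 1 px along +y
print(refine(raw))
```

Regressing a per-box offset rather than classifying every pixel is what lets this scheme place edge points at sub-pixel positions.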
In a possible implementation manner, the determining the label of the target object according to the positions of the edge points on the edge line of the target object includes:
when the number of edge points located on a first label is greater than the number of edge points located on a second label, determining the first label as the label of the target object;
or, when the number of edge points located on the first label is smaller than the number of edge points located on the second label, determining the second label as the label of the target object;
or, when the number of edge points located on the first label is equal to the number of edge points located on the second label, determining the label of the target object according to the criticality of the first label and the criticality of the second label.
In the embodiments of the present application, a possible implementation of determining the category label of the target object is provided. The label on which the most edge points corresponding to the preselection boxes on the edge line of the target object are located is determined as the category label of the target object; that is, the preselection boxes take the category in which the points on the edge line are located. For example, when the number of edge points located on the first label is greater than the number of edge points located on the second label, the first label is determined as the category label of the target object; likewise, when the number of edge points located on the first label is smaller than the number located on the second label, the second label is determined as the category label. When the two numbers are equal, the category label of the target object is further determined according to the criticality of the labels. By assigning categories to the preselection boxes to determine the category label of the target object, the embodiments of the present application improve the accuracy and efficiency of category determination; adjusting the edge points on the edge line of the target object accordingly yields second edge points with higher positional accuracy.
In a possible implementation manner, the performing segmentation processing on the target image to obtain the first edge points of the target object includes:
inputting the target image into a convolutional neural network for the segmentation processing, to obtain the first edge points.
In the embodiments of the present application, a deep-learning-based image segmentation method is used: the target image is input into a convolutional neural network for segmentation processing. Compared with the interactive image segmentation methods commonly used at present, the edge points of the target object obtained by this method are more accurate, and the segmentation is more efficient.
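Once the network has produced a segmentation result, first edge points can be read off its boundary. The sketch below assumes the CNN output has already been thresholded into a binary mask and uses a simple 4-neighbour boundary test; this post-processing is an illustrative assumption, not a step prescribed by the embodiments:

```python
# Minimal sketch (assumption: `mask` is the CNN's thresholded binary output,
# as nested lists).  The "first edge points" are taken as foreground pixels
# with at least one background 4-neighbour (image borders count as background).

def mask_edge_points(mask):
    h, w = len(mask), len(mask[0])
    edges = []
    for y in range(h):
        for x in range(w):
            if not mask[y][x]:
                continue
            neighbours = [(y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)]
            if any(not (0 <= ny < h and 0 <= nx < w and mask[ny][nx])
                   for ny, nx in neighbours):
                edges.append((x, y))
    return edges

mask = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 1, 1, 0],
        [0, 0, 0, 0]]
print(mask_edge_points(mask))  # [(1, 1), (2, 1), (1, 2), (2, 2)]
```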
In a possible implementation manner, the determining the label of the target object according to the criticality of the first label and the criticality of the second label includes:
when the criticality of the first label is higher than the criticality of the second label, determining the first label as the label of the target object; and
when the criticality of the second label is higher than the criticality of the first label, determining the second label as the label of the target object.
In the embodiments of the present application, a possible specific implementation of determining the category label of the target object is provided. That is, when the number of edge points located on the first label is equal to the number of edge points located on the second label, the category label of the target object is further determined according to the criticality of the labels. For example, when the criticality of the first label is higher than that of the second label, the first label is determined as the category label of the target object; likewise, when the criticality of the second label is higher than that of the first label, the second label is determined as the category label of the target object. By further comparing label criticality to determine the category label of the target object, the embodiments of the present application improve the accuracy and efficiency of category determination; adjusting the edge points on the edge line of the target object accordingly yields second edge points with higher positional accuracy.
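The full rule (majority vote over the edge-point labels, criticality as the tie-breaker) can be sketched in a few lines. The label names and criticality scores are hypothetical; the embodiments do not define a concrete criticality scale:

```python
from collections import Counter

# Minimal sketch (hypothetical labels and criticality ranking): the object's
# category label is the label carrying the most edge points; on a tie, the
# label with the higher criticality wins.

def object_label(point_labels, criticality):
    counts = Counter(point_labels)
    return max(counts, key=lambda lab: (counts[lab], criticality[lab]))

criticality = {"pedestrian": 2, "vehicle": 1}  # assumed ranking
print(object_label(["vehicle", "vehicle", "pedestrian"], criticality))  # vehicle
print(object_label(["vehicle", "pedestrian"], criticality))             # pedestrian
```

Comparing the `(count, criticality)` tuple implements both branches of the rule in a single `max`: the count dominates, and criticality only matters when counts are equal.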
In a possible implementation manner, the method further includes:
obtaining a first bounding box and a second bounding box of the target object according to target data of a target radar, where the first bounding box is different from the second bounding box, and the target data includes data obtained by the target radar detecting the target object;
obtaining a first contour of the target object according to the first bounding box; and
correcting the bounding box of the first contour by using the second bounding box, to obtain a second contour of the target object.
In the embodiments of the present application, by processing in different ways the target data collected when the target radar detects the target object, different bounding boxes of the target object, such as the first bounding box and the second bounding box, can be obtained. The target object is first labeled preliminarily based on the first bounding box to obtain the first contour of the target object; the bounding box of this preliminary first contour is then corrected by using the second bounding box, to obtain the second contour of the target object.
The data labeling method provided in the embodiments of the present application is applicable not only to labeling two-dimensional image data but also to labeling three-dimensional radar data. Converting the correction of the edge points of the first contour of the target object into a correction of the bounding box of the first contour, and obtaining the second contour of the target object through this correction, can greatly improve the accuracy and efficiency of the labeling result.
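The three-dimensional correction mirrors the two-dimensional one, with faces taking the role of edge lines. The sketch below models a contour as a set of hashable face identifiers and forms the second contour from the second bounding box together with the first contour's bounding box; the face-set model and the face names are purely illustrative assumptions:

```python
# Minimal sketch (assumed face-set model, mirroring the 2D case): the second
# contour is formed from the faces of the second bounding box together with
# the faces of the first contour's bounding box.

def correct_contour(first_contour_faces, second_box_faces):
    # Faces are hashable identifiers here (e.g. plane parameters); the union
    # keeps every face of both inputs exactly once.
    return set(first_contour_faces) | set(second_box_faces)

first = {"front", "back", "left", "right", "top"}  # preliminary contour
second = {"bottom", "left"}  # faces recovered by the second bounding box
print(sorted(correct_contour(first, second)))
```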
In a possible implementation manner, the correcting the bounding box of the first contour by using the second bounding box includes:
determining, as the second contour, a contour formed by the second bounding box and the bounding box of the first contour.
In a possible implementation manner, before the obtaining a first bounding box and a second bounding box of the target object according to target data of a target radar, the method further includes:
performing segmentation processing on the target object to obtain a first edge surface of the target object, where the segmentation processing is used to separate the target object from a detection area of the target radar; and
performing edge detection on the target object to obtain a second edge surface of the target object, where the edge detection is used to separate the edge of the target object.
The obtaining a first bounding box and a second bounding box of the target object according to target data of a target radar includes:
determining the first bounding box and the second bounding box according to the first edge surface and the second edge surface.
In a possible implementation manner, the determining the first bounding box and the second bounding box according to the first edge surface and the second edge surface includes:
obtaining the first bounding box according to the first edge surface and the second edge surface, where the edge surfaces on the first bounding box are edge surfaces at which the first edge surface and the second edge surface coincide; and
obtaining the second bounding box according to the first edge surface, the second edge surface, and the first bounding box.
In a possible implementation manner, the obtaining the second bounding box according to the first edge surface, the second edge surface, and the first bounding box includes:
obtaining a third bounding box according to the first edge surface and the second edge surface, where the surfaces on the third bounding box are first edge surfaces or second edge surfaces, but are not edge surfaces at which the first edge surface and the second edge surface coincide; and
determining, as the second bounding box, a third bounding box that has two or more surfaces intersecting the first bounding box.
In a possible implementation manner, the performing edge detection on the target object to obtain the second edge surface of the target object includes:
determining the label of the target object according to the positions of the edge surfaces on the bounding box of the target object; and
adjusting the edge surfaces on the bounding box of the target object according to the label of the target object, to obtain the second edge surface.
In a possible implementation manner, the determining the label of the target object according to the positions of the edge surfaces on the bounding box of the target object includes:
when the number of edge surfaces located on a first label is greater than the number of edge surfaces located on a second label, determining the first label as the label of the target object;
or, when the number of edge surfaces located on the first label is smaller than the number of edge surfaces located on the second label, determining the second label as the label of the target object;
or, when the number of edge surfaces located on the first label is equal to the number of edge surfaces located on the second label, determining the label of the target object according to the criticality of the first label and the criticality of the second label.
According to a second aspect, an embodiment of the present application provides a data labeling apparatus, including:
a determining unit, configured to obtain a first edge line and a second edge line of a target object included in a target image, where the first edge line is different from the second edge line;
the determining unit being further configured to obtain a first contour of the target object according to the first edge line; and
a correction unit, configured to correct the edge line of the first contour by using the second edge line, to obtain a second contour of the target object.
In a possible implementation manner, the correction unit is specifically configured to determine, as the second contour, a contour formed by the second edge line and the edge line of the first contour.
In a possible implementation manner, the apparatus further includes:
a segmentation unit, configured to perform segmentation processing on the target image to obtain first edge points of the target object, where the segmentation processing is used to separate the target object included in the target image; and
an edge detection unit, configured to perform edge detection on the target object to obtain second edge points of the target object, where the edge detection is used to separate the edge of the target object;
the determining unit being specifically configured to determine the first edge line and the second edge line according to the first edge points and the second edge points.
In a possible implementation manner, the determining unit is further specifically configured to obtain the first edge line according to the first edge points and the second edge points, where the points on the first edge line are edge points at which the first edge points and the second edge points coincide; and
the determining unit is further specifically configured to obtain the second edge line according to the first edge points, the second edge points, and the first edge line.
In a possible implementation manner, the determining unit is further specifically configured to obtain a third edge line according to the first edge points and the second edge points, where the points on the third edge line are first edge points or second edge points, but are not edge points at which the first edge points and the second edge points coincide; and
the determining unit is further specifically configured to determine, as the second edge line, a third edge line that has two or more intersection points with the first edge line.
In a possible implementation manner, the edge detection unit is specifically configured to determine the label of the target object according to the positions of the edge points on the edge line of the target object; and
the edge detection unit is further specifically configured to adjust the edge points on the edge line of the target object according to the label of the target object, to obtain the second edge points.
In a possible implementation manner, the edge detection unit is further specifically configured to: when the number of edge points located on a first label is greater than the number of edge points located on a second label, determine the first label as the label of the target object;
or, the edge detection unit is further specifically configured to: when the number of edge points located on the first label is smaller than the number of edge points located on the second label, determine the second label as the label of the target object;
or, the edge detection unit is further specifically configured to: when the number of edge points located on the first label is equal to the number of edge points located on the second label, determine the label of the target object according to the criticality of the first label and the criticality of the second label.
In a possible implementation manner, the segmentation unit is specifically configured to input the target image into a convolutional neural network for the segmentation processing, to obtain the first edge points.
In a possible implementation manner, the determining unit is further specifically configured to: when the criticality of the first label is higher than the criticality of the second label, determine the first label as the label of the target object; and
the determining unit is further specifically configured to: when the criticality of the second label is higher than the criticality of the first label, determine the second label as the label of the target object.
In a possible implementation manner, the determining unit is configured to obtain a first bounding box and a second bounding box of the target object according to target data of a target radar, where the first bounding box is different from the second bounding box, and the target data includes data obtained by the target radar detecting the target object;
the determining unit is further configured to obtain the first contour of the target object according to the first bounding box; and
the correction unit is configured to correct the bounding box of the first contour by using the second bounding box, to obtain the second contour of the target object.
In a possible implementation manner, the correction unit is specifically configured to determine, as the second contour, a contour formed by the second bounding box and the bounding box of the first contour.
In a possible implementation manner, the segmentation unit is configured to perform segmentation processing on the target object to obtain a first edge surface of the target object, where the segmentation processing is used to separate the target object from the detection area of the target radar;
the edge detection unit is configured to perform edge detection on the target object to obtain a second edge surface of the target object, where the edge detection is used to separate the edge of the target object; and
the determining unit is specifically configured to determine the first bounding box and the second bounding box according to the first edge surface and the second edge surface.
In a possible implementation manner, the determining unit is further specifically configured to obtain the first bounding box according to the first edge surface and the second edge surface, where the edge surfaces on the first bounding box are edge surfaces at which the first edge surface and the second edge surface coincide; and
the determining unit is further specifically configured to obtain the second bounding box according to the first edge surface, the second edge surface, and the first bounding box.
In a possible implementation manner, the determining unit is further specifically configured to obtain a third bounding box according to the first edge surface and the second edge surface, where the surfaces on the third bounding box are first edge surfaces or second edge surfaces, but are not edge surfaces at which the first edge surface and the second edge surface coincide; and
the determining unit is further specifically configured to determine, as the second bounding box, a third bounding box that has two or more surfaces intersecting the first bounding box.
In a possible implementation manner, the edge detection unit is specifically configured to determine the label of the target object according to the positions of the edge surfaces on the bounding box of the target object; and
the edge detection unit is further specifically configured to adjust the edge surfaces on the bounding box of the target object according to the label of the target object, to obtain the second edge surface.
In a possible implementation manner, the edge detection unit is further specifically configured to: when the number of edge surfaces located on a first label is greater than the number of edge surfaces located on a second label, determine the first label as the label of the target object;
or, the edge detection unit is further specifically configured to: when the number of edge surfaces located on the first label is smaller than the number of edge surfaces located on the second label, determine the second label as the label of the target object;
or, the edge detection unit is further specifically configured to: when the number of edge surfaces located on the first label is equal to the number of edge surfaces located on the second label, determine the label of the target object according to the criticality of the first label and the criticality of the second label.
For the technical effects brought about by the second aspect and any of its possible implementation manners, reference may be made to the description of the technical effects of the first aspect and its corresponding implementation manners.
According to a third aspect, an embodiment of the present application provides a data labeling apparatus, including a processor and a memory, where the memory is configured to store a computer program or instructions, and the processor is configured to execute the computer program or instructions stored in the memory, so that the data labeling apparatus performs the method according to the first aspect and any of its possible implementation manners.
Optionally, the data labeling apparatus further includes a transceiver, configured to receive data or send data.
According to a fourth aspect, an embodiment of the present application provides a computer-readable storage medium configured to store a computer program; when the computer program is executed, the method according to the first aspect and any of its possible implementation manners is implemented.
According to a fifth aspect, an embodiment of the present application provides a computer program product including a computer program; when the computer program is executed, the method according to the first aspect and any of its possible implementation manners is implemented.
According to a sixth aspect, an embodiment of the present application provides a chip including a processor configured to execute instructions; when the processor executes the instructions, the chip performs the method according to the first aspect and any of its possible implementation manners.
Optionally, the chip further includes a communication interface, configured to input data or output data.
According to a seventh aspect, an embodiment of the present application provides a terminal including at least one data labeling apparatus according to the second aspect or the third aspect, or at least one chip according to the sixth aspect.
According to an eighth aspect, an embodiment of the present application provides a server including at least one data labeling apparatus according to the second aspect or the third aspect, or at least one chip according to the sixth aspect.
Optionally, in the process of performing the method according to the first aspect and any of its possible implementation manners, the processor may be a processor dedicated to performing these methods, or a processor, such as a general-purpose processor, that performs these methods by executing computer instructions in a memory. The memory may be a non-transitory memory, such as a read-only memory (Read Only Memory, ROM); it may be integrated with the processor on the same chip or arranged on a different chip. The embodiments of the present application do not limit the type of the memory or the manner in which the memory and the processor are arranged.
In a possible implementation manner, the at least one memory is located outside the apparatus.
In another possible implementation manner, the at least one memory is located inside the apparatus.
In still another possible implementation manner, part of the at least one memory is located inside the apparatus, and another part is located outside the apparatus.
In the present application, the processor and the memory may also be integrated into one device; that is, the processor and the memory may be integrated together.
In the embodiments of the present application, edge detection is performed on the target object in the target image by means of the proposed preselection-box-based deep learning method to obtain the first contour of the target object; weak semantic information is then introduced to convert the correction of the edge points of the first contour into a correction of the edge lines of the first contour, and the second contour of the target object is obtained through this correction, which can greatly improve the accuracy and efficiency of the labeling result.
附图说明Description of drawings
为了更清楚地说明本申请实施例的技术方案,下面将对本申请实施例中所需要使用的附图作简单地介绍,显而易见地,下面所描述的附图仅仅是本申请的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。In order to more clearly illustrate the technical solutions of the embodiments of the present application, the following will briefly introduce the accompanying drawings that need to be used in the embodiments of the present application. Obviously, the accompanying drawings described below are only some embodiments of the present application. Those of ordinary skill in the art can also obtain other drawings based on these drawings without making creative efforts.
FIG. 1 is a schematic diagram of a data annotation effect provided by an embodiment of this application;
FIG. 2 is a schematic architectural diagram of a data annotation system provided by an embodiment of this application;
FIG. 3 is a schematic diagram of an application scenario of data annotation provided by an embodiment of this application;
FIG. 4 is a schematic flowchart of a data annotation method provided by an embodiment of this application;
FIG. 5a is a schematic diagram of a data annotation effect provided by an embodiment of this application;
FIG. 5b is a schematic diagram of a data annotation effect provided by an embodiment of this application;
FIG. 5c is a schematic diagram of a data annotation effect provided by an embodiment of this application;
FIG. 6a is a schematic architectural diagram of a data annotation system provided by an embodiment of this application;
FIG. 6b is a schematic architectural diagram of a data annotation system provided by an embodiment of this application;
FIG. 6c is a schematic architectural diagram of a data annotation system provided by an embodiment of this application;
FIG. 7 is a schematic flowchart of another data annotation method provided by an embodiment of this application;
FIG. 8 is a schematic structural diagram of a data annotation apparatus provided by an embodiment of this application;
FIG. 9 is a schematic structural diagram of an electronic device provided by an embodiment of this application.
Detailed Description of Embodiments
To make the purposes, technical solutions, and advantages of this application clearer, the embodiments of this application are described below with reference to the accompanying drawings in the embodiments of this application.
The terms "first" and "second" in the specification, claims, and accompanying drawings of this application are used to distinguish different objects, not to describe a specific order. In addition, the terms "including" and "having" and any variations thereof are intended to cover a non-exclusive inclusion. For example, a process, method, system, product, or device that includes a series of steps or units is not limited to the listed steps or units, but optionally further includes steps or units that are not listed, or optionally further includes other steps or units inherent to the process, method, product, or device.
Reference herein to an "embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of this application. The appearances of this phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are they separate or alternative embodiments mutually exclusive of other embodiments. Those skilled in the art understand, both explicitly and implicitly, that the embodiments described herein may be combined with other embodiments.
It should be understood that in this application, "at least one (item)" means one or more, "multiple" means two or more, and "at least two (items)" means two, three, or more. The term "and/or" is used to describe an association relationship between associated objects and indicates that three relationships may exist. For example, "A and/or B" may indicate three cases: only A exists, only B exists, and both A and B exist, where A and B may be singular or plural. The character "/" generally indicates an "or" relationship between the associated objects. "At least one of the following items" or a similar expression means any combination of these items, including a single item or any combination of multiple items. For example, at least one of a, b, or c may indicate: a, b, c, "a and b", "a and c", "b and c", or "a and b and c", where a, b, and c may each be singular or plural.
This application provides a data annotation method. To describe the solution of this application more clearly, some knowledge related to data annotation is first introduced below with reference to the accompanying drawings in the embodiments of this application, and the embodiments of this application are then described.
Refer to FIG. 1. FIG. 1 is a schematic diagram of a data annotation effect provided by an embodiment of this application.
Edge surface: (A) in FIG. 1 is a three-dimensional model of a target object (a target vehicle) obtained by a vehicle-mounted radar based on collected target data. The three-dimensional model of the target vehicle is composed of several edge surfaces, which can be obtained by annotating the collected target data; (B) in FIG. 1 is an edge surface.
Edge line: If (B) in FIG. 1 is taken as a two-dimensional image model of the target object (the target vehicle), the two-dimensional image model of the target vehicle is composed of several edge lines, which can be obtained by annotating the collected two-dimensional image data; (C) in FIG. 1 is an edge line.
Edge point: Several edge points are obtained by annotating the collected two-dimensional image data, and the edge line (C) in FIG. 1 can be obtained from these edge points; the black points marked on the edge line are edge points.
Image annotation: the process of adding, by a machine learning method, text feature information reflecting the visual content of an image to the image. The basic idea is to use an annotated image set or other available information to automatically learn the potential association or mapping relationship between the semantic concept space and the visual feature space, and to add text keywords to unknown images. Through automatic image annotation, the problem of processing image information can be transformed into the relatively mature problem of processing text information.
Image data annotation is widely used in fields such as autonomous driving, face recognition, and medical image recognition. In the field of autonomous driving, there are two main annotation methods for image data: one is bounding-box annotation, and the other is fine segmentation annotation.
In the bounding-box annotation method, when annotating an image, the edges of each box need to fit closely against the edges of the target object, and the attributes of each box are noted at the same time. For the algorithm, each box is a small image, and each small image corresponds to an object category. Taking vehicles as an example, the bounding-box annotation method described above can yield the different categories of vehicles in an image, such as cars, vans, and minivans. During annotation, particular attention must be paid to making the box tangent to the edges of the vehicle. If it is not tangent, for example if a part that does not belong to the vehicle is included in the box, the machine may recognize the extra selected part as part of the vehicle during learning, resulting in inaccurate or even incorrect recognition. In addition, image data annotation requires a large amount of annotated data. While learning from large amounts of data, a machine algorithm summarizes the high-dimensional features of these objects by itself; when recognizing a new image, it can recognize the new image through the summarized high-dimensional features and give a probability for each possible result.
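The tangency requirement described above can be sketched in a few lines of Python. This is a minimal illustration only; the function name and the toy pixel representation are assumptions for the example and are not part of this application. The tightest enclosing box is the one whose every edge touches the object, so no background pixels are included along any side.

```python
def tight_bbox(object_pixels):
    """Return the tightest axis-aligned box (x_min, y_min, x_max, y_max)
    enclosing the given object pixels, so that each edge of the box is
    tangent to the object rather than enclosing background around it."""
    xs = [x for x, _ in object_pixels]
    ys = [y for _, y in object_pixels]
    return min(xs), min(ys), max(xs), max(ys)

# A toy "vehicle" occupying a few pixels:
car = [(2, 3), (3, 3), (4, 3), (3, 4)]
print(tight_bbox(car))  # (2, 3, 4, 4)
```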
The segmentation annotation method uses interactive image segmentation. For a given initial region of an original image, a segmentation algorithm is used to segment the target object in the initial region, producing a series of edge points; the segmentation result is then corrected by correcting the edge points.
The segmentation annotation method is briefly described below as an example. Assuming that objects of multiple categories, such as people, cars, street trees, and buildings, exist in an original image (a), segmentation annotation needs to identify the different categories of objects in the image, detect their positions, and segment them.
First, semantic segmentation is performed on the original image (a), that is, each pixel in the original image (a) is classified and the category of each pixel is determined (for example, belonging to the background, a person, or a car), so that each object in the original image (a) is given a category label. The objects contained in the image are divided into the labels person, street tree, car, and building, and regions are divided accordingly, yielding image (b).
Then, instance segmentation is performed on the image (b) obtained by the semantic segmentation. Instance segmentation is a combination of object detection and semantic segmentation: the target objects are detected in the image, and then the category label of each target object is determined with reference to the category of each pixel determined by the semantic segmentation described above, yielding image (c). Taking cars as the target objects as an example, semantic segmentation does not distinguish different instances belonging to the same car category (for example, all car objects are marked in red), while instance segmentation combines the results of semantic segmentation with object detection to distinguish different instances of the same category (for example, using different colors to distinguish different cars).
Finally, panoptic segmentation is performed on the image (c) obtained by the instance segmentation. Panoptic segmentation is a combination of semantic segmentation and instance segmentation: all objects in the image are detected, and at the same time different instances within the same category are distinguished, yielding image (d). Comparing image (c) with image (d), it can be seen that instance segmentation only detects the target objects in the image (such as the cars in the figure) and segments them by pixel category to distinguish different instances (using different colors), while panoptic segmentation detects and segments all objects in the figure, including the background, to distinguish different instances (using different colors).
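The difference between the three segmentation levels can be sketched on a toy one-row "image". The class names, instance ids, and list representation below are illustrative assumptions for the example and are not part of this application.

```python
# Each pixel: a semantic class; two cars side by side, background at both ends.
classes = ["bg", "car", "car", "car", "car", "bg"]
# Instance ids for "thing" pixels only (None for background):
instances = [None, 1, 1, 2, 2, None]

# Semantic segmentation: a per-pixel class only; both cars collapse into "car".
semantic = list(classes)

# Instance segmentation: (class, id) for detected objects; background is ignored.
instance_seg = [(c, i) for c, i in zip(classes, instances) if i is not None]

# Panoptic segmentation: every pixel, including background, gets a (class, id).
panoptic = [(c, i if i is not None else 0) for c, i in zip(classes, instances)]

print(semantic)      # ['bg', 'car', 'car', 'car', 'car', 'bg']
print(instance_seg)  # [('car', 1), ('car', 1), ('car', 2), ('car', 2)]
print(panoptic)      # [('bg', 0), ('car', 1), ('car', 1), ('car', 2), ('car', 2), ('bg', 0)]
```

Semantic segmentation keeps only the class, instance segmentation separates the two cars but drops the background, and panoptic segmentation covers every pixel while still separating the instances.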
Whether the bounding-box annotation method or the segmentation annotation method described above is used, the edge points of the generated target object ultimately need to be corrected. Correcting the edge points one by one is time-consuming and labor-intensive, and the accuracy and efficiency of the annotation results are low.
To solve the problem of low accuracy and efficiency of annotation results in the foregoing data annotation methods, the embodiments of this application provide a data annotation system and a new data annotation method based on the data annotation system. In the method, a deep learning method based on pre-selected boxes is proposed to perform edge detection on the target object in a target image to obtain a first contour of the target object; weak semantic information is then introduced to convert the correction of the edge points of the first contour of the target object into the correction of the edge lines of the first contour of the target object, and a second contour of the target object is obtained through the correction. This can greatly improve the accuracy and efficiency of the annotation results.
The data annotation system and the data annotation method based on the data annotation system are described separately below.
Refer to FIG. 2. FIG. 2 is a schematic architectural diagram of a data annotation system provided by an embodiment of this application.
As shown in FIG. 2, the system architecture mainly includes three parts: a data collection module, a storage module, and a data annotation module. The data collection module is configured to collect data to be annotated; specifically, image data may be collected by a camera, or target data may be collected by a radar. The storage module is configured to store the to-be-annotated data collected by the data collection module; the storage module may be a cloud or a local server and communicates data with the data collection module through a mobile network. The data annotation module may be an independent module; an intelligent driving platform such as a mobile data center (Mobile Data Center, MDC); an intelligent cockpit platform such as a cockpit domain controller (Cockpit Domain Controller, CDC); or a vehicle control platform such as a vehicle domain controller (Vehicle Domain Controller, VDC). The data annotation module is configured to obtain the to-be-annotated data from the cloud or the server for annotation, obtain an annotation result, and store the annotation result in the cloud or the server so that it can be used in subsequent data recognition.
It can be understood that the data annotation system architecture in FIG. 2 is merely an example implementation in the embodiments of this application; the data annotation system architecture in the embodiments of this application includes, but is not limited to, the above data annotation system architecture.
Correspondingly, based on the data annotation system provided in the embodiments of this application, a new data annotation method is further provided, and the data annotation process of the method is completed by the data annotation module in the foregoing data annotation system. The method can be applied to various scenarios such as autonomous driving, face recognition, and medical image recognition. Data annotation in an autonomous driving application scenario is used as an example for description below.
Refer to FIG. 3. FIG. 3 is a schematic diagram of an application scenario of data annotation provided by an embodiment of this application.
As shown in FIG. 3, the application scenario may include a cloud network and multiple vehicle driving control apparatuses (a test vehicle, a user vehicle a, a user vehicle b, and a user vehicle c). The multiple vehicle driving control apparatuses can communicate data with the cloud network, thereby implementing data interaction between the test vehicle and the user vehicles, as well as data interaction between different user vehicles (user vehicle a, user vehicle b, and user vehicle c). In particular, the vehicle driving control apparatus is a smart car that senses the road environment through a camera or an on-board sensor system, automatically plans a driving route, and controls the vehicle to reach a predetermined target. Smart cars intensively apply technologies such as computing, modern sensing, information fusion, communication, artificial intelligence, and automatic control, and are a high-tech complex integrating functions such as environment perception, planning and decision-making, and multi-level assisted driving. The vehicles in this application (such as the test vehicle, user vehicle a, user vehicle b, and user vehicle c) may be vehicles that mainly rely on an in-vehicle computer-based intelligent pilot system to achieve unmanned driving, may be intelligent vehicles with an assisted driving system or a fully automatic driving system, or may be wheeled mobile robots or the like.
Based on the foregoing application scenario, in a possible implementation, the data annotation method provided in the embodiments of this application is completed by the test vehicle. In the driving scenario corresponding to a target road section, the test vehicle collects image data of the target road section during driving through a camera, completes the annotation of the image data by relying on the in-vehicle computer system, obtains corresponding data labels, and transmits the data labels to the cloud network. During actual driving, a user vehicle collects image data of the road section being driven through a camera, obtains the data labels annotated by the test vehicle from the cloud network, and uses the data labels in combination with an image recognition algorithm to automatically recognize the objects in the collected images, thereby sensing the road environment, automatically planning a driving route, controlling the vehicle to avoid obstacle objects, and reaching a predetermined address.
In another possible implementation, the data annotation method provided in the embodiments of this application is completed by a user vehicle. Because road conditions are complex and changeable during actual driving, it is difficult to complete the recognition of all collected image data by relying only on the data labels obtained by the test vehicle in tests on the target road section. In this case, the user vehicle needs to rely on the in-vehicle computer system to complete the annotation of the image data, obtain corresponding data labels, and use the data labels in combination with an image recognition algorithm to automatically recognize the objects in the collected images, thereby sensing the road environment, automatically planning a driving route, controlling the vehicle to avoid obstacle objects, and reaching a predetermined address. In addition, the user vehicle also transmits the data labels to the cloud network so that they can be used in subsequent image recognition by other user vehicles.
It can be understood that the data annotation in the autonomous driving application scenario in FIG. 3 is merely an example implementation in the embodiments of this application; the data annotation application scenarios in the embodiments of this application include, but are not limited to, the above autonomous driving application scenario.
Refer to FIG. 4. FIG. 4 is a schematic flowchart of a data annotation method provided by an embodiment of this application. The method includes, but is not limited to, the following steps:
Step 401: An electronic device obtains a first edge line and a second edge line of a target object included in a target image.
The electronic device in the embodiments of this application is a device equipped with a processor capable of executing computer-executable instructions. The electronic device may be a terminal such as a computer or a controller, or may be a server or the like. Specifically, it may be the in-vehicle device equipped with a computer system in FIG. 3 above, which is configured to annotate collected image data, obtain corresponding data labels, and use the data labels in combination with an image recognition algorithm to automatically recognize the objects in the collected images, so as to sense the road environment, automatically plan a driving route, control the vehicle to avoid obstacle objects, and reach a predetermined address.
In the embodiments of this application, the electronic device obtains the first edge line and the second edge line of the target object included in the target image by performing different processing on the target image, and the first edge line and the second edge line are different.
The different processing performed by the electronic device on the target image may specifically be as follows. Segmentation processing is performed on the target image to separate out the target object in the target image, so as to obtain first edge points of the target object. The segmentation processing here includes non-deep-learning image segmentation methods (such as segmentation methods based on the watershed algorithm or on graph theory) and deep-learning-based image segmentation methods; the main difference between the two is that a deep-learning-based image segmentation method requires a convolutional neural network, with the target image input into the convolutional neural network for segmentation processing. After the target object in the target image is separated out, edge detection is performed on the target object to separate out the edge of the target object and obtain second edge points of the target object. The edge detection here includes edge detection based on deep learning without pre-selected boxes and edge detection based on deep learning with pre-selected boxes; the difference between the two lies in whether pre-selected boxes are used. Edge detection based on deep learning with pre-selected boxes does not detect whether a pixel on the target image is an edge point, but detects the position and category of the edge point corresponding to each pre-selected box.
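As a minimal non-deep-learning illustration of how a segmentation result yields edge points, the object pixels of a binary segmentation mask that touch the background can be taken as edge points. The function and mask representation below are illustrative assumptions for the sketch and are not part of this application.

```python
def edge_points_from_mask(mask):
    """Given a binary segmentation mask (list of rows of 0/1), return the
    object pixels (x, y) that have at least one background 4-neighbor or
    lie on the image border - a simple way to turn a segmentation result
    into a set of edge points."""
    h, w = len(mask), len(mask[0])
    points = set()
    for y in range(h):
        for x in range(w):
            if mask[y][x] != 1:
                continue
            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ny, nx = y + dy, x + dx
                if not (0 <= ny < h and 0 <= nx < w) or mask[ny][nx] == 0:
                    points.add((x, y))
                    break
    return points

mask = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 0],
]
print(sorted(edge_points_from_mask(mask)))  # [(1, 1), (1, 2), (2, 1), (2, 2)]
```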
Although the first edge points and the second edge points are both edge points of the target object, they are obtained by applying different methods to the target object; therefore, the first edge points and the second edge points are not exactly the same. The first edge line and the second edge line of the target object can be determined from the first edge points and the second edge points. Because the first edge points and the second edge points are not exactly the same, the first edge line and the second edge line obtained from them are correspondingly not exactly the same either.
In a possible implementation, determining the first edge line and the second edge line of the target object from the first edge points and the second edge points may specifically be as follows. The line formed by the edge points where the first edge points and the second edge points coincide is determined as the first edge line; that is, the first edge line lies on both of the edge results obtained by the segmentation processing and the edge detection described above, and both the segmentation processing and the edge detection can yield the first edge line. Then, the line formed by the union of the first edge points and the second edge points, after the coinciding edge points are removed, is determined as a third edge line; that is, the third edge line lies on the edge result obtained by the segmentation processing or on the edge result obtained by the edge detection, but not on both. Finally, a third edge line that has two or more intersection points with the first edge line is determined as the second edge line.
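The set relationships between the edge points described above can be sketched as follows. The toy coordinates are illustrative assumptions for the sketch; the final step of keeping only the third edge lines with two or more intersections with the first edge line requires line geometry and is not shown here.

```python
first_points = {(0, 0), (1, 0), (2, 0), (2, 1)}    # from segmentation processing
second_points = {(0, 0), (1, 0), (2, 0), (2, 2)}   # from edge detection

# Coinciding points form the first edge line: found by both methods.
first_edge_line = first_points & second_points

# Union minus the coinciding points forms third edge lines:
# each such point comes from only one of the two methods, not both.
third_edge_lines = (first_points | second_points) - first_edge_line

print(sorted(first_edge_line))   # [(0, 0), (1, 0), (2, 0)]
print(sorted(third_edge_lines))  # [(2, 1), (2, 2)]
```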
The specific implementation of obtaining the first edge line and the second edge line provided in the embodiments of this application can improve the precision of the obtained edge lines of the target object, and facilitates the subsequent conversion of the correction of the edge points of the first contour of the target object into the correction of the edge lines of the first contour of the target object, improving the accuracy and efficiency of the annotation results.
As an example, the foregoing implementation of obtaining the first edge line and the second edge line of the target object included in the target image can be further described with reference to schematic diagrams of data annotation effects.
Refer to FIG. 5a. FIG. 5a is a schematic diagram of a data annotation effect provided by an embodiment of this application.
As shown in FIG. 5a, segmentation processing is performed on the target image to separate out the target object in the target image, so as to obtain the first edge points of the target object. Edge detection is then performed on the target object to separate out the edge of the target object and obtain the second edge points of the target object. The line formed by the edge points where the first edge points and the second edge points coincide (that is, the edge points marked in FIG. 5a) is determined as the first edge line of the target object (that is, edge line a in FIG. 5a). It can be seen that the first edge line lies on both of the edge results obtained by the segmentation processing and the edge detection described above.
Then, the line formed by the union of the first edge points and the second edge points, after the coinciding edge points (that is, the edge points marked in FIG. 5a) are removed, is determined as a third edge line. It can be seen that the third edge line lies on the edge result obtained by the segmentation processing or on the edge result obtained by the edge detection, but not on both. Finally, a third edge line that has two or more intersection points with the first edge line (that is, edge line a in FIG. 5a) is determined as the second edge line (that is, edge line b in FIG. 5a).
As an example, the foregoing implementation of edge detection based on deep learning with pre-selected boxes can be further described with reference to a schematic diagram of a data annotation effect.
Refer to FIG. 5b. FIG. 5b is a schematic diagram of a data annotation effect provided by an embodiment of this application.
As shown in FIG. 5b, the content of the given target image is briefly divided into regions: the hexagon is the actual contour of the target object, the closed region inside the hexagon is the interior region of the target object, and the elliptical region containing the hexagon is the sensitive region of the contour of the target object. The target object needs to be annotated in this sensitive region to obtain its precise contour. In the process of annotating the target object in the sensitive region, the edge of the target object can be separated out by edge detection based on deep learning with pre-selected boxes to obtain the edge points of the target object.
For details of the edge detection based on deep learning with pre-selected boxes, refer to FIG. 5c. FIG. 5c is a schematic diagram of a data annotation effect provided by an embodiment of this application.
As shown in FIG. 5c, in the process of annotating the target image, the target image needs to be divided into several grid cells. Here, the preselected content of the preselection boxes is all horizontal-axis segments of the grid into which the target image is divided, such as the segments marked by the elliptical region in FIG. 5c. Edge detection based on deep learning with preselection boxes does not detect whether a pixel of the target image is an edge point; instead, it detects the position and category of the edge point on the horizontal-axis segment corresponding to each preselection box. That is, the category label of the target object is determined according to the positions of the edge points on the horizontal-axis segments corresponding to the preselection boxes on the edge line, and the label on which most of those edge points fall is determined as the category label of the target object. For example, when the number of edge points located on a first label is greater than the number of edge points located on a second label, the first label is determined as the category label of the target object; likewise, when the number of edge points located on the first label is smaller than the number of edge points located on the second label, the second label is determined as the category label of the target object. When the number of edge points located on the first label equals the number of edge points located on the second label, the category label of the target object is further determined according to the criticality of the labels. For example, when the criticality of the first label is higher than that of the second label, the first label is determined as the category label of the target object; likewise, when the criticality of the second label is higher than that of the first label, the second label is determined as the category label of the target object. Specifically, the criticality of a label is not a fixed value and may vary with the application scenario. In an autonomous driving scenario, by default a person is more important than a vehicle, and a vehicle is more important than a street tree; therefore, the criticality of a person is higher than that of a vehicle, and the criticality of a vehicle is higher than that of a street tree. After the category label of the target object is determined, the edge points corresponding to the preselection boxes on the edge line are adjusted according to that category label: each edge point is assigned an edge direction and an edge distance, and each point is moved by its edge distance along its edge direction, thereby obtaining the edge points of the target object, that is, the second edge points described above.
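The label decision above is a majority vote with criticality as the tie-breaker. The sketch below is illustrative only; the criticality values are assumptions for an autonomous-driving scenario (person > vehicle > street tree) and, as the text notes, would differ in other scenarios.

```python
# Assumed criticality ordering for an autonomous-driving scenario.
CRITICALITY = {"person": 3, "vehicle": 2, "street_tree": 1}

def category_label(edge_point_labels):
    """edge_point_labels: the label each detected edge point falls on.

    The label with more edge points wins; on a tie, the label with the
    higher criticality wins."""
    counts = {}
    for label in edge_point_labels:
        counts[label] = counts.get(label, 0) + 1
    return max(counts, key=lambda lab: (counts[lab], CRITICALITY.get(lab, 0)))
```

For instance, with one "person" point and one "vehicle" point the counts tie, and the higher criticality of "person" decides the label.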
In the embodiments of the present application, through edge detection based on deep learning with preselection boxes, category labels are assigned to the preselection boxes, and label criticality is further compared to determine the category label of the target object, which can improve the accuracy and efficiency of determining the category of the target object. Adjusting the edge points on the edge line of the target object accordingly makes the positions of the obtained second edge points more accurate, and the resulting edge line of the target object more accurate as well, which is conducive to improving the accuracy and efficiency of the annotation result.
Step 402: obtain a first contour of the target object according to the first edge line.
After obtaining the first edge line of the target object, the electronic device performs an initial annotation of the target object based on the first edge line to obtain the first contour of the target object. The first contour can serve as the final annotated contour of the target object, or as a candidate contour, on the basis of which the edges of the candidate contour are corrected to improve the accuracy of the annotation result.
Exemplarily, in FIG. 5a above, the obtained edge line a is used to initially annotate the target object, yielding the first contour of the target object. It can be seen that the actual contour of the target object differs slightly from the obtained first contour: the first contour does not completely cover the target object while remaining tangent to its edge. On the basis of the initial annotation, the edge of the first contour needs to be corrected using edge line b to improve the accuracy of the annotation result.
Step 403: correct the edge line of the first contour using the second edge line to obtain a second contour of the target object.
The electronic device corrects the edge line of the first contour obtained by the initial annotation using the second edge line, obtaining the second contour of the target object. Specifically, the contour formed by the second edge line and the edge lines of the first contour may be determined as the second contour of the target object. This correction method improves the accuracy and efficiency of the annotation result.
Exemplarily, in FIG. 5a above, on the basis of the first contour obtained by initially annotating the target object with edge line a, the edge of the first contour is corrected with edge line b to improve the accuracy of the annotation result. The second contour of the target object obtained after correction completely covers the target object and is tangent to its edge.
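The correction of step 403 can be pictured as a splice: the segment of the first contour between the two intersection points is replaced by the second edge line. The sketch below is illustrative only, not the patent's implementation, and assumes the first and last points of the second edge line lie on the first contour.

```python
def correct_contour(first_contour, second_edge_line):
    """first_contour: ordered list of contour points.
    second_edge_line: ordered list of points whose first and last
    points are assumed to lie on the first contour."""
    i = first_contour.index(second_edge_line[0])
    j = first_contour.index(second_edge_line[-1])
    if i > j:
        # Normalise orientation so the splice runs from i to j.
        i, j = j, i
        second_edge_line = second_edge_line[::-1]
    # Keep the contour outside [i, j]; the spliced-in line supplies the rest.
    return first_contour[:i] + second_edge_line + first_contour[j + 1:]
```

The original edge points between the two intersection points are thus replaced as a whole line rather than corrected one by one.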
In current data annotation methods, a segmentation algorithm usually segments the target object to obtain its edge points, and the contour of the target object is then corrected by correcting the edge points. The edge points therefore need to be corrected manually one by one, which is time-consuming and labor-intensive, and the accuracy and efficiency of the annotation results are low.
Compared with currently common data annotation methods, the data annotation method provided by the embodiments of the present application uses the first edge line to initially annotate the target object, and then uses the second edge line to correct the edge line of the first contour obtained by the initial annotation. The correction of the edge points of the first contour of the target object is thereby converted into a correction of the edge line of the first contour, yielding the second contour of the target object, which can greatly improve the accuracy and efficiency of the annotation result.
Correspondingly, the data annotation method provided in FIG. 4 above may correspond to the data annotation system architectures shown in FIG. 6a to FIG. 6c. The system architectures in FIG. 6a to FIG. 6c are described below in combination with the data annotation method in FIG. 4.
Referring to FIG. 6a, FIG. 6a is a schematic architectural diagram of a data annotation system provided by an embodiment of the present application.
As shown in FIG. 6a, the image data of the target image is first acquired. Then non-deep-learning-based image segmentation, deep-learning-based image segmentation, edge detection based on deep learning with preselection boxes, and edge detection based on deep learning without preselection boxes are performed on the image data, respectively, to obtain results of different edge information about the target object, and strong semantic information and weak semantic information are generated from the results of these different processes.
It can be understood that the strong semantic information is edge line a in FIG. 5a above or the first edge line in FIG. 4 above; that is, it lies on all of the edge-information results obtained by the different processes. In other words, the overlapping part of the edge information obtained by the different processes is the strong semantic information. Correspondingly, the weak semantic information is edge line b in FIG. 5a above or the second edge line in FIG. 4 above; that is, it lies only on the edge-information result obtained by one of the different processes. In other words, the non-overlapping part of the edge information obtained by the different processes is the weak semantic information.
Finally, the target object is initially annotated based on the generated strong semantic information, and the result of the initial annotation is corrected based on the generated weak semantic information. In combination with the data annotation method in FIG. 4 above, these correspond respectively to step 402: obtaining the first contour of the target object according to the first edge line; and step 403: correcting the edge line of the first contour using the second edge line to obtain the second contour of the target object. In combination with the specific data annotation embodiment in FIG. 5a, these correspond respectively to initially annotating the target object with edge line a to obtain the first contour of the target object, and correcting the edge line of the first contour with edge line b to obtain the second contour of the target object.
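Modelling each method's edge result as a set of edge points, the overlap/non-overlap split described above can be sketched as follows (an illustrative sketch only, under that set-based assumption):

```python
def split_semantic_info(edge_results):
    """edge_results: list of sets of edge points, one set per
    processing method (segmentation or edge detection variants)."""
    strong = set.intersection(*edge_results)   # found by every method
    weak = set.union(*edge_results) - strong   # found by only some methods
    return strong, weak
```

The strong set corresponds to the first edge line used for initial annotation, and the weak set is the material from which second edge lines are drawn for correction.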
Further, strong semantic information is generated from the results obtained by the different processes above. For the specific flow, refer to FIG. 6b, which is a schematic architectural diagram of a data annotation system provided by an embodiment of the present application.
As shown in FIG. 6b, deep-learning-based image segmentation and non-deep-learning-based image segmentation are performed on the image data of the target image, respectively, and the resulting edge information about the target object is fused to generate the edge points of the target object. Edge detection based on deep learning with preselection boxes and edge detection based on deep learning without preselection boxes are then performed on the image data, respectively; the two edge detection results are fused, and the fused edge detection result is used to correct the generated edge points of the target object, thereby obtaining the strong semantic information, which represents the first edge line of the target object.
Further, weak semantic information is generated from the results obtained by the different processes above. For the specific flow, refer to FIG. 6c, which is a schematic architectural diagram of a data annotation system provided by an embodiment of the present application.
As shown in FIG. 6c, in combination with the strong semantic information generated above, non-deep-learning-based image segmentation, deep-learning-based image segmentation, edge detection based on deep learning with preselection boxes, and edge detection based on deep learning without preselection boxes are performed on the image data, respectively, to generate the weak semantic information.
Referring to FIG. 7, FIG. 7 is a schematic flowchart of another data annotation method provided by an embodiment of the present application. The method includes, but is not limited to, the following steps:
Step 701: the electronic device obtains a first bounding box and a second bounding box of the target object according to target data of a target radar.
The electronic device in the embodiments of the present application is a device equipped with a processor capable of executing computer-executable instructions. The electronic device may be a terminal such as a computer or a controller, or may be a server or the like. Specifically, it may be the vehicle-mounted device equipped with a computer system in FIG. 3 above, which annotates the target data collected by the vehicle-mounted target radar to obtain corresponding data labels, and uses the data labels in combination with a recognition algorithm to automatically identify objects present in the collected target data, so as to perceive the road environment, automatically plan a driving route, control the vehicle to avoid obstacle objects, and reach a predetermined destination.
In the embodiments of the present application, the electronic device performs different processes on the target data collected by the target radar when detecting the target object to obtain the first bounding box and the second bounding box of the target object, where the first bounding box and the second bounding box are different.
The electronic device performs different processes on the target data collected by the target radar when detecting the target object. Specifically, the target object may be separated from the detection area of the target radar by segmentation processing to obtain a first edge surface of the target object. Edge detection is then performed on the target object to separate out its edge, obtaining a second edge surface of the target object.
Although the first edge surface and the second edge surface are both edge surfaces of the target object, they are obtained in different ways; therefore, they are not completely identical. According to the first edge surface and the second edge surface, the first bounding box and the second bounding box of the target object can be determined. Since the first edge surface and the second edge surface are not completely identical, the resulting first bounding box and second bounding box are correspondingly not completely identical either.
In a possible specific implementation, determining the first bounding box and the second bounding box of the target object according to the first edge surface and the second edge surface may specifically be as follows. The box formed by the edge surfaces at which the first edge surface and the second edge surface coincide is determined as the first bounding box; that is, the first bounding box lies on the edge results obtained by both the segmentation processing and the edge detection, and both processes can yield it. Then, the box formed by the union of the first edge surface and the second edge surface, after removing the coinciding edge surfaces, is determined as a third bounding box; that is, the third bounding box lies on the edge result obtained by the segmentation processing or on the edge result obtained by the edge detection, but not on both. Finally, a third bounding box that has two or more surfaces intersecting the first bounding box is determined as the second bounding box.
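This is the radar-data analogue of the edge-line logic. The sketch below is illustrative only: each box is modelled as a set of faces and each face as a frozenset of its vertex coordinates, and "intersecting faces" is approximated as faces sharing at least one vertex, which is an assumption rather than the patent's definition.

```python
def split_boxes(seg_faces, det_faces):
    """seg_faces, det_faces: sets of faces from segmentation and from
    edge detection on the radar data."""
    first_box = seg_faces & det_faces               # coinciding faces
    third_box = (seg_faces | det_faces) - first_box
    return first_box, third_box

def is_second_box(third_box, first_box):
    """A third box qualifies as the second box when two or more of its
    faces intersect (here approximated as: share a vertex with) faces
    of the first box."""
    shared = sum(
        1 for f in third_box
        if any(f & g for g in first_box)            # f shares a vertex with g
    )
    return shared >= 2
```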
In a possible specific implementation, performing edge detection on the target object to obtain the second edge surface of the target object may specifically be as follows. The label of the target object is determined according to the positions of the edge surfaces on the box of the target object. For example, when the number of edge surfaces located on a first label is greater than the number of edge surfaces located on a second label, the first label is determined as the label of the target object; likewise, when the number of edge surfaces located on the first label is smaller than the number of edge surfaces located on the second label, the second label is determined as the label of the target object. When the number of edge surfaces located on the first label equals the number of edge surfaces located on the second label, the label of the target object is further determined according to the criticality of the first label and the criticality of the second label. For example, when the criticality of the first label is higher than that of the second label, the first label is determined as the category label of the target object; likewise, when the criticality of the second label is higher than that of the first label, the second label is determined as the category label of the target object. Specifically, the criticality of a label is not a fixed value and may vary with the application scenario. In an autonomous driving scenario, by default a person is more important than a vehicle, and a vehicle is more important than a street tree; therefore, the criticality of a person is higher than that of a vehicle, and the criticality of a vehicle is higher than that of a street tree. After the category label of the target object is determined, the positions of the edge surfaces on the box of the target object are adjusted according to that category label to obtain the second edge surface.
Step 702: obtain a first contour of the target object according to the first bounding box.
After obtaining the first bounding box of the target object, the electronic device initially annotates the target object based on the first bounding box to obtain the first contour of the target object. The first contour can serve as the final annotated contour of the target object, or as a candidate contour, on the basis of which the box of the candidate contour is corrected to improve the accuracy of the annotation result.
Step 703: correct the box of the first contour using the second bounding box to obtain a second contour of the target object.
The electronic device corrects the box of the first contour obtained by the initial annotation using the second bounding box, obtaining the second contour of the target object. Specifically, the contour formed by the second bounding box and the box of the first contour may be determined as the second contour of the target object. This correction method improves the accuracy and efficiency of the annotation result.
In the embodiments of the present application, different bounding boxes of the target object, such as the first bounding box and the second bounding box, can be obtained by performing different processes on the target data collected by the target radar when detecting the target object. The target object is then initially annotated based on the first bounding box to obtain its first contour, and the box of the first contour obtained by the initial annotation is corrected using the second bounding box to obtain the second contour of the target object.
The data annotation method provided by the embodiments of the present application is applicable not only to the annotation of two-dimensional image data but also to the annotation of three-dimensional radar data. Converting the correction of the edge points of the first contour of the target object into a correction of the box of the first contour, and thereby obtaining the second contour of the target object, can greatly improve the accuracy and efficiency of the annotation result.
The method of the embodiments of the present application has been described in detail above; the apparatus of the embodiments of the present application is provided below.
Referring to FIG. 8, FIG. 8 is a schematic structural diagram of a data annotation apparatus provided by an embodiment of the present application. The data annotation apparatus 80 may include a determining unit 801 and a correction unit 802, where the units are described as follows:
the determining unit 801 is configured to obtain a first edge line and a second edge line of a target object included in a target image, where the first edge line and the second edge line are different;
the determining unit 801 is further configured to obtain a first contour of the target object according to the first edge line; and
the correction unit 802 is configured to correct the edge line of the first contour using the second edge line to obtain a second contour of the target object.
In a possible implementation, the correction unit 802 is specifically configured to determine the contour formed by the second edge line and the edge lines of the first contour as the second contour.
In a possible implementation, the apparatus further includes:
a segmentation unit 803, configured to perform segmentation processing on the target image to obtain first edge points of the target object, where the segmentation processing is used to separate out the target object included in the target image; and
an edge detection unit 804, configured to perform edge detection on the target object to obtain second edge points of the target object, where the edge detection is used to separate out the edge of the target object; and
the determining unit 801 is specifically configured to determine the first edge line and the second edge line according to the first edge points and the second edge points.
In a possible implementation, the determining unit 801 is specifically further configured to obtain the first edge line according to the first edge points and the second edge points, where the points on the first edge line are edge points at which the first edge points and the second edge points coincide; and
the determining unit 801 is specifically further configured to obtain the second edge line according to the first edge points, the second edge points, and the first edge line.
In a possible implementation, the determining unit 801 is specifically further configured to obtain a third edge line according to the first edge points and the second edge points, where a point on the third edge line is a first edge point or a second edge point, and is not an edge point at which the first edge points and the second edge points coincide; and
the determining unit 801 is specifically further configured to determine a third edge line that has two or more intersection points with the first edge line as the second edge line.
In a possible implementation, the edge detection unit 804 is specifically configured to determine a label of the target object according to the positions of the edge points on the edge line of the target object; and
the edge detection unit 804 is specifically further configured to adjust the edge points on the edge line of the target object according to the label of the target object to obtain the second edge points.
In a possible implementation, the edge detection unit 804 is specifically further configured to determine a first label as the label of the target object when the number of edge points located on the first label is greater than the number of edge points located on a second label;
or, the edge detection unit 804 is specifically further configured to determine the second label as the label of the target object when the number of edge points located on the first label is smaller than the number of edge points located on the second label;
or, the edge detection unit 804 is specifically further configured to determine the label of the target object according to the criticality of the first label and the criticality of the second label when the number of edge points located on the first label equals the number of edge points located on the second label.
In a possible implementation, the segmentation unit 803 is specifically configured to input the target image into a convolutional neural network for the segmentation processing to obtain the first edge points.
In a possible implementation, the determining unit 801 is specifically further configured to determine the first label as the label of the target object when the criticality of the first label is higher than the criticality of the second label; and
the determining unit 801 is specifically further configured to determine the second label as the label of the target object when the criticality of the second label is higher than the criticality of the first label.
In a possible implementation, the determining unit 801 is configured to obtain a first bounding box and a second bounding box of a target object according to target data of a target radar, where the first bounding box and the second bounding box are different, and the target data includes data obtained by the target radar detecting the target object;
the determining unit 801 is further configured to obtain a first contour of the target object according to the first bounding box; and
the correction unit 802 is configured to correct the box of the first contour using the second bounding box to obtain a second contour of the target object.
In a possible implementation, the correction unit 802 is specifically configured to determine the contour formed by the second bounding box and the box of the first contour as the second contour.
In a possible implementation, the segmentation unit 803 is configured to perform segmentation processing on the target object to obtain a first edge surface of the target object, where the segmentation processing is used to separate the target object from the detection area of the target radar; and
the edge detection unit 804 is configured to perform edge detection on the target object to obtain a second edge surface of the target object, where the edge detection is used to separate out the edge of the target object; and
the determining unit 801 is specifically configured to determine the first bounding box and the second bounding box according to the first edge surface and the second edge surface.
In a possible implementation, the determining unit 801 is further specifically configured to obtain the first bounding box according to the first edge surface and the second edge surface, where an edge surface on the first bounding box is an edge surface at which the first edge surface and the second edge surface coincide;
The determining unit 801 is further specifically configured to obtain the second bounding box according to the first edge surface, the second edge surface, and the first bounding box.
In a possible implementation, the determining unit 801 is further specifically configured to obtain a third bounding box according to the first edge surface and the second edge surface, where a surface on the third bounding box is the first edge surface or the second edge surface, and is not an edge surface at which the first edge surface and the second edge surface coincide;
The determining unit 801 is further specifically configured to determine, as the second bounding box, a third bounding box that has two or more surfaces intersecting the first bounding box.
In a possible implementation, the edge detection unit 804 is specifically configured to determine a label of the target object according to the positions of the edge surfaces on the bounding box of the target object;
The edge detection unit 804 is further specifically configured to adjust the edge surfaces on the bounding box of the target object according to the label of the target object, to obtain the second edge surface.
In a possible implementation, the edge detection unit 804 is further specifically configured to determine the first label as the label of the target object when the number of edge surfaces located on a first label is greater than the number of edge surfaces located on a second label;
Alternatively, the edge detection unit 804 is further specifically configured to determine the second label as the label of the target object when the number of edge surfaces located on the first label is less than the number of edge surfaces located on the second label;
Alternatively, the edge detection unit 804 is further specifically configured to determine the label of the target object according to the criticality of the first label and the criticality of the second label when the number of edge surfaces located on the first label is equal to the number of edge surfaces located on the second label.
For the technical effects brought by any possible implementation in the embodiments of the present application, reference may be made to the description of the technical effects of the first aspect of the summary of the invention and of the corresponding implementations.
According to the embodiments of the present application, the units in the apparatus shown in FIG. 8 may be separately or entirely combined into one or several additional units, or one or more of them may be further split into multiple functionally smaller units; this achieves the same operations without affecting the technical effects of the embodiments of the present application. The above units are divided based on logical functions; in practical applications, the function of one unit may be implemented by multiple units, or the functions of multiple units may be implemented by one unit. In other embodiments of the present application, the network device may also include other units; in practical applications, these functions may also be implemented with the assistance of other units, and may be implemented cooperatively by multiple units.
It should be noted that, for the implementation of each unit, reference may also be made to the corresponding descriptions of the method embodiments shown in FIG. 4 and FIG. 7 above.
In the data annotation apparatus 80 described in FIG. 8, a deep learning method based on pre-selected boxes is proposed to perform edge detection on the target object in the target image, obtaining the first contour of the target object; weak semantic information is then introduced to convert the correction of the edge points of the first contour into the correction of the edge lines of the first contour, with the correction yielding the second contour of the target object. This can greatly improve the accuracy and efficiency of the annotation results.
Please refer to FIG. 9, which is a schematic structural diagram of an electronic device 90 provided in an embodiment of the present application. The electronic device 90 may include a memory 901 and a processor 902. Further optionally, it may also include a communication interface 903 and a bus 904, where the memory 901, the processor 902, and the communication interface 903 are communicatively connected to each other through the bus 904. The communication interface 903 is used for data interaction with the above data annotation apparatus 80.
The memory 901 is used to provide storage space, in which data such as an operating system and computer programs may be stored. The memory 901 includes but is not limited to random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM), or compact disc read-only memory (CD-ROM).
The processor 902 is a module that performs arithmetic and logic operations, and may be one or a combination of processing modules such as a central processing unit (CPU), a graphics processing unit (GPU), or a microprocessor unit (MPU).
A computer program is stored in the memory 901, and the processor 902 invokes the computer program stored in the memory 901 to execute the data annotation method shown in FIG. 4 above:
Obtain a first edge line and a second edge line of a target object contained in a target image, where the first edge line and the second edge line are different;
Obtain a first contour of the target object according to the first edge line;
Correct the edge line of the first contour by using the second edge line, to obtain a second contour of the target object.
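As an illustrative aside, the three steps above can be sketched with edge lines represented as ordered point lists. The merge step follows the described implementation in which the second contour is the contour formed by the second edge line together with the first contour's edge line; representing that combination as a de-duplicating union of points is an assumption made for the sketch.

```python
def first_contour(first_edge_line):
    # Step 2: the first contour is built from the first edge line.
    return list(first_edge_line)

def correct_contour(first_contour_pts, second_edge_line):
    # Step 3: the corrected (second) contour is formed by the second
    # edge line together with the first contour's edge line; here both
    # are point lists and the combination is an order-preserving,
    # de-duplicating union.
    seen, merged = set(), []
    for p in list(first_contour_pts) + list(second_edge_line):
        if p not in seen:
            seen.add(p)
            merged.append(p)
    return merged
```

For example, a first contour through (0, 0) and (1, 0), corrected with a second edge line through (1, 0) and (1, 1), yields the three-point contour (0, 0), (1, 0), (1, 1).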
For the specific details of the method executed by the processor 902, reference may be made to FIG. 4 above; details are not repeated here.
On the other hand, the processor 902 invokes the computer program stored in the memory 901 to execute the data annotation method shown in FIG. 7 above:
Obtain a first bounding box and a second bounding box of a target object according to target data of a target radar, where the first bounding box and the second bounding box are different, and the target data includes data obtained by the target radar detecting the target object;
Obtain a first contour of the target object according to the first bounding box;
Correct the bounding box of the first contour by using the second bounding box, to obtain a second contour of the target object.
For the specific details of the method executed by the processor 902, reference may be made to FIG. 7 above; details are not repeated here.
Correspondingly, the processor 902, invoking the computer program stored in the memory 901, may also be used to execute the method steps performed by the units in the data annotation apparatus 80 shown in FIG. 8; for the specific details, reference may be made to FIG. 8 above, and details are not repeated here.
In the electronic device 90 described in FIG. 9, a deep learning method based on pre-selected boxes is proposed to perform edge detection on the target object in the target image, obtaining the first contour of the target object; weak semantic information is then introduced to convert the correction of the edge points of the first contour into the correction of the edge lines of the first contour, with the correction yielding the second contour of the target object. This can greatly improve the accuracy and efficiency of the annotation results.
An embodiment of the present application further provides a computer-readable storage medium. The computer-readable storage medium stores a computer program, and when the computer program runs on one or more processors, the methods shown in FIG. 4 and FIG. 7 above can be implemented.
An embodiment of the present application further provides a computer program product. The computer program product includes a computer program, and when the computer program runs on a processor, the methods shown in FIG. 4 and FIG. 7 above can be implemented.
An embodiment of the present application further provides a chip. The chip includes a processor, and the processor is configured to execute instructions; when the processor executes the instructions, the methods shown in FIG. 4 and FIG. 7 above can be implemented.
Optionally, the chip further includes a communication interface, and the communication interface is used to input or output data.
An embodiment of the present application further provides a terminal, which includes at least one of the above data annotation apparatus 80, the electronic device 90, or the above chip.
An embodiment of the present application further provides a server, which includes at least one of the above data annotation apparatus 80, the electronic device 90, or the above chip.
In summary, a deep learning method based on pre-selected boxes is proposed to perform edge detection on the target object in the target image, obtaining the first contour of the target object; weak semantic information is then introduced to convert the correction of the edge points of the first contour into the correction of the edge lines of the first contour, with the correction yielding the second contour of the target object. This can greatly improve the accuracy and efficiency of the annotation results.
A person of ordinary skill in the art can understand that all or part of the processes in the methods of the above embodiments may be implemented by hardware related to a computer program. The computer program may be stored in a computer-readable storage medium, and when the computer program is executed, the processes of the above method embodiments may be included. The aforementioned storage medium includes various media capable of storing computer program code, such as read-only memory (ROM), random access memory (RAM), magnetic disks, or optical discs.

Claims (22)

  1. A data annotation method, characterized by comprising:
    obtaining a first edge line and a second edge line of a target object contained in a target image, wherein the first edge line and the second edge line are different;
    obtaining a first contour of the target object according to the first edge line;
    correcting the edge line of the first contour by using the second edge line, to obtain a second contour of the target object.
  2. The method according to claim 1, wherein the correcting the edge line of the first contour by using the second edge line comprises:
    determining, as the second contour, a contour formed by the second edge line and the edge line of the first contour.
  3. The method according to claim 1 or 2, wherein before the obtaining a first edge line and a second edge line of a target object contained in a target image, the method further comprises:
    performing segmentation processing on the target image to obtain first edge points of the target object, wherein the segmentation processing is used to separate the target object contained in the target image;
    performing edge detection on the target object to obtain second edge points of the target object, wherein the edge detection is used to separate the edge of the target object;
    the obtaining a first edge line and a second edge line of a target object contained in a target image comprises:
    determining the first edge line and the second edge line according to the first edge points and the second edge points.
  4. The method according to claim 3, wherein the determining the first edge line and the second edge line according to the first edge points and the second edge points comprises:
    obtaining the first edge line according to the first edge points and the second edge points, wherein points on the first edge line are edge points at which the first edge points and the second edge points coincide;
    obtaining the second edge line according to the first edge points, the second edge points, and the first edge line.
  5. The method according to claim 4, wherein the obtaining the second edge line according to the first edge points, the second edge points, and the first edge line comprises:
    obtaining a third edge line according to the first edge points and the second edge points, wherein points on the third edge line are first edge points or second edge points, and are not edge points at which the first edge points and the second edge points coincide;
    determining, as the second edge line, a third edge line that has two or more intersection points with the first edge line.
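As an illustrative aside (not part of the claims), the point-splitting logic of claims 4 and 5 can be sketched on a pixel grid as follows. Representing edge lines as point lists and approximating an "intersection" with the first edge line by 4-neighbourhood contact are assumptions made for the sketch.

```python
def split_edge_points(first_pts, second_pts):
    """Split detected edge points per claims 4-5.

    Points found by both segmentation and edge detection (coincident
    points) form the first edge line; the remaining points are the
    material for candidate (third) edge lines.
    """
    second_set = set(second_pts)
    coincident = [p for p in first_pts if p in second_set]
    coincident_set = set(coincident)
    rest = [p for p in list(first_pts) + list(second_pts)
            if p not in coincident_set]
    return coincident, rest

def is_second_edge_line(candidate, first_line):
    # A candidate third edge line qualifies as the second edge line
    # when it meets the first edge line at two or more points; here an
    # intersection is approximated by 4-adjacency on the pixel grid.
    fl = set(first_line)
    touches = sum(
        any((x + dx, y + dy) in fl
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)))
        for x, y in candidate
    )
    return touches >= 2
```

For example, a candidate line running one row above a two-point first edge line touches it twice and so qualifies, while an isolated far-away point does not.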
  6. The method according to any one of claims 3 to 5, wherein the performing edge detection on the target object to obtain second edge points of the target object comprises:
    determining a label of the target object according to positions of edge points on an edge line of the target object;
    adjusting the edge points on the edge line of the target object according to the label of the target object, to obtain the second edge points.
  7. The method according to claim 6, wherein the determining a label of the target object according to positions of edge points on an edge line of the target object comprises:
    when the number of edge points located on a first label is greater than the number of edge points located on a second label, determining the first label as the label of the target object;
    or, when the number of edge points located on the first label is less than the number of edge points located on the second label, determining the second label as the label of the target object;
    or, when the number of edge points located on the first label is equal to the number of edge points located on the second label, determining the label of the target object according to the criticality of the first label and the criticality of the second label.
  8. The method according to any one of claims 3 to 7, wherein the performing segmentation processing on the target image to obtain first edge points of the target object comprises:
    inputting the target image into a convolutional neural network for the segmentation processing, to obtain the first edge points.
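As an illustrative aside (not part of the claims), claim 8 only states that a convolutional neural network performs the segmentation processing. As a stand-in for a trained network, the sketch below applies a single fixed Laplacian convolution kernel and thresholds its response to mark edge pixels; the kernel choice, the threshold, and the plain-Python convolution are assumptions made purely for illustration.

```python
def conv2d_valid(img, kernel):
    # Minimal 2-D "valid" convolution (no padding, stride 1) over a
    # grayscale image given as a list of rows.
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(img), len(img[0])
    out = []
    for i in range(h - kh + 1):
        row = []
        for j in range(w - kw + 1):
            row.append(sum(img[i + di][j + dj] * kernel[di][dj]
                           for di in range(kh) for dj in range(kw)))
        out.append(row)
    return out

def first_edge_points(img, threshold=1):
    # Stand-in for the CNN of claim 8: a fixed Laplacian kernel
    # responds at intensity discontinuities; pixels whose response
    # magnitude reaches the threshold are taken as first edge points.
    lap = [[0, 1, 0], [1, -4, 1], [0, 1, 0]]
    resp = conv2d_valid(img, lap)
    return [(i + 1, j + 1)  # offset back into image coordinates
            for i, row in enumerate(resp)
            for j, v in enumerate(row) if abs(v) >= threshold]
```

For example, a single bright pixel in an otherwise black image produces responses at that pixel and its four neighbours, which become the detected edge points.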
  9. A data annotation apparatus, characterized by comprising:
    a determining unit, configured to obtain a first edge line and a second edge line of a target object contained in a target image, wherein the first edge line and the second edge line are different;
    the determining unit being further configured to obtain a first contour of the target object according to the first edge line;
    a correction unit, configured to correct the edge line of the first contour by using the second edge line, to obtain a second contour of the target object.
  10. The apparatus according to claim 9, wherein the correction unit is specifically configured to determine, as the second contour, a contour formed by the second edge line and the edge line of the first contour.
  11. The apparatus according to claim 9 or 10, wherein the apparatus further comprises:
    a segmentation unit, configured to perform segmentation processing on the target image to obtain first edge points of the target object, wherein the segmentation processing is used to separate the target object contained in the target image;
    an edge detection unit, configured to perform edge detection on the target object to obtain second edge points of the target object, wherein the edge detection is used to separate the edge of the target object;
    the determining unit being specifically configured to determine the first edge line and the second edge line according to the first edge points and the second edge points.
  12. The apparatus according to claim 11, wherein the determining unit is further specifically configured to obtain the first edge line according to the first edge points and the second edge points, wherein points on the first edge line are edge points at which the first edge points and the second edge points coincide;
    the determining unit is further specifically configured to obtain the second edge line according to the first edge points, the second edge points, and the first edge line.
  13. The apparatus according to claim 12, wherein the determining unit is further specifically configured to obtain a third edge line according to the first edge points and the second edge points, wherein points on the third edge line are first edge points or second edge points, and are not edge points at which the first edge points and the second edge points coincide;
    the determining unit is further specifically configured to determine, as the second edge line, a third edge line that has two or more intersection points with the first edge line.
  14. The apparatus according to any one of claims 11 to 13, wherein the edge detection unit is specifically configured to determine a label of the target object according to positions of edge points on an edge line of the target object;
    the edge detection unit is further specifically configured to adjust the edge points on the edge line of the target object according to the label of the target object, to obtain the second edge points.
  15. The apparatus according to claim 14, wherein the edge detection unit is further specifically configured to determine the first label as the label of the target object when the number of edge points located on a first label is greater than the number of edge points located on a second label;
    or, the edge detection unit is further specifically configured to determine the second label as the label of the target object when the number of edge points located on the first label is less than the number of edge points located on the second label;
    or, the edge detection unit is further specifically configured to determine the label of the target object according to the criticality of the first label and the criticality of the second label when the number of edge points located on the first label is equal to the number of edge points located on the second label.
  16. The apparatus according to any one of claims 11 to 15, wherein the segmentation unit is specifically configured to input the target image into a convolutional neural network for the segmentation processing, to obtain the first edge points.
  17. A data annotation apparatus, characterized by comprising a processor and a memory, wherein:
    the memory is configured to store a computer program;
    the processor is configured to execute the computer program stored in the memory, so that the data annotation apparatus executes the method according to any one of claims 1 to 8.
  18. A computer-readable storage medium, characterized in that:
    the computer-readable storage medium is used to store a computer program; when the computer program is executed, the method according to any one of claims 1 to 8 is implemented.
  19. A computer program product, characterized by comprising a computer program;
    when the computer program is executed, the method according to any one of claims 1 to 8 is implemented.
  20. A chip, characterized by comprising a processor;
    the processor is configured to execute instructions; when the instructions are executed, the method according to any one of claims 1 to 8 is implemented.
  21. A terminal, characterized by comprising the data annotation apparatus according to any one of claims 9 to 16, or the data annotation apparatus according to claim 17, or the chip according to claim 20.
  22. A server, characterized by comprising the data annotation apparatus according to any one of claims 9 to 16, or the data annotation apparatus according to claim 17, or the chip according to claim 20.
PCT/CN2022/091951 2021-05-24 2022-05-10 Data annotation method and related product WO2022247628A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110564875.4A CN115393379A (en) 2021-05-24 2021-05-24 Data annotation method and related product
CN202110564875.4 2021-05-24

Publications (1)

Publication Number Publication Date
WO2022247628A1 true WO2022247628A1 (en) 2022-12-01

Family

ID=84114079

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/091951 WO2022247628A1 (en) 2021-05-24 2022-05-10 Data annotation method and related product

Country Status (2)

Country Link
CN (1) CN115393379A (en)
WO (1) WO2022247628A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114463217A (en) * 2022-02-08 2022-05-10 口碑(上海)信息技术有限公司 Image processing method and device

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003006661A (en) * 2001-06-22 2003-01-10 Fuji Photo Film Co Ltd Thoracic contour detector
CN110378227A (en) * 2019-06-17 2019-10-25 北京达佳互联信息技术有限公司 Correct method, apparatus, equipment and the storage medium of sample labeled data
CN110488280A (en) * 2019-08-29 2019-11-22 广州小鹏汽车科技有限公司 A kind of modification method and device, vehicle, storage medium of parking stall profile
CN111259772A (en) * 2020-01-13 2020-06-09 广州虎牙科技有限公司 Image annotation method, device, equipment and medium
CN111325764A (en) * 2020-02-11 2020-06-23 广西师范大学 Fruit image contour recognition method

Also Published As

Publication number Publication date
CN115393379A (en) 2022-11-25


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22810355

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE