CN113345101A - Three-dimensional point cloud labeling method, device, equipment and storage medium - Google Patents


Info

Publication number: CN113345101A (application CN202110552669.1A; granted as CN113345101B)
Authority: CN (China)
Prior art keywords: point cloud, dimensional point, cloud data, color, information
Other languages: Chinese (zh)
Inventors: 李锐, 洪至远
Original and current assignee: Beijing Baidu Netcom Science and Technology Co Ltd
Events: application CN202110552669.1A filed by Beijing Baidu Netcom Science and Technology Co Ltd; publication of CN113345101A; application granted; publication of CN113345101B
Legal status: Granted; Active


Classifications

    All classifications fall under G (PHYSICS) > G06 (COMPUTING; CALCULATING OR COUNTING) > G06T (IMAGE DATA PROCESSING OR GENERATION, IN GENERAL):

    • G06T 19/00: Manipulating 3D models or images for computer graphics
    • G06T 7/10: Image analysis; Segmentation; Edge detection
    • G06T 7/90: Image analysis; Determination of colour characteristics
    • G06T 2207/10028: Image acquisition modality; Range image; Depth image; 3D point clouds
    • G06T 2219/004: Indexing scheme for manipulating 3D models or images for computer graphics; Annotating, labelling


Abstract

The application discloses a three-dimensional point cloud labeling method, apparatus, device, and storage medium, relating to the field of artificial intelligence and, in particular, to the technical field of image processing. The specific implementation scheme is as follows: after the collected three-dimensional point cloud data is acquired, the data is parsed to obtain its physical information, which comprises at least one of a gray value of each color channel or an intensity value. Attribute information of the three-dimensional point cloud data is determined according to this physical information; the data with its attribute information is input into a semantic segmentation model to obtain instance segmentation labels of the objects in the data; and the data is labeled with those instance segmentation labels. Labeling the three-dimensional point cloud data through both physical information and a semantic segmentation model improves the accuracy of point cloud data labeling.

Description

Three-dimensional point cloud labeling method, device, equipment and storage medium
Technical Field
The application discloses a three-dimensional point cloud labeling method, apparatus, device, and storage medium, relating to the technical field of artificial intelligence and, in particular, to the technical field of image processing.
Background
In unmanned driving practice, multiple sensors are generally used to acquire environmental information, and an algorithm makes decisions based on that information. In typical automatic driving, multiple cameras capture images from multiple viewing angles, and an image processing algorithm handles decision making from the captured image information. However, this approach has two weaknesses. First, it is strongly affected by weather: in dark environments and in dense fog, the sensors' acquisition capability drops sharply, so sufficient environmental information cannot be obtained for decision making. Second, the collected data is difficult to interpret correctly because of viewing-angle effects, so the precision is insufficient.
Disclosure of Invention
The application provides a three-dimensional point cloud labeling method, a device, equipment and a storage medium.
According to an aspect of the present application, there is provided a three-dimensional point cloud labeling method, including:
acquiring collected three-dimensional point cloud data;
analyzing the three-dimensional point cloud data to obtain physical information of the three-dimensional point cloud data; wherein the physical information comprises at least one of a gray value or an intensity value of each color channel;
determining attribute information of the three-dimensional point cloud data according to at least one of the gray value or the intensity value of each color channel;
inputting the three-dimensional point cloud data with the attribute information into a semantic segmentation model to obtain an instance segmentation label of an object in the three-dimensional point cloud data; and
labeling the three-dimensional point cloud data with the instance segmentation label.
According to another aspect of the present application, there is provided a three-dimensional point cloud annotation device, including:
the acquisition module is used for acquiring the acquired three-dimensional point cloud data;
the analysis module is used for analyzing the three-dimensional point cloud data to obtain physical information of the three-dimensional point cloud data; wherein the physical information comprises at least one of a gray value or an intensity value of each color channel;
the determining module is used for determining attribute information of the three-dimensional point cloud data according to at least one of the gray value or the intensity value of each color channel;
the input module is used for inputting the three-dimensional point cloud data with the attribute information into a semantic segmentation model to obtain an instance segmentation label of an object in the three-dimensional point cloud data;
and the labeling module is used for labeling the three-dimensional point cloud data with the instance segmentation labels.
According to another aspect of the present application, there is provided an electronic device including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the three-dimensional point cloud annotation method of the above embodiments.
According to another aspect of the present application, there is provided a non-transitory computer-readable storage medium storing computer instructions for causing the computer to execute the three-dimensional point cloud labeling method according to the above embodiments.
According to another aspect of the present application, a computer program product is provided, which includes a computer program that, when executed by a processor, implements the three-dimensional point cloud labeling method of the above embodiments.
According to the technology of the application, the technical problem of low precision in existing three-dimensional point cloud labeling is solved, and the precision of point cloud labeling is improved.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present application, nor do they limit the scope of the present application. Other features of the present application will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not intended to limit the present application. Wherein:
fig. 1 is a schematic flow chart of a three-dimensional point cloud annotation method according to an embodiment of the present application;
fig. 2 is a schematic flow chart of a three-dimensional point cloud labeling method according to a second embodiment of the present application;
fig. 3 is a schematic flow chart of a three-dimensional point cloud labeling method according to a third embodiment of the present application;
fig. 4 is a schematic flow chart of a three-dimensional point cloud labeling method according to a fourth embodiment of the present application;
fig. 5 is an exemplary diagram illustrating a differentiated three-dimensional point cloud according to an embodiment of the present disclosure;
fig. 6 is a schematic flow chart of a three-dimensional point cloud labeling method provided in the fifth embodiment of the present application;
fig. 7 is an illustration diagram of a three-dimensional point cloud annotation method according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of a three-dimensional point cloud annotation device according to an embodiment of the present disclosure;
fig. 9 is a block diagram of an electronic device of a three-dimensional point cloud annotation method according to an embodiment of the present application.
Detailed Description
The following description of exemplary embodiments of the present application, taken in conjunction with the accompanying drawings, includes various details of those embodiments to aid understanding; these details are to be considered exemplary only. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications may be made to the embodiments described herein without departing from the scope and spirit of the present application. Descriptions of well-known functions and constructions are likewise omitted for clarity and conciseness.
Three-dimensional instance segmentation and labeling refers to the ability to distinguish instance objects in point cloud files acquired by a lidar and to classify regions at multiple levels. In the related art, three-dimensional point clouds are generally labeled by frame selection: the labeled data can distinguish objects in space, and the labeled objects can be given richer information through annotations of attributes such as identifiers. However, because an object is only approximately enclosed by a contour frame, its shape cannot be restored well. For example, when labeling a tree, many extraneous points are framed, because the crown is very wide while the trunk is relatively narrow.
Therefore, the application provides a three-dimensional point cloud labeling method: after the three-dimensional point cloud data to be labeled is obtained, the data is parsed to obtain its physical information, which comprises at least one of a gray value of each color channel or an intensity value; the three-dimensional point cloud data is labeled according to this information to obtain three-dimensional point cloud data with labeling information; and the labeled data is input into a semantic segmentation model to obtain instance segmentation labels of the objects in the three-dimensional point cloud data.
The following describes a three-dimensional point cloud labeling method, apparatus, device, and storage medium according to an embodiment of the present application with reference to the drawings.
Fig. 1 is a schematic flow chart of a three-dimensional point cloud labeling method according to an embodiment of the present application.
The embodiment of the present application is described by way of example with the three-dimensional point cloud labeling method configured in a three-dimensional point cloud labeling apparatus, which can be applied to any electronic device so that the electronic device can perform the three-dimensional point cloud labeling function.
The electronic device may be a personal computer (PC), a cloud device, a mobile device, and the like; the mobile device may be a hardware device with an operating system, such as a mobile phone, tablet computer, personal digital assistant, wearable device, or in-vehicle device.
As shown in fig. 1, the three-dimensional point cloud labeling method may include the following steps:
step 101, acquiring collected three-dimensional point cloud data.
In the embodiment of the application, after the three-dimensional point cloud data collected by the lidar is acquired, the collected data can be labeled. For example, the three-dimensional point cloud data of the surroundings of an autonomous vehicle, collected by a lidar mounted on that vehicle, can be acquired.
Step 102, analyzing the three-dimensional point cloud data to obtain physical information of the three-dimensional point cloud data.
Wherein the physical information comprises at least one of a gray value of each color channel or an intensity value.
In the embodiment of the application, after the acquired three-dimensional point cloud data is acquired, the three-dimensional point cloud data can be analyzed to obtain at least one of the gray values or the intensity values of the various color channels contained in the three-dimensional point cloud data.
In the embodiment of the application, the collected three-dimensional point cloud data may include three-dimensional coordinates, color information, object reflection intensity information and the like, so that after the three-dimensional point cloud data is obtained, the three-dimensional point cloud data can be analyzed to obtain at least one of gray values or intensity values of various color channels.
Different materials reflect or absorb laser light differently because of their different physical characteristics. While collecting three-dimensional point cloud data, the lidar obtains an intensity value for each laser return it receives, and this intensity value can be used to distinguish materials such as lane lines (paint) from road surfaces (asphalt).
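The intensity-based distinction between materials described above can be sketched as a simple threshold rule. This is a minimal illustration: the 0.5 cut-off and the normalized [0, 1] intensity scale are assumptions for this sketch, not values from the patent.

```python
import numpy as np

def classify_by_intensity(intensity: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Label high-reflectance returns as lane-line paint, the rest as asphalt.

    Assumes intensity values normalized to [0, 1]; the threshold is
    illustrative only.
    """
    return np.where(intensity >= threshold, "paint", "asphalt")

# Four returns: bright paint, dark asphalt, paint, asphalt
intensity = np.array([0.9, 0.1, 0.7, 0.2])
print(classify_by_intensity(intensity))  # ['paint' 'asphalt' 'paint' 'asphalt']
```

In practice the threshold would be calibrated per sensor, since raw lidar intensity scales vary between devices.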
Step 103, determining attribute information of the three-dimensional point cloud data according to at least one of the gray value of each color channel or the intensity value.
After the gray values of the color channels of the three-dimensional point cloud data are determined, the attribute information of the three-dimensional point cloud data can be determined according to those gray values. That is, the attribute information may be the gray value of each color channel.
For example, the differences and contours between objects can be identified from the gray values of the color channels; the identified objects are then labeled with attribute information to obtain three-dimensional point cloud data with attribute information.
In another possible case, if the lidar does not collect the color information of an object, the three-dimensional point cloud data is parsed to obtain intensity values, and the attribute information of the three-dimensional point cloud data can be determined according to those intensity values.
For example, differences between objects can be identified according to the intensity values, and further, attribute information is labeled on the identified objects to obtain three-dimensional point cloud data labeled with the attribute information.
In yet another possible case, after the gray values of the color channels of the three-dimensional point cloud data are determined, the differences and contours between objects can be identified from those gray values; however, some objects, such as lane lines and road surfaces, may still be indistinguishable. In this case, the attribute information of the three-dimensional point cloud data can be determined from the gray values and the intensity values together, to obtain three-dimensional point cloud data labeled with attribute information.
Step 104, inputting the three-dimensional point cloud data with the attribute information into a semantic segmentation model to obtain the instance segmentation labels of the objects in the three-dimensional point cloud data.
The semantic segmentation model is trained using three-dimensional point cloud data with labeled object instances as training samples.
In the embodiment of the application, the three-dimensional point cloud data with the marked object instance can be used as a training sample to train the semantic segmentation model, so that the trained semantic segmentation model can accurately segment the three-dimensional point cloud data, and the precision of point cloud instance segmentation is improved.
In the embodiment of the application, after the three-dimensional point cloud data has been identified according to at least one of the gray value of each color channel or the intensity value, some object instances may remain unidentified. In this case, the three-dimensional point cloud data with the attribute information may be input into the trained semantic segmentation model for semantic segmentation, obtaining the instance segmentation labels of the objects in the three-dimensional point cloud data.
As an example, assume the three-dimensional point cloud data collected by the lidar includes streets, pedestrians, roads, and vehicles. The data is parsed to obtain its physical information, and the point cloud is identified according to that information to obtain data labeled with each category; individual instances within each category, however, may not be identifiable from the physical information alone. The three-dimensional point cloud data with this labeling information is then input into the semantic segmentation model to obtain the instance segmentation labels of the objects in the data. This improves the precision of three-dimensional point cloud labeling, and the labeled data can in turn be used to train higher-precision unmanned-driving algorithms.
In the embodiment of the application, the semantic segmentation model mainly predicts a label for each point in the point cloud; clustering is then performed according to each point's collected physical information (intensity value, reflectance value, and the like), its position, and its relation to surrounding points, yielding individual point clusters from which each instance can be determined. Each instance is then classified according to its features and the scene information, producing the instance segmentation labels of the objects in the three-dimensional point cloud data. Both semantics and instances are thereby obtained, and the algorithm can be used to preprocess the annotation data.
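The clustering step described above (grouping points into instance clusters from position plus physical information) might be sketched as below. DBSCAN is an assumed stand-in: the patent does not name a specific clustering algorithm, and the `eps`/`min_samples` values are illustrative only.

```python
import numpy as np
from sklearn.cluster import DBSCAN  # assumed stand-in clustering algorithm

def cluster_instances(xyz: np.ndarray, intensity: np.ndarray) -> np.ndarray:
    """Assign an instance id per point by clustering position + intensity."""
    features = np.hstack([xyz, intensity.reshape(-1, 1)])
    # eps/min_samples are illustrative; DBSCAN marks outliers with -1
    return DBSCAN(eps=0.8, min_samples=2).fit_predict(features)

# Two spatial groups, each with consistent reflectance
xyz = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0],
                [5.0, 5.0, 0.0], [5.1, 5.0, 0.0]])
labels = cluster_instances(xyz, np.array([0.2, 0.2, 0.9, 0.9]))
print(labels)  # two distinct instance ids, one per spatial group
```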
Step 105, labeling the three-dimensional point cloud data with the instance segmentation labels.
In the embodiment of the application, after the instance segmentation labels of the objects in the three-dimensional point cloud data are determined, the three-dimensional point cloud data can be labeled with those labels.
According to the three-dimensional point cloud labeling method of this embodiment, after the collected three-dimensional point cloud data is acquired, it is parsed to obtain its physical information, which comprises at least one of a gray value of each color channel or an intensity value; attribute information of the data is determined according to that physical information; the data with its attribute information is input into a semantic segmentation model to obtain the instance segmentation labels of the objects in the data; and the data is labeled with those labels. Labeling the three-dimensional point cloud data by combining physical information with a semantic segmentation model thus improves the accuracy of point cloud data labeling.
On the basis of the above embodiment, when the three-dimensional point cloud data is analyzed, color information included in the three-dimensional point cloud data may be processed according to a preset operation method, which is described in detail below with reference to fig. 2, where fig. 2 is a schematic flow diagram of the three-dimensional point cloud labeling method provided in the second embodiment of the present application.
As shown in fig. 2, the three-dimensional point cloud labeling method may include the following steps:
step 201, acquiring collected three-dimensional point cloud data.
Step 202, color information of each point cloud in the three-dimensional point cloud data is obtained. The color information of each point cloud comprises binary gray values of all color channels which are sequentially arranged.
As an example, the color information of a point cloud is formed by concatenating, in order, the 8-bit binary gray values of the R, G, and B color channels.
In a possible implementation of the embodiment of the application, the three-dimensional point cloud data may include color information of each point cloud, and after the acquired three-dimensional point cloud data is acquired, the three-dimensional point cloud data may be traversed to acquire the color information of each point cloud.
Step 203, splitting each point cloud's sequentially arranged binary gray values of the color channels into the 8-bit binary gray value of each corresponding color channel.
Step 204, converting the binary gray scale value of each color channel into a corresponding decimal gray scale value to obtain the gray scale value of each color channel.
In the embodiment of the application, after the three-dimensional point cloud data is read from the point cloud file, it needs to be processed. Because the color information in the three-dimensional point cloud data is formed by concatenating the binary gray values of the color channels in order, the sequentially arranged binary gray values of each point cloud must first be split into the binary gray value of each corresponding color channel; each channel's binary gray value is then converted into the corresponding decimal gray value, yielding the gray value of each color channel.
As an example, the color information of each point cloud consists of the sequentially arranged binary gray values of the R, G, and B color channels; the binary gray value of one channel is 8 bits, so the color information of each point cloud is a sequentially arranged 24-bit binary value. After the color information of a point cloud is obtained, this 24-bit value can be split into the three 8-bit binary gray values of the R, G, and B channels, and each 8-bit value converted into the corresponding decimal gray value to obtain the gray value of each channel.
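The 24-bit split described in this example can be sketched with bit operations. This is a minimal illustration: the R-in-high-byte layout follows the R, G, B sequence described above, but actual point cloud formats may pack the channels differently.

```python
def unpack_rgb(packed: int) -> tuple[int, int, int]:
    """Split a 24-bit color value into R, G, B decimal gray values (0-255)."""
    r = (packed >> 16) & 0xFF   # first 8 bits  -> R channel
    g = (packed >> 8) & 0xFF    # middle 8 bits -> G channel
    b = packed & 0xFF           # last 8 bits   -> B channel
    return r, g, b

# 11111111 00000000 01111111 in binary
packed = 0b111111110000000001111111
print(unpack_rgb(packed))  # (255, 0, 127)
```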
Step 205, determining attribute information of the three-dimensional point cloud data according to at least one of the gray value or the intensity value of each color channel.
Step 206, inputting the three-dimensional point cloud data with the attribute information into a semantic segmentation model to obtain an instance segmentation label of an object in the three-dimensional point cloud data.
Step 207, labeling the three-dimensional point cloud data with the instance segmentation labels.
In the embodiment of the present application, the implementation process of step 205 to step 207 may refer to the implementation process of step 103 to step 105 in the above embodiment, and is not described herein again.
According to the three-dimensional point cloud labeling method of this embodiment, after the collected three-dimensional point cloud data is acquired, the color information of each point cloud in the data is obtained, and the sequentially arranged binary gray values of each point cloud are split into the binary gray values of the corresponding color channels; each binary gray value is converted into the corresponding decimal gray value to obtain the gray value of each color channel, and the point cloud can then be rendered according to these gray values.
On the basis of the above embodiment, when acquiring three-dimensional point cloud data to be labeled, the three-dimensional point cloud data can be read from a point cloud file in a column reading manner, which is described in detail below with reference to fig. 3, where fig. 3 is a schematic flow diagram of a three-dimensional point cloud labeling method provided in the third embodiment of the present application.
As shown in fig. 3, the three-dimensional point cloud labeling method may include the following steps:
step 301, point cloud files are obtained.
In the embodiment of the application, after the three-dimensional point cloud data is acquired by the laser radar, the three-dimensional point cloud data can be stored in the point cloud file.
When the three-dimensional point cloud data is marked, a pre-stored point cloud file can be obtained from a server, so that the three-dimensional point cloud data to be marked can be read from the point cloud file.
Step 302, reading three-dimensional point cloud data from the point cloud file in a column reading mode.
When reading three-dimensional point cloud data from a point cloud file, the three-dimensional point cloud data can be read by rows or columns.
Reading by rows has the advantage that each read yields a complete point, but the disadvantage that information within a point may be missing. Reading by columns has the advantage that all the information in the point cloud can be read completely, at the cost of needing to align the columns to reconstruct complete points. To process the whole point cloud completely, the present application reads the three-dimensional point cloud data to be labeled from the point cloud file in column-reading mode.
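Column-wise reading as described above might be sketched like this. It is a hedged illustration assuming a whitespace-separated ASCII point cloud file with a fixed field order; the actual file layout used by the patent is not specified.

```python
import io
import numpy as np

FIELDS = ["x", "y", "z", "intensity"]  # assumed field order

def read_columns(f) -> dict[str, np.ndarray]:
    """Read every field of the point cloud as a complete column."""
    data = np.loadtxt(f)
    # each column is read in full, so no field information is lost;
    # aligning the columns back into complete points happens afterwards
    return {name: data[:, i] for i, name in enumerate(FIELDS)}

buf = io.StringIO("1.0 2.0 0.5 0.9\n1.1 2.1 0.5 0.1\n")
cols = read_columns(buf)
print(cols["intensity"])  # [0.9 0.1]
```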
Step 303, analyzing the three-dimensional point cloud data to obtain physical information of the three-dimensional point cloud data.
Wherein the physical information comprises at least one of a gray value of each color channel or an intensity value.
Step 304, determining attribute information of the three-dimensional point cloud data according to at least one of the gray value of each color channel or the intensity value.
Step 305, inputting the three-dimensional point cloud data with the attribute information into a semantic segmentation model to obtain the instance segmentation labels of the objects in the three-dimensional point cloud data.
Step 306, labeling the three-dimensional point cloud data with the instance segmentation labels.
The semantic segmentation model is trained using three-dimensional point cloud data with labeled object instances as training samples.
In the embodiment of the present application, the implementation process of step 303 to step 306 may refer to the implementation process of step 102 to step 105 in the above embodiment, and is not described herein again.
According to the three-dimensional point cloud labeling method of this embodiment, after a point cloud file is obtained, the three-dimensional point cloud data is read from it in column-reading mode and parsed to obtain its physical information, which comprises at least one of a gray value of each color channel or an intensity value; attribute information of the data is determined according to that physical information; the data with its attribute information is input into a semantic segmentation model to obtain the instance segmentation labels of the objects in the data; and the data is labeled with those labels. Complete three-dimensional point cloud information is thus read from the point cloud file by column reading, and labeling the data by combining physical information with a semantic segmentation model improves the accuracy of point cloud data labeling.
On the basis of the embodiment, after the physical information of the three-dimensional point cloud data is acquired, the three-dimensional point cloud data can be displayed in a partitioned manner according to the physical information. Referring to fig. 4 for details, fig. 4 is a schematic flow chart of a three-dimensional point cloud labeling method according to a fourth embodiment of the present application.
As shown in fig. 4, the three-dimensional point cloud labeling method may include the following steps:
step 401, acquiring collected three-dimensional point cloud data.
Step 402, analyzing the three-dimensional point cloud data to obtain physical information of the three-dimensional point cloud data.
Wherein the physical information comprises at least one of a gray value of each color channel or an intensity value.
In the embodiment of the present application, the implementation process of step 401 to step 402 may refer to the implementation process of step 101 to step 102 in the above embodiment, and is not described herein again.
And 403, displaying the three-dimensional point cloud data in a partition mode according to the physical information.
The displayed three-dimensional point cloud data carries at least one of the gray value or the intensity value of each color channel.
Under a possible condition of the embodiment of the application, after the three-dimensional point cloud data is analyzed to obtain the physical information of the three-dimensional point cloud, the three-dimensional point cloud data can be displayed in a distinguishing mode according to the physical information.
In the application, when the three-dimensional point cloud data is displayed in a partitioned mode, the physical information obtained through analysis can be used as scalar value attributes to be mounted in the displayed point cloud, and the effect of visual partitioned display is achieved through switching the displayed scalar. That is, the displayed three-dimensional point cloud data carries at least one of the gray value or the intensity value of each color channel. As shown in fig. 5, the three-dimensional point clouds can be displayed in a partitioned manner according to the intensity values, and each three-dimensional point cloud carries corresponding physical information when the three-dimensional point clouds are displayed visually.
And step 404, labeling the three-dimensional point cloud data according to at least one of the gray value or the intensity value of each color channel to obtain the three-dimensional point cloud data with labeling information.
Step 405, inputting the three-dimensional point cloud data with the attribute information into a semantic segmentation model to obtain an example segmentation label of an object in the three-dimensional point cloud data.
And 406, marking the three-dimensional point cloud data by using the example segmentation labels.
The semantic segmentation model is obtained by training by taking three-dimensional point cloud data for labeling object instances as training samples.
In the embodiment of the present application, the implementation process of step 404 to step 406 may refer to the implementation process of step 103 to step 105 in the above embodiment, and is not described herein again.
According to the three-dimensional point cloud labeling method, after the acquired three-dimensional point cloud data are acquired, the three-dimensional point cloud data are analyzed to obtain the physical information of the three-dimensional point cloud data, and the three-dimensional point cloud data are displayed in a partitioned mode according to the physical information. Therefore, the partition display of the three-dimensional point cloud is realized, and the object example can be determined more intuitively according to the partition display result.
On the basis of the above embodiment, after the example segmentation labels of the objects in the three-dimensional point cloud data are obtained, the example segmentation labels may be labeled based on a user operation. Fig. 6 is a schematic flow chart of a three-dimensional point cloud labeling method according to a fifth embodiment of the present application.
As shown in fig. 6, the three-dimensional point cloud labeling method may include the following steps:
step 601, marking the example segmentation labels of the objects in the three-dimensional point cloud data in response to user operation.
In the embodiment of the application, after semantic segmentation is performed on the three-dimensional point cloud data to obtain the instance segmentation labels of the objects in the three-dimensional point cloud data, a labeling operator can label the three-dimensional point cloud according to the instance segmentation labels of the objects in the three-dimensional point cloud data.
As an example, assuming that the example segmentation labels of the object in the three-dimensional point cloud data are determined as the vehicle 1 and the vehicle 2, the vehicle 1 and the vehicle 2 may be labeled in the three-dimensional point cloud data in response to a user operation, so as to improve the accuracy of the unmanned algorithm training using the labeled three-dimensional point cloud data.
Step 602, aligning and storing the marked three-dimensional point cloud data into a point cloud file according to rows.
In the embodiment of the application, labeling can be performed by modifying the label attribute of the point cloud, for example, a cluster _ id attribute can be added on the basis of three-dimensional point cloud data, and the attribute is recorded by using an unsigned 32-bit integer. And after the marking is finished, the three-dimensional point cloud data are aligned according to columns and stored in a point cloud file. Since the data of the character string type cannot be stored in the point cloud file, the instance can be associated with the cluster _ id in the application.
In the method and the device, the example segmentation labels of the objects in the three-dimensional point cloud data are labeled in response to user operation, and the labeled three-dimensional point cloud data are aligned and stored in the point cloud file according to columns. Therefore, the three-dimensional point cloud data can be labeled, and further, after the labeled three-dimensional point cloud data is obtained from the point cloud file, the unmanned algorithm with higher precision can be trained based on the labeling result.
As an example, as shown in fig. 7, three-dimensional point cloud data included in a point cloud file (e.g., a pcd-file) is read in columns, the three-dimensional point cloud data is analyzed to obtain at least one of a gray value or an intensity value of each color channel, the three-dimensional point cloud data is labeled according to at least one of the gray value or the intensity value of each color channel to obtain three-dimensional point cloud data with labeling information, and the three-dimensional point cloud data is displayed in a partitioned manner according to physical information. Inputting the three-dimensional point cloud data with labeling information into a semantic segmentation model to obtain an instance segmentation label of an object in the three-dimensional point cloud data, labeling the three-dimensional point cloud data based on the instance segmentation label, and aligning and storing the labeled three-dimensional point cloud data into a point cloud file (such as a pcd file) according to columns.
In order to implement the above embodiments, the present application provides a three-dimensional point cloud labeling apparatus.
Fig. 8 is a schematic structural diagram of a three-dimensional point cloud annotation device according to an embodiment of the present application.
As shown in fig. 8, the three-dimensional point cloud labeling apparatus 800 may include: an acquisition module 810, a parsing module 820, a determination module 830, an input module 840, and a labeling module 850.
The acquiring module 810 is configured to acquire acquired three-dimensional point cloud data;
an analyzing module 820, configured to analyze the three-dimensional point cloud data to obtain physical information of the three-dimensional point cloud data; wherein the physical information comprises at least one of a gray value or an intensity value of each color channel;
a determining module 830, configured to determine attribute information of the three-dimensional point cloud data according to at least one of the gray values or the intensity values of the color channels;
an input module 840, configured to input the three-dimensional point cloud data with the attribute information into a semantic segmentation model, so as to obtain an instance segmentation label of an object in the three-dimensional point cloud data;
and the labeling module 850 is used for labeling the three-dimensional point cloud data by adopting the example segmentation labels.
In one possible case, the three-dimensional point cloud data includes color information; the parsing module 820 may also be configured to:
acquiring color information of each point cloud in the three-dimensional point cloud data; the color information comprises binary gray values of all color channels which are sequentially arranged; dividing the sequentially arranged binary gray values of the color channels of each point cloud into binary gray values of corresponding color channels; and converting the binary gray value of each color channel into a corresponding decimal gray value to obtain the gray value of each color channel.
In another possible case, the obtaining module 810 may further be configured to: acquiring a point cloud file; and reading the three-dimensional point cloud data to be marked from the point cloud file in a column reading mode.
In another possible case, the three-dimensional point cloud labeling apparatus 700 may further include:
and the display module is used for displaying the three-dimensional point cloud data in a partitioned manner according to the physical information, wherein the displayed three-dimensional point cloud data carries at least one of the gray value or the intensity value of each color channel.
In another possible case, the three-dimensional point cloud labeling apparatus 700 may further include:
the marking module is used for marking the example segmentation labels of the objects in the three-dimensional point cloud data in response to user operation;
and the storage module is used for aligning and storing the marked three-dimensional point cloud data into the point cloud file according to rows.
It should be noted that the explanation of the embodiment of the three-dimensional point cloud labeling method is also applicable to the three-dimensional point cloud labeling apparatus of the embodiment, and is not repeated herein.
According to the three-dimensional point cloud labeling device, after the acquired three-dimensional point cloud data is acquired, the three-dimensional point cloud data is analyzed, and physical information of the three-dimensional point cloud data is acquired; the physical information comprises at least one of a gray value or an intensity value of each color channel, the attribute information of the three-dimensional point cloud data is determined according to at least one of the gray value or the intensity value of each color channel, the three-dimensional point cloud data with the attribute information is input into a semantic segmentation model, an instance segmentation label of an object in the three-dimensional point cloud data is obtained, and the three-dimensional point cloud data is marked by the instance segmentation label. Therefore, the three-dimensional point cloud data is labeled in a mode of combining the physical information and the semantic segmentation model, and the accuracy of point cloud data labeling is improved.
According to an embodiment of the present application, an electronic device and a readable storage medium are also provided.
To achieve the above embodiments, the present application proposes an electronic device comprising at least one processor; and a memory communicatively coupled to the at least one processor; wherein the content of the first and second substances,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the three-dimensional point cloud annotation method of the above embodiments.
In order to achieve the above embodiments, the present application proposes a non-transitory computer-readable storage medium storing computer instructions for causing the computer to execute the three-dimensional point cloud labeling method of the above embodiments.
In order to implement the above embodiments, the present application proposes a computer program product, which includes a computer program, and when the computer program is executed by a processor, the computer program implements the three-dimensional point cloud annotation method of the above embodiments.
As shown in fig. 9, fig. 9 is a block diagram of an electronic device of a three-dimensional point cloud annotation method according to an embodiment of the present application. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the present application that are described and/or claimed herein.
As shown in fig. 9, the electronic apparatus includes: one or more processors 901, memory 902, and interfaces for connecting the various components, including a high-speed interface and a low-speed interface. The various components are interconnected using different buses and may be mounted on a common motherboard or in other manners as desired. The processor may process instructions for execution within the electronic device, including instructions stored in or on the memory to display graphical information of a GUI on an external input/output apparatus (such as a display device coupled to the interface). In other embodiments, multiple processors and/or multiple buses may be used, along with multiple memories and multiple memories, as desired. Also, multiple electronic devices may be connected, with each device providing portions of the necessary operations (e.g., as a server array, a group of blade servers, or a multi-processor system). Fig. 9 illustrates an example of a processor 901.
Memory 902 is a non-transitory computer readable storage medium as provided herein. The memory stores instructions executable by at least one processor to cause the at least one processor to perform the three-dimensional point cloud annotation method provided by the present application. A non-transitory computer-readable storage medium of the present application stores computer instructions for causing a computer to perform the three-dimensional point cloud annotation method provided herein.
The memory 902, which is a non-transitory computer readable storage medium, can be used to store non-transitory software programs, non-transitory computer executable programs, and modules, such as program instructions/modules corresponding to the three-dimensional point cloud annotation method in the embodiment of the present application (for example, the obtaining module 810, the parsing module 820, the determining module 830, the inputting module 840, and the annotation module 850 shown in fig. 8). The processor 901 executes various functional applications and data processing of the server by running non-transitory software programs, instructions and modules stored in the memory 902, that is, the three-dimensional point cloud labeling method in the above method embodiment is realized.
The memory 902 may include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required for at least one function; the storage data area may store data created according to use of the electronic device, and the like. Further, the memory 902 may include high speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory 902 may optionally include memory located remotely from the processor 901, which may be connected to the electronic device via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The electronic device may further include: an input device 903 and an output device 904. The processor 901, the memory 902, the input device 903 and the output device 904 may be connected by a bus or other means, and fig. 9 illustrates the connection by a bus as an example.
The input device 903 may receive input numeric or character information and generate key signal inputs related to user settings and function control of the electronic device, such as an input device like a touch screen, a keypad, a mouse, a track pad, a touch pad, a pointing stick, one or more mouse buttons, a track ball, a joystick, etc. The output devices 904 may include a display device, auxiliary lighting devices (e.g., LEDs), tactile feedback devices (e.g., vibrating motors), and the like. The display device may include, but is not limited to, a Liquid Crystal Display (LCD), a Light Emitting Diode (LED) display, and a plasma display. In some implementations, the display device can be a touch screen.
Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, application specific ASICs (application specific integrated circuits), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
These computer programs (also known as programs, software applications, or code) include machine instructions for a programmable processor, and may be implemented using high-level procedural and/or object-oriented programming languages, and/or assembly/machine languages. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic discs, optical disks, memory, Programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), the internet, and blockchain networks.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, which is also called a cloud computing server or a cloud host, and is a host product in a cloud computing service system, so as to solve the defects of high management difficulty and weak service extensibility in a traditional physical host and a Virtual Private Server (VPS). The server may also be a server of a distributed system, or a server incorporating a blockchain.
According to the technical scheme, the acquisition, storage, application and the like of the personal information of the related user are all in accordance with the regulations of related laws and regulations, and the customs of the public order is not violated.
According to the technical scheme of the embodiment of the application, after the acquired three-dimensional point cloud data is acquired, the three-dimensional point cloud data is analyzed to obtain physical information of the three-dimensional point cloud data; the physical information comprises at least one of a gray value or an intensity value of each color channel, the attribute information of the three-dimensional point cloud data is determined according to at least one of the gray value or the intensity value of each color channel, the three-dimensional point cloud data with the attribute information is input into a semantic segmentation model, an instance segmentation label of an object in the three-dimensional point cloud data is obtained, and the three-dimensional point cloud data is marked by the instance segmentation label. Therefore, the three-dimensional point cloud data is labeled in a mode of combining the physical information and the semantic segmentation model, and the accuracy of point cloud data labeling is improved.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present application may be executed in parallel, sequentially, or in different orders, and the present invention is not limited thereto as long as the desired results of the technical solutions disclosed in the present application can be achieved.
The above-described embodiments should not be construed as limiting the scope of the present application. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (13)

1. A three-dimensional point cloud labeling method comprises the following steps:
acquiring collected three-dimensional point cloud data;
analyzing the three-dimensional point cloud data to obtain physical information of the three-dimensional point cloud data; wherein the physical information comprises at least one of a gray value or an intensity value of each color channel;
determining attribute information of the three-dimensional point cloud data according to at least one of the gray value or the intensity value of each color channel;
inputting the three-dimensional point cloud data with the attribute information into a semantic segmentation model to obtain an example segmentation label of an object in the three-dimensional point cloud data;
and labeling the three-dimensional point cloud data by using the example segmentation label.
2. The annotation method of claim 1, wherein the three-dimensional point cloud data comprises color information; the analyzing the three-dimensional point cloud data to obtain the physical information of the three-dimensional point cloud data comprises the following steps:
acquiring color information of each point cloud in the three-dimensional point cloud data; the color information comprises binary gray values of all color channels which are sequentially arranged;
dividing the sequentially arranged binary gray values of the color channels of each point cloud into binary gray values of corresponding color channels; and converting the binary gray value of each color channel into a corresponding decimal gray value to obtain the gray value of each color channel.
3. The annotation method of claim 1, wherein said obtaining the acquired three-dimensional point cloud data comprises:
acquiring a point cloud file;
and reading the three-dimensional point cloud data from the point cloud file in a column reading mode.
4. The labeling method of claim 1, wherein after analyzing the three-dimensional point cloud data to obtain physical information of the three-dimensional point cloud data, the method further comprises:
and displaying the three-dimensional point cloud data in a partitioned manner according to the physical information, wherein the displayed three-dimensional point cloud data carries at least one of the gray value or the intensity value of each color channel.
5. The annotation method of any one of claims 1 to 4, wherein the method further comprises:
marking an instance segmentation label of an object in the three-dimensional point cloud data in response to a user operation;
and aligning and storing the marked three-dimensional point cloud data into the point cloud file according to rows.
6. A three-dimensional point cloud annotation device comprising:
the acquisition module is used for acquiring the acquired three-dimensional point cloud data;
the analysis module is used for analyzing the three-dimensional point cloud data to obtain physical information of the three-dimensional point cloud data; wherein the physical information comprises at least one of a gray value or an intensity value of each color channel;
the determining module is used for determining attribute information of the three-dimensional point cloud data according to at least one of the gray value or the intensity value of each color channel;
the input module is used for inputting the three-dimensional point cloud data with the attribute information into a semantic segmentation model to obtain an example segmentation label of an object in the three-dimensional point cloud data;
and the marking module is used for marking the three-dimensional point cloud data by adopting the example segmentation labels.
7. The annotation device of claim 6, wherein the three-dimensional point cloud data comprises color information; the analysis module is further configured to:
acquiring color information of each point cloud in the three-dimensional point cloud data; the color information comprises binary gray values of all color channels which are sequentially arranged;
dividing the sequentially arranged binary gray values of the color channels of each point cloud into 8-bit binary gray values corresponding to the color channels; and converting the 8-bit binary gray value of each color channel into a corresponding decimal gray value to obtain the gray value of each color channel.
8. The annotating device of claim 6, wherein the obtaining module is further configured to:
acquiring a point cloud file;
and reading the three-dimensional point cloud data to be marked from the point cloud file in a column reading mode.
9. The annotating device of claim 6, wherein the device further comprises:
and the display module is used for displaying the three-dimensional point cloud data in a partitioned manner according to the physical information, wherein the displayed three-dimensional point cloud data carries at least one of the gray value or the intensity value of each color channel.
10. The annotating device of any one of claims 6-9, wherein the device further comprises:
the marking module is used for marking the example segmentation labels of the objects in the three-dimensional point cloud data in response to user operation;
and the storage module is used for aligning and storing the marked three-dimensional point cloud data into the point cloud file according to rows.
11. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein the content of the first and second substances,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the three-dimensional point cloud annotation method of any one of claims 1-5.
12. A non-transitory computer-readable storage medium storing computer instructions for causing the computer to perform the three-dimensional point cloud annotation method of any one of claims 1-5.
13. A computer program product comprising a computer program which, when executed by a processor, implements the three-dimensional point cloud annotation method of any one of claims 1-5.
CN202110552669.1A 2021-05-20 2021-05-20 Three-dimensional point cloud labeling method, device, equipment and storage medium Active CN113345101B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110552669.1A CN113345101B (en) 2021-05-20 2021-05-20 Three-dimensional point cloud labeling method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110552669.1A CN113345101B (en) 2021-05-20 2021-05-20 Three-dimensional point cloud labeling method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113345101A true CN113345101A (en) 2021-09-03
CN113345101B CN113345101B (en) 2023-07-25

Family

ID=77470054

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110552669.1A Active CN113345101B (en) 2021-05-20 2021-05-20 Three-dimensional point cloud labeling method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113345101B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115187713A (en) * 2022-09-08 2022-10-14 山东信通电子股份有限公司 Method, device and medium for accelerating point cloud point selection operation

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190213790A1 (en) * 2018-01-11 2019-07-11 Mitsubishi Electric Research Laboratories, Inc. Method and System for Semantic Labeling of Point Clouds
CN111047596A (en) * 2019-12-12 2020-04-21 中国科学院深圳先进技术研究院 Three-dimensional point cloud instance segmentation method and system and electronic equipment
CN111489358A (en) * 2020-03-18 2020-08-04 华中科技大学 Three-dimensional point cloud semantic segmentation method based on deep learning
CN112785714A (en) * 2021-01-29 2021-05-11 北京百度网讯科技有限公司 Point cloud instance labeling method and device, electronic equipment and medium

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190213790A1 (en) * 2018-01-11 2019-07-11 Mitsubishi Electric Research Laboratories, Inc. Method and System for Semantic Labeling of Point Clouds
CN111047596A (en) * 2019-12-12 2020-04-21 中国科学院深圳先进技术研究院 Three-dimensional point cloud instance segmentation method and system and electronic equipment
CN111489358A (en) * 2020-03-18 2020-08-04 华中科技大学 Three-dimensional point cloud semantic segmentation method based on deep learning
CN112785714A (en) * 2021-01-29 2021-05-11 北京百度网讯科技有限公司 Point cloud instance labeling method and device, electronic equipment and medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
顾军华;李炜;董永峰;: "基于点云数据的分割方法综述", 燕山大学学报, no. 02, pages 35 - 47 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115187713A (en) * 2022-09-08 2022-10-14 山东信通电子股份有限公司 Method, device and medium for accelerating point cloud point selection operation
CN115187713B (en) * 2022-09-08 2023-01-13 山东信通电子股份有限公司 Method, equipment and medium for accelerating point cloud point selection operation

Also Published As

Publication number Publication date
CN113345101B (en) 2023-07-25

Similar Documents

Publication Publication Date Title
US9317732B2 (en) Method and apparatus of determining air quality
US20230005257A1 (en) Illegal building identification method and apparatus, device, and storage medium
CN110675644B (en) Method and device for identifying road traffic lights, electronic equipment and storage medium
CN111626206A (en) High-precision map construction method and device, electronic equipment and computer storage medium
CN112507949A (en) Target tracking method and device, road side equipment and cloud control platform
CN111695488A (en) Interest plane identification method, device, equipment and storage medium
CN111709328A (en) Vehicle tracking method and device and electronic equipment
CN111783646B (en) Training method, device, equipment and storage medium of pedestrian re-identification model
CN111292531B (en) Tracking method, device and equipment of traffic signal lamp and storage medium
CN113091757B (en) Map generation method and device
CN111767831B (en) Method, apparatus, device and storage medium for processing image
CN111652153A (en) Scene automatic identification method and device, unmanned vehicle and storage medium
CN111832578A (en) Interest point information processing method and device, electronic equipment and storage medium
CN111523515A (en) Method and device for evaluating environment cognitive ability of automatic driving vehicle and storage medium
CN111967490A (en) Model training method for map detection and map detection method
CN111652112A (en) Lane flow direction identification method and device, electronic equipment and storage medium
CN111753911A (en) Method and apparatus for fusing models
CN114186007A (en) High-precision map generation method and device, electronic equipment and storage medium
CN111540010A (en) Road monitoring method and device, electronic equipment and storage medium
CN111339877A (en) Method and device for detecting length of blind area, electronic equipment and storage medium
CN113673281A (en) Speed limit information determining method, device, equipment and storage medium
CN113345101B (en) Three-dimensional point cloud labeling method, device, equipment and storage medium
CN110866504A (en) Method, device and equipment for acquiring marked data
CN113011298B (en) Truncated object sample generation, target detection method, road side equipment and cloud control platform
CN113297878A (en) Road intersection identification method and device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant