CN115453491A - Point cloud data processing method, device and system and storage medium - Google Patents

Point cloud data processing method, device and system and storage medium

Info

Publication number
CN115453491A
Authority
CN
China
Prior art keywords
data
point cloud
cloud data
original
frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111356228.0A
Other languages
Chinese (zh)
Inventor
刘艺博
冯宗宝
李昂
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing CHJ Automobile Technology Co Ltd
Original Assignee
Beijing CHJ Automobile Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing CHJ Automobile Technology Co Ltd filed Critical Beijing CHJ Automobile Technology Co Ltd
Priority to CN202111356228.0A
Publication of CN115453491A
Legal status: Pending

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S7/4802 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/86 Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88 Lidar systems specially adapted for specific applications
    • G01S17/93 Lidar systems specially adapted for specific applications for anti-collision purposes
    • G01S17/931 Lidar systems specially adapted for specific applications for anti-collision purposes of land vehicles
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/80 Geometric correction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

The application provides a point cloud data processing method, device, system and storage medium. The method comprises the following steps: in response to an instruction to create a processing task, acquiring raw data, wherein the raw data comprise raw laser point cloud data and carrier motion data; grouping the raw data according to a preset rule to obtain N groups of target data, where N is a positive integer; and using N containers of a container cluster to perform distortion correction in parallel on the raw laser point cloud data in the N groups of target data based on the carrier motion data, obtaining a plurality of distortion-corrected single-frame point cloud data. By configuring the container cluster and processing the raw point cloud data in parallel, the production efficiency of large-scale point cloud data is improved; and distortion correction of the raw laser point cloud data yields more accurate laser point cloud data, providing more reliable data support for automatic driving.

Description

Point cloud data processing method, device and system and storage medium
Technical Field
The present disclosure relates to the field of automatic driving technologies, and in particular, to a method, an apparatus, a system, and a storage medium for processing point cloud data.
Background
In the field of automatic driving, three-dimensional laser radar is now widely used as a navigation sensor: it can acquire point cloud data of the environment, has a wide detection range and high precision, is not affected by illumination conditions, and can accurately describe the structure of the environment. Point cloud data refer to a collection of vectors in a three-dimensional coordinate system. An inertial measurement unit (IMU) is a device that measures the three-axis attitude angles (or angular rates) and accelerations of an object. The IMU provides relative positioning information that describes the path of the object's movement relative to a starting point; it does not provide absolute location information and is therefore often used together with GPS.
Laser point cloud data are collected by a laser radar, processed, and then combined with the IMU (inertial measurement unit) and GPS (global positioning system) to obtain an obstacle-avoidance result. However, current point cloud data processing mainly handles a single group of acquired data at a time, so it is difficult to process the massive collected raw data efficiently and make full use of computing resources; in addition, the generated original point cloud data contain motion distortion, and therefore spatial-position errors, caused by the relative displacement of the laser radar device during acquisition.
Disclosure of Invention
The application provides a point cloud data processing method, device, system and storage medium, which are used to improve the processing efficiency of large-scale point cloud data. The technical scheme of the application is as follows:
in a first aspect, an embodiment of the present application provides a point cloud data processing method, including:
in response to an instruction to create a processing task, acquiring raw data, wherein the raw data comprises raw laser point cloud data and carrier motion data;
grouping the original data according to a preset rule to obtain N groups of target data, wherein N is a positive integer;
correspondingly starting N containers of the container cluster according to the N groups of target data;
and carrying out distortion correction on the original laser point cloud data in the N groups of target data based on the carrier motion data by utilizing the N containers in parallel to obtain a plurality of single-frame point cloud data after distortion correction.
In a second aspect, an embodiment of the present application provides a point cloud data processing apparatus, including:
the acquisition module is used for responding to an instruction for creating a processing task and acquiring original data, wherein the original data comprises original laser point cloud data and carrier motion data;
the grouping module is used for grouping the original data according to a preset rule to obtain N groups of target data, wherein N is a positive integer;
the distribution module is used for correspondingly starting N containers of the container cluster according to the N groups of target data;
and the processing module is used for utilizing the N containers to carry out distortion correction on the original laser point cloud data in the N groups of target data based on the carrier motion data in parallel to obtain a plurality of single-frame point cloud data after the distortion correction.
In a third aspect, an embodiment of the present application provides a point cloud data processing system, including a server on which a Kubernetes container cluster is configured and on which a point cloud data production service Pod and a plurality of point cloud data production Pods are established. The point cloud data production service Pod is used to receive an instruction to create a processing task, acquire original data according to the instruction, group the original data according to a preset rule to obtain N groups of target data, and store the result data output by the point cloud data production Pods; the original data comprise original laser point cloud data collected by a laser radar and carrier motion data, and N is a positive integer. The plurality of point cloud data production Pods are used to perform distortion correction in parallel on the original laser point cloud data in the N groups of target data based on the carrier motion data, obtaining a plurality of distortion-corrected single-frame point cloud data.
In a fourth aspect, an embodiment of the present application provides an electronic device, including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the point cloud data processing method of the first aspect.
In a fifth aspect, the present application provides a computer-readable storage medium, on which a computer program is stored, and the computer program, when executed by a processor, implements the steps of the method of the first aspect.
In a sixth aspect, the present application provides a computer program product, which includes a computer program that, when executed by a processor, implements the steps of the method of the first aspect.
According to the point cloud data processing method, device, system and storage medium of the application, a container cluster is configured and the original point cloud data are processed in parallel, which improves the production efficiency of large-scale point cloud data; and distortion correction of the original laser point cloud data yields more accurate laser point cloud data, providing more reliable data support for automatic driving.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and, together with the description, serve to explain the principles of the application and are not to be construed as limiting the application.
Fig. 1 is a schematic flow chart of a point cloud data processing method according to an embodiment of the present application.
Fig. 2 is a schematic flow chart of a point cloud data processing method according to another embodiment of the present application.
Fig. 3 is a schematic flow chart of a point cloud data synchronization method according to an embodiment of the present application.
Fig. 4 is a schematic flow chart of a point cloud data distortion correction method according to an embodiment of the application.
Fig. 5 is a schematic block diagram of a point cloud data processing apparatus according to an embodiment of the present application.
FIG. 6 is a schematic block diagram of a point cloud data processing system according to an embodiment of the present application.
Fig. 7 is a schematic block diagram of an electronic device according to an embodiment of the application.
Detailed Description
In order to make those skilled in the art better understand the technical solutions of the present application, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in this application are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the application described herein are capable of operation in sequences other than those illustrated or otherwise described herein. The implementations described in the following exemplary examples do not represent all implementations consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
Autopilot broadly refers to technology that assists or replaces a human in driving an automobile. Within automatic driving, high-precision positioning is important because it directly affects the inputs of the other automatic driving modules; accurate positioning is a prerequisite for performing other autonomous driving functions such as sensing and decision control. At present, the positioning of automatic driving mainly relies on the fusion of GPS, laser radar and IMU (inertial measurement unit), and to guarantee the accuracy of vehicle positioning, the calibration accuracy of the three must first be ensured, that is, the calibration between the vehicle-mounted laser radar and the IMU must be accurate.
In the field of automatic driving, three-dimensional laser radar is now widely used as a navigation sensor: it can acquire point cloud data of the environment, has a wide detection range and high precision, is not affected by illumination conditions, and can accurately describe the structure of the environment. An IMU is a device that measures the three-axis attitude angles (or angular rates) and acceleration of an object. Point cloud data refer to a collection of vectors in a three-dimensional coordinate system.
Generally, an IMU contains three single-axis accelerometers and three single-axis gyroscopes: the accelerometers detect the acceleration of the object along the three independent axes of the carrier coordinate system, and the gyroscopes detect the angular velocity of the carrier relative to the navigation coordinate system. From the angular velocity and acceleration of the object in three-dimensional space, the attitude of the object can be solved. The IMU provides relative positioning information that describes the path of the object's movement relative to a starting point; it does not provide absolute location information and is therefore often used together with GPS.
Kubernetes, K8s for short, is Google's open-source container orchestration engine; it supports automated deployment, large-scale scaling, and containerized application management. When an application is deployed in a production environment, multiple instances of the application are typically deployed to load-balance application requests. In Kubernetes, a group of containers can be created, each running an application instance, and the management, discovery and access of this group of instances are then handled by built-in load-balancing strategies, without operation and maintenance personnel having to perform complicated manual configuration.
In order to solve the problem of low processing efficiency of large-scale point cloud data, the application provides a point cloud data processing method, device, system and storage medium.
Fig. 1 shows a schematic flow diagram of a method of point cloud data processing according to an embodiment of the application. It should be noted that the point cloud data processing method according to the embodiment of the present application can be applied to the point cloud data processing apparatus according to the embodiment of the present application. The point cloud data processing device can be configured on an electronic device. As shown in fig. 1, the point cloud data processing method may include the following steps.
In step S101, in response to an instruction to create a processing task, raw data is acquired, wherein the raw data includes raw laser point cloud data and carrier motion data.
An instruction to create a processing task, sent by a user side, is received, and raw data are acquired according to the instruction; the raw data include original laser point cloud data collected by a laser radar and carrier motion data. The carrier motion data comprise IMU data and GPS data corresponding to the raw laser point cloud data.
As a possible implementation, a point cloud data production service Pod established based on Kubernetes receives the instruction to create a processing task sent by the user side; according to the instruction, the raw data are acquired through parallel file storage (PFS) and stored into a data production queue, and the information and production state of each piece of raw data are recorded in a point cloud production relational database (RDB). The data production queue is a storage structure of the Kubernetes container cluster.
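As a rough illustration of this intake step, the sketch below enqueues raw collections found on a PFS mount and records their production state. The mount path, the in-process queue object, and the database schema are assumptions made for the example, not details taken from this application.

```python
import sqlite3
from pathlib import Path
from queue import Queue

# Assumed stand-ins: the PFS mount point and the RDB schema are not specified in the application.
PFS_ROOT = Path("/mnt/pfs/raw_collections")
production_queue: Queue = Queue()   # stands in for the cluster's data production queue

def enqueue_raw_data(task_id: str, db_path: str = "point_cloud_production.db") -> int:
    """Scan the PFS directory for raw collections, enqueue them, and record their production state."""
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE IF NOT EXISTS raw_data (task_id TEXT, path TEXT, state TEXT)")
    count = 0
    for item in sorted(PFS_ROOT.iterdir()):
        production_queue.put(item)  # one raw collection per queue entry
        conn.execute("INSERT INTO raw_data VALUES (?, ?, ?)", (task_id, str(item), "pending"))
        count += 1
    conn.commit()
    conn.close()
    return count
```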
In step S102, the original data are grouped according to a preset rule to obtain N groups of target data, where N is a positive integer.
In the embodiment of the application, in order to improve the efficiency of processing massive original laser point cloud data, the original laser point cloud data is firstly grouped.
As a possible implementation, the raw data is grouped by data acquisition time.
Besides data acquisition time, the grouping may also use other preset rules, such as the data type of the original laser point cloud data; this is not limited here.
As a possible implementation mode, the original data in the data production queue are grouped according to a preset rule to obtain N groups of target data, namely N tasks.
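A minimal sketch of such grouping is given below, bucketing raw records by acquisition time so that each bucket becomes one task. The record fields and the time window are assumptions for illustration only.

```python
from collections import defaultdict
from dataclasses import dataclass
from typing import List

@dataclass
class RawRecord:
    """One raw collection: a lidar capture plus the matching carrier motion logs (fields assumed)."""
    acquisition_time: float  # epoch seconds at the start of collection
    lidar_path: str
    imu_path: str
    gps_path: str

def group_by_acquisition_time(records: List[RawRecord], window_s: float = 600.0) -> List[List[RawRecord]]:
    """Bucket raw records into windows of `window_s` seconds; each bucket becomes one task."""
    groups = defaultdict(list)
    for rec in records:
        groups[int(rec.acquisition_time // window_s)].append(rec)
    return [groups[key] for key in sorted(groups)]  # N groups of target data, N = len(result)
```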
In step S103, N containers of the container cluster are correspondingly started according to the N groups of target data.
As a possible implementation, a plurality of point cloud data production Pods are established based on Kubernetes to achieve parallel processing of massive original laser point cloud data. The N groups of target data correspond to N tasks, and a corresponding number of point cloud data production Pods are started according to the number N of tasks in the data production queue. That is, the N containers correspond to the N tasks.
In step S104, N containers are used to perform distortion correction on the original laser point cloud data in the N sets of target data in parallel based on the carrier motion data, and a plurality of single-frame point cloud data after distortion correction are obtained.
In this embodiment, each point cloud data production Pod performs distortion correction on the original laser point cloud data in the target data based on the carrier motion data, and acquires a plurality of single-frame point cloud data after distortion correction.
It should be noted that the original point cloud data generated by the laser radar device contain motion distortion caused by the relative displacement of the device while it acquires the point cloud, so the spatial positions of the points are in error. In this embodiment, motion compensation is applied to the original laser point cloud data based on the carrier motion data to eliminate the motion distortion; the original point cloud data are thereby distortion-corrected, and a plurality of distortion-corrected single-frame point cloud data are obtained.
Optionally, when parallel data processing is performed with the Kubernetes container cluster, the task states are monitored in real time by a monitoring component; if an error state is encountered, the error information is fed back to the user side, and after all production tasks have finished, an end message is sent to notify the user side.
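A simple polling loop of this kind could look like the following sketch, again using the Kubernetes Python client; the notification calls are stand-ins for whatever channel actually reaches the user side.

```python
import time
from kubernetes import client, config

def monitor_production_pods(pod_names, namespace: str = "default", poll_s: float = 10.0) -> None:
    """Poll production Pods until all finish, surfacing failures as they appear."""
    config.load_kube_config()
    core_v1 = client.CoreV1Api()
    pending = set(pod_names)
    while pending:
        for name in list(pending):
            phase = core_v1.read_namespaced_pod(name=name, namespace=namespace).status.phase
            if phase == "Failed":
                print(f"production pod {name} reported an error")  # stand-in for feedback to the user side
                pending.discard(name)
            elif phase == "Succeeded":
                pending.discard(name)
        time.sleep(poll_s)
    print("all production tasks finished")  # stand-in for the end-of-task notification
```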
The point cloud data processing method can process the input original laser radar data and the corresponding carrier motion data in parallel to obtain single-frame point cloud data, can handle processing tasks for multiple sets of original laser point cloud data at the same time, and thus improves data processing efficiency. Distortion correction of the original laser point cloud data through motion compensation yields more accurate laser point cloud data and provides more reliable data support for automatic driving.
FIG. 2 shows a schematic flow diagram of a method of point cloud data processing according to another embodiment of the present application. On the basis of the above embodiments, the method for processing point cloud data includes steps S201 to S206.
In step S201, in response to an instruction to create a processing task, raw data is acquired, wherein the raw data includes raw laser point cloud data and carrier motion data.
In step S202, the original data are grouped according to a preset rule to obtain N groups of target data, where N is a positive integer.
In step S203, for each group of target data, the original laser point cloud data are analyzed to obtain a plurality of single-frame point cloud data.
As a possible implementation, a corresponding software development kit (SDK) is called based on the data type of the original laser point cloud data, and the data packets of the original laser point cloud data are analyzed through the SDK to obtain a plurality of single-frame point cloud data.
As an example, if the original laser point cloud data are in a pcap packet-capture format or in a pack serialized data format, the corresponding SDK is used for each format to analyze the data, and a plurality of analyzed single-frame point cloud data are obtained.
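A possible shape for this type-based dispatch is sketched below; the parser functions are placeholders standing in for the vendor SDK calls, which the application does not specify.

```python
from typing import Callable, Dict, List

# Registry mapping a file suffix to a parser; each parser stands in for a vendor SDK call
# that turns one raw capture into a list of single-frame point clouds.
PARSERS: Dict[str, Callable[[str], List]] = {}

def register_parser(suffix: str):
    def wrap(fn: Callable[[str], List]) -> Callable[[str], List]:
        PARSERS[suffix] = fn
        return fn
    return wrap

@register_parser(".pcap")
def parse_pcap(path: str) -> List:
    return []  # placeholder for the SDK that parses pcap packet captures

@register_parser(".pack")
def parse_pack(path: str) -> List:
    return []  # placeholder for the SDK that parses the serialized pack format

def parse_raw_lidar(path: str) -> List:
    """Call the parser that matches the raw capture's data type."""
    for suffix, parser in PARSERS.items():
        if path.endswith(suffix):
            return parser(path)
    raise ValueError(f"unsupported raw lidar format: {path}")
```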
As a possible implementation, after parsing, the Kubernetes-based point cloud data production Pod containers store the single-frame point cloud data in a double-ended queue, and the carrier motion data corresponding to the single-frame point cloud data are added to their own double-ended queue.
In the embodiment of the application, the parsed single-frame point cloud data are stored in a double-ended queue, and the corresponding carrier motion data are read into their own double-ended queue and held in memory. It should be noted that the double-ended queue is a storage structure of the Kubernetes container cluster, and the parsed single-frame point cloud data and the carrier motion data are stored in different double-ended queues.
In step S204, a plurality of synchronous carrier motion data synchronized with the plurality of single-frame point cloud data respectively are acquired from the carrier motion data.
In the embodiment of the present application, the process of acquiring a plurality of synchronized carrier motion data, each synchronized with one of the plurality of single-frame point cloud data, that is, the point cloud data synchronization method, is shown in fig. 3 and includes:
s301, establishing a corresponding relation between each frame of point cloud data and the corresponding carrier motion data according to the time stamps of the single frame of point cloud data and the time stamps of the corresponding carrier motion data.
Before the distortion correction is performed, the original laser point cloud data and the carrier motion data are synchronized. Synchronization is carried out according to the timestamps of the original laser point cloud data collected by the laser radar and the timestamps of the carrier motion data (IMU data and GPS data), and the correspondence between each frame of laser point cloud data and the carrier motion data is established from the timestamp information.
And S302, obtaining the synchronized single-frame point cloud data and the carrier motion data based on the corresponding relation.
And acquiring each frame of point cloud data and synchronous carrier motion data corresponding to the frame of point cloud data, and performing distortion correction on each frame of point cloud data according to the synchronized carrier motion data.
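A minimal sketch of this timestamp matching is shown below; whether the nearest motion sample or an interpolated one is used is an implementation assumption not fixed by the application.

```python
import bisect
from typing import Any, List, Sequence, Tuple

def synchronize(frame_timestamps: Sequence[float],
                motion_samples: Sequence[Tuple[float, Any]]) -> List[Tuple[float, Any]]:
    """Pair each point cloud frame with the carrier motion sample closest to it in time.

    `motion_samples` must be a non-empty list of (timestamp, sample) tuples sorted by timestamp.
    """
    motion_times = [t for t, _ in motion_samples]
    pairs = []
    for ft in frame_timestamps:
        i = bisect.bisect_left(motion_times, ft)
        candidates = [j for j in (i - 1, i) if 0 <= j < len(motion_samples)]
        best = min(candidates, key=lambda j: abs(motion_times[j] - ft))  # nearest neighbour in time
        pairs.append((ft, motion_samples[best][1]))
    return pairs
```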
According to the point cloud data processing method, the carrier motion data are synchronized by the method, and the accuracy of data correction is improved.
In step S205, based on the plurality of synchronous carrier motion data, performing distortion correction on the plurality of single-frame point cloud data, and obtaining a plurality of single-frame point cloud data after distortion correction.
In the embodiment of the present application, as shown in fig. 4, a process of obtaining multiple single-frame point cloud data after distortion correction, that is, a method for distortion correction of point cloud data, includes:
in step S401, acquiring single-frame point cloud data for each of a plurality of single-frame point cloud data;
and in the distortion correction process, processing is carried out on each single-frame point cloud data.
In step S402, based on the timestamp of the single-frame point cloud data, obtaining starting laser point data and ending laser point data, and taking the time of the starting laser point data as target time;
and selecting a point with the minimum timestamp as starting laser point data and a point with the maximum timestamp as ending laser point data, and taking the timestamp of the starting laser point as target time.
In step S403, acquiring a time difference between the time of the laser point data in the single-frame point cloud data except the starting laser point data and the target time, and an inertial sensor IMU speed in the carrier motion data corresponding to the single-frame point cloud data;
the IMU velocity is the velocity (angular and linear) at which the IMU is located.
In step S404, according to the calibration parameter of the laser radar reaching the IMU, the IMU speed is converted to the coordinate system of the laser radar, and the IMU conversion speed is obtained;
the calculation needs the speed of the position where the laser radar is located, so that the IMU speed needs to be converted to the position corresponding to the laser radar according to the relative coordinates between the laser radar and the IMU.
It should be noted that the IMU velocity is related to GPS data in the carrier motion data, and the IMU velocity is obtained based on the GPS data.
In step S405, based on the time difference and the IMU conversion speed, obtaining a coordinate conversion relationship between laser point data other than the starting laser point data in the single-frame point cloud data and the starting laser point data;
in step S406, according to the coordinate conversion relationship, converting the coordinates of the laser point data in the single-frame point cloud data except the initial laser point data into the coordinates of the laser point data in the coordinate system at the target time, and obtaining the single-frame point cloud data after distortion correction.
According to the method for distortion correction of the point cloud data, the coordinate transformation is carried out on each laser point based on the time difference and the IMU conversion speed, so that more accurate point clouds are obtained, and more reliable data support is provided for automatic driving.
As a possible implementation, in order to improve the efficiency of the computation in the distortion correction process, a GPU is used to perform the distortion correction computation.
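One way to realize this, assuming a Python stack, is to vectorize the per-point correction and run it through CuPy as a drop-in NumPy replacement; the application only states that a GPU is used, so the library choice and the first-order rotation approximation below are assumptions.

```python
import numpy as np

try:
    import cupy as xp   # run on the GPU when CuPy is available
except ImportError:
    xp = np             # fall back to NumPy on the CPU

def undistort_frame_vectorized(points, timestamps, v_l, w_l):
    """Vectorized constant-velocity compensation with a small-angle (first-order) rotation."""
    pts = xp.asarray(points)
    ts = xp.asarray(timestamps)
    dt = ts - ts.min()
    # p' ~= p + (w * dt) x p + v * dt
    rotational = xp.cross(xp.outer(dt, xp.asarray(w_l)), pts)
    return pts + rotational + xp.outer(dt, xp.asarray(v_l))
```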
In step S206, the distortion corrected single-frame point cloud data is stored.
In the embodiment of the present application, two possible implementation manners may be adopted to store the acquired single-frame point cloud data after distortion correction, and other storage manners may also be selected, which is not limited herein, and the following only illustrates two exemplary possible implementation manners.
Optionally, the distortion-corrected multiple single-frame point cloud data are simultaneously stored in a parallel file storage system and an object storage system.
As a possible implementation, the distortion-corrected single-frame point cloud data are stored directly in a parallel file storage system; as an example, the parallel file storage system may be a PFS (Parallel File Storage Service) system. This mode has high storage efficiency and the data can be obtained directly from the PFS directory, but the PFS storage space is limited and needs to be cleaned up promptly and regularly after the user has used the data.
As another possible implementation, the distortion-corrected single-frame point cloud data are stored in an object storage system; as an example, the object storage system may be BOS (Baidu Object Storage). In this case, the distortion-corrected single-frame point cloud data are added to a production result queue, the point cloud data production service invokes a persistent storage process that stores the data in the production result queue into the object storage BOS, and the storage address of each distortion-corrected single-frame point cloud in BOS is updated in the relational database (RDB).
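The sketch below drains a production result queue, uploads each frame, and records its storage address. The `object_store.put_object` interface and the table schema are hypothetical; with the Baidu BOS SDK, that role would be played by its client object.

```python
import sqlite3
from pathlib import Path
from queue import Queue

production_result_queue: Queue = Queue()  # distortion-corrected frames waiting to be persisted

def persist_results(object_store, bucket: str, db_path: str = "point_cloud_production.db") -> None:
    """Drain the production result queue, upload each frame, and record its storage address."""
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE IF NOT EXISTS frames (key TEXT, address TEXT)")
    while not production_result_queue.empty():
        local_path = Path(production_result_queue.get())
        key = f"corrected/{local_path.name}"
        object_store.put_object(bucket, key, str(local_path))  # hypothetical upload call
        conn.execute("INSERT INTO frames VALUES (?, ?)", (key, f"bos://{bucket}/{key}"))
    conn.commit()
    conn.close()
```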
It should be noted that, in the embodiment of the present application, the implementation processes of step 201 and step 202 may refer to the description of the implementation processes of step 101 and step 102, and are not described herein again.
According to the point cloud data processing method, the input original laser radar data and the corresponding carrier motion data can be processed in parallel to obtain single-frame point cloud data, processing tasks for multiple sets of original laser point cloud data can be handled at the same time, and data processing efficiency is improved. Before distortion correction is carried out on the original laser point cloud data, a corresponding Software Development Kit (SDK) is called according to the data type of the original laser point cloud data to analyze its data packets, which meets the parsing requirements of different data sources; the analyzed point cloud data and the carrier motion data are synchronized, and distortion correction is carried out after synchronization, so that reliable point cloud data are obtained. Coordinate transformation of each laser point based on the time difference and the IMU conversion speed yields a more accurate point cloud and provides more reliable data support for automatic driving.
FIG. 5 is a schematic block diagram of a point cloud data processing apparatus shown in accordance with an exemplary embodiment. Referring to fig. 5, the point cloud data processing apparatus may include: an acquisition module 510, a grouping module 520, an assignment module 530, and a processing module 540.
Specifically, the obtaining module 510 is configured to obtain raw data in response to an instruction to create a processing task, where the raw data includes raw laser point cloud data and carrier motion data;
a grouping module 520, configured to group the original data according to a preset rule to obtain N groups of target data, where N is a positive integer;
an allocating module 530, configured to correspondingly start N containers of the container cluster according to the N groups of target data;
and the processing module 540 is configured to perform distortion correction on the original laser point cloud data in the N sets of target data based on the carrier motion data in parallel by using N containers of the container cluster, and obtain multiple single-frame point cloud data after distortion correction.
In some embodiments of the present application, the processing module 540 is specifically configured to:
analyzing the original laser point cloud data aiming at each group of target data to obtain a plurality of single-frame point cloud data;
acquiring a plurality of synchronous carrier motion data respectively synchronized with the single-frame point cloud data from the carrier motion data;
and carrying out distortion correction on the single-frame point cloud data based on the plurality of synchronous carrier motion data to obtain the single-frame point cloud data after the distortion correction.
In some embodiments of the present application, the processing module 540, when analyzing the original laser point cloud data for each set of target data and obtaining a plurality of single-frame point cloud data, is configured to:
calling a corresponding Software Development Kit (SDK) based on the data type of the original laser point cloud data;
and analyzing the data packet of the original laser point cloud data through the SDK to obtain a plurality of single-frame point cloud data.
In some embodiments of the present application, the processing module 540, when obtaining a plurality of synchronized carrier motion data respectively synchronized with the plurality of single-frame point cloud data from the carrier motion data, is configured to:
establishing a corresponding relation between each frame of point cloud data and the corresponding carrier motion data according to the timestamps of the plurality of single frame of point cloud data and the timestamps of the carrier motion data;
and acquiring a plurality of synchronous carrier motion data respectively synchronized with the single-frame point cloud data based on the corresponding relation.
In some embodiments of the present application, the processing module 540, when performing distortion correction on the plurality of single-frame point cloud data based on the plurality of synchronous carrier motion data and obtaining the plurality of single-frame point cloud data after distortion correction, is configured to:
acquiring single-frame point cloud data for each single-frame point cloud data of the plurality of single-frame point cloud data;
acquiring starting laser point data and ending laser point data based on the timestamp of the single-frame point cloud data, and taking the time of the starting laser point data as target time;
acquiring the time difference between the time of the laser point data except the initial laser point data in the single-frame point cloud data and the target time, and the inertial sensor IMU speed in the carrier motion data corresponding to the single-frame point cloud data;
according to the calibration parameter of the laser radar reaching the IMU, the IMU speed is converted into a coordinate system of the laser radar, and the IMU conversion speed is obtained;
acquiring a coordinate conversion relation of laser point data except initial laser point data in the single-frame point cloud data relative to the initial laser point data based on the time difference and the IMU conversion speed;
and converting the coordinates of the laser point data except the initial laser point data in the single-frame point cloud data into the coordinates of the laser point data in the coordinate system at the target time according to the coordinate conversion relation, and acquiring the single-frame point cloud data after distortion correction.
In some embodiments of the present application, after acquiring and storing the multiple single-frame point cloud data after the distortion correction, the processing module 540 is further configured to:
and storing the multiple single-frame point cloud data after the distortion correction in a parallel file storage system and an object storage system.
In some embodiments of the present application, the grouping module 520 is specifically configured to:
and grouping the original data according to data acquisition time.
With regard to the apparatus in the above embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be described in detail here.
The point cloud data processing device can process the input original laser radar data and the corresponding carrier motion data in parallel to obtain single-frame point cloud data, can handle processing tasks for multiple sets of original laser point cloud data at the same time, and improves data processing efficiency. Before the original laser point cloud data are distortion-corrected, the carrier motion data are synchronized in the manner described above, which improves the accuracy of the data correction.
FIG. 6 is a block diagram illustrating a point cloud data processing system according to an exemplary embodiment. As shown in FIG. 6, the system includes a server on which a Kubernetes container cluster is deployed and which establishes a point cloud data production service Pod and a plurality of point cloud data production Pods;
the point cloud data production service Pod is used for receiving an instruction for creating a processing task, acquiring original data according to the instruction, grouping the original data according to a preset rule to obtain N groups of target data, and storing result data output by the plurality of point cloud data production Pod; the original data comprises original laser point cloud data and carrier motion data acquired by a laser radar; wherein N is a positive integer;
and the plurality of point cloud data production Pods are used to perform distortion correction in parallel on the original laser point cloud data in the N groups of target data based on the carrier motion data, obtaining a plurality of distortion-corrected single-frame point cloud data.
The specific implementation manner of the point cloud data production Pod when obtaining the multiple single-frame point cloud data after distortion correction is the same as that of the above several method embodiments, and is not described herein again.
According to the point cloud data processing system, the container cluster is configured, the original point cloud data are processed in parallel, and the production efficiency of large-scale point cloud data is improved; and the original laser point cloud data is subjected to distortion correction through motion compensation, so that more accurate laser point cloud data is obtained, and more reliable data support is provided for automatic driving.
According to an embodiment of the present application, an electronic device and a readable storage medium are also provided.
As shown in fig. 7, the embodiment of the present application is a block diagram of an electronic device of a method for processing point cloud data. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. Electronic devices may also represent various forms of mobile devices, such as personal digital processors, cellular telephones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the applications described and/or claimed herein.
As shown in fig. 7, the electronic apparatus includes: one or more processors 701, a memory 702, and interfaces for connecting the various components, including a high-speed interface and a low-speed interface. The various components are interconnected using different buses and may be mounted on a common motherboard or in other manners as desired. The processor may process instructions for execution within the electronic device, including instructions stored in or on the memory to display graphical information of a GUI on an external input/output apparatus (such as a display device coupled to the interface). In other embodiments, multiple processors and/or multiple buses may be used, as desired, along with multiple memories. Also, multiple electronic devices may be connected, with each device providing portions of the necessary operations (e.g., as a server array, a group of blade servers, or a multi-processor system). In fig. 7, one processor 701 is taken as an example.
The memory 702 is a non-transitory computer readable storage medium as provided herein. Wherein the memory stores instructions executable by at least one processor to cause the at least one processor to perform the method of point cloud data processing provided herein. The non-transitory computer readable storage medium of the present application stores computer instructions for causing a computer to perform the method of point cloud data processing provided herein.
Memory 702, which is a non-transitory computer-readable storage medium, may be used to store non-transitory software programs, non-transitory computer-executable programs, and modules, such as program instructions/modules corresponding to the methods of point cloud data processing in embodiments of the present application (e.g., shown in fig. 5: acquisition module 510, grouping module 520, assignment module 530, and processing module 540). The processor 701 executes various functional applications of the server and data processing, i.e., a method of implementing point cloud data processing in the above-described method embodiments, by running non-transitory software programs, instructions, and modules stored in the memory 702.
The memory 702 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to use of the electronic device for image processing, and the like. Further, the memory 702 may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory 702 may optionally include memory located remotely from the processor 701, which may be connected to image processing electronics over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The electronic device of the method of point cloud data processing may further include: an input device 703 and an output device 704. The processor 701, the memory 702, the input device 703 and the output device 704 may be connected by a bus or other means, and fig. 7 illustrates an example of a connection by a bus.
The input device 703 may receive input numeric or character information and generate key signal inputs related to user settings and function control of the image processing electronic apparatus, such as a touch screen, a keypad, a mouse, a track pad, a touch pad, a pointing stick, one or more mouse buttons, a track ball, a joystick, or other input devices. The output devices 704 may include a display device, auxiliary lighting devices (e.g., LEDs), and tactile feedback devices (e.g., vibrating motors), among others. The display device may include, but is not limited to, a Liquid Crystal Display (LCD), a Light Emitting Diode (LED) display, and a plasma display. In some implementations, the display device can be a touch screen.
Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, application specific ASICs (application specific integrated circuits), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
These computer programs (also known as programs, software applications, or code) include machine instructions for a programmable processor, and may be implemented using high-level procedural and/or object-oriented programming languages, and/or assembly/machine languages. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic discs, optical disks, memory, programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), the Internet, and blockchain networks.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The Server can be a cloud Server, also called a cloud computing Server or a cloud host, and is a host product in a cloud computing service system, so as to solve the defects of high management difficulty and weak service expansibility in the traditional physical host and VPS service ("Virtual Private Server", or simply "VPS"). The server may also be a server of a distributed system, or a server incorporating a blockchain.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present application may be executed in parallel, sequentially, or in different orders, and are not limited herein as long as the desired results of the technical solutions disclosed in the present application can be achieved.
It will be understood that the present application is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (11)

1. A point cloud data processing method is characterized by comprising the following steps:
in response to an instruction to create a processing task, obtaining raw data, wherein the raw data comprises raw laser point cloud data and carrier motion data;
grouping the original data according to a preset rule to obtain N groups of target data, wherein N is a positive integer;
correspondingly starting N containers of the container cluster according to the N groups of target data;
and carrying out distortion correction on the original laser point cloud data in the N groups of target data based on the carrier motion data by utilizing the N containers in parallel to obtain a plurality of single-frame point cloud data after distortion correction.
2. The method of claim 1, wherein the parallel distortion correction of the original laser point cloud data in the N sets of target data based on the carrier motion data by using the N containers to obtain a plurality of single-frame point cloud data after distortion correction comprises:
analyzing the original laser point cloud data aiming at each group of target data to obtain a plurality of single-frame point cloud data;
acquiring a plurality of synchronous carrier motion data respectively synchronized with the plurality of single-frame point cloud data from the carrier motion data;
and carrying out distortion correction on the plurality of single-frame point cloud data based on the plurality of synchronous carrier motion data to obtain the plurality of single-frame point cloud data after the distortion correction.
3. The method of claim 2, wherein parsing the raw laser point cloud data to obtain a plurality of single frame point cloud data comprises:
calling a corresponding Software Development Kit (SDK) based on the data type of the original laser point cloud data;
and carrying out data packet analysis on the original laser point cloud data through the SDK to obtain a plurality of single-frame point cloud data.
4. The method of claim 2, wherein the obtaining, from the carrier motion data, a plurality of synchronized carrier motion data that are respectively synchronized with the plurality of single-frame point cloud data comprises:
establishing a corresponding relation between each frame of point cloud data and the corresponding carrier motion data according to the time stamps of the single frame of point cloud data and the time stamps of the carrier motion data;
and acquiring a plurality of synchronous carrier motion data respectively synchronized with the single-frame point cloud data based on the corresponding relation.
5. The method of claim 2, wherein the performing distortion correction on the plurality of single-frame point cloud data based on the plurality of synchronous carrier motion data to obtain a plurality of single-frame point cloud data after distortion correction comprises:
acquiring single-frame point cloud data for each single-frame point cloud data of the plurality of single-frame point cloud data;
based on the timestamp of the single-frame point cloud data, acquiring starting laser point data and ending laser point data, and taking the time of the starting laser point data as target time;
acquiring the time difference between the time of the laser point data in the single-frame point cloud data except the initial laser point data and the target time, and the inertial sensor IMU speed in the carrier motion data corresponding to the single-frame point cloud data;
according to the calibration parameters from the laser radar to the IMU, converting the IMU speed to a coordinate system of the laser radar to obtain the IMU conversion speed;
acquiring coordinate conversion relation of laser point data except for initial laser point data in the single-frame point cloud data relative to the initial laser point data based on the time difference and the IMU conversion speed;
and converting the coordinates of the laser point data except the initial laser point data in the single-frame point cloud data into the coordinates of the laser point data in the coordinate system at the target time according to the coordinate conversion relation, and acquiring the single-frame point cloud data after distortion correction.
6. The method of claim 1, wherein after the obtaining the plurality of distortion corrected single frame point cloud data, further comprising:
and storing the multiple single-frame point cloud data after the distortion correction in a parallel file storage system and an object storage system.
7. The method according to claim 1, wherein the grouping the original data according to a preset rule comprises:
and grouping the original data according to data acquisition time.
8. A point cloud data processing apparatus, comprising:
the acquisition module is used for responding to an instruction for creating a processing task and acquiring original data, wherein the original data comprises original laser point cloud data and carrier motion data;
the grouping module is used for grouping the original data according to a preset rule to obtain N groups of target data, wherein N is a positive integer;
the distribution module is used for correspondingly starting N containers of the container cluster according to the N groups of target data;
and the processing module is used for utilizing the N containers to carry out distortion correction on the original laser point cloud data in the N groups of target data based on the carrier motion data in parallel to obtain a plurality of single-frame point cloud data after the distortion correction.
9. A point cloud data processing system is characterized by comprising a server, wherein a Kubernetes container cluster is configured on the server, and a point cloud data production service Pod and a plurality of point cloud data production Pods are established;
the point cloud data production service Pod is used for receiving an instruction for creating a processing task, acquiring original data according to the instruction, grouping the original data according to a preset rule to obtain N groups of target data, and storing result data output by the plurality of point cloud data production Pod; wherein the raw data comprises raw laser point cloud data and carrier motion data; wherein N is a positive integer;
and the plurality of point cloud data production Pod is used for carrying out distortion correction on the original laser point cloud data in the N groups of target data based on the carrier motion data in parallel to obtain a plurality of single-frame point cloud data after distortion correction.
10. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the point cloud data processing method of any of claims 1-7.
11. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 7.
CN202111356228.0A 2021-11-16 2021-11-16 Point cloud data processing method, device and system and storage medium Pending CN115453491A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111356228.0A CN115453491A (en) 2021-11-16 2021-11-16 Point cloud data processing method, device and system and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111356228.0A CN115453491A (en) 2021-11-16 2021-11-16 Point cloud data processing method, device and system and storage medium

Publications (1)

Publication Number Publication Date
CN115453491A true CN115453491A (en) 2022-12-09

Family

ID=84295017

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111356228.0A Pending CN115453491A (en) 2021-11-16 2021-11-16 Point cloud data processing method, device and system and storage medium

Country Status (1)

Country Link
CN (1) CN115453491A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination