WO2022227096A1 - Point cloud data processing method, device, and storage medium - Google Patents

Point cloud data processing method, device, and storage medium

Info

Publication number
WO2022227096A1
WO2022227096A1 (PCT/CN2021/091780)
Authority
WO
WIPO (PCT)
Prior art keywords: point cloud, cloud data, frame, point, sampling
Prior art date
Application number
PCT/CN2021/091780
Other languages
English (en)
French (fr)
Inventor
徐骥飞
潘志琛
周游
Original Assignee
深圳市大疆创新科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市大疆创新科技有限公司
Priority to PCT/CN2021/091780 priority Critical patent/WO2022227096A1/zh
Publication of WO2022227096A1 publication Critical patent/WO2022227096A1/zh

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/50: Depth or shape recovery
    • G06T7/521: Depth or shape recovery from laser ranging, e.g. using interferometry; from the projection of structured light
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/70: Determining position or orientation of objects or cameras
    • G06T7/73: Determining position or orientation of objects or cameras using feature-based methods

Definitions

  • the present application relates to the technical field of data processing, and in particular, to a point cloud data processing method, a control device, a movable platform and a storage medium.
  • Movable platforms, such as unmanned aerial vehicles, are typically equipped with a lidar that measures point cloud data of an operation area for 3D modeling.
  • When collecting point cloud data of the operation area, the drone needs to fly along a predetermined route and use the lidar to collect multiple frames of original point clouds, which are then stitched together to obtain the point cloud data of the operation area.
  • If the multiple frames of original point clouds are stitched together using only the pose information measured by an inertial measurement unit, the resulting point cloud data is error-prone and of poor quality; alternatively, the point cloud data can be reconstructed using 3D reconstruction technology, but current 3D reconstruction techniques all require high-performance computers and can only be completed after long computation, which is not conducive to outdoor work.
  • the present application provides a point cloud data processing method, a movable platform, a control device and a storage medium, so as to quickly acquire high-precision point cloud data.
  • the present application provides a method for processing point cloud data, the method comprising:
  • acquiring a first frame of point cloud data and a second frame of point cloud data obtained by scanning a target area with a lidar mounted on a movable platform, where each frame of point cloud data includes a plurality of sampling points obtained by scanning the target area within a preset time period, and the time period during which the movable platform collects the first frame of point cloud data and the time period during which it collects the second frame of point cloud data are not continuous in time;
  • matching the plurality of sampling points of the first frame of point cloud data with the plurality of sampling points of the second frame of point cloud data to obtain sampling point pairing information, where the sampling point pairing information indicates that a first sampling point in the first frame of point cloud data and a second sampling point in the second frame of point cloud data correspond to the same object point in the target area;
  • determining, based on the pose information of the first sampling point and the second sampling point, a pose deviation between the sampling points of the first frame of point cloud data and the sampling points of the second frame of point cloud data; and stitching the first frame of point cloud data and the second frame of point cloud data based on the pose deviation.
  • the present application further provides a movable platform, the movable platform includes a lidar, and the movable platform further includes a processor and a memory;
  • the lidar is used to scan the target area to obtain point cloud data
  • the memory is used to store computer programs
  • the processor is configured to execute the computer program, and when executing the computer program, implement the steps of the method for processing point cloud data according to any one of the embodiments of the present application.
  • the present application further provides a control device for acquiring point cloud data collected by a lidar mounted on a movable platform, the control device comprising a memory and a processor;
  • the memory is used to store computer programs
  • the processor is configured to execute the computer program, and when executing the computer program, implement the steps of the method for processing point cloud data according to any one of the embodiments of the present application.
  • the present application further provides a computer-readable storage medium, where the computer-readable storage medium stores a computer program which, when executed by a processor, causes the processor to implement any one of the point cloud data processing methods provided in the present application.
  • the method for processing point cloud data, the movable platform, the control device, and the storage medium can eliminate the ghosting phenomenon in point cloud data and can quickly obtain high-precision point cloud data, and are therefore suitable for fast, high-precision large-scale 3D reconstruction scenarios as well as large-scale outdoor 3D surveying and mapping.
  • FIG. 1 is a schematic structural diagram of an unmanned aerial vehicle provided by an embodiment of the present application;
  • FIG. 2 is a schematic block diagram of the flight control system of the unmanned aerial vehicle provided by an embodiment of the present application;
  • FIG. 3 is a schematic structural diagram of a flight system provided by an embodiment of the present application;
  • FIGS. 4a and 4b are schematic diagrams of scenarios in which a drone flies along a pre-planned route, provided by an embodiment of the present application;
  • FIG. 5 is a schematic diagram of point cloud data with ghosting provided by an embodiment of the present application;
  • FIG. 6 is a schematic diagram of another set of point cloud data with ghosting provided by an embodiment of the present application;
  • FIG. 7 is a schematic diagram of a scanning mode of a lidar provided by an embodiment of the present application;
  • FIG. 8 is a schematic flowchart of the steps of a method for processing point cloud data provided by an embodiment of the present application;
  • FIG. 9 is a schematic diagram of the projection of edge feature points provided by an embodiment of the present application;
  • FIG. 10 is a schematic diagram of point cloud data with ghosting eliminated, provided by an embodiment of the present application;
  • FIG. 11 is a schematic block diagram of a control apparatus provided by an embodiment of the present application.
  • when performing 3D surveying or 3D reconstruction of certain operation areas, a UAV equipped with a lidar can be used: the lidar scans the operation area to obtain point cloud data, which can then be used for 3D modeling.
  • to collect point cloud data of the operation area, the drone needs to fly along a predetermined route and use the lidar to collect multiple frames of original point cloud data, which are then stitched together to obtain the point cloud data of the operation area.
  • if the multiple frames of original point cloud data are simply stitched together using only the pose information measured by the inertial measurement unit, the resulting point cloud data is error-prone and of poor quality; alternatively, the point cloud data can be reconstructed using 3D reconstruction technology, but current 3D reconstruction techniques all require high-performance computers and can only be completed after long computation, which is not conducive to outdoor operations.
  • the embodiments of the present application provide a point cloud data processing method, a control device, a movable platform, and a storage medium, which can eliminate ghosting so as to improve the accuracy of point cloud data, and which have low hardware requirements, making them suitable for fast, high-precision large-scale 3D reconstruction.
  • the movable platform includes a drone, a vehicle, or a robot, and may of course also include a manned aircraft.
  • the following takes a drone as an example of the movable platform.
  • FIG. 1 shows the structure of an unmanned aerial vehicle 100 provided by the embodiment of the present application
  • FIG. 2 shows the structural framework of the flight control system of the unmanned aerial vehicle 100 provided by the embodiment of the present application.
  • the UAV 100 may include a frame 10 , a power system 11 , a control system 12 and a lidar 20 .
  • the frame 10 may include a fuselage and landing gear.
  • the fuselage may include a center frame and one or more arms connected to the center frame, the one or more arms extending radially from the center frame.
  • the landing gear is connected to the fuselage and supports the UAV 100 when it lands.
  • the lidar 20 can be installed on the UAV, specifically on the frame 10 of the UAV 100. During the flight of the UAV 100, it measures the surroundings of the UAV 100, such as obstacles, to ensure flight safety. In the embodiments of the present application, the lidar 20 scans and measures the target area to obtain point cloud data of the target area for 3D reconstruction.
  • in some embodiments, the lidar 20 is installed on the landing gear of the UAV 100 and is communicatively connected to the control system 12; the lidar 20 transmits the collected point cloud data to the control system 12, which processes it.
  • the drone 100 may include two or more landing gear legs, and the lidar 20 is mounted on one of them. The lidar 20 may also be mounted at other positions on the UAV 100, which is not specifically limited.
  • the power system 11 may include one or more electronic speed controllers (ESCs), one or more propellers, and one or more motors corresponding to the one or more propellers, wherein each motor is connected between an ESC and a propeller, and the motors and propellers are arranged on the arms of the UAV 100; the ESC receives the driving signal generated by the control system and provides a driving current to the motor according to the driving signal to control the speed of the motor.
  • the motor drives the propeller to rotate, thereby providing power for the flight of the UAV 100, and this power enables the UAV 100 to move in one or more degrees of freedom.
  • in some embodiments, the UAV 100 may rotate about one or more rotation axes, which may include a roll axis, a yaw axis, and a pitch axis.
  • the motor may be a DC motor or a permanent magnet synchronous motor, and may be a brushless motor or a brushed motor.
  • Control system 12 may include a controller and a sensing system.
  • the controller is used to control the flight of the UAV 100, for example, the flight of the UAV 100 can be controlled according to the attitude information measured by the sensing system. It should be understood that the controller can control the UAV 100 according to pre-programmed instructions.
  • the sensing system is used to measure the attitude information of the UAV 100, that is, the position information and state information of the UAV 100 in space, such as three-dimensional position, three-dimensional angle, three-dimensional velocity, three-dimensional acceleration, and three-dimensional angular velocity.
  • the sensing system may include at least one of a gyroscope, an ultrasonic sensor, an electronic compass, an inertial measurement unit (Inertial Measurement Unit, IMU), a visual sensor, a global navigation satellite system, a barometer, and other sensors.
  • the global navigation satellite system may be the Global Positioning System (GPS).
  • the inertial measurement unit and the global positioning system constitute a pose measurement device, which measures the pose information of the UAV, such as attitude information, attitude angle information, and position information. Other sensors can also form the pose measurement device, such as a vision sensor combined with the global positioning system.
  • the controller may include one or more processors and memory.
  • the processor may be, for example, a micro-controller unit (Micro-controller Unit, MCU), a central processing unit (Central Processing Unit, CPU), or a digital signal processor (Digital Signal Processor, DSP), and the like.
  • the memory may be a Flash chip, a read-only memory (ROM), a magnetic disk, an optical disk, a USB flash drive, or a removable hard disk.
  • the memory of the movable platform provided by the embodiments of this application is used to store a computer program; the processor is used to execute the computer program and, when executing it, implement any one of the point cloud data processing methods provided in this application.
  • exemplarily, the processor is configured to run a computer program stored in the memory and implement the following steps when executing it: acquire the first frame of point cloud data and the second frame of point cloud data obtained by scanning the target area with the lidar mounted on the movable platform; match the plurality of sampling points of the two frames to obtain sampling point pairing information, which indicates that the first sampling point of the first frame of point cloud data and the second sampling point in the second frame of point cloud data correspond to the same object point in the target area; determine, based on the pose information of the first sampling point and the second sampling point, the pose deviation between the sampling points of the first frame of point cloud data and the sampling points of the second frame of point cloud data; and stitch the first frame of point cloud data and the second frame of point cloud data based on the pose deviation.
  • in this way, high-precision point cloud data of the target area can be quickly obtained for 3D reconstruction.
  • the UAV 100 may be a rotary-wing UAV, such as a quad-rotor, hexa-rotor, or octa-rotor UAV, a fixed-wing UAV, or a combination of rotary-wing and fixed-wing types, which is not limited here.
  • FIG. 3 shows a structure of a flight system provided by an embodiment of the present application, where the flight system includes an unmanned aerial vehicle 100 and a control terminal 200 .
  • the control terminal 200 is located at the ground end of the flight system, and can communicate with the drone 100 wirelessly to remotely control the drone 100 , such as controlling the drone 100 to fly according to a preset pre-planned route.
  • the UAV 100, specifically the controller of the control system of the UAV 100, can be used to execute any one of the point cloud data processing methods provided in the embodiments of this application, so as to quickly obtain high-precision point cloud data.
  • as shown in FIG. 4a, the drone flies along a pre-planned route and controls the lidar 20 to continuously collect data (point cloud data) during the flight, thereby collecting multiple frames of original point cloud data.
  • the existing method stacks and stitches the multiple frames of original point cloud data together using pose information measured by an inertial measurement unit or the like; because the pose is not accurate enough, the point cloud data of the target area is error-prone and of poor quality. Alternatively, 3D reconstruction technology can be used; since current 3D point cloud reconstruction methods require a high-performance computer and long computation, the multiple frames of original point cloud data must be sent to the ground side for 3D reconstruction and the reconstructed point cloud data sent back to the drone, which requires a long wait and is not friendly to outdoor work.
  • in addition, because the drone is moving while the lidar scans progressively, the point cloud becomes misaligned due to the movement of the drone, resulting in ghosting of the point cloud, as shown in FIG. 6, specifically the ghosted part in the box in FIG. 6.
  • in some embodiments, in order to cover a large area and facilitate the extraction and matching of feature points, the lidar may adopt a non-repetitive scanning manner.
  • for example, a 'petal' scanning pattern, one kind of non-repetitive scanning, may be used, whereas existing scanning methods are generally progressive (line-by-line). Compared with progressive scanning, the petal pattern has the advantages of large coverage and convenient feature point extraction, but it also causes a larger deviation in the point cloud data, so existing point cloud stitching methods are not suitable for this scanning mode. For example, at a speed of 5 m/s with point cloud data emitted every 0.1 s, the deviation of the point cloud data is 50 cm.
  • the method for processing point cloud data provided by the embodiments of the present application can eliminate the deviation caused by the non-repetitive scanning manner, thereby improving the accuracy of the point cloud data.
  • FIG. 8 is a schematic flowchart of steps of a method for processing point cloud data provided by an embodiment of the present application.
  • the method for processing point cloud data is applied to a movable platform, such as an unmanned aerial vehicle.
  • the movable platform is equipped with a laser radar, and the scanning mode of the laser radar can be non-repetitive scanning.
  • the method for processing point cloud data includes steps S101 to S104.
  • the first frame of point cloud data and the second frame of point cloud data are obtained by scanning the target area with the lidar mounted on the movable platform.
  • each frame of point cloud data includes a plurality of sampling points obtained by scanning the target area within a preset time period, and the time period during which the movable platform collects the first frame of point cloud data and the time period during which it collects the second frame of point cloud data are not continuous in time.
  • the target area can be part or all of the working area of the UAV. For example, to perform 3D reconstruction of the working area and obtain a 3D map of it, a UAV equipped with a lidar can scan one or more target areas in the working area and collect their point cloud data, thereby obtaining the point cloud data of the working area.
  • specifically, the UAV can be controlled to fly over the working area along the pre-planned route, with the lidar mounted on the movable platform moving along the pre-planned route and scanning the working area, to obtain multiple frames of point cloud data of the working area.
  • each frame of point cloud data refers to the radar data corresponding to multiple collection points collected by the lidar within a preset time period, where a collection point is the position information of a surface object point in the working area.
  • in some embodiments, the first frame of point cloud data is collected when the movable platform is in a first route segment of the pre-planned route, and the second frame of point cloud data is collected when the movable platform is in a second route segment of the pre-planned route, as shown in FIG. 4b.
  • the first route segment and the second route segment may be parallel or substantially parallel route segments, or they may be two route segments at a preset angle, such as 90 degrees or another angle.
  • when the UAV scans point cloud data with the lidar, it can also measure the corresponding pose information with the pose measurement device, which is used to determine the pose information corresponding to each collection point in each frame of point cloud data.
  • to do so, it is first necessary to determine the collection time t of each collection point in each frame of point cloud data, and then determine the pose information $T^{w}_{b}(t)$ output by the UAV's pose measurement device at time t, where $T^{w}_{b}(t)$ denotes the pose of the pose measurement device in the world coordinate system at time t. From this, the pose information corresponding to each collection point can be obtained.
  • the collection time t of each collection point in each frame of point cloud data can be determined according to the radar collection duration and the timestamp of that frame.
  • for example, if the radar collection duration of each frame of point cloud data is 50 ms and the frame includes 100 collection points, each collection point occupies 0.5 ms. If the timestamp of the frame is 1 min 12 s 5 ms, the collection time of the last collection point is 1 min 12 s 5 ms, that of the second-to-last collection point is 1 min 12 s 4.5 ms, and so on for every collection point.
  • since every collection time must correspond to pose information, but the measurement frequency of the pose measurement device may be too low to satisfy this condition, the pose information at each collection time can be obtained by linear interpolation.
  • for example, if the measurement frequency of the pose information output by the pose measurement device is 200 Hz, the 200 Hz data can be interpolated into 240 kHz data by means of linear interpolation fitting, to ensure that each collection time corresponds to pose information.
  • in some embodiments, the collection time of each sampling point in the first frame of point cloud data and the second frame of point cloud data may be acquired; the pose information measured while the movable platform collected those frames, together with the measurement frequency of the pose information, is then obtained; and the pose information corresponding to each sampling point is determined according to the measurement frequency and the collection time.
  • because the drone is also moving while the lidar collects point cloud data, the collection points within the same frame of point cloud data are also misaligned, i.e., a blurring phenomenon caused by the movement of the drone.
  • in some embodiments, to eliminate this motion-induced blurring and improve the accuracy of the final stitched point cloud data, the first frame of point cloud data and the second frame of point cloud data can be preprocessed to eliminate the blurring.
  • specifically, the pose relationship between the radar coordinate system corresponding to the lidar and the world coordinate system corresponding to the movable platform can be obtained, and the sampling points in the first frame of point cloud data and the second frame of point cloud data are converted from the radar coordinate system to the world coordinate system according to this pose relationship, so that the blurring can be eliminated.
  • exemplarily, the pose relationship $T^{b}_{l}$ between the radar coordinate system of the lidar and the body coordinate system of the UAV can be queried, and according to $T^{b}_{l}$, the pose information of the i-th collection point in each frame of point cloud data is converted from the radar polar coordinate system to the world coordinate system: $p^{w}_{i} = T^{w}_{b}(t_i)\,T^{b}_{l}\,p^{l}_{i}$.
  • the pose relationship $T^{b}_{l}$ can be obtained by algorithmic calibration from the positional and angular relationship between the UAV and the radar at design time, and is not introduced in detail here.
  • in some embodiments, after the point cloud data has been converted from the radar coordinate system to the world coordinate system, denser point cloud data can be obtained for subsequent processing: the time points corresponding to the first frame of point cloud data and the second frame of point cloud data can be obtained, wherein the time points include the start time, middle time, or end time corresponding to the first frame of point cloud data and the second frame of point cloud data; the pose information corresponding to the time point is obtained; and each sampling point in the first frame of point cloud data and the second frame of point cloud data is converted from the world coordinate system to the radar coordinate system according to the pose information corresponding to the time point, resulting in denser point cloud data.
  • the sampling point pairing information is used to indicate that the first sampling point of the first frame of point cloud data and the second sampling point in the second frame of point cloud data correspond to the same object point in the target area.
  • the multiple sampling points of the first frame of point cloud data and the multiple sampling points of the second frame of point cloud data may include edge feature points and/or plane feature points, wherein the edge feature points and plane feature points are determined through the surface smoothness of the sampling points of the point cloud data.
  • specifically, if the surface smoothness of a sampling point is less than or equal to the plane threshold, the sampling point is determined to be a plane feature point; if the surface smoothness of a sampling point is greater than or equal to the edge threshold, the sampling point is determined to be an edge feature point.
  • the plane threshold and edge threshold are empirical values, and their sizes are not limited here.
  • in some embodiments, the surface smoothness of a sampling point may be determined according to the sampling point and the other sampling points on the scan line where the sampling point is located. Since the lidar scans with multiple laser beams and each beam scans different object points at different times, connecting the different object points scanned by one beam at different times yields a scan line.
  • in some embodiments, the difference between the position information of each sampling point in the point cloud data and the position information of the other sampling points on its scan line may be determined and summed, and the 2-norm of the summed differences computed; the 2-norm of the position information of the sampling point is also calculated; the surface smoothness of the sampling point is then determined according to these norms and the number of sampling points on the same scan line:

$$c = \frac{1}{N \cdot \left\| X_{(k,i)} \right\|} \left\| \sum_{j \in S,\, j \neq i} \left( X_{(k,i)} - X_{(k,j)} \right) \right\|$$

  • where c represents the surface smoothness, S represents a scan line, N is the number of collection points on the scan line, k indexes the specific frame of point cloud data, and $X_{(k,i)} - X_{(k,j)}$ is the difference between the i-th collection point and another collection point j on the same scan line; $\|\cdot\|$ denotes the 2-norm (the square root of the sum of the squares of the elements).
  • with this formula, the surface smoothness c of every collection point in each frame of point cloud data can be calculated, i.e., each collection point i has a corresponding surface smoothness c, from which edge feature points and plane feature points are determined. Specifically, if c > threshold_edge, i.e., greater than the edge threshold, the collection point is determined to be an edge feature point; if c < threshold_plane, i.e., less than the plane threshold, the collection point is determined to be a plane feature point.
  • plane feature points and edge feature points can be determined by the above method, but edge feature points may be misjudged. Therefore, the position information corresponding to the edge feature points can be projected from three-dimensional coordinates onto a two-dimensional plane to obtain a two-dimensional image, and edge detection processing is performed on the two-dimensional image to obtain filtered edge feature points.
  • specifically, the position information of the edge feature points in three-dimensional coordinates can be projected onto a unified two-dimensional plane, i.e., converted from three-dimensional coordinates into the two-dimensional plane, and then screened with an image processing method (such as the Canny operator) to obtain more accurate edge feature points, thereby eliminating misjudged edge feature points.
  • when matching the multiple sampling points of the first frame of point cloud data and the multiple sampling points of the second frame of point cloud data to obtain the sampling point pairing information, the overlap ratio of the plane feature points and edge feature points of the two frames can also be calculated, for example an overlap ratio of 60%; the overlap ratio is also part of the sampling point pairing information.
  • the first sampling point and the second sampling point represent the same object point in the target area. The distance from the first sampling point to the neighborhood plane or neighborhood edge of the first sampling point in the second frame of point cloud data can first be determined, or the distance from the second sampling point to the neighborhood plane or neighborhood edge of the second sampling point in the first frame of point cloud data can be determined; the pose deviation between the sampling points of the first frame of point cloud data and the sampling points of the second frame of point cloud data is then determined according to the distance (the distance corresponding to the first sampling point, or the distance corresponding to the second sampling point).
  • the neighborhood plane of the first sampling point in the second frame of point cloud data is a plane formed by a plurality of neighboring points adjacent to the first sampling point in the second frame of point cloud data; the neighborhood edge of the first sampling point in the second frame of point cloud data is an edge formed by a plurality of neighboring points adjacent to the first sampling point in the second frame of point cloud data.
  • in some embodiments, a principal component analysis (PCA) method may be used to determine the neighborhood plane or neighborhood edge corresponding to each of the first sampling point and the second sampling point. Specifically, based on PCA, the neighborhood plane or neighborhood edge corresponding to the first sampling point of the first frame of point cloud data in the second frame of point cloud data is determined, or the neighborhood plane or neighborhood edge corresponding to the second sampling point of the second frame of point cloud data in the first frame of point cloud data is determined.
  • exemplarily, for a first frame of point cloud data A and a second frame of point cloud data B, for the first sampling point pointA1 in frame A, its N nearest neighbor points (e.g., the N closest collection points) can be found in the second frame of point cloud data: pointB1, pointB2, pointB3...pointBN. PCA is then used to fit these N neighbors, and a threshold decides whether they form a plane or an edge.
  • based on the pose information of the first sampling point and the second sampling point, the pose deviation between the sampling points of the first frame of point cloud data and the sampling points of the second frame of point cloud data is determined. Specifically, with the distance as the loss cost, the pose information of the sampling points in the first frame of point cloud data and the second frame of point cloud data is iteratively optimized to determine the pose deviation; the iterative optimization refers to adjusting the pose information of the sampling points in the two frames so as to minimize the loss cost (the distance), and the minimum can also be determined by the cost falling below a preset value.
  • in some embodiments, after each iteration, the pose information of the sampling points in the first frame of point cloud data and the second frame of point cloud data may be projected to the world coordinate system according to the optimized pose information, to obtain the pose information of the sampling points in the world coordinate system; if the change of the pose information in the world coordinate system before and after optimization is less than a preset threshold, the pose deviation is determined from the optimized pose information and the pose information of the sampling points in the two frames.
  • that is, the pose information of the sampling points in the first frame of point cloud data is subtracted from the optimized pose information to obtain the pose deviation corresponding to the first frame of point cloud data, and the pose information of the sampling points in the second frame of point cloud data is subtracted from the optimized pose information to obtain the pose deviation corresponding to the second frame of point cloud data.
  • the target pose information of the sampling points of the first frame of point cloud data and the second frame of point cloud data can be determined according to the pose deviation, where the target pose information is determined from the pose information of the sampling points of the frame of point cloud data and the pose deviation, for example by subtracting the pose deviation from, or adding it to, the pose information; the first frame of point cloud data and the second frame of point cloud data are then projected to the world coordinate system according to the target pose information, completing the stitching of the first frame of point cloud data and the second frame of point cloud data.
  • the target area may include multiple groups of first-frame and second-frame point cloud data, each group corresponding to different object points in the target area; each such group is stitched according to the point cloud data processing method provided above, so that point cloud data of a larger target area can be obtained.
  • in some embodiments, the multiple frames of point cloud data may also be divided into multiple point cloud segments; the multiple point cloud segments are matched into at least one segment pair according to the feature points of the point cloud segments, the segment pair including two point cloud segments. The two point cloud segments of a segment pair, for example point cloud segment StripA and point cloud segment StripB, can be treated as the first frame of point cloud data and the second frame of point cloud data of the above embodiments and processed with the above point cloud data processing method, completing the stitching of the point cloud segment StripA and the point cloud segment StripB.
  • specifically, the plane feature points and edge feature points of each point cloud segment are extracted, the similarity of the multiple point cloud segments is calculated according to the plane feature points and the edge feature points, and segment pairs are matched according to the similarity, each segment pair including at least two point cloud segments.
  • exemplarily, for any two point cloud segments, such as stripA and stripB, the overlap ratio of their plane feature points and edge feature points is calculated; if the overlap ratio exceeds a threshold, for example 60%, the two point cloud segments are considered matched as a pair, i.e., they become the segment pair Strip_pair(stripA, stripB).
  • the time periods corresponding to the collection of the point cloud segment StripA and the point cloud segment StripB are not continuous in time.
  • in some embodiments, in order to improve data processing efficiency while ensuring the accuracy of the stitched data, a point cloud segment may be limited to include point cloud data of a preset number of frames within a preset time period.
  • exemplarily, point cloud data with a preset time period of 2 s and a preset frame count of 20 frames can be selected and stacked into a point cloud segment Strip: the collection points in the segment are first projected to the world coordinate system to remove the blurring caused by motion-induced jitter, and then uniformly projected to the radar coordinate system of an intermediate time (i.e., the radar coordinate system at 1 s); of course, they can also be projected to the start or end time of the point cloud segment Strip. The data of the point cloud segment Strip are thus unified into one coordinate system and stacked into a denser point cloud.
  • it should be noted that if the chosen preset time period is too short, a single frame of point cloud data contains too few collection points, and since the lidar used in the embodiments of the present application scans non-repetitively, two frames of point cloud data would be difficult to match with each other.
  • stacking 20 frames collected within 2 seconds into one point cloud segment ensures that an accurate pose relationship between point cloud segments can be solved.
  • in addition, the time span within a point cloud segment Strip is short, so the pose information within that period can be regarded as accurate, and the pose information from the pose measurement device is used directly.
  • the method for processing point cloud data provided by the above embodiments can eliminate the influence of pose deviation, improve the accuracy of point cloud data, and eliminate ghosting, thereby being applicable to fast and high-precision large-scale 3D reconstruction scenarios.
  • the left part of FIG. 10 shows the image corresponding to the point cloud data of object point a after stitching, i.e., object point a exhibits ghosting.
  • the point cloud data processing method can eliminate the ghosting, as shown in the right part of FIG. 10, thereby improving the accuracy of the point cloud data.
  • the control device 400 includes a processor 401 and a memory 402; the memory 402 is used for storing a computer program; the processor 401 is used for executing the computer program and, when executing it, implementing the steps of any one of the point cloud data processing methods provided in the embodiments of this application.
  • the processor 401 may be a micro-controller unit (MCU), a central processing unit (CPU), or a digital signal processor (DSP), or the like.
  • the memory 402 may be a Flash chip, a read-only memory (ROM), a magnetic disk, an optical disk, a USB flash drive, a removable hard disk, or the like.
  • the processor is configured to run a computer program stored in the memory, and implement any one of the point cloud data processing methods provided in the embodiments of the present application when the computer program is executed.
  • exemplarily, the processor is configured to run a computer program stored in the memory and implement the following steps when executing it: acquire a first frame of point cloud data and a second frame of point cloud data obtained by scanning the target area with the lidar mounted on the movable platform, where each frame of point cloud data includes a plurality of sampling points obtained by scanning the target area within a preset time period, and the time period during which the movable platform collects the first frame of point cloud data and the time period during which it collects the second frame of point cloud data are not continuous in time; match the plurality of sampling points of the first frame of point cloud data with the plurality of sampling points of the second frame of point cloud data to obtain sampling point pairing information, which indicates that the first sampling point of the first frame of point cloud data and the second sampling point in the second frame of point cloud data correspond to the same object point in the target area; determine, based on the pose information of the first sampling point and the second sampling point, the pose deviation between the sampling points of the two frames; and stitch the first frame of point cloud data and the second frame of point cloud data based on the pose deviation.
  • in some embodiments, the lidar carried on the movable platform moves along a pre-planned route; the first frame of point cloud data is collected when the movable platform is located in a first route segment of the pre-planned route, and the second frame of point cloud data is collected when the movable platform is located in a second route segment of the pre-planned route.
  • in some embodiments, the first route segment is parallel to the second route segment.
  • the sampling points include edge feature points and/or plane feature points, and the edge feature points and plane feature points are determined by the surface smoothness of the sampling points of the point cloud data.
  • the surface smoothness of the sampling point is determined according to the sampling point and other sampling points on the scan line where the sampling point is located.
  • the processor is configured to: determine the difference between the position information of each sampling point in the point cloud data and the position information of the other sampling points on the scan line where the sampling point is located, and take the 2-norm of the summed differences; calculate the 2-norm of the position information of the sampling point; and determine the surface smoothness of the sampling points of the point cloud data according to the 2-norm of the summed differences, the 2-norm of the sampling point, and the number of sampling points on the same scan line.
  • the processor is configured to: project the edge feature points to a two-dimensional plane to obtain a two-dimensional image; perform edge detection processing on the two-dimensional image to obtain filtered edge feature points.
  • determining, based on the pose information of the first sampling point and the second sampling point, the pose deviation between the sampling points of the first frame of point cloud data and the sampling points of the second frame of point cloud data includes:
  • determining the distance from the first sampling point to the neighborhood plane or neighborhood edge of the first sampling point in the second frame of point cloud data, or determining the distance from the second sampling point to the neighborhood plane or neighborhood edge of the second sampling point in the first frame of point cloud data, and determining the pose deviation according to the distance;
  • the neighborhood plane of the first sampling point in the second frame of point cloud data is a plane formed by a plurality of neighboring points adjacent to the first sampling point in the second frame of point cloud data;
  • the neighborhood edge of the first sampling point in the second frame of point cloud data is an edge formed by a plurality of neighboring points adjacent to the first sampling point in the second frame of point cloud data.
  • the processor is used to implement:
  • iteratively optimize, with the distance as the loss cost, the pose information of the sampling points in the first frame of point cloud data and the second frame of point cloud data to determine the pose deviation; wherein the iterative optimization refers to adjusting the pose information of the sampling points in the first frame of point cloud data and the second frame of point cloud data so that the loss cost reaches a preset value.
  • the processor is used to implement:
  • according to the optimized pose information, project the pose information of the sampling points in the first frame of point cloud data and the second frame of point cloud data to the world coordinate system to obtain the pose information of the sampling points in the world coordinate system; if the change of the pose information in the world coordinate system before and after optimization is less than a preset threshold, determine the pose deviation from the optimized pose information and the pose information of the sampling points in the first frame of point cloud data and the second frame of point cloud data.
  • the processor is used to implement:
  • based on principal component analysis, determine the neighborhood plane or neighborhood edge corresponding to the first sampling point of the first frame of point cloud data in the second frame of point cloud data, or determine the neighborhood plane or neighborhood edge corresponding to the second sampling point of the second frame of point cloud data in the first frame of point cloud data.
  • the scanning manner of the lidar includes non-repetitive scanning.
  • the processor is used to implement: obtaining the pose relationship between the radar coordinate system corresponding to the lidar and the world coordinate system corresponding to the movable platform, and converting the sampling points in the first frame of point cloud data and the second frame of point cloud data from the radar coordinate system to the world coordinate system according to the pose relationship.
  • the processor is used to implement:
  • obtain the time points corresponding to the first frame of point cloud data and the second frame of point cloud data, wherein the time points include the start time, middle time, or end time corresponding to the first frame of point cloud data and the second frame of point cloud data; obtain the pose information corresponding to the time point; and convert each sampling point in the first frame of point cloud data and the second frame of point cloud data from the world coordinate system to the radar coordinate system according to the pose information corresponding to the time point.
  • the stitching of the first frame of point cloud data and the second frame of point cloud data based on the pose deviation includes: determining the target pose information of the sampling points according to the pose deviation, and projecting the first frame of point cloud data and the second frame of point cloud data to the world coordinate system according to the target pose information.
  • the embodiments of the present application further provide a computer-readable storage medium, where the computer-readable storage medium stores a computer program; the computer program includes program instructions, and the processor executes the program instructions to implement the steps of any one of the point cloud data processing methods provided in the above embodiments.
  • the computer-readable storage medium may be an internal storage unit of the movable platform described in any of the foregoing embodiments, such as the memory of the drone.
  • the computer-readable storage medium can also be an external storage device of the drone, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, or a flash card equipped on the drone.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Optics & Photonics (AREA)
  • Optical Radar Systems And Details Thereof (AREA)

Abstract

A point cloud data processing method, device, and storage medium, wherein the method includes: obtaining a first frame of point cloud data and a second frame of point cloud data by scanning a target area with a lidar mounted on a movable platform (S101); matching a plurality of sampling points of the first frame of point cloud data with a plurality of sampling points of the second frame of point cloud data to obtain sampling point pairing information, the sampling point pairing information being used to indicate that a first sampling point in the first frame of point cloud data and a second sampling point in the second frame of point cloud data correspond to the same object point in the target area (S102); determining, based on the pose information of the first sampling point and the second sampling point, a pose deviation between the sampling points of the first frame of point cloud data and the sampling points of the second frame of point cloud data (S103); and stitching the first frame of point cloud data and the second frame of point cloud data based on the pose deviation (S104).

Description

Point cloud data processing method, device, and storage medium

Technical Field

The present application relates to the technical field of data processing, and in particular to a point cloud data processing method, a control device, a movable platform, and a storage medium.

Background Art

A movable platform, such as an unmanned aerial vehicle, is usually equipped with a lidar that measures point cloud data of an operation area, and the point cloud data can be used for three-dimensional modeling. When collecting point cloud data of the operation area, the drone needs to fly along a predetermined route and use the lidar to collect multiple frames of original point clouds, which are then stitched together to obtain the point cloud data of the operation area. If the multiple frames of original point clouds are stitched together using only the pose information measured by an inertial measurement unit, the resulting point cloud data is error-prone and of poor quality; alternatively, the point cloud data can be reconstructed using three-dimensional reconstruction techniques, but current three-dimensional reconstruction techniques all require high-performance computers and can only be completed after long computation, which is not conducive to outdoor work.
Summary of the Invention

To this end, the present application provides a point cloud data processing method, a movable platform, a control device, and a storage medium, so as to quickly acquire high-precision point cloud data.

In a first aspect, the present application provides a point cloud data processing method, the method comprising:

acquiring a first frame of point cloud data and a second frame of point cloud data obtained by scanning a target area with a lidar mounted on a movable platform, where each frame of point cloud data includes a plurality of sampling points obtained by scanning the target area within a preset time period, and the time period during which the movable platform collects the first frame of point cloud data and the time period during which it collects the second frame of point cloud data are not continuous in time;

matching a plurality of sampling points of the first frame of point cloud data with a plurality of sampling points of the second frame of point cloud data to obtain sampling point pairing information, where the sampling point pairing information indicates that a first sampling point in the first frame of point cloud data and a second sampling point in the second frame of point cloud data correspond to the same object point in the target area;

determining, based on the pose information of the first sampling point and the second sampling point, a pose deviation between the sampling points of the first frame of point cloud data and the sampling points of the second frame of point cloud data;

stitching the first frame of point cloud data and the second frame of point cloud data based on the pose deviation.
In a second aspect, the present application further provides a movable platform, the movable platform including a lidar, and further including a processor and a memory;

the lidar is used to scan a target area to obtain point cloud data;

the memory is used to store a computer program;

the processor is configured to execute the computer program and, when executing the computer program, implement the steps of the point cloud data processing method according to any one of the embodiments of the present application.

In a third aspect, the present application further provides a control device for acquiring point cloud data collected by a lidar mounted on a movable platform, the control device including a memory and a processor;

the memory is used to store a computer program;

the processor is configured to execute the computer program and, when executing the computer program, implement the steps of the point cloud data processing method according to any one of the embodiments of the present application.

In a fourth aspect, the present application further provides a computer-readable storage medium storing a computer program which, when executed by a processor, causes the processor to implement the point cloud data processing method according to any one of the embodiments of the present application.
The point cloud data processing method, movable platform, control device, and storage medium provided by the embodiments of the present application can eliminate ghosting in point cloud data and quickly obtain high-precision point cloud data, and are therefore applicable to fast, high-precision large-scale three-dimensional reconstruction scenarios as well as wide-area outdoor three-dimensional surveying and mapping.

It should be understood that the foregoing general description and the following detailed description are merely exemplary and explanatory, and do not limit the present application.

Brief Description of the Drawings

To explain the technical solutions of the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show some embodiments of the present application, and those of ordinary skill in the art can obtain other drawings from them without creative effort.
Figure 1 is a schematic structural diagram of an unmanned aerial vehicle provided by an embodiment of the present application;

Figure 2 is a schematic block diagram of the flight control system of the unmanned aerial vehicle provided by an embodiment of the present application;

Figure 3 is a schematic structural diagram of a flight system provided by an embodiment of the present application;

Figures 4a and 4b are schematic diagrams of a scenario in which a drone flies along a pre-planned route, provided by an embodiment of the present application;

Figure 5 is a schematic diagram of point cloud data exhibiting ghosting, provided by an embodiment of the present application;

Figure 6 is a schematic diagram of another set of point cloud data exhibiting ghosting, provided by an embodiment of the present application;

Figure 7 is a schematic diagram of a lidar scanning pattern provided by an embodiment of the present application;

Figure 8 is a schematic flowchart of the steps of a point cloud data processing method provided by an embodiment of the present application;

Figure 9 is a schematic diagram of the projection of edge feature points provided by an embodiment of the present application;

Figure 10 is a schematic diagram of point cloud data with ghosting eliminated, provided by an embodiment of the present application;

Figure 11 is a schematic block diagram of a control device provided by an embodiment of the present application.
Detailed Description

The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. Based on the embodiments of the present application, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of the present application.

It should also be understood that the terminology used in this specification is for the purpose of describing particular embodiments only and is not intended to limit the present application. As used in this specification and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.

It should be further understood that the term "and/or" used in this specification and the appended claims refers to and encompasses any and all possible combinations of one or more of the associated listed items.

The flowcharts shown in the drawings are merely illustrative; they need not include all contents and operations/steps, nor must they be executed in the order described. For example, some operations/steps may be decomposed, combined, or partially merged, so the actual execution order may change according to the actual situation.
When performing three-dimensional surveying or three-dimensional reconstruction of certain operation areas, a drone equipped with a lidar can be used: the lidar scans the operation area to obtain point cloud data, which can be used for three-dimensional modeling. To collect point cloud data of the operation area, the drone needs to fly along a predetermined route and use the lidar to collect multiple frames of original point cloud data, which are then stitched together to obtain the point cloud data of the operation area.

If the multiple frames of original point cloud data are simply stitched together using only the pose information measured by an inertial measurement unit, the resulting point cloud data is error-prone and of poor quality; alternatively, the point cloud data can be reconstructed using three-dimensional reconstruction techniques, but current techniques all require high-performance computers and long computation, which is not conducive to outdoor operations.

To this end, the embodiments of the present application provide a point cloud data processing method, a control device, a movable platform, and a storage medium, which can eliminate ghosting so as to improve the accuracy of point cloud data, and which have low hardware requirements, making them suitable for fast, high-precision large-scale three-dimensional reconstruction.

The movable platform includes a drone, a vehicle, or a robot, and may of course also include a manned aircraft. The following description takes a drone as an example of the movable platform.

Some embodiments of the present application are described in detail below with reference to the drawings. The following embodiments and the features therein may be combined with each other without conflict.
Referring to Figures 1 and 2, Figure 1 shows the structure of an unmanned aerial vehicle 100 provided by an embodiment of the present application, and Figure 2 shows the structural framework of the flight control system of the unmanned aerial vehicle 100. As shown in Figures 1 and 2, the UAV 100 may include a frame 10, a power system 11, a control system 12, and a lidar 20.

The frame 10 may include a fuselage and landing gear. The fuselage may include a center frame and one or more arms connected to the center frame, the one or more arms extending radially from the center frame. The landing gear is connected to the fuselage and supports the UAV 100 when it lands.

The lidar 20 can be mounted on the UAV, specifically on the frame 10 of the UAV 100. During the flight of the UAV 100, it measures the surroundings of the UAV 100, such as obstacles, to ensure flight safety. In the embodiments of the present application, the lidar 20 scans and measures the target area to obtain point cloud data of the target area for three-dimensional reconstruction.

In some embodiments, the lidar 20 is mounted on the landing gear of the UAV 100 and is communicatively connected to the control system 12; the lidar 20 transmits the collected point cloud data to the control system 12, which processes it.

It should be noted that the UAV 100 may include two or more landing gear legs, with the lidar 20 mounted on one of them. The lidar 20 may also be mounted at other positions on the UAV 100, which is not specifically limited.

The power system 11 may include one or more electronic speed controllers (ESCs), one or more propellers, and one or more motors corresponding to the one or more propellers, where each motor is connected between an ESC and a propeller, and the motors and propellers are arranged on the arms of the UAV 100; the ESC receives a driving signal generated by the control system and provides a driving current to the motor according to the driving signal to control the rotational speed of the motor.

The motor drives the propeller to rotate, providing power for the flight of the UAV 100 and enabling the UAV 100 to move in one or more degrees of freedom. In some embodiments, the UAV 100 may rotate about one or more rotation axes, which may include, for example, a roll axis, a yaw axis, and a pitch axis. It should be understood that the motor may be a DC motor or a permanent magnet synchronous motor, and may be a brushless motor or a brushed motor.
The control system 12 may include a controller and a sensing system. The controller controls the flight of the UAV 100, for example according to attitude information measured by the sensing system. It should be understood that the controller may control the UAV 100 according to pre-programmed instructions. The sensing system measures the attitude information of the UAV 100, that is, the position and state information of the UAV 100 in space, such as three-dimensional position, three-dimensional angle, three-dimensional velocity, three-dimensional acceleration, and three-dimensional angular velocity.

The sensing system may include, for example, at least one of a gyroscope, an ultrasonic sensor, an electronic compass, an inertial measurement unit (IMU), a vision sensor, a global navigation satellite system, and a barometer. For example, the global navigation satellite system may be the Global Positioning System (GPS).

The inertial measurement unit and the global positioning system constitute a pose measurement device, which measures pose information of the UAV such as attitude information, attitude angle information, and position information. Of course, other sensors may also form the pose measurement device, such as a vision sensor and a global positioning system.

The controller may include one or more processors and a memory. The processor may be, for example, a micro-controller unit (MCU), a central processing unit (CPU), or a digital signal processor (DSP). The memory may be a Flash chip, a read-only memory (ROM), a magnetic disk, an optical disk, a USB flash drive, or a removable hard disk.
The memory of the movable platform provided by the embodiments of the present application is used to store a computer program; the processor is used to execute the computer program and, when executing it, implement any one of the point cloud data processing methods provided in the present application.

Exemplarily, the processor is configured to run a computer program stored in the memory and implement the following steps when executing it:

acquire a first frame of point cloud data and a second frame of point cloud data obtained by scanning the target area with the lidar mounted on the movable platform; match the plurality of sampling points of the first frame of point cloud data with the plurality of sampling points of the second frame of point cloud data to obtain sampling point pairing information, which indicates that a first sampling point in the first frame of point cloud data and a second sampling point in the second frame of point cloud data correspond to the same object point in the target area; determine, based on the pose information of the first and second sampling points, a pose deviation between the sampling points of the first frame of point cloud data and the sampling points of the second frame of point cloud data; and stitch the first frame of point cloud data and the second frame of point cloud data based on the pose deviation. In this way, high-precision point cloud data of the target area can be quickly obtained for three-dimensional reconstruction.

It should be noted that the UAV 100 may be a rotary-wing UAV, such as a quad-rotor, hexa-rotor, or octa-rotor UAV, a fixed-wing UAV, or a combination of rotary-wing and fixed-wing types, which is not limited here.
As shown in Figure 3, Figure 3 shows the structure of a flight system provided by an embodiment of the present application; the flight system includes a UAV 100 and a control terminal 200. The control terminal 200 is located at the ground end of the flight system and can communicate wirelessly with the UAV 100 for remote control, for example to control the UAV 100 to fly along a preset pre-planned route.

It should be understood that the above naming of the components of the UAV 100 is for identification purposes only and should not be construed as limiting the embodiments of this specification. The UAV 100, specifically the controller of its control system, can be used to execute any one of the point cloud data processing methods provided by the embodiments of the present application, so as to quickly obtain high-precision point cloud data.

Three-dimensional scene reconstruction of a typical target area requires controlling the drone to fly over the target area along a preset pre-planned route and controlling the lidar to scan the target area to obtain multiple frames of point cloud data, which are then stitched together to obtain the point cloud data of the target area.

Exemplarily, as shown in Figure 4a, the drone flies along the pre-planned route and controls the lidar 20 to continuously collect data (point cloud data) during the flight, thereby collecting multiple frames of original point cloud data. The existing approach stacks and stitches the multiple frames of original point cloud data together using pose information measured by an inertial measurement unit or the like; because the pose is not accurate enough, the point cloud data of the target area is error-prone and of poor quality. Alternatively, three-dimensional reconstruction technology can be used; since current three-dimensional point cloud reconstruction methods require a high-performance computer and long computation, the multiple frames of original point cloud data must be sent to the ground end for reconstruction and the reconstructed point cloud data sent back to the drone, which entails a long wait and is very unfriendly to outdoor work.
When the multiple frames of original point cloud data are stacked and stitched together using pose information measured by an inertial measurement unit or the like, the effect is as shown in the image corresponding to the target area in Figure 4a: the image of the same object point in the target area exhibits ghosting. Specifically, as shown in Figure 5, for an object point a, the image corresponding to the stitched point cloud data shows a ghost of object point a, which shows that the accuracy of the point cloud data obtained by the existing approach is low.

In addition, because the drone is moving while the lidar scans progressively, the motion of the drone misaligns the point cloud and causes ghosting, as shown in Figure 6, specifically the ghosted part in the box in Figure 6.

In some embodiments, in order to cover a larger area and facilitate the extraction and matching of feature points, the lidar may adopt a non-repetitive scanning manner. Specifically, as shown in Figure 7, a 'petal' scanning pattern, one kind of non-repetitive scanning, may be used, whereas existing scanning methods are generally progressive (line-by-line). Compared with progressive scanning, the petal pattern has the advantages of large coverage and convenient feature point extraction, but it also causes a larger deviation in the point cloud data, so existing point cloud stitching methods are not suitable for the lidar scanning mode of the present application.

Specifically, consider point cloud data spanning a time interval $\Delta t$. Assuming the lidar moves at a constant speed $v$ while observing the same object, the deviation of the point cloud data from $t = 0$ to $t = \Delta t$ is $\Delta d = v\Delta t$. Exemplarily, at a speed of 5 m/s with point cloud data emitted at a period of $\Delta t = 0.1$ s, the deviation of the point cloud data is 50 cm.

The point cloud data processing method provided by the embodiments of the present application can eliminate the deviation caused by the non-repetitive scanning manner, thereby improving the accuracy of the point cloud data.

The point cloud data processing method provided by the embodiments of the present application is described in detail below, taking a drone as an example.
Referring to Figure 8, Figure 8 is a schematic flowchart of the steps of a point cloud data processing method provided by an embodiment of the present application. The method is applied to a movable platform, for example a drone; the movable platform carries a lidar, and the scanning mode of the lidar may be non-repetitive scanning.

As shown in Figure 8, the point cloud data processing method includes steps S101 to S104.
S101: Obtain a first frame of point cloud data and a second frame of point cloud data by scanning a target area with the lidar mounted on the movable platform.

Each frame of point cloud data includes a plurality of sampling points obtained by scanning the target area within a preset time period, and the time period during which the movable platform collects the first frame of point cloud data and the time period during which it collects the second frame of point cloud data are not continuous in time.

The target area may be part or all of the drone's operation area. For example, to perform three-dimensional reconstruction of the operation area and obtain its three-dimensional map, a drone equipped with a lidar can scan one or more target areas in the operation area and collect their point cloud data, thereby obtaining the point cloud data of the operation area.

Specifically, the drone can be controlled to fly over the operation area along a pre-planned route, with the lidar mounted on the movable platform moving along the pre-planned route and scanning the operation area, to obtain multiple frames of point cloud data of the operation area. Each frame of point cloud data refers to the radar data corresponding to multiple collection points collected by the lidar within a preset time period, where a collection point is the position information of a surface object point in the operation area.

In some embodiments, the first frame of point cloud data is collected when the movable platform is in a first route segment of the pre-planned route, and the second frame of point cloud data is collected when the movable platform is in a second route segment of the pre-planned route, as shown in Figure 4b.

In some embodiments, the first route segment and the second route segment may be parallel or substantially parallel route segments, or they may be two route segments at a preset angle, for example 90 degrees or another angle.
While the drone obtains point cloud data through the lidar scan, it can also measure the corresponding pose information with the pose measurement device, so as to determine the pose information corresponding to each collection point in each frame of point cloud data.

To determine the pose information corresponding to each collection point in each frame of point cloud data, it is first necessary to determine the collection time $t$ of each collection point, and then determine the pose information $T^{w}_{b}(t)$ output by the drone's pose measurement device at time $t$, where $T^{w}_{b}(t)$ denotes the pose of the pose measurement device in the world coordinate system at time $t$. From this, the drone pose information $T^{w}_{b}(t)$ corresponding to the collection time $t$ of each collection point $p_i$ can be obtained.

Specifically, the collection time $t$ of each collection point in each frame of point cloud data can be determined from the radar collection duration and the timestamp of that frame.

Exemplarily, if the radar collection duration of each frame of point cloud data is 50 ms and the frame includes 100 collection points, then each collection point occupies 0.5 ms. If the radar timestamp of the frame is 1 min 12 s 5 ms, the collection time of the last collection point is 1 min 12 s 5 ms, that of the second-to-last collection point is 1 min 12 s 4.5 ms, and so on, yielding the specific collection time of every collection point.
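For illustration only, the following minimal Python sketch assigns a collection time to every point in a frame in the way just described; the function name and the end-of-frame timestamp convention are assumptions of this sketch rather than requirements of the present application:

```python
import numpy as np

def per_point_times(frame_end_timestamp: float, frame_duration: float,
                    num_points: int) -> np.ndarray:
    """Assign a collection time to each point in one frame, assuming the frame
    timestamp marks the last point and points are spaced uniformly."""
    dt = frame_duration / num_points                  # 50 ms / 100 points = 0.5 ms per point
    offsets = dt * np.arange(num_points - 1, -1, -1)  # count back from the frame timestamp
    return frame_end_timestamp - offsets

# Frame stamped at 1 min 12 s 5 ms, expressed in seconds:
times = per_point_times(72.005, 0.050, 100)
# times[-1] == 72.005 (last point), times[-2] == 72.0045, and so on.
```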
In some embodiments, it is necessary to ensure that every collection time of the collection points in the point cloud data has corresponding pose information; however, if the measurement frequency of the pose measurement device is low, this condition may not be met. The pose information at each collection time can therefore be obtained by linear interpolation.

Exemplarily, if the measurement frequency at which the pose measurement device outputs pose information is 200 Hz, the 200 Hz data can be interpolated into 240 kHz data by linear interpolation fitting, ensuring that every collection time has corresponding pose information.

It should be noted that interpolating 200 Hz into 240 kHz is merely an example and does not limit practical applications; in practice, interpolation can be performed according to the actual situation.
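The interpolation can be sketched as follows; positions are interpolated linearly and orientations with scipy's Slerp, and the 200 Hz sample stream and query times below are placeholders rather than values prescribed by the present application:

```python
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

pose_times = np.arange(0.0, 1.0, 1.0 / 200.0)   # 200 Hz pose measurement timestamps [s]
positions = np.random.rand(len(pose_times), 3)  # measured x, y, z in the world frame
rotations = Rotation.random(len(pose_times))    # measured orientations

def pose_at(query_times: np.ndarray):
    """Linearly interpolate position and slerp orientation at the query times."""
    pos = np.column_stack([np.interp(query_times, pose_times, positions[:, k])
                           for k in range(3)])
    rot = Slerp(pose_times, rotations)(query_times)
    return pos, rot

# Pose for each collection time of a frame (times must lie in the sampled interval):
point_positions, point_rotations = pose_at(np.linspace(0.0, 0.99, 100))
```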
In some embodiments, for the pose information of the collection points in the first and second frames of point cloud data, the collection time of each sampling point in the first frame of point cloud data and the second frame of point cloud data can be obtained; the pose information measured while the movable platform collected those frames, together with the measurement frequency of the pose information, is then obtained; and the pose information corresponding to each sampling point in the point cloud data is determined from the measurement frequency and the collection time.

Because the drone is also moving while the lidar collects point cloud data, the collection points within the same frame of point cloud data are also misaligned, i.e., a blurring phenomenon caused by the drone's motion.

In some embodiments, in order to eliminate the motion-induced blurring and improve the accuracy of the finally stitched point cloud data, the first frame of point cloud data and the second frame of point cloud data can be preprocessed to eliminate the blurring. Specifically, the pose relationship between the radar coordinate system corresponding to the lidar and the world coordinate system corresponding to the movable platform can be obtained, and the sampling points in the first and second frames of point cloud data can be converted from the radar coordinate system to the world coordinate system according to this pose relationship, thereby eliminating the blurring.
Exemplarily, the pose relationship $T^{b}_{l}$ between the radar coordinate system of the lidar and the drone's body coordinate system can be looked up, and according to $T^{b}_{l}$, the pose information $p^{l}_{i}$ of the $i$-th collection point in each frame of point cloud data is converted from the radar polar coordinate system into the world coordinate system, specifically:

$$p^{w}_{i} = T^{w}_{b}(t_i) \, T^{b}_{l} \, p^{l}_{i}$$

where $p^{w}_{i}$ is the pose information of the $i$-th collection point in the world coordinate system, and $T^{w}_{b}(t_i)$ is the pose of the pose measurement device in the world coordinate system at the point's collection time $t_i$.

It should be noted that the pose relationship $T^{b}_{l}$ can be obtained by algorithmic calibration from the positional and angular relationship between the drone and the radar at design time, and is not described in detail here.
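The per-point conversion above can be sketched as the following helper (a 4x4 homogeneous matrix is assumed for the calibrated extrinsic $T^{b}_{l}$, and the interpolated body poses come from the previous sketch; all names are illustrative):

```python
import numpy as np

def deskew_to_world(points_l: np.ndarray, body_positions: np.ndarray,
                    body_rotations, T_b_l: np.ndarray) -> np.ndarray:
    """Map each lidar point into the world frame: p_w = T_w_b(t_i) @ T_b_l @ p_l."""
    n = len(points_l)
    homo = np.hstack([points_l, np.ones((n, 1))])  # Nx4 homogeneous coordinates
    out = np.empty((n, 3))
    for i in range(n):
        T_w_b = np.eye(4)                          # body pose at this point's time t_i
        T_w_b[:3, :3] = body_rotations[i].as_matrix()
        T_w_b[:3, 3] = body_positions[i]
        out[i] = (T_w_b @ T_b_l @ homo[i])[:3]
    return out
```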
In some embodiments, after the point cloud data has been converted from the radar coordinate system to the world coordinate system, in order to obtain denser point cloud data for subsequent processing, the time points corresponding to the first frame of point cloud data and the second frame of point cloud data can also be obtained, where the time points include the start time, middle time, or end time corresponding to the first and second frames of point cloud data; the pose information corresponding to the time point is obtained; and each sampling point in the first and second frames of point cloud data is converted from the world coordinate system to the radar coordinate system according to the pose information corresponding to the time point, thereby obtaining denser point cloud data.
S102: Match the plurality of sampling points of the first frame of point cloud data with the plurality of sampling points of the second frame of point cloud data to obtain sampling point pairing information, where the sampling point pairing information indicates that a first sampling point in the first frame of point cloud data and a second sampling point in the second frame of point cloud data correspond to the same object point in the target area.

The plurality of sampling points of the first frame of point cloud data and the plurality of sampling points of the second frame of point cloud data may both include edge feature points and/or plane feature points, where edge feature points and plane feature points are determined from the surface smoothness of the sampling points of the point cloud data.

Specifically, if the surface smoothness of a sampling point is less than or equal to a plane threshold, the sampling point is determined to be a plane feature point; if the surface smoothness of a sampling point is greater than or equal to an edge threshold, the sampling point is determined to be an edge feature point. The plane threshold and edge threshold are empirical values whose magnitudes are not limited here.
In some embodiments, the surface smoothness of a sampling point can be determined from the sampling point and the other sampling points on the scan line on which it lies. Since the lidar scans with multiple laser beams and each beam scans different object points at different times, connecting the different object points scanned by one beam at different times yields a scan line.

In some embodiments, the difference between the position information of each sampling point in the point cloud data and the position information of the other sampling points on its scan line can be determined and summed, and the 2-norm of the summed differences computed; the 2-norm of the sampling point's position information is also calculated; the surface smoothness of the sampling point is then determined from these norms and the number of sampling points on the same scan line.

Specifically, the surface smoothness of a sampling point of the point cloud data is computed as:

$$c = \frac{1}{N \cdot \left\| X_{(k,i)} \right\|} \left\| \sum_{j \in S,\, j \neq i} \left( X_{(k,i)} - X_{(k,j)} \right) \right\|$$

where $c$ denotes the surface smoothness, $S$ denotes a scan line, $N$ is the number of collection points on the scan line, $k$ indexes the specific frame of point cloud data, $X_{(k,i)} - X_{(k,j)}$ is the difference between the $i$-th collection point of the frame and another collection point $j$ on the same scan line, $\|\cdot\|$ denotes the 2-norm (the square root of the sum of the squares of the elements), and $\sum$ denotes summation.

Using the above surface smoothness formula, the surface smoothness $c$ of every collection point in each frame of point cloud data can be computed; that is, each collection point $i$ has a corresponding surface smoothness $c$, from which edge feature points and plane feature points are determined. Specifically, if $c > \text{threshold\_edge}$, i.e., greater than the edge threshold, the collection point is determined to be an edge feature point; if $c < \text{threshold\_plane}$, i.e., less than the plane threshold, the collection point is determined to be a plane feature point.
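A direct implementation of the smoothness formula and the threshold classification can be sketched as follows (the threshold values are placeholders, since the description leaves them as empirical values):

```python
import numpy as np

def surface_smoothness(scan_line: np.ndarray) -> np.ndarray:
    """Smoothness c of every point on one scan line (an Nx3 array of positions)."""
    n = len(scan_line)
    c = np.empty(n)
    for i in range(n):
        diff_sum = (scan_line[i] - scan_line).sum(axis=0)  # sum over j of (X_i - X_j); the j = i term is zero
        c[i] = np.linalg.norm(diff_sum) / (n * np.linalg.norm(scan_line[i]))
    return c

scan_line = np.random.rand(64, 3)              # one scan line with 64 collection points
THRESHOLD_EDGE, THRESHOLD_PLANE = 0.1, 0.01    # placeholder empirical thresholds
c = surface_smoothness(scan_line)
edge_points = scan_line[c > THRESHOLD_EDGE]    # high smoothness value: edge features
plane_points = scan_line[c < THRESHOLD_PLANE]  # low smoothness value: plane features
```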
Plane feature points and edge feature points can be determined in the above manner, but edge feature points may be misjudged. To address this, the position information corresponding to the edge feature points can be projected from three-dimensional coordinates onto a two-dimensional plane to obtain a two-dimensional image, and edge detection processing can be performed on the two-dimensional image to obtain filtered edge feature points.

Specifically, as shown in Figure 9, the position information of the edge feature points in three-dimensional coordinates can be projected onto a unified two-dimensional plane, i.e., converted from three-dimensional coordinates into the two-dimensional plane, and then screened with an image processing method (such as the Canny operator) to obtain more accurate edge feature points, thereby eliminating misjudged edge feature points.
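One plausible realization of this screening is sketched below, assuming a simple orthographic projection onto the x-y plane and OpenCV's Canny detector (the grid resolution and Canny thresholds are arbitrary illustrative choices):

```python
import numpy as np
import cv2

def filter_edge_points(edge_points: np.ndarray, resolution: float = 0.05) -> np.ndarray:
    """Keep only edge candidates that survive 2D Canny edge detection."""
    xy = edge_points[:, :2]                     # orthographic projection to 2D
    pix = ((xy - xy.min(axis=0)) / resolution).astype(int)
    img = np.zeros(pix.max(axis=0) + 1, dtype=np.uint8)
    img[pix[:, 0], pix[:, 1]] = 255             # rasterize the candidates
    edges = cv2.Canny(img, 50, 150)             # 2D edge detection
    keep = edges[pix[:, 0], pix[:, 1]] > 0      # candidate lies on a detected edge
    return edge_points[keep]
```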
When matching the plurality of sampling points of the first frame of point cloud data with the plurality of sampling points of the second frame of point cloud data to obtain the sampling point pairing information, the overlap ratio of the plane feature points and edge feature points of the two frames can also be computed, for example an overlap ratio of 60%; the overlap ratio is also part of the sampling point pairing information.
S103: Determine, based on the pose information of the first sampling point and the second sampling point, a pose deviation between the sampling points of the first frame of point cloud data and the sampling points of the second frame of point cloud data.

The first sampling point and the second sampling point represent the same object point in the target area. The distance from the first sampling point to its neighborhood plane or neighborhood edge in the second frame of point cloud data can first be determined; alternatively, the distance from the second sampling point to its neighborhood plane or neighborhood edge in the first frame of point cloud data can be determined. The pose deviation between the sampling points of the first frame of point cloud data and the sampling points of the second frame of point cloud data is then determined from this distance (the distance corresponding to the first sampling point, or the distance corresponding to the second sampling point).

Here, the neighborhood plane of the first sampling point in the second frame of point cloud data is a plane formed by a plurality of neighboring points adjacent to the first sampling point in the second frame of point cloud data; the neighborhood edge of the first sampling point in the second frame of point cloud data is an edge formed by a plurality of neighboring points adjacent to the first sampling point in the second frame of point cloud data.

In some embodiments, the neighborhood plane or neighborhood edge corresponding to each of the first and second sampling points can be determined using principal component analysis. Specifically, based on principal component analysis, the neighborhood plane or neighborhood edge of the first sampling point of the first frame of point cloud data in the second frame of point cloud data is determined; or, based on principal component analysis, the neighborhood plane or neighborhood edge of the second sampling point of the second frame of point cloud data in the first frame of point cloud data is determined.

Exemplarily, for a first frame of point cloud data A and a second frame of point cloud data B, for a first sampling point pointA1 in frame A, its N nearest neighbor points (e.g., the N closest collection points) can be found in the second frame of point cloud data: pointB1, pointB2, pointB3...pointBN.

Principal component analysis (PCA) is used to fit the N neighboring points in the second frame of point cloud data, and a threshold is used to decide whether the N neighbors form a plane/edge. If they do, the distance from pointA1 to the plane/edge is computed, denoted distance D, and this distance is used as the loss cost for iteratively optimizing the pose information of the collection points in frames A and B.
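A minimal PCA sketch under these assumptions follows: it eigen-decomposes the covariance of the N neighbors, uses eigenvalue ratios as the plane/edge test (the ratio thresholds are illustrative), and returns the point-to-plane or point-to-line distance D:

```python
import numpy as np

def pca_distance(query: np.ndarray, neighbors: np.ndarray):
    """Distance from `query` to the plane or edge fitted through `neighbors`, else None."""
    centroid = neighbors.mean(axis=0)
    eigvals, eigvecs = np.linalg.eigh(np.cov((neighbors - centroid).T))
    if eigvals[0] < 0.1 * eigvals[1]:      # one near-zero eigenvalue: a plane
        normal = eigvecs[:, 0]
        return abs(np.dot(query - centroid, normal))
    if eigvals[2] > 3.0 * eigvals[1]:      # one dominant eigenvalue: an edge (line)
        direction = eigvecs[:, 2]
        offset = query - centroid
        return np.linalg.norm(offset - np.dot(offset, direction) * direction)
    return None                            # neighbors form neither a plane nor an edge
```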
Based on the pose information of the first sampling point and the second sampling point, the pose deviation between the sampling points of the first frame of point cloud data and the sampling points of the second frame of point cloud data is determined. Specifically, the distance is used as the loss cost, and the pose information of the sampling points in the first and second frames of point cloud data is iteratively optimized to determine the pose deviation, where the iterative optimization refers to adjusting the pose information of the sampling points in the first and second frames of point cloud data to minimize the loss cost (the distance); the minimum can also be determined by the cost being less than a preset value.

In some embodiments, after each iteration, the pose information of the sampling points in the first and second frames of point cloud data can be projected into the world coordinate system according to the optimized pose information, to obtain the pose information of the sampling points in the world coordinate system. It is then judged whether the change in the pose information in the world coordinate system before and after optimization is less than a preset threshold; if so, the pose deviation is determined from the optimized pose information and the pose information of the sampling points in the first and second frames of point cloud data. That is, the pose information of the sampling points in the first frame of point cloud data is subtracted from the optimized pose information to obtain the pose deviation corresponding to the first frame of point cloud data, and the pose information of the sampling points in the second frame of point cloud data is subtracted from the optimized pose information to obtain the pose deviation corresponding to the second frame of point cloud data.
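As one plausible realization of this iterative optimization (not the only one the description admits), the pose adjustment can be posed as a small nonlinear least-squares problem over a 6-DoF correction, here with scipy; the rotation-vector parameterization and the use of the `pca_distance` matches from the previous sketch are assumptions:

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def refine_pose(points_a: np.ndarray, matches: list) -> np.ndarray:
    """Refine a 6-DoF correction (rotation vector + translation) for frame A.

    `matches` pairs an index into points_a with the (centroid, normal) of the
    PCA-fitted neighborhood plane found for that point in frame B."""
    def residuals(x):
        rot, t = Rotation.from_rotvec(x[:3]), x[3:]
        return [np.dot(rot.apply(points_a[i]) + t - centroid, normal)  # signed point-to-plane distance
                for i, (centroid, normal) in matches]

    sol = least_squares(residuals, x0=np.zeros(6))  # iterate until the loss cost is minimal
    return sol.x                                    # the pose deviation for frame A
```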
S104. Stitch the first frame of point cloud data and the second frame of point cloud data based on the pose deviation.
Specifically, the target pose information of the sampling points of the first frame of point cloud data and the second frame of point cloud data can be determined from the pose deviation; the target pose information is determined from the pose information of the sampling points of the frame of point cloud data and the pose deviation, e.g., obtained by subtracting the pose deviation from, or adding it to, the pose information. The first frame of point cloud data and the second frame of point cloud data are then projected into the world coordinate system according to the target pose information, completing the stitching of the first frame of point cloud data and the second frame of point cloud data.
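A minimal sketch of the stitching step under the same conventions as above: the optimized correction is applied to each frame's points before both frames are merged into one world-frame cloud; treating the pose deviation as a single rigid transform per frame is an assumption made for illustration.

```python
import numpy as np

def stitch_frames(points_a, points_b, T_corr_a, T_corr_b):
    """Apply each frame's pose correction, then merge both frames in the world frame."""
    def apply(points, T):
        homog = np.hstack([points, np.ones((len(points), 1))])
        return (homog @ T.T)[:, :3]
    return np.vstack([apply(points_a, T_corr_a), apply(points_b, T_corr_b)])
```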
It should be noted that the target area may include multiple groups of first and second frames of point cloud data, each group corresponding to different object points of the target area; each such group of first and second frames of point cloud data is stitched according to the point cloud data processing method provided above, whereby the point cloud data of a larger target area can be obtained.
In some embodiments, for a larger target area, since the target area may involve many frames of point cloud data, the multiple frames of point cloud data may be divided into multiple point cloud strips to improve data processing efficiency; the multiple point cloud strips are then matched into at least one strip pair according to the feature points of the strips, each strip pair including two point cloud strips. The two strips of a pair, say point cloud strip StripA and point cloud strip StripB, can be treated as the first frame of point cloud data and the second frame of point cloud data of the foregoing embodiments and processed with the point cloud data processing method described above, completing the stitching of point cloud strip StripA and point cloud strip StripB.
Specifically, the plane feature points and edge feature points of each point cloud strip are extracted, the similarity between the multiple point cloud strips is computed from the plane feature points and edge feature points, and the strips are matched into strip pairs according to the similarity, each strip pair including at least two point cloud strips.
Illustratively, for any two point cloud strips Strip, say stripA and stripB, the overlap ratio of their plane feature points and edge feature points is computed; if the overlap ratio exceeds a threshold, e.g., exceeds 60%, the two point cloud strips are considered matched as a pair, namely the strip pair Strip_pair(stripA, stripB).
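A hedged sketch of this pairing rule: the overlap ratio is approximated as the fraction of one strip's feature points that have a close counterpart in the other strip, tested with a KD-tree radius query; the radius and the 60% threshold are illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

def overlap_ratio(features_a, features_b, radius=0.5):
    """Fraction of strip A's feature points with a neighbor in strip B within `radius`."""
    dists, _ = cKDTree(features_b).query(features_a, distance_upper_bound=radius)
    return float(np.mean(np.isfinite(dists)))  # misses are reported as inf

def match_strip_pairs(strip_features, threshold=0.6):
    """Return index pairs (i, j) of strips whose feature overlap exceeds the threshold."""
    pairs = []
    for i in range(len(strip_features)):
        for j in range(i + 1, len(strip_features)):
            if overlap_ratio(strip_features[i], strip_features[j]) > threshold:
                pairs.append((i, j))
    return pairs
```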
It should be noted that the time periods corresponding to point cloud strip StripA and point cloud strip StripB are likewise discontinuous in time.
In some embodiments, to improve data processing efficiency while still ensuring the accuracy of the stitched data, a point cloud strip may be limited to a preset number of frames of point cloud data within a preset time period.
Illustratively, a preset time period of 2 s and a preset frame count of 20 frames of point cloud data may be chosen and stacked into one point cloud strip Strip. The sampling points within the strip are first projected into the world coordinate system to remove the blurring caused by motion-induced jitter, and then uniformly projected into the radar coordinate system of an intermediate time (i.e., the radar coordinate system at the 1 s mark); of course, they may also be projected to the start time or the end time of the strip. The data of the point cloud strip is thereby unified into one coordinate system and stacked into a denser point cloud.
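Putting the earlier sketches together, a strip can be assembled as below, reusing the hypothetical deskew_frame and reproject_to_radar helpers defined above; stacking 20 frames from a 2 s window into the mid-time radar frame follows the text, while everything else is illustrative.

```python
import numpy as np

def build_strip(frames, per_point_poses, T_lidar_to_body, T_body_world_mid):
    """Stack ~20 frames (~2 s) into one dense strip in the mid-time radar frame."""
    stacked = []
    for points, poses in zip(frames, per_point_poses):
        world = deskew_frame(points, T_lidar_to_body, poses)   # remove motion blur first
        stacked.append(reproject_to_radar(world, T_body_world_mid, T_lidar_to_body))
    return np.vstack(stacked)
```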
It should be noted that if the chosen preset time period is too short, a single frame of point cloud data contains too few sampling points and, since the lidar used in the embodiments of the present application performs non-repetitive scanning, two frames of point cloud data would be difficult to match with each other. Stacking the 20 frames within 2 seconds into one point cloud strip ensures that an accurate pose relationship between strips can be solved. Moreover, because the time span within a point cloud strip Strip is short, the pose information within that period can be regarded as accurate, so the pose information of the pose measurement device is used directly.
The point cloud data processing method provided by the above embodiments can eliminate the influence of the pose deviation, improve the accuracy of the point cloud data and eliminate ghosting, and is therefore applicable to fast, high-accuracy, large-scale three-dimensional reconstruction scenarios.
Illustratively, as shown in FIG. 10, the left part of FIG. 10 shows the image corresponding to the stitched point cloud data of an object point a, i.e., object point a exhibits ghosting; by using the point cloud data processing method provided by the embodiments of the present application, this ghosting can be eliminated, as shown in the right part of FIG. 10, thereby improving the accuracy of the point cloud data.
An embodiment of the present application further provides a control apparatus, which may be arranged in a movable platform. Specifically, as shown in FIG. 11, the control apparatus 400 includes a processor 401 and a memory 402; the memory 402 is used to store a computer program; and the processor 401 is used to execute the computer program and, when executing the computer program, implement the steps of any of the point cloud data processing methods provided by the embodiments of the present application.
The processor 401 may be a micro-controller unit (MCU), a central processing unit (CPU), a digital signal processor (DSP), or the like.
The memory 402 may be a Flash chip, a read-only memory (ROM) disk, an optical disk, a USB flash drive, a removable hard disk, or the like.
The processor is used to run a computer program stored in the memory and, when executing the computer program, implement any of the point cloud data processing methods provided by the embodiments of the present application.
Illustratively, the processor is used to run the computer program stored in the memory and, when executing the computer program, implement the following steps:

acquiring a first frame of point cloud data and a second frame of point cloud data obtained by scanning a target area with a lidar carried on a movable platform, each frame of the point cloud data including a plurality of sampling points obtained by scanning the target area within a preset time period, the time period in which the movable platform collects the first frame of point cloud data and the time period in which it collects the second frame of point cloud data being discontinuous in time;

matching the plurality of sampling points of the first frame of point cloud data with the plurality of sampling points of the second frame of point cloud data to obtain sampling point pairing information, the sampling point pairing information being used to indicate that a first sampling point of the first frame of point cloud data and a second sampling point of the second frame of point cloud data correspond to the same object point in the target area;

determining a pose deviation between the sampling points of the first frame of point cloud data and the sampling points of the second frame of point cloud data based on pose information of the first sampling point and the second sampling point;

stitching the first frame of point cloud data and the second frame of point cloud data based on the pose deviation.
In some embodiments, the lidar carried on the movable platform moves along a pre-planned route; the first frame of point cloud data is collected while the movable platform is on a first route segment of the pre-planned route, and the second frame of point cloud data is collected while the movable platform is on a second route segment of the pre-planned route.

In some embodiments, the first route segment is parallel to the second route segment.
In some embodiments, the sampling points include edge feature points and/or plane feature points, the edge feature points and plane feature points being determined from the surface smoothness of the sampling points of the point cloud data.

In some embodiments, the surface smoothness of a sampling point is determined from the sampling point and the other sampling points on the scan line on which the sampling point lies.

In some embodiments, the processor is used to implement: determining differences between the position information of each sampling point in the point cloud data and the position information of the other sampling points on the scan line on which the sampling point lies, and computing the 2-norm of the sum of the differences; computing the 2-norm of the position information of the sampling point; and determining the surface smoothness of the sampling points of the point cloud data from the 2-norm of the summed differences, the 2-norm of the sampling point, and the number of sampling points on the same scan line.

In some embodiments, the processor is used to implement: projecting the edge feature points onto a two-dimensional plane to obtain a two-dimensional image; and performing edge detection processing on the two-dimensional image to obtain filtered edge feature points.
In some embodiments, determining the pose deviation between the sampling points of the first frame of point cloud data and the sampling points of the second frame of point cloud data based on the pose information of the first sampling point and the second sampling point includes:

determining a distance from the first sampling point to a neighborhood plane or neighborhood edge of the first sampling point in the second frame of point cloud data, or determining a distance from the second sampling point to a neighborhood plane or neighborhood edge of the second sampling point in the first frame of point cloud data; and determining the pose deviation between the sampling points of the first frame of point cloud data and the sampling points of the second frame of point cloud data from the distance.

In some embodiments, the neighborhood plane of the first sampling point in the second frame of point cloud data is a plane formed by a plurality of neighboring points adjacent to the first sampling point in the second frame of point cloud data;

the neighborhood edge of the first sampling point in the second frame of point cloud data is an edge formed by a plurality of neighboring points adjacent to the first sampling point in the second frame of point cloud data.
In some embodiments, the processor is used to implement:

iteratively optimizing, with the distance as a loss cost, the pose information of the sampling points in the first frame of point cloud data and the second frame of point cloud data to determine the pose deviation, where the iterative optimization refers to adjusting the pose information of the sampling points in the first frame of point cloud data and the second frame of point cloud data so that the loss cost reaches a preset value.

In some embodiments, the processor is used to implement:

projecting, according to the optimized pose information, the pose information of the sampling points in the first frame of point cloud data and the second frame of point cloud data into the world coordinate system, to obtain the pose information of the sampling points in the world coordinate system; and, if the change of the pose information in the world coordinate system before and after the optimization is smaller than a preset threshold, determining the pose deviation from the optimized pose information and the pose information of the sampling points in the first frame of point cloud data and the second frame of point cloud data.

In some embodiments, the processor is used to implement:

determining, based on principal component analysis, the neighborhood plane or neighborhood edge corresponding to the first sampling point of the first frame of point cloud data in the second frame of point cloud data, or determining the neighborhood plane or neighborhood edge corresponding to the second sampling point of the second frame of point cloud data in the first frame of point cloud data.
In some embodiments, the scanning mode of the lidar includes non-repetitive scanning.

In some embodiments, the processor is used to implement:

acquiring the acquisition time of each sampling point in the first frame of point cloud data and the second frame of point cloud data; acquiring the pose information of the movable platform while collecting the first frame of point cloud data and the second frame of point cloud data, and the measurement frequency of the pose information; and determining, according to the measurement frequency and the acquisition time, the pose information corresponding to each sampling point in the point cloud data.

In some embodiments, the processor is used to implement:

acquiring the pose relationship between the radar coordinate system corresponding to the lidar and the world coordinate system corresponding to the movable platform; and converting the sampling points in the first frame of point cloud data and the second frame of point cloud data from the radar coordinate system to the world coordinate system according to the pose relationship.

In some embodiments, the processor is used to implement:

acquiring the time point corresponding to the first frame of point cloud data and the second frame of point cloud data, where the time point includes the start time, middle time or end time corresponding to the first frame of point cloud data and the second frame of point cloud data; acquiring the pose information corresponding to the time point; and converting, according to the pose information corresponding to the time point, each sampling point in the first frame of point cloud data and the second frame of point cloud data from the world coordinate system to the radar coordinate system.

In some embodiments, stitching the first frame of point cloud data and the second frame of point cloud data based on the pose deviation includes:

determining the target pose information of the sampling points of the first frame of point cloud data and the second frame of point cloud data according to the pose deviation; and projecting the first frame of point cloud data and the second frame of point cloud data into the world coordinate system according to the target pose information.
An embodiment of the present application further provides a computer-readable storage medium storing a computer program, the computer program including program instructions, and a processor executes the program instructions to implement the steps of any of the point cloud data processing methods provided by the above embodiments.
The computer-readable storage medium may be an internal storage unit of the movable platform of any of the foregoing embodiments, for example, a storage or memory of the UAV. The computer-readable storage medium may also be an external storage device of the UAV, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, or a flash card equipped on the UAV.
The above are only specific embodiments of the present application, but the scope of protection of the present application is not limited thereto. Any person skilled in the art can easily conceive of various equivalent modifications or replacements within the technical scope disclosed in the present application, and such modifications or replacements shall all fall within the scope of protection of the present application. Therefore, the scope of protection of the present application shall be subject to the scope of protection of the claims.

Claims (36)

  1. A method for processing point cloud data, wherein the method comprises:
    acquiring a first frame of point cloud data and a second frame of point cloud data obtained by scanning a target area with a lidar carried on a movable platform, each frame of the point cloud data comprising a plurality of sampling points obtained by scanning the target area within a preset time period, wherein the time period in which the movable platform collects the first frame of point cloud data and the time period in which it collects the second frame of point cloud data are discontinuous in time;
    matching the plurality of sampling points of the first frame of point cloud data with the plurality of sampling points of the second frame of point cloud data to obtain sampling point pairing information, wherein the sampling point pairing information is used to indicate that a first sampling point of the first frame of point cloud data and a second sampling point of the second frame of point cloud data correspond to a same object point in the target area;
    determining a pose deviation between the sampling points of the first frame of point cloud data and the sampling points of the second frame of point cloud data based on pose information of the first sampling point and the second sampling point; and
    stitching the first frame of point cloud data and the second frame of point cloud data based on the pose deviation.
  2. The method according to claim 1, wherein the lidar carried on the movable platform moves along a pre-planned route, the first frame of point cloud data is collected while the movable platform is on a first route segment of the pre-planned route, and the second frame of point cloud data is collected while the movable platform is on a second route segment of the pre-planned route.
  3. The method according to claim 2, wherein the first route segment is parallel to the second route segment.
  4. The method according to claim 1, wherein the sampling points comprise edge feature points and/or plane feature points, the edge feature points and plane feature points being determined from the surface smoothness of the sampling points of the point cloud data.
  5. The method according to claim 4, wherein the surface smoothness of a sampling point is determined from the sampling point and the other sampling points on the scan line on which the sampling point lies.
  6. The method according to claim 5, wherein the method comprises:
    determining differences between position information of each sampling point in the point cloud data and position information of the other sampling points on the scan line on which the sampling point lies, and computing the 2-norm of the sum of the differences;
    computing the 2-norm of the position information of the sampling point; and
    determining the surface smoothness of the sampling points of the point cloud data from the 2-norm of the summed differences, the 2-norm of the sampling point, and the number of sampling points on the same scan line.
  7. The method according to claim 4, wherein the method further comprises:
    projecting the edge feature points onto a two-dimensional plane to obtain a two-dimensional image; and
    performing edge detection processing on the two-dimensional image to obtain filtered edge feature points.
  8. The method according to claim 1, wherein the determining a pose deviation between the sampling points of the first frame of point cloud data and the sampling points of the second frame of point cloud data based on pose information of the first sampling point and the second sampling point comprises:
    determining a distance from the first sampling point to a neighborhood plane or neighborhood edge of the first sampling point in the second frame of point cloud data, or determining a distance from the second sampling point to a neighborhood plane or neighborhood edge of the second sampling point in the first frame of point cloud data; and
    determining the pose deviation between the sampling points of the first frame of point cloud data and the sampling points of the second frame of point cloud data from the distance.
  9. The method according to claim 8, wherein the neighborhood plane of the first sampling point in the second frame of point cloud data is a plane formed by a plurality of neighboring points adjacent to the first sampling point in the second frame of point cloud data; and
    the neighborhood edge of the first sampling point in the second frame of point cloud data is an edge formed by a plurality of neighboring points adjacent to the first sampling point in the second frame of point cloud data.
  10. The method according to claim 8, wherein the method comprises:
    iteratively optimizing, with the distance as a loss cost, the pose information of the sampling points in the first frame of point cloud data and the second frame of point cloud data to determine the pose deviation,
    wherein the iterative optimization refers to adjusting the pose information of the sampling points in the first frame of point cloud data and the second frame of point cloud data so that the loss cost reaches a preset value.
  11. The method according to claim 10, wherein the method comprises:
    projecting, according to the optimized pose information, the pose information of the sampling points in the first frame of point cloud data and the second frame of point cloud data into a world coordinate system, to obtain pose information of the sampling points in the world coordinate system; and
    if the change of the pose information in the world coordinate system before and after the optimization is smaller than a preset threshold, determining the pose deviation from the optimized pose information and the pose information of the sampling points in the first frame of point cloud data and the second frame of point cloud data.
  12. The method according to claim 8, wherein the method comprises:
    determining, based on principal component analysis, the neighborhood plane or neighborhood edge corresponding to the first sampling point of the first frame of point cloud data in the second frame of point cloud data, or determining the neighborhood plane or neighborhood edge corresponding to the second sampling point of the second frame of point cloud data in the first frame of point cloud data.
  13. The method according to claim 1, wherein the scanning mode of the lidar comprises non-repetitive scanning.
  14. The method according to any one of claims 1-13, wherein the method comprises:
    acquiring the acquisition time of each sampling point in the first frame of point cloud data and the second frame of point cloud data;
    acquiring pose information of the movable platform while collecting the first frame of point cloud data and the second frame of point cloud data, and a measurement frequency of the pose information; and
    determining, according to the measurement frequency and the acquisition time, the pose information corresponding to each sampling point in the point cloud data.
  15. The method according to any one of claims 1-13, wherein the method further comprises:
    acquiring a pose relationship between a radar coordinate system corresponding to the lidar and a world coordinate system corresponding to the movable platform; and
    converting the sampling points in the first frame of point cloud data and the second frame of point cloud data from the radar coordinate system to the world coordinate system according to the pose relationship.
  16. The method according to claim 15, wherein the method further comprises:
    acquiring a time point corresponding to the first frame of point cloud data and the second frame of point cloud data, wherein the time point comprises a start time, a middle time, or an end time corresponding to the first frame of point cloud data and the second frame of point cloud data;
    acquiring pose information corresponding to the time point; and
    converting, according to the pose information corresponding to the time point, each sampling point in the first frame of point cloud data and the second frame of point cloud data from the world coordinate system to the radar coordinate system.
  17. The method according to any one of claims 1-13, wherein the stitching the first frame of point cloud data and the second frame of point cloud data based on the pose deviation comprises:
    determining target pose information of the sampling points of the first frame of point cloud data and the second frame of point cloud data according to the pose deviation; and
    projecting the first frame of point cloud data and the second frame of point cloud data into a world coordinate system according to the target pose information.
  18. A movable platform, wherein the movable platform comprises a lidar, and the movable platform further comprises a processor and a memory;
    the lidar is used to scan a target area to obtain point cloud data;
    the memory is used to store a computer program; and
    the processor is used to execute the computer program and, when executing the computer program, implement:
    acquiring a first frame of point cloud data and a second frame of point cloud data obtained by scanning the target area with the lidar carried on the movable platform, each frame of the point cloud data comprising a plurality of sampling points obtained by scanning the target area within a preset time period, wherein the time period in which the movable platform collects the first frame of point cloud data and the time period in which it collects the second frame of point cloud data are discontinuous in time;
    matching the plurality of sampling points of the first frame of point cloud data with the plurality of sampling points of the second frame of point cloud data to obtain sampling point pairing information, wherein the sampling point pairing information is used to indicate that a first sampling point of the first frame of point cloud data and a second sampling point of the second frame of point cloud data correspond to a same object point in the target area;
    determining a pose deviation between the sampling points of the first frame of point cloud data and the sampling points of the second frame of point cloud data based on pose information of the first sampling point and the second sampling point; and
    stitching the first frame of point cloud data and the second frame of point cloud data based on the pose deviation.
  19. The movable platform according to claim 18, wherein the lidar carried on the movable platform moves along a pre-planned route, the first frame of point cloud data is collected while the movable platform is on a first route segment of the pre-planned route, and the second frame of point cloud data is collected while the movable platform is on a second route segment of the pre-planned route.
  20. The movable platform according to claim 19, wherein the first route segment is parallel to the second route segment.
  21. The movable platform according to claim 18, wherein the sampling points comprise edge feature points and/or plane feature points, the edge feature points and plane feature points being determined from the surface smoothness of the sampling points of the point cloud data.
  22. The movable platform according to claim 21, wherein the surface smoothness of a sampling point is determined from the sampling point and the other sampling points on the scan line on which the sampling point lies.
  23. The movable platform according to claim 22, wherein the processor is used to:
    determine differences between position information of each sampling point in the point cloud data and position information of the other sampling points on the scan line on which the sampling point lies, and compute the 2-norm of the sum of the differences;
    compute the 2-norm of the position information of the sampling point; and
    determine the surface smoothness of the sampling points of the point cloud data from the 2-norm of the summed differences, the 2-norm of the sampling point, and the number of sampling points on the same scan line.
  24. The movable platform according to claim 21, wherein the processor is further used to:
    project the edge feature points onto a two-dimensional plane to obtain a two-dimensional image; and
    perform edge detection processing on the two-dimensional image to obtain filtered edge feature points.
  25. The movable platform according to claim 18, wherein the determining a pose deviation between the sampling points of the first frame of point cloud data and the sampling points of the second frame of point cloud data based on pose information of the first sampling point and the second sampling point comprises:
    determining a distance from the first sampling point to a neighborhood plane or neighborhood edge of the first sampling point in the second frame of point cloud data, or determining a distance from the second sampling point to a neighborhood plane or neighborhood edge of the second sampling point in the first frame of point cloud data; and
    determining the pose deviation between the sampling points of the first frame of point cloud data and the sampling points of the second frame of point cloud data from the distance.
  26. The movable platform according to claim 25, wherein the neighborhood plane of the first sampling point in the second frame of point cloud data is a plane formed by a plurality of neighboring points adjacent to the first sampling point in the second frame of point cloud data; and
    the neighborhood edge of the first sampling point in the second frame of point cloud data is an edge formed by a plurality of neighboring points adjacent to the first sampling point in the second frame of point cloud data.
  27. The movable platform according to claim 25, wherein the processor is used to:
    iteratively optimize, with the distance as a loss cost, the pose information of the sampling points in the first frame of point cloud data and the second frame of point cloud data to determine the pose deviation,
    wherein the iterative optimization refers to adjusting the pose information of the sampling points in the first frame of point cloud data and the second frame of point cloud data so that the loss cost reaches a preset value.
  28. The movable platform according to claim 27, wherein the processor is used to:
    project, according to the optimized pose information, the pose information of the sampling points in the first frame of point cloud data and the second frame of point cloud data into a world coordinate system, to obtain pose information of the sampling points in the world coordinate system; and
    if the change of the pose information in the world coordinate system before and after the optimization is smaller than a preset threshold, determine the pose deviation from the optimized pose information and the pose information of the sampling points in the first frame of point cloud data and the second frame of point cloud data.
  29. The movable platform according to claim 25, wherein the processor is used to:
    determine, based on principal component analysis, the neighborhood plane or neighborhood edge corresponding to the first sampling point of the first frame of point cloud data in the second frame of point cloud data, or determine the neighborhood plane or neighborhood edge corresponding to the second sampling point of the second frame of point cloud data in the first frame of point cloud data.
  30. The movable platform according to claim 18, wherein the scanning mode of the lidar comprises non-repetitive scanning.
  31. The movable platform according to any one of claims 18-30, wherein the processor is used to:
    acquire the acquisition time of each sampling point in the first frame of point cloud data and the second frame of point cloud data;
    acquire pose information of the movable platform while collecting the first frame of point cloud data and the second frame of point cloud data, and a measurement frequency of the pose information; and
    determine, according to the measurement frequency and the acquisition time, the pose information corresponding to each sampling point in the point cloud data.
  32. The movable platform according to any one of claims 18-30, wherein the processor is further used to:
    acquire a pose relationship between a radar coordinate system corresponding to the lidar and a world coordinate system corresponding to the movable platform; and
    convert the sampling points in the first frame of point cloud data and the second frame of point cloud data from the radar coordinate system to the world coordinate system according to the pose relationship.
  33. The movable platform according to claim 32, wherein the processor is further used to:
    acquire a time point corresponding to the first frame of point cloud data and the second frame of point cloud data, wherein the time point comprises a start time, a middle time, or an end time corresponding to the first frame of point cloud data and the second frame of point cloud data;
    acquire pose information corresponding to the time point; and
    convert, according to the pose information corresponding to the time point, each sampling point in the first frame of point cloud data and the second frame of point cloud data from the world coordinate system to the radar coordinate system.
  34. The movable platform according to any one of claims 18-30, wherein the stitching the first frame of point cloud data and the second frame of point cloud data based on the pose deviation comprises:
    determining target pose information of the sampling points of the first frame of point cloud data and the second frame of point cloud data according to the pose deviation; and
    projecting the first frame of point cloud data and the second frame of point cloud data into a world coordinate system according to the target pose information.
  35. A control apparatus, wherein the control apparatus is used to acquire point cloud data collected by a lidar;
    the control apparatus comprises a processor and a memory;
    the memory is used to store a computer program; and
    the processor is used to execute the computer program and, when executing the computer program, implement the method for processing point cloud data according to any one of claims 1-17.
  36. A computer-readable storage medium, wherein the computer-readable storage medium stores a computer program which, when executed by a processor, causes the processor to implement the steps of the method for processing point cloud data according to any one of claims 1-17.
PCT/CN2021/091780 2021-04-30 2021-04-30 Point cloud data processing method, device and storage medium WO2022227096A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/091780 WO2022227096A1 (zh) 2021-04-30 2021-04-30 Point cloud data processing method, device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/091780 WO2022227096A1 (zh) 2021-04-30 2021-04-30 Point cloud data processing method, device and storage medium

Publications (1)

Publication Number Publication Date
WO2022227096A1 true WO2022227096A1 (zh) 2022-11-03

Family

ID=83847597

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/091780 WO2022227096A1 (zh) 2021-04-30 2021-04-30 Point cloud data processing method, device and storage medium

Country Status (1)

Country Link
WO (1) WO2022227096A1 (zh)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108230379A (zh) * 2017-12-29 2018-06-29 百度在线网络技术(北京)有限公司 用于融合点云数据的方法和装置
CN110930495A (zh) * 2019-11-22 2020-03-27 哈尔滨工业大学(深圳) 基于多无人机协作的icp点云地图融合方法、系统、装置及存储介质
CN111080682A (zh) * 2019-12-05 2020-04-28 北京京东乾石科技有限公司 点云数据的配准方法及装置
CN112348897A (zh) * 2020-11-30 2021-02-09 上海商汤临港智能科技有限公司 位姿确定方法及装置、电子设备、计算机可读存储介质
WO2021062587A1 (en) * 2019-09-30 2021-04-08 Beijing Voyager Technology Co., Ltd. Systems and methods for automatic labeling of objects in 3d point clouds



Legal Events

Date Code Title Description
121 Ep: the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 21938569; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: PCT application non-entry in European phase (Ref document number: 21938569; Country of ref document: EP; Kind code of ref document: A1)