CN111699410A - Point cloud processing method, device and computer readable storage medium - Google Patents

Point cloud processing method, device and computer readable storage medium Download PDF

Info

Publication number
CN111699410A
CN111699410A (application CN201980012171.7A)
Authority
CN
China
Prior art keywords
dimensional point
point cloud
frame
coordinate system
height value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201980012171.7A
Other languages
Chinese (zh)
Other versions
CN111699410B (en)
Inventor
郑杨杨
刘晓洋
张晓炜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Zhuoyu Technology Co ltd
Original Assignee
SZ DJI Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SZ DJI Technology Co Ltd filed Critical SZ DJI Technology Co Ltd
Publication of CN111699410A publication Critical patent/CN111699410A/en
Application granted granted Critical
Publication of CN111699410B publication Critical patent/CN111699410B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/02Systems using the reflection of electromagnetic waves other than radio waves
    • G01S17/06Systems determining position data of a target
    • G01S17/08Systems determining position data of a target for measuring distance only
    • G01S17/88Lidar systems specially adapted for specific applications
    • G01S17/93Lidar systems specially adapted for specific applications for anti-collision purposes
    • G01S17/931Lidar systems specially adapted for specific applications for anti-collision purposes of land vehicles
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S7/497Means for monitoring or calibrating

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Electromagnetism (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Optical Radar Systems And Details Thereof (AREA)
  • Image Processing (AREA)

Abstract

Embodiments of the invention provide a point cloud processing method, a device, and a computer-readable storage medium. The method includes: acquiring a multi-frame three-dimensional point cloud containing a target area; preprocessing the multi-frame three-dimensional point cloud; determining a height value correction parameter of the multi-frame three-dimensional point cloud according to the preprocessed point cloud and a preset correction model; and correcting the height values of the multi-frame three-dimensional point cloud according to the height value correction parameter, so as to correct the identification of the target area. By correcting the multi-frame three-dimensional point cloud, the embodiments alleviate the surface blurring caused by timing differences between the accumulated sparse frames, improve the identification accuracy of the target area, and reconstruct a high-quality three-dimensional scene.

Description

Point cloud processing method, device and computer readable storage medium
Technical Field
Embodiments of the invention relate to the field of automatic driving, and in particular to a point cloud processing method, a point cloud processing device, and a computer-readable storage medium.
Background
Lidar is one of the main sensors used in the field of three-dimensional scene reconstruction. Based on the principle of light reflection, it generates sparse point clouds of a three-dimensional scene in real time, and the three-dimensional scene at the current position is then reconstructed by fusing multiple frames of these sparse point clouds.
Because a single frame of laser point cloud is generally sparse, existing methods for reconstructing a three-dimensional scene from laser point clouds must accumulate multiple frames of point clouds over a period of time and fuse them in time sequence before a three-dimensional scene of adequate quality can be reconstructed. In an automatic driving system, however, the vehicle-mounted lidar moves with the vehicle, and vehicle positioning errors cause large surface jitter after the accumulated frames are fused. As a result, the identification accuracy of the target area is low, and the reconstruction accuracy of the three-dimensional scene is less than ideal. In particular, in a three-dimensional reconstruction of the ground, short obstacles may be missed or falsely detected.
Disclosure of Invention
Embodiments of the invention provide a point cloud processing method, a point cloud processing device, and a computer-readable storage medium that improve the identification accuracy of a target area and allow a high-quality three-dimensional scene to be reconstructed.
The first aspect of the embodiments of the present invention provides a method for processing a point cloud, including:
acquiring a multi-frame three-dimensional point cloud containing a target area;
preprocessing the multi-frame three-dimensional point cloud;
determining a height value correction parameter of the multi-frame three-dimensional point cloud according to the pre-processed multi-frame three-dimensional point cloud and a preset correction model;
and correcting the height value of the multi-frame three-dimensional point cloud according to the height value correction parameter so as to correct the identification of the target area.
A second aspect of an embodiment of the present invention is to provide a point cloud processing system, including: a detection device, a memory, and a processor;
the detection equipment is used for detecting multi-frame three-dimensional point cloud containing a target area;
the memory is used for storing program code; the processor invokes the program code and, when the program code is executed, is configured to:
acquiring a multi-frame three-dimensional point cloud containing a target area;
preprocessing the multi-frame three-dimensional point cloud;
determining a height value correction parameter of the multi-frame three-dimensional point cloud according to the pre-processed multi-frame three-dimensional point cloud and a preset correction model;
and correcting the height value of the multi-frame three-dimensional point cloud according to the height value correction parameter so as to correct the identification of the target area.
A third aspect of an embodiment of the present invention is to provide a movable platform, including: a body, a power system, and the point cloud processing system of the second aspect.
A fourth aspect of embodiments of the present invention is to provide a computer-readable storage medium, on which a computer program is stored, the computer program being executed by a processor to implement the method of the first aspect.
In the point cloud processing method, device, and computer-readable storage medium provided by this embodiment, a multi-frame three-dimensional point cloud containing a target area is obtained; the multi-frame three-dimensional point cloud is preprocessed; a height value correction parameter of the multi-frame three-dimensional point cloud is determined according to the preprocessed point cloud and a preset correction model; and the height values of the multi-frame three-dimensional point cloud are corrected according to the height value correction parameter, so as to correct the identification of the target area. Because the correction model determines a height value correction parameter for correcting the height values of the multi-frame three-dimensional point cloud, the identification accuracy of the target area improves once the height values are corrected according to this parameter.
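The four claimed steps can be sketched as a minimal pipeline. This is only an illustrative sketch: the correction model and noise filter are hypothetical placeholders standing in for the patent's preset correction model and preprocessing, and all function names are assumptions, not part of the disclosure.

```python
import numpy as np

def correct_point_cloud(frames, correction_model, noise_filter):
    """Sketch of the claimed four-step pipeline.

    frames: list of (N_i, 3) arrays, one per accumulated frame, with
            columns (x, y, z) in the local coordinate system.
    """
    # Step 1: acquire the multi-frame three-dimensional point cloud.
    cloud = np.vstack(frames)
    # Step 2: preprocess, e.g. keep only points belonging to the target area.
    cloud = cloud[noise_filter(cloud)]
    # Step 3: the preset correction model yields the height value
    # correction parameter (modelled here as a per-point z offset).
    dz = correction_model(cloud)
    # Step 4: correct the height values to refine target-area recognition.
    cloud[:, 2] += dz
    return cloud

# Toy usage: two jittery "ground" frames, flattened toward their mean height.
frames = [np.array([[0.0, 0.0, 0.1]]), np.array([[1.0, 0.0, -0.1]])]
keep_all = lambda c: np.ones(len(c), dtype=bool)
flatten = lambda c: c[:, 2].mean() - c[:, 2]   # stand-in correction model
out = correct_point_cloud(frames, flatten, keep_all)
```

The stand-in model simply pulls each height toward the mean; the patent's actual correction model is defined by the later embodiments.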
Drawings
To illustrate the technical solutions in the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below show only some embodiments of the present invention; those skilled in the art can derive other drawings from them without inventive effort.
Fig. 1 is a flowchart of a point cloud processing method according to an embodiment of the present invention;
fig. 2 is a schematic diagram of an application scenario provided in an embodiment of the present invention;
FIG. 3 is a flow chart of a method for processing a point cloud according to another embodiment of the present invention;
FIG. 4 is a flowchart of a point cloud processing method according to another embodiment of the present invention;
FIG. 5 is a flowchart of a point cloud processing method according to another embodiment of the present invention;
FIG. 6 is an effect diagram before correction of ground point cloud;
FIG. 7 is a diagram illustrating the effect of the method of the present invention after correcting the ground point cloud;
FIG. 8 is a block diagram of a system for processing a point cloud according to an embodiment of the present invention;
FIG. 9 is a block diagram of a moveable platform according to an embodiment of the present invention.
Reference numerals:
21: a vehicle; 22: a detection device; 23: a preceding vehicle;
80: a processing system for the point cloud; 81: a detection device; 82: a memory; 83: a processor;
90: a movable platform; 91: a body; 92: a power system; 93: a processing system of the point cloud.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It will be understood that when an element is referred to as being "secured to" another element, it can be directly on the other element or intervening elements may also be present. When a component is referred to as being "connected" to another component, it can be directly connected to the other component or intervening components may also be present.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
Some embodiments of the invention are described in detail below with reference to the accompanying drawings. The embodiments described below and the features of the embodiments can be combined with each other without conflict.
The embodiment of the invention provides a point cloud processing method. The method can be applied to vehicles, such as unmanned vehicles and vehicles with Advanced Driver Assistance Systems (ADAS). It can also be applied to an unmanned aerial vehicle, for example one equipped with a detection device for acquiring point cloud data. The method can further be applied to real-time three-dimensional reconstruction of the ground. Ground reconstruction matters because the point cloud scanned by the lidar contains mostly ground points, and these ground points affect the subsequent classification, identification, and tracking of obstacle point clouds. In a typical application scenario, the area in front of a vehicle contains a ground area, other vehicles, buildings, trees, fences, pedestrians, and the like. The bottoms of the wheels of a preceding vehicle are in contact with the ground; in other embodiments the area in front of the vehicle may contain objects such as traffic signs, whose bottoms are also in contact with the ground. When identifying objects such as preceding vehicles and traffic signs, the sparsity of a single frame of laser point cloud forces existing reconstruction methods to accumulate multiple frames over a period of time and fuse them in time sequence before a three-dimensional scene of adequate quality can be reconstructed.
However, in an automatic driving system the vehicle-mounted lidar moves with the vehicle, and vehicle positioning errors make the same surface jitter substantially along the z axis after the accumulated frames are fused. The reconstruction accuracy then becomes less than ideal: ground points at the bottom of a preceding vehicle and/or at the bottom of a traffic sign are easily misidentified as three-dimensional points of the vehicle or sign, or the bottom points of the vehicle and/or sign are missed. Therefore, when vehicles, traffic signs, buildings, trees, fences, pedestrians, and the like are identified in the three-dimensional point cloud, the ground point cloud must first be identified and filtered out. Existing ground point cloud identification methods have low accuracy, so the ground point cloud is identified with errors, which in turn causes false detection or missed detection of obstacles, especially short obstacles. The point cloud processing method provided by the embodiment of the invention corrects the point cloud, reducing the negative influence of multi-frame accumulation and yielding a better result.
The embodiment of the invention provides a point cloud processing method. Fig. 1 is a flowchart of a point cloud processing method according to an embodiment of the present invention. As shown in fig. 1, the method in this embodiment may include:
step S101, obtaining a multi-frame three-dimensional point cloud containing a target area.
In the embodiment of the invention, the multi-frame three-dimensional point cloud is under a local coordinate system.
In an optional implementation, obtaining the multi-frame three-dimensional point cloud containing the target area means directly obtaining the multi-frame three-dimensional point cloud in the local coordinate system. The local coordinate system is a coordinate system established with, as its origin, the carrier carrying the detection device that detects the multi-frame three-dimensional point cloud, for example a coordinate system established with the vehicle as the origin. The carrier may be a vehicle or an unmanned aerial vehicle; the invention is not limited in this respect.
In another optional implementation, obtaining a multi-frame three-dimensional point cloud including a target region includes: acquiring a multi-frame three-dimensional point cloud containing a target area under a coordinate system of detection equipment; and converting the three-dimensional point cloud detected by the detection equipment into the local coordinate system according to the conversion relation between the coordinate system of the detection equipment and the local coordinate system. Optionally, the obtaining of the multi-frame three-dimensional point cloud including the target area under the coordinate system of the detection device includes: and acquiring three-dimensional point cloud which is detected by detection equipment carried on the carrier and contains a target area around the carrier.
Specifically, as shown in fig. 2, a detection device 22 is disposed on the vehicle 21, and the detection device 22 may be a binocular stereo camera, a TOF camera and/or a laser radar. For example, during the driving of the vehicle 21, the driving direction of the vehicle 21 is the direction indicated by the arrow in fig. 2, and the detection device 22 detects the three-dimensional point cloud of the environmental information around the vehicle 21 in real time. The detection device 22 is exemplified by a laser radar, when a laser beam emitted by the laser radar irradiates an object surface, the object surface reflects the laser beam, and the laser radar can determine information such as the direction and distance of the object relative to the laser radar according to the laser beam reflected by the object surface. If the laser beam emitted by the laser radar scans according to a certain track, for example, 360-degree rotation scanning, a large number of laser points are obtained, and thus laser point cloud data, i.e., three-dimensional point cloud, of the object can be formed.
The three-dimensional point cloud obtained in step S101 consists of N consecutive frames of sparse point cloud data accumulated within the current time window.
Alternatively, the target area may be an object having a flat surface. The embodiment of the present invention is described by taking the target area as a ground area, but the present invention is not limited to the ground area, and the target area may also be an object such as a wall surface or a desktop, and the present invention is not limited to this. The method of the embodiment of the invention can be also applied to the identification of objects with flat surfaces, such as walls or desktops.
And S102, preprocessing a plurality of frames of three-dimensional point clouds.
Because the multi-frame three-dimensional point cloud includes point clouds or noise points in non-target areas, the multi-frame three-dimensional point cloud needs to be preprocessed to filter out the point clouds or noise points in the non-target areas.
Optionally, the preprocessing is performed on the multi-frame three-dimensional point cloud, and includes: and removing noise points in the multi-frame three-dimensional point cloud, wherein the removed noise points refer to three-dimensional points which do not belong to the target area.
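The embodiment does not fix a particular denoising method here. As one common choice for removing stray points (an illustrative assumption, not the patent's method), a small statistical outlier filter over nearest-neighbour distances could look like:

```python
import numpy as np

def remove_outliers(cloud, k=3, std_ratio=1.0):
    """Drop points whose mean distance to their k nearest neighbours is far
    above the cloud-wide average (a simple statistical outlier filter)."""
    d = np.linalg.norm(cloud[:, None, :] - cloud[None, :, :], axis=2)
    d.sort(axis=1)                          # column 0 is the distance to self
    mean_knn = d[:, 1:k + 1].mean(axis=1)
    keep = mean_knn <= mean_knn.mean() + std_ratio * mean_knn.std()
    return cloud[keep]

# Four tightly clustered points and one far-away stray point.
cloud = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0],
                  [0.0, 0.1, 0.0], [0.1, 0.1, 0.0],
                  [50.0, 50.0, 50.0]])
clean = remove_outliers(cloud)
```

The O(n²) distance matrix is fine for a sketch; a real implementation would use a spatial index over the accumulated frames.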
And S103, determining a height value correction parameter of the multi-frame three-dimensional point cloud according to the pre-processed multi-frame three-dimensional point cloud and a preset correction model.
Specifically, the preprocessed multi-frame three-dimensional point cloud is input into a preset correction model, and the preset correction model outputs a height value correction parameter of the multi-frame three-dimensional point cloud.
And step S104, correcting the height value of the multi-frame three-dimensional point cloud according to the height value correction parameter so as to correct the identification of the target area.
In this embodiment, assume that the three-dimensional coordinates of a three-dimensional point in the three-dimensional point cloud are (x_i, y_i, z_i), where x_i, y_i and z_i are the coordinate values of the point along the X, Y and Z directions of the local coordinate system, respectively; the height value is the coordinate value of the point along the Z direction of the local coordinate system. The local coordinate system is a coordinate system established with, as its origin, the carrier on which the detection device that detects the multi-frame three-dimensional point cloud is mounted, for example a coordinate system established with the vehicle as the origin.
Specifically, because misidentification between the ground area and other objects in the lidar-scanned three-dimensional point cloud is mainly caused by height value errors in the ground area, correcting the height values of the multi-frame three-dimensional point cloud with the height value correction parameter corrects the identification of the ground area, improves the identification accuracy of the ground, and enables three-dimensional reconstruction of the ground. Continuing the exemplary application scenario above, the area in front of the vehicle 21 includes a ground area, other vehicles, buildings, trees, fences, pedestrians, and the like. As shown in fig. 2, the bottoms of the wheels of the preceding vehicle 23 are in contact with the ground; in other embodiments, the area in front of the vehicle 21 may contain an object such as a traffic sign, whose bottom is also in contact with the ground. When identifying objects such as the preceding vehicle 23 and traffic signs, an insufficiently accurate height value for the ground area makes it easy to misidentify ground points at the bottom of the preceding vehicle 23 and/or at the bottom of a traffic sign as three-dimensional points of the vehicle or the sign. After the ground area is corrected with the height value correction parameter, the ground points at the bottom of the preceding vehicle 23 and/or the traffic sign can be correctly distinguished from the three-dimensional points of the non-ground area, that is, from the three-dimensional points of the preceding vehicle 23 or the traffic sign.
The embodiment obtains a multi-frame three-dimensional point cloud containing a target area; preprocessing a plurality of frames of three-dimensional point clouds; determining a height value correction parameter of the multi-frame three-dimensional point cloud according to the pre-processed multi-frame three-dimensional point cloud and a preset correction model; and correcting the height value of the multi-frame three-dimensional point cloud according to the height value correction parameter so as to correct the identification of the target area. The correction model can determine a height value correction parameter for correcting the height values of the multi-frame three-dimensional point clouds, so that the recognition accuracy of the target area can be improved after the height values of the multi-frame three-dimensional point clouds are corrected according to the height value correction parameter.
The embodiment of the invention provides a point cloud processing method. Fig. 3 is a flowchart of a point cloud processing method according to another embodiment of the present invention. As shown in fig. 3, on the basis of the embodiment shown in fig. 1, the preprocessing of the multi-frame three-dimensional point cloud in this embodiment may project the lidar-scanned three-dimensional point cloud onto the XOY plane of a world coordinate system and then judge, from the height range of the three-dimensional points mapped into each grid of the XOY plane, whether the points in a grid belong to the ground area. The method specifically includes the following steps:
step S301, determining a height map according to the height values of the multi-frame three-dimensional point cloud, wherein the determined height map comprises a plurality of grids.
Optionally, determining a height map according to the height values of the multiple frames of three-dimensional point clouds includes: determining a target plane under a world coordinate system; projecting multi-frame three-dimensional point clouds under the local coordinate system to a target plane according to a conversion relation between the local coordinate system and a world coordinate system; and determining a height map according to the height value of the multi-frame three-dimensional point cloud projected in the target plane. Specifically, the right-hand coordinate system with the Z-axis facing vertically downward is a world coordinate system, and the target plane may be an XOY plane divided into a plurality of square grids of the same size in the world coordinate system. Similarly, a local coordinate system with a vertical downward Z axis is established by taking a vehicle as an origin, X, Y, Z axes of the local coordinate system and X, Y, Z axes of a world coordinate system are respectively aligned, and if n frames of sparse point clouds need to be accumulated to reconstruct the ground, a height map of the point clouds can be obtained by projecting the n frames of accumulated point clouds onto an XOY plane under the world coordinate system.
Specifically, each three-dimensional point of the point cloud in the local coordinate system is projected into the world coordinate system according to the conversion relation between the local coordinate system and the world coordinate system. For example, let j denote a three-dimensional point of the three-dimensional point cloud, let p_j^L denote its position in the local coordinate system, and let p_j^W denote its position after conversion into the world coordinate system. If the conversion relation (rotation) between the local coordinate system and the world coordinate system is R, and the three-dimensional position of the lidar in the world coordinate system, i.e. the translation vector, is t, then the formula

p_j^W = R · p_j^L + t

gives the position of point j converted into the world coordinate system, from which the projected point of point j in the target plane can be calculated.
Similarly, the projection points in the target plane of the three-dimensional points other than point j can be determined, and the height map is determined from the height values of point j and the other three-dimensional points projected into the target plane.
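Under the stated convention — aligned axes, conversion relation R, and translation vector t equal to the lidar position — the projection of the points into the world coordinate system and their binning on the XOY target plane can be sketched as follows. The 0.2 m cell size anticipates the later embodiment, and all names and values are illustrative:

```python
import numpy as np

def local_to_world(points_local, R, t):
    """Apply p_world = R @ p_local + t row-wise to an (N, 3) array."""
    return points_local @ R.T + t

def height_map(points_world, cell=0.2):
    """Bin world-frame points on the XOY plane; keep z as the height value."""
    heights = {}
    for x, y, z in points_world:
        key = (int(np.floor(x / cell)), int(np.floor(y / cell)))
        heights.setdefault(key, []).append(z)
    return heights   # grid cell -> list of height values

# Axes of the two systems are aligned, so R is the identity and t is the
# lidar position in the world coordinate system (illustrative values).
R = np.eye(3)
t = np.array([10.0, 5.0, 0.0])
pts_local = np.array([[0.05, 0.05, 1.2], [0.05, 0.11, 1.3]])
hm = height_map(local_to_world(pts_local, R, t))
```

Both sample points fall into the same 0.2 m grid cell, so the resulting height map holds both height values under one cell key.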
And step S302, determining a rough target area in the height map according to a preset target area height value.
In some embodiments, the preset target area height value may be a preset ground area height value, which can be estimated from the height of the vehicle in the local coordinate system: assuming the maximum height value of the vehicle is z_1 and the overall height of the vehicle is 1.5 m, z_1 − 1.5 gives a preliminary ground area height value, from which an approximate ground area can be determined in the height map. The target area determined here is only the rough grid range in which the target area lies, divided out of the height map; it is not precise and may contain three-dimensional points of other objects, which must be filtered out by subsequent processing.
Step S303, calculating a difference between the maximum height value and the minimum height value in the same grid in which the target region is located.
Assume that after projection w three-dimensional points are mapped into a certain grid of the height map, and that among the height values of these w points the maximum is w_h and the minimum is w_l. Computing w_h − w_l gives the difference between the maximum and minimum height values in the grid.
And step S304, determining the grids of which the difference is lower than the difference threshold value and the distance between the difference and the preset target area height value is smaller than the preset distance.
Suppose w_h − w_l is below the difference threshold and (w_h − w_l) − (z_1 − 0.5) is smaller than the preset distance; the grid corresponding to these three-dimensional points is then marked. For the specific marking method, reference may be made to marking methods in the prior art, for example marking with different colors; the invention is not specifically limited here.
And 305, removing the three-dimensional point cloud outside the grid, wherein the difference value is lower than the difference threshold value and the distance between the three-dimensional point cloud and the preset target area height value is smaller than the preset distance.
After the above steps have marked the grids whose w_h − w_l is below the difference threshold and for which (w_h − w_l) − (z_1 − 0.5) is smaller than the preset distance, the unmarked grids in the approximate target area are removed; the points in the unmarked grids can be regarded as non-ground point clouds or noise points. This removes the three-dimensional point cloud outside the grids whose difference is below the difference threshold and whose distance from the preset target area height value is smaller than the preset distance, completing the initial identification of the target area. The identified target area then needs to be further corrected to improve the identification accuracy.
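Steps S302–S305 can be sketched as follows. The concrete threshold values are illustrative assumptions (the text leaves the difference threshold and preset distance unspecified), and the distance condition — comparing a grid's height against the preset ground area height value — is one possible reading of the marking rule:

```python
def mark_ground_grids(heights, ground_height, diff_threshold=0.05, max_dist=0.3):
    """heights: dict mapping a grid cell to the list of height values of the
    three-dimensional points that project into it (as in a height map)."""
    marked = set()
    for cell, zs in heights.items():
        w_h, w_l = max(zs), min(zs)
        # Keep grids with a small internal height spread that also lie close
        # to the preset ground area height value.
        if (w_h - w_l) < diff_threshold and abs(w_l - ground_height) < max_dist:
            marked.add(cell)
    return marked

# Grid "A" is flat and near the preset ground height; grid "B" spans the
# height range of a tall object and is rejected.
heights = {"A": [0.01, 0.03], "B": [0.0, 1.5]}
ground = mark_ground_grids(heights, ground_height=0.0)
```

Points in unmarked grids would then be removed as non-ground points or noise, completing the initial identification.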
The embodiment of the invention provides a point cloud processing method. Fig. 4 is a flowchart of a point cloud processing method according to another embodiment of the present invention. As shown in fig. 4, on the basis of the foregoing embodiment, projecting a plurality of frames of three-dimensional point clouds under the local coordinate system to the target plane according to the conversion relationship between the local coordinate system and the world coordinate system may include:
step S401, divide the target plane into a plurality of grids of equal size, each grid having a grid number.
And S402, calculating the corresponding grid number of the multi-frame three-dimensional point cloud in the target plane under the local coordinate system according to the conversion relation between the local coordinate system and the world coordinate system.
For example, the XOY plane in the local coordinate system is divided into 0.2 × 0.2 m squares to obtain a plurality of grids, and the grids are numbered to obtain grid numbers. Similarly, the XOY plane in the world coordinate system is also divided into 0.2 × 0.2 m squares to obtain a plurality of grids, and these grids are numbered. The x-axis and y-axis coordinates corresponding to a grid can be obtained from its grid number and the 0.2 × 0.2 m grid size. The x-axis and y-axis coordinates in the local coordinate system are then converted into the world coordinate system according to the conversion relationship between the local coordinate system and the world coordinate system, yielding the x-axis and y-axis coordinates in the world coordinate system. Further, the grid in the world coordinate system corresponding to a certain grid in the local coordinate system can be obtained from these x-axis and y-axis coordinates.
Step S403, calculating corresponding height values of the multi-frame three-dimensional point cloud in the target plane under the local coordinate system according to the conversion relation between the local coordinate system and the world coordinate system;
Similarly, following the exemplary description of step S402, the corresponding height values of the multi-frame three-dimensional point cloud in the local coordinate system in the target plane can also be obtained.

Step S403 may also be executed before step S402; step S402 and step S403 can be regarded as executing in parallel, with no required execution order.
And S404, determining a height map according to the grid number of the multi-frame three-dimensional point cloud in the local coordinate system in the target plane and the height value of the multi-frame three-dimensional point cloud in the local coordinate system in the target plane.
After the grid numbers corresponding to the multi-frame three-dimensional point cloud in the local coordinate system in the target plane are obtained through steps S401 to S403, and the corresponding height values in the target plane are likewise obtained, the grid numbers and the height values can be matched with each other, so that the three-dimensional points in the local coordinate system are mapped to the world coordinate system and the height map is obtained.
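The projection of steps S401 to S404 can be sketched as follows, assuming a known 4×4 homogeneous transform from the local coordinate system to the world coordinate system; the function name `build_height_map` is an assumption, and the (x, y) grid key doubles as the grid number:

```python
import numpy as np

def build_height_map(points_local, T_local_to_world, cell=0.2):
    """Project local-frame points into world-frame XOY grids (sketch of
    S401-S404). T_local_to_world: 4x4 homogeneous transform, assumed known."""
    # Transform the points into the world coordinate system
    homo = np.hstack([points_local, np.ones((len(points_local), 1))])
    world = (T_local_to_world @ homo.T).T[:, :3]
    # Grid number from the x/y coordinates; the height value is the z coordinate
    gx = np.floor(world[:, 0] / cell).astype(np.int64)
    gy = np.floor(world[:, 1] / cell).astype(np.int64)
    height_map = {}
    for x, y, z in zip(gx, gy, world[:, 2]):
        height_map.setdefault((x, y), []).append(z)
    return height_map
```

Each dictionary entry corresponds to one grid of the height map, holding the height values of all points projected into that grid.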
The embodiment of the invention provides a point cloud processing method. Fig. 5 is a flowchart of a point cloud processing method according to another embodiment of the present invention. As shown in fig. 5, based on the above embodiment, the determining a height value correction parameter of a plurality of frames of three-dimensional point clouds according to a preprocessed plurality of frames of three-dimensional point clouds and the preset correction model may include:
and S501, inputting the preprocessed three-dimensional point cloud into an optimization solving model.
Optionally, the function equation of the optimization solution model is specifically as follows:
$$\min_{A,B,C} \sum_{i=1}^{n} \sum_{j=1}^{m} \left( z_j^i + \Delta d_j^i - \bar{h}_s \right)^2 \qquad (1)$$

In the formula, $i$ represents the image frame number corresponding to the three-dimensional point cloud; $j$ represents the number of the three-dimensional point in the three-dimensional point cloud image; $(x_j^i, y_j^i, z_j^i)$ represents the three-dimensional coordinate value of the $j$-th three-dimensional point in the $i$-th frame of three-dimensional point cloud image; $m$ represents the total number of three-dimensional points in the $i$-th frame of three-dimensional point cloud image; $n$ represents the total number of three-dimensional point cloud images, namely the total number of accumulated frames of three-dimensional point cloud images; $\Delta d_j^i$ represents the height value correction quantity of the $j$-th three-dimensional point in the $i$-th frame of three-dimensional point cloud image, with the expression:

$$\Delta d_j^i = a_i x_j^i + b_i y_j^i + c_i$$

wherein $a_i$ represents a first correction coefficient, $b_i$ represents a second correction coefficient, and $c_i$ represents a third correction coefficient; $s$ denotes the number of the grid in the height map; $\bar{h}_s$ represents the mean value of the corrected height values of the three-dimensional points accumulated at the $s$-th grid on the height map, with the expression:

$$\bar{h}_s = \frac{1}{K} \sum_{k=1}^{K} \left( z_k + a_{i_k} x_k + b_{i_k} y_k + c_{i_k} \right)$$

wherein $K$ represents the total number of three-dimensional points accumulated in a grid whose difference is below the difference threshold and whose distance from the ground area is smaller than the preset distance, and $i_k$ represents the image frame number of the three-dimensional point cloud to which the $k$-th point belongs. $A$ is used to represent the first correction coefficients of the multi-frame three-dimensional point cloud, $B$ the second correction coefficients, and $C$ the third correction coefficients, with $A = [a_1, \ldots, a_i, \ldots, a_n]^T$, $B = [b_1, \ldots, b_i, \ldots, b_n]^T$, and $C = [c_1, \ldots, c_i, \ldots, c_n]^T$.
And step S502, solving the optimization solution model by a linear least squares method to obtain the correction coefficients.
Specifically, when $(x_j^i, y_j^i, z_j^i)$ is input into formula (1) and formula (1) is solved by a linear least squares method, the values $(a_i, b_i, c_i)$ that minimize formula (1) can be obtained; $(a_i, b_i, c_i)$ are the correction coefficients for correcting the $i$-th frame image.

Similarly, when the three-dimensional points of another frame are input into the above formula (1), the correction coefficients for correcting that frame image can be obtained. The correction coefficients for all frame images are $A = [a_1, \ldots, a_i, \ldots, a_n]^T$, $B = [b_1, \ldots, b_i, \ldots, b_n]^T$, and $C = [c_1, \ldots, c_i, \ldots, c_n]^T$.
Optionally, all three-dimensional points of all frame images may be input into the above formula (1), a linear equation set is established, and the correction coefficients of all frame images may be obtained simultaneously by solving the linear equation set in parallel. The parallel computation can improve the computation efficiency and well meet the real-time requirement of the vehicle-mounted system.
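A simplified, hedged sketch of the least-squares step: here the per-grid target heights are taken as the means of the uncorrected heights and each frame's $(a_i, b_i, c_i)$ is fitted independently, whereas formula (1) couples the grid means with the coefficients and is solved as one joint linear system. The function name and data layout are assumptions for illustration:

```python
import numpy as np

def solve_frame_correction(points, grid_ids, frame_ids, n_frames):
    """Fit (a_i, b_i, c_i) per frame so that z + a*x + b*y + c approaches
    the mean height of each point's grid (simplified decoupled sketch)."""
    # Target height for each point: the mean height of its grid
    means = {}
    for g in set(grid_ids):
        mask = np.array([gi == g for gi in grid_ids])
        means[g] = points[mask, 2].mean()
    target = np.array([means[g] for g in grid_ids])
    coeffs = np.zeros((n_frames, 3))
    for i in range(n_frames):
        sel = np.array([f == i for f in frame_ids])
        if not sel.any():
            continue
        x, y, z = points[sel, 0], points[sel, 1], points[sel, 2]
        # Solve [x y 1] @ [a b c]^T = target - z in the least-squares sense
        Amat = np.column_stack([x, y, np.ones(sel.sum())])
        coeffs[i], *_ = np.linalg.lstsq(Amat, target[sel] - z, rcond=None)
    return coeffs
```

Stacking all frames' equations into one system and solving them together, as the paragraph above describes, gives the joint parallel solution.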
And 503, determining a height value correction parameter of the multi-frame three-dimensional point cloud according to the correction coefficient.
Optionally, the correction coefficients include a first correction coefficient, a second correction coefficient, and a third correction coefficient. Determining the height value correction parameter of the multi-frame three-dimensional point cloud according to the correction coefficients includes: calculating the height value correction parameter of each frame of three-dimensional point cloud according to the first correction coefficient, the second correction coefficient and the third correction coefficient of that frame and the three-dimensional coordinate values of that frame. Specifically, the height value correction parameter of a frame of three-dimensional point cloud may be calculated according to the following function equation:
$$d_j^i = a_i x_j^i + b_i y_j^i + c_i \qquad (2)$$

In the formula, $a_i$, $b_i$ and $c_i$ are respectively the first correction coefficient, the second correction coefficient and the third correction coefficient of the $i$-th frame point cloud image; $(a_i, b_i, c_i)$ are the correction coefficients for correcting the $i$-th frame image; $(x_j^i, y_j^i)$ are the plane coordinates of the $j$-th three-dimensional point in the $i$-th frame image; and $d$ represents the height value correction parameter for correcting all three-dimensional points in the $i$-th frame of image.

After $(a_i, b_i, c_i)$ and $(x_j^i, y_j^i)$ are substituted into formula (2), the height value correction parameter $d$ for correcting all three-dimensional points in the $i$-th frame image can be obtained by solving.
Optionally, after the height value correction parameter $d$ for correcting all three-dimensional points in the $i$-th frame image is obtained by solving, the height values of all three-dimensional points in the $i$-th frame image can be corrected according to $d$. For example, assume that the coordinate value of the $j$-th three-dimensional point in the $i$-th frame image before correction is $(x_j^i, y_j^i, z_j^i)$; the coordinate value of the $j$-th three-dimensional point in the corrected $i$-th frame image is then $(x_j^i, y_j^i, z_j^i + d_j^i)$.
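Applying the correction of formula (2) to one frame can be sketched as follows, assuming the correction is additive on the height value as described above; the function name is an assumption for illustration:

```python
import numpy as np

def apply_height_correction(points, a, b, c):
    """Apply formula (2): d = a*x + b*y + c, correcting z to z + d
    for every point of one frame (a, b, c come from the solved model)."""
    d = a * points[:, 0] + b * points[:, 1] + c
    corrected = points.copy()
    corrected[:, 2] += d      # (x, y, z) -> (x, y, z + d)
    return corrected
```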
Fig. 6 is an effect diagram before correction of the ground point cloud.
FIG. 7 is a diagram illustrating the effect of the method of the embodiment of the present invention after correcting the ground point cloud.
As shown in fig. 6 and 7, the area formed by the black dots in the drawings is the ground area. It can be seen that the ground area identified in fig. 6 has large jitter and a wide distribution on the Z axis, while the ground area identified in fig. 7 is smoother, more compact, and narrowly distributed on the Z axis, so that the ground area corrected by the method of the embodiment of the present invention is identified more accurately.
The embodiment of the invention provides a point cloud processing system. Fig. 8 is a block diagram of a processing system for point cloud according to an embodiment of the present invention, and as shown in fig. 8, the processing system 80 for point cloud includes a detection device 81, a memory 82, and a processor 83. The detection device 81 is configured to detect a multi-frame three-dimensional point cloud including a target region; the memory 82 is used to store program codes; a processor 83, calling program code, which when executed, is configured to: acquiring a multi-frame three-dimensional point cloud containing a target area; preprocessing a plurality of frames of three-dimensional point clouds; determining a height value correction parameter of the multi-frame three-dimensional point cloud according to the pre-processed multi-frame three-dimensional point cloud and a preset correction model; and correcting the height value of the multi-frame three-dimensional point cloud according to the height value correction parameter so as to correct the identification of the target area. The detection device 81 in this embodiment may be the detection device 22 in fig. 2.
Optionally, when the processor 83 preprocesses the multi-frame three-dimensional point cloud, the processor is specifically configured to: and removing noise points in the multi-frame three-dimensional point cloud, wherein the noise points refer to three-dimensional points which do not belong to the target area.
Optionally, when the processor 83 removes noise points in the multi-frame three-dimensional point cloud, the processor is specifically configured to: determining a height map according to the height values of the multi-frame three-dimensional point cloud, wherein the height map comprises a plurality of grids; determining a rough target area in the height map according to a preset target area height value; calculating the difference value between the maximum height value and the minimum height value in the same grid in which the target area is located; determining grids of which the difference is lower than the difference threshold value and the distance between the difference and the preset height value of the target area is smaller than the preset distance; and removing the three-dimensional point cloud outside the grid, wherein the difference value is lower than the difference value threshold value and the distance between the three-dimensional point cloud and the preset target area height value is smaller than the preset distance.
Optionally, when acquiring the multi-frame three-dimensional point cloud, the processor 83 is specifically configured to: acquiring multi-frame three-dimensional point clouds under a local coordinate system, wherein the local coordinate system is a coordinate system established by taking a carrier carrying detection equipment for detecting the multi-frame three-dimensional point clouds as an origin; when determining the height map according to the height values of the multiple frames of three-dimensional point clouds, the processor 83 is specifically configured to: determining a target plane under a world coordinate system; projecting multi-frame three-dimensional point clouds under the local coordinate system to a target plane according to a conversion relation between the local coordinate system and a world coordinate system; and determining a height map according to the height value of the multi-frame three-dimensional point cloud projected in the target plane.
Optionally, when the processor 83 projects the multiple frames of three-dimensional point clouds in the local coordinate system onto the target plane according to the conversion relationship between the local coordinate system and the world coordinate system, the processor is specifically configured to: dividing the target plane into a plurality of grids with equal size, wherein each grid has a grid number; calculating the grid number corresponding to the multi-frame three-dimensional point cloud in the target plane under the local coordinate system according to the conversion relation between the local coordinate system and the world coordinate system; calculating the corresponding height value of the multi-frame three-dimensional point cloud in the target plane under the local coordinate system according to the conversion relation between the local coordinate system and the world coordinate system; and determining a height map according to the grid number of the multi-frame three-dimensional point cloud in the target plane under the local coordinate system and the height value of the multi-frame three-dimensional point cloud in the target plane under the local coordinate system.
Optionally, the preset correction model includes an optimization solution model; when determining the height value correction parameter of the multi-frame three-dimensional point cloud according to the pre-processed multi-frame three-dimensional point cloud and the preset correction model, the processor 83 is specifically configured to: inputting the preprocessed three-dimensional point cloud into the optimization solving model; solving the optimized solving model by adopting a linear least square method to obtain a correction coefficient; and determining a height value correction parameter of the multi-frame three-dimensional point cloud according to the correction coefficient.
Optionally, the correction coefficient includes a first correction coefficient, a second correction coefficient, and a third correction coefficient; when determining the height value correction parameter of the multi-frame three-dimensional point cloud according to the correction coefficient, the processor 83 is specifically configured to: and calculating a height value correction parameter of the frame of three-dimensional point cloud according to the first correction coefficient, the second correction coefficient and the third correction coefficient of the plurality of frames of three-dimensional point clouds and the three-dimensional coordinate value of the frame of three-dimensional point clouds.
Optionally, when the processor 83 obtains a multi-frame three-dimensional point cloud under the local coordinate system, the processor is specifically configured to: acquiring a multi-frame three-dimensional point cloud which is detected by detection equipment and contains a target area; and converting the multi-frame three-dimensional point cloud detected by the detection equipment into the local coordinate system according to the conversion relation between the coordinate system of the detection equipment and the local coordinate system.
Optionally, the detection device comprises at least one of: binocular stereo cameras, TOF cameras and lidar.
Optionally, the target area is a ground area.
The specific principle and implementation of the point cloud processing system provided by the embodiment of the invention are similar to those of the above embodiments, and are not described herein again.
The embodiment acquires a multi-frame three-dimensional point cloud containing a target area; preprocessing a plurality of frames of three-dimensional point clouds; determining a height value correction parameter of the multi-frame three-dimensional point cloud according to the pre-processed multi-frame three-dimensional point cloud and a preset correction model; and correcting the height value of the multi-frame three-dimensional point cloud according to the height value correction parameter so as to correct the identification of the target area. The correction model can determine a height value correction parameter for correcting the height values of the multi-frame three-dimensional point clouds, so that the recognition accuracy of the target area can be improved after the height values of the multi-frame three-dimensional point clouds are corrected according to the height value correction parameter.
The embodiment of the invention provides a movable platform. Fig. 9 is a block diagram of a movable platform according to an embodiment of the present invention. The movable platform of this embodiment is provided on the basis of the technical solution of the embodiment shown in fig. 8. As shown in fig. 9, the movable platform 90 includes: a fuselage 91, a power system 92, and a point cloud processing system 93. The point cloud processing system 93 in the present embodiment may be the point cloud processing system 80 provided in the above-described embodiment.
The specific principle and implementation of the movable platform provided by the embodiment of the invention are similar to those of the embodiment shown in fig. 8, and are not described herein again.
The embodiment obtains a multi-frame three-dimensional point cloud containing a target area; preprocessing a plurality of frames of three-dimensional point clouds; determining a height value correction parameter of the multi-frame three-dimensional point cloud according to the pre-processed multi-frame three-dimensional point cloud and a preset correction model; and correcting the height value of the multi-frame three-dimensional point cloud according to the height value correction parameter so as to correct the identification of the target area. The correction model can determine a height value correction parameter for correcting the height values of the multi-frame three-dimensional point clouds, so that the recognition accuracy of the target area can be improved after the height values of the multi-frame three-dimensional point clouds are corrected according to the height value correction parameter.
In addition, the present embodiment also provides a computer-readable storage medium on which a computer program is stored, the computer program being executed by a processor to implement the point cloud processing method of the above embodiment.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, a division of a unit is merely a logical division, and an actual implementation may have another division, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit.
The integrated unit implemented in the form of a software functional unit may be stored in a computer readable storage medium. The software functional unit is stored in a storage medium and includes several instructions to enable a computer device (which may be a personal computer, a server, or a network device) or a processor (processor) to execute some steps of the methods according to the embodiments of the present invention. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
It is obvious to those skilled in the art that, for convenience and simplicity of description, the foregoing division of the functional modules is merely used as an example, and in practical applications, the above function distribution may be performed by different functional modules according to needs, that is, the internal structure of the device is divided into different functional modules to perform all or part of the above described functions. For the specific working process of the device described above, reference may be made to the corresponding process in the foregoing method embodiment, which is not described herein again.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; while the invention has been described in detail and with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (22)

1. A method for processing point clouds, comprising:
acquiring a multi-frame three-dimensional point cloud containing a target area;
preprocessing the multi-frame three-dimensional point cloud;
determining a height value correction parameter of the multi-frame three-dimensional point cloud according to the pre-processed multi-frame three-dimensional point cloud and a preset correction model;
and correcting the height value of the multi-frame three-dimensional point cloud according to the height value correction parameter so as to correct the identification of the target area.
2. The method of claim 1, wherein the preprocessing the plurality of frames of three-dimensional point clouds comprises:
and removing noise points in the multi-frame three-dimensional point cloud, wherein the noise points refer to three-dimensional points which do not belong to the target area.
3. The method of claim 2, wherein the removing noise points from the plurality of frames of three-dimensional point clouds comprises:
determining a height map according to the height values of the multi-frame three-dimensional point cloud, wherein the height map comprises a plurality of grids;
determining a rough target area in the height map according to a preset target area height value;
calculating the difference value between the maximum height value and the minimum height value in the same grid in which the approximate target area is positioned;
determining a grid of which the difference is lower than a difference threshold value and the distance between the grid and the preset target area height value is smaller than a preset distance;
and removing the three-dimensional point cloud outside the grid, wherein the difference is lower than the difference threshold value and the distance between the three-dimensional point cloud and the preset target area height value is smaller than the preset distance.
4. The method of claim 3, wherein the obtaining a plurality of frames of three-dimensional point clouds including a target region comprises:
acquiring the multi-frame three-dimensional point cloud under a local coordinate system, wherein the local coordinate system is a coordinate system established by taking a carrier carrying detection equipment for detecting the multi-frame three-dimensional point cloud as an origin;
the determining of the height map according to the height values of the multiple frames of three-dimensional point clouds comprises the following steps:
determining a target plane under a world coordinate system;
projecting the multi-frame three-dimensional point cloud under the local coordinate system to the target plane according to the conversion relation between the local coordinate system and the world coordinate system;
and determining a height map according to the height value of the multi-frame three-dimensional point cloud projected in the target plane.
5. The method of claim 4, wherein the projecting the plurality of three-dimensional point clouds in the local coordinate system to the target plane according to the transformation relationship between the local coordinate system and the world coordinate system comprises:
dividing the target plane into a plurality of grids of equal size, each grid having a grid number;
calculating the grid number corresponding to the multi-frame three-dimensional point cloud in the target plane under the local coordinate system according to the conversion relation between the local coordinate system and the world coordinate system;
calculating the corresponding height value of the multi-frame three-dimensional point cloud under the local coordinate system in the target plane according to the conversion relation between the local coordinate system and the world coordinate system;
and determining the height map according to the grid number of the multi-frame three-dimensional point cloud in the target plane under the local coordinate system and the height value of the multi-frame three-dimensional point cloud in the target plane under the local coordinate system.
6. The method according to any one of claims 1 to 5, wherein the preset correction model comprises an optimization solution model; determining a height value correction parameter of the multi-frame three-dimensional point cloud according to the pre-processed multi-frame three-dimensional point cloud and a preset correction model, wherein the height value correction parameter comprises the following steps:
inputting the preprocessed multi-frame three-dimensional point cloud into the optimization solving model;
solving the optimized solving model by adopting a linear least square method to obtain a correction coefficient;
and determining a height value correction parameter of the multi-frame three-dimensional point cloud according to the correction coefficient.
7. The method according to claim 6, wherein the correction coefficients include a first correction coefficient, a second correction coefficient, and a third correction coefficient;
the determining of the height value correction parameter of the multi-frame three-dimensional point cloud according to the correction coefficient comprises the following steps:
and calculating a height value correction parameter of the frame of three-dimensional point cloud according to the first correction coefficient, the second correction coefficient, the third correction coefficient and the three-dimensional coordinate value of the frame of three-dimensional point cloud.
8. The method according to claim 4 or 5, wherein the obtaining the plurality of frames of three-dimensional point clouds in the local coordinate system comprises:
acquiring a multi-frame three-dimensional point cloud which is detected by the detection equipment and contains a target area;
and converting the multi-frame three-dimensional point cloud detected by the detection equipment into the local coordinate system according to the conversion relation between the detection equipment coordinate system and the local coordinate system.
9. The method of claim 8, wherein the detection device comprises at least one of:
binocular stereo cameras, TOF cameras and lidar.
10. The method of any one of claims 1-9, wherein the target area is a ground area.
11. A system for processing a point cloud, comprising: a detection device, a memory, and a processor;
the detection equipment is used for detecting multi-frame three-dimensional point cloud containing a target area;
the memory is used for storing program codes; the processor, invoking the program code, when executed, is configured to:
acquiring a multi-frame three-dimensional point cloud containing a target area;
preprocessing the multi-frame three-dimensional point cloud;
determining a height value correction parameter of the multi-frame three-dimensional point cloud according to the pre-processed multi-frame three-dimensional point cloud and a preset correction model;
and correcting the height value of the multi-frame three-dimensional point cloud according to the height value correction parameter so as to correct the identification of the target area.
12. The system of claim 11, wherein the processor, when pre-processing the plurality of frames of three-dimensional point clouds, is configured to:
and removing noise points in the multi-frame three-dimensional point cloud, wherein the noise points refer to three-dimensional points which do not belong to the target area.
13. The system of claim 12, wherein, when removing the noise points from the multiple frames of the three-dimensional point cloud, the processor is specifically configured to:
determine a height map according to the height values of the multiple frames of the three-dimensional point cloud, wherein the height map comprises a plurality of grids;
determine an approximate target area in the height map according to a preset target-area height value;
calculate, for each grid in which the approximate target area is located, the difference between the maximum height value and the minimum height value within the grid;
determine the grids for which the difference is below a difference threshold and whose distance from the preset target-area height value is smaller than a preset distance; and
remove the three-dimensional points that fall outside the grids so determined.
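The grid-based noise removal of claims 12-13 can be sketched as follows. This is an illustrative reading only, not the patented implementation; the grid size and both thresholds are assumed values, and the NumPy representation of the point cloud is likewise an assumption:

```python
import numpy as np

def remove_noise(points, target_height=0.0, cell=0.5,
                 diff_thresh=0.2, dist_thresh=0.3):
    """Grid-based noise removal over a height map.

    points: (N, 3) array of x, y, z, where z is the height value.
    A grid cell is treated as target area when (max z - min z) within the
    cell is below diff_thresh and the cell's mean height lies within
    dist_thresh of the preset target-area height; points outside such
    cells are removed as noise.
    """
    # Assign each point to a 2D grid cell on the x-y plane.
    cells = np.floor(points[:, :2] / cell).astype(np.int64)
    keep = np.zeros(len(points), dtype=bool)
    for c in np.unique(cells, axis=0):
        mask = np.all(cells == c, axis=1)
        z = points[mask, 2]
        if (z.max() - z.min()) < diff_thresh and \
                abs(z.mean() - target_height) < dist_thresh:
            keep |= mask
    return points[keep]
```

Cells containing obstacles show a large height spread within the grid, so both conditions fail and their points are discarded together with the cell.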
14. The system of claim 13, wherein, when acquiring the multiple frames of the three-dimensional point cloud, the processor is specifically configured to:
acquire the multiple frames of the three-dimensional point cloud in a local coordinate system, wherein the local coordinate system is a coordinate system whose origin is the carrier that carries the detection device used to detect the multiple frames of the three-dimensional point cloud;
and wherein, when determining the height map according to the height values of the multiple frames of the three-dimensional point cloud, the processor is specifically configured to:
determine a target plane in a world coordinate system;
project the multiple frames of the three-dimensional point cloud in the local coordinate system onto the target plane according to the conversion relation between the local coordinate system and the world coordinate system; and
determine the height map according to the height values of the multiple frames of the three-dimensional point cloud projected onto the target plane.
15. The system of claim 14, wherein, when projecting the multiple frames of the three-dimensional point cloud in the local coordinate system onto the target plane according to the conversion relation between the local coordinate system and the world coordinate system, the processor is specifically configured to:
divide the target plane into a plurality of grids of equal size, each grid having a grid number;
calculate, according to the conversion relation between the local coordinate system and the world coordinate system, the grid numbers in the target plane corresponding to the multiple frames of the three-dimensional point cloud in the local coordinate system;
calculate, according to the conversion relation between the local coordinate system and the world coordinate system, the height values in the target plane corresponding to the multiple frames of the three-dimensional point cloud in the local coordinate system; and
determine the height map according to the grid numbers and the height values in the target plane of the multiple frames of the three-dimensional point cloud in the local coordinate system.
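One way to read claims 14-15: transform each local-frame point into the world frame, index it by its grid number on the target plane, and record its height value. A minimal sketch follows; the 4x4 homogeneous transform and the dictionary-based height map are assumptions for illustration, not claim language:

```python
import numpy as np

def build_height_map(points_local, T_local_to_world, cell=0.5):
    """Project local-frame points onto the world x-y target plane.

    points_local: (N, 3) points in the carrier's local coordinate system.
    T_local_to_world: 4x4 homogeneous transform (the conversion relation
    between the local and world coordinate systems).
    Returns a dict mapping grid number (i, j) -> list of height values.
    """
    # Transform into the world frame using homogeneous coordinates.
    ones = np.ones((len(points_local), 1))
    pts_w = (T_local_to_world @ np.hstack([points_local, ones]).T).T[:, :3]

    # Grid number on the target plane, and the height value per point.
    ij = np.floor(pts_w[:, :2] / cell).astype(np.int64)
    height_map = {}
    for key, z in zip(map(tuple, ij), pts_w[:, 2]):
        height_map.setdefault(key, []).append(float(z))
    return height_map
```

Keeping all heights per grid (rather than one aggregate) is what lets the noise-removal step of claim 13 compare the maximum and minimum height within a cell.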
16. The system of any one of claims 11-15, wherein the preset correction model comprises an optimization solution model;
and wherein, when determining the height value correction parameter of the multiple frames of the three-dimensional point cloud according to the pre-processed multiple frames of the three-dimensional point cloud and the preset correction model, the processor is specifically configured to:
input the pre-processed three-dimensional point cloud into the optimization solution model;
solve the optimization solution model by a linear least-squares method to obtain correction coefficients; and
determine the height value correction parameter of the multiple frames of the three-dimensional point cloud according to the correction coefficients.
17. The system of claim 16, wherein the correction coefficients comprise a first correction coefficient, a second correction coefficient, and a third correction coefficient;
and wherein, when determining the height value correction parameter of the multiple frames of the three-dimensional point cloud according to the correction coefficients, the processor is specifically configured to:
calculate, for each frame of the three-dimensional point cloud, a height value correction parameter according to the first correction coefficient, the second correction coefficient, the third correction coefficient, and the three-dimensional coordinate values of that frame of the three-dimensional point cloud.
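Claims 16-17 describe solving an optimization model by linear least squares to obtain three correction coefficients, then combining them with each point's coordinates. One plausible instantiation, assumed here for illustration, is fitting a plane z ≈ a·x + b·y + c to the pre-processed target-area points and using the fitted value at (x, y) as the per-point height value correction parameter:

```python
import numpy as np

def fit_correction(points):
    """Solve for correction coefficients (a, b, c) by linear least squares.

    Assumed model: the target-area height should be flat, so we fit
    z ~ a*x + b*y + c over the pre-processed (N, 3) points. a, b, c play
    the roles of the first, second, and third correction coefficients.
    """
    A = np.column_stack([points[:, 0], points[:, 1], np.ones(len(points))])
    coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    return coeffs  # (a, b, c)

def correct_heights(points, coeffs):
    """Subtract the per-point correction parameter a*x + b*y + c from z."""
    a, b, c = coeffs
    corrected = points.copy()
    corrected[:, 2] -= a * points[:, 0] + b * points[:, 1] + c
    return corrected
```

Because the residual z - (a·x + b·y + c) is linear in (a, b, c), the normal equations have a closed-form solution and `lstsq` solves them directly, with no iterative optimizer needed.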
18. The system of claim 14 or 15, wherein, when acquiring the multiple frames of the three-dimensional point cloud in the local coordinate system, the processor is specifically configured to:
acquire the multiple frames of the three-dimensional point cloud containing the target area detected by the detection device; and
convert the multiple frames of the three-dimensional point cloud detected by the detection device into the local coordinate system according to the conversion relation between the detection device's coordinate system and the local coordinate system.
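The conversion in claim 18 is an ordinary change of reference frame. A sketch under the assumption that the extrinsic calibration between the detection device and the carrier is available as a 4x4 homogeneous matrix; it then composes with the local-to-world transform of claim 14 as T_local_to_world @ T_device_to_local:

```python
import numpy as np

def device_to_local(points_device, T_device_to_local):
    """Convert points from the detection-device frame into the local frame.

    points_device: (N, 3) points in the detection device's coordinate system.
    T_device_to_local: assumed 4x4 extrinsic transform from the device
    frame to the carrier's local coordinate system.
    """
    ones = np.ones((len(points_device), 1))
    return (T_device_to_local @ np.hstack([points_device, ones]).T).T[:, :3]
```

With a binocular camera, TOF camera, and lidar mounted on one carrier, each sensor gets its own extrinsic matrix, and this step brings all of their point clouds into the single local frame that the rest of the pipeline assumes.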
19. The system of claim 18, wherein the detection device comprises at least one of:
a binocular stereo camera, a TOF camera, and a lidar.
20. The system of any one of claims 11-19, wherein the target area is a ground area.
21. A movable platform, comprising: a fuselage, a power system, and the system for processing a point cloud according to any one of claims 11-20.
22. A computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the method of any one of claims 1-10.
CN201980012171.7A 2019-05-29 2019-05-29 Processing method, equipment and computer readable storage medium of point cloud Active CN111699410B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/088931 WO2020237516A1 (en) 2019-05-29 2019-05-29 Point cloud processing method, device, and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN111699410A true CN111699410A (en) 2020-09-22
CN111699410B CN111699410B (en) 2024-06-07

Family

ID=72476452

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201980012171.7A Active CN111699410B (en) 2019-05-29 2019-05-29 Processing method, equipment and computer readable storage medium of point cloud

Country Status (2)

Country Link
CN (1) CN111699410B (en)
WO (1) WO2020237516A1 (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102831646A (en) * 2012-08-13 2012-12-19 东南大学 Scanning laser based large-scale three-dimensional terrain modeling method
CN106441151A (en) * 2016-09-30 2017-02-22 中国科学院光电技术研究所 Three-dimensional object European space reconstruction measurement system based on vision and active optics fusion
CN108254758A (en) * 2017-12-25 2018-07-06 清华大学苏州汽车研究院(吴江) Three-dimensional road construction method based on multi-line laser radar and GPS
CN109297510A (en) * 2018-09-27 2019-02-01 百度在线网络技术(北京)有限公司 Relative pose scaling method, device, equipment and medium
US20190086524A1 (en) * 2017-09-17 2019-03-21 Baidu Online Network Technology (Beijing) Co., Ltd . Parameter calibration method and apparatus of multi-line laser radar, device and readable medium

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2082188B1 (en) * 2006-10-20 2013-06-05 TomTom Global Content B.V. Computer arrangement for and method of matching location data of different sources
CN106530380B (en) * 2016-09-20 2019-02-26 长安大学 A kind of ground point cloud dividing method based on three-dimensional laser radar
CN110274602A (en) * 2018-03-15 2019-09-24 奥孛睿斯有限责任公司 Indoor map method for auto constructing and system


Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112435193A (en) * 2020-11-30 2021-03-02 Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences Method and device for denoising point cloud data, storage medium and electronic equipment
CN112435193B (en) * 2020-11-30 2024-05-24 Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences Method and device for denoising point cloud data, storage medium and electronic equipment
WO2022126380A1 (en) * 2020-12-15 2022-06-23 SZ DJI Technology Co., Ltd. Three-dimensional point cloud segmentation method and apparatus, and movable platform
CN114111568A (en) * 2021-09-30 2022-03-01 Suteng Innovation Technology Co., Ltd. Method and device for determining appearance size of dynamic target, medium and electronic equipment
CN114782438A (en) * 2022-06-20 2022-07-22 Shenzhen Xinrun Fulian Digital Technology Co., Ltd. Object point cloud correction method and device, electronic equipment and storage medium
CN114782438B (en) * 2022-06-20 2022-09-16 Shenzhen Xinrun Fulian Digital Technology Co., Ltd. Object point cloud correction method and device, electronic equipment and storage medium
CN115830262A (en) * 2023-02-14 2023-03-21 Jinan Survey and Mapping Research Institute Real-scene three-dimensional model building method and device based on object segmentation
CN115830262B (en) * 2023-02-14 2023-05-26 Jinan Survey and Mapping Research Institute Real-scene three-dimensional model building method and device based on object segmentation
CN116309124A (en) * 2023-02-15 2023-06-23 Linding Optics (Jiangsu) Co., Ltd. Correction method of optical curved surface mold, electronic equipment and storage medium
CN116309124B (en) * 2023-02-15 2023-10-20 Linding Optics (Jiangsu) Co., Ltd. Correction method of optical curved surface mold, electronic equipment and storage medium

Also Published As

Publication number Publication date
WO2020237516A1 (en) 2020-12-03
CN111699410B (en) 2024-06-07

Similar Documents

Publication Publication Date Title
CN111699410B (en) Point cloud processing method, device and computer readable storage medium
CN110988912B (en) Road target and distance detection method, system and device for automatic driving vehicle
CN110869974B (en) Point cloud processing method, equipment and storage medium
Banerjee et al. Online camera lidar fusion and object detection on hybrid data for autonomous driving
CN112907676B (en) Calibration method, device and system of sensor, vehicle, equipment and storage medium
CN113111887B (en) Semantic segmentation method and system based on information fusion of camera and laser radar
CN105160702B (en) The stereopsis dense Stereo Matching method and system aided in based on LiDAR point cloud
CN111553859A (en) Laser radar point cloud reflection intensity completion method and system
Siegemund et al. A temporal filter approach for detection and reconstruction of curbs and road surfaces based on conditional random fields
CN111882612A (en) Vehicle multi-scale positioning method based on three-dimensional laser detection lane line
KR20160123668A (en) Device and method for recognition of obstacles and parking slots for unmanned autonomous parking
Drulea et al. Omnidirectional stereo vision using fisheye lenses
JP2014138420A (en) Depth sensing method and system for autonomous vehicle
CN110119679B (en) Object three-dimensional information estimation method and device, computer equipment and storage medium
CN112154454A (en) Target object detection method, system, device and storage medium
CN112464812B (en) Vehicle-based concave obstacle detection method
CN113160327A (en) Method and system for realizing point cloud completion
CN112097732A (en) Binocular camera-based three-dimensional distance measurement method, system, equipment and readable storage medium
CN112700486B (en) Method and device for estimating depth of road surface lane line in image
CN112166458A (en) Target detection and tracking method, system, equipment and storage medium
CN112184793B (en) Depth data processing method and device and readable storage medium
CN115249349A (en) Point cloud denoising method, electronic device and storage medium
CN115164919B (en) Method and device for constructing spatial travelable area map based on binocular camera
CN114119729A (en) Obstacle identification method and device
CN114549542A (en) Visual semantic segmentation method, device and equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20240522

Address after: Building 3, Xunmei Science and Technology Plaza, No. 8 Keyuan Road, Science and Technology Park Community, Yuehai Street, Nanshan District, Shenzhen City, Guangdong Province, 518057, 1634

Applicant after: Shenzhen Zhuoyu Technology Co.,Ltd.

Country or region after: China

Address before: 518057 Shenzhen Nanshan High-tech Zone, Shenzhen, Guangdong Province, 6/F, Shenzhen Industry, Education and Research Building, Hong Kong University of Science and Technology, No. 9 Yuexingdao, South District, Nanshan District, Shenzhen City, Guangdong Province

Applicant before: SZ DJI TECHNOLOGY Co.,Ltd.

Country or region before: China

GR01 Patent grant