Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It will be understood that when an element is referred to as being "secured to" another element, it can be directly on the other element or intervening elements may also be present. When a component is referred to as being "connected" to another component, it can be directly connected to the other component or intervening components may also be present.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
Some embodiments of the invention are described in detail below with reference to the accompanying drawings. The embodiments described below and the features of the embodiments can be combined with each other without conflict.
The embodiment of the invention provides a point cloud processing method. The method can be applied to vehicles, such as unmanned vehicles and vehicles equipped with Advanced Driver Assistance Systems (ADAS). It can be understood that the method may also be applied to an unmanned aerial vehicle, for example an unmanned aerial vehicle equipped with a detection device for acquiring point cloud data. The point cloud processing method provided by the embodiment of the invention can be applied to real-time ground three-dimensional reconstruction. Ground three-dimensional reconstruction is significant because the point cloud obtained by laser radar scanning contains mostly ground points, and these ground points can influence the subsequent classification, identification and tracking of obstacle point clouds. For example, in a typical application scenario, the area in front of a vehicle includes a ground area, other vehicles, buildings, trees, fences, pedestrians and the like. The bottoms of the wheels of the vehicle in front are in contact with the ground; in other embodiments, objects such as traffic signs may be present in the area in front of the vehicle, and the bottoms of the traffic signs are also in contact with the ground. Therefore, when identifying objects such as front vehicles and traffic signs, owing to the sparsity of a single-frame laser point cloud, existing methods for reconstructing a three-dimensional scene from laser point clouds have to accumulate multiple frames of point clouds over a period of time for time-sequence fusion before a higher-quality three-dimensional scene can be reconstructed.
However, in an automatic driving system the vehicle-mounted laser radar moves with the vehicle, and owing to vehicle positioning errors, fusing the accumulated multi-frame point clouds causes the same surface to jitter significantly along the z axis. The reconstruction accuracy is therefore not ideal: ground points at the bottom of a front vehicle and/or at the bottom of a traffic sign are easily misidentified as three-dimensional points of the front vehicle or the traffic sign, or the bottom points of the front vehicle and/or the traffic sign are missed as three-dimensional points. Consequently, when vehicles, traffic signs, buildings, trees, fences, pedestrians and the like are identified in a three-dimensional point cloud, the ground point cloud in the three-dimensional point cloud needs to be identified and filtered out. However, the existing ground point cloud identification methods are low in accuracy, so errors exist in the identification of the ground point cloud, which in turn causes false detection or missed detection of obstacles, especially short and small obstacles. The point cloud processing method provided by the embodiment of the invention can correct the point cloud, reduce the negative influence of multi-frame accumulation, and thereby obtain a more ideal result.
The embodiment of the invention provides a point cloud processing method. Fig. 1 is a flowchart of a point cloud processing method according to an embodiment of the present invention. As shown in fig. 1, the method in this embodiment may include:
step S101, obtaining a multi-frame three-dimensional point cloud containing a target area.
In the embodiment of the invention, the multi-frame three-dimensional point cloud is under a local coordinate system.
In an optional implementation manner, the multiple frames of three-dimensional point clouds including the target region are obtained by directly obtaining the multiple frames of three-dimensional point clouds in the local coordinate system. The local coordinate system is a coordinate system established with a carrier carrying a probe for detecting a plurality of frames of three-dimensional point clouds as an origin, for example, a coordinate system established with a vehicle as an origin. The carrier may be a vehicle or an unmanned aerial vehicle, and the invention is not limited to this.
In another optional implementation, obtaining a multi-frame three-dimensional point cloud including a target region includes: acquiring a multi-frame three-dimensional point cloud containing a target area under a coordinate system of detection equipment; and converting the three-dimensional point cloud detected by the detection equipment into the local coordinate system according to the conversion relation between the coordinate system of the detection equipment and the local coordinate system. Optionally, the obtaining of the multi-frame three-dimensional point cloud including the target area under the coordinate system of the detection device includes: and acquiring three-dimensional point cloud which is detected by detection equipment carried on the carrier and contains a target area around the carrier.
Specifically, as shown in fig. 2, a detection device 22 is disposed on the vehicle 21; the detection device 22 may be a binocular stereo camera, a TOF camera and/or a laser radar. For example, while the vehicle 21 is driving in the direction indicated by the arrow in fig. 2, the detection device 22 detects a three-dimensional point cloud of the environmental information around the vehicle 21 in real time. Taking a laser radar as an example of the detection device 22: when a laser beam emitted by the laser radar irradiates an object surface, the surface reflects the beam, and the laser radar can determine information such as the direction and distance of the object relative to the laser radar from the reflected beam. If the laser beam emitted by the laser radar scans along a certain trajectory, for example a 360-degree rotating scan, a large number of laser points are obtained, forming laser point cloud data of the object, i.e., a three-dimensional point cloud.
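As a concrete illustration of how a single laser radar return becomes a three-dimensional point, the sketch below converts one return, given as range, azimuth and elevation, into Cartesian coordinates in the sensor coordinate system. The conversion formula and function name are illustrative assumptions; the patent does not prescribe a particular conversion.

```python
import math

def lidar_return_to_point(range_m, azimuth_rad, elevation_rad):
    """Convert one lidar return (range, azimuth, elevation) to a
    Cartesian point in the sensor coordinate system."""
    # Horizontal projection of the range, then split into x/y by azimuth.
    horizontal = range_m * math.cos(elevation_rad)
    x = horizontal * math.cos(azimuth_rad)
    y = horizontal * math.sin(azimuth_rad)
    z = range_m * math.sin(elevation_rad)
    return (x, y, z)

# A return straight ahead at 10 m with zero elevation maps to (10, 0, 0).
point = lidar_return_to_point(10.0, 0.0, 0.0)
```

Sweeping the azimuth over a full rotation while recording such points yields the laser point cloud described above.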
The three-dimensional point cloud obtained in step S101 is continuous N frames of sparse point cloud data accumulated in the current time window.
Alternatively, the target area may be an object having a flat surface. The embodiment of the present invention is described by taking the target area as a ground area, but the target area may also be an object such as a wall surface or a desktop; the present invention is not limited in this respect. The method of the embodiment of the invention can likewise be applied to the identification of objects with flat surfaces, such as walls or desktops.
And S102, preprocessing a plurality of frames of three-dimensional point clouds.
Because the multi-frame three-dimensional point cloud includes point clouds or noise points in non-target areas, the multi-frame three-dimensional point cloud needs to be preprocessed to filter out the point clouds or noise points in the non-target areas.
Optionally, the preprocessing is performed on the multi-frame three-dimensional point cloud, and includes: and removing noise points in the multi-frame three-dimensional point cloud, wherein the removed noise points refer to three-dimensional points which do not belong to the target area.
And S103, determining a height value correction parameter of the multi-frame three-dimensional point cloud according to the pre-processed multi-frame three-dimensional point cloud and a preset correction model.
Specifically, the preprocessed multi-frame three-dimensional point cloud is input into a preset correction model, and the preset correction model outputs a height value correction parameter of the multi-frame three-dimensional point cloud.
And step S104, correcting the height value of the multi-frame three-dimensional point cloud according to the height value correction parameter so as to correct the identification of the target area.
In this embodiment, assume that the three-dimensional coordinates of a three-dimensional point in the three-dimensional point cloud are (x_i, y_i, z_i), where x_i, y_i and z_i are respectively the coordinate values of the point along the X, Y and Z directions of the local coordinate system, and the height value refers to the coordinate value of the point along the Z direction of the local coordinate system. The local coordinate system is a coordinate system established with the carrier on which the detection device for detecting the multiple frames of three-dimensional point clouds is mounted as the origin, for example, a coordinate system established with the vehicle as the origin.
Specifically, because the false recognition between the ground area and other objects in the three-dimensional point cloud obtained by laser radar scanning is mainly caused by height value errors of the ground area, correcting the height values of the multiple frames of three-dimensional point clouds with the height value correction parameter corrects the identification of the ground area, improves the identification precision of the ground, and realizes three-dimensional reconstruction of the ground. Continuing with the exemplary application scenario described above, the area in front of the vehicle 21 includes a ground area, other vehicles, buildings, trees, fences, pedestrians and the like. As shown in fig. 2, the bottoms of the wheels of the front vehicle 23 are in contact with the ground; in other embodiments, there may be objects such as traffic signs in the area in front of the vehicle 21, whose bottoms are also in contact with the ground. Therefore, when identifying objects such as the front vehicle 23 and traffic signs, if the height value of the ground area is not accurate enough, the ground points at the bottom of the front vehicle 23 and/or at the bottom of a traffic sign may easily be misidentified as three-dimensional points of the front vehicle or the traffic sign. After the ground area is corrected with the height value correction parameter, the points at the bottom of the front vehicle 23 and/or at the bottom of the traffic sign can be correctly recognized as three-dimensional points of the non-ground area, that is, as three-dimensional points of the front vehicle 23 or the traffic sign.
The embodiment obtains a multi-frame three-dimensional point cloud containing a target area; preprocessing a plurality of frames of three-dimensional point clouds; determining a height value correction parameter of the multi-frame three-dimensional point cloud according to the pre-processed multi-frame three-dimensional point cloud and a preset correction model; and correcting the height value of the multi-frame three-dimensional point cloud according to the height value correction parameter so as to correct the identification of the target area. The correction model can determine a height value correction parameter for correcting the height values of the multi-frame three-dimensional point clouds, so that the recognition accuracy of the target area can be improved after the height values of the multi-frame three-dimensional point clouds are corrected according to the height value correction parameter.
The embodiment of the invention provides a point cloud processing method. Fig. 3 is a flowchart of a point cloud processing method according to another embodiment of the present invention. As shown in fig. 3, on the basis of the embodiment shown in fig. 1, the preprocessing of the multiple frames of three-dimensional point clouds in this embodiment may be performed by projecting the three-dimensional point clouds obtained by laser radar scanning onto the XOY plane of a world coordinate system, and then determining, according to the height range of the three-dimensional points mapped into each grid of the XOY plane, whether the points in the grid belong to the ground area. This specifically includes the following steps:
step S301, determining a height map according to the height values of the multi-frame three-dimensional point cloud, wherein the determined height map comprises a plurality of grids.
Optionally, determining the height map according to the height values of the multiple frames of three-dimensional point clouds includes: determining a target plane in a world coordinate system; projecting the multiple frames of three-dimensional point clouds in the local coordinate system onto the target plane according to the conversion relationship between the local coordinate system and the world coordinate system; and determining the height map according to the height values of the multiple frames of three-dimensional point clouds projected onto the target plane. Specifically, a right-hand coordinate system with the Z axis pointing vertically downward is taken as the world coordinate system, and the target plane may be the XOY plane of the world coordinate system divided into a plurality of square grids of the same size. Similarly, a local coordinate system with a vertically downward Z axis is established with the vehicle as the origin, and the X, Y and Z axes of the local coordinate system are aligned with the X, Y and Z axes of the world coordinate system respectively. If n frames of sparse point clouds need to be accumulated to reconstruct the ground, the height map of the point clouds can be obtained by projecting the n accumulated frames onto the XOY plane of the world coordinate system.
Specifically, according to the conversion relationship between the local coordinate system and the world coordinate system, each three-dimensional point in the three-dimensional point cloud under the local coordinate system is projected into the world coordinate system. For example, let point j denote a three-dimensional point in the three-dimensional point cloud, record its position in the local coordinate system as p_j^local, and record the position of point j converted into the world coordinate system as p_j^world. The conversion relationship between the local coordinate system and the world coordinate system is R, and in the world coordinate system the three-dimensional position of the laser radar, namely the translation vector, is t. The position of point j converted into the world coordinate system can then be obtained by the formula:

p_j^world = R·p_j^local + t

Thereby, the projected point of point j in the world coordinate system can be calculated. Similarly, the projection points in the target plane of the three-dimensional points other than point j in the three-dimensional point cloud can be determined, and the height map is determined according to the height values of point j and the other three-dimensional points projected onto the target plane.
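The rigid transformation p_world = R·p_local + t can be sketched for a whole frame at once; the function name below is illustrative.

```python
import numpy as np

def local_to_world(points_local, R, t):
    """Project an Nx3 array of points from the local (vehicle) coordinate
    system into the world coordinate system: p_world = R @ p_local + t."""
    return points_local @ R.T + t

# An identity rotation with a pure translation simply shifts the points.
R = np.eye(3)
t = np.array([5.0, 0.0, 0.0])
pts = np.array([[1.0, 2.0, 3.0]])
world = local_to_world(pts, R, t)
```

Applying this to every accumulated frame yields the world-frame points whose x and y coordinates select a grid and whose z coordinate contributes to the height map.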
And step S302, determining a rough target area in the height map according to a preset target area height value.
In some embodiments, the preset target area height value may be a preset ground area height value. The preset ground area height value can be estimated from the height of the vehicle in the local coordinate system: assuming the maximum height of the vehicle in the local coordinate system is z_1 and the overall height of the vehicle is 1.5 m, a preliminary ground area height value is obtained as z_1 − 1.5, from which a rough ground area can be determined in the height map. The target area determined here is only the rough grid range in which the target area is located, divided from the height map; it is not precise and may include three-dimensional points of other objects, which therefore need to be further filtered out by subsequent processing.
Step S303, calculating a difference between the maximum height value and the minimum height value in the same grid in which the target region is located.
Assume that after projection w three-dimensional points are mapped into a certain grid of the height map, and that among the height values of the w three-dimensional points the maximum height value is w_h and the minimum height value is w_l. The difference between the maximum height value and the minimum height value in the grid is then found by calculating w_h − w_l.
And step S304, determining the grids of which the difference is lower than the difference threshold value and the distance between the difference and the preset target area height value is smaller than the preset distance.
Suppose w_h − w_l is below the difference threshold, and the distance between it and the preset target area height value is smaller than the preset distance; then the grid corresponding to these three-dimensional points is marked. For the specific marking method, reference may be made to marking methods in the prior art, for example marking with different colors, and the invention is not particularly limited here.
And 305, removing the three-dimensional point cloud outside the grid, wherein the difference value is lower than the difference threshold value and the distance between the three-dimensional point cloud and the preset target area height value is smaller than the preset distance.
After the grids corresponding to the three-dimensional points for which w_h − w_l is below the difference threshold and the distance from the preset target area height value is smaller than the preset distance have been marked in the above steps, the unmarked grids within the rough target area are removed; the points in the unmarked grids can be regarded as non-ground point clouds or noise points. This completes the removal of the three-dimensional point clouds outside the grids whose difference is below the difference threshold and whose distance from the preset target area height value is smaller than the preset distance, and realizes the initial identification of the target area. The identified target area then needs to be further corrected to improve the accuracy of target area identification.
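The grid filtering of steps S303 to S305 can be sketched as follows. Interpreting the "distance to the preset target area height value" as the distance between a grid's mean height and the preset ground height is an assumption of this illustration, as are the function and parameter names.

```python
import numpy as np

def filter_ground_grids(grid_heights, diff_threshold, ground_height, max_distance):
    """Keep only grids whose height span (max - min) is below the
    difference threshold and whose mean height lies close to the preset
    ground height. grid_heights maps a grid id to its list of z values."""
    kept = set()
    for gid, zs in grid_heights.items():
        zs = np.asarray(zs)
        span = zs.max() - zs.min()                       # w_h - w_l
        near_ground = abs(zs.mean() - ground_height) < max_distance
        if span < diff_threshold and near_ground:
            kept.add(gid)
    return kept

# Grid 0 is flat and near the ground; grid 1 has a large span (an obstacle);
# grid 2 is flat but far from the preset ground height.
grids = {0: [0.01, 0.03], 1: [0.0, 1.2], 2: [2.0, 2.05]}
kept = filter_ground_grids(grids, diff_threshold=0.2, ground_height=0.0, max_distance=0.5)
```

Points falling outside the kept grids would then be discarded as non-ground points or noise.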
The embodiment of the invention provides a point cloud processing method. Fig. 4 is a flowchart of a point cloud processing method according to another embodiment of the present invention. As shown in fig. 4, on the basis of the foregoing embodiment, projecting a plurality of frames of three-dimensional point clouds under the local coordinate system to the target plane according to the conversion relationship between the local coordinate system and the world coordinate system may include:
step S401, divide the target plane into a plurality of grids of equal size, each grid having a grid number.
And S402, calculating the corresponding grid number of the multi-frame three-dimensional point cloud in the target plane under the local coordinate system according to the conversion relation between the local coordinate system and the world coordinate system.
For example, the XOY plane in the local coordinate system is divided into 0.2 m × 0.2 m squares to obtain a plurality of grids, which are numbered to obtain grid numbers. Similarly, the XOY plane in the world coordinate system is divided into 0.2 m × 0.2 m squares and the grids are numbered. From a grid's number and the 0.2 m × 0.2 m grid size, the x-axis and y-axis coordinates corresponding to the grid can be obtained. The x-axis and y-axis coordinates in the local coordinate system are then converted into the world coordinate system according to the conversion relationship between the local coordinate system and the world coordinate system, yielding x-axis and y-axis coordinates in the world coordinate system; from these, the grid in the world coordinate system corresponding to a given grid in the local coordinate system can be obtained.
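The grid numbering of step S402 can be sketched as follows. The single-index numbering scheme (row × cells_per_row + column) and the cells_per_row value are illustrative assumptions, since the patent does not fix a numbering convention.

```python
import math

GRID_SIZE = 0.2  # each grid cell is 0.2 m x 0.2 m

def grid_number(x, y, cells_per_row=1000):
    """Map planar coordinates to a single grid number (assumed layout)."""
    col = math.floor(x / GRID_SIZE)
    row = math.floor(y / GRID_SIZE)
    return row * cells_per_row + col

def grid_origin(number, cells_per_row=1000):
    """Recover the x/y coordinates of a grid's corner from its number."""
    row, col = divmod(number, cells_per_row)
    return col * GRID_SIZE, row * GRID_SIZE

n = grid_number(1.05, 0.45)   # point inside the grid with corner (1.0, 0.4)
ox, oy = grid_origin(n)
```

Recovering the grid corner coordinates from the number, as in grid_origin, corresponds to the step of obtaining x-axis and y-axis coordinates from a grid number and the grid size.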
Step S403, calculating corresponding height values of the multi-frame three-dimensional point cloud in the target plane under the local coordinate system according to the conversion relation between the local coordinate system and the world coordinate system;
similarly, according to the exemplary description of step S402, the corresponding height value of the multi-frame three-dimensional point cloud in the target plane under the local coordinate system may also be obtained.
Alternatively, step S403 may be executed before step S402; steps S402 and S403 can be regarded as executed in parallel, with no required execution order between them.
And S404, determining a height map according to the grid number of the multi-frame three-dimensional point cloud in the local coordinate system in the target plane and the height value of the multi-frame three-dimensional point cloud in the local coordinate system in the target plane.
After the grid numbers corresponding to the multiple frames of three-dimensional point clouds in the target plane and the corresponding height values are obtained through the calculations of steps S401 to S403, the grid numbers and the height values can be associated with each other, so that the three-dimensional points in the local coordinate system are mapped into the world coordinate system and the height map is obtained.
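The assembly of the height map in steps S401 to S404 might be sketched as below, with the grid key kept as a row/column pair instead of a single number for simplicity; the function name and data layout are illustrative assumptions.

```python
import numpy as np

GRID = 0.2  # grid edge length in metres

def build_height_map(points_world):
    """Accumulate an Nx3 world-frame point cloud into a height map:
    a dict mapping (row, col) grid indices to the list of z values of
    the points that fall into that grid."""
    height_map = {}
    for x, y, z in points_world:
        key = (int(np.floor(y / GRID)), int(np.floor(x / GRID)))
        height_map.setdefault(key, []).append(z)
    return height_map

# Two points fall in grid (0, 0) and one in grid (0, 2).
pts = np.array([[0.05, 0.05, 0.1], [0.15, 0.10, 0.3], [0.45, 0.05, 0.2]])
hmap = build_height_map(pts)
```

Each grid's list of height values is exactly what the later per-grid statistics (maximum, minimum and mean height) operate on.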
The embodiment of the invention provides a point cloud processing method. Fig. 5 is a flowchart of a point cloud processing method according to another embodiment of the present invention. As shown in fig. 5, based on the above embodiment, the determining a height value correction parameter of a plurality of frames of three-dimensional point clouds according to a preprocessed plurality of frames of three-dimensional point clouds and the preset correction model may include:
and S501, inputting the preprocessed three-dimensional point cloud into an optimization solving model.
Optionally, the function equation of the optimization solution model is as follows:

min_{A,B,C} Σ_s Σ_{k=1}^{K} ( z_k + a_{i_k}·x_k + b_{i_k}·y_k + c_{i_k} − z̄_s )²    (1)

In the formula, i represents the image frame number corresponding to the three-dimensional point cloud, and j represents the number of the three-dimensional point in the three-dimensional point cloud image; (x_j^i, y_j^i, z_j^i) represents the three-dimensional coordinate values of the j-th three-dimensional point in the i-th frame of three-dimensional point cloud image; m represents the total number of three-dimensional points in the i-th frame of three-dimensional point cloud image; and n represents the total number of three-dimensional point cloud images, namely the total number of accumulated multi-frame three-dimensional point cloud images.

Δz_j^i represents the height value correction quantity of the j-th three-dimensional point in the i-th frame of three-dimensional point cloud image, with the expression:

Δz_j^i = a_i·x_j^i + b_i·y_j^i + c_i

wherein a_i represents a first correction coefficient, b_i represents a second correction coefficient, and c_i represents a third correction coefficient. s denotes the number of the grid in the height map, and z̄_s represents the mean value of the corrected height values of the three-dimensional points accumulated at the s-th grid of the height map, with the expression:

z̄_s = (1/K)·Σ_{k=1}^{K} ( z_k + a_{i_k}·x_k + b_{i_k}·y_k + c_{i_k} )

wherein K represents the total number of three-dimensional points accumulated in a grid whose difference is below the difference threshold and whose distance from the ground area is smaller than the preset distance, and i_k represents the image frame number of the three-dimensional point cloud to which the k-th point belongs. A is used to represent the first correction coefficients of the multiple frames of three-dimensional point clouds, B the second correction coefficients, and C the third correction coefficients, with A = [a_1 ... a_i ... a_n]^T, B = [b_1 ... b_i ... b_n]^T, C = [c_1 ... c_i ... c_n]^T.
And S502, solving the optimized solving model by adopting a linear least square method to obtain a correction coefficient.
Specifically, the coordinate values (x_j^i, y_j^i, z_j^i) are substituted into the above formula (1), and formula (1) is solved by the linear least square method; the (a_i, b_i, c_i) that minimizes formula (1) can thereby be obtained, and (a_i, b_i, c_i) is the correction coefficient for correcting the i-th frame image.

Similarly, when the three-dimensional points of another frame are substituted into the above formula (1), the correction coefficient for correcting that frame image can be obtained; the correction coefficients for all frame images are A = [a_1 ... a_i ... a_n]^T, B = [b_1 ... b_i ... b_n]^T, C = [c_1 ... c_i ... c_n]^T.
Optionally, all three-dimensional points of all frame images may be input into the above formula (1), a linear equation set is established, and the correction coefficients of all frame images may be obtained simultaneously by solving the linear equation set in parallel. The parallel computation can improve the computation efficiency and well meet the real-time requirement of the vehicle-mounted system.
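Formula (1) couples all frames through the grid means. As a simplified, hypothetical sketch of the least-squares step, the code below fits only a single frame's plane correction dz = a·x + b·y + c against fixed target heights (the grid means are assumed known and set to zero here); the full solver would instead stack all frames into one linear system and solve them jointly. Function name and setup are illustrative, not the patent's implementation.

```python
import numpy as np

def solve_frame_correction(points, target_heights):
    """Least-squares fit of a per-frame plane correction
    dz = a*x + b*y + c so that z + dz matches the target heights."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    M = np.column_stack([x, y, np.ones_like(x)])   # rows [x_j, y_j, 1]
    r = target_heights - z                         # desired dz per point
    (a, b, c), *_ = np.linalg.lstsq(M, r, rcond=None)
    return a, b, c

# Points lying on the tilted plane z = 0.1*x + 0.2 should be flattened to
# z = 0, so the recovered correction is a = -0.1, b = 0, c = -0.2.
xs = np.array([0.0, 1.0, 2.0, 3.0])
pts = np.column_stack([xs, np.zeros(4), 0.1 * xs + 0.2])
a, b, c = solve_frame_correction(pts, np.zeros(4))
```

Stacking such per-point rows for every frame into one large design matrix is what makes the parallel, simultaneous solution mentioned above possible.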
And 503, determining a height value correction parameter of the multi-frame three-dimensional point cloud according to the correction coefficient.
Optionally, the correction coefficients include a first correction coefficient, a second correction coefficient and a third correction coefficient. Determining the height value correction parameter of the multiple frames of three-dimensional point clouds according to the correction coefficients includes: calculating the height value correction parameter of a frame of three-dimensional point cloud according to the first, second and third correction coefficients of that frame and the three-dimensional coordinate values of that frame. Specifically, the height value correction parameter of the frame of three-dimensional point cloud may be calculated according to the following function equation:
in the formula, a
i,b
i,c
iRespectively a first correction coefficient, a second correction coefficient and a third correction coefficient of the ith frame point cloud image; (a)
i,b
i,c
i) A correction coefficient for correcting the i-th frame image,
and d represents a height value correction parameter for correcting all three-dimensional points in the ith frame of image.
Will (a)
i,b
i,c
i) And
after the formula (2) is substituted, the height value correction parameter d for correcting all three-dimensional points in the ith frame image can be obtained by solving.
Optionally, after the height value correction parameter d for correcting all three-dimensional points in the ith frame image is obtained through solving, the height values of all three-dimensional points in the ith frame image can be corrected according to the height value correction parameter d for correcting all three-dimensional points in the ith frame image. For example, assume that the coordinate value of the jth three-dimensional point in the ith frame image before correction is
The coordinate value of the jth three-dimensional point in the corrected ith frame image is
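Applying the per-frame height value correction to every point of a frame might be sketched as below. That the correction takes the planar form a·x + b·y + c and is added to the height value are assumptions of this illustration, as is the function name.

```python
import numpy as np

def correct_frame(points, a, b, c):
    """Apply a per-frame height correction to an Nx3 point array:
    each point (x, y, z) becomes (x, y, z + a*x + b*y + c)."""
    corrected = points.copy()
    corrected[:, 2] += a * points[:, 0] + b * points[:, 1] + c
    return corrected

pts = np.array([[1.0, 2.0, 0.5], [0.0, 0.0, 0.1]])
out = correct_frame(pts, a=-0.1, b=0.0, c=-0.2)
```

Only the z coordinates change; the planar positions of the points, and hence their grid assignments, are untouched.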
Fig. 6 is an effect diagram before correction of the ground point cloud.
FIG. 7 is a diagram illustrating the effect of the method of the embodiment of the present invention after correcting the ground point cloud.
As shown in fig. 6 and 7, the areas formed by the black dots in the drawings are the ground areas. It can be seen that the ground area identified in fig. 6 exhibits large jitter and a wide spread along the Z axis, while the ground area identified in fig. 7 is smoother and more compact, with a narrow spread along the Z axis. The ground area corrected by the method of the embodiment of the present invention is therefore identified more accurately.
The embodiment of the invention provides a point cloud processing system. Fig. 8 is a block diagram of a processing system for point cloud according to an embodiment of the present invention, and as shown in fig. 8, the processing system 80 for point cloud includes a detection device 81, a memory 82, and a processor 83. The detection device 81 is configured to detect a multi-frame three-dimensional point cloud including a target region; the memory 82 is used to store program codes; a processor 83, calling program code, which when executed, is configured to: acquiring a multi-frame three-dimensional point cloud containing a target area; preprocessing a plurality of frames of three-dimensional point clouds; determining a height value correction parameter of the multi-frame three-dimensional point cloud according to the pre-processed multi-frame three-dimensional point cloud and a preset correction model; and correcting the height value of the multi-frame three-dimensional point cloud according to the height value correction parameter so as to correct the identification of the target area. The detection device 81 in this embodiment may be the detection device 22 in fig. 2.
Optionally, when the processor 83 preprocesses the multi-frame three-dimensional point cloud, the processor is specifically configured to: and removing noise points in the multi-frame three-dimensional point cloud, wherein the noise points refer to three-dimensional points which do not belong to the target area.
Optionally, when the processor 83 removes noise points in the multi-frame three-dimensional point cloud, the processor is specifically configured to: determining a height map according to the height values of the multi-frame three-dimensional point cloud, wherein the height map comprises a plurality of grids; determining a rough target area in the height map according to a preset target area height value; calculating the difference value between the maximum height value and the minimum height value in the same grid in which the target area is located; determining grids of which the difference is lower than the difference threshold value and the distance between the difference and the preset height value of the target area is smaller than the preset distance; and removing the three-dimensional point cloud outside the grid, wherein the difference value is lower than the difference value threshold value and the distance between the three-dimensional point cloud and the preset target area height value is smaller than the preset distance.
Optionally, when acquiring the multi-frame three-dimensional point cloud, the processor 83 is specifically configured to perform: acquiring a multi-frame three-dimensional point cloud under a local coordinate system, wherein the local coordinate system is a coordinate system whose origin is the carrier on which the detection device for detecting the multi-frame three-dimensional point cloud is mounted; and when determining the height map according to the height values of the multi-frame three-dimensional point cloud, the processor 83 is specifically configured to perform: determining a target plane under a world coordinate system; projecting the multi-frame three-dimensional point cloud under the local coordinate system onto the target plane according to a conversion relation between the local coordinate system and the world coordinate system; and determining a height map according to the height values of the multi-frame three-dimensional point cloud projected onto the target plane.
Optionally, when the processor 83 projects the multi-frame three-dimensional point cloud under the local coordinate system onto the target plane according to the conversion relation between the local coordinate system and the world coordinate system, the processor is specifically configured to perform: dividing the target plane into a plurality of grids of equal size, wherein each grid has a grid number; calculating, according to the conversion relation between the local coordinate system and the world coordinate system, the grid numbers corresponding to the multi-frame three-dimensional point cloud in the target plane; calculating, according to the conversion relation, the height values corresponding to the multi-frame three-dimensional point cloud in the target plane; and determining a height map according to the grid numbers and the height values of the multi-frame three-dimensional point cloud in the target plane.
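The projection and grid numbering steps above can be sketched as follows. This is an illustrative sketch only: the row-major numbering scheme, plane origin, cell size, and grid width are assumptions, and the conversion relation is represented as a homogeneous 4x4 transform:

```python
import numpy as np

def grid_number_and_height(points_local, T_local_to_world,
                           origin=(-50.0, -50.0), cell=0.5, cols=200):
    """Project points from the local (carrier) frame onto the world-frame
    target plane; return each point's grid number and height value.

    points_local: (N, 3) points in the local coordinate system.
    T_local_to_world: (4, 4) homogeneous transform, local -> world.
    """
    # Convert to homogeneous coordinates and apply the transform.
    homo = np.hstack([points_local, np.ones((len(points_local), 1))])
    world = (T_local_to_world @ homo.T).T[:, :3]
    # Grid index on the target plane (world x-y), numbered row-major.
    col = np.floor((world[:, 0] - origin[0]) / cell).astype(int)
    row = np.floor((world[:, 1] - origin[1]) / cell).astype(int)
    grid_no = row * cols + col
    height = world[:, 2]  # height value of the point above the plane
    return grid_no, height
```

The height map is then obtained by accumulating, per grid number, the height values of all points that fall into that grid.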
Optionally, the preset correction model includes an optimization solution model; when determining the height value correction parameter of the multi-frame three-dimensional point cloud according to the preprocessed multi-frame three-dimensional point cloud and the preset correction model, the processor 83 is specifically configured to perform: inputting the preprocessed three-dimensional point cloud into the optimization solution model; solving the optimization solution model by a linear least squares method to obtain correction coefficients; and determining the height value correction parameter of the multi-frame three-dimensional point cloud according to the correction coefficients.
Optionally, the correction coefficients include a first correction coefficient, a second correction coefficient, and a third correction coefficient; when determining the height value correction parameter of the multi-frame three-dimensional point cloud according to the correction coefficients, the processor 83 is specifically configured to perform: for each frame of the multi-frame three-dimensional point cloud, calculating a height value correction parameter of the frame according to the first correction coefficient, the second correction coefficient, and the third correction coefficient of the frame and the three-dimensional coordinate values of the frame.
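The linear least squares solution and the per-point correction described above can be illustrated as follows. This sketch assumes a plane model z ≈ a·x + b·y + c for the three correction coefficients, which is one natural reading of the optimization solution model, not necessarily the claimed one; the function names are hypothetical:

```python
import numpy as np

def fit_correction_coefficients(ground_points):
    """Solve for three correction coefficients (a, b, c) of the assumed
    plane model z ~= a*x + b*y + c by linear least squares."""
    A = np.column_stack([ground_points[:, 0], ground_points[:, 1],
                         np.ones(len(ground_points))])
    coeffs, *_ = np.linalg.lstsq(A, ground_points[:, 2], rcond=None)
    return coeffs  # first, second, third correction coefficients

def correct_heights(points, coeffs):
    """The height value correction parameter of each point is a*x + b*y + c;
    subtracting it flattens the fitted target plane onto z = 0."""
    a, b, c = coeffs
    correction = a * points[:, 0] + b * points[:, 1] + c
    corrected = points.copy()
    corrected[:, 2] -= correction
    return corrected
```

Under this model, points of a tilted or offset ground plane are pulled onto a common reference height, which is what allows the subsequent target-area identification to be corrected.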
Optionally, when the processor 83 acquires the multi-frame three-dimensional point cloud under the local coordinate system, the processor is specifically configured to perform: acquiring a multi-frame three-dimensional point cloud containing the target area as detected by the detection device; and converting the multi-frame three-dimensional point cloud detected by the detection device into the local coordinate system according to a conversion relation between the coordinate system of the detection device and the local coordinate system.
Optionally, the detection device comprises at least one of: a binocular stereo camera, a time-of-flight (TOF) camera, and a lidar.
Optionally, the target area is a ground area.
The specific principle and implementation of the point cloud processing system provided by the embodiment of the invention are similar to those of the above embodiments, and are not described herein again.
The embodiment acquires a multi-frame three-dimensional point cloud containing a target area; preprocessing a plurality of frames of three-dimensional point clouds; determining a height value correction parameter of the multi-frame three-dimensional point cloud according to the pre-processed multi-frame three-dimensional point cloud and a preset correction model; and correcting the height value of the multi-frame three-dimensional point cloud according to the height value correction parameter so as to correct the identification of the target area. The correction model can determine a height value correction parameter for correcting the height values of the multi-frame three-dimensional point clouds, so that the recognition accuracy of the target area can be improved after the height values of the multi-frame three-dimensional point clouds are corrected according to the height value correction parameter.
The embodiment of the invention provides a movable platform. FIG. 9 is a block diagram of a movable platform according to an embodiment of the present invention. This embodiment provides a movable platform on the basis of the technical solution provided by the embodiment shown in fig. 8. As shown in fig. 9, the movable platform 90 includes: a fuselage 91, a power system 92, and a point cloud processing system 93. The point cloud processing system 93 in the present embodiment may be the point cloud processing system 80 provided in the above-described embodiment.
The specific principle and implementation of the point cloud processing system provided by the embodiment of the invention are similar to those of the embodiment shown in fig. 8, and are not described herein again.
The embodiment obtains a multi-frame three-dimensional point cloud containing a target area; preprocessing a plurality of frames of three-dimensional point clouds; determining a height value correction parameter of the multi-frame three-dimensional point cloud according to the pre-processed multi-frame three-dimensional point cloud and a preset correction model; and correcting the height value of the multi-frame three-dimensional point cloud according to the height value correction parameter so as to correct the identification of the target area. The correction model can determine a height value correction parameter for correcting the height values of the multi-frame three-dimensional point clouds, so that the recognition accuracy of the target area can be improved after the height values of the multi-frame three-dimensional point clouds are corrected according to the height value correction parameter.
In addition, the present embodiment also provides a computer-readable storage medium on which a computer program is stored, the computer program being executed by a processor to implement the point cloud processing method of the above embodiment.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, a division of a unit is merely a logical division, and an actual implementation may have another division, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit.
The integrated unit implemented in the form of a software functional unit may be stored in a computer-readable storage medium. The software functional unit is stored in a storage medium and includes several instructions to enable a computer device (which may be a personal computer, a server, or a network device) or a processor to execute some of the steps of the methods according to the embodiments of the present invention. The aforementioned storage medium includes: various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
It is obvious to those skilled in the art that, for convenience and simplicity of description, the foregoing division of the functional modules is merely used as an example, and in practical applications, the above function distribution may be performed by different functional modules according to needs, that is, the internal structure of the device is divided into different functional modules to perform all or part of the above described functions. For the specific working process of the device described above, reference may be made to the corresponding process in the foregoing method embodiment, which is not described herein again.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; while the invention has been described in detail and with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.