WO2020237516A1 - Point cloud processing method, device and computer-readable storage medium

Info

Publication number
WO2020237516A1
WO2020237516A1 · PCT/CN2019/088931
Authority
WO
WIPO (PCT)
Prior art keywords
point cloud
frame
dimensional point
coordinate system
height
Application number
PCT/CN2019/088931
Other languages
English (en)
French (fr)
Inventor
郑杨杨
刘晓洋
张晓炜
Original Assignee
深圳市大疆创新科技有限公司
Application filed by 深圳市大疆创新科技有限公司
Priority to PCT/CN2019/088931
Priority to CN201980012171.7A
Publication of WO2020237516A1


Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/02 Systems using the reflection of electromagnetic waves other than radio waves
    • G01S17/06 Systems determining position data of a target
    • G01S17/08 Systems determining position data of a target for measuring distance only
    • G01S17/88 Lidar systems specially adapted for specific applications
    • G01S17/93 Lidar systems specially adapted for specific applications for anti-collision purposes
    • G01S17/931 Lidar systems specially adapted for specific applications for anti-collision purposes of land vehicles
    • G01S7/00 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S7/497 Means for monitoring or calibrating

Definitions

  • the embodiments of the present invention relate to the field of automatic driving, and in particular, to a point cloud processing method, device, and computer-readable storage medium.
  • Lidar is one of the main sensors used in the field of 3D scene reconstruction. It can generate a sparse point cloud of the 3D scene in real time according to the principle of light reflection, and then reconstruct the 3D scene at the current position by fusing multiple frames of sparse point clouds.
  • the embodiments of the present invention provide a point cloud processing method, device, and computer-readable storage medium, so as to improve the recognition accuracy of a target area and reconstruct a high-quality three-dimensional scene.
  • the first aspect of the embodiments of the present invention is to provide a point cloud processing method, including: acquiring a multi-frame three-dimensional point cloud containing a target area; preprocessing the multi-frame three-dimensional point cloud; determining a height value correction parameter of the multi-frame three-dimensional point cloud according to the preprocessed multi-frame three-dimensional point cloud and a preset correction model; and
  • correcting the height values of the multi-frame three-dimensional point cloud according to the height value correction parameter, so as to correct the recognition of the target area.
  • the second aspect of the embodiments of the present invention is to provide a point cloud processing system, including: a detection device, a memory, and a processor;
  • the detection device is used to detect a multi-frame three-dimensional point cloud containing a target area
  • the memory is used to store program code; the processor calls the program code, and when the program code is executed, it is used to perform the following operations: acquire the multi-frame three-dimensional point cloud containing the target area; preprocess the multi-frame three-dimensional point cloud; determine a height value correction parameter of the multi-frame three-dimensional point cloud according to the preprocessed multi-frame three-dimensional point cloud and a preset correction model; and
  • correct the height values of the multi-frame three-dimensional point cloud according to the height value correction parameter, so as to correct the recognition of the target area.
  • the third aspect of the embodiments of the present invention is to provide a movable platform including: a fuselage, a power system, and the point cloud processing system described in the second aspect.
  • the fourth aspect of the embodiments of the present invention is to provide a computer-readable storage medium having a computer program stored thereon, and the computer program is executed by a processor to implement the method described in the first aspect.
  • the point cloud processing method, device, and computer-readable storage medium provided in this embodiment acquire multiple frames of three-dimensional point clouds containing a target area; preprocess the multiple frames of three-dimensional point clouds; determine the height value correction parameter of the multi-frame 3D point cloud according to the preprocessed multi-frame 3D point cloud and a preset correction model; and correct the height values of the multi-frame 3D point cloud according to the height value correction parameter, so as to correct the recognition of the target area. Since the correction model can determine the height value correction parameter for correcting the height values of the multi-frame 3D point cloud, the recognition accuracy of the target area can be improved after the height values of the multi-frame 3D point cloud are corrected according to the height value correction parameter.
  • FIG. 1 is a flowchart of a point cloud processing method provided by an embodiment of the present invention
  • Figure 2 is a schematic diagram of an application scenario provided by an embodiment of the present invention.
  • FIG. 3 is a flowchart of a point cloud processing method provided by another embodiment of the present invention.
  • FIG. 4 is a flowchart of a point cloud processing method provided by another embodiment of the present invention.
  • FIG. 5 is a flowchart of a point cloud processing method provided by another embodiment of the present invention.
  • Figure 6 is an effect diagram before the ground point cloud is corrected
  • FIG. 7 is an effect diagram of the ground point cloud corrected by the method of the embodiment of the present invention.
  • FIG. 8 is a structural diagram of a point cloud processing system provided by an embodiment of the present invention.
  • Fig. 9 is a structural diagram of a movable platform according to an embodiment of the present invention.
  • 80 point cloud processing system
  • 81 detection equipment
  • 82 memory
  • 83 processor
  • 90 movable platform
  • 91 fuselage
  • 92 power system
  • 93 point cloud processing system.
  • when a component is said to be "fixed to" another component, it can be directly on the other component, or an intervening component may also be present. When a component is considered to be "connected" to another component, it can be directly connected to the other component, or an intervening component may be present at the same time.
  • the embodiment of the present invention provides a point cloud processing method.
  • the point cloud processing method provided by the embodiments of the present invention can be applied to vehicles, such as unmanned vehicles, or vehicles equipped with Advanced Driver Assistance Systems (ADAS). It is understandable that the point cloud processing method can also be applied to drones, such as drones equipped with detection equipment to obtain point cloud data.
  • the point cloud processing method provided by the embodiment of the present invention can be applied to real-time ground 3D reconstruction. The significance of the ground 3D reconstruction is that, since the point cloud scanned by the lidar contains mostly ground points, these ground points affect the subsequent classification, recognition, and tracking of obstacle point clouds.
  • the area in front of a vehicle includes ground areas, other vehicles, buildings, trees, fences, pedestrians, etc.
  • the bottom of the wheel of the front vehicle is in contact with the ground.
  • the on-board lidar moves with the vehicle. Due to vehicle positioning errors, the accumulated multi-frame point cloud of the same surface shows layering along the z-axis after fusion, which makes the reconstruction accuracy unsatisfactory. It is then easy to misidentify the ground points at the bottom of the vehicle ahead and/or at the bottom of a traffic sign as three-dimensional points of the vehicle ahead or of the traffic sign, or to miss the three-dimensional points at the bottom of the vehicle ahead and/or at the bottom of the traffic sign. Therefore, when recognizing vehicles, traffic signs, buildings, trees, fences, pedestrians, etc., the recognition accuracy suffers.
  • Fig. 1 is a flowchart of a point cloud processing method provided by an embodiment of the present invention. As shown in Figure 1, the method in this embodiment may include:
  • Step S101 Obtain a multi-frame three-dimensional point cloud containing a target area.
  • multiple frames of three-dimensional point clouds containing the target area can be obtained directly by acquiring multiple frames of three-dimensional point clouds in a local coordinate system.
  • the local coordinate system is a coordinate system established with a carrier equipped with a detection device that detects multiple frames of three-dimensional point clouds as the origin, for example, a coordinate system established with a vehicle as the origin.
  • the carrier may be a vehicle or a drone, which is not specifically limited in the present invention.
  • acquiring a multi-frame 3D point cloud containing the target area includes: acquiring a multi-frame 3D point cloud containing the target area in the detection device coordinate system; and, according to the conversion relationship between the detection device coordinate system and the local coordinate system, converting the three-dimensional point cloud detected by the detection device to the local coordinate system.
  • acquiring a multi-frame three-dimensional point cloud containing the target area in the detection device coordinate system includes: acquiring a three-dimensional point cloud containing the target area around the carrier detected by the detection device mounted on the carrier.
  • a detection device 22 is provided on the vehicle 21, and the detection device 22 may specifically be a binocular stereo camera, a TOF camera and/or a lidar.
  • the traveling direction of the vehicle 21 is the direction indicated by the arrow in FIG. 2, and the detection device 22 detects the three-dimensional point cloud of the surrounding environment information of the vehicle 21 in real time.
  • the detection device 22 takes a lidar as an example. When a laser beam emitted by the lidar irradiates the surface of an object, the surface of the object reflects the laser beam, and the lidar can determine the position and distance of the object relative to the lidar according to the laser reflected from the surface of the object.
  • if the laser beam emitted by the lidar scans along a certain trajectory, such as a 360-degree rotating scan, a large number of laser points are obtained, and thus the laser point cloud data of the object can be formed, that is, a three-dimensional point cloud.
  • the three-dimensional point cloud acquired in step S101 is continuous N frames of sparse point cloud data accumulated in the current time window.
  • the target area may be an object with a flat surface.
  • the target area is a ground area as an example for description, but it is not limited to the ground area.
  • the target area may also be an object such as a wall surface or a desktop, which is not specifically limited in the present invention.
  • the method of the embodiment of the present invention can also be applied to the recognition of objects with flat surfaces such as walls or desktops.
  • Step S102 Preprocess the multi-frame three-dimensional point cloud.
  • since the multi-frame 3D point cloud contains non-target-area point clouds or noise points, it is necessary to preprocess the multi-frame 3D point cloud to filter out the non-target-area point clouds or noise points.
  • preprocessing the multi-frame three-dimensional point cloud includes: removing noise points in the multi-frame three-dimensional point cloud, where the removed noise points refer to three-dimensional points that do not belong to the target area.
  • Step S103 Determine the height value correction parameters of the multi-frame 3D point cloud according to the preprocessed multi-frame 3D point cloud and the preset correction model.
  • the preprocessed multi-frame 3D point cloud is input into the preset correction model, and the preset correction model will output the height value correction parameters of the multi-frame 3D point cloud.
  • Step S104 Correct the height values of the multi-frame three-dimensional point clouds according to the height value correction parameters to correct the recognition of the target area.
  • suppose the three-dimensional coordinates of a certain three-dimensional point in the three-dimensional point cloud are (x_i, y_i, z_i), where x_i, y_i, and z_i represent the coordinate values of the three-dimensional point along the X, Y, and Z axes of the local coordinate system.
  • the height value refers to the coordinate value of the three-dimensional point in the Z direction of the local coordinate system.
  • the local coordinate system refers to a coordinate system established with a carrier equipped with a detection device for detecting multiple frames of three-dimensional point clouds as the origin, for example, a coordinate system established with a vehicle as the origin.
  • the height value of the multi-frame 3D point cloud is corrected by the height value correction parameter.
  • the correction can correct the recognition of the ground area, improve the recognition accuracy of the ground, and realize the three-dimensional reconstruction of the ground.
  • in this way, the ground points at the bottom of the front vehicle 23 and/or at the bottom of a traffic sign can be kept from being identified as three-dimensional points of a non-ground area, that is, as three-dimensional points of the front vehicle 23 or of the traffic sign.
  • according to the height value correction parameter, the height values of the multi-frame 3D point cloud are corrected, which corrects the recognition of the target area. Since the correction model can determine the height value correction parameter for correcting the height values of the multi-frame 3D point cloud, the recognition accuracy of the target area can be improved after the height values are corrected according to the height value correction parameter.
  • FIG. 3 is a flowchart of a point cloud processing method provided by another embodiment of the present invention.
  • the method in this embodiment may preprocess the multi-frame 3D point cloud by projecting the 3D point cloud scanned by the lidar to the world coordinate system.
  • the specific steps are as follows:
  • Step S301 Determine a height map according to the height values of multiple frames of three-dimensional point clouds, where the determined height map includes multiple grids.
  • determining the height map according to the height values of multiple frames of three-dimensional point clouds includes: determining a target plane in the world coordinate system; according to the conversion relationship between the local coordinate system and the world coordinate system, projecting the multi-frame 3D point cloud to the target plane; and determining the height map according to the height values of the multiple frames of 3D point cloud projected onto the target plane.
  • the right-hand coordinate system with the Z axis facing vertically downward is the world coordinate system
  • the target plane may be an XOY plane divided into a plurality of square grids of the same size in the world coordinate system.
  • a local coordinate system with the Z axis vertically downward is established with the vehicle as the origin, so that the X, Y, and Z axes of the local coordinate system are aligned with the X, Y, and Z axes of the world coordinate system respectively.
  • suppose n frames of sparse point clouds are accumulated; the height map of the point cloud can then be obtained by projecting the accumulated n frames of point clouds onto the XOY plane of the world coordinate system.
  • each three-dimensional point in the three-dimensional point cloud in the local coordinate system is projected to the world coordinate system. For example, let point j denote one three-dimensional point in the point cloud; its position in the local coordinate system is recorded as p_j^l, and its position after conversion to the world coordinate system is p_j^w.
  • the conversion relationship between the local coordinate system and the world coordinate system is the rotation matrix R.
  • the three-dimensional position of the lidar gives the translation vector t, so p_j^w = R·p_j^l + t.
  • in this way, the projection point of point j in the world coordinate system can be calculated.
  • likewise, the projection points on the target plane of the other three-dimensional points in the three-dimensional point cloud besides point j can be determined.
  • the height map is determined according to the height values of point j and the other three-dimensional points projected on the target plane.
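The projection above can be sketched in a few lines of numpy. The identity rotation and the 1.5 m lidar height in the toy example are illustrative assumptions, not values from the patent (the Z axis points down, matching the world coordinate system convention described above).

```python
import numpy as np

def project_to_world(points_local, R, t):
    """Project an (N, 3) array of local-frame points into the world frame.

    R is the 3x3 rotation from the local (vehicle) coordinate system to
    the world coordinate system and t is the lidar's translation, so
    p_w = R * p_l + t for every point.
    """
    return points_local @ R.T + t

# Toy example: identity rotation, lidar mounted 1.5 m above the world
# origin (Z points down, so "up" is negative).
R = np.eye(3)
t = np.array([0.0, 0.0, -1.5])
pts = np.array([[1.0, 2.0, 0.0], [3.0, 1.0, 0.2]])
world = project_to_world(pts, R, t)
```

The height map then only needs the (x, y) columns of `world` for gridding and the z column for the height values.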
  • Step S302 Determine a rough target area in the height map according to a preset height value of the target area.
  • the preset height value of the target area may be the preset height value of the ground area.
  • the height of the vehicle in the local coordinate system may be used to estimate a preliminary ground area height value.
  • for example, assuming that the maximum height coordinate of the vehicle is z_1 and the overall height of the vehicle is 1.5 m, the preliminary ground area height value can be obtained as z_1 − 1.5. Based on this result, a rough ground area can be determined in the above height map.
  • the target area determined here is only the grid range of a rough target area divided from the height map. It is not accurate and may contain 3D points of other objects, so these 3D points need to be further filtered out through subsequent processing.
  • Step S303 Calculate the difference between the maximum height value and the minimum height value in the same grid in the grid where the approximate target area is located.
  • suppose a grid contains w three-dimensional points; the maximum height value among the height values of the w three-dimensional points is w_h, and the minimum height value is w_l, so the difference is w_h − w_l.
  • Step S304 Determine a grid whose difference value is lower than the difference value threshold and the distance from the preset target area height value is less than the preset distance.
  • the grids satisfying these conditions are marked as belonging to the target area.
  • the specific marking method can refer to the marking method in the prior art, for example, using different colors for marking, which is not specifically limited in the present invention.
  • Step 305 Remove the three-dimensional point cloud outside the grid whose difference value is lower than the difference value threshold and whose distance from the preset target area height value is less than the preset distance.
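Steps S301–S305 can be sketched as follows; the cell size matches the 0.2 m grid mentioned later, while `diff_thresh`, `ground_z`, and `dist_thresh` are illustrative placeholder thresholds, not values from the patent.

```python
from collections import defaultdict
import numpy as np

def filter_ground_grids(points, cell=0.2, diff_thresh=0.1,
                        ground_z=0.0, dist_thresh=0.3):
    """Keep only the points lying in grids that look like ground.

    A grid survives when (max z - min z) inside it is below diff_thresh
    and its mean height is within dist_thresh of the expected ground
    height; all points outside surviving grids are removed.
    """
    # Bucket every point into its 2D grid cell on the target plane.
    cells = defaultdict(list)
    for idx, (x, y) in enumerate(points[:, :2]):
        cells[(int(np.floor(x / cell)), int(np.floor(y / cell)))].append(idx)
    kept = []
    for idxs in cells.values():
        z = points[idxs, 2]
        # Flat cell (small max-min difference) near the expected ground height.
        if z.max() - z.min() < diff_thresh and abs(z.mean() - ground_z) < dist_thresh:
            kept.extend(idxs)
    return points[sorted(kept)]
```

A cell holding both ground returns and the side of a vehicle has a large max-min spread, so its points are dropped, which is exactly the filtering intent of steps S303–S305.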
  • FIG. 4 is a flowchart of a point cloud processing method provided by another embodiment of the present invention.
  • projecting multiple frames of three-dimensional point clouds in the local coordinate system to the target plane may include:
  • Step S401 Divide the target plane into multiple grids of equal size, and each grid has a grid number.
  • Step S402 According to the conversion relationship between the local coordinate system and the world coordinate system, calculate the corresponding grid numbers of the multi-frame three-dimensional point cloud in the local coordinate system in the target plane.
  • for example, the XOY plane in the local coordinate system is divided into 0.2 m × 0.2 m squares to obtain multiple grids, and these grids are numbered to obtain the grid numbers.
  • the XOY plane in the world coordinate system is also divided into 0.2 m × 0.2 m squares to obtain multiple grids, and these grids are numbered as well.
  • the x-axis and y-axis coordinates corresponding to a grid are converted to the world coordinate system according to the conversion relationship between the local coordinate system and the world coordinate system, yielding the x-axis and y-axis coordinates in the world coordinate system and hence the corresponding grid number in the target plane.
  • Step S403 According to the conversion relationship between the local coordinate system and the world coordinate system, calculate the corresponding height values of the multi-frame three-dimensional point clouds in the local coordinate system in the target plane;
  • the corresponding height values of the multi-frame three-dimensional point cloud in the local coordinate system in the target plane can also be obtained according to the example introduction of the above step S402.
  • alternatively, step S403 may be executed first and then step S402; or steps S402 and S403 may be considered to be executed in parallel, with no fixed execution order between them.
  • Step S404 Determine a height map according to the grid numbers corresponding to the multiple frames of three-dimensional point clouds in the local coordinate system in the target plane and the corresponding height values of the multiple frames of three-dimensional point clouds in the local coordinate system in the target plane.
  • after the grid numbers and the corresponding height values of the multi-frame 3D point cloud in the local coordinate system in the target plane are calculated, the grid numbers and height values can be associated, realizing the mapping of the three-dimensional points from the local coordinate system to the world coordinate system and obtaining the height map.
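A minimal sketch of the grid-number mapping described above; the row-major numbering and the `grid_width` of 500 columns are assumptions for illustration (the text only fixes the 0.2 m cell size).

```python
import numpy as np

def grid_number(x, y, cell=0.2, grid_width=500):
    """Map a world-frame (x, y) position to a single grid number.

    The target plane is divided into cell-by-cell squares (0.2 m here);
    the cell indices are flattened row-major into one number so that a
    grid number and a height value can be associated per point.
    """
    col = int(np.floor(x / cell))
    row = int(np.floor(y / cell))
    return row * grid_width + col
```

Associating each point's grid number with its z value is all the height map needs: a dictionary keyed by grid number whose values are the heights falling in that cell.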
  • FIG. 5 is a flowchart of a point cloud processing method provided by another embodiment of the present invention.
  • the preset correction model includes an optimization solution model; determining the height value correction parameters of the multi-frame 3D point cloud according to the preprocessed multi-frame 3D point cloud and the preset correction model may then include:
  • Step S501 Input the preprocessed three-dimensional point cloud into the optimization solution model.
  • the functional equation of the optimization solution model is as follows:
  • min Σ_{i=1..n} Σ_{j=1..m} ( z_i^j + Δz_i^j − h_s )²  (1)
  • i represents the frame number of the 3D point cloud image corresponding to the 3D point;
  • j represents the number of the 3D point in the 3D point cloud image;
  • m represents the total number of three-dimensional points in the i-th frame of the three-dimensional point cloud image;
  • n represents the total number of three-dimensional point cloud images, that is, the total number of accumulated frames of three-dimensional point cloud images;
  • Δz_i^j represents the height value correction amount of the j-th three-dimensional point in the i-th frame of the three-dimensional point cloud image, and its expression is Δz_i^j = a_i·x_i^j + b_i·y_i^j + c_i;
  • a_i represents the first correction coefficient;
  • b_i represents the second correction coefficient;
  • c_i represents the third correction coefficient;
  • s represents the number of the grid in the height map into which the point falls, and h_s represents the reference height of grid s.
  • Step S502 Solve the optimization solution model by using the linear least square method to obtain the correction coefficient.
  • all three-dimensional points of all frames can be substituted into the above formula (1) to establish a system of linear equations; by solving the system in parallel, the correction coefficients of all frames can be obtained at the same time.
  • parallel computing can improve computing efficiency and well meet the real-time requirements of vehicle-mounted systems.
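Step S502's linear least-squares solve can be sketched for a single frame. The per-point reference heights `h` (for example, reference heights gathered from the height map grids) are an assumed input here, and the patent solves all frames jointly rather than one frame at a time.

```python
import numpy as np

def solve_frame_coeffs(points, h):
    """Fit (a, b, c) so that z + (a*x + b*y + c) best matches h.

    This is an ordinary linear least-squares problem: the design matrix
    has columns [x, y, 1] and the target vector is h - z.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    A = np.column_stack([x, y, np.ones_like(x)])
    coeffs, *_ = np.linalg.lstsq(A, h - z, rcond=None)
    return coeffs  # (a, b, c) for this frame
```

Because the problem is linear in (a, b, c), stacking the per-frame blocks into one larger sparse system recovers the joint solve the text describes.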
  • Step 503 Determine the height value correction parameter of the multi-frame three-dimensional point cloud according to the correction coefficient.
  • the correction coefficients include a first correction coefficient, a second correction coefficient, and a third correction coefficient. Determining the height value correction parameter of the multi-frame 3D point cloud according to the correction coefficients includes: for each frame of the 3D point cloud, calculating the height value correction parameter of that frame according to the frame's first correction coefficient, second correction coefficient, third correction coefficient, and the three-dimensional coordinate values of the frame's points. Specifically, the calculation can be based on the following functional equation:
  • d = a_i·x_i^j + b_i·y_i^j + c_i  (2)
  • where a_i, b_i, and c_i are the first correction coefficient, the second correction coefficient, and the third correction coefficient; (a_i, b_i, c_i) are the correction coefficients of the i-th frame image, (x_i^j, y_i^j, z_i^j) represents the three-dimensional coordinate value of the j-th three-dimensional point in the i-th frame of the three-dimensional point cloud image, and d represents the height value correction parameter for correcting the three-dimensional points in the i-th frame of the image.
  • after the height value correction parameter d for correcting the three-dimensional points in the i-th frame image is obtained, the height values of the three-dimensional points in the i-th frame image are corrected with it. For example, suppose the coordinate value of the j-th three-dimensional point in the i-th frame image before correction is (x_i^j, y_i^j, z_i^j); then the coordinate value of the j-th three-dimensional point in the i-th frame image after correction is (x_i^j, y_i^j, z_i^j + d).
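Applying the per-point correction amount from the solved coefficients is then a one-line update of the z column; this sketch assumes the frame's points and its (a_i, b_i, c_i) are already available.

```python
import numpy as np

def correct_frame(points, a, b, c):
    """Correct the height values of one frame of 3D points.

    d = a*x + b*y + c is computed per point from the frame's correction
    coefficients; (x, y, z) becomes (x, y, z + d).
    """
    d = a * points[:, 0] + b * points[:, 1] + c
    out = points.copy()
    out[:, 2] = out[:, 2] + d
    return out
```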
  • Figure 6 is the effect diagram before the ground point cloud is corrected.
  • Fig. 7 is an effect diagram of ground point cloud after correction by the method of the embodiment of the present invention.
  • the area formed by the black dots in the figures is the ground area. It can be seen that the ground area identified in Figure 6 is jittery and has a wider distribution along the Z axis, while the ground area identified in Figure 7 is smoother and more compact, with a narrower distribution along the Z axis. Therefore, the ground area after correction by the method of the embodiment of the present invention is more accurate.
  • FIG. 8 is a structural diagram of a point cloud processing system provided by an embodiment of the present invention.
  • the point cloud processing system 80 includes a detection device 81, a memory 82 and a processor 83.
  • the detection device 81 is used to detect a multi-frame three-dimensional point cloud containing the target area; the memory 82 is used to store program code; the processor 83 is used to call the program code, and when the program code is executed, it is used to perform the following operations: acquire the multi-frame 3D point cloud containing the target area; preprocess the multi-frame 3D point cloud; determine the height value correction parameters of the multi-frame 3D point cloud according to the preprocessed multi-frame 3D point cloud and the preset correction model; and correct the height values of the multi-frame 3D point cloud according to the height value correction parameters, so as to correct the recognition of the target area.
  • the detection device 81 in this embodiment may be the detection device 22 in FIG. 2.
  • when the processor 83 preprocesses the multi-frame three-dimensional point cloud, it is specifically used to remove noise points in the multi-frame three-dimensional point cloud, where the noise points refer to three-dimensional points that do not belong to the target area.
  • when the processor 83 removes the noise points in the multi-frame three-dimensional point cloud, it is specifically used to: determine a height map according to the height values of the multi-frame three-dimensional point cloud, the height map including a plurality of grids; determine a rough target area in the height map according to a preset target area height value; calculate the difference between the maximum height value and the minimum height value in the same grid among the grids where the rough target area is located; determine the grids whose difference value is lower than the difference threshold and whose distance from the preset target area height value is less than the preset distance; and remove the 3D points outside the grids whose difference value is lower than the difference threshold and whose distance from the preset target area height value is less than the preset distance.
  • when the processor 83 acquires multiple frames of three-dimensional point clouds, it is specifically used to acquire multiple frames of three-dimensional point clouds in a local coordinate system, where the local coordinate system is a coordinate system established with a carrier equipped with a detection device that detects multiple frames of three-dimensional point clouds as the origin; when the processor 83 determines the height map according to the height values of the multiple frames of three-dimensional point clouds, it is specifically used to: determine a target plane in the world coordinate system; project the multi-frame 3D point cloud in the local coordinate system to the target plane according to the conversion relationship between the local coordinate system and the world coordinate system; and determine the height map according to the height values of the multi-frame 3D point cloud projected onto the target plane.
  • when the processor 83 projects the multi-frame 3D point cloud in the local coordinate system to the target plane according to the conversion relationship between the local coordinate system and the world coordinate system, it is specifically used to: divide the target plane into multiple grids of equal size, each grid having a grid number; calculate the corresponding grid numbers of the multi-frame 3D point cloud in the local coordinate system in the target plane according to the conversion relationship between the local coordinate system and the world coordinate system; calculate the corresponding height values of the multi-frame 3D point cloud in the local coordinate system in the target plane according to the same conversion relationship; and determine the height map according to the grid numbers and the corresponding height values of the multi-frame 3D point cloud in the local coordinate system in the target plane.
  • The preset correction model includes an optimization solution model. When the processor 83 determines the height-value correction parameters of the multiple frames according to the preprocessed multiple frames and the preset correction model, it is specifically configured to: input the preprocessed point cloud into the optimization solution model; solve the optimization solution model by linear least squares to obtain correction coefficients; and determine the height-value correction parameters of the multiple frames according to the correction coefficients.
  • The correction coefficients include a first correction coefficient, a second correction coefficient, and a third correction coefficient. When the processor 83 determines the height-value correction parameters according to the correction coefficients, it is specifically configured to: calculate the height-value correction parameter of each frame of the point cloud according to that frame's first, second, and third correction coefficients and the three-dimensional coordinate values of that frame.
  • When the processor 83 acquires the multiple frames in the local coordinate system, it is specifically configured to: acquire the multiple frames containing the target area detected by the detection device; and convert the detected frames into the local coordinate system according to the conversion relationship between the detection-device coordinate system and the local coordinate system.
  • the detection device includes at least one of the following: a binocular stereo camera, a TOF camera, and a lidar.
  • the target area is a ground area.
  • This embodiment acquires multiple frames of a three-dimensional point cloud containing the target area; preprocesses them; determines the height-value correction parameters of the multiple frames according to the preprocessed frames and a preset correction model; and corrects the height values of the multiple frames according to those parameters, so as to correct the recognition of the target area. Because the correction model can determine height-value correction parameters for correcting the height values of the multiple frames, correcting the height values according to those parameters improves the recognition accuracy of the target area.
  • An embodiment of the present invention provides a movable platform.
  • Fig. 9 is a structural diagram of a movable platform according to an embodiment of the present invention.
  • This embodiment provides a movable platform on the basis of the technical solution of the embodiment shown in FIG. 8.
  • The movable platform 90 includes a body 91, a power system 92, and a point cloud processing system 93.
  • The point cloud processing system 93 in this embodiment may be the point cloud processing system 80 provided in the foregoing embodiment.
  • This embodiment acquires multiple frames of a three-dimensional point cloud containing the target area; preprocesses them; determines height-value correction parameters according to the preprocessed frames and a preset correction model; and corrects the height values of the multiple frames according to those parameters, so as to correct the recognition of the target area. Because the correction model can determine the height-value correction parameters, correcting the height values according to them improves the recognition accuracy of the target area.
  • This embodiment also provides a computer-readable storage medium on which a computer program is stored; the computer program, when executed by a processor, implements the point cloud processing method of the foregoing embodiments.
  • The disclosed device and method may be implemented in other ways.
  • The device embodiments described above are merely illustrative. For example, the division into units is only a division by logical function; in actual implementation there may be other divisions, for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • The mutual coupling, direct coupling, or communication connection displayed or discussed may be indirect coupling or communication connection through interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
  • The units described as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • The functional units in the various embodiments of the present invention may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit.
  • The above-mentioned integrated unit may be implemented in the form of hardware, or in the form of hardware plus software functional units.
  • An integrated unit implemented in the form of a software functional unit may be stored in a computer-readable storage medium.
  • The above-mentioned software functional unit is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to execute some of the steps of the methods of the various embodiments of the present invention.
  • The aforementioned storage media include: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, and other media that can store program code.

Abstract

Embodiments of the present invention provide a point cloud processing method, a device, and a computer-readable storage medium. The method comprises: acquiring multiple frames of a three-dimensional point cloud containing a target area; preprocessing the multiple frames of the three-dimensional point cloud; determining height-value correction parameters of the multiple frames according to the preprocessed multiple frames and a preset correction model; and correcting the height values of the multiple frames according to the height-value correction parameters, so as to correct the recognition of the target area. By correcting the multi-frame three-dimensional point cloud, the embodiments of the present invention can resolve the surface blurring caused by timing differences between frames of a sparse point cloud, improve the recognition accuracy of the target area, and reconstruct a high-quality three-dimensional scene.

Description

Point cloud processing method, device, and computer-readable storage medium

Technical Field
Embodiments of the present invention relate to the field of autonomous driving, and in particular to a point cloud processing method, a device, and a computer-readable storage medium.
Background
Lidar is one of the main sensors used in three-dimensional scene reconstruction. Based on the principle of light reflection, it generates a sparse point cloud of a three-dimensional scene in real time, and the scene at the current position is then reconstructed by fusing multiple frames of the sparse point cloud.
Because a single frame of a laser point cloud is usually sparse, existing methods that reconstruct three-dimensional scenes from laser point clouds must accumulate and temporally fuse multiple frames collected over a period of time before a reasonably high-quality scene can be reconstructed. In an autonomous driving system, however, the vehicle-mounted lidar moves with the vehicle. Owing to vehicle localization errors, the accumulated frames, once fused, exhibit large jitter on what should be a single surface, which lowers the recognition accuracy of the target area and leaves the reconstruction accuracy unsatisfactory. In ground reconstruction in particular, this causes short obstacles to be missed or falsely detected.
Summary of the Invention
Embodiments of the present invention provide a point cloud processing method, a device, and a computer-readable storage medium, so as to improve the recognition accuracy of a target area and reconstruct a high-quality three-dimensional scene.
A first aspect of the embodiments of the present invention provides a point cloud processing method, including:
acquiring multiple frames of a three-dimensional point cloud containing a target area;
preprocessing the multiple frames of the three-dimensional point cloud;
determining height-value correction parameters of the multiple frames according to the preprocessed multiple frames and a preset correction model; and
correcting the height values of the multiple frames according to the height-value correction parameters, so as to correct the recognition of the target area.
A second aspect of the embodiments of the present invention provides a point cloud processing system, including a detection device, a memory, and a processor;
the detection device is configured to detect multiple frames of a three-dimensional point cloud containing a target area;
the memory is configured to store program code; the processor calls the program code and, when the program code is executed, performs the following operations:
acquiring multiple frames of a three-dimensional point cloud containing a target area;
preprocessing the multiple frames of the three-dimensional point cloud;
determining height-value correction parameters of the multiple frames according to the preprocessed multiple frames and a preset correction model; and
correcting the height values of the multiple frames according to the height-value correction parameters, so as to correct the recognition of the target area.
A third aspect of the embodiments of the present invention provides a movable platform, including a body, a power system, and the point cloud processing system of the second aspect.
A fourth aspect of the embodiments of the present invention provides a computer-readable storage medium on which a computer program is stored, the computer program being executed by a processor to implement the method of the first aspect.
The point cloud processing method, device, and computer-readable storage medium provided in these embodiments acquire multiple frames of a three-dimensional point cloud containing a target area, preprocess them, determine height-value correction parameters according to the preprocessed frames and a preset correction model, and correct the height values of the frames according to those parameters so as to correct the recognition of the target area. Because the correction model can determine the height-value correction parameters for correcting the height values of the multiple frames, correcting the height values according to those parameters improves the recognition accuracy of the target area.
Brief Description of the Drawings
To explain the technical solutions in the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below illustrate only some embodiments of the present invention; those of ordinary skill in the art may derive other drawings from them without creative effort.
Fig. 1 is a flowchart of a point cloud processing method according to an embodiment of the present invention;
Fig. 2 is a schematic diagram of an application scenario according to an embodiment of the present invention;
Fig. 3 is a flowchart of a point cloud processing method according to another embodiment of the present invention;
Fig. 4 is a flowchart of a point cloud processing method according to another embodiment of the present invention;
Fig. 5 is a flowchart of a point cloud processing method according to another embodiment of the present invention;
Fig. 6 shows the ground point cloud before correction;
Fig. 7 shows the ground point cloud after correction by the method of an embodiment of the present invention;
Fig. 8 is a structural diagram of a point cloud processing system according to an embodiment of the present invention;
Fig. 9 is a structural diagram of a movable platform according to an embodiment of the present invention.
Reference numerals:
21: vehicle; 22: detection device; 23: vehicle ahead;
80: point cloud processing system; 81: detection device; 82: memory; 83: processor;
90: movable platform; 91: body; 92: power system; 93: point cloud processing system.
Detailed Description
The technical solutions in the embodiments of the present invention are described clearly below with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the scope of protection of the present invention.
It should be noted that when a component is said to be "fixed to" another component, it may be directly on the other component, or an intermediate component may be present. When a component is said to be "connected to" another component, it may be directly connected to the other component, or an intermediate component may be present at the same time.
Unless otherwise defined, all technical and scientific terms used herein have the same meanings as commonly understood by those skilled in the technical field of the present invention. The terms used in this specification are intended only to describe specific embodiments, not to limit the present invention. The term "and/or" used herein includes any and all combinations of one or more of the associated listed items.
Some embodiments of the present invention are described in detail below with reference to the drawings. In the absence of conflict, the following embodiments and the features in them may be combined with one another.
An embodiment of the present invention provides a point cloud processing method. The method may be applied to a vehicle, for example a driverless vehicle or a vehicle equipped with an Advanced Driver Assistance System (ADAS). It may also be applied to an unmanned aerial vehicle, for example one carrying a detection device that acquires point cloud data. The method may be applied to real-time three-dimensional ground reconstruction. Ground reconstruction matters because the point cloud obtained by lidar scanning contains mostly ground points, and these ground points affect the subsequent classification, recognition, and tracking of obstacle point clouds.

In a typical application scenario, the area in front of a vehicle includes the ground, other vehicles, buildings, trees, fences, pedestrians, and so on. The wheels of the vehicle ahead touch the ground; in other embodiments there may also be objects such as traffic signs in front of the vehicle, whose bases likewise touch the ground. When recognizing objects such as the vehicle ahead and traffic signs, the sparsity of a single frame of a laser point cloud forces existing reconstruction methods to accumulate and temporally fuse multiple frames collected over a period of time before a reasonably high-quality scene can be reconstructed. In an autonomous driving system, however, the vehicle-mounted lidar moves with the vehicle, and vehicle localization errors cause the fused frames to jitter strongly along the z axis on what should be a single surface. The resulting reconstruction errors make it easy to misidentify the ground points at the base of the vehicle ahead and/or the traffic sign as three-dimensional points of those objects, or to miss the points at their bases.

Therefore, before recognizing vehicles, traffic signs, buildings, trees, fences, pedestrians, and the like in a three-dimensional point cloud, the ground point cloud must first be recognized and the ground points filtered out. Existing ground recognition methods are not accurate enough, which introduces errors into ground recognition and in turn causes obstacles, especially short obstacles, to be missed or falsely detected. The point cloud processing method proposed in the embodiments of the present invention corrects the point cloud, reduces the negative effect of multi-frame accumulation, and thereby yields better results.
An embodiment of the present invention provides a point cloud processing method. Fig. 1 is a flowchart of the method. As shown in Fig. 1, the method in this embodiment may include:
Step S101: acquire multiple frames of a three-dimensional point cloud containing a target area.
In this embodiment of the present invention, the multiple frames of the three-dimensional point cloud are in a local coordinate system.
In an optional implementation, the multiple frames containing the target area may be obtained directly by acquiring the multiple frames in the local coordinate system. The local coordinate system is established with the carrier of the detection device that detects the point cloud as its origin, for example a coordinate system whose origin is the vehicle. The carrier may be a vehicle or an unmanned aerial vehicle; the present invention does not specifically limit this.
In another optional implementation, acquiring the multiple frames containing the target area includes: acquiring, in the detection-device coordinate system, multiple frames of a three-dimensional point cloud containing the target area; and converting the point cloud detected by the detection device into the local coordinate system according to the conversion relationship between the detection-device coordinate system and the local coordinate system. Optionally, acquiring the multiple frames in the detection-device coordinate system includes: acquiring the three-dimensional point cloud, containing the target area, detected around the carrier by the detection device mounted on the carrier.
Specifically, as shown in Fig. 2, a detection device 22 is arranged on a vehicle 21; the detection device 22 may be a binocular stereo camera, a TOF camera, and/or a lidar. For example, while the vehicle 21 travels in the direction of the arrow in Fig. 2, the detection device 22 detects a three-dimensional point cloud of the surroundings of the vehicle 21 in real time. Taking a lidar as an example: when a laser beam emitted by the lidar strikes an object surface, the surface reflects the beam, and from the reflected laser the lidar can determine the bearing, distance, and other information of the object relative to the lidar. If the emitted beam is scanned along some trajectory, for example a 360-degree rotating scan, a large number of laser points are obtained, forming the laser point cloud data of the object, that is, the three-dimensional point cloud.
The three-dimensional point cloud acquired in step S101 is N consecutive frames of sparse point cloud data accumulated within the current time window.
Optionally, the target area may be an object with a flat surface. The embodiments of the present invention take the ground as the example target area, but the target area is not limited to the ground; it may also be an object such as a wall or a tabletop, which the present invention does not specifically limit. The method of the embodiments is equally applicable to the recognition of objects with flat surfaces such as walls and tabletops.
Step S102: preprocess the multiple frames of the three-dimensional point cloud.
Because the multiple frames contain point clouds of non-target areas or noise points, they need to be preprocessed to filter out the non-target point clouds or noise points.
Optionally, preprocessing the multiple frames includes: removing noise points from the multiple frames, where the removed noise points are three-dimensional points that do not belong to the target area.
Step S103: determine height-value correction parameters of the multiple frames according to the preprocessed multiple frames and a preset correction model.
Specifically, the preprocessed multiple frames are input into the preset correction model, and the model outputs the height-value correction parameters of the multiple frames.
Step S104: correct the height values of the multiple frames according to the height-value correction parameters, so as to correct the recognition of the target area.
In this embodiment, suppose a three-dimensional point in the point cloud has coordinates $(x_i, y_i, z_i)$, where $x_i$, $y_i$, $z_i$ are the point's coordinate values along the X, Y, and Z directions of the local coordinate system; the height value is the point's coordinate value along the Z direction of the local coordinate system. The local coordinate system is established with the carrier of the detection device as its origin, for example a coordinate system whose origin is the vehicle.
Specifically, because the misrecognition between the ground and other objects in a lidar point cloud is caused mainly by errors in the height values of the ground area, correcting the height values of the multiple frames with the height-value correction parameters corrects the recognition of the ground, improves the ground recognition accuracy, and enables three-dimensional ground reconstruction. Continuing the typical scenario above, the area in front of the vehicle 21 includes the ground, other vehicles, buildings, trees, fences, pedestrians, and so on. As shown in Fig. 2, the wheels of the vehicle 23 ahead of the vehicle 21 touch the ground; in other embodiments there may also be objects such as traffic signs whose bases touch the ground. When recognizing the vehicle 23 ahead, traffic signs, and the like, inaccurate ground height values make it easy to misidentify the ground points at the base of the vehicle 23 and/or of the traffic sign as three-dimensional points of the vehicle ahead or the traffic sign. After the ground area is corrected with the height-value correction parameters, the points at the base of the vehicle 23 ahead and/or of the traffic sign can be recognized as three-dimensional points of non-ground areas, that is, of the vehicle 23 ahead or the traffic sign.
This embodiment acquires multiple frames of a three-dimensional point cloud containing a target area, preprocesses them, determines height-value correction parameters according to the preprocessed frames and a preset correction model, and corrects the height values according to those parameters so as to correct the recognition of the target area. Because the correction model can determine the height-value correction parameters, correcting the height values according to them improves the recognition accuracy of the target area.
An embodiment of the present invention provides a point cloud processing method. Fig. 3 is a flowchart of the method according to another embodiment. As shown in Fig. 3, on the basis of the embodiment of Fig. 1, preprocessing the multiple frames may consist of projecting the lidar point cloud onto the XOY plane of the world coordinate system and then, from the height range of the points mapped into each grid of that plane, deciding whether the points in the grid belong to the ground area. The method specifically includes the following steps:
Step S301: determine a height map according to the height values of the multiple frames, the height map comprising a plurality of grids.
Optionally, determining the height map according to the height values of the multiple frames includes: determining a target plane in the world coordinate system; projecting the multiple frames in the local coordinate system onto the target plane according to the conversion relationship between the local coordinate system and the world coordinate system; and determining the height map according to the height values of the frames projected onto the target plane. Specifically, take a right-handed coordinate system with the Z axis pointing vertically downward as the world coordinate system; the target plane may be the XOY plane of the world coordinate system divided into multiple square grids of equal size. Likewise, establish a local coordinate system with the vehicle as origin and the Z axis pointing vertically downward, with its X, Y, and Z axes aligned with those of the world coordinate system. If n frames of the sparse point cloud need to be accumulated to reconstruct the ground, the height map of the point cloud is obtained by projecting the accumulated n frames onto the XOY plane of the world coordinate system.
Specifically, each three-dimensional point in the local coordinate system is projected into the world coordinate system according to the conversion relationship between the two systems. As an example, let $j$ denote a three-dimensional point in the point cloud, let its position in the local coordinate system be $p_j^l$, and let its position after conversion into the world coordinate system be $p_j^w$. Let $R$ be the conversion relationship between the local coordinate system and the world coordinate system, and let $t$ be the translation vector, that is, the three-dimensional position of the lidar in the world coordinate system. Then the position of point $j$ in the world coordinate system can be obtained through the formula:

$$p_j^w = R \, p_j^l + t$$

From this, the projection of point $j$ in the world coordinate system can be computed.
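A minimal sketch of this transformation in NumPy (the function and variable names are illustrative, not from the patent):

```python
import numpy as np

def local_to_world(points_local, R, t):
    """Transform an (N, 3) array of local-frame points into the world
    frame via p_w = R @ p_l + t, applied row by row."""
    return np.asarray(points_local) @ np.asarray(R).T + np.asarray(t)

# Example: a 90-degree yaw rotation plus a 1 m translation along x.
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
t = np.array([1.0, 0.0, 0.0])
p_world = local_to_world([[1.0, 0.0, 0.5]], R, t)
# p_world[0] is [1.0, 1.0, 0.5]
```

Multiplying the row vectors by the transpose of R is equivalent to multiplying each column vector by R, so the whole frame is transformed in a single matrix product.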
Similarly, the projections onto the target plane of the three-dimensional points other than point $j$ can be determined, and the height map is then determined from the height values of point $j$ and the other points projected onto the target plane.
Step S302: determine an approximate target area in the height map according to a preset target-area height value.
In some embodiments, the preset target-area height value may be a preset ground height value. The ground height value can be estimated initially from the height of the vehicle in the local coordinate system: assuming the maximum height value of the vehicle is $z_1$ and the overall height of the vehicle is 1.5 m, an initial ground height value is obtained as $z_1 - 1.5$, and from this result an approximate ground area can be determined in the height map. The target area determined here is the approximate range of grids carved out of the height map; it is not precise and may contain three-dimensional points of other objects, which subsequent processing must further filter out.
Step S303: for the grids in which the approximate target area is located, calculate the difference between the maximum and minimum height values within each grid.
Suppose that after projection, $w$ three-dimensional points are mapped into a certain grid of the height map, and among their height values the maximum is $w_h$ and the minimum is $w_l$; then the difference between the maximum and minimum height values in the grid is obtained by computing $w_h - w_l$.
Step S304: determine the grids whose difference is below a difference threshold and whose distance from the preset target-area height value is less than a preset distance.
Suppose $w_h - w_l$ is below the difference threshold and $(w_h - w_l) - (z_1 - 0.5)$ is less than the preset distance; then the grid corresponding to these three-dimensional points is marked. For the specific marking method, reference may be made to the prior art, for example marking with different colors, which the present invention does not specifically limit here.
Step S305: remove the three-dimensional points outside the grids whose difference is below the difference threshold and whose distance from the preset target-area height value is less than the preset distance.
After the grids of the points with $w_h - w_l$ below the difference threshold and $(w_h - w_l) - (z_1 - 0.5)$ less than the preset distance have been marked by the above steps, the unmarked grids in the approximate target area can be removed; the points in the unmarked grids can be regarded as non-ground points or noise points. This removes the three-dimensional points outside the qualifying grids and achieves a preliminary recognition of the target area. The recognized target area still needs further correction to improve the recognition accuracy.
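A minimal sketch of the grid filtering in steps S302 to S305. The cell size, range threshold, and distance threshold are illustrative, and the comparison against the preset ground height is an assumption, since the patent states the criterion only qualitatively:

```python
import math

def filter_ground_grids(points, cell=0.2, range_thresh=0.3,
                        ground_z=0.0, dist_thresh=0.5):
    """Keep only points in grids whose height spread (max - min) is below
    range_thresh and whose height stays within dist_thresh of the
    estimated ground level; everything else is treated as noise or
    non-ground structure."""
    grids = {}
    for x, y, z in points:
        key = (math.floor(x / cell), math.floor(y / cell))
        grids.setdefault(key, []).append(z)
    kept = {key for key, zs in grids.items()
            if (max(zs) - min(zs)) < range_thresh
            and abs(max(zs) - ground_z) < dist_thresh}
    return [(x, y, z) for x, y, z in points
            if (math.floor(x / cell), math.floor(y / cell)) in kept]

pts = [(0.05, 0.05, 0.01), (0.06, 0.07, 0.03),   # flat cell, near ground level
       (1.05, 1.05, 0.00), (1.06, 1.06, 0.90)]   # cell containing a tall object
ground = filter_ground_grids(pts)
# only the two near-flat points survive
```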
An embodiment of the present invention provides a point cloud processing method. Fig. 4 is a flowchart of the method according to another embodiment. As shown in Fig. 4, on the basis of the above embodiments, projecting the multiple frames in the local coordinate system onto the target plane according to the conversion relationship between the local coordinate system and the world coordinate system may include:
Step S401: divide the target plane into a plurality of grids of equal size, each grid having a grid number.
Step S402: calculate, according to the conversion relationship between the local coordinate system and the world coordinate system, the grid numbers on the target plane corresponding to the multiple frames in the local coordinate system.
For example, divide the XOY plane of the local coordinate system into 0.2 m x 0.2 m squares to obtain multiple grids, and number them to obtain grid numbers. Likewise, divide the XOY plane of the world coordinate system into 0.2 m x 0.2 m squares and number them. From a grid number and the 0.2 m x 0.2 m grid size, the x- and y-axis coordinates of the grid can be obtained; these are converted into the world coordinate system according to the conversion relationship between the local coordinate system and the world coordinate system, giving the x- and y-axis coordinates in the world coordinate system, from which the world-frame grid corresponding to a given local-frame grid is obtained.
Step S403: calculate, according to the conversion relationship between the local coordinate system and the world coordinate system, the height values on the target plane corresponding to the multiple frames in the local coordinate system.
Similarly, following the example of step S402, the height values on the target plane corresponding to the multiple frames in the local coordinate system can be obtained.
Step S403 may also be executed before step S402; steps S402 and S403 can be regarded as parallel, with no fixed order of execution.
Step S404: determine the height map according to the grid numbers and the height values on the target plane corresponding to the multiple frames in the local coordinate system.
After steps S401 to S403 have produced the grid numbers and the height values on the target plane corresponding to the multiple frames in the local coordinate system, the grid numbers and height values can be associated, mapping the local-frame points into the world coordinate system and yielding the height map.
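A minimal sketch of steps S401 to S404: each world-frame point is mapped to a grid number on the target plane, and its height value is accumulated under that number. The plane origin and column count are illustrative assumptions; the embodiment fixes only the 0.2 m cell size:

```python
import math

CELL = 0.2              # grid size from the embodiment (0.2 m x 0.2 m)
COLS = 500              # illustrative number of grid columns on the plane
X0, Y0 = -50.0, -50.0   # illustrative lower-left corner of the target plane

def grid_number(x_w, y_w):
    """Map a world-frame (x, y) to a single grid number: row * COLS + col."""
    col = math.floor((x_w - X0) / CELL)
    row = math.floor((y_w - Y0) / CELL)
    return row * COLS + col

def build_height_map(points_world):
    """Associate each grid number with the list of heights accumulated there."""
    hmap = {}
    for x, y, z in points_world:
        hmap.setdefault(grid_number(x, y), []).append(z)
    return hmap

hmap = build_height_map([(0.05, 0.05, 0.1), (0.11, 0.07, 0.2), (1.05, 0.05, 0.3)])
# the first two points share one 0.2 m cell; the third lands in another
```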
An embodiment of the present invention provides a point cloud processing method. Fig. 5 is a flowchart of the method according to another embodiment. As shown in Fig. 5, on the basis of the above embodiments, the preset correction model includes an optimization solution model, and determining the height-value correction parameters of the multiple frames according to the preprocessed frames and the preset correction model may include:
Step S501: input the preprocessed point cloud into the optimization solution model.
Optionally, the objective function of the optimization solution model is:

$$\min_{A,B,C}\ \sum_{i=1}^{n}\sum_{j=1}^{m}\left(z_j^i + \Delta z_j^i - \bar{z}_s\right)^2 \qquad (1)$$

where $i$ is the frame number of a point cloud image; $j$ is the number of a three-dimensional point within the image; $(x_j^i, y_j^i, z_j^i)$ are the three-dimensional coordinate values of the $j$-th point in the $i$-th frame; $m$ is the total number of points in the $i$-th frame; $n$ is the total number of point cloud images, that is, the total number of accumulated frames; and $\Delta z_j^i$ is the height-value correction amount of the $j$-th point in the $i$-th frame, expressed as:

$$\Delta z_j^i = a_i x_j^i + b_i y_j^i + c_i$$

where $a_i$ is the first correction coefficient, $b_i$ the second correction coefficient, and $c_i$ the third correction coefficient; $s$ is the number of a grid in the height map; and $\bar{z}_s$ is the mean of the corrected height values of the points accumulated at grid $s$ of the height map, expressed as:

$$\bar{z}_s = \frac{1}{K}\sum_{k=1}^{K}\left(z_k + a_{i_k} x_k + b_{i_k} y_k + c_{i_k}\right)$$

where $K$ is the total number of points accumulated in the grids whose difference is below the difference threshold and whose distance from the ground area is less than the preset distance, and $i_k$ is the frame number of the point cloud to which the $k$-th point belongs; $A$ denotes the first correction coefficients of the multiple frames, $B$ the second correction coefficients, and $C$ the third correction coefficients, with $A = [a_1 \ldots a_i \ldots a_n]^T$, $B = [b_1 \ldots b_i \ldots b_n]^T$, $C = [c_1 \ldots c_i \ldots c_n]^T$.
Step S502: solve the optimization solution model by linear least squares to obtain the correction coefficients.
Specifically, after $(x_j^i, y_j^i, z_j^i)$ is substituted into formula (1), solving formula (1) by linear least squares yields the $(a_i, b_i, c_i)$ at which formula (1) attains its minimum; $(a_i, b_i, c_i)$ are the correction coefficients for correcting the $i$-th frame.
Likewise, after the other three-dimensional points are substituted into formula (1), the correction coefficients for correcting the other frames are also obtained; the correction coefficients of all frames are $A = [a_1 \ldots a_i \ldots a_n]^T$, $B = [b_1 \ldots b_i \ldots b_n]^T$, $C = [c_1 \ldots c_i \ldots c_n]^T$.
Optionally, all points of all frames may be substituted into formula (1) to build a system of linear equations; solving the system in parallel yields the correction coefficients of all frames simultaneously. Such parallel computation improves efficiency and readily meets the real-time requirements of an on-board system.
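A minimal sketch of the per-frame linear least-squares solve. The residuals that formula (1) would derive from the per-grid mean heights are replaced here by a known synthetic plane so the recovered coefficients can be checked; the design matrix columns [x, y, 1] match the planar form of the correction term:

```python
import numpy as np

rng = np.random.default_rng(0)
xy = rng.uniform(-5.0, 5.0, size=(100, 2))      # (x, y) of one frame's points

# Synthetic ground truth standing in for the height offsets that the
# optimization would attribute to this frame.
a_true, b_true, c_true = 0.02, -0.01, 0.15
r = a_true * xy[:, 0] + b_true * xy[:, 1] + c_true

# Design matrix for the planar correction  dz = a*x + b*y + c.
A = np.column_stack([xy[:, 0], xy[:, 1], np.ones(len(xy))])
coef, residuals, rank, _ = np.linalg.lstsq(A, r, rcond=None)
# coef recovers [a_true, b_true, c_true]
```

Stacking the equations of all frames into one block-diagonal system and solving it at once corresponds to the parallel solve described above.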
Step S503: determine the height-value correction parameters of the multiple frames according to the correction coefficients.
Optionally, the correction coefficients include a first correction coefficient, a second correction coefficient, and a third correction coefficient, and determining the height-value correction parameters according to the correction coefficients includes: calculating the height-value correction parameter of each frame of the point cloud according to that frame's first, second, and third correction coefficients and the frame's three-dimensional coordinate values. Specifically, the height-value correction parameter of a frame may be calculated according to the following equation:

$$d = a_i x_j^i + b_i y_j^i + c_i \qquad (2)$$

where $a_i$, $b_i$, $c_i$ are the first, second, and third correction coefficients respectively, $(a_i, b_i, c_i)$ being the correction coefficients for correcting the $i$-th frame; $(x_j^i, y_j^i, z_j^i)$ are the three-dimensional coordinate values of the $j$-th point in the $i$-th frame; and $d$ is the height-value correction parameter for correcting all the three-dimensional points in the $i$-th frame.
After $(a_i, b_i, c_i)$ and $(x_j^i, y_j^i, z_j^i)$ are substituted into formula (2), the height-value correction parameter $d$ for correcting all the points in the $i$-th frame is obtained.
Optionally, once the height-value correction parameter $d$ of the $i$-th frame has been solved, the height values of all the points in the $i$-th frame can be corrected according to $d$. For example, if the coordinate values of the $j$-th point in the $i$-th frame before correction are $(x_j^i, y_j^i, z_j^i)$, its coordinate values after correction are $(x_j^i, y_j^i, z_j^i + d)$.
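A minimal sketch of applying formula (2): each point's height is shifted by its own correction $d = a_i x + b_i y + c_i$. Adding the correction ($z + d$) follows the "$z$ plus correction" form of formula (1); treat the sign as an assumption if your convention differs:

```python
import numpy as np

def apply_height_correction(points, a, b, c):
    """Correct the z of each (x, y, z) point by d = a*x + b*y + c."""
    pts = np.asarray(points, dtype=float)
    corrected = pts.copy()
    corrected[:, 2] += a * pts[:, 0] + b * pts[:, 1] + c
    return corrected

corrected = apply_height_correction([[2.0, 1.0, 0.30]], a=0.01, b=-0.02, c=0.05)
# d = 0.01*2.0 - 0.02*1.0 + 0.05 = 0.05, so z becomes 0.35
```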
Fig. 6 shows the ground point cloud before correction.
Fig. 7 shows the ground point cloud after correction by the method of an embodiment of the present invention.
As shown in Figs. 6 and 7, the area formed by the black points is the ground area. The ground area recognized in Fig. 6 jitters strongly and is spread widely along the Z axis, whereas the ground area recognized in Fig. 7 is smoother and more compact, with a narrower spread along the Z axis. Therefore, the recognition of the ground area after correction by the method of the embodiments of the present invention is more accurate.
An embodiment of the present invention provides a point cloud processing system. Fig. 8 is a structural diagram of the system. As shown in Fig. 8, the point cloud processing system 80 includes a detection device 81, a memory 82, and a processor 83. The detection device 81 is configured to detect multiple frames of a three-dimensional point cloud containing a target area; the memory 82 is configured to store program code; the processor 83 calls the program code and, when the program code is executed, performs the following operations: acquiring multiple frames of a three-dimensional point cloud containing a target area; preprocessing the multiple frames; determining height-value correction parameters of the multiple frames according to the preprocessed frames and a preset correction model; and correcting the height values of the multiple frames according to those parameters, so as to correct the recognition of the target area. The detection device 81 in this embodiment may be the detection device 22 in Fig. 2.
Optionally, when the processor 83 preprocesses the multiple frames, it is specifically configured to: remove noise points from the multiple frames, a noise point being a three-dimensional point that does not belong to the target area.
Optionally, when the processor 83 removes the noise points from the multiple frames, it is specifically configured to: determine a height map according to the height values of the multiple frames, the height map comprising a plurality of grids; determine an approximate target area in the height map according to a preset target-area height value; for the grids in which the approximate target area is located, calculate the difference between the maximum and minimum height values within each grid; determine the grids whose difference is below a difference threshold and whose distance from the preset target-area height value is less than a preset distance; and remove the three-dimensional points outside those grids.
Optionally, when the processor 83 acquires the multiple frames, it is specifically configured to: acquire the multiple frames in a local coordinate system, the local coordinate system being established with the carrier of the detection device as its origin. When the processor 83 determines the height map according to the height values of the multiple frames, it is specifically configured to: determine a target plane in the world coordinate system; project the multiple frames in the local coordinate system onto the target plane according to the conversion relationship between the local coordinate system and the world coordinate system; and determine the height map according to the height values of the frames projected onto the target plane.
Optionally, when the processor 83 projects the multiple frames in the local coordinate system onto the target plane according to the conversion relationship, it is specifically configured to: divide the target plane into a plurality of grids of equal size, each grid having a grid number; calculate, according to the conversion relationship, the grid numbers on the target plane corresponding to the multiple frames in the local coordinate system; calculate, according to the conversion relationship, the height values on the target plane corresponding to the multiple frames; and determine the height map according to those grid numbers and height values.
Optionally, the preset correction model includes an optimization solution model. When the processor 83 determines the height-value correction parameters according to the preprocessed frames and the preset correction model, it is specifically configured to: input the preprocessed point cloud into the optimization solution model; solve the optimization solution model by linear least squares to obtain correction coefficients; and determine the height-value correction parameters of the multiple frames according to the correction coefficients.
Optionally, the correction coefficients include a first correction coefficient, a second correction coefficient, and a third correction coefficient. When the processor 83 determines the height-value correction parameters according to the correction coefficients, it is specifically configured to: calculate the height-value correction parameter of each frame according to that frame's first, second, and third correction coefficients and the frame's three-dimensional coordinate values.
Optionally, when the processor 83 acquires the multiple frames in the local coordinate system, it is specifically configured to: acquire the multiple frames containing the target area detected by the detection device; and convert the detected frames into the local coordinate system according to the conversion relationship between the detection-device coordinate system and the local coordinate system.
Optionally, the detection device includes at least one of the following: a binocular stereo camera, a TOF camera, and a lidar.
Optionally, the target area is a ground area.
The specific principles and implementations of the point cloud processing system provided in this embodiment are similar to those of the above embodiments and are not repeated here.
This embodiment acquires multiple frames of a three-dimensional point cloud containing a target area, preprocesses them, determines height-value correction parameters according to the preprocessed frames and a preset correction model, and corrects the height values according to those parameters so as to correct the recognition of the target area. Because the correction model can determine the height-value correction parameters, correcting the height values according to them improves the recognition accuracy of the target area.
An embodiment of the present invention provides a movable platform. Fig. 9 is a structural diagram of a movable platform according to an embodiment of the present invention. This embodiment provides a movable platform on the basis of the technical solution of the embodiment shown in Fig. 8. As shown in Fig. 9, the movable platform 90 includes a body 91, a power system 92, and a point cloud processing system 93. The point cloud processing system 93 in this embodiment may be the point cloud processing system 80 provided in the above embodiment.
The specific principles and implementations of the point cloud processing system provided in this embodiment are similar to those of the embodiment shown in Fig. 8 and are not repeated here.
This embodiment acquires multiple frames of a three-dimensional point cloud containing a target area, preprocesses them, determines height-value correction parameters according to the preprocessed frames and a preset correction model, and corrects the height values according to those parameters so as to correct the recognition of the target area. Because the correction model can determine the height-value correction parameters, correcting the height values according to them improves the recognition accuracy of the target area.
In addition, this embodiment also provides a computer-readable storage medium on which a computer program is stored; the computer program is executed by a processor to implement the point cloud processing method of the above embodiments.
In the several embodiments provided by the present invention, it should be understood that the disclosed device and method may be implemented in other ways. For example, the device embodiments described above are merely illustrative: the division into units is only a division by logical function, and in actual implementation there may be other divisions, for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented. In addition, the mutual coupling, direct coupling, or communication connection displayed or discussed may be indirect coupling or communication connection through interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of this embodiment.
In addition, the functional units in the various embodiments of the present invention may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware, or in the form of hardware plus software functional units.
An integrated unit implemented in the form of a software functional unit may be stored in a computer-readable storage medium. The software functional unit is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to execute some of the steps of the methods of the various embodiments of the present invention. The aforementioned storage media include: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, and other media that can store program code.
Those skilled in the art can clearly understand that, for convenience and brevity of description, only the division of the above functional modules is used as an example. In practical applications, the above functions may be allocated to different functional modules as needed; that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above. For the specific working process of the device described above, reference may be made to the corresponding process in the foregoing method embodiments, which is not repeated here.
Finally, it should be noted that the above embodiments are intended only to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions described in the foregoing embodiments or make equivalent replacements of some or all of the technical features therein, and such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (22)

  1. A point cloud processing method, comprising:
    acquiring multiple frames of a three-dimensional point cloud containing a target area;
    preprocessing the multiple frames of the three-dimensional point cloud;
    determining height-value correction parameters of the multiple frames according to the preprocessed multiple frames and a preset correction model; and
    correcting the height values of the multiple frames according to the height-value correction parameters, so as to correct the recognition of the target area.
  2. The method according to claim 1, wherein preprocessing the multiple frames of the three-dimensional point cloud comprises:
    removing noise points from the multiple frames, a noise point being a three-dimensional point that does not belong to the target area.
  3. The method according to claim 2, wherein removing the noise points from the multiple frames comprises:
    determining a height map according to the height values of the multiple frames, the height map comprising a plurality of grids;
    determining an approximate target area in the height map according to a preset target-area height value;
    for the grids in which the approximate target area is located, calculating the difference between the maximum and minimum height values within each grid;
    determining the grids whose difference is below a difference threshold and whose distance from the preset target-area height value is less than a preset distance; and
    removing the three-dimensional points outside those grids.
  4. The method according to claim 3, wherein acquiring the multiple frames of the three-dimensional point cloud containing the target area comprises:
    acquiring the multiple frames in a local coordinate system, the local coordinate system being established with the carrier of the detection device that detects the multiple frames as its origin;
    and wherein determining the height map according to the height values of the multiple frames comprises:
    determining a target plane in a world coordinate system;
    projecting the multiple frames in the local coordinate system onto the target plane according to the conversion relationship between the local coordinate system and the world coordinate system; and
    determining the height map according to the height values of the multiple frames projected onto the target plane.
  5. The method according to claim 4, wherein projecting the multiple frames in the local coordinate system onto the target plane according to the conversion relationship between the local coordinate system and the world coordinate system comprises:
    dividing the target plane into a plurality of grids of equal size, each grid having a grid number;
    calculating, according to the conversion relationship, the grid numbers on the target plane corresponding to the multiple frames in the local coordinate system;
    calculating, according to the conversion relationship, the height values on the target plane corresponding to the multiple frames in the local coordinate system; and
    determining the height map according to those grid numbers and height values.
  6. The method according to any one of claims 1-5, wherein the preset correction model includes an optimization solution model, and determining the height-value correction parameters according to the preprocessed multiple frames and the preset correction model comprises:
    inputting the preprocessed multiple frames into the optimization solution model;
    solving the optimization solution model by linear least squares to obtain correction coefficients; and
    determining the height-value correction parameters of the multiple frames according to the correction coefficients.
  7. The method according to claim 6, wherein the correction coefficients include a first correction coefficient, a second correction coefficient, and a third correction coefficient; and
    determining the height-value correction parameters according to the correction coefficients comprises:
    calculating the height-value correction parameter of each frame of the three-dimensional point cloud according to that frame's first, second, and third correction coefficients and the frame's three-dimensional coordinate values.
  8. The method according to claim 4 or 5, wherein acquiring the multiple frames in the local coordinate system comprises:
    acquiring the multiple frames containing the target area detected by the detection device; and
    converting the detected multiple frames into the local coordinate system according to the conversion relationship between the detection-device coordinate system and the local coordinate system.
  9. The method according to claim 8, wherein the detection device includes at least one of the following:
    a binocular stereo camera, a TOF camera, and a lidar.
  10. The method according to any one of claims 1-9, wherein the target area is a ground area.
  11. A point cloud processing system, comprising a detection device, a memory, and a processor;
    the detection device being configured to detect multiple frames of a three-dimensional point cloud containing a target area;
    the memory being configured to store program code; the processor calling the program code and, when the program code is executed, performing the following operations:
    acquiring multiple frames of a three-dimensional point cloud containing a target area;
    preprocessing the multiple frames of the three-dimensional point cloud;
    determining height-value correction parameters of the multiple frames according to the preprocessed multiple frames and a preset correction model; and
    correcting the height values of the multiple frames according to the height-value correction parameters, so as to correct the recognition of the target area.
  12. The system according to claim 11, wherein when the processor preprocesses the multiple frames, it is specifically configured to:
    remove noise points from the multiple frames, a noise point being a three-dimensional point that does not belong to the target area.
  13. The system according to claim 12, wherein when the processor removes the noise points from the multiple frames, it is specifically configured to:
    determine a height map according to the height values of the multiple frames, the height map comprising a plurality of grids;
    determine an approximate target area in the height map according to a preset target-area height value;
    for the grids in which the approximate target area is located, calculate the difference between the maximum and minimum height values within each grid;
    determine the grids whose difference is below a difference threshold and whose distance from the preset target-area height value is less than a preset distance; and
    remove the three-dimensional points outside those grids.
  14. The system according to claim 13, wherein when the processor acquires the multiple frames, it is specifically configured to:
    acquire the multiple frames in a local coordinate system, the local coordinate system being established with the carrier of the detection device that detects the multiple frames as its origin;
    and wherein when the processor determines the height map according to the height values of the multiple frames, it is specifically configured to:
    determine a target plane in a world coordinate system;
    project the multiple frames in the local coordinate system onto the target plane according to the conversion relationship between the local coordinate system and the world coordinate system; and
    determine the height map according to the height values of the multiple frames projected onto the target plane.
  15. The system according to claim 14, wherein when the processor projects the multiple frames in the local coordinate system onto the target plane according to the conversion relationship between the local coordinate system and the world coordinate system, it is specifically configured to:
    divide the target plane into a plurality of grids of equal size, each grid having a grid number;
    calculate, according to the conversion relationship, the grid numbers on the target plane corresponding to the multiple frames in the local coordinate system;
    calculate, according to the conversion relationship, the height values on the target plane corresponding to the multiple frames in the local coordinate system; and
    determine the height map according to those grid numbers and height values.
  16. The system according to any one of claims 11-15, wherein the preset correction model includes an optimization solution model;
    and wherein when the processor determines the height-value correction parameters of the multiple frames according to the preprocessed multiple frames and the preset correction model, it is specifically configured to:
    input the preprocessed point cloud into the optimization solution model;
    solve the optimization solution model by linear least squares to obtain correction coefficients; and
    determine the height-value correction parameters of the multiple frames according to the correction coefficients.
  17. The system according to claim 16, wherein the correction coefficients include a first correction coefficient, a second correction coefficient, and a third correction coefficient;
    and wherein when the processor determines the height-value correction parameters according to the correction coefficients, it is specifically configured to:
    calculate the height-value correction parameter of each frame according to that frame's first, second, and third correction coefficients and the frame's three-dimensional coordinate values.
  18. The system according to claim 14 or 15, wherein when the processor acquires the multiple frames in the local coordinate system, it is specifically configured to:
    acquire the multiple frames containing the target area detected by the detection device; and
    convert the detected multiple frames into the local coordinate system according to the conversion relationship between the detection-device coordinate system and the local coordinate system.
  19. The system according to claim 18, wherein the detection device includes at least one of the following:
    a binocular stereo camera, a TOF camera, and a lidar.
  20. The system according to any one of claims 11-19, wherein the target area is a ground area.
  21. A movable platform, comprising a body, a power system, and the point cloud processing system according to any one of claims 11-20.
  22. A computer-readable storage medium on which a computer program is stored, the computer program being executed by a processor to implement the method according to any one of claims 1-10.
PCT/CN2019/088931 2019-05-29 2019-05-29 Point cloud processing method, device, and computer-readable storage medium WO2020237516A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2019/088931 WO2020237516A1 (zh) 2019-05-29 2019-05-29 Point cloud processing method, device, and computer-readable storage medium
CN201980012171.7A CN111699410A (zh) 2019-05-29 2019-05-29 Point cloud processing method, device, and computer-readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/088931 WO2020237516A1 (zh) 2019-05-29 2019-05-29 Point cloud processing method, device, and computer-readable storage medium

Publications (1)

Publication Number Publication Date
WO2020237516A1 (zh)

Family

ID=72476452

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/088931 WO2020237516A1 (zh) 2019-05-29 2019-05-29 点云的处理方法、设备和计算机可读存储介质

Country Status (2)

Country Link
CN (1) CN111699410A (zh)
WO (1) WO2020237516A1 (zh)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112435193A (zh) * 2020-11-30 2021-03-02 Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences: Point cloud data denoising method and device, storage medium, and electronic device
CN114556419A (zh) * 2020-12-15 2022-05-27 SZ DJI Technology Co., Ltd.: Three-dimensional point cloud segmentation method and device, and movable platform
CN114111568B (zh) * 2021-09-30 2023-05-23 Suteng Innovation Technology Co., Ltd. (RoboSense): Method and device for determining the apparent size of a dynamic target, medium, and electronic device
CN114782438B (zh) * 2022-06-20 2022-09-16 Shenzhen Xinrun Fulian Digital Technology Co., Ltd.: Object point cloud correction method and device, electronic device, and storage medium
CN115830262B (zh) * 2023-02-14 2023-05-26 Jinan Survey and Mapping Research Institute: Method and device for building a real-scene three-dimensional model based on object segmentation
CN116309124B (zh) * 2023-02-15 2023-10-20 Linding Optics (Jiangsu) Co., Ltd.: Correction method for an optical curved-surface mold, electronic device, and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100091020A1 (en) * 2006-10-20 2010-04-15 Marcin Michal Kmiecik Computer arrangement for and method of matching location Data of different sources
CN102831646A (zh) * 2012-08-13 2012-12-19 Southeast University: Large-scale three-dimensional terrain modeling method based on scanning laser
CN106530380A (zh) * 2016-09-20 2017-03-22 Chang'an University: Ground point cloud segmentation method based on three-dimensional lidar
CN110274602A (zh) * 2018-03-15 2019-09-24 奥孛睿斯有限责任公司: Automatic indoor map construction method and system

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106441151A (zh) * 2016-09-30 2017-02-22 Institute of Optics and Electronics, Chinese Academy of Sciences: Measurement system for Euclidean-space reconstruction of three-dimensional targets based on fusion of vision and active optics
CN109521403B (zh) * 2017-09-19 2020-11-20 Baidu Online Network Technology (Beijing) Co., Ltd.: Parameter calibration method and device for multi-line lidar, equipment, and readable medium
CN108254758A (zh) * 2017-12-25 2018-07-06 Suzhou Automotive Research Institute (Wujiang), Tsinghua University: Three-dimensional road construction method based on multi-line lidar and GPS
CN109297510B (zh) * 2018-09-27 2021-01-01 Baidu Online Network Technology (Beijing) Co., Ltd.: Relative pose calibration method, device, equipment, and medium

Also Published As

Publication number Publication date
CN111699410A (zh) 2020-09-22


Legal Events

Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 19931416; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 EP: PCT application non-entry in European phase (Ref document number: 19931416; Country of ref document: EP; Kind code of ref document: A1)