WO2022213376A1 - Obstacle detection method, device, movable platform and storage medium - Google Patents

Obstacle detection method, device, movable platform and storage medium

Info

Publication number
WO2022213376A1
WO2022213376A1 PCT/CN2021/086233 CN2021086233W WO2022213376A1 WO 2022213376 A1 WO2022213376 A1 WO 2022213376A1 CN 2021086233 W CN2021086233 W CN 2021086233W WO 2022213376 A1 WO2022213376 A1 WO 2022213376A1
Authority
WO
WIPO (PCT)
Prior art keywords
point cloud
cloud data
dimensional
height
point
Prior art date
Application number
PCT/CN2021/086233
Other languages
English (en)
French (fr)
Inventor
张易
Original Assignee
深圳市大疆创新科技有限公司
Priority date
Filing date
Publication date
Application filed by 深圳市大疆创新科技有限公司 filed Critical 深圳市大疆创新科技有限公司
Priority to PCT/CN2021/086233 priority Critical patent/WO2022213376A1/zh
Publication of WO2022213376A1 publication Critical patent/WO2022213376A1/zh

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01S - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 - Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88 - Lidar systems specially adapted for specific applications

Definitions

  • the present application relates to the technical field of object detection, and in particular, to an obstacle detection method, device, movable platform and storage medium.
  • Obstacle detection is one of the key elements to ensure the safe and reliable movement of movable platforms (such as autonomous vehicles, aircraft, and mobile robots), and is the basis for further decision-making and motion control of movable platforms.
  • The obstacle detection method in the related art usually obtains environmental information around the movable platform in real time through a payload carried by the movable platform (such as a camera or lidar), and then detects other objects in the surrounding environment based on this environmental information.
  • However, for long-distance obstacle detection, the obstacle detection method in the related art suffers from problems such as low accuracy and slow detection.
  • one of the objectives of the present application is to provide an obstacle detection method, device, movable platform and storage medium.
  • The environmental information that the payload carried by the movable platform can acquire at long distances is limited, and accurate information about the ground cannot be obtained, which leads to low accuracy in the long-distance obstacle detection results of the related-art method.
  • In a first aspect, an embodiment of the present application provides an obstacle detection method, including:
  • acquiring first point cloud data;
  • determining a deviation amount according to the height of one or more target three-dimensional points in the first point cloud data, where the height of a target three-dimensional point is lower than the heights of most of the three-dimensional points in the first point cloud data;
  • correcting the heights of the three-dimensional points in the first point cloud data using the deviation amount; and
  • performing obstacle detection according to the corrected first point cloud data.
  • In a second aspect, an embodiment of the present application provides an obstacle detection device, including one or more processors and a memory storing executable instructions;
  • the one or more processors, when executing the executable instructions, are individually or collectively configured to:
  • acquire first point cloud data;
  • determine a deviation amount according to the height of one or more target three-dimensional points in the first point cloud data, where the height of a target three-dimensional point is lower than the heights of most of the three-dimensional points in the first point cloud data;
  • correct the heights of the three-dimensional points in the first point cloud data using the deviation amount; and
  • perform obstacle detection according to the corrected first point cloud data.
  • The obstacle detection method in the related art uses point cloud data for detection, and point cloud data is unstructured data that needs to be further converted into structured data before it can be processed.
  • The depth of the three-dimensional points in the point cloud data determines the data volume of the converted structured data. When the point cloud data is collected at a long distance, the depth of the three-dimensional points is large and the data volume of the converted structured data is also large, which requires a long processing time and causes slow obstacle detection, failing to meet the real-time requirements of some scenarios (such as autonomous driving).
  • In a third aspect, an embodiment of the present application provides an obstacle detection method, including:
  • acquiring first point cloud data;
  • counting the distances between adjacent three-dimensional points in the first point cloud data;
  • compressing the distance between adjacent three-dimensional points if that distance is greater than a preset distance; and
  • performing obstacle detection according to the compressed first point cloud data.
  • In a fourth aspect, an embodiment of the present application provides an obstacle detection device, including one or more processors and a memory storing executable instructions;
  • the one or more processors, when executing the executable instructions, are individually or collectively configured to:
  • acquire first point cloud data;
  • count the distances between adjacent three-dimensional points in the first point cloud data;
  • compress the distance between adjacent three-dimensional points if that distance is greater than a preset distance; and
  • perform obstacle detection according to the compressed first point cloud data.
  • In a fifth aspect, an embodiment of the present application provides a movable platform, including:
  • a body;
  • a power system mounted within the body for powering the movable platform; and
  • the obstacle detection device according to the second aspect or the fourth aspect.
  • In a sixth aspect, an embodiment of the present application provides a computer-readable storage medium, where the computer-readable storage medium stores executable instructions, and when the executable instructions are executed by a processor, the method described in the first aspect or the third aspect is implemented.
  • Height correction processing is performed on the acquired first point cloud data, so that the first point cloud data is corrected to the vicinity of the same height reference plane, eliminating or reducing the influence of deviations of the height reference plane, which is beneficial to improving the accuracy of the subsequent obstacle detection process that uses the corrected first point cloud data.
  • In addition, the portions of space in the acquired first point cloud data where the distance between three-dimensional points is too large are compressed and deleted, making the point cloud denser; using the compressed first point cloud data for obstacle detection is beneficial to improving the processing speed, obtaining obstacle detection results faster, and meeting the real-time requirements of certain scenarios.
  • FIG. 1A is a schematic diagram of an automatic driving scenario provided by an embodiment of the present application.
  • FIG. 1B is a schematic structural diagram of an automatic driving vehicle provided by an embodiment of the present application.
  • FIG. 2 and FIG. 7 are schematic flowcharts of obstacle detection methods provided by embodiments of the present application.
  • FIGS. 3A, 3B, and 3C are schematic diagrams related to point cloud height correction provided by embodiments of the present application.
  • FIG. 4 is a schematic diagram of multiple three-dimensional point sets provided by an embodiment of the present application.
  • FIGS. 5A and 5B are schematic diagrams related to point cloud compression provided by an embodiment of the present application.
  • FIG. 6 is a schematic structural diagram of an obstacle detection model provided by an embodiment of the present application.
  • FIG. 8 is a schematic structural diagram of an obstacle detection device provided by an embodiment of the present application.
  • FIG. 9 is a schematic structural diagram of a movable platform provided by an embodiment of the present application.
  • the inventor finds that, as the distance increases, the influence of terrain fluctuations on the obstacle detection process gradually increases. For example, when the ground has only a 1° inclination angle, there will be a height difference of 3.5m at a distance of 200m from the movable platform, and a height difference of 7m at a distance of 400m from the movable platform.
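  • For reference, these figures follow from simple geometry (a minimal check, with d the distance from the movable platform and θ the ground inclination angle):

```latex
\Delta h = d \tan\theta, \qquad
200\,\mathrm{m} \times \tan 1^{\circ} \approx 3.5\,\mathrm{m}, \qquad
400\,\mathrm{m} \times \tan 1^{\circ} \approx 7\,\mathrm{m}.
```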
  • Moreover, the environmental information that the payload carried by the movable platform can acquire at long distances is limited, and accurate information about the ground cannot be obtained, which leads to low accuracy in the long-distance obstacle detection results of the related-art method.
  • In an automatic driving scenario, the autonomous vehicle uses lidar to collect point cloud data of its surrounding environment, and then uses the point cloud data to perform obstacle detection to obtain position information of other vehicles in the surrounding environment.
  • Terrain fluctuations cause the ground at long distances to deviate in height, and the point clouds collected at long distances are sparse, so effective ground points may not be extracted. Therefore, the point cloud ground-plane correction algorithm in the related art cannot be applied, and the obstacle detection method in the related art is inaccurate for long-distance obstacle detection.
  • In view of this, an embodiment of the present application provides an obstacle detection method: after the first point cloud data is acquired, a deviation amount can be determined according to the height of one or more target three-dimensional points in the first point cloud data, where a target three-dimensional point is a three-dimensional point with a relatively low height in the first point cloud data; the deviation amount is then used to correct the heights of the three-dimensional points in the first point cloud data, so that the first point cloud data is corrected to the vicinity of the same height reference plane, eliminating or reducing the influence of deviations of the height reference plane (such as terrain fluctuations), which is beneficial to improving the accuracy of the subsequent obstacle detection process that uses the corrected first point cloud data.
  • the obstacle detection method may be applied to an obstacle detection device, and the obstacle detection device may be a chip, an integrated circuit, or an electronic device with a data processing function.
  • When the obstacle detection device is a chip or integrated circuit with a data processing function, the obstacle detection device includes, but is not limited to, for example, a central processing unit (CPU), a digital signal processor (DSP), an application-specific integrated circuit (ASIC), or a field-programmable gate array (FPGA); the obstacle detection device can be installed in an electronic device.
  • When the obstacle detection device is an electronic device with a data processing function, the electronic device includes, but is not limited to, a movable platform, a terminal device, a server, and the like.
  • Examples of the movable platform include, but are not limited to, unmanned aerial vehicles, unmanned vehicles, gimbals, unmanned ships, or mobile robots.
  • Terminal devices include, but are not limited to: smartphones/mobile phones, tablet computers, personal digital assistants (PDAs), laptop computers, desktop computers, media content players, video game stations/systems, virtual reality systems, augmented reality systems, wearable devices (e.g., watches, glasses, gloves, headwear such as hats, helmets, virtual reality headsets, augmented reality headsets, head-mounted devices (HMDs) and headbands, pendants, armbands, leg loops, shoes, vests), remote controls, or any other type of device.
  • In the embodiments of the present application, the case where the movable platform is an autonomous vehicle that includes the obstacle detection device is taken as an example for illustration.
  • FIG. 1A shows a driving scene of the autonomous vehicle 100
  • FIG. 1B shows a structural diagram of an autonomous vehicle 100 on which the lidar 10 for acquiring point cloud data and the obstacle detection device 20 may be installed.
  • the number of the lidars 10 may be one or more. It can be understood that the installation position of the lidar 10 may be specifically set according to the actual application scenario, for example, one lidar 10 may be installed on the front of the autonomous vehicle 100 .
  • the lidar 10 collects point cloud data of objects around the autonomous driving vehicle 100 and transmits it to the obstacle detection device 20 in the autonomous driving vehicle 100 .
  • the obstacle detection device 20 acquires the point cloud data, performs height correction on the point cloud data based on the obstacle detection method of the embodiment of the present application, and then performs obstacle detection based on the corrected point cloud data to obtain a detection result.
  • In a first possible implementation, the autonomous driving vehicle 100 may use the detection result to make an obstacle avoidance decision or perform route planning; in a second possible implementation, the detection result may be displayed on an interface of the autonomous driving vehicle 100 or on an interface of a terminal communicatively connected to the autonomous driving vehicle 100, so that the user can learn the driving situation of the autonomous driving vehicle 100 and the road conditions around it; in a third possible implementation, the detection result may be transmitted to other components in the autonomous driving vehicle 100, so that those components control the autonomous driving vehicle 100 to work safely and reliably based on the detection result.
  • FIG. 2 is a schematic flowchart of an obstacle detection method provided by the embodiment of the present application.
  • the method may be executed by an obstacle detection device.
  • The method includes:
  • Step S101: acquiring first point cloud data.
  • Step S102: determining a deviation amount according to the height of one or more target three-dimensional points in the first point cloud data, where the height of a target three-dimensional point is lower than the heights of most of the three-dimensional points in the first point cloud data.
  • Step S103: correcting the heights of the three-dimensional points in the first point cloud data using the deviation amount.
  • Step S104: performing obstacle detection according to the corrected first point cloud data.
  • In some embodiments, the first point cloud data may be the raw point cloud data collected by a detection device (e.g., a lidar or another device that can obtain point cloud data), and the obstacle detection device may perform height correction processing on the raw point cloud data collected by the detection device.
  • In other embodiments, considering that some of the raw point cloud data collected by the detection device does not need height correction, such as the point cloud data at a relatively short distance from the detection device: point cloud data at short distances is relatively dense, so effective ground three-dimensional points can be extracted, and terrain fluctuations have little effect on nearby point cloud data, so height correction processing may be omitted for the point cloud data at short distances.
  • Accordingly, the obstacle detection device can determine, from the raw point cloud data collected by the detection device and according to the depth of the three-dimensional points, the first point cloud data that needs height correction and the second point cloud data that does not, and then perform height correction processing on the three-dimensional points in the first point cloud data.
  • the height reference plane of the corrected first point cloud data is closer to the height reference plane of the second point cloud data than the height reference plane of the uncorrected first point cloud data.
  • Alternatively, the height difference between the height reference plane of the corrected first point cloud data and the height reference plane of the second point cloud data is smaller than the height difference between the height reference plane of the uncorrected first point cloud data and the height reference plane of the second point cloud data.
  • This embodiment performs height correction processing on the first point cloud data and corrects the height reference plane of the first point cloud data to the vicinity of the height reference plane of the second point cloud data, thereby eliminating or reducing the fluctuation of the height reference plane, which is beneficial to improving the accuracy of obstacle detection.
  • In some embodiments, the first point cloud data and the second point cloud data belong to different depth intervals, where the minimum depth of the depth interval to which the first point cloud data belongs is greater than or equal to the maximum depth of the depth interval to which the second point cloud data belongs.
  • For example, the first point cloud data that needs height correction may be the part of the raw point cloud data whose three-dimensional points have a depth greater than a preset depth threshold, and the second point cloud data that does not need height correction is the part whose three-dimensional points have a depth less than or equal to the preset depth threshold.
  • The preset depth threshold can be set according to the actual application scenario; for example, in an automatic driving scenario, the preset depth threshold may be 150 meters, 180 meters, or 200 meters. This embodiment realizes height correction of the point cloud data at long distances, which is beneficial to improving the accuracy of long-distance obstacle detection.
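  • As a minimal sketch of this depth-based split (an illustration only: it assumes the point cloud is an N×3 array of (x, y, z) values with x as depth along the moving direction and z as height, and uses a hypothetical 200-meter threshold):

```python
import numpy as np

def split_by_depth(points: np.ndarray, depth_threshold: float = 200.0):
    """Split a point cloud into far points that need height correction
    and near points that do not, based on the depth (x) of each 3D point."""
    depth = points[:, 0]                      # depth along the moving direction
    far_mask = depth > depth_threshold
    first_point_cloud = points[far_mask]      # needs height correction
    second_point_cloud = points[~far_mask]    # kept as-is
    return first_point_cloud, second_point_cloud
```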
  • For example, the raw point cloud data shown in FIG. 3A is collected by a lidar on an autonomous vehicle. FIG. 3A shows a side view of the raw point cloud data, with the X axis pointing in the direction in which the vehicle is moving and the Z axis representing height.
  • The obstacle detection device in the autonomous vehicle determines, according to the depth of the three-dimensional points in the raw point cloud data, the first point cloud data that needs height correction and the second point cloud data that does not, as shown in FIG. 3B. The obstacle detection device can then perform height correction processing on the three-dimensional points in the first point cloud data. As shown in FIG. 3C, the ground plane of the corrected first point cloud data is closer to the ground plane of the second point cloud data than the ground plane of the uncorrected first point cloud data, which eliminates or reduces the influence of terrain fluctuations and allows long-distance obstacles to be detected more accurately and stably.
  • In some embodiments, the obstacle detection device may acquire one or more target three-dimensional points according to the heights of the three-dimensional points in the first point cloud data, where the height of a target three-dimensional point is lower than the heights of most of the three-dimensional points in the first point cloud data; in other words, the height of a target three-dimensional point is lower than the heights of more than half of the three-dimensional points in the first point cloud data. The obstacle detection device then determines the deviation amount according to the heights of the one or more target three-dimensional points and corrects the heights of the three-dimensional points in the first point cloud data according to the deviation amount.
  • In this way, the first point cloud data is corrected to the vicinity of the same height reference plane, and the influence caused by fluctuations of the height reference plane is eliminated or reduced, which is beneficial to improving the accuracy of the subsequent obstacle detection process.
  • For example, the three-dimensional points in the first point cloud data may be sorted by height, and the one or more target three-dimensional points may be taken from the lowest end of the sorted points; that is, one or more target three-dimensional points with the lowest heights are obtained. In other words, the heights of the one or more target three-dimensional points are lower than the heights of the other three-dimensional points in the first point cloud data except the target three-dimensional points.
  • In some embodiments, the obstacle detection device may determine the deviation amount from a statistic of the heights of the one or more target three-dimensional points in the first point cloud data, where the statistic includes the average, the median, or the mode. The obstacle detection device can then determine the corrected height of a three-dimensional point from the difference between the height of the three-dimensional point and the deviation amount: for example, the corrected height is the difference between the height of the three-dimensional point and the deviation amount, or the corrected height is the product of that difference and a preset correction coefficient, where the preset correction coefficient may be set according to the actual application scenario.
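  • Written out (with h the original height of a three-dimensional point, Δ the deviation amount, and k the preset correction coefficient; the median variant is the one used in the example below, and the sign convention in the coefficient variant is an assumption), the correction reads:

```latex
\Delta = \operatorname{median}\{\,h_i \mid i \in \text{target points}\,\}, \qquad
h_{\text{corrected}} = h - \Delta
\quad\text{or}\quad
h_{\text{corrected}} = k\,(h - \Delta).
```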
  • In some embodiments, the obstacle detection device may divide the first point cloud data according to the depth of the three-dimensional points to obtain multiple three-dimensional point sets. For each three-dimensional point set, one or more target three-dimensional points are obtained according to the heights of the three-dimensional points in that set, where the height of a target three-dimensional point is lower than the heights of most of the three-dimensional points in the set, in other words, lower than the heights of more than half of the three-dimensional points in the set; the obstacle detection device then determines a deviation amount according to the heights of the one or more target three-dimensional points in the set, and uses that deviation amount to correct the heights of the three-dimensional points in the set.
  • In this embodiment, the first point cloud data is divided by depth into multiple three-dimensional point sets, and a deviation amount is determined and applied for each set, so that different degrees of fluctuation of the height reference plane can be eliminated or reduced and the first point cloud data is corrected to the vicinity of the same height reference plane, which is beneficial to improving the accuracy of the subsequent obstacle detection process.
  • In some embodiments, the obstacle detection device may determine the angle between each three-dimensional point in the first point cloud data and the detection device; the three-dimensional points whose angle is greater than a preset angle can then be regarded as noise points and filtered out, which is beneficial to reducing the influence of individual error factors and improving the accuracy of the subsequent point cloud height correction process and obstacle detection process.
  • As an example, the obstacle detection device acquires first point cloud data, which may be the raw point cloud data collected by the detection device, or the part of the raw point cloud data that needs height correction as determined based on the depth of its three-dimensional points.
  • The obstacle detection device determines the angle between each three-dimensional point in the first point cloud data and the detection device, and filters out the three-dimensional points whose angle is greater than the preset angle; the obstacle detection device then divides the filtered first point cloud data according to the depth of the three-dimensional points to obtain multiple three-dimensional point sets.
  • FIG. 4 shows a side view of the first point cloud data and the five three-dimensional point sets divided according to depth, namely three-dimensional point set A, three-dimensional point set B, three-dimensional point set C, three-dimensional point set D, and three-dimensional point set E.
  • The obstacle detection device selects at least one three-dimensional point with the lowest height in each three-dimensional point set, takes the median of the heights of these lowest points as the deviation amount of that set, and subtracts the deviation amount from the heights of all three-dimensional points in the set. This completes the height correction of the first point cloud data and corrects the first point cloud data to the vicinity of the same ground plane, which is beneficial to improving the accuracy of the subsequent obstacle detection process.
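  • A minimal sketch of this per-set correction (assuming numpy arrays of (x, y, z) points with x as depth and z as height; the bin edges and the number of lowest points are hypothetical values, not taken from the patent):

```python
import numpy as np

def correct_heights_by_depth_bins(points: np.ndarray,
                                  bin_edges=(200.0, 250.0, 300.0, 350.0, 400.0, 450.0),
                                  n_lowest: int = 20) -> np.ndarray:
    """Divide far points into depth bins (sets A..E), estimate each bin's ground
    offset as the median height of its lowest points, and subtract that offset."""
    corrected = points.copy()
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        in_bin = (points[:, 0] >= lo) & (points[:, 0] < hi)
        if not np.any(in_bin):
            continue
        heights = points[in_bin, 2]
        lowest = np.sort(heights)[:n_lowest]        # lowest-height target points
        deviation = np.median(lowest)               # deviation amount for this bin
        corrected[in_bin, 2] = heights - deviation  # height correction
    return corrected
```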
  • The obstacle detection device may use the corrected first point cloud data to perform obstacle detection; for example, feature extraction can be performed on the corrected first point cloud data to obtain point cloud features, and obstacle detection is then performed according to the point cloud features to obtain an obstacle detection result.
  • Since point cloud data is unstructured, it needs to be processed into a format that can be used for data analysis; one such processing method is three-dimensional rasterization (voxelization) of the point cloud.
  • the corrected first point cloud data is divided into grids to obtain multiple voxels of the point cloud data and their voxel information, and then feature extraction is performed on the voxel information through the obstacle detection model to obtain point cloud features.
  • the voxel information may include a voxel value (for example, if there is a three-dimensional point in the voxel, the voxel value is 1, otherwise, it is 0), the point cloud density or the reflectance intensity of the voxel and other information.
  • For example, the point cloud data may be rasterized into an H*W*C three-dimensional grid (where H and W represent the length and width, respectively, and C represents the depth of the grid). Each cell represents a voxel; if there is a three-dimensional point in the voxel, the voxel value is 1, otherwise it is 0, so that a three-dimensional matrix of ones and zeros is obtained. Further, to improve the accuracy of the obstacle detection result, the position corresponding to each voxel in the three-dimensional matrix may also carry information such as the point cloud density or the reflectivity intensity.
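  • As an illustrative sketch of such a binary occupancy grid (the grid extent, resolution, and origin below are hypothetical; the patent does not fix these values):

```python
import numpy as np

def voxelize(points: np.ndarray,
             grid_shape=(400, 400, 20),        # H x W x C cells (hypothetical)
             voxel_size=(0.5, 0.5, 0.5),       # meters per cell (hypothetical)
             origin=(0.0, -100.0, -3.0)) -> np.ndarray:
    """Rasterize (x, y, z) points into a binary occupancy grid: 1 where a voxel
    contains at least one 3D point, 0 elsewhere."""
    grid = np.zeros(grid_shape, dtype=np.float32)
    idx = ((points - np.asarray(origin)) / np.asarray(voxel_size)).astype(int)
    # keep only points that fall inside the grid
    valid = np.all((idx >= 0) & (idx < np.asarray(grid_shape)), axis=1)
    idx = idx[valid]
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0
    return grid
```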
  • In this way, the irregular point cloud is processed into a regular representation, so that more useful point cloud features can be extracted from the regularly represented information, which is beneficial to improving the accuracy of the obstacle detection result.
  • Considering that the point cloud data is unstructured, it needs to be further converted into structured data that can be processed by the obstacle detection device, and the depth of the three-dimensional points in the point cloud data determines the data volume of the converted structured data. For example, the point cloud data mentioned above can be rasterized into an H*W*C three-dimensional grid: the greater the depth, the larger the grid and the larger the resulting three-dimensional matrix, that is, the larger the amount of structured data. When the point cloud data is collected at a long distance, the depth of the three-dimensional points is large, the converted three-dimensional matrix is large, and the processing time is correspondingly long, which does not meet the real-time requirements of some scenarios (such as autonomous driving). Moreover, point clouds collected at long distances are usually sparse, with most of the space containing no three-dimensional points, so the converted structured data also includes data for that empty space, which makes its processing redundant.
  • Therefore, in this embodiment, the portions of space in the corrected first point cloud data where the distance between three-dimensional points is too large are compressed and deleted to make the point cloud denser, and the compressed first point cloud data is used for obstacle detection, which can significantly reduce the amount of structured data, yield obstacle detection results faster, and meet the real-time requirements of certain scenarios.
  • In some embodiments, the obstacle detection device may count the distances between adjacent three-dimensional points in the first point cloud data; if the distance between adjacent three-dimensional points is greater than a preset distance, the distance between those adjacent three-dimensional points is compressed, and the compressed first point cloud data is then used for obstacle detection.
  • the part of the space where the distance between adjacent three-dimensional points is too large is compressed and deleted, so that the point cloud is denser.
  • The preset distance may be set according to the actual application scenario; for example, it may be determined according to the size of obstacles that may be encountered, such as being greater than the maximum obstacle size. If the distance between adjacent three-dimensional points is less than or equal to the preset distance, the adjacent three-dimensional points may indicate the same obstacle, and the distance between them does not need to be compressed; if the distance between adjacent three-dimensional points is greater than the preset distance, the adjacent three-dimensional points may indicate different obstacles, the space between them is redundant space, and the distance between them is therefore compressed.
  • the distance between the adjacent three-dimensional points includes a distance along a specified direction.
  • In some embodiments, the first point cloud data is obtained by a detection device installed on the movable platform, and the specified direction includes a first direction and a second direction; the first direction is the moving direction of the movable platform, and the second direction intersects the first direction.
  • the corrected first point cloud data is compressed in at least two directions, and the density of the point cloud data is further improved.
  • the first direction is the X direction, which is the forward direction of the autonomous vehicle
  • the second direction is the Y direction, which is the lateral (left-right) direction of the autonomous vehicle.
  • the obstacle detection device compresses the point cloud data in the X direction and the Y direction to obtain the compressed point cloud data as shown in FIG. 5B .
  • the first direction and the second direction may also be other directions, which may be specifically set according to actual application scenarios, which are not limited in this embodiment.
  • In some embodiments, the obstacle detection device may compress the distance between adjacent three-dimensional points based on a preset obstacle size, where the distance between the compressed adjacent three-dimensional points is not less than the preset obstacle size. This ensures that adjacent three-dimensional points still maintain a certain distance after compression, which prevents two compressed adjacent three-dimensional points from being mistaken as indicating the same obstacle in the subsequent obstacle detection process and is beneficial to ensuring the accuracy of that process.
  • The preset obstacle size may be determined according to the size of obstacles that may be encountered in actual application scenarios. For example, in an automatic driving scenario, the preset obstacle size may be that of a vehicle, e.g., 5 meters in length and 2 meters in width.
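  • The sketch below illustrates this gap compression along a single axis (a hypothetical helper: the 10-meter preset distance and 5-meter preset obstacle size are example values only); it also records the compression information (position and removed distance) that later embodiments use to restore positions after detection. For compression in two directions as in FIG. 5A and FIG. 5B, the same helper could be applied to the X and Y coordinates separately.

```python
import numpy as np

def compress_axis(coords: np.ndarray, preset_distance=10.0, obstacle_size=5.0):
    """Shrink gaps between consecutive coordinates that exceed preset_distance
    down to obstacle_size, and record (position, removed_distance) pairs."""
    order = np.argsort(coords)
    sorted_coords = coords[order]
    gaps = np.diff(sorted_coords)
    removed = np.where(gaps > preset_distance, gaps - obstacle_size, 0.0)
    compression_info = [(sorted_coords[i], removed[i])
                        for i in range(len(removed)) if removed[i] > 0]
    # shift every point by the total distance removed before it
    shift = np.concatenate(([0.0], np.cumsum(removed)))
    compressed_sorted = sorted_coords - shift
    compressed = np.empty_like(coords)
    compressed[order] = compressed_sorted
    return compressed, compression_info
```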
  • In some embodiments, the obstacle detection device may perform clustering on the three-dimensional points in the corrected first point cloud data to obtain multiple point cloud clustering results, where different clustering results may indicate different obstacles and the three-dimensional points within one clustering result may indicate the same obstacle. The distance between adjacent point cloud clustering results is then compressed, and the compressed first point cloud data is used for obstacle detection.
  • In this embodiment, the space between adjacent point cloud clustering results is compressed and deleted, so that the point cloud is denser.
  • For example, when the distance between the cluster centers of two adjacent point cloud clustering results is greater than a preset distance, the distance between those adjacent clustering results may be compressed. The preset distance may be set according to the actual application scenario; for example, it may be determined according to the size of obstacles that may be encountered.
  • When the distance between the cluster centers of two adjacent point cloud clustering results is greater than the preset distance, the two adjacent clustering results have a high probability of indicating different obstacles, the space between them is redundant space, and the distance between them can therefore be compressed.
  • this embodiment does not impose any restrictions on the clustering algorithm used in the clustering process, and specific settings can be made according to actual application scenarios, for example, K-MEANS clustering algorithm and mean-shift clustering algorithm can be used. Or hierarchical clustering algorithms, etc.
  • the distance between the adjacent point cloud clustering results includes a distance along a specified direction.
  • In some embodiments, the first point cloud data is obtained by a detection device installed on the movable platform, and the specified direction includes a first direction and a second direction; the first direction is the moving direction of the movable platform, and the second direction intersects the first direction.
  • the first direction may be the forward direction of the automatic driving vehicle
  • the second direction may be the lateral direction (or the left and right direction) of the automatic driving vehicle.
  • the corrected first point cloud data is compressed in at least two directions, and the density of the point cloud data is further improved.
  • In some embodiments, the obstacle detection device may compress the distance between adjacent point cloud clustering results based on a preset obstacle size, where the distance between the compressed adjacent clustering results is not less than the preset obstacle size. This ensures that the adjacent three-dimensional points still maintain a certain distance after compression, which prevents them from being mistaken as indicating the same obstacle in the subsequent obstacle detection process and is beneficial to ensuring the accuracy of that process.
  • the preset obstacle size may be determined according to the size of obstacles that may be encountered in an actual application scenario.
  • In some embodiments, the obstacle detection device may record relevant compression information for subsequent use; the compression information at least includes the compression position and/or the compression distance.
  • In some embodiments, the obstacle detection device may use the corrected first point cloud data directly for obstacle detection; alternatively, to improve the processing speed, part of the corrected first point cloud data may be compressed first, and obstacle detection is then performed on the compressed point cloud data.
  • In some embodiments, a trained obstacle detection model is installed on the obstacle detection device; the obstacle detection device can input the first point cloud data into the obstacle detection model, perform feature extraction on the corrected first point cloud data or the compressed first point cloud data through the obstacle detection model to obtain point cloud features, and perform obstacle detection according to the point cloud features to obtain the obstacle detection result.
  • The obstacle detection result includes the confidence and/or state information of the obstacle; the confidence represents the probability that the detected object is an obstacle; the state information includes at least one of the following: the type, size, location, and orientation of the obstacle.
  • For example, the obstacles can be other vehicles, and the detection result of each vehicle can be described as an array of length 12: [conf, cls0, cls1, cls2, x, y, z, l, w, h, sinθ, cosθ].
  • conf represents the confidence of the vehicle
  • cls0, cls1, cls2 represent the probability that the vehicle is a car, a passenger car and a truck, respectively
  • x, y, z represent the position of the vehicle center point relative to the lidar coordinate system
  • l, w, h represent the length, width, and height of the vehicle, and sinθ, cosθ together represent the heading direction of the vehicle.
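  • For illustration, such a 12-element detection array could be decoded as follows (the dataclass and helper names are hypothetical, not part of the patent; only the field order follows the description above):

```python
import math
from dataclasses import dataclass

@dataclass
class VehicleDetection:
    confidence: float        # probability that the object is an obstacle
    class_probs: tuple       # (car, passenger car, truck)
    center: tuple            # (x, y, z) relative to the lidar coordinate system
    size: tuple              # (length, width, height) in meters
    heading: float           # yaw angle in radians

def decode_detection(arr):
    """Decode [conf, cls0, cls1, cls2, x, y, z, l, w, h, sin_t, cos_t]."""
    conf, c0, c1, c2, x, y, z, l, w, h, sin_t, cos_t = arr
    return VehicleDetection(
        confidence=conf,
        class_probs=(c0, c1, c2),
        center=(x, y, z),
        size=(l, w, h),
        heading=math.atan2(sin_t, cos_t),   # recover heading from the sin/cos pair
    )
```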
  • the training process of the obstacle detection model can be: firstly express a model through modeling, then evaluate the model by constructing an evaluation function, and finally optimize the evaluation function according to the sample data and the optimization method, and adjust the model to the optimum.
  • modeling is to convert practical problems into problems that can be understood by computers, that is, to convert practical problems into ways that computers can represent.
  • Modeling generally refers to the process of estimating the objective function of the model based on a large number of sample data.
  • evaluation is an indicator used to represent the quality of the model.
  • Evaluation involves certain evaluation indicators and the design of evaluation functions.
  • In machine learning there are targeted evaluation indicators; for example, after modeling is completed, a loss function needs to be designed for the model to evaluate its output error.
  • the goal of optimization is the evaluation function. That is, the optimization method is used to optimize the evaluation function and find the model with the highest evaluation. For example, an optimization method such as gradient descent can be used to find the minimum value (optimal solution) of the output error of the loss function, and adjust the parameters of the model to the optimum.
  • the sample data used to train the obstacle detection model may include point cloud data.
  • Since the point cloud data is unstructured, it needs to be processed into a format that can be input to the obstacle detection model; for example, the point cloud data is rasterized to obtain the voxels of the point cloud data and their voxel information, and the voxel information corresponding to each voxel is used as the input of the obstacle detection model.
  • the training process of the obstacle detection model in this embodiment may be supervised training or unsupervised training.
  • For example, a supervised training method may be used, with the sample data labeled with ground-truth values, so that the speed and accuracy of model training can be improved.
  • The ground truth includes the confidence of the obstacle (representing the probability that the detected object is an obstacle) and the state information of the object; the state information may include at least one of the following: the type, size, location, and orientation of the obstacle.
  • the obstacle detection model 200 can be obtained through machine learning.
  • the machine learning model may be a neural network model or the like, such as a deep learning-based neural network model.
  • The specific structural design of the obstacle detection model is one of the important aspects of the training process.
  • the obstacle detection model 200 at least includes: a feature extraction network 201 and an object prediction network 202 .
  • the feature extraction network 201 is used to perform a convolution operation on the corrected first point cloud data to obtain the point cloud feature; the object prediction network 202 is used to perform obstacle detection according to the point cloud feature, Get obstacle detection results.
  • In some embodiments, the feature extraction network 201 may include multiple convolutional layers, and those convolutional layers may use convolution kernels of different scales to perform convolution operations on the corrected first point cloud data, obtaining point cloud features of different sizes.
  • Further, this embodiment uses a lightweight feature extraction network for point cloud feature extraction, in which the number of convolutional layers is smaller than a preset value; this significantly improves the speed of the feature extraction network while still effectively extracting scene semantic information. As an example, the feature extraction network includes 6 convolutional layers and 5 pooling layers.
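  • A minimal sketch of such a lightweight backbone (an assumption for illustration: the voxel grid's depth dimension is treated as input channels and processed with 2D convolutions, and the channel widths and kernel sizes are hypothetical; only the 6-convolution / 5-pooling layout comes from the example above):

```python
import torch
import torch.nn as nn

class LightweightFeatureExtractor(nn.Module):
    """6 convolutional layers and 5 pooling layers over a (C, H, W) voxel grid."""
    def __init__(self, in_channels: int = 20):
        super().__init__()
        chans = [in_channels, 32, 64, 64, 128, 128, 256]
        layers = []
        for i in range(6):                                   # 6 conv layers
            layers += [nn.Conv2d(chans[i], chans[i + 1], kernel_size=3, padding=1),
                       nn.ReLU(inplace=True)]
            if i < 5:                                        # 5 pooling layers
                layers.append(nn.MaxPool2d(kernel_size=2))
        self.body = nn.Sequential(*layers)

    def forward(self, voxels: torch.Tensor) -> torch.Tensor:
        return self.body(voxels)                             # point cloud features

# e.g. features = LightweightFeatureExtractor()(torch.zeros(1, 20, 400, 400))
```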
  • the loss function is also called the cost function.
  • the sample data is marked with the real value, and the loss function is used to estimate the error between the predicted value of the model and the real value.
  • some existing loss functions such as logarithmic loss functions, squared loss functions, exponential loss functions, 0/1 loss functions, etc. can be used to form loss functions of corresponding scenarios.
  • In the training process, it is necessary to use an optimization method to optimize the evaluation function and find the model with the highest evaluation. For example, the minimum value (optimal solution) of the output error of the loss function can be found through optimization methods such as gradient descent, and the parameters of the model can be adjusted to the optimum, that is, the optimal coefficients of each network layer in the model can be solved.
  • the process of solving may be to solve for gradients that adjust model parameters by computing the output of the model and the error value of the loss function.
  • a back-propagation function can be called to calculate the gradient, and the calculation result of the loss function can be back-propagated into the obstacle detection model, so that the obstacle detection model can update model parameters.
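  • Sketched with PyTorch-style calls (the model, optimizer, and loss choice are illustrative assumptions; the description above only requires a loss function, back-propagation of its result, and a gradient-based parameter update):

```python
import torch

def train_step(model, optimizer, voxels, targets, loss_fn=torch.nn.MSELoss()):
    """One gradient-descent update: forward pass, loss, back-propagation, parameter update."""
    optimizer.zero_grad()
    predictions = model(voxels)           # model output for the voxelized point cloud
    loss = loss_fn(predictions, targets)  # error between prediction and ground truth
    loss.backward()                       # back-propagate to compute gradients
    optimizer.step()                      # adjust model parameters
    return loss.item()
```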
  • the obstacle detection model is obtained after the training, and the obtained obstacle detection model can also be tested with test samples to check the recognition accuracy of the obstacle detection model.
  • The finally trained obstacle detection model can be installed in the obstacle detection device.
  • The obstacle detection device can be a movable platform itself, or can be installed in the movable platform as a chip.
  • In application, point cloud data can be acquired through a detection device (such as a lidar or a camera with a depth information collection function) configured on the movable platform; the obstacle detection device in the movable platform then performs height correction processing, compression processing, and the like on the point cloud data, and inputs the corrected first point cloud data or the compressed first point cloud data into the obstacle detection model, so as to obtain the detection result output by the obstacle detection model, which includes the confidence and state information of the obstacle.
  • The detection result is the data whose confidence is greater than a preset threshold; data whose confidence is not greater than the preset threshold indicates that the object is not an obstacle and does not need to be processed further. The preset threshold can be set according to the actual application scenario. As an example, a series of candidate boxes can be identified for the object to be recognized, each candidate box corresponding to a confidence and state information; based on the confidence of a candidate box, the probability that the object in that box is an obstacle can be determined. The confidences of the candidate boxes are sorted and then screened against the set threshold; if a confidence exceeds the threshold, it can be considered that an obstacle has been detected, and the final detection result is obtained.
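  • For illustration, the confidence screening described above could be sketched as follows (the 0.5 threshold is a hypothetical value, and candidates is assumed to be a list of decoded detections such as the VehicleDetection objects sketched earlier):

```python
def screen_candidates(candidates, conf_threshold=0.5):
    """Keep candidate detections whose confidence exceeds the threshold,
    sorted from most to least confident."""
    kept = [c for c in candidates if c.confidence > conf_threshold]
    return sorted(kept, key=lambda c: c.confidence, reverse=True)
```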
  • In some embodiments, the obstacle detection device inputs the compressed first point cloud data into the obstacle detection model, and the obstacle detection model outputs an obstacle detection result. That result corresponds to the compressed first point cloud data, so the obstacle detection result corresponding to the uncompressed first point cloud data needs to be obtained according to the pre-recorded compression information.
  • For example, the obstacle detection device may determine the three-dimensional points corresponding to each obstacle based on the obstacle detection result, restore the distances between the three-dimensional points corresponding to the obstacle according to the compression information, and finally update the obstacle detection result according to the restored first point cloud data.
  • Specifically, the obstacle detection result includes the position information of the obstacle, and when the compressed first point cloud data is used for obstacle detection, the obtained position information of the obstacle is not the actual position. It is therefore necessary to restore the distances between the three-dimensional points corresponding to the obstacle according to the pre-recorded compression information, and to adjust the position information of the obstacle based on the actual distances between the three-dimensional points in the restored point cloud data, thereby obtaining the actual position information of the obstacle corresponding to the uncompressed (restored) point cloud data.
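  • Continuing the hedged compress_axis sketch above, an obstacle coordinate along one axis could be mapped back from the compressed frame to the original frame using the recorded (position, removed distance) pairs:

```python
def restore_axis(compressed_value: float, compression_info):
    """Map a coordinate from the compressed frame back to the original frame by
    re-inserting every removed gap that lies before it (see compress_axis above)."""
    restored = compressed_value
    for gap_start, removed in sorted(compression_info):
        if restored > gap_start:      # the point lies beyond this compressed gap
            restored += removed
        else:
            break
    return restored
```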
  • The obstacle detection method in the related art uses point cloud data for detection, and point cloud data is unstructured data that needs to be further converted into structured data before it can be processed.
  • The depth of the three-dimensional points determines the data volume of the converted structured data: when the point cloud data is collected at a long distance, the depth of the three-dimensional points is large and the data volume of the converted structured data is also relatively large.
  • Moreover, point cloud data collected at a long distance is usually sparse, with most of the space containing no three-dimensional points, so the converted structured data also includes data for that empty space, which makes its processing redundant and causes slow obstacle detection.
  • In view of this, an embodiment of the present application provides an obstacle detection method. The obstacle detection method can be applied to an obstacle detection device, which can be a chip, an integrated circuit, or an electronic device with a data processing function. The method includes:
  • Step S201: acquiring first point cloud data.
  • Step S202: counting the distances between adjacent three-dimensional points in the first point cloud data.
  • Step S203: compressing the distance between adjacent three-dimensional points if that distance is greater than a preset distance.
  • Step S204: performing obstacle detection according to the compressed first point cloud data.
  • In this embodiment, the portions of space in the first point cloud data where the distance between three-dimensional points is too large are compressed and deleted, making the point cloud denser and significantly reducing the amount of converted structured data. Using the compressed first point cloud data for obstacle detection yields obstacle detection results faster and meets the real-time requirements of certain scenarios.
  • In some embodiments, the obstacle detection device may count the distances between adjacent three-dimensional points in the first point cloud data; if the distance between adjacent three-dimensional points is greater than a preset distance, the distance between those adjacent three-dimensional points is compressed, and the compressed first point cloud data is then used for obstacle detection.
  • the part of the space where the distance between adjacent three-dimensional points is too large is compressed and deleted, so that the point cloud is denser.
  • The preset distance may be set according to the actual application scenario; for example, it may be determined according to the size of obstacles that may be encountered, such as being greater than the maximum obstacle size. If the distance between adjacent three-dimensional points is less than or equal to the preset distance, the adjacent three-dimensional points may indicate the same obstacle, and the distance between them does not need to be compressed; if the distance between adjacent three-dimensional points is greater than the preset distance, the adjacent three-dimensional points may indicate different obstacles, the space between them is redundant space, and the distance between them is therefore compressed.
  • the distance between the adjacent three-dimensional points includes a distance along a specified direction.
  • In some embodiments, the first point cloud data is obtained by a detection device installed on the movable platform, and the specified direction includes a first direction and a second direction; the first direction is the moving direction of the movable platform, and the second direction intersects the first direction.
  • the corrected first point cloud data is compressed in at least two directions, and the density of the point cloud data is further improved.
  • In some embodiments, the obstacle detection device may compress the distance between adjacent three-dimensional points based on a preset obstacle size, where the distance between the compressed adjacent three-dimensional points is not less than the preset obstacle size. This ensures that adjacent three-dimensional points still maintain a certain distance after compression, which prevents two compressed adjacent three-dimensional points from being mistaken as indicating the same obstacle in the subsequent obstacle detection process and is beneficial to ensuring the accuracy of that process.
  • The preset obstacle size may be determined according to the size of obstacles that may be encountered in actual application scenarios. For example, in an automatic driving scenario, the preset obstacle size may be that of a vehicle, e.g., 5 meters in length and 2 meters in width.
  • In some embodiments, the obstacle detection device may perform clustering on the three-dimensional points in the corrected first point cloud data to obtain multiple point cloud clustering results, where different clustering results may indicate different obstacles and the three-dimensional points within one clustering result may indicate the same obstacle. The distance between adjacent point cloud clustering results can then be compressed, and the compressed first point cloud data is used for obstacle detection.
  • In this embodiment, the space between adjacent point cloud clustering results is compressed and deleted, so that the point cloud is denser.
  • For example, when the distance between the cluster centers of two adjacent point cloud clustering results is greater than a preset distance, the distance between those adjacent clustering results may be compressed. The preset distance may be set according to the actual application scenario; for example, it may be determined according to the size of obstacles that may be encountered.
  • When the distance between the cluster centers of two adjacent point cloud clustering results is greater than the preset distance, the two adjacent clustering results have a high probability of indicating different obstacles, the space between them is redundant space, and the distance between them can therefore be compressed.
  • this embodiment does not impose any restrictions on the clustering algorithm used in the clustering process, and specific settings can be made according to actual application scenarios, for example, K-MEANS clustering algorithm and mean-shift clustering algorithm can be used. Or hierarchical clustering algorithms, etc.
  • the distance between the adjacent point cloud clustering results includes a distance along a specified direction.
  • In some embodiments, the first point cloud data is obtained by a detection device installed on the movable platform, and the specified direction includes a first direction and a second direction; the first direction is the moving direction of the movable platform, and the second direction intersects the first direction.
  • the first direction may be the forward direction of the automatic driving vehicle
  • the second direction may be the lateral direction (or the left and right direction) of the automatic driving vehicle.
  • the corrected first point cloud data is compressed in at least two directions, and the density of the point cloud data is further improved.
  • In some embodiments, the obstacle detection device may compress the distance between adjacent point cloud clustering results based on a preset obstacle size, where the distance between the compressed adjacent clustering results is not less than the preset obstacle size. This ensures that the adjacent three-dimensional points still maintain a certain distance after compression, which prevents them from being mistaken as indicating the same obstacle in the subsequent obstacle detection process and is beneficial to ensuring the accuracy of that process.
  • the preset obstacle size may be determined according to the size of obstacles that may be encountered in an actual application scenario.
  • the method further includes: recording compression information.
  • the compression information includes at least: compression position and/or compression distance.
  • In some embodiments, performing obstacle detection according to the compressed first point cloud data includes: performing feature extraction on the compressed first point cloud data to obtain point cloud features; and performing obstacle detection according to the point cloud features to obtain an obstacle detection result.
  • the method further includes: determining the three-dimensional points corresponding to each obstacle based on the obstacle detection result; restoring the distances between the three-dimensional points corresponding to the obstacle according to the compression information; and updating the obstacle detection result according to the restored first point cloud data.
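A companion sketch of the restoration step: using the recorded per-cluster offsets (the compression information from the sketch above) to move each detected box centre back to the original coordinate frame. The detection fields 'x', 'y', 'z' and the cluster-to-box mapping are assumptions for illustration.

```python
def restore_detections(detections, offsets, cluster_of_box, axis=0):
    """Undo the compression for each detected obstacle box.

    detections:     list of dicts with at least 'x', 'y', 'z' (box centre in
                    the compressed frame).
    offsets:        per-cluster shifts recorded during compression.
    cluster_of_box: index of the cluster each detection was built from.
    """
    restored = []
    for det, cid in zip(detections, cluster_of_box):
        det = dict(det)
        key = ('x', 'y', 'z')[axis]
        det[key] += offsets[cid]   # move the centre back to the original frame
        restored.append(det)
    return restored
```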
  • the obstacle detection result includes a confidence level and/or state information of the obstacle; the confidence level represents a probability that the detected object is an obstacle; the state information includes at least one of the following: the Type, size, location, and orientation of obstacles.
  • the point cloud features include point cloud features of different sizes.
  • the point cloud feature and the obstacle detection result are obtained through a pre-established obstacle detection model.
  • the obstacle detection model includes a feature extraction network; the feature extraction network is configured to perform a convolution operation on the compressed first point cloud data to obtain the point cloud feature.
  • the number of convolutional layers included in the feature extraction network is less than a preset value.
  • the performing of feature extraction on the compressed first point cloud data includes: performing grid division on the compressed first point cloud data to obtain voxel information; and performing feature extraction on the voxel information.
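A minimal grid-division sketch, assuming a bird's-eye-view grid with a fixed 0.2 m cell and two channels per cell (occupancy and point count); the resolution, extents, and channel choice are illustrative, and reflectivity or a full H×W×C volume could be used instead, as the text suggests.

```python
import numpy as np

def voxelize(points, cell=0.2, x_range=(0, 200), y_range=(-40, 40)):
    """Rasterise an (N, 3) point array into an H x W grid with two channels:
    channel 0 = occupancy (1 if any 3D point falls in the cell), channel 1 = point count."""
    h = int((x_range[1] - x_range[0]) / cell)
    w = int((y_range[1] - y_range[0]) / cell)
    grid = np.zeros((h, w, 2), dtype=np.float32)

    xi = ((points[:, 0] - x_range[0]) / cell).astype(int)
    yi = ((points[:, 1] - y_range[0]) / cell).astype(int)
    keep = (xi >= 0) & (xi < h) & (yi >= 0) & (yi < w)
    for i, j in zip(xi[keep], yi[keep]):
        grid[i, j, 0] = 1.0      # occupancy: at least one 3D point in the cell
        grid[i, j, 1] += 1.0     # point density of the voxel
    return grid
```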
  • before the counting of the distances between adjacent three-dimensional points in the first point cloud data, the method further includes: determining the deviation according to the height of one or more target three-dimensional points in the first point cloud data, the height of the target three-dimensional point being lower than the height of most of the three-dimensional points in the first point cloud data; and using the deviation to correct the heights of the three-dimensional points in the first point cloud data.
  • the counting the distances between adjacent three-dimensional points in the first point cloud data includes: counting the distances between adjacent three-dimensional points in the corrected first point cloud data.
  • the acquiring of the first point cloud data includes: determining, according to the depth of the three-dimensional points, the first point cloud data whose height needs to be corrected and the second point cloud data whose height does not need to be corrected from the original point cloud data collected by the detection device;
  • wherein the height reference plane of the corrected first point cloud data is closer to the height reference plane of the second point cloud data than the height reference plane of the uncorrected first point cloud data is; or,
  • the height difference between the height reference plane of the corrected first point cloud data and the height reference plane of the second point cloud data is smaller than the height difference between the height reference plane of the uncorrected first point cloud data and the height reference plane of the second point cloud data.
  • the first point cloud data and the second point cloud data belong to different depth intervals; wherein the minimum depth of the depth interval to which the first point cloud data belongs is greater than or equal to the maximum depth of the depth interval to which the second point cloud data belongs. For example, the depth of the three-dimensional points in the first point cloud data whose height needs to be corrected is greater than a preset depth threshold.
  • the method further includes: dividing the first point cloud data according to the depth of the three-dimensional points in the first point cloud data to obtain a plurality of three-dimensional point sets.
  • determining the deviation according to the height of one or more target three-dimensional points in the first point cloud data, and using the deviation to correct the heights of the three-dimensional points in the first point cloud data, includes: for each of the three-dimensional point sets, determining a deviation according to the height of one or more target three-dimensional points in the three-dimensional point set, and using the deviation to correct the heights of the three-dimensional points in the three-dimensional point set; wherein the height of the target three-dimensional point is lower than the height of most three-dimensional points in the three-dimensional point set.
  • the height of the one or more target three-dimensional points is lower than the heights of other three-dimensional points except the target three-dimensional point in the first point cloud data.
  • the deviation is determined according to a statistical value of the height of the one or more target three-dimensional points.
  • the statistical value includes: mean, median or mode.
  • the height of the corrected three-dimensional point is determined according to the difference between the height of the three-dimensional point and the deviation.
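Putting the height-correction steps together, one possible sketch follows: points are split into depth bins, the deviation of each bin is the median height of its lowest points, and every height in the bin is reduced by that deviation. The 20 m bin width and the use of the five lowest points are assumptions; the median is one of the statistics named above.

```python
import numpy as np

def correct_heights(points, bin_width=20.0, k_lowest=5):
    """Correct the z (height) component of an (N, 3) array of far-range points.

    Points are grouped into depth bins along x; for each bin the deviation is
    the median height of the k lowest points (assumed near-ground returns),
    and every point in the bin is lowered by that deviation."""
    corrected = points.copy()
    bins = np.floor(points[:, 0] / bin_width).astype(int)
    for b in np.unique(bins):
        mask = bins == b
        z = points[mask, 2]
        k = min(k_lowest, z.size)
        deviation = np.median(np.sort(z)[:k])   # statistic of the lowest points
        corrected[mask, 2] = z - deviation      # corrected height = height - deviation
    return corrected
```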
  • before the determining of the deviation amount according to the height of one or more target three-dimensional points in the first point cloud data, the method further includes: determining the angle between each three-dimensional point in the first point cloud data and the detection device;
  • and filtering out the three-dimensional points whose angle is greater than a preset angle.
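One possible reading of this filtering step, sketched below, takes the angle to be the elevation angle of a point as seen from the detection device and drops points whose angle exceeds a preset value; both the elevation interpretation and the 3° threshold are assumptions.

```python
import numpy as np

def filter_by_angle(points, max_angle_deg=3.0):
    """Drop 3D points whose elevation angle with respect to the sensor origin
    exceeds the preset angle (such points are treated as noise here)."""
    horizontal = np.linalg.norm(points[:, :2], axis=1) + 1e-6
    angle = np.degrees(np.arctan2(np.abs(points[:, 2]), horizontal))
    return points[angle <= max_angle_deg]
```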
  • an embodiment of the present application further provides an obstacle detection device 20 including one or more processors 21 and a memory 22 storing executable instructions;
  • the one or more processors 21, when executing the executable instructions, are individually or collectively configured to:
  • acquire first point cloud data; determine the deviation according to the height of one or more target three-dimensional points in the first point cloud data, the height of the target three-dimensional point being lower than the height of most of the three-dimensional points in the first point cloud data; and use the deviation to correct the heights of the three-dimensional points in the first point cloud data;
  • Obstacle detection is performed according to the corrected first point cloud data.
  • the processor 21 is further configured to: according to the depth of the three-dimensional point, from the original point cloud data collected by the detection device, determine the first point cloud data whose height needs to be corrected and the second point whose height does not need to be corrected. cloud data.
  • the height reference plane of the corrected first point cloud data is closer to the height reference plane of the second point cloud data than the height reference plane of the uncorrected first point cloud data.
  • the first point cloud data and the second point cloud data belong to different depth intervals; wherein the minimum depth of the depth interval to which the first point cloud data belongs is greater than or equal to the maximum depth of the depth interval to which the second point cloud data belongs.
  • the depth of the three-dimensional point in the first point cloud data whose height needs to be corrected is greater than a preset depth threshold.
  • the processor 21 is further configured to:
  • divide the first point cloud data according to the depth of the three-dimensional points in the first point cloud data to obtain a plurality of three-dimensional point sets; and, for each of the three-dimensional point sets, determine a deviation amount according to the height of one or more target three-dimensional points in the three-dimensional point set, and use the deviation amount to correct the heights of the three-dimensional points in the three-dimensional point set; wherein the height of the target three-dimensional point is lower than the height of most three-dimensional points in the three-dimensional point set.
  • the height of the one or more target 3D points is lower than the heights of other 3D points except the target 3D point in the first point cloud data.
  • the deviation is determined according to a statistical value of the height of the one or more target three-dimensional points.
  • the statistical value includes: mean, median or mode.
  • the height of the corrected three-dimensional point is determined according to the difference between the height of the three-dimensional point and the deviation.
  • the processor 21 is further configured to: determine the angle between the three-dimensional points in the first point cloud data and the detection device, and filter out the three-dimensional points whose angle is greater than the preset angle.
  • the processor 21 is further configured to:
  • perform feature extraction on the corrected first point cloud data to obtain point cloud features, and perform obstacle detection according to the point cloud features to obtain an obstacle detection result.
  • before the feature extraction is performed on the corrected first point cloud data, the processor 21 is further configured to:
  • count the distances between adjacent three-dimensional points in the first point cloud data; if the distance between the adjacent three-dimensional points is greater than a preset distance, compress the distance between the adjacent three-dimensional points; and perform feature extraction on the compressed first point cloud data.
  • the processor 21 is further configured to: compress the distance between the adjacent three-dimensional points according to a preset obstacle size.
  • the distance between the compressed adjacent three-dimensional points is not less than the preset obstacle size.
  • the distance between the adjacent three-dimensional points includes a distance along a specified direction.
  • the first point cloud data is obtained when the detection device is installed on the movable platform.
  • the specified direction includes: a first direction and a second direction.
  • the first direction is the moving direction of the movable platform, and the second direction intersects the first direction.
  • the processor 21 is further configured to: record compression information.
  • the compressed information includes at least: a compressed position and/or a compressed distance.
  • the processor 21 is further configured to:
  • determine the three-dimensional points corresponding to each obstacle based on the obstacle detection result; restore the distances between the three-dimensional points corresponding to the obstacle according to the compression information; and update the obstacle detection result according to the restored first point cloud data.
  • the obstacle detection result includes the confidence and/or state information of the obstacle.
  • the confidence level represents the probability that the detected object is an obstacle.
  • the state information includes at least one of the following: the type, size, location, and orientation of the obstacle.
  • the point cloud features include point cloud features of different sizes.
  • the point cloud feature and the obstacle detection result are obtained through a pre-established obstacle detection model.
  • the obstacle detection model includes a feature extraction network; the feature extraction network is configured to perform a convolution operation on the corrected first point cloud data to obtain the point cloud feature.
  • the number of convolutional layers included in the feature extraction network is less than a preset value.
  • the processor 21 is further configured to: perform grid division on the corrected first point cloud data to obtain voxel information; and perform feature extraction on the voxel information.
  • an obstacle detection device including one or more processors and a memory storing executable instructions
  • the one or more processors when executing the executable instructions, are individually or collectively configured to:
  • acquire first point cloud data; count the distances between adjacent three-dimensional points in the first point cloud data; if the distance between the adjacent three-dimensional points is greater than a preset distance, compress the distance between the adjacent three-dimensional points; and perform obstacle detection according to the compressed first point cloud data.
  • the processor is further configured to: compress the distance between the adjacent three-dimensional points according to a preset obstacle size.
  • the distance between the compressed adjacent three-dimensional points is not less than the preset obstacle size.
  • the distance between the adjacent three-dimensional points includes a distance along a specified direction.
  • the first point cloud data is obtained when the detection device is installed on a movable platform.
  • the specified direction includes: a first direction and a second direction.
  • the first direction is the moving direction of the movable platform, and the second direction intersects the first direction.
  • the processor is further configured to: record compression information.
  • the compressed information includes at least: a compressed position and/or a compressed distance.
  • the processor is further configured to: perform feature extraction on the compressed first point cloud data to obtain point cloud features; perform obstacle detection according to the point cloud features to obtain an obstacle detection result.
  • the processor is further configured to:
  • determine the three-dimensional points corresponding to each obstacle based on the obstacle detection result; restore the distances between the three-dimensional points corresponding to the obstacle according to the compression information; and update the obstacle detection result according to the restored first point cloud data.
  • the obstacle detection result includes the confidence and/or state information of the obstacle.
  • the confidence level represents the probability that the detected object is an obstacle.
  • the state information includes at least one of the following: the type, size, location, and orientation of the obstacle.
  • the point cloud features include point cloud features of different sizes.
  • the point cloud feature and the obstacle detection result are obtained through a pre-established obstacle detection model.
  • the obstacle detection model includes a feature extraction network
  • the feature extraction network is configured to perform a convolution operation on the compressed first point cloud data to obtain the point cloud features.
  • the number of convolutional layers included in the feature extraction network is less than a preset value.
  • the processor is further configured to: perform grid division on the compressed first point cloud data to obtain voxel information; and perform feature extraction on the voxel information.
  • the processor is further configured to:
  • determine the deviation according to the height of one or more target three-dimensional points in the first point cloud data, the height of the target three-dimensional point being lower than the height of most of the three-dimensional points in the first point cloud data;
  • use the deviation to correct the heights of the three-dimensional points in the first point cloud data, and count the distances between adjacent three-dimensional points in the corrected first point cloud data.
  • the processor is further configured to: determine, from the original point cloud data collected by the detection device, first point cloud data requiring height correction and second point cloud data requiring no height correction according to the depth of the three-dimensional point.
  • the height reference plane of the corrected first point cloud data is closer to the height reference plane of the second point cloud data than the height reference plane of the uncorrected first point cloud data.
  • the first point cloud data and the second point cloud data belong to different depth intervals; wherein the minimum depth of the depth interval to which the first point cloud data belongs is greater than or equal to the maximum depth of the depth interval to which the second point cloud data belongs.
  • the depth of the three-dimensional point in the first point cloud data whose height needs to be corrected is greater than a preset depth threshold.
  • the processor is further configured to:
  • divide the first point cloud data according to the depth of the three-dimensional points in the first point cloud data to obtain a plurality of three-dimensional point sets; and, for each of the three-dimensional point sets, determine a deviation amount according to the height of one or more target three-dimensional points in the three-dimensional point set, and use the deviation amount to correct the heights of the three-dimensional points in the three-dimensional point set; wherein the height of the target three-dimensional point is lower than the height of most three-dimensional points in the three-dimensional point set.
  • the height of the one or more target three-dimensional points is lower than the heights of other three-dimensional points except the target three-dimensional point in the first point cloud data.
  • the deviation amount is determined according to a statistical value of the height of the one or more target three-dimensional points.
  • the statistical value includes: mean, median or mode.
  • the height of the corrected three-dimensional point is determined according to the difference between the height of the three-dimensional point and the deviation.
  • the processor is further configured to:
  • determine the angle between the three-dimensional points in the first point cloud data and the detection device, and filter out the three-dimensional points whose angle is greater than the preset angle.
  • an embodiment of the present application further provides a movable platform 300, including:
  • a body 40;
  • a power system 30, installed in the body 40, for powering the movable platform 300; and,
  • the obstacle detection device 20 as described above.
  • the movable platform 300 includes, but is not limited to, unmanned aerial vehicles, autonomous vehicles, mobile robots, and the like.
  • the movable platform 300 further includes a detection device such as a lidar, which is used to collect point cloud data.
  • In an exemplary embodiment, a non-transitory computer-readable storage medium including instructions, such as a memory including instructions, is further provided, where the instructions are executable by a processor of an apparatus to perform the above-described method.
  • the non-transitory computer-readable storage medium may be ROM, random access memory (RAM), CD-ROM, magnetic tape, floppy disk, optical data storage device, and the like.
  • a non-transitory computer-readable storage medium when the instructions in the storage medium are executed by the processor of the terminal, enable the terminal to execute the above method.

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Image Processing (AREA)

Abstract

An obstacle detection method and apparatus, a movable platform, and a storage medium. The method includes: acquiring first point cloud data (S101); determining a deviation according to the height of one or more target three-dimensional points in the first point cloud data, where the height of a target three-dimensional point is lower than the height of most of the three-dimensional points in the first point cloud data (S102); using the deviation to correct the heights of the three-dimensional points in the first point cloud data (S103); and performing obstacle detection according to the corrected first point cloud data (S104). The first point cloud data is thereby corrected to lie near the same height reference plane, improving the accuracy of obstacle detection.

Description

障碍物检测方法、装置、可移动平台及存储介质 技术领域
本申请涉及物体检测技术领域,具体而言,涉及一种障碍物检测方法、装置、可移动平台及存储介质。
背景技术
障碍物检测是保证可移动平台(如自动驾驶车辆、飞行器、移动机器人)安全可靠移动的关键要素之一,是可移动平台进行进一步决策和运动控制的基础。
相关技术中的障碍物检测方法通常通过可移动平台携带的有效载荷(如摄像头、激光雷达等)实时地获取可移动平台周边的环境信息,然后基于这些环境信息进行检测得到周边环境中其他物体准确的位置信息;然而,相关技术中的障碍物检测方法对于远距离的障碍物检测,存在准确率不高、检测慢等问题。
发明内容
有鉴于此,本申请的目的之一是提供一种障碍物检测方法、装置、可移动平台及存储介质。
发明人发现,随着距离的增加,地形起伏对障碍物检测过程带来的影响逐渐增加。例如当地面仅产生1°的倾斜角时,在距可移动平台200m处就会产生3.5m的高度差,在距可移动平台400m处会产生7m的高度差,可见,地形起伏使得远距离处的地面有所偏差,而可移动平台利用携带的有效载荷从远距离处获得的环境信息有限,无法准确获得有关于地面的信息,从而导致相关技术中的障碍物检测方法对于远距离的障碍物检测结果的准确率不高。
基于此,第一方面,本申请实施例提供了一种障碍物检测方法,包括:
获取第一点云数据;
根据所述第一点云数据中的一个或多个目标三维点的高度确定偏差量;所述目标三维点的高度低于所述第一点云数据中大部分三维点的高度;
使用所述偏差量矫正所述第一点云数据中的三维点的高度;
根据矫正后的第一点云数据进行障碍物检测。
第二方面,本申请实施例提供了一种障碍物检测装置,包括一个或多个处理器和存储有可执行指令的存储器;
所述一个或多个处理器在执行所述可执行指令时,被单独或共同地配置成:
获取第一点云数据;
根据所述第一点云数据中的一个或多个目标三维点的高度确定偏差量;所述目标三维点的高度低于所述第一点云数据中大部分三维点的高度;
使用所述偏差量矫正所述第一点云数据中的三维点的高度;
根据矫正后的第一点云数据进行障碍物检测。
相关技术中的障碍物检测方法使用点云数据进行检测,而点云数据为非结构化数据,需要进一步将其转换成能够处理的结构化数据,其中点云数据中的三维点的深度决定了转换后的结构化数据的数据量大小,在点云数据是在较远距离处采集得到的情况下,其三维点的深度较大,经转换后得到结构化数据的数据量也较大,需要较长的处理时间,造成障碍物检测速度慢的问题,不满足某些场景(如自动驾驶场景)下的实时性需求。
基于此,第三方面,本申请实施例提供了一种障碍物检测方法,包括:
获取第一点云数据;
统计所述第一点云数据中相邻的三维点之间的距离;
如果所述相邻的三维点之间的距离大于预设距离,对所述相邻的三维点之间的距离进行压缩;
根据压缩后的第一点云数据进行障碍物检测。
第四方面,本申请实施例提供了一种障碍物检测装置,包括一个或多个处理器和存储有可执行指令的存储器;
所述一个或多个处理器在执行所述可执行指令时,被单独或共同地配置成:
获取第一点云数据;
统计所述第一点云数据中相邻的三维点之间的距离;
如果所述相邻的三维点之间的距离大于预设距离,对所述相邻的三维点之间的距离进行压缩;
根据压缩后的第一点云数据进行障碍物检测。
第五方面,本申请实施例提供了一种可移动平台,包括:
机体;
动力系统,安装在所述机体内,用于为所述可移动平台提供动力;以及,
如第二方面或第四方面所述的障碍物检测装置。
第六方面,本申请实施例提供了一种计算机可读存储介质,所述计算机可读存储介质存储有可执行指令,所述可执行指令被处理器执行时实现如第一方面或第三方面所述的方法。
本申请实施例所提供的一种障碍物检测方法,对获取的第一点云数据进行高度矫正处理,实现将所述第一点云数据均矫正到同一高度基准面附近,消除或者降低高度基准面有所偏差(如地形起伏)所带来的影响,有利于提高后续利用矫正后的第一点云数据进行障碍物检测过程的准确性。
本申请实施例所提供的一种障碍物检测方法,对获取的第一点云数据中三维点之间的距离过大的这部分空间进行压缩删减,使得点云更稠密化,使用压缩后的第一点云数据进行障碍物检测,有利于提高处理速度,实现更快获得障碍物检测结果,满足某些场景下的实时性需求。
附图说明
为了更清楚地说明本申请实施例中的技术方案,下面将对实施例描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本申请的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动性的前提下,还可以根据这些附图获得其他的附图。
图1A是本申请实施例提供的自动驾驶场景的示意图;
图1B是本申请实施例提供的一种自动驾驶车辆的结构示意图;
图2、图7是本申请实施例提供的障碍物检测装置的不同流程示意图;
图3A、图3B、图3C是本申请实施例提供的有关于点云的高度矫正的示意图;
图4是本申请实施例提供的多个三维点集合的示意图;
图5A、5B是本申请实施例提供的有关于点云压缩的示意图;
图6是本申请实施例提供的一种障碍物检测模型的结构示意图;
图8是本申请实施例提供的一种障碍物检测装置的结构示意图;
图9是本申请实施例提供的一种可移动平台的结构示意图。
具体实施方式
下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例仅仅是本申请一部分实施例,而不是全部的实施例。基于本申请中的实施例,本领域普通技术人员在没有做出创造性劳动前提下所获得的所有其他实施例,都属于本申请保护的范围。
在实现本申请实施例的过程中,发明人发现,随着距离的增加,地形起伏对障碍物检测过程带来的影响逐渐增加。例如当地面仅产生1°的倾斜角时,在距可移动平台200m处就会产生3.5m的高度差,在距可移动平台400m处会产生7m的高度差,可见,地形起伏使得远距离处的地面有所偏差,而可移动平台利用携带的有效载荷从远距离处获得的环境信息有限,无法准确获得有关于地面的信息,从而导致相关技术中的障碍物检测方法对于远距离的障碍物检测结果的准确率不高。
示例性的,在自动驾驶应用场景中,自动驾驶车辆使用激光雷达采集自动驾驶车辆周边环境的点云数据,然后使用点云数据进行障碍物检测得到周边环境中其他车辆的位置信息。随着距离的增加,地形起伏对障碍物检测过程带来的影响逐渐增加,地形起伏使得远距离处的地面有所偏差,而从远距离处采集到的点云较为稀疏,可能无法提取到有效的地面三维点,使得相关技术中的点云地平面矫正算法无法适用,进而导致相关技术中的障碍物检测方法对于远距离的障碍物检测不准确。
基于此,本申请实施例提供了一种障碍物检测方法,在获取第一点云数据之后,能够根据所述第一点云数据中的一个或多个目标三维点的高度确定偏差量,所述目标三维点为所述第一点云数据中高度较低的三维点,然后使用所述偏差量矫正所述第一点云数据中的三维点的高度,实现将所述第一点云数据均矫正到同一高度基准面附近,消除或者降低高度基准面有所偏差(如地形起伏)所带来的影响,有利于提高后续利用矫正后的第一点云数据进行障碍物检测过程的准确性。
其中,所述障碍物检测方法可以应用于障碍物检测装置,所述障碍物检测装置可以是具有数据处理功能的芯片、集成电路或者电子设备等。
如果所述障碍物检测装置是具有数据处理功能的芯片或者集成电路,所述障碍物检测装置包括但不限于例如中央处理单元(Central Processing Unit,CPU)、数字信号处理器(Digital Signal Processor,DSP)、专用集成电路(Application Specific Integrated Circuit,ASIC)或者现成可编程门阵列(Field-Programmable Gate Array,FPGA)等;其中,所述障碍物检测装置可以安装于电子设备上。
如果所述障碍物检测装置是具有数据处理功能的电子设备,所述电子设备包括但不限于可移动平台、终端设备或者服务器等。其中,所述可移动平台的示例包括但不限于无人飞行器、无人驾驶车辆、云台、无人驾驶船只或者移动机器人等。所述终端设备的示例包括但不限于:智能电话/手机、平板计算机、个人数字助理(PDA)、膝上计算机、台式计算机、媒体内容播放器、视频游戏站/系统、虚拟现实系统、增强现实系统、可穿戴式装置(例如,手表、眼镜、手套、头饰(例如,帽子、头盔、虚拟现实头戴耳机、增强现实头戴耳机、头装式装置(HMD)、头带)、挂件、臂章、腿环、鞋子、马甲)、遥控器、或者任何其他类型的装置。
在一示例性的应用场景中,以可移动平台为自动驾驶车辆,所述自动驾驶车辆包括有所述障碍物检测装置为例进行说明,图1A示出了自动驾驶车辆100的行驶场景,图1B示出了自动驾驶车辆100的结构图,所述自动驾驶车辆100上可以安装有用于获取点云数据的激光雷达10和所述障碍物检测装置20。示例性的,所述激光雷达10的数量可以是一个或多个。可以理解的是,所述激光雷达10安装位置可依据实际应用场景进行具体设置,示例性的,其中一个激光雷达10可以安装于自动驾驶车辆100的车头。
在所述自动驾驶车辆100行驶过程中,所述激光雷达10采集所述自动驾驶车辆100周围的物体的点云数据并传输给所述自动驾驶车辆100中的障碍物检测装置20。所述障碍物检测装置20获取所述点云数据,并基于本申请实施例的障碍物检测方法对点云数据进行高度矫正之后,基于矫正后的点云数据进行障碍物检测,得到检测结果。
在获取所述检测结果之后,在第一种可能的实现方式中,所述自动驾驶车辆100可以使用所述检测结果进行避障决策或者进行路线规划;在第二种可能的实现方式中,可以将所述检测结果显示在所述自动驾驶车辆100的界面或者与所述自动驾驶车辆100通信连接的终端的界面,以便让用户了解自动驾驶车辆100的行驶情况以及自动驾驶车辆100周边的路况;在第三种可能的实现方式中,可以将所述检测结果传输给所述自动驾驶车辆100中的其他部件,以便所述其他部件基于所述检测结果控制所述自动驾驶车辆100安全可靠地工作。
接下来对本申请实施例提供的障碍物检测方法进行说明:请参阅图2,为本申请实施例提供的一种障碍物检测方法的流程示意图,所述方法可以由障碍物检测装置来执行,所述方法包括:
在步骤S101中,获取第一点云数据。
在步骤S102中,根据所述第一点云数据中的一个或多个目标三维点的高度确定偏 差量;所述目标三维点的高度低于所述第一点云数据中大部分三维点的高度。
在步骤S103中,使用所述偏差量矫正所述第一点云数据中的三维点的高度。
在步骤S104中,根据矫正后的第一点云数据进行障碍物检测。
在一些实施例中,所述第一点云数据可以是探测装置(例如激光雷达,或者其他可以获得点云数据的装置)采集到的原始点云数据,所述障碍物检测装置可以对所述探测装置采集到的原始点云数据进行高度矫正处理。
在另一些实施例中,考虑到所述探测装置采集到的原始点云数据中存在部分不需要进行高度矫正的点云数据,比如距所述探测装置较近距离处的点云数据,近距离处的点云数据是较为密集的,能够提取到有效的地面三维点,加之地形起伏对于近距离处的点云数据影响不大,因此可以考虑不对近距离处的点云数据进行高度矫正处理。
则所述障碍物检测装置可以根据三维点的深度,从探测装置采集到的原始点云数据中确定需要矫正高度的第一点云数据和无需矫正高度的第二点云数据,然后对所述第一点云数据中的三维点进行高度矫正处理。其中,矫正后的第一点云数据的高度基准面比未矫正的第一点云数据的高度基准面,更靠近所述第二点云数据的高度基准面。换句话说,矫正后的第一点云数据的高度基准面与所述第二点云数据的高度基准面之间的高度差,小于未矫正的第一点云数据的高度基准面与所述第二点云数据的高度基准面之间的高度差。本实施例实现对第一点云数据进行高度矫正处理,将第一点云数据的高度基准面矫正到第二点云数据的高度基准面附近,消除或者降低了高度基准面存在起伏所带来的影响,有利于提高障碍物检测的准确性。
示例性的,所述第一点云数据和所述第二点云数据属于不同深度区间;其中,所述第一点云数据所属深度区间的最小深度大于或等于所述第二点云数据所属深度区域的最大深度。比如所述需要矫正高度的第一点云数据可以是所述原始点云数据中三维点的深度大于预设深度阈值的部分,所述无需矫正高度的第二点云数据是所述原始点云数据中三维点的深度小于或等于预设深度阈值的部分,所述预设深度阈值可依据实际应用场景进行具体设置;比如在自动驾驶场景下,所述预设深度阈值为150米、180米或者200米等;本实施例实现对远距离下的点云数据进行高度矫正,有利于提高远距离障碍物检测的准确性。
在一个例子中,在自动驾驶场景下,由自动驾驶车辆上的激光雷达采集到如图3A所示的原始点云数据,图3A示出了原始点云数据的侧视图,X轴指向自动驾驶车辆前进的方向,Z轴表示高度。自动驾驶车辆中的障碍物检测装置在获取所述原始点云数据之后,根据所述原始点云数据中三维点的深度,确定所述原始点云数据中需要矫 正高度的第一点云数据和无需矫正高度的第二点云数据,比如如图3B所示。然后所述障碍物检测装置可以对第一点云数据中的三维点进行高度矫正处理,如图3C所示,矫正后的第一点云数据的地平面比未矫正的第一点云数据的地平面,更靠近所述第二点云数据的地平面,实现消除或者降低了地形存在起伏所带来的影响,能更精确稳定地检测出远距离的障碍物。
在一些实施例中,在步骤S102及步骤S103中,在进行高度矫正处理过程中,所述障碍物检测装置可以根据所述第一点云数据中的三维点的深度,获取一个或多个目标三维点,所述目标三维点的高度低于所述第一点云数据中大部分三维点的高度,换句话说,所述目标三维点的高度低于所述第一点云数据中超过半数以上的三维点的高度;然后,所述障碍物检测装置根据所述一个或多个目标三维点的高度确定偏差量,并根据所述偏差量矫正所述第一点云数据中的三维点的高度,本实施例实现将所述第一点云数据均矫正到同一高度基准面附近,消除或者降低高度基准面起伏所带来的影响,有利于提高后续障碍物检测过程的准确性。
示例性的,可以按照高度大小对所述第一点云数据中的三维点进行排序,从高度低的一侧获取所述一个或多个目标三维点;也可以获取高度最低的一个或多个目标三维点,换句话说,所述一个或多个目标三维点的高度低于所述第一点云数据中除所述目标三维点以外的其他三维点的高度。
示例性的,所述障碍物检测装置可以根据所述第一点云数据中的一个或多个目标三维点的高度统计值确定偏差量,所述统计值包括:平均数、中位数或者众数,从而有利于降低个别误差影响;则所述障碍物检测装置可以根据所述偏差量与所述三维点的高度之间的差值确定矫正后的三维点的高度,比如矫正后的三维点的高度为所述三维点的高度与所述偏差量之差,又比如矫正后的三维点的高度为所述偏差量与所述三维点的高度之间的差值与预设矫正系数的乘积,所述预设矫正系数可依据实际应用场景进行具体设置。
在一些实施例中,考虑到在不同的距离处所产生的偏差程度是不同的,例如上述提到的当地面仅产生1°的倾斜角时,在200m处就会产生3.5m的高度差,在400m处产生7m的高度差。为了提高点云矫正过程中的准确性,在获取所述第一点云数据之后,所述障碍物检测装置可以根据所述第一点云数据中的三维点的深度,划分所述第一点云数据,获得多个三维点集合;对于每一个三维点集合,根据所述三维点集合中的三维点的高度,获取一个或多个目标三维点,所述目标三维点的高度低于所述三维点集合中大部分三维点的高度,换句话说,所述目标三维点的高度低于所述三维点 集合中超过半数以上的三维点的高度,然后,所述障碍物检测装置根据所述三维点集合中的一个或多个目标三维点的高度确定偏差量,并使用所述偏差量矫正所述三维点集合中的三维点的高度。本实施例考虑到不同的距离处的高度基准面的偏差程度不同的问题,基于深度来划分第一点云数据获得多个三维点集合,并对各个三维点集合分别确定偏差量并矫正,能够消除或者降低高度基准面不同程度的起伏所带来的影响,将第一点云数据均矫正到同一高度基准面附近,从而有利于提高后续障碍物检测过程的准确性。
在一些实施例中,为了降低某些误差点的影响,在获取第一点云数据之后,所述障碍物检测装置可以确定所述第一点云数据中的三维点与探测装置之间的角度,然后可以将所述角度大于预设角度的三维点认为是噪点,并将其滤除,从而有利于降低个别误差因素的影响,提高后续点云高度矫正过程和障碍物检测过程的准确性。
在一个例子中,在自动驾驶场景下,所述障碍物检测装置获取第一点云数据,所述第一点云数据可以是探测装置采集的原始点云数据,也可以是所述原始点云数据中基于三维点的深度确定的需要矫正高度的部分点云数据。接着,所述障碍物检测装置确定所述第一点云数据中的三维点与探测装置之间的角度并将所述角度大于预设角度的三维点滤除;所述障碍物检测装置将滤除了三维点的第一点云数据按照三维点的深度进行划分,获得多个三维点集合,比如如图4所示,图4示出了第一点云数据的侧视图,图4示出了根据深度划分的5个三维点集合,分别为三维点集合A、三维点集合B、三维点集合C、三维点集合D和三维点集合E。进一步地,所述障碍物检测装置统计每个三维点集合中高度最低的至少一个三维点,将高度最低的至少一个三维点的高度的中位数作为该三维点集合的偏差量,将该三维点集合中的所有三维点的高度均减去所述偏差量,完成第一点云数据的高度矫正,实现将第一点云数据均矫正到同一地平面附近,从而有利于提高后续障碍物检测过程的准确性。
在一些实施例中,在步骤S104中,在获取矫正后的第一点云数据之后,所述障碍物检测装置可以使用所述矫正后的第一点云数据进行障碍物检测;示例性的,可以对所述矫正后的第一点云数据进行特征提取,获得点云特征,然后根据所述点云特征进行障碍物检测,获得障碍物检测结果。
其中,考虑到点云数据是非结构化数据,需要处理为可进行数据分析的格式,而点云数据的处理方式可以是点云三维网格化处理,例如可以按照预设分辨率将对所述矫正后的第一点云数据进行栅格划分,获得点云数据的多个体素及其体素信息,然后通过所述障碍物检测模型对所述体素信息进行特征提取以获得点云特征。示例性的, 所述体素信息可以包括体素值(作为例子,如果该体素中存在三维点,则体素值为1,否则为0)、该体素的点云密度或者反射率强度等信息。作为例子,所述点云数据可以被栅格化为H*W*C的三维网格(其中,H和W分别代表长和宽,C表示所述三维网格的深度),每个网格表示一个体素,如果该体素中存在三维点,则体素值为1,否则为0,从而获得一个包括1和0的三维矩阵;进一步地,为了提高障碍物检测结果的准确性,所述三维矩阵中对应于每个体素的位置还可以携带所述点云密度或者反射率强度等信息。本实施例将不规则点云处理成规则的表示形式,以便可以从规则表示的信息中提取更多有用的点云特征,有利于提高障碍物检测结果的准确性。
在一些实施例中,考虑到点云数据为非结构化数据,需要进一步将其转换成所述障碍物检测装置能够处理的结构化数据,其中点云数据中的三维点的深度决定了转换后的结构化数据的数据量大小,比如上述提到所述点云数据可以被栅格化为H*W*C的三维网格,其中,点云数据中的三维点的深度越大,栅格化后的三维网格可能越大,最终得到的三维矩阵的尺寸越大,即结构化数据的数据量越多。在点云数据是在较远距离处采集得到的情况下,其三维点的深度较大,经转换后得到三维矩阵的尺寸越大,即结构化数据的数据量也较大,需要较长的处理时间,不满足某些场景(如自动驾驶场景)下的实时性需求;而发明人发现,在较远距离处采集得到的点云数据通常较为稀疏,有大部分空间是没有三维点的,使得经转换得到的结构化数据中也包含了没有三维点的这部分空间的数据,对这部分数据的处理造成冗余。
基于此,在获取所述矫正的第一点云数据之后,为了进一步提高处理速度,本实施例实现可以对所述矫正的第一点云数据中三维点之间的距离过大的这部分空间进行压缩删减,使得点云更稠密化,并使用压缩后的第一点云数据进行障碍物检测,从而可以显著减少结构化数据的数据量,实现更快获得障碍物检测结果,满足某些场景下的实时性需求。
在一种可能的实现方式中,所述障碍物检测装置可以统计所述第一点云数据中相邻的三维点之间的距离,如果所述相邻的三维点之间的距离大于预设距离,对所述相邻的三维点之间的距离进行压缩,进而使用压缩后的第一点云数据进行障碍物检测。本实施例实现对相邻的三维点之间距离过大的这部分空间进行压缩删减,使得点云更稠密化。
其中,所述预设距离可依据实际应用场景进行具体设置,例如所述预设距离可以根据实际应用场景中可能遇到的障碍物的尺寸所确定,例如所述预设距离大于障碍物的最大尺寸。则如果相邻的三维点之间的距离小于或者等于预设距离,表明所述相邻 的三维点可能指示同一障碍物,所述相邻的三维点之间的距离无需压缩;如果相邻的三维点之间的距离大于预设距离,表明所述相邻的三维点可能指示不同的障碍物,所述相邻的三维点之间的空间是冗余空间,这时候可以对所述相邻的三维点之间的距离进行压缩。
在一些实施例中,所述相邻的三维点之间的距离包括沿指定方向的距离。示例性的,当所述障碍物检测装置安装于可移动平台的情况下,所述第一点云数据是由探测装置安装在可移动平台上时获取到的,所述指定方向包括:第一方向和第二方向;所述第一方向为所述可移动平台的移动方向,所述第二方向与所述第一方向相交。本实施例实现在至少两个方向对所述矫正后的第一点云数据进行压缩,进一步提高点云数据的稠密度。
作为例子,在自动驾驶场景,请参阅图5A,图5A为点云数据的俯视图,第一方向为X方向,即自动驾驶车辆的前进方向,第二方向为Y方向,即所述自动驾驶车辆的横向方向(或者说左右方向),所述障碍物检测装置在X方向和Y方向上对点云数据进行压缩,得到如图5B所示的压缩后的点云数据。当然,可以理解的是,所述第一方向和所述第二方向也可以是其他方向,可依据实际应用场景进行具体设置,本实施例对此不做任何限制。
进一步地,为了避免在压缩之后指示不同障碍物的三维点在后续的障碍物检测过程中因距离过近被误认为是指示同一障碍物,所述障碍物检测装置可以根据预设的障碍物尺寸,对所述相邻的三维点之间的距离进行压缩;其中,压缩后的相邻的三维点之间的距离不小于所述预设的障碍物尺寸,保证所述相邻的三维点在压缩之后仍保持一定距离,该距离使得在后续的障碍物检测过程中压缩后的相邻两个三维点不会被误认为是指示同一障碍物,从而有利于保证后续的障碍物检测过程的准确性。可以理解的是,所述预设的障碍物尺寸可根据实际应用场景中可能遇到的障碍物的尺寸所确定,比如在自动驾驶场景中,所述预设的障碍物尺寸是车辆,比如可以设置车辆的尺寸为长度5米,宽度2米。
在另一种可能的实现方式中,所述障碍物检测模型可以对矫正后的第一点云数据中的三维点进行聚类处理,获得多个点云聚类结果,不同的点云聚类结果可能指示不同的障碍物,每个点云聚类结果中的三维点可能指示同一个障碍物,然后将相邻的点云聚类结果之间的距离进行压缩,进而使用压缩后的第一点云数据进行障碍物检测。本实施例实现对相邻的点云聚类结果之间的这部分空间进行压缩删减,使得点云更稠密化。
示例性的,可以在相邻两个点云聚类结果的聚类中心之间的距离大于预设距离的情况下,将相邻的点云聚类结果之间的距离进行压缩,所述预设距离可依据实际应用场景进行具体设置,例如所述预设距离可以根据实际应用场景中可能遇到的障碍物的尺寸所确定,例如所述预设距离大于障碍物的最大尺寸,相邻两个点云聚类结果的聚类中心之间的距离大于预设距离,进一步表明相邻两个点云聚类结果有极大概率指示不同的障碍物,所述相邻两个点云聚类结果之间的空间是冗余空间,这时候可以对所述相邻两个点云聚类结果之间的距离进行压缩。
可以理解的是,本实施例对此聚类处理过程中使用的聚类算法不做任何限制,可依据实际应用场景进行具体设置,比如可以使用K-MEANS聚类算法、均值偏移聚类算法或者层次聚类算法等。
在一些实施例中,所述相邻的点云聚类结果之间的距离包括沿指定方向的距离。示例性的,当所述障碍物检测装置安装于可移动平台的情况下,所述第一点云数据是由探测装置安装在可移动平台上时获取到的,所述指定方向包括:第一方向和第二方向;所述第一方向为所述可移动平台的移动方向,所述第二方向与所述第一方向相交。作为例子,在自动驾驶场景,所述第一方向可以是自动驾驶车辆的前进方向,所述第二方向可以是所述自动驾驶车辆的横向方向(或者说左右方向)。本实施例实现在至少两个方向对所述矫正后的第一点云数据进行压缩,进一步提高点云数据的稠密度。
进一步地,为了避免在压缩之后指示不同障碍物的点云聚类结果在后续的障碍物检测过程中因距离过近被误认为是指示同一障碍物,所述障碍物检测装置可以根据预设的障碍物尺寸,对所述相邻的点云聚类结果之间的距离进行压缩;其中,压缩后的相邻的点云聚类结果之间的距离不小于所述预设的障碍物尺寸,保证所述相邻的三维点在压缩之后仍保持一定距离,该距离可以使得在后续的障碍物检测过程中不会被误认为是指示同一障碍物,从而有利于保证后续的障碍物检测过程的准确性。可以理解的是,所述预设的障碍物尺寸可根据实际应用场景中可能遇到的障碍物的尺寸所确定。
在对所述相邻的三维点之间的距离进行压缩之后,或者对相邻的点云聚类结果之间的距离进行压缩之后,所述障碍物检测装置可以记录相关压缩信息以便后续针对点云距离的还原过程,所述压缩信息至少包括:压缩位置和/或压缩距离。
在一些实施例中,所述障碍物检测装置可以使用矫正后的第一点云数据进行障碍物检测;或者为了提高处理速度,可以对所述矫正后的第一点云数据的部分进行压缩之后,对压缩后的点云数据进行障碍物检测。
在一种可能的实现方式中,所述障碍物检测装置安装有训练好的障碍物检测模型, 所述障碍物检测装置可以将所述矫正后的第一点云数据或者所述压缩后的第一点云数据输入所述障碍物检测模型中,通过所述障碍物检测模型对所述矫正后的第一点云数据或者所述压缩后的第一点云数据进行特征提取以获得点云特征,并根据所述点云特征进行障碍物检测以获得障碍物检测结果。
其中,所述障碍物检测结果包括障碍物的置信度和/或状态信息;所述置信度表征检测的物体为障碍物的概率;所述状态信息包括以下至少一项:所述障碍物的类型、尺寸、位置以及朝向。例如在自动驾车场景下,障碍物可以是其他车辆,每辆车的检测结果可以描述为[conf,cls0,cls1,cls2,x,y,z,l,w,h,sinθ,cosθ]这样一个长度为12的数组。其中conf表示该车辆的置信度,cls0,cls1,cls2分别表示车辆是小车、客车和卡车的概率,x,y,z表示车辆中心点相对于激光雷达坐标系的位置,l,w,h表示车辆的长宽高大小,sinθ,cosθ共同表示车辆的车头方向。
障碍物检测模型的训练过程可以是:先通过建模表示出一个模型,再通过构建评价函数对模型进行评价,最后根据样本数据及最优化方法对评价函数进行优化,把模型调整到最优。
其中,建模是将实际问题转化成为计算机可以理解的问题,即将实际的问题转换成计算机可以表示的方式。建模一般是指基于大量样本数据估计出来模型的目标函数的过程。
评价的目标是判断已建好的模型的优劣。对于第一步中建好的模型,评价是一个指标,用于表示模型的优劣。这里就会涉及一些评价的指标以及一些评价函数的设计。在机器学习中会有针对性的评价指标。例如,在建模完成后,需要为模型设计一个损失函数,来评价模型的输出误差。
优化的目标是评价函数。即利用最优化方法,对评价函数进行最优化求解,找到评价最高的模型。例如,可以通过诸如梯度下降法等最优化方法,找到损失函数的输出误差的最小值(最优解),将模型的参数调到最优。
可以这么理解,要训练一个模型之前,首先确定出一个合适的参数估计方法,再利用这种参数估计方法,把这个模型的目标函数中的各个参数估计出来,进而确定出目标函数最终的数学表达式。
示例性的,用于训练所述障碍物检测模型的样本数据可以包括是点云数据,进一步,考虑到点云数据是非结构化数据,需要处理为可输入至障碍物检测模型的格式,例如将点云数据进行栅格处理,得到点云数据的每个体素及其体素信息,将点云数据的每个体素对应的体素信息作为障碍物检测模型的输入。
本实施例中障碍物检测模型的训练过程可以是有监督训练,也可以是无监督训练。在一些例子中,可以采用有监督训练方式以提高训练速度,样本数据可以标注真实值,通过有监督的训练方式,可以提高模型训练的速度和精确度。作为例子,所述真实值包括有:障碍物的置信度(表征检测的物体为障碍物的概率)以及物体的状态信息;所述状态信息可以包括以下至少一种:所述障碍物的类型、尺寸、位置以及朝向。
利用上述样本数据,可以通过机器学习获得所述障碍物检测模型。机器学习模型可以是神经网络模型等,例如基于深度学习的神经网络模型。而障碍物检测的具体结构设计,是训练过程的其中一个重要方面。本实施例中,请参阅图6,障碍物检测模型200至少包括:特征提取网络201以及物体预测网络202。所述特征提取网络201用于对所述矫正后的第一点云数据进行卷积操作,获得所述点云特征;所述物体预测网络202用于根据所述点云特征进行障碍物检测,获得障碍物检测结果。示例性的,所述特征提取网络201可以包括多个卷积层,所述多个卷积层可以使用不同尺度的卷积核,则可以通过所述特征提取网络对所述矫正后的第一点云数据进行卷积操作以获得不同尺寸的点云特征。
在一些实施例中,在点云数据是在远距离处采集到的情况下,通常远距离处的点云数据会更加稀疏,信息量更少,因此本实施例使用轻量化的特征提取网络来进行点云特征提取,所述特征提取网络包括的卷积层的数量小于预设值,在有效提取场景语义信息的同时,显著提高了特征提取网络的速度;作为例子,比如所述特征提取网络包括6个卷积层和5个池化层。
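As an illustration of such a lightweight backbone, here is a PyTorch-style sketch with 6 convolutional layers and 5 pooling layers; only that layer count is taken from this paragraph, while the channel widths, kernel sizes, and activations are assumptions.

```python
import torch.nn as nn

def light_backbone(in_ch=2):
    """A minimal 6-conv / 5-pool feature extractor over the voxel grid.
    Channel widths and kernel sizes are illustrative choices."""
    chs = [in_ch, 32, 64, 64, 128, 128, 256]
    layers = []
    for i in range(6):
        layers += [nn.Conv2d(chs[i], chs[i + 1], kernel_size=3, padding=1),
                   nn.ReLU(inplace=True)]
        if i < 5:                        # 5 pooling layers in total
            layers.append(nn.MaxPool2d(2))
    return nn.Sequential(*layers)
```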
训练过程的其中另一个重要方面,需要根据业务需求设计合适的损失函数。损失函数也称之为代价函数,在有监督模型训练的场景下,样本数据中标注有真实值,损失函数用来估量模型的预测值与真实值之间的误差。在一些例子中,可以利用一些已有的如对数损失函数、平方损失函数、指数损失函数、0/1损失函数等来构成相应场景的损失函数。
训练过程中,需要利用最优化方法对评价函数进行最优化求解,找到评价最高的模型。例如,可以通过诸如梯度下降法等最优化方法,找到损失函数的输出误差的最小值(最优解),将模型的参数调到最优,即求解到模型中各网络层的最优系数。在一些例子中,求解的过程可以是通过计算模型的输出和损失函数的误差值,以求解对模型参数进行调整的梯度。作为例子,可以调用反向传播函数,来计算梯度,将所述损失函数的计算结果反向传播至所述障碍物检测模型中,以使所述障碍物检测模型更新模型参数。
通过上述训练过程,训练结束获得障碍物检测模型,获得的障碍物检测模型还可利用测试样本进行测试,以检验障碍物检测模型的识别准确度。最终训练好的障碍物检测模型可以设置于物体识别装置中,示例性的,所述障碍物检测装置可以是可移动平台,或者所述物体识别装置作为芯片安装于可移动平台中。
在所述可移动平台移动过程中,可以通过配置于可移动平台上的探测装置(如激光雷达或者具有深度信息采集功能的拍摄装置获取点云数据),然后所述可移动平台中的障碍物检测装置根据所述点云数据进行高度矫正处理、压缩处理等,并将矫正后的第一点云数据或者压缩后的第一点云数据输入所述障碍物检测装置中,从而获取所述障碍物检测装置输出的检测结果,所述障碍物检测装置包括障碍物的置信度和状态信息。
进一步地,为了方便后续基于检测结果的处理过程,所述检测结果为置信度大于预设阈值的数据,而对于置信度不大于预设阈值的数据,表明不是障碍物,无需对置信度不大于预设阈值的数据进行进一步处理,所述预设阈值可依据实际应用场景进行具体设置;作为例子,对于待识别物体,可以识别一系列候选框,每个候选框对应有障碍物的置信度和状态信息,基于所述候选框对应的置信度,可以确定候选框中的待识别物体属于障碍物的概率,将各个候选框对应的置信度进行排序,排序后按照设定阈值进行筛选,大于设定阈值的可以认为检测出一个障碍物,进而得到最终的检测结果。
在一些实施例中,在使用压缩后的第一点云数据进行障碍物检测的情况下,所述障碍物检测装置将压缩后的第一点云数据输入所述障碍物检测模型中,通过所述障碍物检测模型获得障碍物检测结果,所述障碍物检测结果为压缩后的第一点云数据对应的结果,需要根据预先记录的压缩信息来得到未压缩的第一点云数据对应的障碍物检测结果。示例性的,所述障碍物检测装置可以基于所述障碍物检测结果,确定每个障碍物对应的三维点,然后根据所述压缩信息还原所述障碍物对应的三维点之间的距离,最后根据还原后的第一点云数据更新所述障碍物检测结果。
在一些例子中,比如所述障碍物检测结果包括所述障碍物的位置信息,在使用压缩后的第一点云数据进行障碍物检测的情况下,得到的所述障碍物的位置信息不是实际的位置信息,需要根据预先记录的压缩信息对障碍物对应的三维点之间的距离进行还原,基于还原后的点云数据中三维点之间的实际距离对所述障碍物的位置信息进行调整,进而得到未压缩的点云数据(或者说还原后的点云数据)对应的所述障碍物实际的位置信息。
相应的,相关技术中的障碍物检测方法使用点云数据进行检测,而点云数据为非结构化数据,需要进一步将其转换成能够处理的结构化数据,其中点云数据中的三维点的深度决定了转换后的结构化数据的数据量大小,在点云数据是在较远距离处采集得到的情况下,其三维点的深度较大,经转换后得到结构化数据的数据量也较大,需要较长的处理时间,不满足某些场景(如自动驾驶场景)下的实时性需求;而发明人发现,在较远距离处采集得到的点云数据通常较为稀疏,有大部分空间是没有三维点的,使得经转换得到的结构化数据中也包含了没有三维点的这部分空间的数据,对这部分数据的处理造成冗余,造成障碍物检测速度慢的问题。
基于此,请参阅图7,本申请实施例提供了一种障碍物检测方法,所述障碍物检测方法可以应用于障碍物检测装置,所述障碍物检测装置可以是具有数据处理功能的芯片、集成电路或者电子设备等,所述方法包括:
在步骤S201中,获取第一点云数据。
在步骤S202中,统计所述第一点云数据中相邻的三维点之间的距离。
在步骤S203中,如果所述相邻的三维点之间的距离大于预设距离,对所述相邻的三维点之间的距离进行压缩。
在步骤S204中,根据压缩后的第一点云数据进行障碍物检测。
本实施例中,在获取所述第一点云数据之后,为了进一步提高处理速度,本实施例实现可以对所述第一点云数据中三维点之间的距离过大的这部分空间进行压缩删减,使得点云更稠密化,可以显著减少转换后的结构化数据的数据量,使用压缩后的第一点云数据进行障碍物检测,可以实现更快获得障碍物检测结果,满足某些场景下的实时性需求。
在一种可能的实现方式中,所述障碍物检测装置可以统计所述第一点云数据中相邻的三维点之间的距离,如果所述相邻的三维点之间的距离大于预设距离,对所述相邻的三维点之间的距离进行压缩,进而使用压缩后的第一点云数据进行障碍物检测。本实施例实现对相邻的三维点之间距离过大的这部分空间进行压缩删减,使得点云更稠密化。
其中,所述预设距离可依据实际应用场景进行具体设置,例如所述预设距离可以根据实际应用场景中可能遇到的障碍物的尺寸所确定,例如所述预设距离大于障碍物的最大尺寸。则如果相邻的三维点之间的距离小于或者等于预设距离,表明所述相邻的三维点可能指示同一障碍物,所述相邻的三维点之间的距离无需压缩;如果相邻的 三维点之间的距离大于预设距离,表明所述相邻的三维点可能指示不同的障碍物,所述相邻的三维点之间的空间是冗余空间,这时候可以对所述相邻的三维点之间的距离进行压缩。
在一些实施例中,所述相邻的三维点之间的距离包括沿指定方向的距离。示例性的,当所述障碍物检测装置安装于可移动平台的情况下,所述第一点云数据是由探测装置安装在可移动平台上时获取到的,所述指定方向包括:第一方向和第二方向;所述第一方向为所述可移动平台的移动方向,所述第二方向与所述第一方向相交。本实施例实现在至少两个方向对所述矫正后的第一点云数据进行压缩,进一步提高点云数据的稠密度。
进一步地,为了避免在压缩之后指示不同障碍物的三维点在后续的障碍物检测过程中因距离过近被误认为是指示同一障碍物,所述障碍物检测装置可以根据预设的障碍物尺寸,对所述相邻的三维点之间的距离进行压缩;其中,压缩后的相邻的三维点之间的距离不小于所述预设的障碍物尺寸,保证所述相邻的三维点在压缩之后仍保持一定距离,该距离使得在后续的障碍物检测过程中压缩后的相邻两个三维点不会被误认为是指示同一障碍物,从而有利于保证后续的障碍物检测过程的准确性。可以理解的是,所述预设的障碍物尺寸可根据实际应用场景中可能遇到的障碍物的尺寸所确定,比如在自动驾驶场景中,所述预设的障碍物尺寸是车辆,比如可以设置车辆的尺寸为长度5米,宽度2米。
在另一种可能的实现方式中,所述障碍物检测模型可以对矫正后的第一点云数据中的三维点进行聚类处理,获得多个点云聚类结果,不同的点云聚类结果可能指示不同的障碍物,每个点云聚类结果中的三维点可能指示同一个障碍物,然后可以将相邻的点云聚类结果之间的距离进行压缩,进而使用压缩后的第一点云数据进行障碍物检测。本实施例实现对相邻的点云聚类结果之间的这部分空间进行压缩删减,使得点云更稠密化。
示例性的,可以在相邻两个点云聚类结果的聚类中心之间的距离大于预设距离的情况下,将相邻的点云聚类结果之间的距离进行压缩,所述预设距离可依据实际应用场景进行具体设置,例如所述预设距离可以根据实际应用场景中可能遇到的障碍物的尺寸所确定,例如所述预设距离大于障碍物的最大尺寸,相邻两个点云聚类结果的聚类中心之间的距离大于预设距离,进一步表明相邻两个点云聚类结果有极大概率指示不同的障碍物,所述相邻两个点云聚类结果之间的空间是冗余空间,这时候可以对所述相邻两个点云聚类结果之间的距离进行压缩。
可以理解的是,本实施例对此聚类处理过程中使用的聚类算法不做任何限制,可依据实际应用场景进行具体设置,比如可以使用K-MEANS聚类算法、均值偏移聚类算法或者层次聚类算法等。
在一些实施例中,所述相邻的点云聚类结果之间的距离包括沿指定方向的距离。示例性的,当所述障碍物检测装置安装于可移动平台的情况下,所述第一点云数据是由探测装置安装在可移动平台上时获取到的,所述指定方向包括:第一方向和第二方向;所述第一方向为所述可移动平台的移动方向,所述第二方向与所述第一方向相交。作为例子,在自动驾驶场景,所述第一方向可以是自动驾驶车辆的前进方向,所述第二方向可以是所述自动驾驶车辆的横向方向(或者说左右方向)。本实施例实现在至少两个方向对所述矫正后的第一点云数据进行压缩,进一步提高点云数据的稠密度。
进一步地,为了避免在压缩之后指示不同障碍物的点云聚类结果在后续的障碍物检测过程中因距离过近被误认为是指示同一障碍物,所述障碍物检测装置可以根据预设的障碍物尺寸,对所述相邻的点云聚类结果之间的距离进行压缩;其中,压缩后的相邻的点云聚类结果之间的距离不小于所述预设的障碍物尺寸,保证所述相邻的三维点在压缩之后仍保持一定距离,该距离可以使得在后续的障碍物检测过程中不会被误认为是指示同一障碍物,从而有利于保证后续的障碍物检测过程的准确性。可以理解的是,所述预设的障碍物尺寸可根据实际应用场景中可能遇到的障碍物的尺寸所确定。
在一实施例中,在所述对所述相邻的三维点之间的距离进行压缩之后,还包括:记录压缩信息。其中,所述压缩信息至少包括:压缩位置和/或压缩距离。
在一实施例中,所述根据压缩后的第一点云数据进行障碍物检测,包括:对所述压缩后的第一点云数据进行特征提取,获得点云特征;根据所述点云特征进行障碍物检测,获得障碍物检测结果。
在一实施例中,在所述获得障碍物检测结果之后,还包括:基于所述障碍物检测结果,确定每个障碍物对应的三维点;根据所述压缩信息,还原所述障碍物对应的三维点之间的距离;根据还原后的第一点云数据更新所述障碍物检测结果。
在一实施例中,所述障碍物检测结果包括障碍物的置信度和/或状态信息;所述置信度表征检测的物体为障碍物的概率;所述状态信息包括以下至少一项:所述障碍物的类型、尺寸、位置以及朝向。
在一实施例中,所述点云特征包括不同尺寸的点云特征。
在一实施例中,所述点云特征以及障碍物检测结果通过预先建立的障碍物检测模型获得。
在一实施例中,所述障碍物检测模型包括特征提取网络;所述特征提取网络用于对所述压缩后的第一点云数据进行卷积操作,获得所述点云特征。其中,所述特征提取网络包括的卷积层的数量小于预设值。
在一实施例中,所述对所述压缩后的第一点云数据进行特征提取,包括:对所述压缩后的第一点云数据进行栅格划分,获取体素信息;对所述体素信息进行特征提取。
在一实施例中,在所述统计所述第一点云数据中相邻的三维点之间的距离之前,还包括:根据所述第一点云数据中的一个或多个目标三维点的高度确定偏差量;所述目标三维点的高度低于所述第一点云数据中大部分三维点的高度;使用所述偏差量矫正所述第一点云数据中的三维点的高度。
则所述统计所述第一点云数据中相邻的三维点之间的距离,包括:统计矫正后的第一点云数据中相邻的三维点之间的距离。
在一实施例中,所述获取第一点云数据,包括:根据三维点的深度,从探测装置采集到的原始点云数据中确定需要矫正高度的第一点云数据和无需矫正高度的第二点云数据;其中,矫正后的第一点云数据的高度基准面比未矫正的第一点云数据的高度基准面,更靠近所述第二点云数据的高度基准面;或者说,矫正后的第一点云数据的高度基准面与所述第二点云数据的高度基准面之间的高度差,小于未矫正的第一点云数据的高度基准面与所述第二点云数据的高度基准面之间的高度差。
在一实施例中,所述第一点云数据和所述第二点云数据属于不同深度区间;其中,所述第一点云数据所属深度区间的最小深度大于或等于所述第二点云数据所属深度区域的最大深度。例如,所述需要矫正高度的第一点云数据中的三维点的深度大于预设深度阈值。
在一实施例中,在所述获取第一点云数据之后,还包括:根据所述第一点云数据中的三维点的深度,划分所述第一点云数据,获得多个三维点集合。
则所述根据所述第一点云数据中的一个或多个目标三维点的高度确定偏差量,并使用所述偏差量矫正所述第一点云数据中的三维点的高度,包括:对于每个所述三维点集合,根据所述三维点集合中的一个或多个目标三维点的高度确定偏差量,并使用所述偏差量矫正所述三维点集合中的三维点的高度;其中,所述目标三维点的高度低于所述三维点集合中大部分三维点的高度。
在一实施例中,所述一个或多个目标三维点的高度低于所述第一点云数据中除所述目标三维点以外的其他三维点的高度。
在一实施例中,所述偏差量根据所述一个或多个目标三维点的高度的统计值确定。 其中,所述统计值包括:平均数、中位数或者众数。
在一实施例中,矫正后的三维点的高度根据所述三维点的高度与所述偏差量之间的差值确定。
在一实施例中,在所述根据所述第一点云数据中的一个或多个目标三维点的高度确定偏差量之前,还包括:确定所述第一点云数据中的三维点与所述探测装置之间的角度;将所述角度大于预设角度的三维点滤除。
相应地,请参阅图8,本申请实施例还提供了一种障碍物检测装置20包括一个或多个处理器21和存储有可执行指令的存储器22;
所述一个或多个处理器21在执行所述可执行指令时,被单独或共同地配置成:
获取第一点云数据;
根据所述第一点云数据中的一个或多个目标三维点的高度确定偏差量;所述目标三维点的高度低于所述第一点云数据中大部分三维点的高度;
使用所述偏差量矫正所述第一点云数据中的三维点的高度;
根据矫正后的第一点云数据进行障碍物检测。
在一实施例中,所述处理器21还用于:根据三维点的深度,从探测装置采集到的原始点云数据中确定需要矫正高度的第一点云数据和无需矫正高度的第二点云数据。
其中,矫正后的第一点云数据的高度基准面比未矫正的第一点云数据的高度基准面,更靠近所述第二点云数据的高度基准面。
在一实施例中,所述第一点云数据和所述第二点云数据属于不同深度区间;其中,所述第一点云数据所属深度区间的最小深度大于或等于所述第二点云数据所属深度区域的最大深度。
在一实施例中,所述需要矫正高度的第一点云数据中的三维点的深度大于预设深度阈值。
在一实施例中,在所述获取第一点云数据之后,所述处理器21还用于:
根据所述第一点云数据中的三维点的深度,划分所述第一点云数据,获得多个三维点集合;
对于每个所述三维点集合,根据所述三维点集合中的一个或多个目标三维点的高度确定偏差量,并使用所述偏差量矫正所述三维点集合中的三维点的高度;其中,所述目标三维点的高度低于所述三维点集合中大部分三维点的高度。
在一实施例中,所述一个或多个目标三维点的高度低于所述第一点云数据中除所 述目标三维点以外的其他三维点的高度。
在一实施例中,所述偏差量根据所述一个或多个目标三维点的高度的统计值确定。
在一实施例中,所述统计值包括:平均数、中位数或者众数。
在一实施例中,矫正后的三维点的高度根据所述三维点的高度与所述偏差量之间的差值确定。
在一实施例中,在所述获取第一点云数据之后,所述处理器21还用于:确定所述第一点云数据中的三维点与探测装置之间的角度;将所述角度大于预设角度的三维点滤除。
在一实施例中,所述处理器21还用于:
对所述矫正后的第一点云数据进行特征提取,获得点云特征;
根据所述点云特征进行障碍物检测,获得障碍物检测结果。
在一实施例中,在所述对所述矫正后的第一点云数据进行特征提取之前,所述处理器21还用于:
统计所述第一点云数据中相邻的三维点之间的距离;
如果所述相邻的三维点之间的距离大于预设距离,对所述相邻的三维点之间的距离进行压缩;
对压缩后的第一点云数据进行特征提取。
在一实施例中,所述处理器21还用于:根据预设的障碍物尺寸,对所述相邻的三维点之间的距离进行压缩。
在一实施例中,压缩后的相邻的三维点之间的距离不小于所述预设的障碍物尺寸。
在一实施例中,所述相邻的三维点之间的距离包括沿指定方向的距离。
在一实施例中,所述第一点云数据是由探测装置安装在可移动平台上时获取到的。
所述指定方向包括:第一方向和第二方向。
所述第一方向为所述可移动平台的移动方向,所述第二方向与所述第一方向相交。
在一实施例中,在所述对所述相邻的三维点之间的距离进行压缩之后,所述处理器21还用于:记录压缩信息。
在一实施例中,所述压缩信息至少包括:压缩位置和/或压缩距离。
在一实施例中,在所述获得障碍物检测结果之后,所述处理器21还用于:
基于所述障碍物检测结果,确定每个障碍物对应的三维点;
根据所述压缩信息,还原所述障碍物对应的三维点之间的距离;
根据还原后的第一点云数据更新所述障碍物检测结果。
在一实施例中,所述障碍物检测结果包括障碍物的置信度和/或状态信息。
所述置信度表征检测的物体为障碍物的概率。
所述状态信息包括以下至少一项:所述障碍物的类型、尺寸、位置以及朝向。
在一实施例中,所述点云特征包括不同尺寸的点云特征。
在一实施例中,所述点云特征以及障碍物检测结果通过预先建立的障碍物检测模型获得。
在一实施例中,所述障碍物检测模型包括特征提取网络;所述特征提取网络用于对所述矫正后的第一点云数据进行卷积操作,获得所述点云特征。
在一实施例中,所述特征提取网络包括的卷积层的数量小于预设值。
在一实施例中,所述处理器21还用于:对所述矫正后的第一点云数据进行栅格划分,获取体素信息;对所述体素信息进行特征提取。
相应地,本申请实施例还提供了一种障碍物检测装置,包括一个或多个处理器和存储有可执行指令的存储器;
所述一个或多个处理器在执行所述可执行指令时,被单独或共同地配置成:
获取第一点云数据;
统计所述第一点云数据中相邻的三维点之间的距离;
如果所述相邻的三维点之间的距离大于预设距离,对所述相邻的三维点之间的距离进行压缩;
根据压缩后的第一点云数据进行障碍物检测。
示例性地,所述处理器还用于:根据预设的障碍物尺寸,对所述相邻的三维点之间的距离进行压缩。
示例性地,压缩后的相邻的三维点之间的距离不小于所述预设的障碍物尺寸。
示例性地,所述相邻的三维点之间的距离包括沿指定方向的距离。
示例性地,所述第一点云数据是由探测装置安装在可移动平台上时获取到的。
所述指定方向包括:第一方向和第二方向。
所述第一方向为所述可移动平台的移动方向,所述第二方向与所述第一方向相交。
示例性地,在所述对所述相邻的三维点之间的距离进行压缩之后,所述处理器还用于:记录压缩信息。
示例性地,所述压缩信息至少包括:压缩位置和/或压缩距离。
示例性地,所述处理器还用于:对所述压缩后的第一点云数据进行特征提取,获 得点云特征;根据所述点云特征进行障碍物检测,获得障碍物检测结果。
示例性地,在所述获得障碍物检测结果之后,所述处理器还用于:
基于所述障碍物检测结果,确定每个障碍物对应的三维点;
根据所述压缩信息,还原所述障碍物对应的三维点之间的距离;
根据还原后的第一点云数据更新所述障碍物检测结果。
示例性地,所述障碍物检测结果包括障碍物的置信度和/或状态信息。
所述置信度表征检测的物体为障碍物的概率。
所述状态信息包括以下至少一项:所述障碍物的类型、尺寸、位置以及朝向。
示例性地,所述点云特征包括不同尺寸的点云特征。
示例性地,所述点云特征以及障碍物检测结果通过预先建立的障碍物检测模型获得。
示例性地,所述障碍物检测模型包括特征提取网络;
所述特征提取网络用于对所述压缩后的第一点云数据进行卷积操作,获得所述点云特征。
示例性地,所述特征提取网络包括的卷积层的数量小于预设值。
示例性地,所述处理器还用于:对所述压缩后的第一点云数据进行栅格划分,获取体素信息;对所述体素信息进行特征提取。
示例性地,在所述统计所述第一点云数据中相邻的三维点之间的距离之前,所述处理器还用于:
根据所述第一点云数据中的一个或多个目标三维点的高度确定偏差量;所述目标三维点的高度低于所述第一点云数据中大部分三维点的高度;
使用所述偏差量矫正所述第一点云数据中的三维点的高度;
统计矫正后的第一点云数据中相邻的三维点之间的距离。
示例性地,所述处理器还用于:根据三维点的深度,从探测装置采集到的原始点云数据中确定需要矫正高度的第一点云数据和无需矫正高度的第二点云数据。
其中,矫正后的第一点云数据的高度基准面比未矫正的第一点云数据的高度基准面,更靠近所述第二点云数据的高度基准面。
示例性地,所述第一点云数据和所述第二点云数据属于不同深度区间;其中,所述第一点云数据所属深度区间的最小深度大于或等于所述第二点云数据所属深度区域的最大深度。
示例性地,所述需要矫正高度的第一点云数据中的三维点的深度大于预设深度阈 值。
示例性地,在所述获取第一点云数据之后,所述处理器还用于:
根据所述第一点云数据中的三维点的深度,划分所述第一点云数据,获得多个三维点集合;
对于每个所述三维点集合,根据所述三维点集合中的一个或多个目标三维点的高度确定偏差量,并使用所述偏差量矫正所述三维点集合中的三维点的高度;其中,所述目标三维点的高度低于所述三维点集合中大部分三维点的高度。
示例性地,所述一个或多个目标三维点的高度低于所述第一点云数据中除所述目标三维点以外的其他三维点的高度。
示例性地,所述偏差量根据所述一个或多个目标三维点的高度的统计值确定。
示例性地,所述统计值包括:平均数、中位数或者众数。
示例性地,矫正后的三维点的高度根据所述三维点的高度与所述偏差量之间的差值确定。
示例性地,在所述根据所述第一点云数据中的一个或多个目标三维点的高度确定偏差量之前,所述处理器还用于:
确定所述第一点云数据中的三维点与探测装置之间的角度;
将所述角度大于预设角度的三维点滤除。
相应地,请参阅图9,本申请实施例还提供了一种可移动平台300,包括:
机体40;
动力系统30,安装在所述机体40内,用于为所述可移动平台300提供动力;以及,
如上述的障碍物检测装置20。
示例性地,所述可移动平台300包括但不限于无人飞行器,自动驾驶车辆,移动机器人等。
示例性地,所述可移动平台300还包括探测装置如激光雷达,其用于采集点云数据。
在示例性实施例中,还提供了一种包括指令的非临时性计算机可读存储介质,例如包括指令的存储器,上述指令可由装置的处理器执行以完成上述方法。例如,非临时性计算机可读存储介质可以是ROM、随机存取存储器(RAM)、CD-ROM、磁带、软盘和光数据存储设备等。
一种非临时性计算机可读存储介质,当存储介质中的指令由终端的处理器执行时, 使得终端能够执行上述方法。
需要说明的是,在本文中,诸如第一和第二等之类的关系术语仅仅用来将一个实体或者操作与另一个实体或操作区分开来,而不一定要求或者暗示这些实体或操作之间存在任何这种实际的关系或者顺序。术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、物品或者设备不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、物品或者设备所固有的要素。在没有更多限制的情况下,由语句“包括一个……”限定的要素,并不排除在包括所述要素的过程、方法、物品或者设备中还存在另外的相同要素。
以上对本申请实施例所提供的方法和装置进行了详细介绍,本文中应用了具体个例对本申请的原理及实施方式进行了阐述,以上实施例的说明只是用于帮助理解本申请的方法及其核心思想;同时,对于本领域的一般技术人员,依据本申请的思想,在具体实施方式及应用范围上均会有改变之处,综上所述,本说明书内容不应理解为对本申请的限制。

Claims (102)

  1. 一种障碍物检测方法,其特征在于,包括:
    获取第一点云数据;
    根据所述第一点云数据中的一个或多个目标三维点的高度确定偏差量;所述目标三维点的高度低于所述第一点云数据中大部分三维点的高度;
    使用所述偏差量矫正所述第一点云数据中的三维点的高度;
    根据矫正后的第一点云数据进行障碍物检测。
  2. 根据权利要求1所述的方法,其特征在于,所述获取第一点云数据,包括:
    根据三维点的深度,从探测装置采集到的原始点云数据中确定需要矫正高度的第一点云数据和无需矫正高度的第二点云数据;
    其中,矫正后的第一点云数据的高度基准面比未矫正的第一点云数据的高度基准面,更靠近所述第二点云数据的高度基准面。
  3. 根据权利要求2所述的方法,其特征在于,所述第一点云数据和所述第二点云数据属于不同深度区间;
    其中,所述第一点云数据所属深度区间的最小深度大于或等于所述第二点云数据所属深度区域的最大深度。
  4. 根据权利要求2所述的方法,其特征在于,所述需要矫正高度的第一点云数据中的三维点的深度大于预设深度阈值。
  5. 根据权利要求1所述的方法,其特征在于,在所述获取第一点云数据之后,还包括:
    根据所述第一点云数据中的三维点的深度,划分所述第一点云数据,获得多个三维点集合;
    所述根据所述第一点云数据中的一个或多个目标三维点的高度确定偏差量,以及使用所述偏差量矫正所述第一点云数据中的三维点的高度,包括:
    对于每个所述三维点集合,根据所述三维点集合中的一个或多个目标三维点的高度确定偏差量,并使用所述偏差量矫正所述三维点集合中的三维点的高度;其中,所述目标三维点的高度低于所述三维点集合中大部分三维点的高度。
  6. 根据权利要求1所述的方法,其特征在于,所述一个或多个目标三维点的高度低于所述第一点云数据中除所述目标三维点以外的其他三维点的高度。
  7. 根据权利要求1所述的方法,其特征在于,所述偏差量根据所述一个或多个目标三维点的高度的统计值确定。
  8. 根据权利要求7所述的方法,其特征在于,所述统计值包括:平均数、中位数或者众数。
  9. 根据权利要求1所述的方法,其特征在于,矫正后的三维点的高度根据所述三维点的高度与所述偏差量之间的差值确定。
  10. 根据权利要求1所述的方法,其特征在于,在所述获取第一点云数据之后,还包括:
    确定所述第一点云数据中的三维点与探测装置之间的角度;
    将所述角度大于预设角度的三维点滤除。
  11. 根据权利要求1所述的方法,其特征在于,所述根据矫正后的第一点云数据进行障碍物检测,包括:
    对所述矫正后的第一点云数据进行特征提取,获得点云特征;
    根据所述点云特征进行障碍物检测,获得障碍物检测结果。
  12. 根据权利要求11所述的方法,其特征在于,在所述对所述矫正后的第一点云数据进行特征提取之前,还包括:
    统计所述第一点云数据中相邻的三维点之间的距离;
    如果所述相邻的三维点之间的距离大于预设距离,对所述相邻的三维点之间的距离进行压缩;
    所述对所述矫正后的第一点云数据进行特征提取,包括:
    对压缩后的第一点云数据进行特征提取。
  13. 根据权利要求12所述的方法,其特征在于,所述对所述相邻的三维点之间的距离进行压缩,包括:
    根据预设的障碍物尺寸,对所述相邻的三维点之间的距离进行压缩。
  14. 根据权利要求13所述的方法,其特征在于,压缩后的相邻的三维点之间的距离不小于所述预设的障碍物尺寸。
  15. 根据权利要求12所述的方法,其特征在于,所述相邻的三维点之间的距离包括沿指定方向的距离。
  16. 根据权利要求15所述的方法,其特征在于,所述第一点云数据是由探测装置安装在可移动平台上时获取到的;
    所述指定方向包括:第一方向和第二方向;
    所述第一方向为所述可移动平台的移动方向,所述第二方向与所述第一方向相交。
  17. 根据权利要求12所述的方法,其特征在于,在所述对所述相邻的三维点之间 的距离进行压缩之后,还包括:
    记录压缩信息。
  18. 根据权利要求17所述的方法,其特征在于,所述压缩信息至少包括:压缩位置和/或压缩距离。
  19. 根据权利要求17所述的方法,其特征在于,在所述获得障碍物检测结果之后,还包括:
    基于所述障碍物检测结果,确定每个障碍物对应的三维点;
    根据所述压缩信息,还原所述障碍物对应的三维点之间的距离;
    根据还原后的第一点云数据更新所述障碍物检测结果。
  20. 根据权利要求11所述的方法,其特征在于,所述障碍物检测结果包括障碍物的置信度和/或状态信息;
    所述置信度表征检测的物体为障碍物的概率;
    所述状态信息包括以下至少一项:所述障碍物的类型、尺寸、位置以及朝向。
  21. 根据权利要求11所述的方法,其特征在于,所述点云特征包括不同尺寸的点云特征。
  22. 根据权利要求11~21中任意一项所述的方法,其特征在于,所述点云特征以及障碍物检测结果通过预先建立的障碍物检测模型获得。
  23. 根据权利要求22所述的方法,其特征在于,所述障碍物检测模型包括特征提取网络;
    所述特征提取网络用于对所述矫正后的第一点云数据进行卷积操作,获得所述点云特征。
  24. 根据权利要求23所述的方法,其特征在于,所述特征提取网络包括的卷积层的数量小于预设值。
  25. 根据权利要求11所述的方法,其特征在于,所述对所述矫正后的第一点云数据进行特征提取,包括:
    对所述矫正后的第一点云数据进行栅格划分,获取体素信息;
    对所述体素信息进行特征提取。
  26. 一种障碍物检测方法,其特征在于,
    获取第一点云数据;
    统计所述第一点云数据中相邻的三维点之间的距离;
    如果所述相邻的三维点之间的距离大于预设距离,对所述相邻的三维点之间的距离进行压缩;
    根据压缩后的第一点云数据进行障碍物检测。
  27. 根据权利要求26所述的方法,其特征在于,所述对所述相邻的三维点之间的距离进行压缩,包括:
    根据预设的障碍物尺寸,对所述相邻的三维点之间的距离进行压缩。
  28. 根据权利要求26所述的方法,其特征在于,压缩后的相邻的三维点之间的距离不小于所述预设的障碍物尺寸。
  29. 根据权利要求26所述的方法,其特征在于,所述相邻的三维点之间的距离包括沿指定方向的距离。
  30. 根据权利要求29所述的方法,其特征在于,所述第一点云数据是由探测装置安装在可移动平台上时获取到的;
    所述指定方向包括:第一方向和第二方向;
    所述第一方向为所述可移动平台的移动方向,所述第二方向与所述第一方向相交。
  31. 根据权利要求26所述的方法,其特征在于,在所述对所述相邻的三维点之间的距离进行压缩之后,还包括:
    记录压缩信息。
  32. 根据权利要求31所述的方法,其特征在于,所述压缩信息至少包括:压缩位置和/或压缩距离。
  33. 根据权利要求31所述的方法,其特征在于,所述根据压缩后的第一点云数据进行障碍物检测,包括:
    对所述压缩后的第一点云数据进行特征提取,获得点云特征;
    根据所述点云特征进行障碍物检测,获得障碍物检测结果。
  34. 根据权利要求33所述的方法,其特征在于,在所述获得障碍物检测结果之后,还包括:
    基于所述障碍物检测结果,确定每个障碍物对应的三维点;
    根据所述压缩信息,还原所述障碍物对应的三维点之间的距离;
    根据还原后的第一点云数据更新所述障碍物检测结果。
  35. 根据权利要求33所述的方法,其特征在于,所述障碍物检测结果包括障碍物的置信度和/或状态信息;
    所述置信度表征检测的物体为障碍物的概率;
    所述状态信息包括以下至少一项:所述障碍物的类型、尺寸、位置以及朝向。
  36. 根据权利要求33所述的方法,其特征在于,所述点云特征包括不同尺寸的点云特征。
  37. 根据权利要求33~36中任意一项所述的方法,其特征在于,所述点云特征以及障碍物检测结果通过预先建立的障碍物检测模型获得。
  38. 根据权利要求37所述的方法,其特征在于,所述障碍物检测模型包括特征提取网络;
    所述特征提取网络用于对所述压缩后的第一点云数据进行卷积操作,获得所述点云特征。
  39. 根据权利要求38所述的方法,其特征在于,所述特征提取网络包括的卷积层的数量小于预设值。
  40. 根据权利要求33所述的方法,其特征在于,所述对所述压缩后的第一点云数据进行特征提取,包括:
    对所述压缩后的第一点云数据进行栅格划分,获取体素信息;
    对所述体素信息进行特征提取。
  41. 根据权利要求26所述的方法,其特征在于,在所述统计所述第一点云数据中相邻的三维点之间的距离之前,还包括:
    根据所述第一点云数据中的一个或多个目标三维点的高度确定偏差量;所述目标三维点的高度低于所述第一点云数据中大部分三维点的高度;
    使用所述偏差量矫正所述第一点云数据中的三维点的高度;
    所述统计所述第一点云数据中相邻的三维点之间的距离,包括:
    统计矫正后的第一点云数据中相邻的三维点之间的距离。
  42. 根据权利要求41所述的方法,其特征在于,所述获取第一点云数据,包括:
    根据三维点的深度,从探测装置采集到的原始点云数据中确定需要矫正高度的第一点云数据和无需矫正高度的第二点云数据;
    其中,矫正后的第一点云数据的高度基准面比未矫正的第一点云数据的高度基准面,更靠近所述第二点云数据的高度基准面。
  43. 根据权利要求42所述的方法,其特征在于,所述第一点云数据和所述第二点云数据属于不同深度区间;
    其中,所述第一点云数据所属深度区间的最小深度大于或等于所述第二点云数据所属深度区域的最大深度。
  44. 根据权利要求42所述的方法,其特征在于,所述需要矫正高度的第一点云数据中的三维点的深度大于预设深度阈值。
  45. 根据权利要求41所述的方法,其特征在于,在所述获取第一点云数据之后,还包括:
    根据所述第一点云数据中的三维点的深度,划分所述第一点云数据,获得多个三维点集合;
    所述根据所述第一点云数据中的一个或多个目标三维点的高度确定偏差量,并使用所述偏差量矫正所述第一点云数据中的三维点的高度,包括:
    对于每个所述三维点集合,根据所述三维点集合中的一个或多个目标三维点的高度确定偏差量,并使用所述偏差量矫正所述三维点集合中的三维点的高度;其中,所述目标三维点的高度低于所述三维点集合中大部分三维点的高度。
  46. 根据权利要求41所述的方法,其特征在于,所述一个或多个目标三维点的高度低于所述第一点云数据中除所述目标三维点以外的其他三维点的高度。
  47. 根据权利要求41所述的方法,其特征在于,所述偏差量根据所述一个或多个目标三维点的高度的统计值确定。
  48. 根据权利要求47所述的方法,其特征在于,所述统计值包括:平均数、中位数或者众数。
  49. 根据权利要求41所述的方法,其特征在于,矫正后的三维点的高度根据所述三维点的高度与所述偏差量之间的差值确定。
  50. 根据权利要求41所述的方法,其特征在于,在所述根据所述第一点云数据中的一个或多个目标三维点的高度确定偏差量之前,还包括:
    确定所述第一点云数据中的三维点与探测装置之间的角度;
    将所述角度大于预设角度的三维点滤除。
  51. 一种障碍物检测装置,其特征在于,包括一个或多个处理器和存储有可执行指令的存储器;
    所述一个或多个处理器在执行所述可执行指令时,被单独或共同地配置成:
    获取第一点云数据;
    根据所述第一点云数据中的一个或多个目标三维点的高度确定偏差量;所述目标三维点的高度低于所述第一点云数据中大部分三维点的高度;
    使用所述偏差量矫正所述第一点云数据中的三维点的高度;
    根据矫正后的第一点云数据进行障碍物检测。
  52. 根据权利要求51所述的装置,其特征在于,所述处理器还用于:根据三维点的深度,从探测装置采集到的原始点云数据中确定需要矫正高度的第一点云数据和无需矫正高度的第二点云数据;
    其中,矫正后的第一点云数据的高度基准面比未矫正的第一点云数据的高度基准面,更靠近所述第二点云数据的高度基准面。
  53. 根据权利要求52所述的装置,其特征在于,所述第一点云数据和所述第二点云数据属于不同深度区间;
    其中,所述第一点云数据所属深度区间的最小深度大于或等于所述第二点云数据所属深度区域的最大深度。
  54. 根据权利要求52所述的装置,其特征在于,所述需要矫正高度的第一点云数据中的三维点的深度大于预设深度阈值。
  55. 根据权利要求51所述的装置,其特征在于,在所述获取第一点云数据之后,所述处理器还用于:
    根据所述第一点云数据中的三维点的深度,划分所述第一点云数据,获得多个三维点集合;
    对于每个所述三维点集合,根据所述三维点集合中的一个或多个目标三维点的高度确定偏差量,并使用所述偏差量矫正所述三维点集合中的三维点的高度;其中,所述目标三维点的高度低于所述三维点集合中大部分三维点的高度。
  56. 根据权利要求51所述的装置,其特征在于,所述一个或多个目标三维点的高度低于所述第一点云数据中除所述目标三维点以外的其他三维点的高度。
  57. 根据权利要求51所述的装置,其特征在于,所述偏差量根据所述一个或多个目标三维点的高度的统计值确定。
  58. 根据权利要求57所述的装置,其特征在于,所述统计值包括:平均数、中位数或者众数。
  59. 根据权利要求51所述的装置,其特征在于,矫正后的三维点的高度根据所述三维点的高度与所述偏差量之间的差值确定。
  60. 根据权利要求51所述的装置,其特征在于,在所述获取第一点云数据之后,所述处理器还用于:确定所述第一点云数据中的三维点与探测装置之间的角度;将所述角度大于预设角度的三维点滤除。
  61. 根据权利要求51所述的装置,其特征在于,所述处理器还用于:
    对所述矫正后的第一点云数据进行特征提取,获得点云特征;
    根据所述点云特征进行障碍物检测,获得障碍物检测结果。
  62. 根据权利要求61所述的装置,其特征在于,在所述对所述矫正后的第一点云数据进行特征提取之前,所述处理器还用于:
    统计所述第一点云数据中相邻的三维点之间的距离;
    如果所述相邻的三维点之间的距离大于预设距离,对所述相邻的三维点之间的距离进行压缩;
    对压缩后的第一点云数据进行特征提取。
  63. 根据权利要求62所述的装置,其特征在于,所述处理器还用于:根据预设的障碍物尺寸,对所述相邻的三维点之间的距离进行压缩。
  64. 根据权利要求63所述的装置,其特征在于,压缩后的相邻的三维点之间的距离不小于所述预设的障碍物尺寸。
  65. 根据权利要求62所述的装置,其特征在于,所述相邻的三维点之间的距离包括沿指定方向的距离。
  66. 根据权利要求65所述的装置,其特征在于,所述第一点云数据是由探测装置安装在可移动平台上时获取到的;
    所述指定方向包括:第一方向和第二方向;
    所述第一方向为所述可移动平台的移动方向,所述第二方向与所述第一方向相交。
  67. 根据权利要求62所述的装置,其特征在于,在所述对所述相邻的三维点之间的距离进行压缩之后,所述处理器还用于:记录压缩信息。
  68. 根据权利要求67所述的装置,其特征在于,所述压缩信息至少包括:压缩位置和/或压缩距离。
  69. 根据权利要求67所述的装置,其特征在于,在所述获得障碍物检测结果之后,所述处理器还用于:
    基于所述障碍物检测结果,确定每个障碍物对应的三维点;
    根据所述压缩信息,还原所述障碍物对应的三维点之间的距离;
    根据还原后的第一点云数据更新所述障碍物检测结果。
  70. 根据权利要求61所述的装置,其特征在于,所述障碍物检测结果包括障碍物的置信度和/或状态信息;
    所述置信度表征检测的物体为障碍物的概率;
    所述状态信息包括以下至少一项:所述障碍物的类型、尺寸、位置以及朝向。
  71. 根据权利要求61所述的装置,其特征在于,所述点云特征包括不同尺寸的点云特征。
  72. 根据权利要求61~71中任意一项所述的装置,其特征在于,所述点云特征以及障碍物检测结果通过预先建立的障碍物检测模型获得。
  73. 根据权利要求72所述的装置,其特征在于,所述障碍物检测模型包括特征提取网络;
    所述特征提取网络用于对所述矫正后的第一点云数据进行卷积操作,获得所述点云特征。
  74. 根据权利要求73所述的装置,其特征在于,所述特征提取网络包括的卷积层的数量小于预设值。
  75. 根据权利要求61所述的装置,其特征在于,所述处理器还用于:对所述矫正后的第一点云数据进行栅格划分,获取体素信息;对所述体素信息进行特征提取。
  76. 一种障碍物检测装置,其特征在于,包括一个或多个处理器和存储有可执行指令的存储器;
    所述一个或多个处理器在执行所述可执行指令时,被单独或共同地配置成:
    获取第一点云数据;
    统计所述第一点云数据中相邻的三维点之间的距离;
    如果所述相邻的三维点之间的距离大于预设距离,对所述相邻的三维点之间的距离进行压缩;
    根据压缩后的第一点云数据进行障碍物检测。
  77. 根据权利要求76所述的装置,其特征在于,所述处理器还用于:根据预设的障碍物尺寸,对所述相邻的三维点之间的距离进行压缩。
  78. 根据权利要求76所述的装置,其特征在于,压缩后的相邻的三维点之间的距离不小于所述预设的障碍物尺寸。
  79. 根据权利要求76所述的装置,其特征在于,所述相邻的三维点之间的距离包括沿指定方向的距离。
  80. 根据权利要求79所述的装置,其特征在于,所述第一点云数据是由探测装置安装在可移动平台上时获取到的;
    所述指定方向包括:第一方向和第二方向;
    所述第一方向为所述可移动平台的移动方向,所述第二方向与所述第一方向相交。
  81. 根据权利要求76所述的装置,其特征在于,在所述对所述相邻的三维点之间的距离进行压缩之后,所述处理器还用于:记录压缩信息。
  82. 根据权利要求81所述的装置,其特征在于,所述压缩信息至少包括:压缩位置和/或压缩距离。
  83. 根据权利要求81所述的装置,其特征在于,所述处理器还用于:
    对所述压缩后的第一点云数据进行特征提取,获得点云特征;
    根据所述点云特征进行障碍物检测,获得障碍物检测结果。
  84. 根据权利要求83所述的装置,其特征在于,在所述获得障碍物检测结果之后,所述处理器还用于:
    基于所述障碍物检测结果,确定每个障碍物对应的三维点;
    根据所述压缩信息,还原所述障碍物对应的三维点之间的距离;
    根据还原后的第一点云数据更新所述障碍物检测结果。
  85. 根据权利要求83所述的装置,其特征在于,所述障碍物检测结果包括障碍物的置信度和/或状态信息;
    所述置信度表征检测的物体为障碍物的概率;
    所述状态信息包括以下至少一项:所述障碍物的类型、尺寸、位置以及朝向。
  86. 根据权利要求83所述的装置,其特征在于,所述点云特征包括不同尺寸的点云特征。
  87. 根据权利要求83~86中任意一项所述的装置,其特征在于,所述点云特征以及障碍物检测结果通过预先建立的障碍物检测模型获得。
  88. 根据权利要求87所述的装置,其特征在于,所述障碍物检测模型包括特征提取网络;
    所述特征提取网络用于对所述压缩后的第一点云数据进行卷积操作,获得所述点云特征。
  89. 根据权利要求88所述的装置,其特征在于,所述特征提取网络包括的卷积层的数量小于预设值。
  90. 根据权利要求83所述的装置,其特征在于,所述处理器还用于:
    对所述压缩后的第一点云数据进行栅格划分,获取体素信息;
    对所述体素信息进行特征提取。
  91. 根据权利要求76所述的装置,其特征在于,在所述统计所述第一点云数据中相邻的三维点之间的距离之前,所述处理器还用于:
    根据所述第一点云数据中的一个或多个目标三维点的高度确定偏差量;所述目标三维点的高度低于所述第一点云数据中大部分三维点的高度;
    使用所述偏差量矫正所述第一点云数据中的三维点的高度;
    统计矫正后的第一点云数据中相邻的三维点之间的距离。
  92. 根据权利要求91所述的装置,其特征在于,所述处理器还用于:根据三维点的深度,从探测装置采集到的原始点云数据中确定需要矫正高度的第一点云数据和无需矫正高度的第二点云数据;
    其中,矫正后的第一点云数据的高度基准面比未矫正的第一点云数据的高度基准面,更靠近所述第二点云数据的高度基准面。
  93. 根据权利要求92所述的装置,其特征在于,所述第一点云数据和所述第二点云数据属于不同深度区间;
    其中,所述第一点云数据所属深度区间的最小深度大于或等于所述第二点云数据所属深度区域的最大深度。
  94. 根据权利要求92所述的装置,其特征在于,所述需要矫正高度的第一点云数据中的三维点的深度大于预设深度阈值。
  95. 根据权利要求91所述的装置,其特征在于,在所述获取第一点云数据之后,所述处理器还用于:
    根据所述第一点云数据中的三维点的深度,划分所述第一点云数据,获得多个三维点集合;
    对于每个所述三维点集合,根据所述三维点集合中的一个或多个目标三维点的高度确定偏差量,并使用所述偏差量矫正所述三维点集合中的三维点的高度;其中,所述目标三维点的高度低于所述三维点集合中大部分三维点的高度。
  96. 根据权利要求91所述的装置,其特征在于,所述一个或多个目标三维点的高度低于所述第一点云数据中除所述目标三维点以外的其他三维点的高度。
  97. 根据权利要求91所述的装置,其特征在于,所述偏差量根据所述一个或多个目标三维点的高度的统计值确定。
  98. 根据权利要求97所述的装置,其特征在于,所述统计值包括:平均数、中位数或者众数。
  99. 根据权利要求91所述的装置,其特征在于,矫正后的三维点的高度根据所述三维点的高度与所述偏差量之间的差值确定。
  100. 根据权利要求91所述的装置,其特征在于,在所述根据所述第一点云数据 中的一个或多个目标三维点的高度确定偏差量之前,所述处理器还用于:
    确定所述第一点云数据中的三维点与探测装置之间的角度;
    将所述角度大于预设角度的三维点滤除。
  101. 一种可移动平台,其特征在于,包括:
    机体;
    动力系统,安装在所述机体内,用于为所述可移动平台提供动力;以及,
    如权利要求51~75任一项或者76~100任一项所述的障碍物检测装置。
  102. 一种计算机可读存储介质,其特征在于,所述计算机可读存储介质存储有可执行指令,所述可执行指令被处理器执行时实现如权利要求1至50任一项所述的方法。
PCT/CN2021/086233 2021-04-09 2021-04-09 障碍物检测方法、装置、可移动平台及存储介质 WO2022213376A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/086233 WO2022213376A1 (zh) 2021-04-09 2021-04-09 障碍物检测方法、装置、可移动平台及存储介质

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/086233 WO2022213376A1 (zh) 2021-04-09 2021-04-09 障碍物检测方法、装置、可移动平台及存储介质

Publications (1)

Publication Number Publication Date
WO2022213376A1 true WO2022213376A1 (zh) 2022-10-13

Family

ID=83544979

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/086233 WO2022213376A1 (zh) 2021-04-09 2021-04-09 障碍物检测方法、装置、可移动平台及存储介质

Country Status (1)

Country Link
WO (1) WO2022213376A1 (zh)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102419794A (zh) * 2011-10-31 2012-04-18 武汉大学 一种机载激光点云数据的快速滤波方法
KR20170104287A (ko) * 2016-03-07 2017-09-15 한국전자통신연구원 주행 가능 영역 인식 장치 및 그것의 주행 가능 영역 인식 방법
CN107272019A (zh) * 2017-05-09 2017-10-20 深圳市速腾聚创科技有限公司 基于激光雷达扫描的路沿检测方法
CN110687549A (zh) * 2019-10-25 2020-01-14 北京百度网讯科技有限公司 障碍物检测方法和装置
CN111325666A (zh) * 2020-02-10 2020-06-23 武汉大学 基于变分辨率体素格网的机载激光点云处理方法及应用
CN111886597A (zh) * 2019-06-28 2020-11-03 深圳市大疆创新科技有限公司 可移动平台的障碍物检测方法、装置及可移动平台
CN112184736A (zh) * 2020-10-10 2021-01-05 南开大学 一种基于欧式聚类的多平面提取方法



Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21935603

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21935603

Country of ref document: EP

Kind code of ref document: A1