WO2021238306A1 - A laser point cloud processing method and related device - Google Patents

A laser point cloud processing method and related device

Info

Publication number
WO2021238306A1
Authority: WIPO (PCT)
Prior art keywords: cluster, initial, divided, laser point, area
Application number
PCT/CN2021/076816
Other languages: English (en), French (fr)
Inventor: 李志刚, 彭凤超, 刘冰冰, 杨臻, 张维
Original Assignee: 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Publication of WO2021238306A1

Classifications

    • G - PHYSICS
    • G05 - CONTROLLING; REGULATING
    • G05D - SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D 1/00 - Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D 1/02 - Control of position or course in two dimensions
    • G05D 1/021 - Control of position or course in two dimensions specially adapted to land vehicles
    • G05D 1/0231 - Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/23 - Clustering techniques
    • G06F 18/232 - Non-hierarchical techniques
    • G06F 18/2321 - Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F 18/23211 - Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with adaptive number of clusters
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G06N 3/045 - Combinations of networks

Definitions

  • This application relates to the field of laser point cloud processing, and in particular to a laser point cloud processing method and related device.
  • Perception can involve a variety of modules, such as a laser perception module, a visual perception module, and a millimeter-wave perception module. Among them, the laser perception module is one of the key modules. It is widely used in advanced driver assistance systems (Advanced Driver Assistant System, ADAS) and autonomous driving systems (Autonomous Driving System, ADS), where it provides wheeled mobile devices equipped with such systems (such as autonomous vehicles) with accurate position information of obstacles, thereby providing a solid basis for reasonable decision-making.
  • The laser information received by laser sensing modules such as lidars and three-dimensional laser scanners is presented in the form of point clouds: the collection of point data on the surface of a measured object obtained by a measuring instrument is called a point cloud. When the measuring instrument is a laser sensing module, the resulting point cloud is called a laser point cloud (a 32-line lidar generally returns tens of thousands of laser points at a single moment). The laser information contained in each point of the laser point cloud can be recorded as [x, y, z, intensity], representing the three-dimensional coordinates of the target position hit by each laser point in the laser coordinate system and the reflection intensity of that laser point. An illustrative sketch of this record follows.
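For concreteness only (this array layout is an illustration of the [x, y, z, intensity] record, not something specified by the patent), one frame of such a laser point cloud can be held as an N*4 array:

```python
import numpy as np

# One frame of laser point cloud: one row per laser point, columns [x, y, z, intensity].
# x, y, z are the coordinates of the hit position in the laser coordinate system (meters);
# intensity is the reflection intensity of that laser point. Values are made up.
cloud = np.array([
    [12.4, -3.1, 0.8, 0.57],
    [12.5, -3.0, 0.8, 0.61],
    [ 4.2,  7.9, 1.5, 0.12],
], dtype=np.float32)

xyz = cloud[:, :3]        # three-dimensional coordinates of each laser point
intensity = cloud[:, 3]   # reflection intensity of each laser point
```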
  • Clusters are obtained by clustering the laser point cloud; a cluster is a collection of multiple laser points, and each cluster represents a target object. For each cluster, the position, orientation, size, and other information of the corresponding target object are calculated and output to downstream modules for further processing.
  • Taking the wheeled mobile device as an autonomous vehicle as an example: because laser points that hit adjacent key obstacles are not easy to distinguish (or laser points that hit a key obstacle are not easy to distinguish from those that hit non-key obstacles such as road edges and bushes), or because of occlusion, the laser point cloud on the same target object may be discontinuous. This causes under-segmentation and/or over-segmentation in the process of clustering the laser point cloud, which in turn causes target-id jumps, target-position jumps, and similar errors in the tracking module of the self-driving vehicle; in severe cases, the vehicle has to be taken over.
  • The embodiments of this application provide a laser point cloud processing method and related devices. By semantically segmenting the laser point cloud and combining the result with a traditional laser clustering algorithm, problems such as over-segmentation and under-segmentation of the laser point cloud in laser perception are alleviated, which can further improve the detection performance for key obstacles.
  • In a first aspect, the embodiments of the present application provide a laser point cloud processing method, which can be applied to laser perception in the field of autonomous driving. The method includes the following. First, a related system deployed with a laser sensor (such as the environment perception system of an autonomous vehicle) acquires the laser point cloud at each moment through the laser sensor. Whenever the laser point cloud of the current frame is obtained, the system clusters it according to a preset algorithm (for example, depth-first search (Depth-First-Search, DFS)) to obtain N coarsely classified initial clusters, where N is an integer greater than or equal to 1.
  • The related system deployed with the laser sensor also performs semantic segmentation on the laser point cloud of the current frame acquired by the laser sensor (for example, through a preset neural network such as PointSeg or DeepSeg) to obtain the category label corresponding to each laser point in the laser point cloud; the category label indicates the category to which each laser point belongs.
  • After obtaining the N coarsely classified initial clusters of the current frame's laser point cloud and the category label corresponding to each laser point, the system queries, for each of the N initial clusters (any one of which may be called the first initial cluster), the category label corresponding to each of its laser points, and then reprocesses each initial cluster according to those category labels to obtain target clusters.
  • That is, in the embodiments of this application, the acquired laser point cloud of the current frame is first clustered (for example, in an occupancy grid map (Occupancy Grid Map, OGM) through the DFS algorithm) to obtain N coarsely classified initial clusters, and the laser point cloud is semantically segmented through a related neural network (such as PointSeg or DeepSeg) to obtain the category label corresponding to each laser point. Finally, for each initial cluster, the category label corresponding to each of its laser points is queried, and the initial cluster is reprocessed according to the queried labels (for example, re-segmented or merged) to obtain target clusters, where one target cluster corresponds to one target object. In this way, semantic segmentation of the laser point cloud is combined with the traditional laser clustering algorithm to alleviate over-segmentation and under-segmentation of the laser point cloud in laser perception, thereby improving the detection performance for key obstacles.
  • In a possible implementation, the first initial cluster (that is, any one of the N initial clusters) is reprocessed by examining the type of category label corresponding to each of its laser points and processing the cluster according to a preset method (for example, segmenting it), so as to obtain at least one target cluster corresponding to the first initial cluster.
  • Specifically, the first initial cluster is first divided according to the category labels corresponding to its laser points to obtain multiple divided areas, where each divided area is the area obtained by enclosing, in a preset enclosing manner, the laser points with the same category label within the cluster. Then the number of intersection points between a first divided area and a second divided area among the multiple divided areas is obtained, and the first initial cluster is processed further according to that number (for example, if the number of intersection points is 0, no segmentation is performed; if the number of intersection points is greater than or equal to 2, segmentation is performed), yielding at least one target cluster corresponding to the first initial cluster.
  • When the number of intersection points is 0 (that is, the second divided area lies within the first divided area), the laser points in the second divided area are considered misclassified points, and there is deemed to be no under-segmentation between the first divided area and the second divided area. The first divided area is then taken as one target cluster, that is, the first and second divided areas both correspond to the same target cluster; this target cluster corresponds to one target object, namely the object represented by the category label corresponding to the first divided area.
  • When the number of intersection points is 2, the line connecting the two intersection points is used as the dividing line to divide the first initial cluster into at least two target clusters, each corresponding to one category label and thus one target object; the two target objects are the objects represented by the two category labels (that is, the category label corresponding to the laser points in the first divided area and the category label corresponding to the laser points in the second divided area).
  • In a fifth implementation of the first aspect of the embodiments of the present application, when the number of intersection points between the first divided area and the second divided area is 4, and the line between the first intersection point and the second intersection point divides the first divided area into a first part and a second part, where the first part contains more laser points than the second part, the laser points contained in the second part are considered misclassified points. The first divided area is therefore re-divided to obtain a third divided area containing only the laser points of the first part. The number of intersection points between the third divided area and the second divided area is then 2, so the first initial cluster is subdivided in the same manner as the two-intersection case above: the line between the two intersection points of the second and third divided areas is used as the dividing line to divide the first initial cluster into at least two target clusters, where each target cluster corresponds to one category label. In other words, when two divided areas have 4 intersection points, one of the divided areas (for example, the first divided area) is first re-divided according to one pair of intersection points to obtain a new, third divided area; at this point the number of intersection points between the third divided area and the other original divided area (for example, the second divided area) is 2, and the situation is handled like the two-intersection case above.
  • The three subdivision manners described above in this application thus handle different situations, which gives the solution flexibility.
  • In a possible implementation, at least two initial clusters whose laser points share the same category label may be merged when the fourth divided region formed by the at least two initial clusters meets a preset condition. Specifically, the size of the fourth divided region formed by the at least two initial clusters is within a preset size range, where the preset size range is based on the actual size of the target object identified by the category label corresponding to the laser points in the at least two initial clusters; and/or the difference between the orientation angle of the fourth divided region formed by the at least two initial clusters and the orientation angle of the first initial cluster among the at least two initial clusters is within a preset angle range. A rough sketch of such a check follows.
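As a minimal sketch of such a merge check (not the patent's implementation: the per-label size table, the tolerances, and the axis-aligned extent used in place of a fitted box such as the L-shape of FIG. 19 are all assumptions):

```python
import numpy as np

# Assumed typical physical sizes (length, width in meters) per category label - placeholders.
TYPICAL_SIZE = {"Car": (4.8, 1.9), "Truck": (12.0, 2.6), "Pedestrian": (0.8, 0.8)}

def merge_condition_met(points_a, points_b, label, heading_a, heading_merged,
                        size_tol=1.3, angle_tol_deg=15.0):
    """Check whether two same-label initial clusters may be merged.

    points_a, points_b: (N, 2) arrays of laser points in the vehicle frame.
    heading_a: orientation angle (radians) of the first initial cluster.
    heading_merged: orientation angle (radians) of the merged (fourth) region.
    """
    merged = np.vstack([points_a, points_b])
    extent = merged.max(axis=0) - merged.min(axis=0)      # axis-aligned size of the region
    length, width = sorted(extent, reverse=True)
    max_len, max_wid = TYPICAL_SIZE[label]
    size_ok = length <= size_tol * max_len and width <= size_tol * max_wid

    diff = np.degrees(abs(heading_merged - heading_a)) % 180.0
    angle_ok = min(diff, 180.0 - diff) <= angle_tol_deg   # undirected heading difference

    return size_ok and angle_ok
```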
  • the shapes of the various divided regions are not limited, which makes the implementation of the embodiments of the present application more flexible.
  • a second aspect of the embodiments of the present application provides an environment perception system, which has the function of implementing the foregoing first aspect or any one of the possible implementation methods of the first aspect.
  • This function can be realized by hardware, or by hardware executing corresponding software.
  • the hardware or software includes one or more modules corresponding to the above-mentioned functions.
  • Specifically, the environment perception system can be applied to an intelligent driving agent, and the intelligent driving agent may be an autonomous vehicle (e.g., a smart car or an intelligent connected car) or an assisted-driving vehicle; this is not specifically limited here.
  • A third aspect of the embodiments of the present application provides an autonomous driving vehicle, which may include a memory, a processor, and a bus system, where the memory is used to store a program, and the processor is used to call the program stored in the memory to execute the method of the foregoing first aspect or any one of its possible implementations.
  • A fourth aspect of the present application provides a computer-readable storage medium that stores instructions which, when run on a computer, cause the computer to execute the method of the foregoing first aspect or any one of its possible implementations.
  • A fifth aspect of the embodiments of the present application provides a computer program which, when run on a computer, causes the computer to execute the method of the foregoing first aspect or any one of its possible implementations.
  • FIG. 1 is a schematic diagram of a real scene provided by an embodiment of the application and a correspondingly formed laser point cloud;
  • FIG. 2 is another schematic diagram of a real scene provided by an embodiment of the application and a correspondingly formed laser point cloud;
  • FIG. 3 is a schematic diagram of OGMs with different resolutions provided by an embodiment of the application.
  • Figure 4 is a flow chart of a laser clustering algorithm based on OGM
  • Figure 5 is a flow chart of fusing with visual information to solve the under-segmentation and over-segmentation problems in the process of target clustering;
  • FIG. 6 is a schematic diagram of the overall architecture of an autonomous vehicle provided by an embodiment of the application.
  • FIG. 7 is a schematic structural diagram of an autonomous vehicle provided by an embodiment of the application.
  • FIG. 8 is a flowchart of a laser point cloud processing method provided by an embodiment of the application.
  • FIG. 9 is a schematic diagram of mapping a laser point cloud to an OGM according to an embodiment of the application.
  • FIG. 10 is a schematic diagram of the clustering results of each cluster in the vehicle coordinate system obtained through DFS algorithm clustering after the laser point cloud is projected to the OGM according to an embodiment of the application;
  • FIG. 11 is a structural diagram of a neural network PointSeg for semantic segmentation of laser point clouds provided by an embodiment of the application;
  • FIG. 12 is a schematic diagram of dividing initial clusters according to category tags of laser point clouds according to an embodiment of the application.
  • FIG. 13 is another schematic diagram of dividing initial clusters according to category tags of laser point clouds according to an embodiment of the application.
  • FIG. 14 is another schematic diagram of dividing initial clusters according to the category tags of the laser point cloud according to an embodiment of the application.
  • FIG. 15 is another schematic diagram of dividing initial clusters according to the category tags of the laser point cloud according to an embodiment of the application.
  • FIG. 16 is another schematic diagram of dividing initial clusters according to category tags of laser point clouds according to an embodiment of the application.
  • FIG. 17 is a schematic diagram of a plurality of initial clusters belonging to the same category label according to an embodiment of the application.
  • FIG. 18 is another schematic diagram of a plurality of initial clusters belonging to the same category label according to an embodiment of the application;
  • FIG. 19 is a schematic diagram of using an L-shape method to estimate the size range of a fourth region formed by at least two initial cluster clusters to be fitted and merged according to an embodiment of the application;
  • FIG. 20 is a schematic diagram of several common over-segmentation situations provided by an embodiment of this application.
  • FIG. 21 is a schematic diagram of the use effect in a specific application scenario through an embodiment of the present application.
  • FIG. 22 is another schematic diagram of the use effect in a specific application scenario through an embodiment of the present application.
  • FIG. 23 is a schematic structural diagram of an environment sensing system provided by an embodiment of this application.
  • FIG. 24 is a schematic diagram of a structure of an autonomous vehicle provided by an embodiment of the application.
  • Wheeled mobile device: a comprehensive system integrating environment perception, dynamic decision-making and planning, and behavior control and execution, which can also be called a wheeled mobile robot or a wheeled agent. It can be, for example, wheeled construction equipment, a self-driving vehicle, or an assisted-driving vehicle; any device that moves on wheels is referred to in this application as a wheeled mobile device.
  • In the following, the wheeled mobile device is exemplified by an autonomous driving vehicle. The autonomous driving vehicle can be a car, a truck, a motorcycle, a bus, a boat, an airplane, a helicopter, a lawn mower, a recreational vehicle, a playground vehicle, construction equipment, a tram, a golf cart, a train, a trolley, etc., which is not particularly limited in the embodiments of the present application.
  • Sensors: such as cameras, lidars, and millimeter-wave radars; they are used to discover relevant information about key obstacles on the road in the surrounding environment of the wheeled mobile device (such as an autonomous vehicle). This information can also be called perception information.
  • ADAS or ADS: after receiving the perception information obtained by the sensors, the decision-making system plans and controls the driving state of the wheeled mobile device; this can also be called motion planning. An upper-level decision-making module generates instructions describing a specific motion trajectory, which is executed by a lower-level control module; this is the key link of intelligent driving (including assisted driving and automatic driving).
  • Critical obstacles: can also be called key obstacles on the road; this refers to vehicles and pedestrians on the road, as distinguished from other, non-critical obstacles such as bushes, isolation belts, and buildings along the roadside.
  • Under-segmentation: the laser point cloud corresponding to one target object (such as a pedestrian on the road) and the laser point cloud corresponding to one or more other target objects (such as other vehicles on the road) are clustered into one output corresponding to a single target object; or the laser point cloud corresponding to a key obstacle on the road (such as a pedestrian) and the laser point cloud corresponding to a non-key obstacle (such as bushes or roadside buildings) are clustered into one output corresponding to a single target object. For example, the laser point cloud corresponding to "vehicle 1" in dashed frame a and the laser point cloud corresponding to the "bush" in dashed frame b shown in Figure 1 are clustered into solid frame A, and the laser point cloud in solid frame A is output as one target object. As another example, the laser point cloud corresponding to the "pedestrian" in dashed frame c and the laser point cloud corresponding to "vehicle 2" in dashed frame d shown in Figure 1 are likewise clustered and output as one target object (for example, solid frame B). This situation, in which multiple target objects are clustered together and output as one target object, is called under-segmentation.
  • Over-segmentation: the laser point cloud originally belonging to one target object is clustered into multiple target objects. For example, the laser point cloud corresponding to the "truck" in dashed box a in Figure 2 should be clustered into one target object, but in the actual clustering process it is clustered into two target objects (the laser point clouds contained in solid frame 1 and solid frame 2 in Figure 2 each correspond to one target object). This situation, in which one target object is divided into multiple target objects and output, is called over-segmentation.
  • Occupancy grid map (Occupancy Grid Map, OGM): a commonly used map representation for robots. Robots often use laser sensors, and the sensor data is noisy; for example, a laser sensor cannot measure exactly how far an obstacle is from the robot. If, at some angle, the true distance is 4 meters, the obstacle may be detected at 3.9 meters at the current moment but at 4.1 meters at the next moment, and the two distances cannot both be taken as the obstacle position. The OGM is used to deal with this problem. As shown in Figure 3, two OGMs with different resolutions are presented; the black points are laser points, and all laser points mapped into the OGM form a laser point cloud.
  • A commonly used OGM size is 300*300, that is, the map consists of 300*300 small grids (cells). The size of each cell (length * width), i.e., how many meters each cell covers in the vehicle coordinate system, is called the resolution of the OGM.
  • The laser point cloud acquired by the laser sensor at a given moment falls on specific cells; as shown in the right figure of Figure 3, the laser points fall on the gray cells, and there are 9 laser points in the cell in the 4th row and 7th column.
  • On an ordinary map, a given point either has an obstacle or it does not. In the OGM, at a given moment, a cell containing no laser point is considered empty, while a cell containing at least one laser point is considered to correspond to an obstacle. That is, the laser point cloud is mapped into the OGM, and after a series of mathematical transformations, each cell is set to the occupied state or the idle state according to the probability that it is occupied. A sketch of this mapping follows.
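A minimal sketch of this mapping (the 300*300 grid size follows the text above; the resolution value and placing the vehicle at the grid center are assumptions):

```python
import numpy as np

GRID = 300   # the OGM consists of 300*300 cells
RES = 0.2    # assumed resolution: each cell covers 0.2 m * 0.2 m in the vehicle frame

def to_ogm(points_xy):
    """Mark the cells of a GRID x GRID occupancy grid that contain laser points.

    points_xy: (N, 2) array of [x, y] coordinates in meters, with the origin
    of the vehicle/laser frame assumed to lie at the center of the grid.
    """
    occ = np.zeros((GRID, GRID), dtype=bool)
    ij = np.floor(points_xy / RES).astype(int) + GRID // 2   # meters -> cell indices
    valid = (ij >= 0).all(axis=1) & (ij < GRID).all(axis=1)  # drop out-of-map points
    occ[ij[valid, 0], ij[valid, 1]] = True  # a cell with >= 1 laser point is occupied
    return occ
```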
  • Method 1: an OGM-based laser clustering algorithm; Figure 4 is a flow chart of the method. The process is as follows. First, obtain the laser scanning information, that is, the laser perceives surrounding obstacles during operation and returns the laser point cloud to the system. Then, after setting the length and width of the OGM (that is, the OGM size) and the grid resolution, all laser points in the laser coordinate system are projected into the OGM. Finally, the laser point cloud is clustered according to the depth-first algorithm, a common target clustering algorithm: taking one point as the center each time, a neighborhood of a set size is examined, and within this neighborhood it is judged whether any grid cell is occupied by a laser point; this continues until no more cells can be reached.
  • However, in Method 1 over-segmentation and under-segmentation are governed only by the size of the neighborhood, so it is impossible to find a single neighborhood value that solves both the over-segmentation and under-segmentation problems in the target clustering process.
  • Method 2: a method that fuses visual information to solve the under-segmentation and over-segmentation problems in the process of target clustering. Figure 5 is a flow chart of the method, summarized as follows. From laser scanning to laser point cloud segmentation and clustering, the method is similar to Method 1; the difference is an additional perception part for visual 2D detection (which can be called the visual 2D detection module), that is, image data is read from the video stream for detection. The acquired laser point cloud is clustered to obtain M clusters, each corresponding to a 3D target, while the visual 2D detection module feeds the collected images to a trained network that outputs N targets marked with 2D boxes. The laser point cloud is then projected into the image, and the 2D boxes detected in the image are used to re-segment the clusters, thereby addressing under-segmentation in the target clustering process. Afterwards, target clusters are merged through appropriate merging strategies to mitigate the over-segmentation problem.
  • However, Method 2 places very high demands on calibration: the coordinates of the laser point cloud must be very accurate when projected into the image, and slight changes in camera position have a large impact on the final clustering results. Moreover, at night the visual 2D detection module cannot obtain a valid image and thus cannot be used to subdivide the laser point cloud, so the method is limited in its usable scenarios.
  • In view of this, the embodiments of the present application provide a laser point cloud processing method: by semantically segmenting the laser point cloud and combining the result with the traditional laser clustering algorithm, over-segmentation and under-segmentation of the laser point cloud in laser perception are alleviated, which can further improve the detection performance for key obstacles.
  • The laser point cloud processing method provided by the embodiments of the present application can be applied in scenarios of motion planning (e.g., speed planning, driving behavior decision-making, global path planning) for an agent. Taking the agent as an autonomous vehicle as an example, the overall architecture of the autonomous vehicle is described first; please refer to Figure 6 for details. Figure 6 illustrates a top-down hierarchical architecture; defined interfaces may exist between the systems to transmit data between them and to ensure that the data is timely and complete. The following is a brief introduction to each system:
  • Environmental perception is the most fundamental part of an intelligent driving vehicle. Whether making driving behavior decisions or performing global path planning, everything must be established on the basis of environmental perception: judgments, decisions, and plans are made based on real-time perception of the road traffic environment to enable intelligent driving of the vehicle.
  • The environment perception system mainly uses various sensors to obtain relevant environmental information and complete the construction of the environment model and the knowledge representation of the traffic scene. The sensors used include a camera, single-line lidar (SICK), four-line lidar (IBEO), and three-dimensional lidar (HDL-64E). Among them, the camera is mainly responsible for traffic light detection, lane line detection, road sign detection, vehicle recognition, etc.; the lidar sensors are mainly responsible for the detection, recognition, and tracking of dynamic and static key obstacles, as well as the detection and extraction of non-critical obstacles such as road boundaries, shrub belts, and surrounding buildings.
  • The 3D lidar generally collects external environment information at a frequency of 10 FPS and returns the laser point cloud at each moment. Specifically, by clustering the laser point cloud acquired at each moment, information such as the position and orientation of target objects can be output.
  • Finally, data fusion is performed on the perception information obtained by the above sensors; the result is mapped to an OGM that can express the road environment and sent to the autonomous decision-making system for further decision-making and planning.
  • the autonomous decision-making system is a key component of intelligent driving vehicles.
  • The system is mainly divided into two core subsystems: behavior decision-making and motion planning. The behavior decision-making subsystem mainly obtains the globally optimal driving route by running the global planning layer, so as to clarify the specific driving task; then, based on the current real-time road information sent by the environment perception system, together with road traffic rules and driving experience, it decides on reasonable driving behaviors and sends the driving behavior instructions to the motion planning subsystem. The motion planning subsystem plans a feasible driving trajectory based on the received driving behavior instructions and the current local environment perception information, according to indicators such as safety and stability, and sends it to the control system.
  • The control system is divided into two parts: the control subsystem and the execution subsystem. The control subsystem converts the feasible driving trajectory generated by the autonomous decision-making system into specific execution instructions for each execution module and passes them to the execution subsystem; the execution subsystem receives the execution instructions and sends them to each control object to reasonably control the vehicle's steering, braking, throttle, gear position, etc., so that the vehicle drives automatically and completes the corresponding driving operations.
  • From the above, the accuracy of the driving operations of an autonomous vehicle mainly depends on whether the specific execution instructions generated by the control system for each execution module are accurate, and this accuracy depends on the autonomous decision-making system, which in turn faces uncertainty. The uncertainty factors mainly include the following aspects: 1) the characteristics of each sensor in the environment perception system and the uncertainty caused by calibration errors: different sensors have different perception mechanisms, perception ranges, and corresponding error modes, and calibration errors introduced when they are installed on the self-driving vehicle are eventually reflected as uncertainty in the perception information; 2) the uncertainty caused by the data processing delay of the environment perception system: the road environment is complex and the amount of data is huge, so the data processing of the environment perception system requires a large amount of computation, and since the entire environment changes constantly, delays in data and information are inevitable, affecting the correct judgment of the autonomous decision-making system; 3) different ways of processing the perception information also introduce uncertainty.
  • It should be noted that the overall architecture of the autonomous vehicle shown in FIG. 6 is only illustrative. In actual applications it may contain more or fewer systems/subsystems or modules, and each system/subsystem or module may include multiple components; this is not specifically limited here.
  • FIG. 7 is a schematic structural diagram of an autonomous driving vehicle provided by an embodiment of the application.
  • the autonomous driving vehicle 100 is configured in a fully or partially autonomous driving mode.
  • The autonomous driving vehicle 100 can control itself while in the automatic driving mode: it can determine the current state of the vehicle and of its surrounding environment through human operation, determine the possible behavior of at least one other vehicle in the surrounding environment, determine a confidence level corresponding to the possibility that the other vehicle performs that behavior, and control the autonomous vehicle 100 based on the determined information. The self-driving vehicle 100 can also be set to operate without human interaction.
  • The autonomous vehicle 100 may include various subsystems, such as a traveling system 102, a sensor system 104 (for example, the camera, SICK, IBEO, and lidar in FIG. 6 are all modules in the sensor system 104), a control system 106, one or more peripheral devices 108, a power supply 110, a computer system 112, and a user interface 116.
  • the autonomous vehicle 100 may include more or fewer subsystems, and each subsystem may include multiple components.
  • each of the subsystems and components of the autonomous vehicle 100 may be wired or wirelessly interconnected.
  • The traveling system 102 may include components that provide powered motion for the autonomous vehicle 100.
  • the travel system 102 may include an engine 118, an energy source 119, a transmission 120, and wheels/tires 121.
  • the engine 118 may be an internal combustion engine, an electric motor, an air compression engine, or other types of engine combinations, for example, a hybrid engine composed of a gasoline engine and an electric motor, or a hybrid engine composed of an internal combustion engine and an air compression engine.
  • the engine 118 converts the energy source 119 into mechanical energy. Examples of energy sources 119 include gasoline, diesel, other petroleum-based fuels, propane, other compressed gas-based fuels, ethanol, solar panels, batteries, and other sources of electricity.
  • the energy source 119 may also provide energy for other systems of the autonomous vehicle 100.
  • the transmission device 120 can transmit the mechanical power from the engine 118 to the wheels 121.
  • the transmission device 120 may include a gearbox, a differential, and a drive shaft. In an embodiment, the transmission device 120 may further include other devices, such as a clutch.
  • the drive shaft may include one or more shafts that can be coupled to one or more wheels 121.
  • the sensor system 104 may include several sensors that sense information about the environment around the autonomous vehicle 100.
  • the sensor system 104 may include a positioning system 122 (the positioning system may be a global positioning GPS system, a Beidou system or other positioning systems), an inertial measurement unit (IMU) 124, a radar 126, a laser rangefinder 128 and camera 130.
  • The sensor system 104 may also include sensors that monitor the internal systems of the autonomous vehicle 100 (for example, an in-vehicle air quality monitor, a fuel gauge, an oil temperature gauge, etc.). Sensing data from one or more of these sensors can be used to detect objects and their corresponding characteristics (position, shape, direction, speed, etc.). Such detection and recognition are key functions for the safe operation of the autonomous vehicle 100.
  • the laser sensing module is a very important sensing module in the sensor system 104.
  • the positioning system 122 can be used to estimate the geographic location of the autonomous vehicle 100.
  • the IMU 124 is used to perceive changes in the position and orientation of the autonomous vehicle 100 based on inertial acceleration.
  • the IMU 124 may be a combination of an accelerometer and a gyroscope.
  • the radar 126 may use radio signals to perceive objects in the surrounding environment of the autonomous vehicle 100, and may specifically be expressed as millimeter wave radar or lidar. In some embodiments, in addition to sensing an object, the radar 126 may also be used to sense the speed and/or direction of the object.
  • the laser rangefinder 128 can use laser light to perceive objects in the environment where the autonomous vehicle 100 is located.
  • the laser rangefinder 128 may include one or more laser sources, laser scanners, and one or more detectors, as well as other system components.
  • the camera 130 may be used to capture multiple images of the surrounding environment of the autonomous vehicle 100.
  • the camera 130 may be a still camera or a video camera.
  • the control system 106 controls the operation of the autonomous vehicle 100 and its components.
  • The control system 106 may include various components, including a steering system 132, a throttle 134, a braking unit 136, a computer vision system 140, a route control system 142, and an obstacle avoidance system 144.
  • the steering system 132 is operable to adjust the forward direction of the autonomous vehicle 100.
  • it may be a steering wheel system.
  • the throttle 134 is used to control the operating speed of the engine 118 and thereby control the speed of the autonomous vehicle 100.
  • the braking unit 136 is used to control the automatic driving vehicle 100 to decelerate.
  • the braking unit 136 may use friction to slow down the wheels 121.
  • the braking unit 136 may convert the kinetic energy of the wheels 121 into electric current.
  • the braking unit 136 may also take other forms to slow down the rotation speed of the wheels 121 so as to control the speed of the autonomous vehicle 100.
  • the computer vision system 140 may be operable to process and analyze the images captured by the camera 130 in order to identify objects and/or features in the surrounding environment of the autonomous vehicle 100.
  • the objects and/or features may include traffic signals, road boundaries, and obstacles.
  • In some embodiments, the computer vision system 140 may use object recognition algorithms, structure from motion (Structure from Motion, SFM) algorithms, video tracking, and other computer vision technologies.
  • the computer vision system 140 may be used to map the environment, track objects, estimate the speed of objects, and so on.
  • the route control system 142 is used to determine the route and speed of the autonomous vehicle 100.
  • Specifically, the route control system 142 may include a lateral planning module 1421 and a longitudinal planning module 1422, which are used to determine the driving route and driving speed for the autonomous vehicle 100 by combining data from the obstacle avoidance system 144, the GPS 122, and one or more predetermined maps.
  • The obstacle avoidance system 144 is used to identify, evaluate, and avoid or otherwise negotiate obstacles in the environment of the autonomous vehicle 100. The aforementioned obstacles may be actual obstacles or virtual moving objects that may collide with the autonomous vehicle 100.
  • Of course, the control system 106 may additionally or alternatively include components other than those shown and described, or some of the components shown above may be omitted.
  • the autonomous vehicle 100 interacts with external sensors, other vehicles, other computer systems, or users through peripheral devices 108.
  • the peripheral device 108 may include a wireless communication system 146, an onboard computer 148, a microphone 150, and/or a speaker 152.
  • the peripheral device 108 provides a means for the user of the autonomous vehicle 100 to interact with the user interface 116.
  • the onboard computer 148 may provide information to the user of the autonomous vehicle 100.
  • the user interface 116 can also operate the onboard computer 148 to receive user input.
  • the on-board computer 148 can be operated through a touch screen.
  • the peripheral device 108 may provide a means for the autonomous vehicle 100 to communicate with other devices located in the vehicle.
  • the wireless communication system 146 may wirelessly communicate with one or more devices directly or via a communication network.
  • The wireless communication system 146 may use 3G cellular communication such as CDMA, EVDO, or GSM/GPRS, 4G cellular communication such as LTE, or 5G cellular communication.
  • the wireless communication system 146 may use a wireless local area network (WLAN) to communicate.
  • The wireless communication system 146 may communicate directly with a device using an infrared link, Bluetooth, or ZigBee, or use other wireless protocols, such as various vehicle communication systems. For example, the wireless communication system 146 may include one or more dedicated short-range communications (DSRC) devices, which may include public and/or private data communications between vehicles and/or roadside stations.
  • the power supply 110 may provide power to various components of the autonomous vehicle 100.
  • the power source 110 may be a rechargeable lithium ion or lead-acid battery.
  • One or more battery packs of such batteries may be configured as a power source to provide power to various components of the autonomous vehicle 100.
  • the power source 110 and the energy source 119 may be implemented together, such as in some all-electric vehicles.
  • the computer system 112 may include at least one processor 113 that executes instructions 115 stored in a non-transitory computer readable medium such as the memory 114.
  • the computer system 112 may also be multiple computing devices that control individual components or subsystems of the autonomous vehicle 100 in a distributed manner.
  • the processor 113 may be any conventional processor, such as a commercially available central processing unit (CPU).
  • the processor 113 may be a dedicated device such as an application specific integrated circuit (ASIC) or other hardware-based processor.
  • Although FIG. 7 functionally shows the processor, the memory, and other components of the computer system 112 in the same block, the processor or the memory may actually comprise multiple processors or memories that are not housed within the same physical enclosure. For example, the memory 114 may be a hard disk drive or another storage medium located in a housing different from that of the computer system 112. Therefore, a reference to the processor 113 or the memory 114 will be understood to include a reference to a collection of processors or memories that may or may not operate in parallel.
  • Some components, such as the steering component and the deceleration component, may each have their own processor that performs only calculations related to that component's specific function.
  • In various aspects described herein, the processor 113 may be located far away from the autonomous vehicle 100 and communicate with it wirelessly. In other aspects, some of the processes described herein are executed on a processor arranged within the autonomous vehicle 100 while others are executed by the remote processor, including taking the steps necessary to perform a single maneuver.
  • the memory 114 may include instructions 115 (eg, program logic), which may be executed by the processor 113 to perform various functions of the autonomous vehicle 100, including those functions described above.
  • The memory 114 may also contain additional instructions, including instructions for sending data to, receiving data from, interacting with, and/or controlling one or more of the traveling system 102, the sensor system 104, the control system 106, and the peripheral devices 108.
  • the memory 114 may also store data, such as road maps, route information, the location, direction, and speed of the vehicle, and other such vehicle data, as well as other information. Such information may be used by the autonomous vehicle 100 and the computer system 112 during operation of the autonomous vehicle 100 in autonomous, semi-autonomous, and/or manual modes.
  • the user interface 116 is used to provide information to or receive information from the user of the autonomous vehicle 100.
  • the user interface 116 may include one or more input/output devices in the set of peripheral devices 108, such as a wireless communication system 146, a car computer 148, a microphone 150, and a speaker 152.
  • the computer system 112 may control the functions of the autonomous vehicle 100 based on inputs received from various subsystems (for example, the travel system 102, the sensor system 104, and the control system 106) and from the user interface 116. For example, the computer system 112 may use input from the control system 106 to control the steering system 132 to avoid obstacles detected by the sensor system 104 and the obstacle avoidance system 144. In some embodiments, the computer system 112 is operable to provide control of many aspects of the autonomous vehicle 100 and its subsystems.
  • Optionally, one or more of the aforementioned components may be installed separately from, or associated with, the autonomous vehicle 100.
  • the memory 114 may exist partially or completely separately from the autonomous vehicle 100.
  • the aforementioned components may be communicatively coupled together in a wired and/or wireless manner.
  • FIG. 7 should not be construed as a limitation to the embodiment of the present application.
  • An autonomous vehicle traveling on a road, such as the autonomous vehicle 100 above, can recognize objects in its surrounding environment to determine an adjustment to its current speed.
  • the object may be other vehicles, traffic control equipment, or other types of objects.
  • Optionally, each recognized object can be considered independently, and its respective characteristics, such as its current speed, acceleration, and distance from the vehicle, can be used to determine the speed to which the autonomous vehicle is to be adjusted. Optionally, the autonomous vehicle 100, or the computing device associated with it, may predict the behavior of a recognized object based on the characteristics of that object and the state of the surrounding environment (for example, traffic, rain, ice on the road, etc.). Optionally, since the recognized objects depend on each other's behavior, all recognized objects can also be considered together to predict the behavior of a single recognized object.
  • the autonomous vehicle 100 can adjust its speed based on the predicted behavior of the recognized object. In other words, the autonomous vehicle 100 can determine what stable state the vehicle will need to adjust to (for example, accelerate, decelerate, or stop) based on the predicted behavior of the object.
  • In addition to providing instructions to adjust the speed of the autonomous vehicle, the computing device may also provide instructions to modify the steering angle of the autonomous vehicle 100, so that the autonomous vehicle 100 follows a given trajectory and/or maintains safe lateral and longitudinal distances from objects in its vicinity (for example, cars in adjacent lanes on the road).
  • Optionally, the aforementioned autonomous vehicle 100 may be a car, truck, motorcycle, bus, boat, airplane, helicopter, lawn mower, recreational vehicle, playground vehicle, construction equipment, tram, golf cart, train, trolley, etc.; the embodiment of this application makes no special limitation.
  • Based on the above description, the embodiment of this application provides a laser point cloud processing method, which can be applied to various intelligent-driving (such as unmanned driving or assisted driving) agents (such as the autonomous driving vehicle corresponding to FIG. 6 and FIG. 7) in scenarios of motion planning (such as speed planning, driving behavior decision-making, and global path planning).
  • Please refer to Figure 8, a schematic flow chart of the laser point cloud processing method provided by the embodiment of the application, which specifically includes the following steps:
  • Step 801. A related system deployed with a laser sensor can acquire laser point clouds at each moment through the laser sensor. Whenever the laser point cloud of the current frame is obtained, the system clusters it according to a preset algorithm to obtain N coarsely classified initial clusters, where N is an integer greater than or equal to 1.
  • Specifically, the system can cluster the laser point cloud of the current frame in, but not limited to, the following way. First, the acquired laser point cloud of the current frame is projected into the OGM. Since the laser information contained in each laser point of the point cloud can be recorded as [x, y, z, intensity], representing the three-dimensional coordinates of the target position of the laser point in the laser coordinate system and the reflection intensity of the laser point, the height information of each laser point is ignored, and the [x_i, y_i] of the three-dimensional coordinates is scaled by a certain ratio and then projected into the OGM. The left picture in Figure 9 shows the laser points in the laser coordinate system, where the white dots are the collected laser points; all the laser points can be mapped into the OGM in the right image of Figure 9 (the right image shows only some of the mapped laser points), where the "black square" in the OGM is the origin of the corresponding laser coordinate system.
  • After the projection, the laser point cloud can be clustered in the OGM using a preset algorithm, such as depth-first search (Depth-First-Search, DFS). DFS is an algorithm for traversing and searching trees or graphs: it traverses the nodes along the depth of the tree, searching each branch as deep as possible. When all edges of a node V have been explored, the search backtracks to the starting node of the edge on which node V was found; this process continues until all nodes reachable from the source node have been found. If undiscovered nodes remain, one of them is selected as a new source node and the process is repeated until all nodes have been visited.
  • According to the above method, the laser point cloud can be clustered in the OGM to obtain m clusters; each cluster corresponds to one target object as determined by this method, and each cluster includes at least one coordinate value in the OGM. Clustering the laser point cloud projected into the OGM as shown in Figure 9 according to the above algorithm yields 4 coarsely classified initial clusters: the four groups of "gray squares" connected together in Figure 9 are the four coarse initial clusters.
  • Afterwards, the m clusters obtained by the clustering in the OGM in the previous step can be converted into a clustering of the laser point cloud in the vehicle coordinate system, where m indexes a cluster and n is the number of laser points it contains. Figure 10 shows the clustering results, in the vehicle coordinate system, of the clusters obtained by DFS clustering: each white convex hull is one cluster, and each cluster corresponds to one target object determined by this clustering method. It can be seen from Figure 10 that the laser point cloud is clustered into many targets, each containing many laser points.
  • At this point, each coarsely classified initial cluster obtained is considered to correspond to one target object. Whether each initial cluster actually corresponds to one target object, however, is determined by the preset algorithm adopted; with the currently common DFS algorithm, this traditional approach suffers from over-segmentation and/or under-segmentation. A sketch of this clustering step follows.
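For illustration only, a DFS-style clustering over the occupied cells of such a grid (reusing the `occ` array from the earlier `to_ogm` sketch) might look as follows; the 8-connected neighborhood of size 1 is an assumed neighborhood setting:

```python
def dfs_cluster(occ):
    """Cluster occupied cells of a boolean occupancy grid via depth-first search.

    Returns a list of clusters, each a list of (row, col) cell indices; two
    occupied cells belong to the same cluster if they touch within an
    8-connected, 1-cell neighborhood.
    """
    visited = set()
    clusters = []
    rows, cols = occ.shape
    for seed in zip(*occ.nonzero()):        # every occupied cell can seed a cluster
        if seed in visited:
            continue
        stack, cluster = [seed], []
        visited.add(seed)
        while stack:                        # iterative DFS; the stack handles backtracking
            r, c = stack.pop()
            cluster.append((r, c))
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    nb = (r + dr, c + dc)
                    if (0 <= nb[0] < rows and 0 <= nb[1] < cols
                            and occ[nb] and nb not in visited):
                        visited.add(nb)
                        stack.append(nb)
        clusters.append(cluster)
    return clusters
```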
  • Step 802. The related system deployed with the laser sensor can also perform semantic segmentation on the laser point cloud of the current frame acquired by the laser sensor to obtain the category label corresponding to each laser point in the laser point cloud; the category label represents the category to which the laser point belongs.
  • Semantic segmentation of laser point clouds is different from image semantic segmentation but follows a similar idea: a neural network with a specific structure is used to semantically classify the laser points.
  • Figure 11 illustrates the structure of PointSeg, a neural network for semantic segmentation of laser point clouds (the structure itself is common knowledge and is not described in detail here). PointSeg is a real-time, end-to-end method for semantic segmentation of target objects based on a spherical map. The input of the PointSeg network is a spherical map computed from the laser point cloud, with configuration parameters generally 64*512*5, where 64*512 is the size of the spherical map and 5 is the number of channels. The output of the PointSeg network is a label map of the same size as the input spherical map. Since the coordinates of the laser point cloud and the spherical map correspond one to one, the category label of each laser point can be obtained from the final output label map, thereby realizing semantic segmentation of the laser point cloud.
  • The category labels described in this application can be set according to user needs, or set when the vehicle leaves the factory or is upgraded; this is not limited here. Generally, the category labels can be set according to actual scenarios to categories such as "Background", "Car", "Truck", "Tram", "Biker", "Pedestrian", and other common types of key obstacles that may be encountered while driving, as determined by the actual application scenario; there is no limitation here.
  • Besides the PointSeg network, other neural networks, such as the DeepSeg network, can also be used to semantically segment the laser point cloud. The specific form of the neural network is not limited here; any neural network capable of semantic segmentation of the laser point cloud can be used.
  • In this way, each laser point in the laser point cloud of the current frame corresponds to a category label, that is, each laser point p_i corresponds to a category label l_i. A sketch of the label lookup follows.
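As a small sketch of that lookup (the one-to-one projection from laser points to spherical-map pixels is taken as given; the 64*512 shape follows the text, while the label values and example coordinates are invented):

```python
import numpy as np

H, W = 64, 512  # spherical (range-image) label map size, as stated above

def point_labels(label_map, rows, cols):
    """Read the category label l_i of each laser point p_i out of the label map.

    label_map: (64, 512) integer array output by the segmentation network.
    rows, cols: per-point spherical-map pixel coordinates of each laser point.
    """
    return label_map[rows, cols]

# Hypothetical usage: three points projecting to three pixels of the label map.
label_map = np.zeros((H, W), dtype=np.int32)
label_map[10, 100] = 2  # e.g. 2 = "Car" in some assumed label table
rows = np.array([10, 10, 40])
cols = np.array([100, 101, 300])
print(point_labels(label_map, rows, cols))  # [2 0 0]
```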
  • It should be noted that there is no fixed order in which step 801 and step 802 are performed: step 801 can be performed before step 802, step 802 can be performed before step 801, or the two can be performed at the same time; this is not specifically limited here.
  • Step 803. Since each coarsely classified initial cluster obtained in step 801 is considered to correspond to one target object, and the actual position and orientation of each target object in the vehicle coordinate system are determined on this basis, step 803 reprocesses each coarse initial cluster in combination with the category label to which each laser point belongs, to obtain one or more target clusters, where each target cluster corresponds to one real target object.
  • In summary, the acquired laser point cloud of the current frame is first clustered (for example, clustered in the OGM via the DFS algorithm) to obtain N coarsely classified initial clusters, and the laser point cloud can further be semantically segmented by a related neural network (such as the PointSeg or DeepSeg network) to obtain the category label corresponding to each laser point in the point cloud. For each initial cluster, the category label corresponding to each of its laser points is queried, and the initial cluster is reprocessed according to the queried labels (for example, re-segmented or merged) to obtain target clusters, where one target cluster corresponds to one target object. By semantically segmenting the laser point cloud and combining the result with the traditional laser clustering algorithm, over-segmentation and under-segmentation of the laser point cloud in laser perception are mitigated, thereby improving the detection performance for key obstacles; the overall flow is sketched below.
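The three steps can be summarized as the following minimal sketch; all function names are illustrative placeholders for the steps described above, not an actual API of this application.

```python
def process_frame(points):
    """Hypothetical end-to-end pipeline: cluster, label, reprocess."""
    initial_clusters = dfs_cluster_in_ogm(points)      # step 801: coarse clusters
    labels = semantic_segment(points)                  # step 802: per-point labels
    targets = []
    for cluster in initial_clusters:                   # step 803: re-segmentation
        targets.extend(resegment_by_labels(cluster, labels))
    targets = merge_suspected_oversegmentation(targets, labels)
    return targets                                     # one target cluster per object
```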
  • Denote an initial cluster obtained through steps 801-802 above as T_x, which contains y laser points {p_0 ~ p_y}, where x identifies a particular initial cluster (which may be called the first initial cluster); each of the y laser points has a determined category label. The initial cluster T_x can be processed according to a preset method to obtain at least one target cluster corresponding to T_x, including but not limited to the following manner: first, T_x is re-divided according to the category labels of its laser points, that is, the laser points in T_x with the same category label are circled together in a preset circling manner to obtain multiple divided areas; then the number of intersection points between the divided areas is obtained, and T_x is divided according to that number to obtain at least one target cluster corresponding to T_x.
  • For example, as shown in Figure 12, an initial cluster T_1 can be divided into 2 divided areas according to the label categories (which may be marked as area 1 and area 2 respectively), each divided area corresponding to one category label (area 1 and area 2 correspond to "pedestrian" and "car" respectively). The number of intersection points between the two areas is then calculated (for example, the number of intersection points shown in Figure 12 is 2, namely intersection points a0 and b0), and the initial cluster T_1 is divided according to the number of intersection points to obtain at least one target cluster corresponding to the initial cluster; one way to compute such divided areas and intersection counts is sketched below.
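As one plausible realization of the "preset circling manner" and the intersection count, each label's points can be circled with their convex hull and the crossings of the two hull boundaries counted. The sketch below assumes the shapely geometry library is available; the application itself does not prescribe a particular library or circling method.

```python
import numpy as np
from shapely.geometry import MultiPoint

def divided_areas(points, labels):
    """Circle the points of each category label with their convex hull
    (one plausible 'preset circling manner'). points: (n, 2), labels: (n,)."""
    return {l: MultiPoint(points[labels == l]).convex_hull
            for l in np.unique(labels)}

def intersection_count(area_a, area_b):
    """Count the points where the two area boundaries cross each other."""
    inter = area_a.boundary.intersection(area_b.boundary)
    if inter.is_empty:
        return 0
    return len(inter.geoms) if hasattr(inter, "geoms") else 1
```

A count of 0 with one hull inside the other corresponds to case a below, a count of 2 to case b, and a count of 4 to case c.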
  • The reprocessing of the initial cluster T_x includes, but is not limited to, the following cases:
  • Case a (the number of intersection points is 0): as shown in Figure 13, each laser point indicated by a gray dot corresponds to one category label and lies in the first divided area, which may be marked as area 1; each laser point indicated by a black dot corresponds to another category label and lies in the second divided area, which may be marked as area 2; and area 2 is a subset of area 1. In this case, the laser points in area 2 can be regarded as misclassified points, that is, the initial cluster T_x is considered to have no under-segmentation; T_x is taken as one target cluster corresponding to one target object, and the target object is the object represented by the category label corresponding to area 1.
  • Case b (the number of intersection points is 2): as shown in Figure 14, each laser point indicated by a gray dot corresponds to one category label and lies in the first divided area, which may be marked as area 1; each laser point indicated by a black dot corresponds to another category label and lies in the second divided area, which may be marked as area 2; and there are two intersection points between area 1 and area 2 (denoted a1 and b1). The processing is to use the line between intersection a1 and intersection b1 as the dividing line (the black straight line in Figure 14) and divide the initial cluster T_x into two target clusters, where each target cluster corresponds to one target object and the two target objects are the objects represented by the two category labels (a sketch of this dividing step follows below).
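A minimal sketch of the dividing step in case b: once the two intersection points a1 and b1 are known, each laser point is assigned to one of the two target clusters according to which side of the a1-b1 line it falls on. The sign convention and the treatment of points lying exactly on the line are implementation assumptions.

```python
import numpy as np

def split_by_chord(points, a, b):
    """Divide an under-segmented cluster along the straight line through the
    two boundary intersection points a and b (case b: count == 2)."""
    a = np.asarray(a, dtype=float)
    d = np.asarray(b, dtype=float) - a              # direction of the dividing line
    rel = points[:, :2] - a
    side = d[0] * rel[:, 1] - d[1] * rel[:, 0]      # z-component of the 2D cross product
    return points[side >= 0], points[side < 0]      # the two target clusters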
  • Case c (the number of intersection points is 4): here again each laser point indicated by a gray dot corresponds to one category label and lies in the first divided area (marked as area 1), while each laser point indicated by a black dot corresponds to another category label and lies in the second divided area (marked as area 2), and there are four intersection points between area 1 and area 2 (denoted a2, b2, a3, b3). The laser points in area 1 are divided into left and right parts by one pair of intersection points, a2 and b2; the left part of area 1 may be called the first part and the right part the second part. Since the number of gray-dot laser points contained in the first part is greater than the number contained in the second part, the gray-dot laser points of the second part are regarded as misclassified points, and area 1 is re-divided to obtain area 3, the area occupied by the gray-dot laser points of the first part. The number of intersection points between area 2 and area 3 is then 2, and the initial cluster T_x is re-segmented similarly to "case b" above: T_x is divided into two target clusters with the line between intersection a2 and intersection b2 as the dividing line, where each target cluster corresponds to one target object and the two target objects are the objects represented by the two category labels.
  • Cases a to c above take as an example the situation where the laser points in the initial cluster T_x carry two category labels (in which case there are only two divided areas, namely the first divided area and the second divided area). When the laser points in T_x carry more than two category labels (for example, three), the intersections between each pair of the multiple divided areas can be processed sequentially in a manner similar to the cases for the different intersection counts described above.
  • When the category labels corresponding to all laser points within at least two initial clusters are the same, those initial clusters are suspected to be over-segmented.
  • As before, denote an initial cluster obtained through steps 801-802 above as T_x, which contains y laser points {p_0 ~ p_y}, where x identifies an initial cluster; each of the y laser points has a determined category label.
  • In the suspected over-segmentation case, it can first be judged whether the fourth divided area formed by the at least two initial clusters meets a preset condition; if so, the at least two initial clusters are merged into one target cluster. Whether the fourth divided area formed by the at least two initial clusters meets the preset condition can be determined through, but not limited to, the following ways:
  • Way "a": if the size of the fourth divided area formed by at least two initial clusters with the same category label is within the preset size range, the at least two initial clusters are considered to come from the same target object and are merged into one target cluster; the merged target cluster corresponds to one real target object, namely the object represented by the category label.
  • The following example takes two initial clusters whose laser points share the same category label. As shown in Figure 18, assume the category labels corresponding to initial clusters T6 and T7 are both l_2, where l_2 is "car". If the size of the fourth divided area formed by initial clusters T6 and T7 in the vehicle coordinate system is within the real size range of a "car" (assuming the error margin has been taken into account), initial clusters T6 and T7 are considered to come from the same target object, a "car"; in this case T6 and T7 can be merged into a target cluster Ta, and Ta corresponds to the target object "car".
  • The true size ranges of the real target objects under each category label (such as adults, large trucks, cars) can be obtained from big data. For example, an adult's height is roughly 1.5 to 1.9 meters and width roughly 0.4 to 0.8 meters, so the true size range of adults can be taken as the preset size range corresponding to the category label "pedestrian" mentioned in this application. Similarly, the true size ranges of the real target objects under all category labels can be obtained, and the size range of the real target object corresponding to each category label is the preset size range mentioned above. Only when the size of the fourth divided area formed by at least two initial clusters with the same category label is within the preset size range can the at least two initial clusters be considered to come from the same target object (a sketch of this size check follows below).
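The size check of way "a" can be sketched as follows: the candidate clusters are tentatively merged and the footprint of the merged area is compared against the preset size range of the shared category label. The axis-aligned footprint and the numeric values in SIZE_RANGE are illustrative assumptions; the text itself derives the ranges from big data per label.

```python
import numpy as np

# Illustrative preset size ranges per label: (min_w, max_w, min_l, max_l) in
# meters. These numbers are example placeholders, not values from the text.
SIZE_RANGE = {"pedestrian": (0.4, 0.8, 0.4, 0.8), "car": (1.5, 2.2, 3.5, 5.5)}

def can_merge_by_size(clusters, label):
    """Way 'a': accept the merge only if the merged footprint fits the
    preset size range of the shared category label."""
    pts = np.vstack(clusters)[:, :2]
    w, l = np.sort(pts.max(axis=0) - pts.min(axis=0))   # short side, long side
    min_w, max_w, min_l, max_l = SIZE_RANGE[label]
    return min_w <= w <= max_w and min_l <= l <= max_l
```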
  • Alternatively, the true size range of the real target object under each category label can be used as a search area, and each search area is swept according to a certain moving step. If a search area contains at least two initial clusters whose category labels correspond to that search area, the at least two initial clusters are considered to come from the target object corresponding to the search area and are merged into one target cluster. The search area is not limited to one shape and can be any closed area among a circular area, a rectangular area, a square area, a trapezoidal area, a polygonal area and an irregularly shaped area; this is not specifically limited here.
  • In practice, the L-shape method can be used to estimate the size range of the fourth area formed by the at least two initial clusters to be merged. As shown in Figure 19, the laser point cloud of a preceding vehicle acquired by the laser sensor forms an "L" shape (Figure 19 shows the "L" shapes formed by two preceding vehicles). Suppose the clustering algorithm clusters such a point cloud into two initial clusters, and semantic segmentation of the laser point cloud shows that the two initial clusters come from the same category ("car"). In this case the size of an "L" shape cannot be computed directly; instead, the "L" shape can be completed into a rectangle through the L-shape method, and that rectangle can be regarded as the fourth area (a sketch follows below).
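A minimal sketch of completing the "L" shape into a rectangle: OpenCV's minimum-area rotated rectangle is used here as a stand-in for the L-shape method, since it fills an "L"-shaped footprint out to the oriented rectangle that can serve as the fourth area. The application does not specify this particular routine, so treat it as one plausible implementation.

```python
import numpy as np
import cv2  # OpenCV, assumed available

def l_shape_to_rect(points_2d):
    """Complete an 'L'-shaped 2D point cloud into its minimum-area rotated
    rectangle; the rectangle stands in for the fourth divided area."""
    (cx, cy), (w, h), angle = cv2.minAreaRect(points_2d.astype(np.float32))
    return (cx, cy), (w, h), angle   # center, size, orientation in degrees
```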
  • Way "b": if the difference between the orientation angle of the fourth divided area formed by the at least two initial clusters and the orientation angle of the first initial cluster among them is within a preset angle range (the first initial cluster may be determined from the at least two initial clusters according to a preset method, or arbitrarily selected from them, which is not specifically limited here), the at least two initial clusters are considered to come from the same target object; they can then be merged into one target cluster, and the merged target cluster corresponds to one real target object, namely the object represented by the category label.
  • The following example again takes two initial clusters whose laser points share the same category label. Let the preset angle be θ_th, the orientation angle of the target corresponding to initial cluster 1 be θ_1, the orientation angle of the target corresponding to initial cluster 2 be θ_2, and the orientation angle of the new target after tentatively merging the two targets be θ_new. The condition for judging that the two initial clusters can be successfully merged can be that the orientation of the merged target stays within θ_th of the original orientation, i.e. |θ_new - θ_1| < θ_th; if this condition is met, the two initial clusters are considered to come from the same target object and can be merged. θ_th can be set according to the actual situation and is generally set to 10°; a sketch of this check is given below.
  • Different values of θ_th can also be set according to the category label of the target object: a large-size target that has been split into segments still has a relatively large number of laser points in each segment, so the angle estimate from each segment's point cloud is more stable and accurate, and a smaller θ_th can be set for large-size target objects; conversely, a relatively larger θ_th can be set for the category labels corresponding to small-size target objects.
  • For example, very few laser points are returned from the side of a vehicle directly ahead (generally, the point clouds returned by another car driving ahead to the left, ahead to the right or straight ahead are hard to tell apart; even when the orientations of two initial clusters belonging to the same category differ by up to 90°, the two initial clusters may still come from the same target object), and the points are not continuous, which leads to over-segmentation. Such scenes can also be handled smoothly by judging the angle rationality after merging to decide whether the clusters come from the same target object.
  • Way "c" in fact requires that the fourth divided area formed by at least two initial clusters with the same category label not only satisfy the condition of way "a" (being within the preset size range) but also satisfy the condition of way "b" (the orientation-angle difference being within the preset angle range), which makes the handling of over-segmentation and under-segmentation of the laser point cloud more accurate. As shown in scene (c) of Figure 20, the initial cluster of gray dots and the initial cluster of black dots are cars driving in two lanes on the road ahead of the ego vehicle; in this case the two initial clusters should not be merged. If only the size range were judged, according to way "a" above, the two initial clusters would be merged into a target cluster Tb, thereby reintroducing the under-segmentation problem while solving the over-segmentation problem. Here the judgment of angle rationality prevents the two initial clusters from being merged, so that the under-segmentation problem is not introduced in the process of dealing with over-segmentation.
  • In short, adding the post-merge angle rationality judgment ensures that initial clusters that need to be merged are merged successfully, while identifying initial clusters that must not be merged, thereby improving the system's ability to solve the over-segmentation and under-segmentation problems.
  • There is no required order between judging whether the fourth divided area formed by at least two initial clusters with the same category label is within the preset size range and judging whether the difference between the orientation angle of the fourth divided area and the orientation angle of the first initial cluster among the at least two initial clusters is within the preset angle range; which judgment is made first can be chosen according to the actual situation and is not limited here.
  • The shapes of the various divided areas are not limited: each may be any closed area among a circular area, a rectangular area, a square area, a trapezoidal area, a polygonal area and an irregularly shaped area. This makes the embodiments of the present application more flexible in implementation.
  • Figures 21 and 22 show the effect of the embodiments in specific application scenarios. As shown by the ellipse boxes (i.e., initial clusters) in Figure 21, among the coarsely classified initial clusters originally obtained by the traditional clustering algorithm, a "car" and the "shrubs" are clustered together and output as one target object, and the "person" and "car" directly ahead of the vehicle are likewise clustered together and output as one target object; with the processing of the embodiments, each target object can be effectively separated. Similarly, the coarsely classified initial clusters originally obtained by the traditional clustering algorithm cause the "truck" driving ahead to be divided into multiple targets, a situation that the processing of the embodiments likewise improves.
  • Figure 23 is a schematic structural diagram of an environment perception system provided by an embodiment of this application. The environment perception system may include: a clustering module 2301, a semantic segmentation module 2302 and a reprocessing module 2303. The clustering module 2301 is used to cluster the acquired laser point cloud of the current frame to obtain N coarsely classified initial clusters; the semantic segmentation module 2302 is used to perform semantic segmentation on the laser point cloud to obtain the category label corresponding to each laser point in the point cloud, the category label indicating the classification category to which each laser point belongs; the reprocessing module 2303 is used to query the category label corresponding to each laser point in each of the N initial clusters (any one of which may be called the first initial cluster) and to reprocess the first initial cluster according to the category labels of its laser points to obtain target clusters, where one target cluster corresponds to one target object.
  • Specifically, the clustering module 2301 clusters the acquired laser point cloud of the current frame (for example, clusters it in the OGM via the DFS algorithm) to obtain N coarsely classified initial clusters, and the semantic segmentation module 2302 (using, for example, the PointSeg or DeepSeg network) further semantically segments the laser point cloud to obtain the category label corresponding to each laser point in the point cloud. The reprocessing module 2303 then queries the category label corresponding to each laser point and reprocesses each initial cluster according to the queried labels (for example, re-segmentation or merging) to obtain target clusters, where one target cluster corresponds to one target object. By semantically segmenting the laser point cloud and combining the result with the traditional laser clustering algorithm, over-segmentation and under-segmentation of the laser point cloud in laser perception are mitigated, thereby improving the detection performance for key obstacles.
  • The reprocessing module 2303 is specifically configured to: when there are at least two category labels among the laser points in the first initial cluster, further process the first initial cluster according to a preset method (for example, no segmentation if the number of intersection points is 0; segmentation if the number of intersection points ≥ 2) to obtain at least one target cluster corresponding to the first initial cluster. That is, the next processing step for the first initial cluster is decided by judging the kinds of category labels carried by its laser points, so as to obtain at least one target cluster corresponding to the first initial cluster.
  • The reprocessing module 2303 is specifically further used to: divide the first initial cluster according to the category labels of its laser points to obtain multiple divided areas, where any one of the multiple divided areas is the area in which the laser points of the first initial cluster belonging to the same category label are circled together in a preset circling manner; then obtain the number of intersection points between the first divided area and the second divided area among the multiple divided areas, and divide the first initial cluster according to that number to obtain at least one target cluster corresponding to the first initial cluster.
  • The reprocessing module 2303 is specifically further used to: when the number of intersection points is 0 and the second divided area is a subset of the first divided area, regard the laser points in the second divided area as misclassified points; in this case there is considered to be no under-segmentation between the first divided area and the second divided area, and the first divided area is taken as one target cluster, i.e., the first and second divided areas both correspond to the same target cluster. The first divided area, taken as a target cluster, corresponds to one target object, and the target object is the object represented by the category label corresponding to the first divided area.
  • The reprocessing module 2303 is specifically further used to: when the number of intersection points is 2, consider that under-segmentation exists between the first divided area and the second divided area, and handle it by using the connecting line between the intersection points as the dividing line to divide the first initial cluster into at least two target clusters, where each target cluster corresponds to one category label. Each resulting target cluster corresponds to one target object, and the two target objects are the objects represented by the two category labels (i.e., the category label corresponding to the laser points in the first divided area and that corresponding to the laser points in the second divided area).
  • The reprocessing module 2303 is specifically further used to: when the number of intersection points is 4, and the line between the first intersection point and the second intersection point divides the first divided area into a first part and a second part with the first part containing more laser points than the second part, regard the laser points of the second part as misclassified points; then re-divide the first divided area to obtain a third divided area that includes only the laser points of the first part, and re-segment the first initial cluster in a manner similar to the case of 2 intersection points above, i.e., divide the first initial cluster into at least two target clusters using the line between the two intersection points of the second divided area and the third divided area as the dividing line, where each target cluster corresponds to one category label. In other words, when the number of intersection points between two divided areas is 4, one of the divided areas (for example, the first divided area) is first re-divided according to one pair of intersection points to obtain a new third divided area; the number of intersection points between the third divided area and the remaining original divided area (for example, the second divided area) is then 2, and the case of 2 intersection points is handled as described above. The three re-segmentation manners above, which differ according to the pairwise intersection counts of the divided areas, also provide flexibility.
  • The reprocessing module 2303 is specifically further used to: when there are at least two initial clusters among the N initial clusters whose laser points all carry the same category label, and the fourth divided area formed by the at least two initial clusters satisfies the preset condition, merge the at least two initial clusters into one target cluster.
  • The fourth divided area formed by the at least two initial clusters meeting a preset condition includes: the size of the fourth divided area formed by the at least two initial clusters being within a preset size range, where the preset size range is the actual size of the target object identified by the category label corresponding to the laser points in the at least two initial clusters; and/or the difference between the orientation angle of the fourth divided area formed by the at least two initial clusters and the orientation angle of the first initial cluster among the at least two initial clusters being within a preset angle range.
  • Any one of the first divided area to the fourth divided area may be any closed area among a circular area, a rectangular area, a square area, a trapezoidal area, a polygonal area and an irregularly shaped area; that is, the shapes of the various divided areas are not limited, which makes the implementation of the embodiments of the present application more flexible.
  • The environment perception system described in this application can be applied to various intelligent driving agents. The intelligent driving agent may be an autonomous vehicle (such as a smart car or an intelligent connected car) or an assisted-driving vehicle, which is not specifically limited here.
  • The information interaction and execution processes between the modules/units in the environment perception system described in the embodiment corresponding to Figure 23 are based on the same concept as the method embodiment corresponding to Figure 8 in this application; for the specific content, please refer to the description of the foregoing method embodiment, which is not repeated here.
  • Figure 24 is a schematic structural diagram of an autonomous driving vehicle provided by an embodiment of this application. The autonomous driving vehicle 2400 may be deployed with the environment perception system described in the embodiment corresponding to Figure 23 (not shown in Figure 24) to implement the functions described in the embodiment corresponding to Figure 8. Since in some embodiments the autonomous driving vehicle 2400 may also include a communication function, the autonomous driving vehicle 2400 may include, in addition to the components shown in Figure 7, a receiver 2401 and a transmitter 2402, where the processor 243 may include an application processor 2431 and a communication processor 2432. In some embodiments of the present application, the receiver 2401, the transmitter 2402, the processor 243 and the memory 244 may be connected by a bus or in other ways.
  • The processor 243 controls the operation of the autonomous vehicle. The various components of the autonomous vehicle 2400 are coupled together through a bus system, which, besides a data bus, may also include a power bus, a control bus and a status signal bus; for clarity, the various buses are all referred to as the bus system in the figure.
  • The receiver 2401 can be used to receive input digital or character information and to generate signal inputs related to the relevant settings and function control of the autonomous vehicle. The transmitter 2402 can be used to output digital or character information through a first interface; the transmitter 2402 can also be used to send instructions to a disk group through the first interface to modify the data in the disk group; and the transmitter 2402 can also include a display device such as a display screen.
  • The application processor 2431 is configured to execute the laser point cloud processing method in the embodiment corresponding to Figure 8. Specifically, the application processor 2431 is used to perform the following steps: first, cluster the acquired laser point cloud of the current frame (for example, cluster the laser point cloud in the OGM via the DFS algorithm) to obtain N coarsely classified initial clusters, and semantically segment the laser point cloud through a preset neural network (such as the PointSeg or DeepSeg network) to obtain the category label corresponding to each laser point in the laser point cloud; then reprocess each initial cluster according to the queried category labels to obtain target clusters, where one target cluster corresponds to one target object.
  • The application processor 2431 is specifically configured to: when there are at least two category labels among the laser points in the first initial cluster, reprocess the first initial cluster according to a preset method to obtain at least one target cluster corresponding to the first initial cluster.
  • The application processor 2431 is specifically further configured to: divide the first initial cluster according to the category labels of its laser points to obtain multiple divided areas, where any divided area is the area in which the laser points of the first initial cluster belonging to the same category label are circled together in a preset circling manner; then obtain the number of intersection points between the first divided area and the second divided area among the multiple divided areas, and divide the first initial cluster according to that number to obtain at least one target cluster corresponding to the first initial cluster.
  • The application processor 2431 is specifically further configured to: when the number of intersection points between the first divided area and the second divided area is 0 and the second divided area is a subset of the first divided area, regard the laser points in the second divided area as misclassified points; in this case there is considered to be no under-segmentation between the first and second divided areas, and the first divided area is taken as one target cluster.
  • The application processor 2431 is specifically further configured to: when the number of intersection points between the first divided area and the second divided area is 2, consider that under-segmentation exists between the first divided area and the second divided area, and handle it by using the connecting line between the intersection points as the dividing line to divide the first initial cluster into at least two target clusters, where each target cluster corresponds to one category label.
  • The application processor 2431 is specifically further configured to: when the number of intersection points between the first divided area and the second divided area is 4, and the line between the first intersection point and the second intersection point divides the first divided area into a first part and a second part with the first part containing more laser points than the second part, regard the laser points of the second part as misclassified points; then re-divide the first divided area to obtain a third divided area that includes only the laser points of the first part, and re-segment the first initial cluster, i.e., divide the first initial cluster into at least two target clusters using the line between the two intersection points of the second divided area and the third divided area as the dividing line, where each target cluster corresponds to one category label.
  • The application processor 2431 is specifically further configured to: when there are at least two initial clusters among the N initial clusters whose laser points all carry the same category label, and the fourth divided area formed by the at least two initial clusters meets the preset condition, merge the at least two initial clusters into one target cluster.
  • The fourth divided area formed by the at least two initial clusters meeting a preset condition includes: the size of the fourth divided area formed by the at least two initial clusters being within a preset size range, where the preset size range is the actual size of the target object identified by the category label corresponding to the laser points in the at least two initial clusters; and/or the difference between the orientation angle of the fourth divided area formed by the at least two initial clusters and the orientation angle of the first initial cluster among the at least two initial clusters being within a preset angle range.
  • Any one of the first divided area to the fourth divided area may be any closed area among a circular area, a rectangular area, a square area, a trapezoidal area, a polygonal area and an irregularly shaped area.
  • An embodiment of the present application also provides a computer-readable storage medium storing a program for laser point cloud processing; when the program runs on a computer, the computer executes the steps performed by the relevant system in the method described in the embodiment shown in Figure 8.
  • An embodiment of the present application also provides a computer program product which, when run on a computer, causes the computer to execute the steps performed by the relevant system in the method described in the embodiment shown in Figure 8.
  • An embodiment of the present application also provides a circuit system including a processing circuit configured to execute the steps performed by the related system in the method described in the foregoing embodiment shown in Figure 8.
  • The related system (such as the environment perception system described in Figure 6) or the autonomous vehicle provided by the embodiments of the present application may be a chip. The chip includes a processing unit and a communication unit; the processing unit may be, for example, a processor, and the communication unit may be, for example, an input/output interface, a pin or a circuit. The processing unit can execute the computer-executable instructions stored in a storage unit, so that the chip executes the laser point cloud processing method described in the embodiment shown in Figure 8.
  • The storage unit may be a storage unit within the chip, such as a register or a cache; the storage unit may also be a storage unit located outside the chip, such as a read-only memory (ROM) or another type of static storage device capable of storing static information and instructions, or a random access memory (RAM).
  • The device embodiments described above are merely illustrative. The units described as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units; that is, they may be located in one place, or they may be distributed across multiple network units. Some or all of the modules may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • The connection relationships between the modules indicate that they have communication connections between them, which may be specifically implemented as one or more communication buses or signal lines.
  • This application can be implemented by means of software plus the necessary general-purpose hardware; it can also be implemented by dedicated hardware, including dedicated integrated circuits, dedicated CPUs, dedicated memories, dedicated components and so on. In general, all functions completed by computer programs can easily be implemented with corresponding hardware, and the specific hardware structures used to achieve the same function can be diverse, such as analog circuits, digital circuits or dedicated circuits; however, for this application, a software program implementation is the better implementation in most cases.
  • Based on this understanding, the technical solution of this application, in essence or in the part that contributes to the existing technology, can be embodied in the form of a software product. The computer software product is stored in a readable storage medium, such as a computer floppy disk, USB flash drive, mobile hard disk, ROM, RAM, magnetic disk or optical disk, and includes several instructions to make a computer device (which may be a personal computer, a server, a network device, etc.) execute the methods described in the embodiments of this application.
  • The computer program product includes one or more computer instructions. The computer may be a general-purpose computer, a special-purpose computer, a computer network or another programmable device.
  • The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server or data center to another website, computer, server or data center by wired means (such as coaxial cable, optical fiber or digital subscriber line (DSL)) or wireless means (such as infrared, radio or microwave).
  • The computer-readable storage medium may be any available medium that a computer can store, or a data storage device such as a server or data center integrated with one or more available media. The available medium may be a magnetic medium (for example, a floppy disk, hard disk or magnetic tape), an optical medium (for example, a DVD), or a semiconductor medium (for example, a solid state disk (SSD)).


Abstract

A laser point cloud processing method and related device, applicable to the field of laser perception within autonomous driving, in particular deployable on intelligent driving agents (such as smart cars and intelligent connected vehicles). The method includes: clustering the laser point cloud (for example, clustering the laser point cloud in an OGM via DFS) to obtain N coarsely classified initial clusters; performing semantic segmentation on the laser point cloud via a neural network (such as PointSeg or DeepSeg) to obtain the category label of each laser point; for each initial cluster, querying the category label corresponding to each laser point and reprocessing each initial cluster according to the queried labels (for example, re-segmenting it) to obtain target clusters corresponding to target objects. By semantically segmenting the laser point cloud and combining the result with a traditional clustering algorithm, problems such as over-segmentation and under-segmentation of the laser point cloud in laser perception are mitigated, thereby improving the detection performance for key obstacles.

Description

A laser point cloud processing method and related device

This application claims priority to Chinese Patent Application No. 202010449480.5, filed with the Chinese Patent Office on May 25, 2020 and entitled "A laser point cloud processing method and related device", which is incorporated herein by reference in its entirety.

Technical Field

This application relates to the field of laser processing, and in particular to a laser point cloud processing method and related device.

Background

Ensuring the accuracy of perception is the primary precondition for autonomous driving to proceed safely. From the sensor perspective, perception can involve multiple modules, such as a laser perception module, a visual perception module and a millimeter-wave perception module. As one of the key modules, the laser perception module is widely used in systems such as the Advanced Driver Assistant System (ADAS) and the Autonomous Driving System (ADS); it can provide the wheeled mobile device equipped with such a system (for example, an autonomous vehicle) with accurate position information of obstacles, thereby providing a solid basis for reasonable planning-and-control decisions.

The laser information received by laser perception modules such as lidars and three-dimensional laser scanners is presented in the form of a point cloud: the collection of point data on the outer surface of a measured object obtained by a measuring instrument is called a point cloud, and if the measuring instrument is a laser perception module, the resulting point cloud is called a laser point cloud (a 32-beam laser typically returns tens of thousands of laser points at a single moment). The laser information contained in a laser point cloud can be recorded as [x, y, z, intensity], representing, for each laser point, the three-dimensional coordinates of the hit target position in the laser coordinate system and the reflection intensity of the laser point. The laser point cloud is then clustered to obtain clusters; a cluster is a set of multiple laser points, and each cluster represents one target object. Finally, information such as the position, orientation and size of each target object is computed from each cluster and output to downstream modules for further processing.

Taking an autonomous vehicle as an example of a wheeled mobile device: because laser points falling between adjacent key obstacles (or laser points on a key obstacle and laser points on non-key obstacles such as road edges and bushes) are hard to distinguish, or because of occlusion, the laser point cloud may be discontinuous on one and the same target object. This causes under-segmentation and/or over-segmentation during clustering, which in turn causes the downstream tracking module of the autonomous vehicle to exhibit target id jumps, target position jumps and the like; in severe cases the vehicle may have to be taken over.
Summary

The embodiments of this application provide a laser point cloud processing method and related device. By semantically segmenting the laser point cloud and combining the result with a traditional laser clustering algorithm, problems such as over-segmentation and under-segmentation of the laser point cloud in laser perception are mitigated, which can further improve the detection performance for key obstacles.

Based on this, the embodiments of this application provide the following technical solutions:

In a first aspect, an embodiment of this application provides a laser point cloud processing method, applicable to the field of laser perception within autonomous driving, for example on intelligent driving agents (such as smart cars and intelligent connected vehicles). The method includes: first, a related system deployed with a laser sensor (such as the environment perception system of an autonomous vehicle) can acquire the laser point cloud at any moment through the laser sensor; whenever the laser point cloud of the current frame at the current moment is acquired, the related system can cluster the laser point cloud of the current frame according to a preset algorithm (such as Depth-First-Search (DFS)) to obtain N coarsely classified initial clusters, where N is an integer greater than or equal to 1. In addition, the related system deployed with the laser sensor can also perform semantic segmentation on the laser point cloud of the current frame acquired by the laser sensor (for example, through a preset neural network such as PointSeg or DeepSeg) to obtain the category label corresponding to each laser point in the laser point cloud, the category label indicating the classification category to which each laser point belongs. After obtaining the N coarsely classified initial clusters of the current frame's laser point cloud and the category label corresponding to each laser point, the system queries the category label corresponding to each laser point in each of the N initial clusters (any one of which may be called the first initial cluster), and further reprocesses each initial cluster according to the category labels of its laser points to obtain target clusters.

In the above implementation of this application, the acquired laser point cloud of the current frame is first clustered (for example, via the DFS algorithm in an Occupancy Grid Map (OGM)) to obtain N coarsely classified initial clusters, and the laser point cloud can further be semantically segmented by a related neural network (such as PointSeg or DeepSeg) to obtain the category label corresponding to each laser point in the point cloud. Finally, for each initial cluster, the category labels of its laser points are queried, and the initial cluster is reprocessed according to the queried labels (for example, re-segmented or merged) to obtain target clusters, where one target cluster corresponds to one target object. By semantically segmenting the laser point cloud and combining the result with the traditional laser clustering algorithm, over-segmentation and under-segmentation of the laser point cloud in laser perception are mitigated, thereby improving the detection performance for key obstacles.
With reference to the first aspect of the embodiments of this application, in a first implementation of the first aspect: when there are at least two category labels among the laser points in the first initial cluster (i.e., any one of the N initial clusters), the first initial cluster is further processed (e.g., segmented) according to a preset method to obtain at least one target cluster corresponding to the initial cluster.

The above implementation explains that the first initial cluster is reprocessed by judging the kinds of category labels corresponding to the laser points in the first initial cluster, so as to obtain at least one target cluster corresponding to the first initial cluster.

With reference to the first implementation of the first aspect, in a second implementation of the first aspect, the processing may specifically be: divide the first initial cluster according to the category labels corresponding to the laser points to obtain multiple divided areas, where any one of the multiple divided areas is an area in which the laser points of the initial cluster belonging to the same category label are circled together in a preset circling manner; then obtain the number of intersection points between a first divided area and a second divided area among the multiple divided areas, and further process the first initial cluster according to that number (e.g., no segmentation if the number of intersection points is 0; segmentation if the number of intersection points ≥ 2) to obtain at least one target cluster corresponding to the first initial cluster.

The above implementation details how the first initial cluster is further processed according to the preset method to obtain at least one corresponding target cluster: the first initial cluster is re-divided into areas by category label, and the number of intersection points between the divided areas is then computed; different intersection counts are handled differently, which is practical and flexible.

With reference to the second implementation of the first aspect, in a third implementation of the first aspect: when the number of intersection points between the first divided area and the second divided area is 0 and the second divided area is a subset of the first divided area, the laser points in the second divided area are regarded as misclassified points; in this case there is considered to be no under-segmentation between the first and second divided areas, and the first divided area is taken as one target cluster, i.e., the first and second divided areas both correspond to the same target cluster.

The above implementation explains that when the number of intersection points between two divided areas is 0 and the second divided area is a subset of the first divided area, the first divided area can be taken as one target cluster, which corresponds to one target object, namely the object represented by the category label corresponding to the first divided area.

With reference to the second implementation of the first aspect, in a fourth implementation of the first aspect: when the number of intersection points between the first divided area and the second divided area is 2, under-segmentation is considered to exist between them; the processing is to use the connecting line between the intersection points as the dividing line and divide the first initial cluster into at least two target clusters, where each target cluster corresponds to one category label.

The above implementation explains that when the number of intersection points between two divided areas is 2, the first initial cluster is divided into at least two target clusters, each corresponding to one target object; the two target objects are the objects represented by the two category labels (i.e., the category label corresponding to the laser points in the first divided area and that corresponding to the laser points in the second divided area).

With reference to the second implementation of the first aspect, in a fifth implementation of the first aspect: when the number of intersection points between the first divided area and the second divided area is 4, the line between the first intersection point and the second intersection point divides the first divided area into a first part and a second part, and the number of laser points contained in the first part is greater than that in the second part, the laser points contained in the second part are regarded as misclassified points. The first divided area is then re-divided to obtain a third divided area that includes only the laser points of the first part, after which the first initial cluster is re-segmented similarly to the case of 2 intersection points above, i.e., the first initial cluster is divided into at least two target clusters using the line between the two intersection points of the second divided area and the third divided area as the dividing line, where each target cluster corresponds to one category label.

The above implementation details how the first initial cluster is re-segmented when the number of intersection points between two divided areas is 4: one of the divided areas (e.g., the first divided area) is first re-divided according to one pair of intersection points to obtain a new third divided area; the number of intersection points between the third divided area and the remaining original divided area (e.g., the second divided area) is then 2, and the case of 2 intersection points between divided areas is handled as above. Moreover, the three re-segmentation manners above, which differ according to the pairwise intersection counts of the divided areas, provide flexibility.

With reference to the first aspect and the first through fifth implementations of the first aspect, in a sixth implementation of the first aspect: when there exist at least two initial clusters among the N initial clusters whose laser points all correspond to the same category label, and the fourth divided area formed by the at least two initial clusters satisfies a preset condition, the at least two initial clusters are merged into one target cluster.

The above implementation explains that when the laser points of at least two initial clusters all correspond to the same category label, the at least two initial clusters are suspected to be over-segmented; in this case it can first be judged whether the fourth divided area formed by the at least two initial clusters satisfies the preset condition, and if so, the at least two initial clusters are merged into one target cluster, which provides flexibility.

With reference to the sixth implementation of the first aspect, in a seventh implementation of the first aspect, the fourth divided area formed by the at least two initial clusters satisfying the preset condition may specifically be: the size of the fourth divided area formed by the at least two initial clusters is within a preset size range, where the preset size range is the actual size of the target object identified by the category label corresponding to the laser points in the at least two initial clusters; and/or the difference between the orientation angle of the fourth divided area formed by the at least two initial clusters and the orientation angle of the first initial cluster among the at least two initial clusters is within a preset angle range.

The above implementation gives the situations for judging whether the fourth divided area formed by the at least two initial clusters satisfies the preset condition, which provides selectivity and realizability.

With reference to the first aspect and the first through seventh implementations of the first aspect, in an eighth implementation of the first aspect, any one of the first divided area to the fourth divided area includes any closed area among a circular area, a rectangular area, a square area, a trapezoidal area, a polygonal area and an irregularly shaped area.

In the above implementation, the shapes of the various divided areas are not limited, which makes the embodiments of this application more flexible in implementation.
In a second aspect, an embodiment of this application provides an environment perception system having the function of implementing the method of the first aspect or any possible implementation of the first aspect. The function can be implemented by hardware, or by hardware executing corresponding software; the hardware or software includes one or more modules corresponding to the above function.

In a possible implementation of the second aspect, the environment perception system can be applied to an intelligent driving agent, which may be an autonomous vehicle (such as a smart car or an intelligent connected car) or an assisted-driving vehicle; this is not specifically limited here.

In a third aspect, an embodiment of this application provides an autonomous driving vehicle, which may include a memory, a processor and a bus system, where the memory is used to store a program, and the processor is used to call the program stored in the memory to execute the method of the first aspect or any possible implementation of the first aspect of the embodiments of this application.

In a fourth aspect, this application provides a computer-readable storage medium storing instructions which, when run on a computer, enable the computer to execute the method of the first aspect or any possible implementation of the first aspect.

In a fifth aspect, an embodiment of this application provides a computer program which, when run on a computer, causes the computer to execute the method of the first aspect or any possible implementation of the first aspect.
Brief Description of the Drawings

Figure 1 is a schematic diagram of a real scene and the correspondingly formed laser point cloud provided by an embodiment of this application;
Figure 2 is another schematic diagram of a real scene and the correspondingly formed laser point cloud provided by an embodiment of this application;
Figure 3 is a schematic diagram of OGMs with different resolutions provided by an embodiment of this application;
Figure 4 is a flowchart of an OGM-based laser clustering algorithm;
Figure 5 is a flowchart of a method fusing visual information to solve the under-segmentation and over-segmentation problems in target clustering;
Figure 6 is a schematic diagram of the overall architecture of an autonomous driving vehicle provided by an embodiment of this application;
Figure 7 is a schematic structural diagram of an autonomous driving vehicle provided by an embodiment of this application;
Figure 8 is a flowchart of the laser point cloud processing method provided by an embodiment of this application;
Figure 9 is a schematic diagram of mapping the laser point cloud into the OGM provided by an embodiment of this application;
Figure 10 is a schematic diagram of the clustering results, in the vehicle coordinate system, of the clusters obtained by DFS clustering after the laser point cloud is projected into the OGM, provided by an embodiment of this application;
Figure 11 is a structural diagram of the neural network PointSeg for semantic segmentation of laser point clouds provided by an embodiment of this application;
Figure 12 is a schematic diagram of dividing an initial cluster according to the category labels of the laser point cloud provided by an embodiment of this application;
Figure 13 is another schematic diagram of dividing an initial cluster according to the category labels of the laser point cloud provided by an embodiment of this application;
Figure 14 is another schematic diagram of dividing an initial cluster according to the category labels of the laser point cloud provided by an embodiment of this application;
Figure 15 is another schematic diagram of dividing an initial cluster according to the category labels of the laser point cloud provided by an embodiment of this application;
Figure 16 is another schematic diagram of dividing an initial cluster according to the category labels of the laser point cloud provided by an embodiment of this application;
Figure 17 is a schematic diagram of multiple initial clusters belonging to the same category label provided by an embodiment of this application;
Figure 18 is another schematic diagram of multiple initial clusters belonging to the same category label provided by an embodiment of this application;
Figure 19 is a schematic diagram of using the L-shape method to estimate the size range of the fourth area formed by at least two initial clusters to be merged, provided by an embodiment of this application;
Figure 20 is a schematic diagram of several common over-segmentation situations provided by an embodiment of this application;
Figure 21 is a schematic diagram of the effect of an embodiment of this application in a specific application scenario;
Figure 22 is another schematic diagram of the effect of an embodiment of this application in a specific application scenario;
Figure 23 is a schematic structural diagram of the environment perception system provided by an embodiment of this application;
Figure 24 is a schematic structural diagram of an autonomous driving vehicle provided by an embodiment of this application.
Detailed Description

The embodiments of this application provide a laser point cloud processing method and related device. By semantically segmenting the laser point cloud and combining the result with a traditional laser clustering algorithm, problems such as over-segmentation and under-segmentation of the laser point cloud in laser perception are mitigated, which can further improve the detection performance for key obstacles.

The terms "first", "second" and the like in the specification, claims and drawings of this application are used to distinguish similar objects and are not necessarily used to describe a specific order or sequence. It should be understood that terms so used are interchangeable under appropriate circumstances; this is merely the manner of distinguishing objects with the same properties adopted when describing the embodiments of this application. In addition, the terms "include" and "have" and any variations thereof are intended to cover non-exclusive inclusion, so that a process, method, system, product or device comprising a series of units is not necessarily limited to those units, but may include other units not clearly listed or inherent to the process, method, product or device.

The embodiments of this application involve a great deal of knowledge related to perception. To better understand the solutions of the embodiments of this application, the related terms and concepts that may be involved are introduced first.
Wheeled mobile device: a comprehensive system integrating environment perception, dynamic decision-making and planning, behavior control and execution, and other functions; it can also be called a wheeled mobile robot or wheeled agent, for example wheeled construction equipment, an autonomous vehicle, an assisted-driving vehicle, etc. Any device that is wheeled and mobile is a wheeled mobile device as described in this application. For ease of understanding, in the following embodiments the wheeled mobile device is described taking an autonomous vehicle as an example; the autonomous vehicle may be a car, truck, motorcycle, bus, boat, airplane, helicopter, lawn mower, recreational vehicle, amusement park vehicle, construction equipment, tram, golf cart, train, cart, etc., which the embodiments of this application do not specially limit.

Perception: in ADAS or ADS, discovering, through sensors (e.g., cameras, lidars, millimeter-wave radars), relevant information about key road obstacles in the surrounding environment of a wheeled mobile device (e.g., an autonomous vehicle); this relevant information may also be called perception information.

Planning and control: the decision system that plans and controls the driving state of the wheeled mobile device after the ADAS or ADS receives the perception information acquired by the sensors; it can also be called motion planning. By turning the instructions produced by the upper-level decision module into concrete motion trajectories to be executed by the lower-level control module, it is a key link of intelligent driving (including assisted driving and autonomous driving).

Key obstacle: also called key road obstacle, refers to vehicles, pedestrians and the like traveling on the road, as distinguished from other non-key obstacles such as roadside bushes, median strips and buildings.

Under-segmentation: the laser point cloud corresponding to one target object that is a key road obstacle (e.g., a pedestrian on the road) is clustered together with the laser point cloud corresponding to one or more other target objects (e.g., other vehicles driving on the road) and output as the point cloud of a single target object; or, the laser point cloud corresponding to one target object that is a key road obstacle (e.g., a pedestrian) is clustered together with the point cloud corresponding to a non-key obstacle (e.g., bushes, roadside buildings) and output as the point cloud of a single target object. For example, in Figure 1 the laser point cloud of "vehicle 1" in dashed box a and the point cloud of the "bushes" in dashed box b are clustered into solid box A, and the point cloud in solid box A is output as one target object; likewise, the point cloud of the "pedestrian" in dashed box c and that of "vehicle 2" in dashed box d are very close together and are also clustered and output as one target object (solid box B). Such cases, where multiple target objects are clustered and output as one target object, are all called under-segmentation.

Over-segmentation: the laser point cloud belonging to one target object is clustered into multiple target objects. For example, the point cloud of the "truck" in dashed box a of Figure 2 should be clustered into one target object, but in the actual clustering process it is clustered into two target objects (the point clouds contained in solid boxes 1 and 2 of Figure 2 each correspond to one target object). Such cases, where one target object is split into multiple target objects for output, are all called over-segmentation.
Occupancy Grid Map (OGM): a map representation commonly used by robots. Robots often use laser sensors, and the sensor data is noisy: for example, when a laser sensor is used to detect how far an obstacle ahead is from the robot, an exact value cannot be measured; at a given angle, if the accurate value is 4 meters, the obstacle may be detected at 3.9 meters at the current moment and at 4.1 meters the next, and both positions cannot simply be regarded as obstacles. The OGM is adopted to solve this problem. Figure 3 illustrates two OGMs with different resolutions: the black points are laser points, and all laser points mapped into the OGM constitute the laser point cloud. In practical applications, the OGM size is generally 300*300, i.e., composed of 300*300 small cells (grids); the size of each grid (length * width, i.e., how many meters each grid corresponds to in the vehicle coordinate system) is called the resolution of the OGM. The higher the resolution, the smaller the grid size, and the fewer laser points of the point cloud acquired by the laser sensor at a given moment fall into a particular grid: as shown in the left part of Figure 3, 4 laser points fall into the gray grid (row 6, column 11 of the left figure). Conversely, the lower the resolution, the larger the grid size, and the more laser points fall into a particular grid at the same moment: as shown in the right part of Figure 3, 9 laser points fall into the gray grid (row 4, column 7 of the right figure). For an ordinary map, a point on the map either has an obstacle or does not; in an OGM, at a particular moment, a grid with no laser point is regarded as free, and a grid with at least one laser point is regarded as occupied by an obstacle. Therefore, for a grid, the probability that it is free can be denoted p(s=1) and the probability that it has an obstacle p(s=0), the two probabilities summing to 1. The laser point clouds acquired at different moments are then mapped into the OGM and, after a series of mathematical transformations, each grid is classified as occupied or free according to its occupancy probability (one common way to maintain this probability is sketched below).
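The per-grid occupancy bookkeeping described above can be sketched with the standard log-odds update. The text only speaks of "a series of mathematical transformations", so the particular update rule and the increments below are common-practice assumptions rather than the method specified by this application.

```python
import numpy as np

L_OCC, L_FREE = 0.85, -0.4   # illustrative log-odds increments per observation

def update_ogm(log_odds, hit_cells):
    """hit_cells: boolean grid, True where at least one laser point fell this
    frame. Cells with hits accumulate occupied evidence, the rest free."""
    log_odds[hit_cells] += L_OCC
    log_odds[~hit_cells] += L_FREE
    return log_odds

def occupancy_prob(log_odds):
    """Convert accumulated log-odds back to an occupancy probability."""
    return 1.0 / (1.0 + np.exp(-log_odds))
```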
In addition, before introducing the embodiments of this application, several common existing clustering approaches for laser point clouds are briefly introduced to facilitate understanding of the embodiments.

Approach 1: an OGM-based laser clustering algorithm, whose flow is shown in Figure 4. First, the laser scan information is obtained: during operation, the laser perceives the surrounding obstacles and returns a laser point cloud to the system. Then, after the length and width of the OGM (i.e., the OGM size) and the grid resolution are set, all laser points in the laser coordinate system can be projected into the OGM. Finally, the laser point cloud is clustered according to the depth-first algorithm, a common target clustering algorithm: each time, one point is taken as the center and a neighborhood size is set; within this neighborhood it is judged whether any grid is occupied by laser points, and if so it is assigned to the same cluster, and the search continues downward centered on the neighborhood's grids until it ends. Over-segmentation and under-segmentation are usually controlled through the neighborhood size, so no single reasonable neighborhood value can be found that solves both the over-segmentation and the under-segmentation problems in target clustering at the same time.

Approach 2: fusing visual information to solve the under-segmentation and over-segmentation problems in target clustering, whose flow is shown in Figure 5. The flow from laser scanning to point cloud segmentation and clustering is similar to approach 1; the difference is an additional visual 2D detection perception part (which may be called the visual 2D detection module), i.e., visual 2D data read from video data for detection. In the same scene, the acquired laser point cloud is clustered into M clusters, each corresponding to one 3D target, while the visual 2D detection module feeds the captured images into a trained network and outputs N targets annotated with 2D boxes. The laser point cloud is then projected into the image, and the 2D boxes detected in the image are applied to re-segment the clusters, thereby solving the under-segmentation problem in target clustering. In addition, through the ids of the 2D boxes matched with the laser point cloud, it can be judged whether certain clusters may have been produced by over-segmentation (if two clusters correspond to the same visual 2D box id, the two clusters may come from the same target). Finally, the target clusters are merged through an appropriate merging strategy, thereby improving the over-segmentation problem. However, approach 2 places very high requirements on calibration: when the laser point cloud is projected into the image, the coordinates must be very precise, and slight changes in camera position greatly affect the final clustering result. Moreover, for occluded targets or night environments the visual 2D detection module cannot obtain valid images and cannot use the image to re-segment the laser point cloud, so the usage scenarios of this method are limited.
Based on the above, to solve the problems described, the embodiments of this application first provide a laser point cloud processing method, which semantically segments the laser point cloud and combines the result with a traditional laser clustering algorithm to mitigate over-segmentation and under-segmentation of the laser point cloud in laser perception, thereby further improving the detection performance for key obstacles.

The embodiments of this application are described below with reference to the accompanying drawings. A person of ordinary skill in the art knows that, as technology develops and new scenarios emerge, the technical solutions provided by the embodiments of this application are equally applicable to similar technical problems.

The laser point cloud processing method provided by the embodiments of this application can be applied in scenarios of motion planning (e.g., speed planning, driving behavior decision, global path planning) for agents of various kinds of intelligent driving (e.g., unmanned driving, assisted driving). Taking an autonomous vehicle as the agent, the overall architecture of the autonomous vehicle is described first; please refer to Figure 6, which illustrates a top-down layered architecture in which interfaces can be defined between systems for transferring data between them, so as to guarantee the real-time performance and integrity of the data. Each system is briefly introduced below:
(1) Environment perception system

Environment perception is the most fundamental part of an intelligent driving vehicle: whether making driving behavior decisions or global path planning, everything must be built on environment perception. Based on the real-time perception results of the road traffic environment, corresponding judgments, decisions and planning are carried out so that the vehicle achieves intelligent driving.

The environment perception system mainly uses various sensors to acquire relevant environment information, completing the construction of the environment model and the knowledge representation of the traffic scene. The sensors used include cameras, single-beam lidar (SICK), four-beam lidar (IBEO), three-dimensional lidar (HDL-64E), etc. Among them, the camera is mainly responsible for traffic light detection, lane line detection, road sign detection, vehicle recognition and so on; the other lidar sensors are mainly responsible for the detection, recognition and tracking of dynamic/static key obstacles, as well as the detection and extraction of non-key obstacles such as road boundaries, bush belts and surrounding buildings. For example, the laser emitted by the three-dimensional lidar generally collects external environment information at a frequency of 10 FPS and returns the laser point cloud of each moment; specifically, the point clouds acquired at each moment can be clustered so as to output information such as the position and orientation of target objects. Finally, data fusion is performed on the perception information obtained by the above sensors, which is mapped into an OGM that can express the road environment and sent to the autonomous decision system for further decision-making and planning.

(2) Autonomous decision system

The autonomous decision system is a key component of an intelligent driving vehicle. It is mainly divided into two core subsystems, behavior decision and motion planning. The behavior decision subsystem mainly runs the global planning layer to obtain the globally optimal driving route so as to clarify the specific driving task, then, according to the current real-time road information sent by the environment perception system, and based on road traffic rules and driving experience, decides on reasonable driving behavior and sends the driving behavior instructions to the motion planning subsystem. The motion planning subsystem, according to the received driving behavior instructions and the current local environment perception information, plans a feasible driving trajectory based on indicators such as safety and smoothness and sends it to the control system.

(3) Control system

The control system is also divided into two parts, the control subsystem and the execution subsystem. The control subsystem is used to convert the feasible driving trajectory produced by the autonomous decision system into specific execution instructions for each execution module and pass them to the execution subsystem; the execution subsystem, after receiving the execution instructions from the control subsystem, sends them to each controlled object and reasonably controls the vehicle's steering, braking, throttle, gear and so on, so that the vehicle drives automatically to complete the corresponding driving operations.

It should be noted that, during the driving of an autonomous vehicle, the accuracy of its driving operations mainly depends on whether the specific execution instructions produced by the control system for each execution module are accurate, which in turn depends on the autonomous decision system. The autonomous decision system in turn faces uncertainty, mainly from the following aspects: 1) uncertainty brought by the characteristics and calibration errors of the sensors in the environment perception system: different sensors have different perception mechanisms, perception ranges and corresponding error modes, and the calibration errors introduced by their installation on the autonomous vehicle are ultimately reflected in the uncertainty of the perception information; 2) uncertainty brought by the data-processing delay of the environment perception system: the road environment is complex and the amount of data is huge, so the computation required for data processing is also large, while the environment changes at every moment, which inevitably delays the data information and affects the correct judgment of the autonomous decision system; 3) different ways of processing the perception information also bring uncertainty. Taking the embodiments of this application as an example, if a traditional clustering method is used to cluster the laser point cloud, over-segmentation and/or under-segmentation problems arise; if these problems in clustering can be improved, the uncertainty of the autonomous decision system can be correspondingly reduced, thereby improving the accuracy of the specific execution instructions produced by the control system for each execution module.

It should also be noted that the overall architecture of the autonomous vehicle shown in Figure 6 is merely illustrative; in practical applications it may include more or fewer systems/subsystems or modules, and each system/subsystem or module may include multiple components, which is not limited here.
For a further understanding of this solution, based on the overall architecture of the autonomous vehicle described with Figure 6, the embodiments of this application also introduce the specific functions of the internal structures of the autonomous vehicle with reference to Figure 7, a schematic structural diagram of an autonomous vehicle provided by an embodiment of this application. The autonomous vehicle 100 is configured in a fully or partially autonomous driving mode; for example, the autonomous vehicle 100 can control itself while in the autonomous driving mode, and can, through human operation, determine the current state of the vehicle and its surrounding environment, determine the possible behavior of at least one other vehicle in the surrounding environment, determine a confidence level corresponding to the possibility of the other vehicle performing the possible behavior, and control the autonomous vehicle 100 based on the determined information. While the autonomous vehicle 100 is in the autonomous driving mode, it can also be set to operate without human interaction.

The autonomous vehicle 100 may include various subsystems, such as a travel system 102, a sensor system 104 (e.g., the camera, SICK, IBEO, lidar, etc. in Figure 6 all belong to modules in the sensor system 104), a control system 106, one or more peripheral devices 108, as well as a power supply 110, a computer system 112 and a user interface 116. Optionally, the autonomous vehicle 100 may include more or fewer subsystems, and each subsystem may include multiple components. In addition, the subsystems and components of the autonomous vehicle 100 may be interconnected by wire or wirelessly.

The travel system 102 may include components that provide powered motion for the autonomous vehicle 100. In one embodiment, the travel system 102 may include an engine 118, an energy source 119, a transmission 120 and wheels/tires 121.

The engine 118 may be an internal combustion engine, an electric motor, an air compression engine or a combination of other types of engines, for example a hybrid engine composed of a gasoline engine and an electric motor, or a hybrid engine composed of an internal combustion engine and an air compression engine. The engine 118 converts the energy source 119 into mechanical energy. Examples of the energy source 119 include gasoline, diesel, other petroleum-based fuels, propane, other compressed-gas-based fuels, ethanol, solar panels, batteries and other sources of electric power; the energy source 119 may also provide energy for other systems of the autonomous vehicle 100. The transmission 120 may transmit mechanical power from the engine 118 to the wheels 121 and may include a gearbox, a differential and a drive shaft; in one embodiment, the transmission 120 may also include other devices, such as a clutch. The drive shaft may include one or more axles that can be coupled to one or more wheels 121.

The sensor system 104 may include several sensors that sense information about the environment around the autonomous vehicle 100. For example, the sensor system 104 may include a positioning system 122 (which may be a GPS system, a BeiDou system or another positioning system), an inertial measurement unit (IMU) 124, a radar 126, a laser rangefinder 128 and a camera 130. The sensor system 104 may also include sensors that monitor the internal systems of the autonomous vehicle 100 (e.g., an in-vehicle air quality monitor, a fuel gauge, an oil temperature gauge). Sensing data from one or more of these sensors can be used to detect objects and their corresponding characteristics (position, shape, orientation, speed, etc.); such detection and recognition are key functions for the safe operation of the autonomous vehicle 100. In the embodiments of this application, the laser perception module is a very important perception module within the sensor system 104.

The positioning system 122 can be used to estimate the geographic position of the autonomous vehicle 100. The IMU 124 is used to sense position and orientation changes of the autonomous vehicle 100 based on inertial acceleration; in one embodiment, the IMU 124 may be a combination of an accelerometer and a gyroscope. The radar 126 may use radio signals to sense objects in the surrounding environment of the autonomous vehicle 100, and may specifically be a millimeter-wave radar or a lidar; in some embodiments, besides sensing objects, the radar 126 can also be used to sense the speed and/or heading of objects. The laser rangefinder 128 may use laser light to sense objects in the environment in which the autonomous vehicle 100 is located; in some embodiments, the laser rangefinder 128 may include one or more laser sources, a laser scanner and one or more detectors, as well as other system components. The camera 130 can be used to capture multiple images of the surrounding environment of the autonomous vehicle 100, and may be a still camera or a video camera.

The control system 106 controls the operation of the autonomous vehicle 100 and its components. The control system 106 may include various components, including a steering system 132, a throttle 134, a braking unit 136, a computer vision system 140, a route control system 142 and an obstacle avoidance system 144.

The steering system 132 is operable to adjust the heading of the autonomous vehicle 100; for example, in one embodiment it may be a steering wheel system. The throttle 134 is used to control the operating speed of the engine 118 and thereby the speed of the autonomous vehicle 100. The braking unit 136 is used to control the autonomous vehicle 100 to decelerate; it may use friction to slow the wheels 121. In other embodiments, the braking unit 136 may convert the kinetic energy of the wheels 121 into electric current, or may take other forms to slow the rotation of the wheels 121 so as to control the speed of the autonomous vehicle 100. The computer vision system 140 is operable to process and analyze the images captured by the camera 130 in order to recognize objects and/or features in the surrounding environment of the autonomous vehicle 100; the objects and/or features may include traffic signals, road boundaries and obstacles. The computer vision system 140 may use object recognition algorithms, Structure from Motion (SFM) algorithms, video tracking and other computer vision techniques; in some embodiments, it can be used to map the environment, track objects, estimate object speeds, and so on. The route control system 142 is used to determine the driving route and driving speed of the autonomous vehicle 100; in some embodiments, it may include a lateral planning module 1421 and a longitudinal planning module 1422, used respectively to determine the driving route and driving speed of the autonomous vehicle 100 in combination with data from the obstacle avoidance system 144, the GPS 122 and one or more predetermined maps. The obstacle avoidance system 144 is used to recognize, evaluate and avoid or otherwise negotiate obstacles in the environment of the autonomous vehicle 100; the obstacles may specifically be actual obstacles and virtual moving bodies that might collide with the autonomous vehicle 100. In one example, the control system 106 may additionally or alternatively include components other than those shown and described, or some of the components shown above may be removed.

The autonomous vehicle 100 interacts with external sensors, other vehicles, other computer systems or users through the peripheral devices 108. The peripheral devices 108 may include a wireless communication system 146, an on-board computer 148, a microphone 150 and/or a speaker 152. In some embodiments, the peripheral devices 108 provide a means for the user of the autonomous vehicle 100 to interact with the user interface 116; for example, the on-board computer 148 may provide information to the user of the autonomous vehicle 100, the user interface 116 may also operate the on-board computer 148 to receive user input, and the on-board computer 148 can be operated through a touchscreen. In other cases, the peripheral devices 108 may provide a means for the autonomous vehicle 100 to communicate with other devices inside the vehicle; for example, the microphone 150 may receive audio (e.g., voice commands or other audio input) from the user of the autonomous vehicle 100, and similarly the speaker 152 may output audio to the user. The wireless communication system 146 may communicate wirelessly with one or more devices directly or via a communication network; for example, it may use 3G cellular communication such as CDMA, EVDO or GSM/GPRS, 4G cellular communication such as LTE, or 5G cellular communication. The wireless communication system 146 may communicate using a wireless local area network (WLAN); in some embodiments, it may communicate directly with devices using an infrared link, Bluetooth or ZigBee. Other wireless protocols, such as various vehicle communication systems, are also possible: for example, the wireless communication system 146 may include one or more dedicated short range communications (DSRC) devices, which may include public and/or private data communication between vehicles and/or roadside stations.

The power supply 110 may provide power to the various components of the autonomous vehicle 100. In one embodiment, the power supply 110 may be a rechargeable lithium-ion or lead-acid battery; one or more battery packs of such batteries may be configured as a power supply to provide power to the various components of the autonomous vehicle 100. In some embodiments, the power supply 110 and the energy source 119 may be implemented together, as in some all-electric vehicles.
Some or all of the functions of the autonomous vehicle 100 are controlled by the computer system 112. The computer system 112 may include at least one processor 113 that executes instructions 115 stored in a non-transitory computer-readable medium such as the memory 114. The computer system 112 may also be multiple computing devices that control individual components or subsystems of the autonomous vehicle 100 in a distributed manner. The processor 113 may be any conventional processor, such as a commercially available central processing unit (CPU); optionally, the processor 113 may be a dedicated device such as an application specific integrated circuit (ASIC) or another hardware-based processor. Although Figure 7 functionally illustrates the processor, the memory and other components of the computer system 112 in the same block, a person of ordinary skill in the art should understand that the processor or memory may actually include multiple processors or memories that are not housed within the same physical enclosure. For example, the memory 114 may be a hard drive or another storage medium located in an enclosure different from that of the computer system 112. Therefore, a reference to the processor 113 or the memory 114 is understood to include a reference to a collection of processors or memories that may or may not operate in parallel. Rather than using a single processor to perform the steps described here, some components, such as the steering component and the deceleration component, may each have their own processor that performs only the computation related to the component-specific function.

In the various aspects described herein, the processor 113 may be located remotely from the autonomous vehicle 100 and communicate wirelessly with it. In other aspects, some of the processes described herein are executed on the processor 113 arranged inside the autonomous vehicle 100 while others are executed by a remote processor 113, including taking the steps necessary to perform a single maneuver.

In some embodiments, the memory 114 may contain instructions 115 (e.g., program logic) that can be executed by the processor 113 to perform various functions of the autonomous vehicle 100, including those described above. The memory 114 may also contain additional instructions, including instructions to send data to, receive data from, interact with and/or control one or more of the travel system 102, the sensor system 104, the control system 106 and the peripheral devices 108. Besides the instructions 115, the memory 114 may also store data, such as road maps, route information, the vehicle's position, direction and speed, and other such vehicle data, as well as other information. Such information may be used by the autonomous vehicle 100 and the computer system 112 during operation of the autonomous vehicle 100 in autonomous, semi-autonomous and/or manual modes. The user interface 116 is used to provide information to or receive information from the user of the autonomous vehicle 100; optionally, the user interface 116 may include one or more input/output devices within the set of peripheral devices 108, such as the wireless communication system 146, the on-board computer 148, the microphone 150 and the speaker 152.

The computer system 112 may control the functions of the autonomous vehicle 100 based on inputs received from the various subsystems (e.g., the travel system 102, the sensor system 104 and the control system 106) and from the user interface 116. For example, the computer system 112 may utilize inputs from the control system 106 to control the steering system 132 so as to avoid obstacles detected by the sensor system 104 and the obstacle avoidance system 144. In some embodiments, the computer system 112 is operable to provide control over many aspects of the autonomous vehicle 100 and its subsystems.

Optionally, one or more of these components may be installed separately from or associated with the autonomous vehicle 100; for example, the memory 114 may exist partially or completely separate from the autonomous vehicle 100. The above components may be communicatively coupled together in a wired and/or wireless manner.

Optionally, the above components are merely an example; in practical applications, components in the above modules may be added or removed according to actual needs, and Figure 7 should not be understood as limiting the embodiments of this application. An autonomous vehicle traveling on a road, such as the autonomous vehicle 100 above, can recognize objects in its surrounding environment to determine adjustments to its current speed. The objects may be other vehicles, traffic control devices or other types of objects. In some examples, each recognized object can be considered independently, and the respective characteristics of the object, such as its current speed, acceleration and distance from the vehicle, can be used to determine the speed to which the autonomous vehicle is to be adjusted.

Optionally, the autonomous vehicle 100 or a computing device associated with it (such as the computer system 112, the computer vision system 140 and the memory 114 of Figure 7) may predict the behavior of a recognized object based on the characteristics of the object and the state of the surrounding environment (e.g., traffic, rain, ice on the road). Optionally, the recognized objects depend on each other's behavior, so all recognized objects can also be considered together to predict the behavior of a single recognized object. The autonomous vehicle 100 can adjust its speed based on the predicted behavior of the recognized object; in other words, it can determine, based on the predicted behavior, to what stable state the vehicle needs to adjust (e.g., accelerate, decelerate or stop). In this process, other factors may also be considered to determine the speed of the autonomous vehicle 100, such as its lateral position on the road being traveled, the curvature of the road, and the proximity of static and dynamic objects. Besides providing instructions to adjust the speed of the autonomous vehicle, the computing device may also provide instructions to modify the steering angle of the autonomous vehicle 100 so that it follows a given trajectory and/or maintains safe lateral and longitudinal distances from objects near it (e.g., cars in adjacent lanes on the road).

The autonomous vehicle 100 above may be a car, truck, motorcycle, bus, boat, airplane, helicopter, lawn mower, recreational vehicle, amusement park vehicle, construction equipment, tram, golf cart, train, cart, etc., which the embodiments of this application do not specially limit.
The embodiments of this application provide a laser point cloud processing method, applicable to scenarios of motion planning (e.g., speed planning, driving behavior decision, global path planning) for agents of various kinds of intelligent driving (e.g., unmanned driving, assisted driving), such as the autonomous vehicle with the overall architecture and structural function modules corresponding to Figures 6 and 7. Please refer to Figure 8, a flowchart of the laser point cloud processing method provided by an embodiment of this application, which specifically includes:

801. Cluster the acquired laser point cloud of the current frame to obtain N coarsely classified initial clusters.

First, a related system deployed with a laser sensor (such as the environment perception system of the autonomous vehicle above) can acquire the laser point cloud at any moment through the laser sensor; whenever the laser point cloud of the current frame at the current moment is acquired, the system can cluster it according to a preset algorithm to obtain N coarsely classified initial clusters, where N is an integer greater than or equal to 1.
具体地,系统可通过但不限于如下方式对当前帧的激光点云进行聚类:首先,将获取到的当前帧的激光点云投影到OGM中,由于激光点云中的每个激光点包含的激光信息可记为[x,y,z,intensity],该激光信息表示的分别是每个激光点所打目标位置处在激光坐标系的三维坐标以及该激光点的反射强度,假设当前时刻返回至系统的激光点云为{p i},p i=[x i,y i,z i],i=1~n,其中,i为激光点云中激光点的数量,那么在投影过程中,实际上是忽略了各个激光点的高度信息,而是将三维坐标中的[x i,y i]按照一定的比例进行缩放后投影到OGM中,如图9中左图为各个激光点的俯视图,白色点为采集到的激光点,各个激光点按照设定的分辨率(即OGM中栅格的大小)可以将所有激光点映射到图9右图的OGM中(图9右图仅示意了映射过来的部分激光点),其中OGM中的“黑色方格”为对应激光坐标系的原点。之后,可通过预设算法,如,深度优先算法(Depth-First-Search,DFS),在OGM中对激光点云进行聚类。DFS是一种用于遍历、搜索树或图的算法,沿着树的深度遍历树的节点,尽可能深的搜索树的分支,当某个节点V所在边都己被探寻过,搜索将回溯到发现节点V的那条边的起始节点,这一过程一直进行到已发现从源节点可达的所有节点为止,如果还存在未被发现的节点,则再选择其中一个节点作为源节点并重复以上过程,整个进程反复进行直到所有节点都被访问为止。通过该算法,就可在OGM中实现对激光点云的聚类,得到m个聚类簇,每个聚类簇对应一个通过该方法确定出的目标物体,每个聚类簇都包括OGM图中的至少一个坐标值,如图9中被投影到OGM中的激光点云按照上述算法对激光点云进行聚类,就得到了4个粗分类的初始聚类簇,如,图9中的4块分别连在一起的“灰色方格”就分别为4个粗分类的初始聚类簇。OGM中的坐标值可表示为:{V j},V j={p i},p i=(x i,y i),这里需要注意的是,V j为每个栅格在OGM中的坐标值。在得到了激光点云在OGM中的聚类簇之后,由于激光点云在激光坐标系以及车辆坐标系上的坐标与OGM中的坐标是对应的,对上一步在OGM中进行聚类得到的m个聚类簇可以转换成在车辆坐标系对激光点云的聚类,设在车辆坐标系对激光点云的聚类结果为: {V wj},V wj={p wi},p wi=[x wi,y wi,z wi],j=1~m,i=1~n。其中,m为某个聚类簇,n为激光点的数量,如图10所示,就为激光点云投影到OGM后,经过DFS算法聚类得到的各个聚类簇在车辆坐标系下的聚类结果,其中,每一个白色的凸包就是一个聚类簇,每个聚类簇对应一个通过该聚类方式确定出的目标物体,从图10中可以看出,激光点云聚成了很多个目标,每个目标包含很多个激光点。
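For ease of understanding, the following is a minimal sketch of this coarse clustering step (in Python with NumPy; the grid resolution, the 8-connectivity, and the function names are illustrative assumptions, not part of the claimed method): each laser point is dropped onto a 2D grid cell with its height ignored, and occupied cells are then grouped into initial clusters by an iterative DFS.

```python
import numpy as np

def coarse_clusters(points, resolution=0.2):
    """points: (n, 3) array of [x, y, z]. Returns a list of clusters,
    each cluster being an array of indices into `points`."""
    occupied = {}  # grid cell (row, col) -> indices of points falling inside
    for i, (x, y, _z) in enumerate(points):   # height is ignored when projecting
        cell = (int(x / resolution), int(y / resolution))
        occupied.setdefault(cell, []).append(i)

    visited, clusters = set(), []
    for seed in occupied:
        if seed in visited:
            continue
        visited.add(seed)
        stack, members = [seed], []
        while stack:                          # iterative DFS over 8-connected cells
            r, c = stack.pop()
            members.append(occupied[(r, c)])
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    nb = (r + dr, c + dc)
                    if nb in occupied and nb not in visited:
                        visited.add(nb)
                        stack.append(nb)
        clusters.append(np.concatenate(members))
    return clusters
```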
It should be noted here that, in the above-described manner of clustering the acquired laser point cloud of the current frame, each coarsely classified initial cluster obtained is assumed to correspond to one target object, and the actual position, orientation, and other information of each target object in the vehicle coordinate system are determined accordingly. In fact, whether each initial cluster truly corresponds to one target object is determined by the preset algorithm used; with the currently widely used DFS algorithm, this traditional algorithm suffers from over-segmentation and/or under-segmentation problems.
802. Perform semantic segmentation on the laser point cloud to obtain a class label corresponding to each laser point in the laser point cloud.
The related system deployed with a laser sensor may also perform semantic segmentation on the laser point cloud of the current frame acquired by the laser sensor, to obtain a class label corresponding to each laser point in the laser point cloud, where the class label indicates the classification category to which each laser point in the laser point cloud belongs.
Semantic segmentation of a laser point cloud differs from image semantic segmentation but follows a similar idea. Generally, semantic segmentation of a laser point cloud performs semantic classification of the point cloud through a neural network with a specific structure. For ease of understanding, FIG. 11 shows the structure of PointSeg, a neural network for semantic segmentation of laser point clouds (the structure is common knowledge and is not described in detail here). PointSeg is a spherical-image-based method for real-time end-to-end semantic segmentation of target objects. The input of the PointSeg network is a spherical image computed from the laser point cloud; the configuration of the spherical image is generally 64×512×5, where 64×512 is the size of the spherical image and 5 is the number of channels. The output of the PointSeg network is a label image of the same size as the input spherical image. Since the coordinates of the laser point cloud and the spherical image correspond one to one, the class label corresponding to each laser point can be obtained from the finally output label image, thereby achieving semantic segmentation of the laser point cloud.
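As a rough illustration of how such an input can be assembled, the sketch below builds a 64×512×5 spherical image from a point cloud. The channel layout [x, y, z, intensity, range] and the vertical field-of-view bounds are common assumptions for networks of this kind, not values fixed by this application.

```python
import numpy as np

def spherical_projection(points, intensity, h=64, w=512,
                         fov_up=np.radians(3.0), fov_down=np.radians(-25.0)):
    """points: (n, 3) array of [x, y, z]; intensity: (n,) array."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1) + 1e-9          # range of each point
    yaw = np.arctan2(y, x)                             # azimuth in [-pi, pi]
    pitch = np.arcsin(z / r)                           # elevation angle
    u = ((0.5 * (1.0 - yaw / np.pi)) * w).astype(int)  # column index
    v = ((fov_up - pitch) / (fov_up - fov_down) * h).astype(int)  # row index
    u, v = np.clip(u, 0, w - 1), np.clip(v, 0, h - 1)
    img = np.zeros((h, w, 5), dtype=np.float32)
    img[v, u, 0], img[v, u, 1], img[v, u, 2] = x, y, z
    img[v, u, 3], img[v, u, 4] = intensity, r
    return img  # labels predicted on this image map back to points via (v, u)
```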
It should be noted here that the class labels described in this application may be set by the user as needed, or set when the vehicle leaves the factory or is upgraded, which is not specifically limited here. Generally, according to the actual scenario, the class labels may be set to categories of key obstacles commonly encountered while a vehicle is driving, such as "background", "car", "truck", "tram", "cyclist", and "pedestrian"; the specific labels are determined by the actual application scenario and are not limited here.
It should also be noted that, in addition to the PointSeg network, other neural networks may be used to perform semantic segmentation on the laser point cloud, such as a DeepSeg network. The specific form of the neural network is not limited here; any neural network capable of semantically segmenting a laser point cloud may be used.
It can be seen from the above that, through step 802, each laser point in the laser point cloud of the current frame can be mapped to a class label, that is, each laser point p_i corresponds to a class label l_i.
It should be noted that, in this embodiment of this application, step 801 and step 802 are not executed in a fixed order: step 801 may be executed before step 802, step 802 may be executed before step 801, or step 801 and step 802 may be executed simultaneously, which is not specifically limited here.
803. Query the class labels corresponding to the laser points in a first initial cluster, and re-process the first initial cluster according to the class labels corresponding to the laser points in the first initial cluster to obtain a target cluster.
After the N coarsely classified initial clusters of the laser point cloud of the current frame and the class label corresponding to each laser point in the laser point cloud are obtained, the system queries the class labels corresponding to the laser points in each of the N initial clusters, and further re-processes the first initial cluster (any one of the N initial clusters may be called the first initial cluster) according to the class labels corresponding to the laser points in the first initial cluster, to obtain a target cluster. It should be noted here that, in the manner of clustering the acquired laser point cloud of the current frame described in step 801 above, each coarsely classified initial cluster obtained is assumed to correspond to one target object, and the actual position, orientation, and other information of each target object in the vehicle coordinate system are determined accordingly; in fact, whether each initial cluster truly corresponds to one target object is determined by the preset algorithm used, and with the currently widely used DFS algorithm, this traditional algorithm suffers from over-segmentation and/or under-segmentation problems. Therefore, in step 803, each coarsely classified initial cluster is re-processed in combination with the class label of each laser point, to obtain one or more target clusters, each target cluster corresponding to one real target object.
In the foregoing implementation of this application, the acquired laser point cloud of the current frame is first clustered (for example, clustered in the OGM by the DFS algorithm) to obtain N coarsely classified initial clusters; further, semantic segmentation may be performed on the laser point cloud through a related neural network (for example, a PointSeg network or a DeepSeg network) to obtain the class label corresponding to each laser point in the laser point cloud; finally, for each initial cluster, the class labels corresponding to the laser points in it are queried, and each initial cluster is re-processed (for example, re-segmented or merged) according to the queried class labels, to obtain target clusters, where one target cluster corresponds to one target object. In the foregoing implementation of this application, by performing semantic segmentation on the laser point cloud and combining it with a traditional laser clustering algorithm, over-segmentation, under-segmentation, and similar problems of laser point clouds in laser perception are alleviated, thereby improving the detection performance for key obstacles.
It should be noted that, in some implementations of this application, how each coarsely classified initial cluster is specifically re-processed to obtain one or more target clusters may be, but is not limited to, the following manners.
A. When at least two different class labels exist among the class labels corresponding to the laser points in the first initial cluster, the first initial cluster is suspected of under-segmentation.
Denote an initial cluster obtained through steps 801-802 above as T_x, which contains y laser points {p_0 ~ p_y}, where x indexes a certain initial cluster (which may be called the first initial cluster), and each of these y laser points has been assigned its corresponding class label. If, after the class labels corresponding to the laser points in the initial cluster T_x are queried, it is determined that at least two different class labels exist in the initial cluster T_x, this indicates that the initial cluster T_x may be under-segmented. In this case, the initial cluster T_x may be processed according to a preset method to obtain at least one target cluster corresponding to the initial cluster T_x, including but not limited to the following manner: first, re-divide the initial cluster T_x according to the class labels of the laser points, that is, circle together the laser points with the same class label in the initial cluster T_x in a preset circling manner, thereby obtaining a plurality of divided areas; then, obtain the number of intersection points between a first divided area and a second divided area among the plurality of divided areas, and segment the initial cluster T_x according to the number of intersection points, to obtain at least one target cluster corresponding to the initial cluster T_x. For ease of understanding, refer to FIG. 12. Assume that the laser points in an initial cluster T_1 correspond to 2 class labels, "car" and "pedestrian", where the laser points labeled "car" are shown as gray points and the laser points labeled "pedestrian" are shown as black points. The initial cluster T_1 can then be re-divided by label category into 2 divided areas (denoted as area 1 and area 2), each corresponding to one class label (area 1 and area 2 correspond to "pedestrian" and "car" respectively). After that, the number of intersection points between the two areas is computed (for example, FIG. 12 shows 2 intersection points, a0 and b0), and the initial cluster T_1 is segmented according to the number of intersection points, to obtain at least one target cluster corresponding to the initial cluster.
Taking the case in which the laser points in the initial cluster T_x correspond to two class labels (in which case there are only two divided areas, the first divided area and the second divided area) as an example, several situations of re-processing the initial cluster T_x according to the number of intersection points are described below, including but not limited to the following.
a. The number of intersection points between the first divided area and the second divided area is 0.
As shown in FIG. 13, the laser points shown as gray points correspond to one class label and are located in the first divided area, which may be denoted as area 1; the laser points shown as black points correspond to another class label and are located in the second divided area, which may be denoted as area 2. It can be seen from FIG. 13 that area 2 is a subset of area 1. In this case, the laser points in area 2 may be considered misclassified points, that is, the initial cluster T_x is considered not to be under-segmented; the initial cluster T_x serves as one target cluster corresponding to one target object, and the target object is the object represented by the class label corresponding to area 1.
b. The number of intersection points between the first divided area and the second divided area is 2.
As shown in FIG. 14, the laser points shown as gray points correspond to one class label and are located in the first divided area, denoted as area 1; the laser points shown as black points correspond to another class label and are located in the second divided area, denoted as area 2. It can be seen from FIG. 14 that there are two intersection points between area 1 and area 2 (denoted a1 and b1 respectively). In this case, the initial cluster T_x is considered under-segmented. One way to handle it is to use the line connecting intersection point a1 and intersection point b1 as the dividing line (the black straight line in FIG. 14) and segment the initial cluster T_x into two target clusters, each corresponding to one target object; the two target objects are the objects represented by the two class labels respectively.
c. The number of intersection points between the first divided area and the second divided area is 4.
As shown in FIG. 15, the laser points shown as gray points correspond to one class label and are located in the first divided area, denoted as area 1; the laser points shown as black points correspond to another class label and are located in the second divided area, denoted as area 2. It can be seen from FIG. 15 that there are four intersection points between area 1 and area 2 (denoted a2, b2, a3, and b3 respectively), where the pair of intersection points a2 and b2 divides the laser points in area 1 into a left part and a right part; the left part of area 1 may be called the first part and the right part the second part. It can be seen from FIG. 15 that the number of gray laser points contained in the first part is greater than that contained in the second part, so the laser points contained in the second part are considered misclassified points. Area 1 is then re-divided to obtain area 3, which is the area occupied by the gray laser points contained in the first part. The number of intersection points between area 2 and area 3 is then 2, and the initial cluster T_x is re-segmented in a manner similar to "situation b" above, that is, with the line connecting intersection point a2 and intersection point b2 as the dividing line, the initial cluster T_x is segmented into two target clusters, each corresponding to one target object; the two target objects are the objects represented by the two class labels respectively.
It should be noted that the foregoing situations a-c are described using the example in which the laser points in the initial cluster T_x correspond to two class labels (in which case there are only two divided areas, the first divided area and the second divided area). If the laser points in the initial cluster T_x correspond to more than two class labels (for example, 3), the situations with different numbers of intersection points between each pair of the plurality of divided areas can be handled in turn in a manner similar to the above. For ease of understanding, refer to FIG. 16. Assume that the laser points in an initial cluster T_2 correspond to 3 class labels, "car", "pedestrian", and "truck", where the laser points labeled "car" are shown as gray points, those labeled "pedestrian" as black points, and those labeled "truck" as hollow points. The initial cluster T_2 can then be re-divided by label category into 3 divided areas (denoted area 1, area 2, and area 3), each corresponding to one class label (area 1, area 2, and area 3 correspond to "pedestrian", "car", and "truck" respectively). After that, the number of intersection points between each pair of areas is computed in turn, and each pair is handled according to one of the foregoing situations a-c depending on the number of intersection points, finally obtaining at least one target cluster corresponding to the initial cluster T_2. For example, the number of intersection points between area 1 and area 2 is computed first and handled according to one of the foregoing situations a-c; similarly, area 1 and area 3, and area 2 and area 3, are also handled according to one of the foregoing situations a-c depending on their numbers of intersection points, until every divided area has been compared with every other divided area in terms of the number of intersection points.
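A simplified sketch of situations a-c for the two-label case is given below, assuming the shapely library for the geometry and convex hulls as the preset circling manner (both are illustrative assumptions; which label ends up on which side of the dividing line is likewise illustrative, and situation c is only outlined in a comment).

```python
from shapely.geometry import MultiPoint

def reprocess_two_label_cluster(pa, la, pb, lb):
    """pa/pb: lists of (x, y) laser points carrying labels la/lb.
    Returns a list of (label, points) target clusters."""
    region_a = MultiPoint(pa).convex_hull            # first divided area
    region_b = MultiPoint(pb).convex_hull            # second divided area
    inter = region_a.boundary.intersection(region_b.boundary)
    crossings = [] if inter.is_empty else list(getattr(inter, "geoms", [inter]))

    if len(crossings) == 0 and region_a.contains(region_b):
        # situation a: area 2 is a subset of area 1 -> its points are
        # misclassified; keep the whole cluster as one target cluster
        return [(la, pa + pb)]
    if len(crossings) == 2:
        # situation b: the line through the two crossings is the dividing
        # line; every point of the cluster goes to the side it falls on
        (x0, y0) = (crossings[0].x, crossings[0].y)
        (x1, y1) = (crossings[1].x, crossings[1].y)
        side = lambda x, y: (x1 - x0) * (y - y0) - (y1 - y0) * (x - x0)
        pts = pa + pb
        return [(la, [p for p in pts if side(*p) >= 0]),
                (lb, [p for p in pts if side(*p) < 0])]
    # situation c (4 crossings) would first re-draw region_a around whichever
    # part keeps more of its points, reducing to the 2-crossing split above
    return [(la, pa), (lb, pb)]
```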
B. When there are at least two initial clusters among the N initial clusters in which the laser points all correspond to the same class label, the initial clusters are suspected of over-segmentation.
Denote an initial cluster obtained through steps 801-802 above as T_x, which contains y laser points {p_0 ~ p_y}, where x indexes a certain initial cluster, and each of these y laser points has been assigned its corresponding class label. When the laser points in at least two initial clusters all correspond to the same class label, these at least two initial clusters are considered suspected of over-segmentation. In this case, it may first be determined whether the fourth divided area formed by these at least two initial clusters satisfies a preset condition; if so, the at least two initial clusters are merged into one target cluster. For ease of understanding, refer to FIG. 17. Assume that the laser points in initial clusters T_3, T_4, and T_5 all correspond to the same class label l_1 (since the labels are the same, the laser points in initial clusters T_3, T_4, and T_5 are all shown as black points). It is then determined whether initial clusters T_3, T_4, and T_5 satisfy the preset condition; if so, the initial clusters T_3, T_4, and T_5 can be merged into one target cluster.
In some implementations of this application, how to determine whether the fourth divided area formed by the at least two initial clusters satisfies the preset condition may be, but is not limited to, the following manners.
a. Determine whether the size of the fourth divided area formed by the at least two initial clusters with the same class label is within a preset size range.
When the size of the fourth divided area formed by the at least two initial clusters with the same class label is within the preset size range, the at least two initial clusters are considered to come from the same target object, and the at least two initial clusters can be merged into one target cluster; the merged target cluster corresponds to one real target object, which is the object represented by the class label.
For ease of understanding, the following uses the example in which the laser points in two initial clusters correspond to the same class label. Refer to FIG. 18. Assume that the class labels corresponding to initial clusters T6 and T7 are both l_2, where l_2 is "car". If the size range of the fourth divided area jointly formed by initial clusters T6 and T7 in the vehicle coordinate system is within the real size range of a "car" (assuming error margins have been considered), initial clusters T6 and T7 are considered to come from the same target object "car"; initial clusters T6 and T7 can then be merged into one target cluster Ta, which corresponds to the target object "car".
It should be noted that, in this embodiment of this application, the real size range of the real target object under each class label (for example, an adult, a large truck, or a car) can be obtained from big data. For example, an adult is 1.5 m to 1.9 m tall and 0.4 m to 0.8 m wide, from which the real size range of an adult can be obtained as the preset size range corresponding to the class label "pedestrian" described above in this application. Similarly, the real size ranges of the real target objects under all class labels can be obtained; the size range of the real target object corresponding to each class label is the preset size range described above. Only when the size of the fourth divided area formed by the at least two initial clusters with the same class label is within the preset size range are the at least two initial clusters considered to come from the same target object.
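A hedged sketch of the size check in manner "a" follows; the SIZE_RANGES table is a made-up illustration of preset size ranges, not data from this application, and an axis-aligned extent is used in place of a full oriented bounding box for brevity.

```python
import numpy as np

# label -> (max length, max width) in meters; purely illustrative values
SIZE_RANGES = {"pedestrian": (0.8, 0.8), "car": (5.5, 2.2), "truck": (12.0, 3.0)}

def can_merge_by_size(cluster_a, cluster_b, label):
    """cluster_a/b: (n, 2) arrays of [x, y] in the vehicle coordinate system."""
    pts = np.vstack([cluster_a, cluster_b])        # the fourth divided area
    extent = pts.max(axis=0) - pts.min(axis=0)     # axis-aligned extent
    max_len, max_wid = SIZE_RANGES[label]
    return extent.max() <= max_len and extent.min() <= max_wid
```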
It should be noted that, in some implementations of this application, to improve efficiency, the real size range of the real target object under each class label may also be used as a search region, and each search region may be slid with a certain moving step. When at least two initial clusters within a search region correspond to the class label of that search region, the at least two initial clusters are considered to come from the target object corresponding to that search region, and the at least two initial clusters can then be merged into one target cluster. Note that the search region is not limited in shape; it may be any closed region among a circular region, a rectangular region, a square region, a trapezoidal region, a polygonal region, and an irregularly shaped region, which is not specifically limited here.
It should also be noted that, in some implementations of this application, an L-shape method may be used to estimate the size range of the fourth area formed by the at least two initial clusters to be tentatively merged. As shown in FIG. 19, the laser point cloud of a preceding vehicle acquired by the ego vehicle through the laser sensor forms an "L" shape (FIG. 19 shows the "L" shapes formed by two preceding vehicles). Suppose the two legs of the "L" have each been clustered by the traditional clustering algorithm into a separate initial cluster, while semantic segmentation of the laser point cloud shows that both initial clusters come from the same category (that is, "car"). The size of an "L" shape cannot be computed directly, so the L-shape method can be used to complete the "L" into a rectangle, and that rectangle can be regarded as the fourth area.
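A minimal sketch of such an L-shape completion is shown below, fitting the minimum-area rotated rectangle over the combined points of the two clusters via OpenCV's cv2.minAreaRect (the choice of OpenCV is an assumption; any rotated-rectangle fit would serve the same purpose).

```python
import numpy as np
import cv2

def l_shape_to_rectangle(cluster_a, cluster_b):
    """cluster_a/b: (n, 2) arrays of [x, y]. Returns (length, width) of the
    minimum-area rotated rectangle completing the "L" formed by both legs."""
    pts = np.vstack([cluster_a, cluster_b]).astype(np.float32)
    _center, (w, h), _angle = cv2.minAreaRect(pts)
    return max(w, h), min(w, h)
```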
b. Determine whether the difference between the heading angle of the fourth divided area formed by the at least two initial clusters with the same class label and the heading angle of the first initial cluster among the at least two initial clusters is within a preset angle range.
When the difference between the heading angle of the fourth divided area formed by the at least two initial clusters with the same class label and the heading angle of the first initial cluster among the at least two initial clusters (the first initial cluster may be determined from the at least two initial clusters according to a preset method, or may be selected arbitrarily from the at least two initial clusters, which is not specifically limited here) is within the preset angle range, the at least two initial clusters are considered to come from the same target object, and the at least two initial clusters can be merged into one target cluster; the merged target cluster corresponds to one real target object, which is the object represented by the class label.
For ease of understanding, the following again uses the example in which the laser points in two initial clusters correspond to the same class label. Assume the preset angle is θ_th, the heading angle of the target corresponding to initial cluster 1 is θ_1, the heading angle of target 2 corresponding to initial cluster 2 is θ_2, and the angle of the new target after the two targets are tentatively merged is θ_new. The condition for determining that the two initial clusters can be successfully merged may then be: |θ_1 − θ_new| ≤ θ_th or |θ_2 − θ_new| ≤ θ_th. If this condition is satisfied, the two initial clusters are considered to come from the same target object and can be merged. θ_th can be set according to the actual situation and is generally set to 10°.
It should also be noted that, in some implementations of this application, different values of θ_th may be set according to the class label corresponding to the target object. For a large target object, the fragments into which it is split contain relatively many laser points, so the angle estimate of the laser point cloud of such a fragment is more stable and accurate; therefore, a smaller θ_th may be set for large target objects, and conversely, a relatively larger θ_th may be set for the class labels corresponding to small target objects.
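The angle-consistency test of manner "b" can be sketched as follows; estimating each heading with cv2.minAreaRect is an assumption, and angle wrap-around is ignored for brevity.

```python
import numpy as np
import cv2

def heading_deg(points):
    """Rotated-rectangle angle (degrees) of an (n, 2) point set."""
    return cv2.minAreaRect(np.asarray(points, dtype=np.float32))[2]

def can_merge_by_angle(cluster_a, cluster_b, theta_th=10.0):
    # theta_th may be set per class label, smaller for large objects
    th1, th2 = heading_deg(cluster_a), heading_deg(cluster_b)
    th_new = heading_deg(np.vstack([cluster_a, cluster_b]))  # tentative merge
    # merge allowed if |theta_1 - theta_new| <= theta_th
    # or |theta_2 - theta_new| <= theta_th (wrap-around ignored)
    return min(abs(th1 - th_new), abs(th2 - th_new)) <= theta_th
```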
For ease of understanding, several common over-segmentation scenarios are described below. As shown in FIG. 20, scenario (a) is the common case in which a truck is over-segmented at the cab because the laser point cloud is discontinuous there. In this case, the angle-based judgment of the foregoing manner "b" can be used so that the two initial clusters in scenario (a) are merged, thereby solving the over-segmentation problem in that scenario. Similarly, in scenario (b) of FIG. 20, the vehicle directly ahead returns extremely few laser points from its sides (in general, the laser point clouds returned by other vehicles ahead of the ego vehicle that are driving left, right, or straight ahead cannot be distinguished; when the headings of two initial clusters belonging to the same category are at 90° to each other, the two initial clusters are considered to come from the same target object), and the point cloud is discontinuous, which leads to over-segmentation. This scenario can also be smoothly judged, via the post-merge angle plausibility check, as coming from the same target object.
c. Determine whether the size of the fourth divided area formed by the at least two initial clusters with the same class label is within the preset size range, and also determine whether the difference between the heading angle of the fourth divided area and the heading angle of the first initial cluster among the at least two initial clusters is within the preset angle range.
Manner "c" actually means that the fourth divided area formed by the at least two initial clusters with the same class label must not only satisfy the condition of manner "a" above of being within the preset size range, but also satisfy the condition of manner "b" above of the heading angle being within the preset angle range. This makes the handling of over-segmentation and under-segmentation of laser point clouds more accurate. In scenario (c) of FIG. 20, the initial cluster of gray points and the initial cluster of black points are two cars driving in two lanes on the road ahead of the ego vehicle. In this case the two initial clusters should not be merged. Without this post-merge angle plausibility check, judging only by the size range according to manner "a" above would merge the two initial clusters into one target cluster Tb, thereby reintroducing under-segmentation while solving over-segmentation. With the angle plausibility check, the merging of these two initial clusters is prevented, so that no under-segmentation is introduced while handling over-segmentation. Therefore, adding the post-merge angle plausibility check can, while ensuring that initial clusters that need to be merged are merged successfully, also identify initial clusters that cannot be merged, thereby improving the system's ability to solve over-segmentation and under-segmentation problems.
It should be noted that, in some implementations of this application, there is no fixed order between determining whether the size of the fourth divided area formed by the at least two initial clusters with the same class label is within the preset size range and determining whether the difference between the heading angle of the fourth divided area and the heading angle of the first initial cluster among the at least two initial clusters is within the preset angle range; which judgment is performed first can be chosen according to the actual situation, which is not specifically limited here.
It should also be noted that, in some implementations of this application, the shapes of the various divided areas described in the foregoing embodiments (for example, the first to fourth divided areas and the search region) are not limited; for example, each may be any closed region among a circular region, a rectangular region, a square region, a trapezoidal region, a polygonal region, and an irregularly shaped region. This makes the implementation of the embodiments of this application more flexible.
To provide a more intuitive understanding of the beneficial effects brought by the embodiments of this application, the beneficial effects are further described below with reference to FIG. 21 and FIG. 22, which show the effects of the foregoing embodiments of this application in specific application scenarios. In the elliptical boxes (that is, initial clusters) shown in FIG. 21, the coarsely classified initial clusters originally obtained by the traditional clustering algorithm caused a "car" and a "bush" to be clustered and output as one target object, and the "person" and "car" directly ahead of the ego vehicle to be clustered and output as one target object; by adopting the manners described in the foregoing embodiments of this application, the target objects can be effectively separated. As shown in FIG. 22, the coarsely classified initial clusters originally obtained by the traditional clustering algorithm caused the "truck" driving ahead to be segmented into multiple targets; by adopting the manners described in the foregoing embodiments of this application, they can be merged and output as one target.
On the basis of the embodiment corresponding to FIG. 8, to better implement the foregoing solutions of the embodiments of this application, related devices for implementing the foregoing solutions are provided below. Refer specifically to FIG. 23, which is a schematic structural diagram of an environment perception system provided in an embodiment of this application. The environment perception system can be applied to intelligent agents with various kinds of intelligent driving (for example, unmanned driving and assisted driving), such as autonomous vehicles and assisted-driving vehicles among wheeled mobile devices. The environment perception system may include a clustering module 2301, a semantic segmentation module 2302, and a re-processing module 2303. The clustering module 2301 is configured to cluster an acquired laser point cloud of the current frame to obtain N coarsely classified initial clusters. The semantic segmentation module 2302 is configured to perform semantic segmentation on the laser point cloud to obtain a class label corresponding to each laser point in the laser point cloud, where the class label indicates the classification category to which each laser point in the laser point cloud belongs. The re-processing module 2303 is configured to query the class labels corresponding to the laser points in each of the N initial clusters (which may be called the first initial cluster), and re-process the first initial cluster according to the class labels corresponding to the laser points in the first initial cluster, to obtain a target cluster, where one target cluster corresponds to one target object.
In the foregoing implementation of this application, first, the clustering module 2301 clusters the acquired laser point cloud of the current frame (for example, clusters the laser point cloud in the OGM by the DFS algorithm) to obtain N coarsely classified initial clusters; further, semantic segmentation may be performed on the laser point cloud through the semantic segmentation module 2302 (for example, a PointSeg network or a DeepSeg network) to obtain the class label corresponding to each laser point in the laser point cloud; finally, for each initial cluster, the re-processing module 2303 queries the class labels corresponding to the laser points in it and re-processes each initial cluster (for example, re-segments or merges) according to the queried class labels, to obtain target clusters, where one target cluster corresponds to one target object. In the foregoing implementation of this application, by performing semantic segmentation on the laser point cloud and combining it with a traditional laser clustering algorithm, over-segmentation, under-segmentation, and similar problems of laser point clouds in laser perception are alleviated, thereby improving the detection performance for key obstacles.
In a possible design, the re-processing module 2303 is specifically configured to: when at least two different class labels exist among the class labels corresponding to the laser points in the first initial cluster, further process the first initial cluster according to a preset method (for example, if the number of intersection points is 0, do not segment; if the number of intersection points is ≥ 2, segment), to obtain at least one target cluster corresponding to the first initial cluster.
The foregoing implementation of this application explains that the next-step processing of the first initial cluster is performed by judging the kinds of class labels corresponding to the laser points in the first initial cluster, thereby obtaining at least one target cluster corresponding to the first initial cluster.
In a possible design, the re-processing module 2303 is further specifically configured to: divide the first initial cluster according to the class labels corresponding to the laser points to obtain a plurality of divided areas, where any one of the plurality of divided areas is an area in which the laser points belonging to the same class label in the first initial cluster are circled together in a preset circling manner; then obtain the number of intersection points between a first divided area and a second divided area among the plurality of divided areas, and segment the first initial cluster according to the number of intersection points, to obtain at least one target cluster corresponding to the first initial cluster.
The foregoing implementation of this application specifically describes how the first initial cluster is further processed according to the preset method to obtain at least one target cluster corresponding to the first initial cluster: the first initial cluster is re-divided into areas according to class labels, the number of intersection points between the divided areas is then computed, and different numbers of intersection points are handled differently, which is practical and flexible.
In a possible design, the re-processing module 2303 is further specifically configured to: when the number of intersection points is 0 and the second divided area is a subset of the first divided area, consider the laser points in the second divided area to be misclassified points; in this case it is considered that no under-segmentation exists between the first divided area and the second divided area, and the first divided area serves as one target cluster, that is, the first divided area and the second divided area correspond to the same target cluster.
The foregoing implementation of this application specifically describes that, when the number of intersection points between the two divided areas is 0 and the second divided area is a subset of the first divided area, the first divided area can serve as one target cluster corresponding to one target object, and the target object is the object represented by the class label corresponding to the first divided area.
In a possible design, the re-processing module 2303 is further specifically configured to: when the number of intersection points is 2, consider that under-segmentation exists between the first divided area and the second divided area; the handling is to use the line connecting the intersection points as the dividing line and segment the first initial cluster into at least two target clusters, where each target cluster corresponds to one class label.
The foregoing implementation of this application specifically describes that, when the number of intersection points between the two divided areas is 2, the first initial cluster is segmented into at least two target clusters, each corresponding to one target object; the two target objects are the objects represented by the two class labels (that is, the class label corresponding to the laser points in the first divided area and the class label corresponding to the laser points in the second divided area) respectively.
In a possible design, the re-processing module 2303 is further specifically configured to: when the number of intersection points is 4, the line connecting a first intersection point and a second intersection point divides the first divided area into a first part and a second part, and the number of laser points contained in the first part is greater than the number of laser points contained in the second part, consider the laser points contained in the second part to be misclassified points; in this case the first divided area is re-divided to obtain a third divided area, which is an area including only the laser points in the first part; then the first initial cluster is re-segmented in a manner similar to the above case in which the number of intersection points is 2, that is, with the line connecting the two intersection points between the second divided area and the third divided area as the dividing line, the first initial cluster is segmented into at least two target clusters, where each target cluster corresponds to one class label.
The foregoing implementation of this application specifically describes how the first initial cluster is re-segmented when the number of intersection points between the two divided areas is 4: one of the divided areas (for example, the first divided area) is first re-divided according to a pair of intersection points to obtain a new third divided area; the number of intersection points between the third divided area and the other original divided area (for example, the second divided area) is then 2, and the handling is similar to the above case in which the number of intersection points between divided areas is 2. In addition, in the foregoing three cases of this application, different re-segmentation manners are adopted according to the different numbers of intersection points between pairs of divided areas, which provides flexibility.
In a possible design, the re-processing module 2303 is further specifically configured to: when there are at least two initial clusters among the N initial clusters in which the laser points all correspond to the same class label, and the fourth divided area formed by the at least two initial clusters satisfies a preset condition, merge the at least two initial clusters into one target cluster.
The foregoing implementation of this application describes that, when the laser points in at least two initial clusters all correspond to the same class label, the at least two initial clusters are considered suspected of over-segmentation; in this case it may first be determined whether the fourth divided area formed by the at least two initial clusters satisfies the preset condition, and if so, the at least two initial clusters are merged into one target cluster, which provides flexibility.
In a possible design, the fourth divided area formed by the at least two initial clusters satisfying the preset condition includes: the size of the fourth divided area formed by the at least two initial clusters is within a preset size range, where the preset size range is the actual size of the target object identified by the class label corresponding to the laser points in the at least two initial clusters; and/or, the difference between the heading angle of the fourth divided area formed by the at least two initial clusters and the heading angle of the first initial cluster among the at least two initial clusters is within a preset angle range.
The foregoing implementation of this application gives three cases for determining whether the fourth divided area formed by the at least two initial clusters satisfies the preset condition, which provides selectivity and implementability.
In a possible design, any one of the first to fourth divided areas includes any closed region among a circular region, a rectangular region, a square region, a trapezoidal region, a polygonal region, and an irregularly shaped region.
The foregoing implementation of this application does not limit the shapes of the various divided areas, which makes the implementation of the embodiments of this application more flexible.
In a possible design, the environment perception system described in this application can be applied to intelligent agents with various kinds of intelligent driving; the intelligent agent may be an autonomous vehicle (for example, an intelligent car or an intelligent connected car) or an assisted-driving vehicle, which is not specifically limited here.
The foregoing implementation of this application describes several scenarios to which the environment perception system can be applied, which provides implementability.
It should be noted that the information exchange and execution processes among the modules/units of the environment perception system described in the embodiment corresponding to FIG. 23 are based on the same idea as the method embodiment corresponding to FIG. 8 in this application; for specific content, refer to the description in the foregoing method embodiment of this application, and details are not repeated here.
In addition, an embodiment of this application further provides an autonomous vehicle. With reference to the foregoing descriptions of FIG. 6 and FIG. 7, refer to FIG. 24, which is a schematic structural diagram of the autonomous vehicle provided in an embodiment of this application. The environment perception system described in the embodiment corresponding to FIG. 23 (not shown in FIG. 24) may be deployed on the autonomous vehicle 2400 to implement the functions described in the embodiment corresponding to FIG. 8. Since in some embodiments the autonomous vehicle 2400 may also include communication functions, the autonomous vehicle 2400 may include, in addition to the components shown in FIG. 7, a receiver 2401 and a transmitter 2402, where the processor 243 may include an application processor 2431 and a communication processor 2432. In some embodiments of this application, the receiver 2401, the transmitter 2402, the processor 243, and the memory 244 may be connected through a bus or in other manners.
The processor 243 controls the operation of the autonomous vehicle. In a specific application, the components of the autonomous vehicle 2400 are coupled together through a bus system, where the bus system may include, in addition to a data bus, a power bus, a control bus, a status signal bus, and the like. However, for clarity of description, the various buses are all referred to as the bus system in the figure.
The receiver 2401 may be configured to receive input digital or character information and generate signal input related to the settings and function control of the autonomous vehicle. The transmitter 2402 may be configured to output digital or character information through a first interface; the transmitter 2402 may also be configured to send instructions to a disk pack through the first interface to modify data in the disk pack; the transmitter 2402 may also include a display device such as a display screen.
In this embodiment of this application, the application processor 2431 is configured to execute the laser point cloud processing method in the embodiment corresponding to FIG. 8. Specifically, the application processor 2431 is configured to perform the following steps: first, cluster an acquired laser point cloud of the current frame (for example, cluster the laser point cloud in the OGM by the DFS algorithm) to obtain N coarsely classified initial clusters; further, semantic segmentation may be performed on the laser point cloud through a preset neural network (for example, a PointSeg network or a DeepSeg network) to obtain the class label corresponding to each laser point in the laser point cloud; finally, for any one of the N initial clusters (which may be called the first initial cluster), query the class labels corresponding to the laser points in it, and re-process each initial cluster (for example, re-segment or merge) according to the queried class labels, to obtain target clusters, where one target cluster corresponds to one target object.
In a possible design, the application processor 2431 is specifically configured to: when at least two different class labels exist among the class labels corresponding to the laser points in the first initial cluster, re-process the first initial cluster according to a preset method, to obtain at least one target cluster corresponding to the first initial cluster.
In a possible design, the application processor 2431 is further specifically configured to: divide the first initial cluster according to the class labels corresponding to the laser points to obtain a plurality of divided areas, where any one of the plurality of divided areas is an area in which the laser points belonging to the same class label in the first initial cluster are circled together in a preset circling manner; then obtain the number of intersection points between a first divided area and a second divided area among the plurality of divided areas, and segment the first initial cluster according to the number of intersection points, to obtain at least one target cluster corresponding to the first initial cluster.
In a possible design, the application processor 2431 is further specifically configured to: when the number of intersection points between the first divided area and the second divided area is 0 and the second divided area is a subset of the first divided area, consider the laser points in the second divided area to be misclassified points; in this case it is considered that no under-segmentation exists between the first divided area and the second divided area, and the first divided area serves as one target cluster.
In a possible design, the application processor 2431 is further specifically configured to: when the number of intersection points between the first divided area and the second divided area is 2, consider that under-segmentation exists between the first divided area and the second divided area; the handling is to use the line connecting the intersection points as the dividing line and segment the first initial cluster into at least two target clusters, where each target cluster corresponds to one class label.
In a possible design, the application processor 2431 is further specifically configured to: when the number of intersection points between the first divided area and the second divided area is 4, the line connecting a first intersection point and a second intersection point divides the first divided area into a first part and a second part, and the number of laser points contained in the first part is greater than the number of laser points contained in the second part, consider the laser points contained in the second part to be misclassified points; in this case the first divided area is re-divided to obtain a third divided area, which is an area including only the laser points in the first part; then the first initial cluster is re-segmented in a manner similar to the above case in which the number of intersection points is 2, that is, with the line connecting the two intersection points between the second divided area and the third divided area as the dividing line, the first initial cluster is segmented into at least two target clusters, where each target cluster corresponds to one class label.
In a possible design, the application processor 2431 is further specifically configured to: when there are at least two initial clusters among the N initial clusters in which the laser points all correspond to the same class label, and the fourth divided area formed by the at least two initial clusters satisfies a preset condition, merge the at least two initial clusters into one target cluster.
In a possible design, the fourth divided area formed by the at least two initial clusters satisfying the preset condition includes: the size of the fourth divided area formed by the at least two initial clusters is within a preset size range, where the preset size range is the actual size of the target object identified by the class label corresponding to the laser points in the at least two initial clusters; and/or, the difference between the heading angle of the fourth divided area formed by the at least two initial clusters and the heading angle of the first initial cluster among the at least two initial clusters is within a preset angle range.
In a possible design, any one of the first to fourth divided areas includes any closed region among a circular region, a rectangular region, a square region, a trapezoidal region, a polygonal region, and an irregularly shaped region.
It should be noted that, for the specific implementation of the laser point cloud processing method executed by the application processor 2431 and the beneficial effects it brings, refer to the descriptions in the method embodiment corresponding to FIG. 8; details are not repeated here.
An embodiment of this application further provides a computer-readable storage medium storing a program for processing laser point clouds, which, when run on a computer, causes the computer to perform the steps performed by the related system in the method described in the embodiment shown in FIG. 8.
An embodiment of this application further provides a computer program product, which, when run on a computer, causes the computer to perform the steps performed by the related system in the method described in the embodiment shown in FIG. 8.
An embodiment of this application further provides a circuit system, including a processing circuit configured to perform the steps performed by the related system in the method described in the embodiment shown in FIG. 8.
It should also be noted that the related system (for example, the environment perception system described in FIG. 6) or the autonomous vehicle provided in the embodiments of this application may specifically be a chip. The chip includes a processing unit and a communication unit; the processing unit may be, for example, a processor, and the communication unit may be, for example, an input/output interface, pins, or a circuit. The processing unit may execute the computer-executable instructions stored in a storage unit, so that the chip in the server executes the laser point cloud processing method described in the embodiment shown in FIG. 8 above. Optionally, the storage unit is a storage unit within the chip, such as a register or a cache; the storage unit may also be a storage unit located outside the chip within the wireless access device, such as a read-only memory (ROM) or another type of static storage device capable of storing static information and instructions, or a random access memory (RAM).
In addition, it should be noted that the device embodiments described above are merely illustrative; the units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units, that is, they may be located in one place or distributed across multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purposes of the solutions of the embodiments. In addition, in the accompanying drawings of the device embodiments provided in this application, the connection relationships between modules indicate that they have communication connections, which may specifically be implemented as one or more communication buses or signal lines.
From the description of the foregoing implementations, a person skilled in the art can clearly understand that this application can be implemented by software plus the necessary general-purpose hardware, and certainly can also be implemented by dedicated hardware including application-specific integrated circuits, dedicated CPUs, dedicated memories, dedicated components, and the like. In general, any function performed by a computer program can easily be implemented by corresponding hardware, and the specific hardware structures used to implement the same function can also be diverse, such as analog circuits, digital circuits, or dedicated circuits. However, for this application, a software program implementation is, in more cases, the better implementation. Based on such an understanding, the technical solutions of this application, in essence or the part contributing to the prior art, can be embodied in the form of a software product. The computer software product is stored in a readable storage medium, such as a computer floppy disk, USB flash drive, removable hard disk, ROM, RAM, magnetic disk, or optical disc, and includes several instructions to cause a computer device (which may be a personal computer, a server, a network device, or the like) to perform the methods described in the embodiments of this application.
In the foregoing embodiments, the implementations may be realized in whole or in part by software, hardware, firmware, or any combination thereof. When implemented by software, they may be realized in whole or in part in the form of a computer program product.
The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions according to the embodiments of this application are generated in whole or in part. The computer may be a general-purpose computer, a dedicated computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium, or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center in a wired manner (for example, coaxial cable, optical fiber, or digital subscriber line (DSL)) or a wireless manner (for example, infrared, radio, or microwave). The computer-readable storage medium may be any usable medium that a computer can store, or a data storage device such as a server or data center integrating one or more usable media. The usable medium may be a magnetic medium (for example, a floppy disk, hard disk, or magnetic tape), an optical medium (for example, a DVD), a semiconductor medium (for example, a solid state disk (SSD)), or the like.

Claims (23)

  1. A laser point cloud processing method, comprising:
    clustering an acquired laser point cloud of a current frame to obtain N coarsely classified initial clusters, where N ≥ 1;
    performing semantic segmentation on the laser point cloud to obtain a class label corresponding to each laser point in the laser point cloud, wherein the class label indicates the classification category to which each laser point in the laser point cloud belongs; and
    querying class labels corresponding to laser points in a first initial cluster, and re-processing the first initial cluster according to the class labels corresponding to the laser points in the first initial cluster to obtain a target cluster, wherein one target cluster corresponds to one target object, and the first initial cluster is one of the N initial clusters.
  2. The method according to claim 1, wherein the re-processing the first initial cluster according to the class labels corresponding to the laser points in the first initial cluster to obtain a target cluster comprises:
    when at least two different class labels exist among the class labels corresponding to the laser points in the first initial cluster, segmenting the first initial cluster according to a preset method to obtain at least one target cluster corresponding to the first initial cluster.
  3. The method according to claim 2, wherein the segmenting the first initial cluster according to a preset method to obtain at least one target cluster corresponding to the first initial cluster comprises:
    dividing the first initial cluster according to the class labels corresponding to the laser points to obtain a plurality of divided areas, wherein any one of the plurality of divided areas is an area in which laser points belonging to the same class label in the first initial cluster are circled together in a preset circling manner; and
    obtaining a quantity of intersection points between a first divided area and a second divided area among the plurality of divided areas, and segmenting the first initial cluster according to the quantity of intersection points to obtain at least one target cluster corresponding to the first initial cluster.
  4. The method according to claim 3, wherein the segmenting the first initial cluster according to the quantity of intersection points to obtain at least one target cluster corresponding to the first initial cluster comprises:
    when the quantity of intersection points is 2, segmenting the first initial cluster into at least two target clusters with the line connecting the two intersection points as a dividing line, wherein each target cluster corresponds to one class label.
  5. The method according to claim 3, wherein the segmenting the first initial cluster according to the quantity of intersection points to obtain at least one target cluster corresponding to the first initial cluster comprises:
    when the quantity of intersection points is 4 and the line connecting a first intersection point and a second intersection point divides the first divided area into a first part and a second part, re-dividing the first divided area to obtain a third divided area, wherein the quantity of laser points contained in the first part is greater than the quantity of laser points contained in the second part, and the third divided area is an area including only the laser points in the first part; and
    segmenting the first initial cluster into at least two target clusters with the line connecting the two intersection points between the second divided area and the third divided area as a dividing line, wherein each target cluster corresponds to one class label.
  6. The method according to claim 1, wherein the re-processing the first initial cluster according to the class labels corresponding to the laser points in the first initial cluster to obtain a target cluster comprises:
    when at least two different class labels exist among the class labels corresponding to the laser points in the first initial cluster, dividing the first initial cluster according to the class labels corresponding to the laser points to obtain a plurality of divided areas, wherein any one of the plurality of divided areas is an area in which laser points belonging to the same class label in the first initial cluster are circled together in a preset circling manner; and
    if a quantity of intersection points between a first divided area and a second divided area among the plurality of divided areas is 0 and the second divided area is a subset of the first divided area, the first divided area and the second divided area correspond to the same target cluster.
  7. The method according to any one of claims 2 to 6, wherein the method further comprises:
    when there are, among the N initial clusters, at least two initial clusters in which the laser points all correspond to the same class label, and a fourth divided area formed by the at least two initial clusters satisfies a preset condition, merging the at least two initial clusters into one target cluster.
  8. The method according to claim 7, wherein the fourth divided area formed by the at least two initial clusters satisfying the preset condition comprises:
    a size of the fourth divided area formed by the at least two initial clusters being within a preset size range, wherein the preset size range is the actual size of the target object identified by the class label corresponding to the laser points in the at least two initial clusters;
    and/or,
    a difference between a heading angle of the fourth divided area formed by the at least two initial clusters and a heading angle of a first initial cluster among the at least two initial clusters being within a preset angle range.
  9. The method according to any one of claims 1 to 8, wherein any one of the first divided area to the fourth divided area comprises:
    any closed region among a circular region, a rectangular region, a square region, a trapezoidal region, a polygonal region, and an irregularly shaped region.
  10. An environment perception system, comprising:
    a clustering module, configured to cluster an acquired laser point cloud of a current frame to obtain N coarsely classified initial clusters, where N ≥ 1;
    a semantic segmentation module, configured to perform semantic segmentation on the laser point cloud to obtain a class label corresponding to each laser point in the laser point cloud, wherein the class label indicates the classification category to which each laser point in the laser point cloud belongs; and
    a re-processing module, configured to query class labels corresponding to laser points in a first initial cluster of the N initial clusters, and re-process the first initial cluster according to the class labels corresponding to the laser points in the first initial cluster to obtain a target cluster, wherein one target cluster corresponds to one target object.
  11. The system according to claim 10, wherein the re-processing module is specifically configured to:
    when at least two different class labels exist among the class labels corresponding to the laser points in the first initial cluster, segment the first initial cluster according to a preset method to obtain at least one target cluster corresponding to the first initial cluster.
  12. The system according to claim 11, wherein the re-processing module is further specifically configured to:
    divide the first initial cluster according to the class labels corresponding to the laser points to obtain a plurality of divided areas, wherein any one of the plurality of divided areas is an area in which laser points belonging to the same class label in the first initial cluster are circled together in a preset circling manner; and
    obtain a quantity of intersection points between a first divided area and a second divided area among the plurality of divided areas, and segment the first initial cluster according to the quantity of intersection points to obtain at least one target cluster corresponding to the first initial cluster.
  13. The system according to claim 12, wherein the re-processing module is further specifically configured to:
    when the quantity of intersection points is 2, segment the first initial cluster into at least two target clusters with the line connecting the two intersection points as a dividing line, wherein each target cluster corresponds to one class label.
  14. The system according to claim 12, wherein the re-processing module is further specifically configured to:
    when the quantity of intersection points is 4 and the line connecting a first intersection point and a second intersection point divides the first divided area into a first part and a second part, re-divide the first divided area to obtain a third divided area, wherein the quantity of laser points contained in the first part is greater than the quantity of laser points contained in the second part, and the third divided area is an area including only the laser points in the first part; and
    segment the first initial cluster into at least two target clusters with the line connecting the two intersection points between the second divided area and the third divided area as a dividing line, wherein each target cluster corresponds to one class label.
  15. The system according to claim 10, wherein the re-processing module is further specifically configured to:
    when at least two different class labels exist among the class labels corresponding to the laser points in the first initial cluster, divide the first initial cluster according to the class labels corresponding to the laser points to obtain a plurality of divided areas, wherein any one of the plurality of divided areas is an area in which laser points belonging to the same class label in the first initial cluster are circled together in a preset circling manner; and
    if a quantity of intersection points between a first divided area and a second divided area among the plurality of divided areas is 0 and the second divided area is a subset of the first divided area, the first divided area and the second divided area correspond to the same target cluster.
  16. The system according to any one of claims 11 to 15, wherein the re-processing module is further specifically configured to:
    when there are, among the N initial clusters, at least two initial clusters in which the laser points all correspond to the same class label, and a fourth divided area formed by the at least two initial clusters satisfies a preset condition, merge the at least two initial clusters into one target cluster.
  17. The system according to claim 16, wherein the fourth divided area formed by the at least two initial clusters satisfying the preset condition comprises:
    a size of the fourth divided area formed by the at least two initial clusters being within a preset size range, wherein the preset size range is the actual size of the target object identified by the class label corresponding to the laser points in the at least two initial clusters;
    and/or,
    a difference between a heading angle of the fourth divided area formed by the at least two initial clusters and a heading angle of a first initial cluster among the at least two initial clusters being within a preset angle range.
  18. The system according to any one of claims 10 to 17, wherein any one of the first divided area to the fourth divided area comprises:
    any closed region among a circular region, a rectangular region, a square region, a trapezoidal region, a polygonal region, and an irregularly shaped region.
  19. The system according to any one of claims 10 to 18, wherein the system is applied to an intelligent agent with intelligent driving.
  20. The system according to claim 19, wherein the intelligent agent with intelligent driving comprises an autonomous vehicle.
  21. An autonomous vehicle, comprising a processor, wherein the processor is coupled to a memory, the memory stores program instructions, and when the program instructions stored in the memory are executed by the processor, the method according to any one of claims 1 to 9 is implemented.
  22. A computer-readable storage medium, comprising a program, which, when run on a computer, causes the computer to perform the method according to any one of claims 1 to 9.
  23. A circuit system, comprising a processing circuit, wherein the processing circuit is configured to perform the method according to any one of claims 1 to 9.
PCT/CN2021/076816 2020-05-25 2021-02-19 Laser point cloud processing method and related device WO2021238306A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010449480.5A CN113792566B (zh) 2020-05-25 2020-05-25 Laser point cloud processing method and related device
CN202010449480.5 2020-05-25

Publications (1)

Publication Number Publication Date
WO2021238306A1 true WO2021238306A1 (zh) 2021-12-02

Family

ID=78745567

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/076816 WO2021238306A1 (zh) 2020-05-25 2021-02-19 一种激光点云的处理方法及相关设备

Country Status (2)

Country Link
CN (1) CN113792566B (zh)
WO (1) WO2021238306A1 (zh)


Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114639024A (zh) * 2022-03-03 2022-06-17 江苏方天电力技术有限公司 Automatic classification method for transmission line laser point clouds
CN115512099B (zh) * 2022-06-10 2023-06-02 探维科技(北京)有限公司 Laser point cloud data processing method and device

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109840454A (zh) * 2017-11-28 2019-06-04 华为技术有限公司 Target object positioning method, apparatus, storage medium, and device
CN110110802A (zh) * 2019-05-14 2019-08-09 南京林业大学 Airborne laser point cloud classification method based on high-order conditional random fields
CN110136182A (zh) * 2019-05-28 2019-08-16 北京百度网讯科技有限公司 Registration method, apparatus, device, and medium for laser point clouds and 2D images
WO2019168869A1 (en) * 2018-02-27 2019-09-06 Nvidia Corporation Real-time detection of lanes and boundaries by autonomous vehicles
CN110264468A (zh) * 2019-08-14 2019-09-20 长沙智能驾驶研究院有限公司 Point cloud data annotation, segmentation model determination, and target detection methods and related devices
CN111126182A (zh) * 2019-12-09 2020-05-08 苏州智加科技有限公司 Lane line detection method and apparatus, electronic device, and storage medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11644834B2 (en) * 2017-11-10 2023-05-09 Nvidia Corporation Systems and methods for safe and reliable autonomous vehicles
US10657388B2 (en) * 2018-03-13 2020-05-19 Honda Motor Co., Ltd. Robust simultaneous localization and mapping via removal of dynamic traffic participants


Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114445415A (zh) * 2021-12-14 2022-05-06 中国科学院深圳先进技术研究院 Drivable area segmentation method and related device
CN115079168A (zh) * 2022-07-19 2022-09-20 陕西欧卡电子智能科技有限公司 Mapping method, apparatus and device based on fusion of lidar and millimeter-wave radar
CN115079168B (zh) * 2022-07-19 2022-11-22 陕西欧卡电子智能科技有限公司 Mapping method, apparatus and device based on fusion of lidar and millimeter-wave radar
CN115985122A (zh) * 2022-10-31 2023-04-18 内蒙古智能煤炭有限责任公司 Perception method for an unmanned driving system
CN115469292A (zh) * 2022-11-01 2022-12-13 天津卡尔狗科技有限公司 Environment perception method and apparatus, electronic device, and storage medium
CN115469292B (zh) * 2022-11-01 2023-03-24 天津卡尔狗科技有限公司 Environment perception method and apparatus, electronic device, and storage medium
CN115719354A (zh) * 2022-11-17 2023-02-28 同济大学 Method and device for extracting poles based on laser point clouds
CN115719354B (zh) * 2022-11-17 2024-03-22 同济大学 Method and device for extracting poles based on laser point clouds
CN116755441A (zh) * 2023-06-19 2023-09-15 国广顺能(上海)能源科技有限公司 Obstacle avoidance method, apparatus, device, and medium for mobile robots
CN116755441B (zh) * 2023-06-19 2024-03-12 国广顺能(上海)能源科技有限公司 Obstacle avoidance method, apparatus, device, and medium for mobile robots
CN117761704A (zh) * 2023-12-07 2024-03-26 上海交通大学 Method and system for estimating relative positions of multiple robots
CN117472595A (zh) * 2023-12-27 2024-01-30 苏州元脑智能科技有限公司 Resource allocation method and apparatus, vehicle, electronic device, and storage medium
CN117472595B (zh) * 2023-12-27 2024-03-22 苏州元脑智能科技有限公司 Resource allocation method and apparatus, vehicle, electronic device, and storage medium
CN117830140A (zh) * 2024-03-04 2024-04-05 厦门中科星晨科技有限公司 Method and device for denoising foggy-weather point clouds for unmanned driving control systems
CN117830140B (zh) * 2024-03-04 2024-05-10 厦门中科星晨科技有限公司 Method and device for denoising foggy-weather point clouds for unmanned driving control systems

Also Published As

Publication number Publication date
CN113792566A (zh) 2021-12-14
CN113792566B (zh) 2024-05-17

Similar Documents

Publication Publication Date Title
WO2021238306A1 (zh) 一种激光点云的处理方法及相关设备
WO2021027568A1 (zh) 障碍物避让方法及装置
US20220332348A1 (en) Autonomous driving method, related device, and computer-readable storage medium
US11110941B2 (en) Centralized shared autonomous vehicle operational management
US11702070B2 (en) Autonomous vehicle operation with explicit occlusion reasoning
WO2021103511A1 (zh) 一种设计运行区域odd判断方法、装置及相关设备
US20180259968A1 (en) Planning for unknown objects by an autonomous vehicle
US20180259967A1 (en) Planning for unknown objects by an autonomous vehicle
WO2021147748A1 (zh) 一种自动驾驶方法及相关设备
US20220080972A1 (en) Autonomous lane change method and apparatus, and storage medium
CN112512887B (zh) 一种行驶决策选择方法以及装置
US12001517B2 (en) Positioning method and apparatus
WO2022142839A1 (zh) 一种图像处理方法、装置以及智能汽车
WO2021189210A1 (zh) 一种车辆换道方法及相关设备
WO2021218693A1 (zh) 一种图像的处理方法、网络的训练方法以及相关设备
US10836405B2 (en) Continual planning and metareasoning for controlling an autonomous vehicle
US20230136798A1 (en) Method and apparatus for detecting lane line
US20220309806A1 (en) Road structure detection method and apparatus
WO2022052881A1 (zh) 一种构建地图的方法及计算设备
WO2022033089A1 (zh) 确定检测对象的三维信息的方法及装置
YU et al. Vehicle Intelligent Driving Technology
US20230221408A1 (en) Radar multipath filter with track priors
US20230399008A1 (en) Multistatic radar point cloud formation using a sensor waveform encoding schema
WO2023102827A1 (zh) 一种路径约束方法及装置
US20230194692A1 (en) Radar tracking association with velocity matching by leveraging kinematics priors

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21814644

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21814644

Country of ref document: EP

Kind code of ref document: A1