CN112347999B - Obstacle recognition model training method, obstacle recognition method, device and system - Google Patents


Info

Publication number
CN112347999B
Authority
CN
China
Prior art keywords
point cloud
far
obstacle
information
cloud data
Prior art date
Legal status
Active
Application number
CN202110015844.3A
Other languages
Chinese (zh)
Other versions
CN112347999A (en)
Inventor
丁鲁川
Current Assignee
Suteng Innovation Technology Co Ltd
Original Assignee
Suteng Innovation Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Suteng Innovation Technology Co Ltd filed Critical Suteng Innovation Technology Co Ltd
Priority to CN202110015844.3A
Publication of CN112347999A
Application granted
Publication of CN112347999B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/23 Clustering techniques
    • G06F18/232 Non-hierarchical techniques
    • G06F18/2321 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate

Abstract

The embodiment of the application provides an obstacle recognition model training method, an obstacle recognition method, and a related device, system, and computer-readable storage medium. The obstacle recognition model training method comprises the following steps: processing at least one frame of point cloud data to obtain point cloud data corresponding to far-field potential obstacles; matching the corresponding far-field potential obstacles across the at least one frame of point cloud data to generate tracking sequences corresponding to the far-field potential obstacles; determining a traffic flow area according to the tracking sequences corresponding to the far-field potential obstacles; rasterizing the traffic flow area, and determining the traffic flow information and point cloud information corresponding to each grid; acquiring the marking information corresponding to each grid; and training the obstacle recognition model by taking the marking information, point cloud information, and traffic flow information corresponding to multiple groups of grids as sample data. By adopting the method and the device, the detection of far-field potential obstacles can be improved.

Description

Obstacle recognition model training method, obstacle recognition method, device and system
Technical Field
The application relates to the technical field of automatic driving, and in particular to an obstacle recognition model training method and an obstacle recognition method, device, and system.
Background
In the field of automatic driving, accurate obstacle detection is key to unmanned driving and is of great significance. A laser radar can generate three-dimensional information with high ranging precision and can accurately obtain target positions, which effectively improves obstacle detection; laser radars are therefore widely used in unmanned driving.
In unmanned driving, improving the detection accuracy of obstacles and further expanding the detection range are increasingly important for improving the safety of the system.
Disclosure of Invention
The embodiment of the application provides an obstacle recognition model training method, an obstacle recognition device and an obstacle recognition system, which can improve the accuracy of far-field obstacle detection and enlarge the detection range of obstacles.
In a first aspect, an embodiment of the present application provides a method for training an obstacle recognition model, including:
processing at least one frame of point cloud data to obtain point cloud data corresponding to a far-field potential obstacle; the far-field potential obstacle is an obstacle outside a preset range;
acquiring tracking information of the far-field potential obstacle, and generating a tracking sequence according to the tracking information of the far-field potential obstacle;
matching the corresponding far-field potential obstacles in the at least one frame of point cloud data to generate a tracking sequence corresponding to the far-field potential obstacles;
determining a traffic flow area according to a tracking sequence corresponding to the far-field potential obstacle;
rasterizing the traffic flow area, and determining traffic flow information and point cloud information corresponding to each grid; the traffic flow information is obtained according to the motion characteristics of the far-field potential obstacles corresponding to the grids, and the point cloud information is obtained according to the point cloud data corresponding to the grids;
acquiring mark information corresponding to each grid;
and taking the marking information, the point cloud information, and the traffic flow information corresponding to multiple groups of grids as sample data, and training an obstacle recognition model.
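The sample assembly and training step described above can be sketched as follows. This is a minimal illustration, assuming per-grid feature vectors and a generic classifier; the source does not specify the feature layout or the model architecture, so all names and dimensions here are hypothetical.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Each training sample is one grid: its features are the grid's traffic flow
# information and point cloud information, and its label is the grid's marking
# information (obstacle / not obstacle). Random data stands in for real grids.
rng = np.random.default_rng(0)
n_grids = 200
traffic_flow_features = rng.random((n_grids, 3))  # e.g. mean speed, heading, count
point_cloud_features = rng.random((n_grids, 2))   # e.g. point count, mean height
X = np.hstack([traffic_flow_features, point_cloud_features])
y = rng.integers(0, 2, n_grids)                   # marking information per grid

# Train the obstacle recognition model on the assembled sample data.
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
pred = model.predict(X)
```

The choice of a random-forest classifier is purely illustrative; any model that maps per-grid features to an obstacle label would fit the described scheme.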
In a second aspect, an embodiment of the present application provides an obstacle identification method, including:
processing at least one frame of point cloud data to obtain point cloud data corresponding to a far-field potential obstacle; the far-field potential obstacle is an obstacle outside a preset range;
acquiring tracking information of the far-field potential obstacle, and generating a tracking sequence according to the tracking information of the far-field potential obstacle;
determining a traffic flow area according to the tracking sequence of the far-field potential obstacles;
rasterizing the traffic flow area, and determining traffic flow information and point cloud information corresponding to each grid; the traffic flow information is obtained according to the motion characteristics of the far-field potential obstacles corresponding to the grids, and the point cloud information is obtained according to the point cloud data corresponding to the grids;
inputting the traffic flow information and the point cloud information corresponding to each grid into an obstacle recognition model, and outputting a recognition result; the obstacle recognition model is the model trained by the obstacle recognition model training method provided in the first aspect of the present application.
In a third aspect, an embodiment of the present application provides a historical data processing apparatus, including:
the processing module is used for processing at least one frame of point cloud data to obtain point cloud data corresponding to a far-field potential obstacle; the far-field potential obstacle is an obstacle outside a preset range;
the acquisition module is used for acquiring the tracking information of the far-field potential obstacle and generating a tracking sequence according to the tracking information of the far-field potential obstacle;
the determining module is used for determining a traffic flow area according to the tracking sequence of the far-field potential obstacles;
the rasterization module is used for rasterizing the traffic flow area and determining traffic flow information and point cloud information corresponding to each grid; the traffic flow information is obtained according to the motion characteristics of the far-field potential obstacles corresponding to the grids, and the point cloud information is obtained according to the point cloud data corresponding to the grids;
the marking module is used for acquiring marking information corresponding to each grid;
and the training module is used for training the obstacle recognition model by taking the marking information corresponding to the grids and the point cloud information and the traffic flow information corresponding to the grids as sample data.
In a fourth aspect, an embodiment of the present application provides a historical data processing apparatus, including:
the processing module is used for processing at least one frame of point cloud data to obtain point cloud data corresponding to a far-field potential obstacle; the far-field potential obstacle is an obstacle outside a preset range;
the acquisition module is used for acquiring the tracking information of the far-field potential obstacle and generating a tracking sequence according to the tracking information of the far-field potential obstacle;
the determining module is used for determining a traffic flow area according to the tracking sequence corresponding to the far-field potential obstacle;
the rasterization module is used for rasterizing the traffic flow area and determining traffic flow information and point cloud information corresponding to each grid; the traffic flow information is obtained according to the motion characteristics of the far-field potential obstacles corresponding to the grids, and the point cloud information is obtained according to the point cloud data corresponding to the grids;
the identification module is used for inputting the traffic flow information and the point cloud information corresponding to each grid into an obstacle recognition model and outputting a recognition result; the obstacle recognition model is the model trained by the obstacle recognition model training method provided in the first aspect of the present application.
In a fifth aspect, an embodiment of the present application provides an obstacle identification system, including:
the sensing and sensing device is used for collecting point cloud data and transmitting the point cloud data to the historical data processing device and the vehicle-mounted terminal;
the historical data processing device is used for processing the point cloud data transmitted by the perception sensing device or the stored historical point cloud data to obtain an identification model of a far-field potential obstacle;
the vehicle-mounted terminal is used for receiving the point cloud data transmitted by the perception sensing device, identifying near-field obstacles and cropping the far-field point cloud, processing the far-field point cloud to obtain the tracking sequences and the traffic flow area of far-field potential obstacles, rasterizing the traffic flow area, and determining the traffic flow information and point cloud information corresponding to each grid;
the vehicle-mounted terminal is also used for sending the traffic flow information and the point cloud information corresponding to each grid to the historical data processing device;
the historical data processing device is further used for obtaining a far-field potential obstacle recognition result by adopting the far-field potential obstacle recognition model and sending the far-field potential obstacle recognition result to the vehicle-mounted terminal;
and the vehicle-mounted terminal is also used for combining the recognition result of the near-field obstacle and the recognition result of the far-field potential obstacle and outputting a control instruction of the vehicle.
In a sixth aspect, an embodiment of the present application provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, it implements the method provided in the first aspect or the second aspect of the embodiments of the present application.
The beneficial effects brought by the technical scheme provided by some embodiments of the application at least comprise:
in one or more embodiments of the present application, point cloud data corresponding to far-field potential obstacles is obtained by performing far-field cropping on the point cloud data, a traffic flow area is determined from that point cloud data, and the traffic flow area is rasterized to obtain the traffic flow information and point cloud information corresponding to each grid. The marking information corresponding to each grid is obtained at the same time. The obstacle recognition model is then obtained by training on a large amount of traffic flow information and point cloud information from grids whose marking information is known. The obstacle recognition model can improve both the accuracy and the range of far-field potential obstacle detection, which in turn improves system safety in automatic driving scenarios.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the embodiments are briefly described below. The drawings in the following description are only some embodiments of the present application; other drawings can be obtained from them by those skilled in the art without creative effort.
Fig. 1 is a schematic view of an application scenario of obstacle identification according to an embodiment of the present application;
FIG. 2A is a schematic structural diagram of an autonomous vehicle according to an embodiment of the present disclosure;
FIG. 2B is a schematic structural diagram of another autonomous vehicle provided in an embodiment of the present application;
fig. 3 is a schematic diagram of an architecture of an obstacle identification system according to an embodiment of the present application;
fig. 4 is a schematic flowchart of a method for training an obstacle recognition model according to an embodiment of the present disclosure;
fig. 5 is a schematic flowchart of another obstacle recognition model training method according to an embodiment of the present disclosure;
fig. 6 is a schematic flowchart of an obstacle identification method according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of a historical data processing apparatus according to an embodiment of the present application;
FIG. 8 is a schematic structural diagram of another historical data processing apparatus according to an embodiment of the present disclosure;
FIG. 9 is a schematic structural diagram of another historical data processing apparatus according to an embodiment of the present disclosure;
fig. 10 is a schematic structural diagram of another historical data processing apparatus according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application.
The terms "first," "second," "third," and the like in the description and claims of this application and in the above-described drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may include other steps or elements not listed or inherent to such a process, method, article, or apparatus. The specific meaning of the above terms in the present application can be understood in a specific case by those of ordinary skill in the art. Further, in the description of the present application, "a plurality" means two or more unless otherwise specified. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. The character "/" generally indicates that the associated objects before and after it are in an "or" relationship.
Fig. 1 schematically illustrates an application scenario of obstacle identification according to an embodiment of the present application.
As shown in fig. 1, an autonomous vehicle 10 travels on a road at an average speed V1; a vehicle ahead of the vehicle 10 travels at a speed V2, an oncoming vehicle travels at a speed V3, and a pedestrian on the roadside walks at a speed V4. Fixed objects such as trees and buildings stand on both sides of the road. In the embodiment of the present application, moving vehicles and pedestrians can be regarded as moving obstacles around the vehicle 10, and stationary objects such as trees and buildings can be regarded as stationary obstacles. Taking a mechanical lidar as an example, its detection range is generally a circular area centred on the lidar with radius R. Fig. 1 illustrates the detection range of a mechanical lidar mounted at the right front corner of the vehicle 10; the detection range is the area covered by the gray circle in the figure.
The autonomous vehicle 10 of the present application may include a perception sensing device and a vehicle-mounted terminal. The perception sensing device comprises one or more laser radars; when it comprises a plurality of laser radars, these may form a laser radar system. Laser radars can generally be arranged at the four corners of a vehicle, at the head, the tail, the doors, near the roof, and so on; the arrangement position of the laser radar is not limited in this application. Fig. 2A shows an exemplary laser radar system composed of three laser radars, arranged on the roof (101) and on both sides of the roof (102, 103) of the vehicle 10. Fig. 2B shows another example composed of five laser radars, arranged on the roof (1031), on both sides of the vehicle body (1032, 1033), and at the front (1034) and rear (1035) of the vehicle 10. It is to be understood that a laser radar system may include a greater or lesser number of laser radars, and the laser radars may also be distributed at other positions of the vehicle; this is not limited in this application.
In the following, taking the laser radar system shown in fig. 2A as an example, the laser radars (101, 102, and 103) may be used to collect point cloud data in the radiation range (around the vehicle) and send the point cloud data to the vehicle-mounted terminal. Wherein the laser radar 101 of the roof can be used for detecting obstacles in a longer distance range; the laser radars 102 and 103 located on both sides of the roof may be used to detect obstacles in the ground near the vehicle body. The vehicle-mounted terminal can process the point cloud data sent by the laser radars (101, 102 and 103) to identify the obstacle.
In the prior art, machine learning methods are usually adopted to identify obstacles. However, such methods place high demands on the density of the collected point cloud: if the point cloud is relatively sparse, obstacles cannot be accurately identified, which limits the detection precision and detection range of the laser radar or laser radar system.
Fig. 3 schematically illustrates a structural diagram of an obstacle identification system provided in an embodiment of the present application. As shown in fig. 3, the obstacle recognition system 300 may include at least: historical data processing device 310, perception sensing device 320 and vehicle-mounted terminal 330.
Wherein: the historical data processing device 310 is used for processing the received point cloud data transmitted by the perception sensing device 320, or a large amount of stored historical point cloud data, to obtain a recognition model for far-field potential obstacles.
The vehicle-mounted terminal 330 is configured to receive point cloud data acquired in real time by the perception sensing device 320, identify near-field obstacles and crop the far-field point cloud, process the far-field point cloud to obtain the tracking sequences and traffic flow area of far-field potential obstacles, rasterize the traffic flow area, and determine the traffic flow information and point cloud information corresponding to each grid.
The vehicle-mounted terminal 330 transmits the traffic flow information and point cloud information corresponding to each grid to the historical data processing device 310, and the recognition result for far-field potential obstacles is obtained through the far-field potential obstacle recognition model trained by the historical data processing device 310.
The historical data processing device 310 outputs the identification result to the vehicle-mounted terminal 330, the vehicle-mounted terminal 330 summarizes the identification result of the near-field obstacle and the identification result of the far-field potential obstacle, and the vehicle-mounted terminal 330 outputs a control command of the vehicle according to the identification result of the obstacle.
Wherein, it is understood that the laser radar may be a mechanical laser radar, a solid-state laser radar, etc., and the specific type of the laser radar is not limited herein; alternatively, the sensing device 320 may also be a lidar system composed of a plurality of lidar, and the number of lidar components in the lidar system and the specific form of the lidar system are not limited herein.
It is understood that the historical data processing device 310 may be partially or completely integrated into the sensing device 320, or may exist independently of the sensing device 320. When the historical data processing device 310 is integrated in the sensing device 320, the historical data processing device 310 may be configured to store the first N frames of point cloud data collected by the sensing device 320, train the obstacle recognition model using the first N frames of point cloud data as historical data, and further verify the accuracy of the output result of the obstacle recognition model using the historical data.
Optionally, the historical data processing device 310 may also be partially or completely integrated in the vehicle-mounted terminal 330.
It is understood that, if the sensing device 320 is a lidar system, the number of the historical data processing devices 310 may be one (that is, the lidar system corresponds to one historical data processing device 310), and optionally, the number of the historical data processing devices 310 may also be multiple (that is, each lidar of the lidar system corresponds to one historical data processing device 310, or a plurality of lidar systems corresponds to at least two historical data processing devices 310).
The method for training the obstacle recognition model provided by the embodiment of the present application is described in detail below with reference to specific embodiments. The method may be implemented in dependence on a computer program. The computer program may be integrated into the application or may run as a separate tool-like application.
Fig. 4 is a flowchart illustrating an obstacle recognition model training method. The obstacle recognition model training method may be performed by the above-described historical data processing apparatus 310. As shown in fig. 4, the obstacle recognition model training method may at least include the following steps:
s401: and processing at least one frame of point cloud data to obtain point cloud data corresponding to the far-field potential barrier.
Specifically, a near-field credible obstacle in the at least one frame of point cloud data is identified, and then point cloud data corresponding to the far-field potential obstacle is determined according to the identification result of the near-field credible obstacle.
In the case of a mechanical laser radar, the near-field recognition range is the area of a circle centred on the laser radar with the recognition distance as its radius. The determination of the near-field recognition range depends mainly on the density of the point cloud, and the near-field ranges corresponding to laser radars with different beam counts differ. The near-field range may be preset: for example, the near-field recognition range of a 32-line mechanical laser radar may be set to the area of a circle centred on the radar with a radius of 60 meters, and that of a 128-line laser radar to a circle with a radius of 100 meters. Mechanical laser radars with other beam counts may be set to ranges between 60 and 100 meters.
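The preset near-field ranges described above can be sketched as a simple lookup. The 60-meter and 100-meter values for 32-line and 128-line lidars come from the text; the linear interpolation rule for other beam counts is an assumption (the text only says such lidars fall between 60 and 100 meters).

```python
# Preset near-field recognition radii (metres) by lidar beam count.
NEAR_FIELD_RADIUS_M = {32: 60.0, 128: 100.0}

def near_field_radius(beam_count: int) -> float:
    """Return a preset near-field radius for a mechanical lidar.

    Beam counts other than 32 and 128 are mapped into the 60-100 m band by
    linear interpolation, which is one simple (assumed) choice.
    """
    if beam_count in NEAR_FIELD_RADIUS_M:
        return NEAR_FIELD_RADIUS_M[beam_count]
    lo_b, hi_b = 32, 128
    frac = (min(max(beam_count, lo_b), hi_b) - lo_b) / (hi_b - lo_b)
    return 60.0 + frac * (100.0 - 60.0)
```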
It is understood that, as an alternative implementation, the near-field recognition range may be adjusted according to the accuracy of the near-field obstacle recognition result. For example, an accuracy threshold may be preset: when the real-time recognition accuracy reaches the threshold, the near-field range is kept unchanged; when it falls below the threshold, the near-field recognition range is reduced proportionally. Adjusting the near-field range in this way preserves flexibility in obstacle recognition while ensuring recognition accuracy.
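The proportional adjustment described above might look like the following sketch; the accuracy threshold (0.9) and shrink factor (0.9) are illustrative assumptions, as the text does not give concrete values.

```python
def adjust_near_field_radius(radius_m: float, accuracy: float,
                             threshold: float = 0.9,
                             shrink: float = 0.9) -> float:
    """Shrink the near-field recognition radius proportionally when the
    real-time identification accuracy falls below a preset threshold."""
    if accuracy >= threshold:
        return radius_m          # accuracy is acceptable: keep the range
    return radius_m * shrink     # otherwise reduce the range proportionally
```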
Specifically, the identifying a near-field trusted obstacle in the at least one frame of point cloud data specifically includes: inputting at least one frame of point cloud data into a near-field obstacle detection module to obtain a near-field obstacle identification result;
the near-field obstacle detection module is obtained by training a large amount of point cloud marking information and point cloud data information based on a machine learning algorithm and is used for extracting features of the near-field obstacle so as to identify and obtain a detection model of the near-field obstacle.
The machine learning algorithm may be, for example, PointNet++.
Optionally, motion feature analysis may further be performed on the near-field obstacles, and the obtained near-field obstacles may be further screened against a rule table by combining the motion features, distance range, height information, and the like, so as to obtain near-field obstacles whose confidence reaches a preset value. This further filtering ensures the accuracy of near-field obstacle identification and reduces false identifications.
Specifically, after the near-field obstacles are identified, operations such as connection, rasterization, filtering and cropping, clustering, and tracking may be performed on the same near-field obstacle across the multi-frame point cloud data to obtain the far-field potential obstacles. These operations are described separately below.
Connection: near-field obstacles are matched across the multi-frame point cloud data to obtain the tracking sequence of the same near-field obstacle in each frame, and the tracking sequences of the same near-field obstacle are then connected to obtain the connected domain of that near-field obstacle. Connecting all near-field obstacle tracking sequences yields the overall near-field obstacle connected domain; that is, the connected domains corresponding to all trusted obstacles together constitute the overall near-field obstacle connected domain. The tracking sequence can be used to characterize the position, speed, height, and the like of the same obstacle in different frames.
Optionally, the multi-frame point cloud data is a current frame and previous M frames of point cloud data continuous with the current frame. Wherein M is a positive integer. For example, if the current frame point cloud data is the nth frame point cloud data, the multi-frame point cloud data may be the first to nth frame point cloud data.
Optionally, the multi-frame point cloud data may also be multi-frame historical point cloud data. The historical point cloud data is point cloud data of other frames including the current frame. For example, one frame of historical point cloud data can be selected for near-field obstacle identification, and then multi-frame point cloud data containing the frame of point cloud data are extracted from the historical point cloud data for near-field obstacle identification, so that a near-field obstacle connected domain is obtained.
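The frame-to-frame matching that produces a tracking sequence can be sketched with a simple greedy nearest-neighbour association over obstacle centroids. A production tracker would typically add motion prediction (e.g. a Kalman filter); the `max_dist` gate here is an assumed parameter, not one from the source.

```python
import math

def match_across_frames(frames, max_dist=2.0):
    """Greedily associate obstacle centroids across consecutive frames.

    `frames` is a list of frames; each frame is a list of (x, y) centroids.
    Returns one tracking sequence per obstacle seen in frame 0, as a list of
    (frame_index, centroid) pairs.
    """
    tracks = [[(0, c)] for c in frames[0]]
    for fi in range(1, len(frames)):
        for track in tracks:
            last = track[-1][1]
            best, best_d = None, max_dist
            for c in frames[fi]:
                d = math.dist(last, c)   # Euclidean distance to candidate
                if d < best_d:
                    best, best_d = c, d
            if best is not None:
                track.append((fi, best))
    return tracks
```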
Rasterization: the point cloud is rasterized according to the obtained near-field obstacle connected domain, and each grid is probability-coded. The probability of a grid is the probability that the point cloud in the grid is an obstacle. The probability is computed from the number of points N contained in the grid, the distance L from the grid to the connected domain, a point-count threshold N0, and weight coefficients a and b. The values of a and b depend mainly on the laser radar parameters, and they differ for laser radars with different beam counts. N0 is, for example but not limited to, 3, 4, or 5. The distance from the grid to the connected domain may be taken as the distance from the centre of the grid to the near-field obstacle connected domain, or as the average of the distances from all points in the grid to the near-field obstacle connected domain; this is not limited in the embodiments of the present application.
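Since the exact probability formula appears only as an image in the source, the sketch below uses one plausible clamped-linear form consistent with the stated variables (N, L, N0, a, b): probability rises with the point count relative to N0 and falls with the distance to the connected domain. The functional form and the parameter values are assumptions, not the patented formula.

```python
def grid_probability(n_points: int, dist_to_domain: float,
                     n0: int = 4, a: float = 0.1, b: float = 0.02) -> float:
    """Hypothetical grid probability encoding.

    Increases with the point count N relative to the threshold N0, decreases
    with the distance L to the near-field connected domain, and is clamped
    to [0, 1]. All parameter values here are illustrative assumptions.
    """
    p = a * (n_points / n0) - b * dist_to_domain
    return min(max(p, 0.0), 1.0)
```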
Filtering and cropping: first, a probability threshold may be set; point clouds whose probability is smaller than the probability threshold are filtered out, and point clouds whose probability is greater than or equal to the probability threshold are retained. Far-field cropping is then performed on the retained point clouds, i.e., the point clouds whose distance information is greater than or equal to a preset threshold are determined to be the point clouds corresponding to far-field potential obstacles, namely the cropped point clouds. The preset threshold is, for example, but not limited to, 85 meters. The probability threshold may be, for example, but not limited to, 50%, 80%, etc.
The method is not limited to filtering first and then far-field cropping; in specific implementations, the point cloud data may be cropped in the far field first and then filtered.
Clustering: the point clouds whose probability is greater than or equal to the probability threshold and whose distance information is greater than or equal to the preset threshold are clustered to generate far-field potential obstacles. Specifically, the DBSCAN method may be used for clustering. DBSCAN is a density-based clustering algorithm, which assumes that classes are determined by how closely the samples are distributed: closely connected samples are grouped into one cluster, and dividing all groups of closely connected samples into different classes yields the final clustering result.
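As a self-contained illustration of the density-based idea described above, the following is a minimal 2-D DBSCAN sketch; a real pipeline would use an optimized library implementation (e.g. scikit-learn's `DBSCAN`) with a spatial index. The `eps` and `min_pts` values are illustrative.

```python
from collections import deque

def dbscan(points, eps=1.5, min_pts=3):
    """Minimal density-based clustering in the spirit of DBSCAN.

    Returns one label per point: 0..k-1 for clusters, -1 for noise.
    Neighborhood queries are brute-force, which is fine for a sketch.
    """
    def neighbors(i):
        px, py = points[i]
        return [j for j, (qx, qy) in enumerate(points)
                if (px - qx) ** 2 + (py - qy) ** 2 <= eps ** 2]

    labels = [None] * len(points)
    cluster = 0
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        seed = neighbors(i)
        if len(seed) < min_pts:
            labels[i] = -1                      # noise (may be claimed as a border point later)
            continue
        labels[i] = cluster
        queue = deque(seed)
        while queue:                            # expand the cluster by density-reachability
            j = queue.popleft()
            if labels[j] == -1:
                labels[j] = cluster             # border point: joins but does not expand
            if labels[j] is not None:
                continue
            labels[j] = cluster
            more = neighbors(j)
            if len(more) >= min_pts:            # core point: keep expanding
                queue.extend(more)
        cluster += 1
    return labels

# Two well-separated far-field blobs plus one isolated noise point.
pts = [(0, 0), (1, 0), (0, 1), (1, 1), (10, 10), (11, 10), (10, 11), (11, 11), (50, 50)]
labels = dbscan(pts)
```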
In some possible embodiments, before S401, the method may further include: at least one frame of point cloud data is obtained.
It is understood that when a laser beam emitted by the laser radar strikes the surface of an object and the reflected light is received by the receiver, the received laser signal is recorded in the form of a point, thereby forming point cloud data. The point cloud data may include spatial position coordinates, timestamps, and echo intensity information. The echo intensity information is the echo reflection intensity collected by the laser radar receiving device, and is related to the surface material, roughness, and incident angle of the target, as well as the emission energy of the instrument and the laser wavelength.
Wherein, the point cloud data obtained after completing one scanning period is a frame of point cloud data. Taking a mechanical radar as an example, the mechanical laser radar completes scanning of the surrounding environment in a mechanical rotation mode, and the time of one rotation is the duration of a point cloud data frame.
S402: and acquiring tracking information of the far-field potential obstacle, and generating a tracking sequence according to the tracking information of the far-field potential obstacle.
Optionally, tracking information of the far-field potential obstacle may be determined according to the current frame and the previous M frames of point cloud data continuous with the current frame, where M is a positive integer.
Optionally, tracking information of the far-field potential obstacle can be determined according to the multi-frame historical point cloud data. The historical point cloud data is point cloud data of other frames before the current frame.
Specifically, obstacles in different frames of point cloud data that may be the same obstacle are corresponding far-field potential obstacles. The Hungarian algorithm may be used to match corresponding far-field potential obstacles across the multi-frame point cloud data. The Hungarian algorithm is a graph-theory algorithm for finding a maximum matching. It treats each obstacle as an endpoint in a graph, with endpoints in the same frame grouped together, and matches endpoints between different groups. The position, speed, size, and other information of the obstacle serve as endpoint weights, so that similar endpoints are more likely to be matched together.
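The matching described above amounts to an assignment problem. To keep the example self-contained, the tiny instance below is solved exactly by enumerating permutations rather than by the Hungarian algorithm itself; a real tracker would use a Hungarian-algorithm implementation such as `scipy.optimize.linear_sum_assignment`. The cost mixing position and speed differences is an illustrative stand-in for the endpoint weights.

```python
import math
from itertools import permutations

def match_obstacles(prev, curr):
    """Associate obstacles across two frames by minimizing total cost.

    Each obstacle is (x, y, speed); similar endpoints get low pairwise
    cost and are therefore more likely to be matched together. Assumes
    both frames observe the same number of obstacles. Exact enumeration
    is only viable for tiny instances; use the Hungarian algorithm for
    real workloads.
    """
    def cost(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1]) + abs(a[2] - b[2])

    best, best_total = None, float("inf")
    for perm in permutations(range(len(curr))):
        total = sum(cost(prev[i], curr[j]) for i, j in enumerate(perm))
        if total < best_total:
            best, best_total = perm, total
    return {i: j for i, j in enumerate(best)}

# Frame t has two tracked obstacles; frame t+1 observes them slightly moved.
prev = [(0.0, 0.0, 5.0), (20.0, 0.0, 12.0)]
curr = [(19.5, 0.5, 12.2), (0.4, 0.1, 5.1)]   # observation order swapped on purpose
assignment = match_obstacles(prev, curr)
```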
In particular, the tracking sequence may be used to characterize the position, velocity, height, etc. of the same obstacle in different frames.
S403: and determining the traffic flow area according to the tracking sequence of the potential obstacles in the far field.
Specifically, since each moving (having a speed) far-field potential obstacle can be regarded as a vehicle in motion, the traffic flow area can be obtained by connecting the tracking sequences corresponding to the far-field potential obstacles.
S404: and rasterizing the traffic flow area, and determining traffic flow information and point cloud information corresponding to each grid.
The traffic flow information is obtained according to the motion characteristics of the far-field potential obstacles corresponding to the grids, and the point cloud information is obtained according to the point cloud data corresponding to the grids.
Specifically, the traffic flow region is subjected to grid mapping. Specifically, the motion characteristics (such as speed, frequency, etc.) of the obstacle in each grid may be associated with the grid, and these characteristics are filled in the grid using statistical results, for example, a maximum value, an average value, or a statistical result within a preset time range may be selected. In the embodiment of the present application, information corresponding to a grid including obstacle motion characteristics may be traffic flow information. Taking the motion characteristic as an example of the speed, the maximum value may be a maximum value of the speed of the obstacle in the plurality of frames of continuous point cloud data (in a certain period of time), and the average value may be an average value of the speeds of the obstacle in the plurality of frames of continuous point cloud data (in a certain period of time). The statistical result within the preset time range may be a calculation result for the motion characteristic in a specified certain time period.
Specifically, the point cloud information is grid-mapped. Specifically, the grid may be associated with features of the point cloud data included in each grid (e.g., height of the point cloud, reflection intensity of the point cloud, number of the point clouds, relationship between adjacent frames, etc.), and the data may be filled into the grid. In the embodiment of the application, information corresponding to the grids containing the characteristics of the point cloud data is used as point cloud information.
After the traffic flow information and the point cloud information are associated with the grid, the grid at the moment simultaneously has the traffic flow information and the point cloud information, and a spatial relationship exists in the arrangement of the grid.
According to the embodiment of the application, the traffic flow information can be associated with the grid firstly, then the point cloud information is associated with the grid containing the traffic flow information, and the point cloud information can also be associated with the grid firstly, and then the traffic flow information is associated with the grid containing the point cloud information. That is, the order of association between the traffic flow information and the point cloud information with the grid is not limited.
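The association of traffic flow information and point cloud information with grids in S404 can be sketched as follows. The field names, the particular statistics chosen (count, maximum height, mean intensity, maximum speed), and the cell size are illustrative, not taken from the patent.

```python
def build_grid_features(points, obstacles, cell=2.0):
    """Fill each grid cell with point cloud and traffic flow statistics.

    `points` are (x, y, z, intensity) lidar returns; `obstacles` are
    (x, y, speed) tracked far-field potential obstacles. Per cell we keep
    the point count, max height, and mean intensity (point cloud
    information) plus the max speed (traffic flow information).
    """
    grids = {}

    def cell_of(x, y):
        return (int(x // cell), int(y // cell))

    for x, y, z, inten in points:
        g = grids.setdefault(cell_of(x, y),
                             {"count": 0, "max_height": 0.0,
                              "intensity": [], "max_speed": 0.0})
        g["count"] += 1
        g["max_height"] = max(g["max_height"], z)
        g["intensity"].append(inten)
    for x, y, speed in obstacles:
        g = grids.get(cell_of(x, y))
        if g is not None:                       # only cells that actually contain points
            g["max_speed"] = max(g["max_speed"], speed)
    for g in grids.values():                    # reduce the raw intensity list to a mean
        g["intensity"] = sum(g["intensity"]) / len(g["intensity"])
    return grids

points = [(1.0, 1.0, 0.5, 10.0), (1.5, 1.2, 1.6, 30.0), (9.0, 9.0, 0.2, 5.0)]
obstacles = [(1.2, 1.1, 15.0)]
grids = build_grid_features(points, obstacles)
```

As noted in the text, the order of association does not matter: the same cell dictionary ends up holding both kinds of information.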
S405: and acquiring mark information corresponding to each grid.
Specifically, the marking information corresponding to a grid may include an obstacle category or the background. The background is an object that does not affect driving, i.e., a non-obstacle. An obstacle category denotes an object that affects driving, such as a pedestrian or a vehicle.
It is to be understood that, since the obstacles involved in the embodiments of the present application may be moving, the currently identified far-field potential obstacle may, over time, have at one time been a trusted obstacle in the near field.
The method for acquiring the marking information of the far-field potential obstacle according to its corresponding point cloud data specifically includes: acquiring the point cloud data corresponding to the far-field potential obstacle; acquiring M frames of point cloud data before the point cloud data frame corresponding to the far-field potential obstacle, and acquiring a near-field obstacle set in the M frames of point cloud data, where the near-field obstacle set is the set of different near-field obstacles identified in the M frames of point cloud data; and matching the point cloud data corresponding to the far-field potential obstacle with the near-field obstacle point cloud data in the near-field obstacle set, and when the matching degree reaches a preset value, taking the marking information corresponding to the near-field obstacle as the marking information of the far-field potential obstacle. That is, the recognition result of a near-field obstacle that was once recognized can be reused as the current far-field potential obstacle recognition result, without manually adding marking information to the far-field potential obstacle.
If no match can be found for the point cloud, marking information can be added to it manually.
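A minimal sketch of the match-or-mark-manually logic above, assuming the matching degree is approximated by the distance between the far-field cluster centroid and the centroids of previously recognized near-field obstacles. The patent does not specify its point cloud matching criterion, so the centroid test and the threshold are illustrative stand-ins.

```python
import math

def transfer_label(far_points, near_obstacles, max_dist=3.0):
    """Reuse a previously recognized near-field obstacle's label.

    `far_points` is the (x, y) point cloud of the current far-field
    potential obstacle; `near_obstacles` maps labels to representative
    centroids recognized in the previous M frames. Returns the reused
    label, or None when manual marking is required.
    """
    cx = sum(p[0] for p in far_points) / len(far_points)
    cy = sum(p[1] for p in far_points) / len(far_points)
    best_label, best_d = None, max_dist
    for label, (ox, oy) in near_obstacles.items():
        d = math.hypot(cx - ox, cy - oy)
        if d < best_d:                          # closest known obstacle within the threshold
            best_label, best_d = label, d
    return best_label

near_set = {"vehicle": (100.0, 3.0), "pedestrian": (95.0, -4.0)}
cluster = [(99.0, 2.5), (101.0, 3.5), (100.0, 3.0)]
label = transfer_label(cluster, near_set)       # reuses the "vehicle" label
```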
Specifically, the user may add the marking information to the point cloud data corresponding to the far-field potential obstacle (i.e., the potential obstacle in the far field). Wherein the marking information may be used to characterize whether an obstacle is present. Further, the marking information may be used to characterize a specific type of obstacle, and the marking information may also be used to characterize a motion state of the specific obstacle.
In addition, the method is not limited to marking the far-field potential obstacle, and in specific implementation, a user can add a mark to the near-field obstacle, so that the recognition result of the near-field obstacle is corrected, and the accuracy of near-field obstacle detection is improved.
S406: and taking the marking information corresponding to the multiple groups of grids, the point cloud information corresponding to the grids and the traffic flow information as sample data, and training the obstacle recognition model.
Specifically, after the marking information of the far-field potential obstacle is obtained, a feature extraction function can be constructed, and point cloud information and traffic flow information are extracted from the obstacle to obtain training data.
The training data may be input data, and the label information may be output data. The input data and the output data may constitute a set of sample data. The far-field potential obstacle recognition model can be trained by adopting multiple groups of sample data.
Optionally, the embodiment of the present application may use a convolutional network to train the far-field potential obstacle recognition model. The convolutional network can perform classification: a plurality of fully-connected layers are connected after the convolutional layers, the feature map generated by the convolutional layers is mapped into a feature vector and input into the fully-connected layers, and the classification probability is finally obtained.
Optionally, the embodiment of the present application may use a fully convolutional network to train the far-field potential obstacle recognition model. A fully convolutional network can accept input of any size and classify at the pixel level, realizing semantic-level segmentation; a deconvolution layer upsamples the feature map of the last convolutional layer so that the final output has the same size as the input, thereby generating a prediction for each pixel.
Specifically, in the embodiment of the present application, the training of the obstacle identification model by using the marking information corresponding to the multiple groups of grids and the point cloud information and traffic flow information corresponding to the grids as sample data specifically includes: and rasterizing the obtained travelable area, projecting the point cloud information and the traffic flow information to construct a specific data structure, and marking the information of the far-field potential barrier on the grid to form sample data. And training a full convolution network by using the sample data to obtain a semantic classification model of the grid. The model can output classification probability to the grids and carry out probability combination on the grids occupied by the obstacles, so that the classification result of the obstacles is obtained.
Specifically, the output result of the far-field potential obstacle identification model is the type with the highest probability in the types of objects that may be included in each part of the point cloud data. That is, there may be multiple portions of the point cloud data that each contain an obstacle or background. And outputting the result of the obstacle identification model that the object type corresponding to each part is the obstacle or the background.
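The probability combination over the grids occupied by one obstacle, mentioned in the model description above, can be sketched as follows. Averaging the per-grid class distributions is one plausible merge rule; the patent does not fix the combination operator, and the class names are illustrative.

```python
def merge_grid_probabilities(grid_probs):
    """Combine per-grid class probabilities into one obstacle classification.

    The model outputs a class-probability distribution per grid; the grids
    occupied by the same obstacle are merged by averaging, and the class
    with the highest merged probability is the classification result.
    """
    classes = grid_probs[0].keys()
    merged = {c: sum(g[c] for g in grid_probs) / len(grid_probs) for c in classes}
    return max(merged, key=merged.get), merged

# Three grids occupied by the same far-field object.
grids = [
    {"background": 0.2, "vehicle": 0.7, "pedestrian": 0.1},
    {"background": 0.3, "vehicle": 0.6, "pedestrian": 0.1},
    {"background": 0.1, "vehicle": 0.8, "pedestrian": 0.1},
]
label, merged = merge_grid_probabilities(grids)
```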
Optionally, the obstacle recognition model may also be used for detecting near-field obstacles in addition to far-field potential obstacles.
In one or more embodiments of the present application, point cloud data corresponding to a far-field potential obstacle may be obtained by performing far-field cropping on the point cloud data, a traffic flow area is determined according to the point cloud data, and the traffic flow area is subjected to rasterization to obtain traffic flow information and point cloud information corresponding to each grid. And meanwhile, the mark information corresponding to each grid is obtained. And obtaining the obstacle recognition model through a large number of training models of traffic flow information and point cloud information of grids with known mark information. The obstacle identification model can improve the accuracy and detection range of far-field potential obstacle detection. Under the scene of automatic driving, the safety of the system can be improved.
Fig. 5 is a flowchart illustrating another obstacle recognition model training method provided in an embodiment of the present application. The obstacle recognition model training method may be performed by the above-described historical data processing apparatus 310. As shown in fig. 5, the obstacle recognition model training method may at least include the following steps:
S501: At least one frame of point cloud data is obtained.
It is understood that when a laser beam emitted by the laser radar strikes the surface of an object and the reflected light is received by the receiver, the received laser signal is recorded in the form of a point, thereby forming point cloud data. The point cloud data may include spatial position coordinates, timestamps, and echo intensity information. The echo intensity information is the echo reflection intensity collected by the laser radar receiving device, and is related to the surface material, roughness, and incident angle of the target, as well as the emission energy of the instrument and the laser wavelength.
Wherein, the point cloud data obtained after completing one scanning period is a frame of point cloud data. Taking a mechanical radar as an example, the mechanical laser radar completes scanning of the surrounding environment in a mechanical rotation mode, and the time of one rotation is the duration of a point cloud data frame.
S502: and determining a travelable area.
Specifically, the travelable region is a region where the vehicle can travel, generally the region between road edges and between lane lines. Travelable-area detection mainly provides path-planning assistance for automatic driving; it can detect the whole road surface or extract only partial road information. The travelable area can be constructed from road edge information and lane line information. The road edge may be a roadside kerb, and a lane line may be a line (solid or dotted) used on the road surface to separate different lanes. Specifically, the road edge and the lane line can be identified through an image recognition algorithm, and the travelable area is then determined according to them. For example, image information can be acquired through a device such as a driving recorder and sent to the vehicle-mounted terminal for identification.
Optionally, the travelable region may be determined by extracting feature information such as a road edge or a roadside marker obstacle (e.g., a traffic light) according to the acquired point cloud data by using a machine learning algorithm, and identifying the travelable region according to the extracted feature information.
Optionally, the travelable region may also be determined according to the height information in the acquired point cloud information, and the travelable region may be determined according to the ground point information.
It is to be understood that the present embodiment does not limit the method of determining the travelable region.
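The height-based alternative above (using ground point information) can be sketched as a simple per-cell test: cells that contain only near-ground points are marked travelable. The ground height, tolerance, and cell size below are illustrative assumptions.

```python
def drivable_from_ground(points, ground_z=0.0, tol=0.15, cell=1.0):
    """Estimate a travelable region from ground points.

    `points` are (x, y, z). A cell is travelable only if every point it
    contains lies within `tol` of the assumed ground height; a curb or
    obstacle point anywhere in the cell marks it non-travelable.
    """
    cells = {}
    for x, y, z in points:
        key = (int(x // cell), int(y // cell))
        is_ground = abs(z - ground_z) <= tol
        cells[key] = cells.get(key, True) and is_ground
    return {key for key, ok in cells.items() if ok}

pts = [(0.2, 0.3, 0.02), (0.7, 0.1, -0.05),   # flat ground cell
       (5.5, 0.4, 0.01), (5.6, 0.5, 1.2)]     # cell containing a curb/obstacle
drivable = drivable_from_ground(pts)
```

A real implementation would first fit the ground plane (rather than assume a fixed height) and fuse this with road-edge and lane-line cues, as the surrounding text describes.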
S503: and determining far-field point cloud data in at least one frame of point cloud data according to the distance information of the point cloud data.
Specifically, the point cloud data includes distance information. And the far-field point cloud data can be screened out according to the distance information of the point cloud data. The far-field point cloud is the point cloud with the distance larger than a preset threshold value. The preset threshold is, for example, but not limited to, 85 meters.
The method is not limited to determining the travelable area first and then the far-field point cloud data; in specific implementations, the far-field point cloud data may be determined first and the travelable area afterwards. That is to say, the embodiment of the present application does not limit the order in which S502 and S503 are performed.
S504: and filtering the far-field point cloud data in the at least one frame of point cloud data according to the travelable area to obtain the far-field point cloud data in the travelable area.
Specifically, based on the travelable region determined in S502, the far-field point cloud data may be further filtered to obtain the far-field point cloud data in the travelable region. Since obstacles outside the travelable area do not affect the driving of the vehicle, the obstacle recognition method retains and analyzes only the far-field point cloud data in the travelable area, which reduces interference information during processing, reduces the amount of calculation, and improves obstacle recognition efficiency.
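Steps S503 and S504 together amount to a two-stage mask, which can be sketched as follows. For simplicity the travelable region is stood in for by an axis-aligned bounding box; the 85-meter threshold is the example value from the text.

```python
def far_field_in_drivable(points, drivable, far_threshold=85.0):
    """Keep only far-field points that fall inside the travelable area.

    `points` are (x, y, z) lidar returns; `drivable` is an axis-aligned
    box (xmin, xmax, ymin, ymax) standing in for the travelable region
    of S502. A point is far-field when its range exceeds the threshold.
    """
    xmin, xmax, ymin, ymax = drivable
    kept = []
    for x, y, z in points:
        if (x * x + y * y) ** 0.5 < far_threshold:
            continue                            # near-field: handled by a separate pipeline
        if xmin <= x <= xmax and ymin <= y <= ymax:
            kept.append((x, y, z))
    return kept

pts = [(90.0, 0.0, 0.5),    # far-field, inside the lane corridor
       (90.0, 30.0, 0.5),   # far-field but off the road
       (40.0, 0.0, 0.5)]    # near-field
kept = far_field_in_drivable(pts, drivable=(0.0, 200.0, -5.0, 5.0))
```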
S505: and clustering the far-field point cloud data in the travelable area, and determining the point cloud data corresponding to the far-field potential barrier in the travelable area.
Specifically, the DBSCAN method may be used for clustering. DBSCAN is a density-based clustering algorithm, which assumes that classes are determined by how closely the samples are distributed: closely connected samples are grouped into one cluster, and dividing all groups of closely connected samples into different classes yields the final clustering result.
S506: and acquiring tracking information of the far-field potential obstacle, and generating a tracking sequence according to the tracking information of the far-field potential obstacle.
Specifically, S506 is identical to S402, and is not described herein again.
S507: and determining the traffic flow area according to the tracking sequence of the potential obstacles in the far field.
Specifically, S507 is identical to S403, and is not described herein again.
S508: and rasterizing the traffic flow area, and determining traffic flow information and point cloud information corresponding to each grid.
Specifically, S508 is identical to S404, and is not described herein again.
S509: and acquiring mark information corresponding to each grid.
Specifically, S509 corresponds to S405, and is not described herein again.
S510: and taking the marking information corresponding to the multiple groups of grids, the point cloud information corresponding to the grids and the traffic flow information as sample data, and training the obstacle recognition model.
Specifically, S510 is identical to S406, and is not described herein again.
In one or more embodiments of the present application, point cloud data corresponding to a far-field potential obstacle may be obtained by performing far-field cropping on the point cloud data, a traffic flow area is determined according to the point cloud data, and the traffic flow area is subjected to rasterization to obtain traffic flow information and point cloud information corresponding to each grid. And meanwhile, the mark information corresponding to each grid is obtained. And obtaining the obstacle recognition model through a large number of training models of traffic flow information and point cloud information of grids with known mark information. The obstacle identification model can improve the accuracy and detection range of far-field potential obstacle detection. Under the scene of automatic driving, the safety of the system can be improved.
According to the embodiment of the application, the obstacle can be recognized based on the obstacle recognition model obtained by training the obstacle recognition model training method provided by the embodiment of fig. 4 or 5, so that the obstacle detection accuracy is improved, and especially the accuracy of the far-field potential obstacle detection is improved.
Next, the obstacle recognition method provided in the embodiment of the present application is described with reference to the obstacle recognition model obtained by training the obstacle recognition model training method provided in the embodiments of fig. 4 and 5.
The obstacle identification method provided by the embodiment of the present application is described in detail below with reference to specific embodiments. The method may be implemented by a computer program running on a von Neumann-architecture obstacle recognition device. The computer program may be integrated into an application or run as a separate tool-type application.
Fig. 6 illustrates a flow chart of an obstacle identification method. The obstacle recognition method may be performed by the above-described history data processing apparatus 310. As shown in fig. 6, the obstacle identification method may include at least the following steps:
S601: And processing at least one frame of point cloud data to obtain point cloud data corresponding to the far-field potential obstacle.
Specifically, S601 is identical to S401, and is not described herein again.
S602: and acquiring tracking information of the far-field potential obstacle, and generating a tracking sequence according to the tracking information of the far-field potential obstacle.
Specifically, S602 is identical to S402, and is not described herein again.
S603: and determining the traffic flow area according to the tracking sequence of the potential obstacles in the far field.
Specifically, S603 is identical to S403, and is not described herein again.
S604: and rasterizing the traffic flow area, and determining traffic flow information and point cloud information corresponding to each grid.
Specifically, S604 is identical to S404, and is not described herein again.
S605: and inputting the traffic flow information and the point cloud information corresponding to each grid into the obstacle identification model, and outputting an identification result.
Specifically, the far-field potential obstacle recognition model may be an obstacle recognition model trained in the embodiment of fig. 4 or fig. 5.
In a possible implementation, the traffic flow information and the point cloud information corresponding to the grid may be input into the trained obstacle recognition model. The obstacle recognition model may output a recognition result, i.e., whether the category corresponding to the grid is an obstacle or the background.
In one or more embodiments of the present application, point cloud data corresponding to a far-field potential obstacle may be obtained by performing far-field cropping on the point cloud data, a traffic flow area is determined according to the point cloud data, and the traffic flow area is subjected to rasterization to obtain traffic flow information and point cloud information corresponding to each grid. And meanwhile, the mark information corresponding to each grid is obtained. And obtaining the obstacle recognition model through a large number of training models of traffic flow information and point cloud information of grids with known mark information. The obstacle is detected by using the far-field potential obstacle recognition model in the subsequent automatic driving process, so that the accuracy and the detection range of the far-field potential obstacle detection can be improved. The embodiment of the application maps traffic flow information and point cloud information into the grid in the scene of automatic driving, then uses the deep learning model to extract characteristics of the new structure, and further detects the obstacle, makes full use of the point cloud information and the traffic flow information, can effectively detect the background and the obstacle, and especially can greatly improve the detection effect of the obstacle under the condition that the background disturbance is very large, and the robustness is good.
The following are embodiments of the apparatus of the present application that may be used to perform embodiments of the method of the present application. For details which are not disclosed in the embodiments of the apparatus of the present application, reference is made to the embodiments of the method of the present application.
Referring to fig. 7, a schematic structural diagram of a history data processing apparatus according to an exemplary embodiment of the present application is shown. The historical data processing apparatus may be implemented by software, hardware, or a combination of both. The history data processing apparatus 70 includes: processing module 710, acquisition module 720, determination module 730, rasterization module 740, marking module 750, and training module 760. Wherein:
the processing module 710 is configured to process at least one frame of point cloud data to obtain point cloud data corresponding to a far-field potential obstacle; the far-field potential barrier is a barrier outside a preset range;
an obtaining module 720, configured to obtain tracking information of the far-field potential obstacle, and generate a tracking sequence according to the tracking information of the far-field potential obstacle;
a determining module 730, configured to determine a traffic flow region according to the tracking sequence of the far-field potential obstacle;
the rasterizing module 740 is configured to perform rasterization processing on the traffic flow region, and determine traffic flow information and point cloud information corresponding to each grid; the traffic flow information is obtained according to the motion characteristics of the far-field potential obstacles corresponding to the grids, and the point cloud information is obtained according to the point cloud data corresponding to the grids;
a marking module 750, configured to obtain marking information corresponding to each grid;
and the training module 760 is configured to train the obstacle identification model by using the label information corresponding to the multiple sets of grids and the point cloud information and the traffic information corresponding to the grids as sample data.
In some possible embodiments, the obtaining module 720 is specifically configured to: and determining tracking information of the far-field potential barrier according to a current frame and previous M frames of point cloud data continuous with the current frame, wherein M is a positive integer.
In some possible embodiments, the obtaining module 720 is specifically configured to: and determining the tracking information of the far-field potential obstacle according to the multi-frame historical point cloud data.
In some possible embodiments, the processing module 710 includes a determining unit, a processing unit; wherein:
a determination unit configured to determine a travelable region;
and the processing unit is used for processing at least one frame of point cloud data to obtain point cloud data corresponding to the far-field potential barrier in the travelable area.
In some possible embodiments, the point cloud data includes distance information;
the processing unit is specifically used for determining far-field point cloud data in the at least one frame of point cloud data according to the distance information of the point cloud data;
filtering far-field point cloud data in the at least one frame of point cloud data according to the travelable area to obtain the far-field point cloud data in the travelable area;
and clustering the far-field point cloud data in the travelable area, and determining the point cloud data corresponding to the far-field potential barrier in the travelable area.
In some possible embodiments, the obtaining module 720 is further configured to obtain at least one frame of point cloud data before the processing module 710 processes the at least one frame of point cloud data.
In some possible embodiments, the traffic information corresponding to the grid is a motion feature of a far-field potential obstacle included in the point cloud data corresponding to the grid.
In some possible embodiments, the grid-corresponding point cloud information includes at least one of: height of the point cloud, reflection intensity of the point cloud, number of the point clouds.
In some possible embodiments, the marking information is a background or an obstacle name.
It should be noted that, when the history data processing apparatus provided in the foregoing embodiment executes the obstacle recognition model training method, only the division of the above functional modules is taken as an example, and in practical applications, the above functions may be distributed by different functional modules according to needs, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the above described functions. In addition, the historical data processing device provided by the above embodiment and the embodiment of the obstacle recognition model training method belong to the same concept, and details of the implementation process are shown in the embodiment of the method, which are not described herein again.
The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
In one or more embodiments of the present application, point cloud data corresponding to a far-field potential obstacle may be obtained by performing far-field cropping on the point cloud data, a traffic flow area is determined according to the point cloud data, and the traffic flow area is subjected to rasterization to obtain traffic flow information and point cloud information corresponding to each grid. And meanwhile, the mark information corresponding to each grid is obtained. And obtaining the obstacle recognition model through a large number of training models of traffic flow information and point cloud information of grids with known mark information. The obstacle identification model can improve the accuracy and detection range of far-field potential obstacle detection. Under the scene of automatic driving, the safety of the system can be improved.
Referring to fig. 8, a schematic structural diagram of a history data processing apparatus according to an exemplary embodiment of the present application is shown. The historical data processing means may be implemented by software, hardware or a combination of both. The history data processing apparatus 80 includes: processing module 810, acquisition module 820, determination module 830, rasterization module 840, and identification module 850. Wherein:
the processing module 810 is configured to process at least one frame of point cloud data to obtain point cloud data corresponding to a far-field potential obstacle; the far-field potential obstacle is an obstacle outside a preset range;
an obtaining module 820, configured to obtain tracking information of the far-field potential obstacle, and generate a tracking sequence according to the tracking information of the far-field potential obstacle;
a determining module 830, configured to determine a traffic flow region according to the tracking sequence of the far-field potential obstacle;
the rasterizing module 840 is used for rasterizing the traffic flow region and determining traffic flow information and point cloud information corresponding to each grid; the traffic flow information is obtained according to the motion characteristics of the far-field potential obstacles corresponding to the grids, and the point cloud information is obtained according to the point cloud data corresponding to the grids;
the identification module 850 is configured to input the point cloud data, the traffic flow information, and the point cloud information corresponding to each grid into an obstacle recognition model and to output a recognition result; the obstacle recognition model is obtained by training with the historical data processing apparatus in the embodiment of fig. 7 of the present application.
It should be noted that when the historical data processing apparatus provided in the above embodiment executes the obstacle recognition method, the division into the above functional modules is merely illustrative; in practical applications, the above functions may be assigned to different functional modules as needed, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above. In addition, the historical data processing apparatus provided in the above embodiment and the embodiment of the obstacle recognition method belong to the same concept; details of the implementation process can be found in the method embodiment and are not repeated here.
The serial numbers of the above embodiments of the present application are for description only and do not indicate any ranking of the embodiments' merits.
In one or more embodiments of the present application, point cloud data corresponding to a far-field potential obstacle may be obtained by performing far-field cropping on the point cloud data; a traffic flow area is then determined according to the point cloud data, and the traffic flow area is rasterized to obtain the traffic flow information and point cloud information corresponding to each grid. The marking information corresponding to each grid is also obtained. The obstacle recognition model is obtained by training on a large amount of traffic flow information and point cloud information from grids whose marking information is known. Using this far-field potential obstacle recognition model to detect obstacles in the subsequent automatic driving process can improve the accuracy and detection range of far-field potential obstacle detection. In the embodiments of the present application, in an automatic driving scenario, the traffic flow information and point cloud information are mapped onto the grids, a deep learning model is used to extract features from this new structure, and the obstacles are then detected; the point cloud information and traffic flow information are thus fully utilized, which greatly improves the obstacle detection effect.
Referring to fig. 9, a schematic structural diagram of another historical data processing device is provided in the embodiment of the present application. As shown in fig. 9, the history data processing apparatus 90 may include: at least one processor 901, at least one network interface 904, a user interface 903, memory 905, at least one communication bus 902.
Wherein a communication bus 902 is used to enable connective communication between these components.
The user interface 903 may include a display screen (Display) and a sensor interface; optionally, the user interface 903 may further include a standard wired interface and a wireless interface.
The network interface 904 may optionally include a standard wired interface, a wireless interface (e.g., WI-FI interface), among others.
The processor 901 may include one or more processing cores. The processor 901 connects various parts of the entire historical data processing apparatus 90 using various interfaces and lines, and executes various functions of the historical data processing apparatus 90 and processes data by running or executing the instructions, programs, code sets, or instruction sets stored in the memory 905 and calling the data stored in the memory 905. Optionally, the processor 901 may be implemented in at least one hardware form of Digital Signal Processing (DSP), Field-Programmable Gate Array (FPGA), or Programmable Logic Array (PLA). The processor 901 may integrate one or a combination of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a modem, and the like. The CPU mainly handles the operating system, user interface, application programs, and the like; the GPU is responsible for rendering and drawing the content to be displayed by the display screen; and the modem handles wireless communication. It can be understood that the modem may also not be integrated into the processor 901 but implemented by a separate chip.
The memory 905 may include a Random Access Memory (RAM) or a Read-Only Memory (ROM). Optionally, the memory 905 includes a non-transitory computer-readable medium. The memory 905 may be used to store instructions, programs, code, code sets, or instruction sets. The memory 905 may include a program storage area and a data storage area, where the program storage area may store instructions for implementing an operating system, instructions for at least one function (such as a touch function, a sound playing function, or an image playing function), instructions for implementing the above method embodiments, and the like; and the data storage area may store the data involved in the above method embodiments. Optionally, the memory 905 may also be at least one storage device located remotely from the processor 901. As shown in fig. 9, the memory 905, as a computer storage medium, may include an operating system, a network communication module, a user interface module, and an obstacle recognition model training application program.
In the historical data processing apparatus 90 shown in fig. 9, the user interface 903 mainly provides an input interface for the user and acquires the data input by the user; and the processor 901 may be configured to call the obstacle recognition model training application stored in the memory 905 and specifically perform the following operations:
processing at least one frame of point cloud data to obtain point cloud data corresponding to a far-field potential obstacle; the far-field potential obstacle is an obstacle outside a preset range;
acquiring tracking information of the far-field potential obstacle, and generating a tracking sequence according to the tracking information of the far-field potential obstacle; determining a traffic flow area according to the tracking sequence of the far-field potential obstacles;
rasterizing the traffic flow area, and determining traffic flow information and point cloud information corresponding to each grid; the traffic flow information is obtained according to the motion characteristics of the far-field potential obstacles corresponding to the grids, and the point cloud information is obtained according to the point cloud data corresponding to the grids;
acquiring mark information corresponding to each grid;
and taking the marking information corresponding to multiple groups of grids, together with the point cloud information and traffic flow information corresponding to the grids, as sample data to train an obstacle recognition model.
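The rasterization step in the operations above can be sketched as follows. This is a minimal illustration rather than the claimed implementation: the 2 m grid size, the rectangular traffic flow area, and the (x, y, z, intensity) point layout are assumptions made for the sketch, while the per-grid features are the three named in the embodiments (point cloud height, reflection intensity, and number of points).

```python
import numpy as np

def rasterize_traffic_flow_area(points, area_min, area_max, cell=2.0):
    """Map point cloud rows (x, y, z, intensity) onto a 2-D grid over the
    traffic flow area and compute per-grid point cloud features:
    max height, mean reflection intensity, and number of points."""
    nx = int(np.ceil((area_max[0] - area_min[0]) / cell))
    ny = int(np.ceil((area_max[1] - area_min[1]) / cell))
    feats = np.zeros((nx, ny, 3))                       # height, intensity, count
    for x, y, z, inten in points:
        i = int((x - area_min[0]) // cell)
        j = int((y - area_min[1]) // cell)
        if 0 <= i < nx and 0 <= j < ny:
            feats[i, j, 0] = max(feats[i, j, 0], z)     # point cloud height
            feats[i, j, 1] += inten                     # summed intensity
            feats[i, j, 2] += 1                         # point count
    nonzero = feats[:, :, 2] > 0
    feats[nonzero, 1] /= feats[nonzero, 2]              # mean intensity per grid
    return feats
```

Each grid's feature vector can then be concatenated with its traffic flow information and paired with its marking information to form one training sample.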
In some possible embodiments, the processor 901 specifically performs when acquiring the tracking information of the far-field potential obstacle: and determining tracking information of the far-field potential barrier according to a current frame and previous M frames of point cloud data continuous with the current frame, wherein M is a positive integer.
In some possible embodiments, the processor 901 specifically performs when acquiring the tracking information of the far-field potential obstacle: and determining the tracking information of the far-field potential obstacle according to the multi-frame historical point cloud data.
In some possible embodiments, the processor 901 specifically executes the following steps when processing at least one frame of point cloud data to obtain point cloud data corresponding to a far-field potential obstacle:
determining a travelable area;
and processing at least one frame of point cloud data to obtain point cloud data corresponding to the far-field potential obstacle in the travelable area.
In some possible embodiments, the point cloud data includes distance information;
the processor 901 processes at least one frame of point cloud data, and specifically executes the following steps when obtaining point cloud data corresponding to a far-field potential obstacle in the travelable area:
determining far-field point cloud data in the at least one frame of point cloud data according to the distance information of the point cloud data;
filtering far-field point cloud data in the at least one frame of point cloud data according to the travelable area to obtain the far-field point cloud data in the travelable area;
and clustering the far-field point cloud data in the travelable area, and determining the point cloud data corresponding to the far-field potential obstacle in the travelable area.
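A hedged sketch of the three steps above (far-field selection by distance, filtering by the travelable area, and clustering into potential obstacles). The 60 m preset range, the rectangular travelable area, and the naive single-linkage Euclidean grouping are illustrative assumptions, not the method actually claimed:

```python
import numpy as np

FAR_FIELD_RANGE = 60.0   # assumed preset range, in metres

def crop_and_cluster(points, xmin, xmax, ymin, ymax, radius=1.5):
    """1) keep points beyond the preset range (far-field cropping);
       2) keep only points inside a rectangular travelable area;
       3) group survivors by greedy Euclidean grouping -- each group
          stands in for one far-field potential obstacle."""
    pts = np.asarray(points, dtype=float)
    dist = np.linalg.norm(pts[:, :2], axis=1)           # planar range from sensor
    far = pts[dist > FAR_FIELD_RANGE]
    in_area = far[(far[:, 0] >= xmin) & (far[:, 0] <= xmax) &
                  (far[:, 1] >= ymin) & (far[:, 1] <= ymax)]
    clusters = []
    for p in in_area:
        for c in clusters:                              # join first nearby cluster
            if any(np.linalg.norm(p[:2] - q[:2]) <= radius for q in c):
                c.append(p)
                break
        else:
            clusters.append([p])                        # start a new cluster
    return [np.array(c) for c in clusters]
```

In practice a proper clustering algorithm (e.g. DBSCAN or Euclidean cluster extraction) would replace the quadratic grouping loop.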
In some possible embodiments, the processor 901 is further configured to perform, before processing at least one frame of point cloud data to obtain point cloud data corresponding to a far-field potential obstacle: and acquiring the at least one frame of point cloud data.
In some possible embodiments, the traffic flow information corresponding to a grid is the motion features of the far-field potential obstacles contained in the point cloud data corresponding to the grid.
In some possible embodiments, the point cloud information corresponding to a grid includes at least one of the following: the height of the point cloud, the reflection intensity of the point cloud, and the number of points in the point cloud.
In some possible embodiments, the marking information is a background label or an obstacle name.
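One way to picture the per-grid traffic flow information of these embodiments is as motion statistics accumulated from the tracking sequences of far-field potential obstacles. In the sketch below, the choice of mean speed and mean heading as the statistics, the 2 m grid size, and the frame interval `dt` are all illustrative assumptions:

```python
import math

def traffic_flow_features(tracks, cell=2.0, dt=0.1):
    """tracks: list of tracking sequences, each a list of (x, y) positions
    over consecutive frames. Returns {grid index: (mean speed, mean heading)},
    one possible form of the per-grid traffic flow information."""
    sums = {}
    for seq in tracks:
        for (x0, y0), (x1, y1) in zip(seq, seq[1:]):
            vx, vy = (x1 - x0) / dt, (y1 - y0) / dt     # finite-difference velocity
            key = (int(x1 // cell), int(y1 // cell))    # grid the step ends in
            s = sums.setdefault(key, [0.0, 0.0, 0])
            s[0] += math.hypot(vx, vy)                  # speed
            s[1] += math.atan2(vy, vx)                  # heading (radians)
            s[2] += 1
    return {k: (s[0] / s[2], s[1] / s[2]) for k, s in sums.items()}
```

These statistics would be written into the same grid layout as the point cloud features before being fed to the model.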
In one or more embodiments of the present application, point cloud data corresponding to a far-field potential obstacle may be obtained by performing far-field cropping on the point cloud data; a traffic flow area is then determined according to the point cloud data, and the traffic flow area is rasterized to obtain the traffic flow information and point cloud information corresponding to each grid. The marking information corresponding to each grid is also obtained. The obstacle recognition model is obtained by training on a large amount of traffic flow information and point cloud information from grids whose marking information is known. The obstacle recognition model can improve the accuracy and detection range of far-field potential obstacle detection, and can improve the safety of the system in automatic driving scenarios.
Embodiments of the present application also provide a computer-readable storage medium, which stores instructions that, when executed on a computer or a processor, cause the computer or the processor to perform one or more of the steps in the embodiments shown in fig. 4 to 5. The respective constituent modules of the above-described history data processing apparatus may be stored in the computer-readable storage medium if they are implemented in the form of software functional units and sold or used as independent products.
Referring to fig. 10, a schematic structural diagram of another historical data processing apparatus is provided in an embodiment of the present application. As shown in fig. 10, the history data processing apparatus 100 may include: at least one processor 1001, at least one network interface 1004, a user interface 1003, memory 1005, at least one communication bus 1002.
Wherein a communication bus 1002 is used to enable connective communication between these components.
The user interface 1003 may include a display screen (Display) and a sensor interface; optionally, the user interface 1003 may further include a standard wired interface and a wireless interface.
The network interface 1004 may optionally include a standard wired interface, a wireless interface (e.g., WI-FI interface), among others.
The processor 1001 may include one or more processing cores. The processor 1001 connects various parts of the entire historical data processing apparatus 100 using various interfaces and lines, and executes various functions of the historical data processing apparatus 100 and processes data by running or executing the instructions, programs, code sets, or instruction sets stored in the memory 1005 and calling the data stored in the memory 1005. Optionally, the processor 1001 may be implemented in at least one hardware form of Digital Signal Processing (DSP), Field-Programmable Gate Array (FPGA), or Programmable Logic Array (PLA). The processor 1001 may integrate one or a combination of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a modem, and the like. The CPU mainly handles the operating system, user interface, application programs, and the like; the GPU is responsible for rendering and drawing the content to be displayed by the display screen; and the modem handles wireless communication. It can be understood that the modem may also not be integrated into the processor 1001 but implemented by a separate chip.
The memory 1005 may include a Random Access Memory (RAM) or a Read-Only Memory (ROM). Optionally, the memory 1005 includes a non-transitory computer-readable medium. The memory 1005 may be used to store instructions, programs, code, code sets, or instruction sets. The memory 1005 may include a program storage area and a data storage area, where the program storage area may store instructions for implementing an operating system, instructions for at least one function (such as a touch function, a sound playing function, or an image playing function), instructions for implementing the above method embodiments, and the like; and the data storage area may store the data involved in the above method embodiments. Optionally, the memory 1005 may also be at least one storage device located remotely from the processor 1001. As shown in fig. 10, the memory 1005, as a computer storage medium, may include an operating system, a network communication module, a user interface module, and an obstacle recognition application program.
In the historical data processing apparatus 100 shown in fig. 10, the user interface 1003 mainly provides an input interface for the user and acquires the data input by the user; and the processor 1001 may be configured to call the obstacle recognition application stored in the memory 1005 and specifically perform the following operations:
processing at least one frame of point cloud data to obtain point cloud data corresponding to a far-field potential obstacle; the far-field potential obstacle is an obstacle outside a preset range;
acquiring tracking information of the far-field potential obstacle, and generating a tracking sequence according to the tracking information of the far-field potential obstacle;
determining a traffic flow area according to the tracking sequence of the far-field potential obstacles;
rasterizing the traffic flow area, and determining traffic flow information and point cloud information corresponding to each grid; the traffic flow information is obtained according to the motion characteristics of the far-field potential obstacles corresponding to the grids, and the point cloud information is obtained according to the point cloud data corresponding to the grids;
inputting the traffic flow information and the point cloud information corresponding to each grid into an obstacle recognition model, and outputting a recognition result; the obstacle recognition model is obtained by training with the obstacle recognition model training apparatus in the embodiment of fig. 7 of the present application.
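The recognition operation above amounts to mapping each grid's feature vector through the trained model. The following sketch uses a stand-in callable for the obstacle recognition model; the feature layout (point count in column 2) is an assumption made for illustration:

```python
import numpy as np

def recognize_obstacles(feature_map, model):
    """feature_map: (nx, ny, F) array holding each grid's traffic flow
    information and point cloud information; `model` is any callable mapping
    (n_grids, F) rows to a per-grid class id (0 = background). It stands in
    for the trained obstacle recognition model of the embodiments."""
    rows = feature_map.reshape(-1, feature_map.shape[-1])
    preds = model(rows)
    return preds.reshape(feature_map.shape[:2])         # per-grid recognition result

def dummy_model(rows):
    """Trivial stand-in classifier: any grid containing points is an obstacle."""
    return (rows[:, 2] > 0).astype(int)                 # column 2 assumed = point count
```

Any trained classifier with the same per-grid input shape could be substituted for `dummy_model`.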
In one or more embodiments of the present application, point cloud data corresponding to a far-field potential obstacle may be obtained by performing far-field cropping on the point cloud data; a traffic flow area is then determined according to the point cloud data, and the traffic flow area is rasterized to obtain the traffic flow information and point cloud information corresponding to each grid. The marking information corresponding to each grid is also obtained. The obstacle recognition model is obtained by training on a large amount of traffic flow information and point cloud information from grids whose marking information is known. Using this far-field potential obstacle recognition model to detect obstacles in the subsequent automatic driving process can improve the accuracy and detection range of far-field potential obstacle detection. In the embodiments of the present application, in an automatic driving scenario, the traffic flow information and point cloud information are mapped onto the grids, a deep learning model is used to extract features from this new structure, and the obstacles are then detected; the point cloud information and traffic flow information are thus fully utilized, which greatly improves the obstacle detection effect.
Embodiments of the present application also provide a computer-readable storage medium having stored therein instructions, which when executed on a computer or a processor, cause the computer or the processor to perform one or more of the steps in the embodiment shown in fig. 6 described above. The respective constituent modules of the above-described history data processing apparatus may be stored in the computer-readable storage medium if they are implemented in the form of software functional units and sold or used as independent products.
In the above embodiments, the implementation may be realized wholly or partially by software, hardware, firmware, or any combination thereof. When implemented in software, it may be realized wholly or partially in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the present application are generated in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in, or transmitted through, a computer-readable storage medium. The computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wire (e.g., coaxial cable, optical fiber, or Digital Subscriber Line (DSL)) or wirelessly (e.g., infrared, radio, or microwave). The computer-readable storage medium may be any available medium accessible to a computer, or a data storage device such as a server or data center integrating one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, hard disk, or magnetic tape), an optical medium (e.g., a Digital Versatile Disc (DVD)), or a semiconductor medium (e.g., a Solid State Disk (SSD)), among others.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium, and can include the processes of the embodiments of the methods described above when the program is executed. And the aforementioned storage medium includes: various media capable of storing program codes, such as a Read Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, and an optical disk. The technical features in the present examples and embodiments may be arbitrarily combined without conflict.
The above-described embodiments are merely preferred embodiments of the present application, and are not intended to limit the scope of the present application, and various modifications and improvements made to the technical solutions of the present application by those skilled in the art without departing from the design spirit of the present application should fall within the protection scope defined by the claims of the present application.

Claims (10)

1. An obstacle recognition model training method, comprising:
processing at least one frame of point cloud data to obtain point cloud data corresponding to a far-field potential obstacle; the far-field potential obstacle is an obstacle outside a preset range;
acquiring tracking information of the far-field potential obstacle, and generating a tracking sequence according to the tracking information of the far-field potential obstacle;
determining a traffic flow area according to the tracking sequence of the far-field potential obstacles;
rasterizing the traffic flow area, and determining traffic flow information and point cloud information corresponding to each grid; the traffic flow information is obtained according to the motion characteristics of the far-field potential obstacles corresponding to the grids, and the point cloud information is obtained according to the point cloud data corresponding to the grids;
acquiring mark information corresponding to each grid;
and taking the marking information corresponding to multiple groups of grids, together with the point cloud information and the traffic flow information corresponding to the grids, as sample data to train an obstacle recognition model.
2. The method of claim 1, wherein the processing at least one frame of point cloud data to obtain point cloud data corresponding to a far-field potential obstacle comprises:
determining a travelable area;
and processing at least one frame of point cloud data to obtain point cloud data corresponding to the far-field potential obstacle in the travelable area.
3. The method of claim 2, wherein the point cloud data includes distance information;
the processing of at least one frame of point cloud data to obtain point cloud data corresponding to the far-field potential obstacle in the travelable area comprises the following steps:
determining far-field point cloud data in the at least one frame of point cloud data according to the distance information of the point cloud data;
filtering far-field point cloud data in the at least one frame of point cloud data according to the travelable area to obtain the far-field point cloud data in the travelable area;
and clustering the far-field point cloud data in the travelable area, and determining the point cloud data corresponding to the far-field potential obstacle in the travelable area.
4. The method of claim 1, wherein the traffic flow information corresponding to a grid is the motion features of the far-field potential obstacles contained in the point cloud data corresponding to the grid; and the point cloud information corresponding to the grid includes at least one of the following: the height of the point cloud, the reflection intensity of the point cloud, and the number of points in the point cloud.
5. The method of claim 1, wherein the marker information is a background or an obstacle name.
6. An obstacle recognition method, comprising:
processing at least one frame of point cloud data to obtain point cloud data corresponding to a far-field potential obstacle; the far-field potential obstacle is an obstacle outside a preset range;
acquiring tracking information of the far-field potential obstacle, and generating a tracking sequence according to the tracking information of the far-field potential obstacle;
determining a traffic flow area according to the tracking sequence of the far-field potential obstacles;
rasterizing the traffic flow area, and determining traffic flow information and point cloud information corresponding to each grid; the traffic flow information is obtained according to the motion characteristics of the far-field potential obstacles corresponding to the grids, and the point cloud information is obtained according to the point cloud data corresponding to the grids;
inputting the traffic flow information and the point cloud information corresponding to each grid into an obstacle recognition model, and outputting a recognition result; the obstacle recognition model is an obstacle recognition model trained by the method of any one of claims 1-5.
7. A history data processing apparatus, characterized by comprising:
the processing module is used for processing at least one frame of point cloud data to obtain point cloud data corresponding to a far-field potential obstacle; the far-field potential obstacle is an obstacle outside a preset range;
the acquisition module is used for acquiring the tracking information of the far-field potential obstacle and generating a tracking sequence according to the tracking information of the far-field potential obstacle;
the determining module is used for determining a traffic flow area according to the tracking sequence of the far-field potential obstacles;
the rasterization module is used for rasterizing the traffic flow area and determining traffic flow information and point cloud information corresponding to each grid; the traffic flow information is obtained according to the motion characteristics of the far-field potential obstacles corresponding to the grids, and the point cloud information is obtained according to the point cloud data corresponding to the grids;
the marking module is used for acquiring marking information corresponding to each grid;
and the training module is used for training the obstacle recognition model by taking the marking information corresponding to the grids and the point cloud information and the traffic flow information corresponding to the grids as sample data.
8. A history data processing apparatus, characterized by comprising:
the processing module is used for processing at least one frame of point cloud data to obtain point cloud data corresponding to a far-field potential obstacle; the far-field potential obstacle is an obstacle outside a preset range;
the acquisition module is used for acquiring the tracking information of the far-field potential obstacle and generating a tracking sequence according to the tracking information of the far-field potential obstacle;
the determining module is used for determining a traffic flow area according to the tracking sequence of the far-field potential obstacles;
the rasterization module is used for rasterizing the traffic flow area and determining traffic flow information and point cloud information corresponding to each grid; the traffic flow information is obtained according to the motion characteristics of the far-field potential obstacles corresponding to the grids, and the point cloud information is obtained according to the point cloud data corresponding to the grids;
the identification module is used for inputting the point cloud data, the traffic flow information, and the point cloud information corresponding to each grid into an obstacle recognition model and outputting a recognition result; the obstacle recognition model is an obstacle recognition model trained by the method of any one of claims 1-5.
9. An obstacle recognition system, comprising:
the sensing and sensing device is used for collecting point cloud data and transmitting the point cloud data to the historical data processing device and the vehicle-mounted terminal;
the historical data processing device is used for processing the point cloud data transmitted by the perception sensing device or the stored historical point cloud data to obtain an identification model of a far-field potential obstacle;
the vehicle-mounted terminal is used for receiving the point cloud data transmitted by the perception sensing device, identifying a near-field obstacle and cutting a far-field point cloud, processing the far-field point cloud to obtain a tracking sequence and a traffic flow area of a far-field potential obstacle, rasterizing the traffic flow area, and determining traffic flow information and point cloud information corresponding to each grid;
the vehicle-mounted terminal is also used for sending the traffic flow information and the point cloud information corresponding to each grid to the historical data processing device;
the historical data processing device is further used for obtaining a far-field potential obstacle recognition result by adopting the far-field potential obstacle recognition model and sending the far-field potential obstacle recognition result to the vehicle-mounted terminal;
and the vehicle-mounted terminal is also used for combining the recognition result of the near-field obstacle and the recognition result of the far-field potential obstacle and outputting a control instruction of the vehicle.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the method according to any one of claims 1-6.
CN202110015844.3A 2021-01-07 2021-01-07 Obstacle recognition model training method, obstacle recognition method, device and system Active CN112347999B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110015844.3A CN112347999B (en) 2021-01-07 2021-01-07 Obstacle recognition model training method, obstacle recognition method, device and system

Publications (2)

Publication Number Publication Date
CN112347999A CN112347999A (en) 2021-02-09
CN112347999B true CN112347999B (en) 2021-05-14

Family

ID=74427997

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110015844.3A Active CN112347999B (en) 2021-01-07 2021-01-07 Obstacle recognition model training method, obstacle recognition method, device and system

Country Status (1)

Country Link
CN (1) CN112347999B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112943025B (en) * 2021-02-20 2022-08-16 广州小鹏自动驾驶科技有限公司 Automatic starting and stopping method of vehicle door and related device
CN112991735B (en) * 2021-03-05 2022-10-14 北京百度网讯科技有限公司 Test method, device and equipment of traffic flow monitoring system
CN112734810B (en) * 2021-04-06 2021-07-02 北京三快在线科技有限公司 Obstacle tracking method and device
CN113052131A (en) * 2021-04-20 2021-06-29 深圳市商汤科技有限公司 Point cloud data processing and automatic driving vehicle control method and device
CN113269168B (en) * 2021-07-19 2021-10-15 禾多阡陌科技(北京)有限公司 Obstacle data processing method and device, electronic equipment and computer readable medium
CN113466850A (en) * 2021-09-01 2021-10-01 北京智行者科技有限公司 Environment sensing method and device and mobile tool
CN113806464A (en) * 2021-09-18 2021-12-17 北京京东乾石科技有限公司 Road tooth determining method, device, equipment and storage medium
WO2023166700A1 (en) * 2022-03-04 2023-09-07 パイオニア株式会社 Information processing device, control method, program, and storage medium
CN116912403A (en) * 2023-07-03 2023-10-20 上海鱼微阿科技有限公司 XR equipment and obstacle information sensing method thereof

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110221603A (en) * 2019-05-13 2019-09-10 浙江大学 A kind of long-distance barrier object detecting method based on the fusion of laser radar multiframe point cloud
CN111626314A (en) * 2019-02-28 2020-09-04 深圳市速腾聚创科技有限公司 Point cloud data classification method and device, computer equipment and storage medium

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106772435B (en) * 2016-12-12 2019-11-19 浙江华飞智能科技有限公司 A kind of unmanned plane barrier-avoiding method and device
CN106845416B (en) * 2017-01-20 2021-09-21 百度在线网络技术(北京)有限公司 Obstacle identification method and device, computer equipment and readable medium
CN106709475B (en) * 2017-01-22 2021-01-22 百度在线网络技术(北京)有限公司 Obstacle recognition method and device, computer equipment and readable storage medium
CN107316048B (en) * 2017-05-03 2020-08-28 深圳市速腾聚创科技有限公司 Point cloud classification method and device
CN110596731A (en) * 2019-09-12 2019-12-20 天津市市政工程设计研究院 Active obstacle detection system and method for metro vehicle
CN111337898B (en) * 2020-02-19 2022-10-14 北京百度网讯科技有限公司 Laser point cloud processing method, device, equipment and storage medium
CN111881245B (en) * 2020-08-04 2023-08-08 深圳安途智行科技有限公司 Method, device, equipment and storage medium for generating visibility dynamic map

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111626314A (en) * 2019-02-28 2020-09-04 深圳市速腾聚创科技有限公司 Point cloud data classification method and device, computer equipment and storage medium
CN110221603A (en) * 2019-05-13 2019-09-10 浙江大学 A kind of long-distance barrier object detecting method based on the fusion of laser radar multiframe point cloud

Also Published As

Publication number Publication date
CN112347999A (en) 2021-02-09

Similar Documents

Publication Publication Date Title
CN112347999B (en) Obstacle recognition model training method, obstacle recognition method, device and system
KR102210715B1 (en) Method, apparatus and device for determining lane lines in road
CN112417967B (en) Obstacle detection method, obstacle detection device, computer device, and storage medium
JP7395301B2 (en) Obstacle detection method, obstacle detection device, electronic equipment, vehicle and storage medium
CN110226186B (en) Method and device for representing map elements and method and device for positioning
CN112329754B (en) Obstacle recognition model training method, obstacle recognition method, device and system
WO2021097618A1 (en) Point cloud segmentation method and system, and computer storage medium
CN108509820B (en) Obstacle segmentation method and device, computer equipment and readable medium
WO2018068653A1 (en) Point cloud data processing method and apparatus, and storage medium
CN111427979B (en) Dynamic map construction method, system and medium based on laser radar
US11556745B2 (en) System and method for ordered representation and feature extraction for point clouds obtained by detection and ranging sensor
CN108470174B (en) Obstacle segmentation method and device, computer equipment and readable medium
CN111291697B (en) Method and device for detecting obstacles
CN113366486A (en) Object classification using out-of-region context
CN112562314A (en) Road end sensing method and device based on deep fusion, road end equipment and system
CN110956137A (en) Point cloud data target detection method, system and medium
CN114821507A (en) Multi-sensor fusion vehicle-road cooperative sensing method for automatic driving
CN115147333A (en) Target detection method and device
US8483478B1 (en) Grammar-based, cueing method of object recognition, and a system for performing same
EP3764335A1 (en) Vehicle parking availability map systems and methods
CN112823353A (en) Object localization using machine learning
CN115331214A (en) Sensing method and system for target detection
CN116863325A (en) Method for multiple target detection and related product
CN114545424A (en) Obstacle recognition method, obstacle recognition device, obstacle recognition model training method, obstacle recognition model training device, obstacle recognition equipment and storage medium
WO2020103043A1 (en) Linear object identification method, device and system and computer storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant