CN115056771A - Collision detection method and device, vehicle and storage medium - Google Patents

Collision detection method and device, vehicle and storage medium

Info

Publication number
CN115056771A
Authority
CN
China
Prior art keywords
point cloud
cloud data
vehicle
time
area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210630726.8A
Other languages
Chinese (zh)
Inventor
郑知润
常浩
韩旭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Weride Technology Co Ltd
Original Assignee
Guangzhou Weride Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Weride Technology Co Ltd filed Critical Guangzhou Weride Technology Co Ltd
Publication of CN115056771A
Pending legal-status Critical Current

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W30/00 Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units, or advanced driver assistance systems for ensuring comfort, stability and safety or drive control systems for propelling or retarding the vehicle
    • B60W30/08 Active safety systems predicting or avoiding probable or impending collision or attempting to minimise its consequences
    • B60W30/095 Predicting travel path or likelihood of collision
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W30/00 Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units, or advanced driver assistance systems for ensuring comfort, stability and safety or drive control systems for propelling or retarding the vehicle
    • B60W30/08 Active safety systems predicting or avoiding probable or impending collision or attempting to minimise its consequences
    • B60W30/095 Predicting travel path or likelihood of collision
    • B60W30/0956 Predicting travel path or likelihood of collision the prediction being responsive to traffic or environmental parameters
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W60/00 Drive control systems specially adapted for autonomous road vehicles
    • B60W60/001 Planning or execution of driving tasks
    • B60W60/0011 Planning or execution of driving tasks involving control alternatives for a single driving scenario, e.g. planning several paths to avoid obstacles

Abstract

The invention discloses a collision detection method, a collision detection device, a vehicle and a storage medium, wherein the method comprises the following steps: acquiring first point cloud data and a first time of a laser radar, and acquiring first vehicle pose information corresponding to the first time; filtering and/or performing motion compensation on the first point cloud data to obtain second point cloud data; establishing a target area, acquiring the second point cloud data in the target area to obtain third point cloud data, and labeling a target object on the third point cloud data; judging whether the vehicle collides with the target object according to the target object and the first vehicle pose information; and if a collision occurs, calculating the collision time. With the method and device, the detected obstacle position information is highly accurate, the braking response is faster, and processing completes earlier than the autonomous-driving perception and planning pipeline, thereby improving the safety and reliability of the unmanned vehicle.

Description

Collision detection method and device, vehicle and storage medium
Technical Field
The invention relates to the technical field of vehicles, in particular to a collision detection method and device, a vehicle and a storage medium.
Background
With the development of technology, unmanned automobiles are advancing rapidly. While unmanned automobiles bring convenience and a better driving experience, vehicle safety becomes an increasingly prominent problem. Current unmanned driving performs collision detection through the autonomous-driving perception and planning pipeline; this collision detection responds slowly and takes a long time to complete, which leads to low safety and reliability of unmanned vehicles.
Disclosure of Invention
The invention mainly aims to provide a collision detection method, a collision detection device, a vehicle and a storage medium, to solve the prior-art problem that collision detection performed through autonomous-driving perception and planning responds slowly, resulting in low safety and reliability of unmanned vehicles.
In order to achieve the above object, the present invention provides a collision detection method, including the steps of:
s1: acquiring first point cloud data and first time of a laser radar, and acquiring first vehicle pose information corresponding to the first time;
s2: filtering and/or performing motion compensation on the first point cloud data to obtain second point cloud data;
s3: establishing a target area, acquiring the second point cloud data in the target area to obtain third point cloud data, and labeling a target object on the third point cloud data;
s4: judging whether a vehicle collides with the target object or not according to the target object and the first vehicle pose information; if a collision occurs, a collision time is calculated.
Optionally, the filtering the first point cloud data includes:
and carrying out noise point filtering and/or vehicle body point cloud filtering and/or voxel filtering on the first point cloud data.
Optionally, the motion compensating the first point cloud data includes:
a1: initializing initial data for motion compensation, the initial data comprising: second vehicle pose information and a second time; constructing a first conversion matrix based on the second vehicle pose information;
a2: acquiring vehicle pose information of the current time to obtain third vehicle pose information; acquiring current time to obtain third time;
a3: acquiring points in the range from the second time to the third time, and performing matrix operation on the points one by using the first conversion matrix to obtain a motion-compensated point;
a4: and repeating the steps A2-A3, and processing all points of the first point cloud data to obtain the second point cloud data after motion compensation.
Optionally, the step S3 includes the following steps:
dividing the second point cloud data into regions according to horizontal angles to obtain a first region;
horizontally dividing the first area according to the line number of the laser radar to obtain a second area;
dividing the second area according to the cells in the designated range to obtain a third area;
judging whether the height values of the points of the second point cloud data and the ground in the third area are larger than a height threshold value one by one; and if so, marking the point as the target object and allocating a unique identifier to each target object.
Optionally, the step S4 includes the following steps:
constructing a second conversion matrix according to the first vehicle pose information corresponding to the plurality of second point cloud data and the variation of the vehicle pose information at the current moment;
performing matrix operation on the second conversion matrix to convert a coordinate system to obtain a fixed coordinate system;
predicting the motion track of the vehicle in a first time length range in the fixed coordinate system to obtain a first motion track; predicting the motion track of the moving object in the first time length range in the fixed coordinate system to obtain a second motion track;
judging whether the first motion track and the second motion track are crossed or not; if the intersection exists, acquiring a second time length for the vehicle to move to the intersection and acquiring a third time length for the moving object to move to the intersection;
calculating a fourth time length for the moving object to leave the first track according to the vehicle width of the vehicle, the length of the moving object and the angle for the moving object to enter the second motion track;
if the second duration is longer than the third duration and shorter than the fourth duration, the vehicle collides with the moving object, and the collision time is the sum of the second duration and the time difference between the first time the moving object enters the first motion track and the intersection of the moving object and the vehicle in the first track.
Optionally, the method further comprises the following steps:
whether the second point cloud data exists in the third area or not, and if the second point cloud data exists, identifying the third area as an effective area; if the second point cloud data does not exist, identifying the third area as an invalid area;
converting the plurality of second point cloud data in the third area marked as the effective area into the same coordinate system, matching target objects segmented by each second point cloud data, and marking the target objects as static objects or moving objects;
and tracking the position change of the moving object according to time, and calculating the motion direction and the angle of the moving object.
Optionally, the method further comprises the following steps:
determining whether the static object and/or the dynamic object is a reflected object; if so, marking the static object or the dynamic object as a reflected object;
judging whether the reflected object is in an indoor or traffic jam section: and if so, deleting the static object or the moving object corresponding to the reflected object.
Further, to achieve the above object, the present invention also proposes a collision detection device including:
the point cloud obtaining unit is used for obtaining first point cloud data and first time of the laser radar and obtaining first vehicle pose information corresponding to the first time;
the point cloud compensation unit is used for filtering and/or performing motion compensation on the first point cloud data to obtain second point cloud data;
the point cloud marking unit is used for establishing a target area, acquiring the second point cloud data in the target area to obtain third point cloud data, and marking a target object on the third point cloud data;
and the collision judging unit is used for judging whether the vehicle collides with the target object according to the target object and the first vehicle pose information, and if so, calculating the collision time.
Further, to achieve the above object, the present invention also proposes a vehicle comprising: a memory, a processor and a collision detection program stored on the memory and executable on the processor, the collision detection program being configured to implement the steps of the collision detection method as described above.
Furthermore, to achieve the above object, the present invention also proposes a computer-readable storage medium having stored thereon a computer program which, when being executed by a processor, implements the steps of the collision detection method as described above.
With the method and device, the detected obstacle position information is highly accurate, the braking response is faster, and processing completes earlier than the autonomous-driving perception and planning pipeline, thereby improving the safety and reliability of the unmanned vehicle.
Drawings
Fig. 1 is a schematic flow chart of a collision detection method according to the present invention.
Fig. 2 is a schematic flow chart of motion compensation provided by the present invention.
Fig. 3 is a schematic flow chart of target labeling according to the present invention.
Fig. 4 is a schematic flow chart of determining whether a vehicle is collided according to the present invention.
Fig. 5 is a schematic flowchart of the valid area identifier provided in the present invention.
Fig. 6 is a schematic flow chart of deleting a reflected object according to the present invention.
Fig. 7 is a block diagram showing a structure of an embodiment of the collision detecting apparatus according to the present invention.
Fig. 8 is a vehicle structure diagram of a hardware operating environment according to an embodiment of the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
In order to make the technical problems, technical solutions and advantageous effects to be solved by the present invention clearer and clearer, the present invention is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
In the following description, suffixes such as "module", "component", or "unit" used to denote elements are used only to facilitate the explanation of the present invention and have no specific meaning in themselves. Thus, "module", "component" and "unit" may be used interchangeably.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order.
In one embodiment, as shown in fig. 1, the present invention provides a collision detection method, the method comprising:
s1, first point cloud data and first time of the laser radar are obtained, and first vehicle position and pose information corresponding to the first time is obtained.
The automatic driving vehicle receives a frame of point cloud data from the laser radar and obtains the time corresponding to that frame. The data is transmitted to the laser radar driver layer in UDP data packets, and the driver layer parses all points according to the protocol to form one frame of point cloud data.
When the point cloud data is acquired, the pose information of the current vehicle is also acquired. The vehicle pose information includes: x, y, z coordinates, pitch angle (pitch), heading angle (yaw), and roll angle (roll). The information obtained by the vehicle is shown in the following table:
First time                 First point cloud data    First vehicle pose information
2021-12-28 15:35:26:256    Point cloud data A        Vehicle pose information A
The autonomous vehicle stores the acquired time, point cloud data, and vehicle pose information locally, such as in a database.
And S2, filtering and/or performing motion compensation on the first point cloud data to obtain second point cloud data.
The laser radar driver layer carries out noise point filtering, vehicle body point cloud filtering and voxel filtering on one frame of point cloud; filtering reduces the number of points while preserving the overall shape of the point cloud. The filtered point cloud data is then sent to other modules, such as a degradation thread, for subsequent processing.
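For illustration only, a minimal voxel-filtering sketch in Python/NumPy is given below; the (N, 3) array layout and the 0.1 m voxel size are assumptions, and the noise-point and vehicle-body filters are not shown.

```python
import numpy as np

def voxel_filter(points: np.ndarray, voxel_size: float = 0.1) -> np.ndarray:
    """Reduce an (N, 3) point cloud to one centroid per occupied voxel."""
    # Integer voxel index for every point.
    voxel_idx = np.floor(points / voxel_size).astype(np.int64)
    # Group the points that fall into the same voxel.
    _, inverse, counts = np.unique(voxel_idx, axis=0,
                                   return_inverse=True, return_counts=True)
    inverse = inverse.reshape(-1)
    sums = np.zeros((counts.size, 3))
    np.add.at(sums, inverse, points)   # sum the points of each voxel
    return sums / counts[:, None]      # centroid per voxel
```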
The main function of motion compensation is to compensate the points within one frame of point cloud for the errors introduced by vehicle motion, thereby reducing distortion.
The automatic driving vehicle performs the noise filtering and the motion compensation simultaneously, for example using two threads or processes, which speeds up the processing of the point cloud data. If one thread or process finishes its noise filtering or motion compensation before the other, it waits until the other has finished before the subsequent steps are processed.
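As an illustrative sketch of this two-worker arrangement, the stages can be submitted to a small thread pool; noise_filter and motion_compensate below are hypothetical placeholders for the filtering and compensation routines described above, and a real system may use separate processes instead.

```python
from concurrent.futures import ThreadPoolExecutor

def preprocess_frame(frame, noise_filter, motion_compensate):
    """Run the two preprocessing stages on the same frame concurrently."""
    with ThreadPoolExecutor(max_workers=2) as pool:
        f_filtered = pool.submit(noise_filter, frame)
        f_compensated = pool.submit(motion_compensate, frame)
        # Whichever worker finishes first implicitly waits here for the other
        # before the subsequent steps run.
        return f_filtered.result(), f_compensated.result()
```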
Motion compensation referring to the flow shown in fig. 2:
step A1: initializing initial data for motion compensation, the initial data comprising: second vehicle pose information and a second time; and constructing a first conversion matrix based on the second vehicle pose information.
Some data structures required for motion compensation are initialized, such as the vehicle pose information (pose information), and a compensation sync time is defined; that is, the points in each frame of point cloud are transformed using a matrix constructed from the pose information at this moment.
When motion compensation is performed, a reference time and vehicle pose information corresponding to the reference time need to be selected. When the motion compensation is performed on the points in the subsequent point cloud data, the matrix conversion is performed by using the position and orientation information.
The vehicle pose information (position information) includes: x, y, z coordinates, pitch angle (pitch), heading angle (yaw), and roll angle (roll).
The initial data are shown in the following table:
[Table: initial data, i.e. the second time and the corresponding second vehicle pose information (x, y, z, roll, pitch, yaw)]
A rotation matrix for the three-dimensional spatial coordinate transformation is then constructed from the initialized vehicle pose information.
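An illustrative sketch of constructing such a transform from a pose follows; the yaw-pitch-roll (ZYX) rotation order is an assumed convention, since the text does not fix one.

```python
import numpy as np

def pose_to_matrix(x, y, z, roll, pitch, yaw):
    """Build a 4x4 homogeneous transform from a vehicle pose."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])   # yaw about Z
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])   # pitch about Y
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])   # roll about X
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx
    T[:3, 3] = [x, y, z]
    return T
```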
Step A2: acquiring vehicle pose information of the current time to obtain third vehicle pose information; and obtaining the current time to obtain a third time.
The positioning information (x, y, z; roll, yaw, pitch) of the positioning module at the current moment (i.e. the vehicle pose information) is acquired; by default it is acquired every 10 ms, although other intervals are possible. If the positioning information is invalid (which can be confirmed from the flag in the information), the speed and steering angle information provided by the autonomous vehicle is used instead; its accuracy is lower but generally within the allowable error range. The vehicle pose information acquired at the current moment is shown in the following table:
[Table: the third time and the third vehicle pose information (x, y, z, roll, pitch, yaw) acquired at the current moment]
then, a transformation matrix is constructed based on the vehicle pose information (position information) corresponding to the first time (sync time) in the initialization using step a 1.
Step A3: and acquiring points in the range from the second time to the third time, and performing matrix operation on the points one by using the first conversion matrix to obtain a motion-compensated point.
Matrix operations are carried out one by one on the points scanned between the two vehicle poses, each yielding a new point, which is the motion-compensated point.
Step A4: and repeating the steps A2-A3, and processing all points of the first point cloud data to obtain the second point cloud data after motion compensation.
Steps A2-A3 are repeated until all the points of the frame have undergone motion compensation, yielding the motion-compensated frame of point cloud data.
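One common way to realize the per-point matrix operation is sketched below, reusing pose_to_matrix from the earlier sketch; expressing the points in the frame of the sync pose is an assumption about the chosen reference.

```python
import numpy as np

def compensate_points(points, sync_pose, current_pose):
    """Express points scanned near current_pose in the frame of sync_pose.

    points: (N, 3) points scanned between the second (sync) time and the
            third (current) time.
    sync_pose / current_pose: (x, y, z, roll, pitch, yaw) tuples.
    """
    T_sync = pose_to_matrix(*sync_pose)     # sync frame -> world
    T_now = pose_to_matrix(*current_pose)   # current frame -> world
    T_rel = np.linalg.inv(T_sync) @ T_now   # current frame -> sync frame
    homo = np.hstack([points, np.ones((len(points), 1))])
    return (T_rel @ homo.T).T[:, :3]
```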
Step S3: establishing a target area, acquiring the second point cloud data in the target area to obtain third point cloud data, and labeling a target object on the third point cloud data.
The motion-compensated point cloud is then divided into regions so that the point cloud data can be processed rapidly. For region division, see the flow shown in fig. 3:
step S101: and dividing the second point cloud data into areas according to horizontal angles to obtain a first area.
Step S102: and horizontally dividing the first area according to the line number of the laser radar to obtain a second area.
The points in the point cloud are divided into regions according to a certain horizontal angle (such as 5 degrees; other horizontal angles can also be set), and then each region is divided into the corresponding rings according to the number of lines of the laser radar, for example 64 rings per region. (Note that the number of rings depends on the lidar: a lidar with a given number of lines yields that many rings, so the size of one RingMap is (HFOV/5) x ring count; in this scheme it is 360/5 x 64.)
In the table of each ring map, the height information of the points falling on each ring is stored, and the average of these point heights is taken as the height (Z) of that ring-map entry. By analogy, once all the height information has been calculated and filled in, the ring map is complete.
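A sketch of filling such a ring map is given below; that the lidar driver supplies a per-point ring (laser line) index is an assumption.

```python
import numpy as np

def build_ring_map(points, ring_ids, sector_deg=5.0, num_rings=64):
    """Average the point heights per (horizontal sector, ring) entry.

    points:   (N, 3) motion-compensated points.
    ring_ids: (N,) integer laser-line index of each point (0 .. num_rings-1).
    Returns a (360/sector_deg, num_rings) array of mean heights; empty
    entries are NaN.
    """
    num_sectors = int(round(360.0 / sector_deg))
    angles = (np.degrees(np.arctan2(points[:, 1], points[:, 0])) + 360.0) % 360.0
    sector_ids = np.minimum((angles // sector_deg).astype(int), num_sectors - 1)

    sums = np.zeros((num_sectors, num_rings))
    counts = np.zeros((num_sectors, num_rings))
    np.add.at(sums, (sector_ids, ring_ids), points[:, 2])
    np.add.at(counts, (sector_ids, ring_ids), 1.0)
    return np.where(counts > 0, sums / np.maximum(counts, 1.0), np.nan)
```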
Step S103: and dividing the second area according to the cells in the designated range to obtain a third area.
Cells of a specified range (such as 30x30 cm in X and Y) are constructed on the ring map according to the point cloud data and are used to divide the region for target segmentation and identification. The cells are divided according to the horizontal information, and the points within the corresponding area are placed into each cell.
the effectiveness of the divided cells needs to be judged, and only the effective cells are processed. Referring to the flow shown in FIG. 5:
step S301: whether the second point cloud data exist in the third area or not, and if the second point cloud data exist, identifying the third area as an effective area; identifying the third region as an invalid region if the second point cloud data is not present.
Points are filled into the cells during cell construction and this is recorded, and the validity of each cell is judged from the record. If a cell contains no point data, the cell is invalid and needs no subsequent processing; if a cell contains point data, the cell is valid and is processed in the subsequent steps.
Step S302: and converting the plurality of second point cloud data in the third area marked as the effective area into the same coordinate system, matching the target object segmented by each second point cloud data, and marking the target object as a static object or a moving object.
Step S303: and tracking the position change of the moving object according to time, and calculating the motion direction and the angle of the moving object.
The target objects are labeled by gridding multiple frames (such as 8 frames) of point cloud data. The latest 8 frames are transferred into the same coordinate system and the targets segmented from each frame are matched, which facilitates target tracking. Once the data are placed in one coordinate system, the points of a static object largely overlap, so static objects are easily labeled; for a dynamic object, its position change is tracked over time and its motion direction and angle can be calculated, which provides the data for estimating the moving object's trajectory over a future period (such as 2.5 s).
Step S104: judging whether the height values of the points of the second point cloud data and the ground in the third area are larger than a height threshold value one by one; and if so, marking the point as the target object and allocating a unique identifier to each target object.
All the effective cells are traversed; if the height of a point in a cell above the ground exceeds a certain distance, the point is recorded as belonging to a target object. Points belonging to the ground are marked as uninteresting objects and share a uniform ID, while each target object is assigned a unique ID.
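A simplified labeling sketch follows; treating all above-threshold points of a cell as one target and the 0.3 m threshold are assumptions (in the full scheme, ground points would additionally receive a uniform "uninteresting" ID).

```python
from itertools import count
import numpy as np

def label_targets(cells, ground_height, height_threshold=0.3):
    """Assign a unique id to the above-ground points of each valid cell.

    cells: dict mapping (cell_x, cell_y) -> (M, 3) array of points; empty
           cells are treated as invalid and skipped.
    ground_height: estimated ground Z for the region.
    height_threshold: minimum height above ground to count as a target.
    Returns {target_id: points}.
    """
    next_id = count(start=1)
    targets = {}
    for cell, pts in cells.items():
        if pts is None or len(pts) == 0:
            continue                                   # invalid cell
        above = pts[pts[:, 2] - ground_height > height_threshold]
        if len(above) > 0:
            targets[next(next_id)] = above             # one id per detected target
    return targets
```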
For targets in the cells, the reflective targets need to be removed. Referring specifically to the flow shown in fig. 6:
step S401: determining whether the static object and/or the dynamic object is a reflected object; if so, marking the static object or the dynamic object as a reflected object.
Step S402: judging whether the reflected object is in an indoor or traffic jam section: and if so, deleting the static object or the moving object corresponding to the reflected object.
The reflection points can be deleted by ray tracing algorithms and filtering of the points below the ground, which belongs to the prior art and is not described in detail.
It is then judged whether the vehicle is in a basement or a traffic-jam section; if a traffic-jam section exists, part of the reflection points are removed. The traffic-jam section is judged by the following scheme:
According to the zones of the ring map, three zones (directly ahead, left rear and right rear) are scanned and the nearest non-ground ring is calculated (a smaller ring index is closer to the vehicle). If the ring value is small, congestion is indicated and a congestion value is recorded; observation then continues for at least 3 frames, and each time the ring value is not small the congestion value is decreased by 1, stopping once the congestion value is less than or equal to 0. A congestion value of 0 represents a non-congested segment; otherwise the segment is congested.
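One plausible reading of this counter logic is sketched below; the ring threshold and the exact handling of the 3-frame observation window are assumptions.

```python
def update_congestion(congestion_value, nearest_rings, ring_threshold=5):
    """One-frame update of the congestion counter described above.

    nearest_rings:  nearest non-ground ring index for the front, left-rear and
                    right-rear zones (a smaller index means closer to the vehicle).
    ring_threshold: ring index below which a zone is considered blocked (assumed value).
    A returned value of 0 means a non-congested segment.
    """
    if min(nearest_rings) < ring_threshold:
        return congestion_value + 1          # something close by: likely congestion
    return max(congestion_value - 1, 0)      # decay while the scene stays clear
```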
Step S4: judging whether a vehicle collides with the target object or not according to the target object and the first vehicle pose information; if a collision occurs, a collision time is calculated.
Whether the target object (moving object, static object identified in the cell) and the vehicle collide is judged, and the flow is described with reference to fig. 4:
step S201: and constructing a second conversion matrix according to the first vehicle pose information corresponding to the plurality of second point cloud data and the variation of the vehicle pose information at the current moment.
Step S202: and performing matrix operation on the second conversion matrix to convert the coordinate system to obtain a fixed coordinate system.
The previous multiple frames (such as 8 frames) of point cloud data are transferred into the current coordinate system: according to the time corresponding to each frame, the changes of the six variables (x, y, z coordinates, pitch angle, heading angle yaw and roll angle) between the corresponding vehicle pose information and the current vehicle pose information are found, a conversion matrix is constructed, and the coordinate system is converted through matrix operation.
Step S203: predicting the motion track of the vehicle in a first time length range in the fixed coordinate system to obtain a first motion track; and predicting the motion track of the moving object in the first time length range in the fixed coordinate system to obtain a second motion track.
Step S204: judging whether the first motion track and the second motion track are crossed or not; and if the intersection exists, acquiring a second time period for the vehicle to move to the intersection and acquiring a third time period for the moving object to move to the intersection.
The motion trajectory over a certain future time (such as 2.5 seconds) is predicted from the motion changes of the moving object in the x and y directions and its heading angle yaw, and the same trajectory prediction is carried out for the vehicle. If the vehicle's trajectory and the moving object's trajectory intersect, the times of the first intersection are calculated for each: for example, the time for the moving object to reach the intersection is Tf and the time for the vehicle to reach it is Ta.
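For illustration, a constant-velocity version of this trajectory crossing test is sketched below; the prediction model, the 0.05 s step and the 1 m hit radius are all assumptions.

```python
import numpy as np

def first_meeting_times(ego_pos, ego_vel, obj_pos, obj_vel,
                        horizon=2.5, dt=0.05, hit_radius=1.0):
    """Return (Ta, Tf): time for the vehicle and for the object to reach the
    first point where their predicted trajectories come within hit_radius.

    Positions and velocities are 2-D (x, y) NumPy arrays.
    Returns (None, None) if the trajectories never cross within the horizon.
    """
    steps = int(horizon / dt)
    t = np.arange(steps) * dt
    ego_track = ego_pos + np.outer(t, ego_vel)          # (steps, 2)
    obj_track = obj_pos + np.outer(t, obj_vel)
    dists = np.linalg.norm(ego_track[:, None, :] - obj_track[None, :, :], axis=2)
    hits = np.argwhere(dists < hit_radius)
    if hits.size == 0:
        return None, None
    i, j = hits[0]                                      # earliest ego step that crosses
    return i * dt, j * dt                               # Ta (vehicle), Tf (object)
```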
Step S205: and calculating a fourth time length for the moving object to leave the first track according to the vehicle width of the vehicle, the length of the moving object and the angle for the moving object to enter the second motion track.
And calculating the time Tg of the object leaving the vehicle track according to the vehicle width, the object length and the angle of the object entering the track.
Step S206: if the second duration is longer than the third duration and shorter than the fourth duration, the vehicle collides with the moving object, and the collision time is the sum of the second duration and the time difference between the first time the moving object enters the first motion track and the intersection of the moving object and the vehicle in the first track.
If Tf > Ta, the vehicle has already passed before the object reaches the first intersection; if Tg < Ta, the object has left the vehicle trajectory before the vehicle reaches the intersection. In both cases the collision time is infinite, i.e. no collision occurs. When Tf < Ta < Tg, a collision occurs and the collision time is Ta + Toffset, where Toffset is the time difference between the object first entering the vehicle trajectory and the object meeting the vehicle within the trajectory; it is related to the angle and speed at which the moving object enters the trajectory.
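The decision rule reduces to the following sketch, where Toffset is assumed to be computed elsewhere from the entry angle and speed.

```python
def collision_time(Ta, Tf, Tg, Toffset):
    """Decision rule from the passage above.

    Ta: time for the vehicle to reach the crossing point.
    Tf: time for the moving object to reach the crossing point.
    Tg: time for the object to leave the vehicle trajectory.
    Toffset: time from the object first entering the trajectory to meeting the
             vehicle inside it.
    Returns the collision time, or None (infinite) when no collision occurs.
    """
    if Ta is None or Tf is None:
        return None              # trajectories never cross
    if Tf > Ta:                  # vehicle passes before the object arrives
        return None
    if Tg < Ta:                  # object leaves the trajectory first
        return None
    return Ta + Toffset          # Tf < Ta < Tg: collision expected
```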
According to the vehicle's driving angle and speed, the driving trajectory of the vehicle from the current time over a period (such as 2.5 seconds) is estimated; if an object lies on this trajectory, it is tracked as a target until it leaves the trajectory. If the object is static, the vehicle can make it leave the trajectory through a steering operation; if the vehicle would reach the target on the trajectory within the period (such as 2.5 seconds), deceleration and braking operations are carried out.
The vehicle generates commands such as braking and steering according to the collision detection result, and sends the commands to the drive-by-wire (DBW) system to execute the corresponding operations.
With the method and device, the detected obstacle position information is highly accurate, the braking response is faster, and processing completes earlier than the autonomous-driving perception and planning pipeline, thereby improving the safety and reliability of the unmanned vehicle.
Furthermore, an embodiment of the present invention further provides a collision detection apparatus, and with reference to fig. 7, the collision detection apparatus includes: the system comprises a point cloud obtaining unit 10, a point cloud compensation unit 20, a point cloud labeling unit 30 and a collision judgment unit 40;
the point cloud obtaining unit 10 is configured to obtain first point cloud data and first time of a laser radar, and obtain first vehicle pose information corresponding to the first time;
the point cloud compensation unit 20 is configured to filter and/or perform motion compensation on the first point cloud data to obtain second point cloud data;
the point cloud marking unit 30 is configured to establish a target area, obtain the second point cloud data in the target area to obtain third point cloud data, and mark a target object on the third point cloud data;
and the collision judging unit 40 is used for judging whether the vehicle collides with the target object according to the target object and the first vehicle pose information, and if so, calculating the collision time.
With the method and device, the detected obstacle position information is highly accurate, the braking response is faster, and processing completes earlier than the autonomous-driving perception and planning pipeline, thereby improving the safety and reliability of the unmanned vehicle.
It should be noted that each unit in the apparatus may be configured to implement each step in the method, and achieve the corresponding technical effect, which is not described herein again.
Referring to fig. 8, fig. 8 is a schematic structural diagram of a vehicle in a hardware operating environment according to an embodiment of the present invention.
As shown in fig. 8, the vehicle may include: a processor 1001, such as a CPU, a communication bus 1002, a user interface 1003, a network interface 1004, and a memory 1005. Wherein a communication bus 1002 is used to enable connective communication between these components. The user interface 1003 may include a Display screen (Display), an input unit such as a Keyboard (Keyboard), and the optional user interface 1003 may also include a standard wired interface, a wireless interface. The network interface 1004 may optionally include standard wired interfaces, wireless interfaces (e.g., WI-FI, 4G, 5G interfaces). The memory 1005 may be a high-speed RAM memory or a non-volatile memory (e.g., a magnetic disk memory). The memory 1005 may alternatively be a storage device separate from the processor 1001.
Those skilled in the art will appreciate that the configuration shown in fig. 8 does not constitute a limitation of the vehicle and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
As shown in fig. 8, a memory 1005, which is a kind of computer storage medium, may include therein an operating system, a network communication module, a user interface module, and a collision detection program.
In the vehicle shown in fig. 8, the network interface 1004 is mainly used for data communication with an external network; the user interface 1003 is mainly used for receiving input instructions of a user; the vehicle invokes the collision detection program stored in the memory 1005 by the processor 1001 and performs the following operations:
s1: acquiring first point cloud data and first time of a laser radar, and acquiring first vehicle pose information corresponding to the first time;
s2: filtering and/or performing motion compensation on the first point cloud data to obtain second point cloud data;
s3: establishing a target area, acquiring the second point cloud data in the target area to obtain third point cloud data, and labeling a target object on the third point cloud data;
s4: judging whether a vehicle collides with the target object or not according to the target object and the first vehicle pose information; if a collision occurs, a collision time is calculated.
Optionally, the filtering the first point cloud data includes:
and carrying out noise point filtering and/or vehicle body point cloud filtering and/or voxel filtering on the first point cloud data.
Optionally, the motion compensation of the first point cloud data includes the following steps:
a1: initializing initial data for motion compensation, the initial data comprising: second vehicle pose information and a second time; constructing a first conversion matrix based on the second vehicle pose information;
a2: acquiring vehicle pose information of the current time to obtain third vehicle pose information; acquiring current time to obtain third time;
a3: acquiring points in the range from the second time to the third time, and performing matrix operation on the points one by using the first conversion matrix to obtain a motion-compensated point;
a4: and repeating the steps A2-A3, and processing all points of the first point cloud data to obtain the second point cloud data after motion compensation.
Optionally, the step S3 includes the following steps:
dividing the second point cloud data into regions according to horizontal angles to obtain a first region;
horizontally dividing the first area according to the line number of the laser radar to obtain a second area;
dividing the second area according to the cells in the designated range to obtain a third area;
judging whether the height values of the points of the second point cloud data and the ground in the third area are larger than a height threshold value one by one; and if so, marking the point as the target object and allocating a unique identifier to each target object.
Optionally, the step S4 includes the following steps:
constructing a second conversion matrix according to the first vehicle pose information corresponding to the plurality of second point cloud data and the variation of the vehicle pose information at the current moment;
performing matrix operation on the second conversion matrix to convert a coordinate system to obtain a fixed coordinate system;
predicting the motion track of the vehicle in a first time length range in the fixed coordinate system to obtain a first motion track; predicting the motion track of the moving object in the first time length range in the fixed coordinate system to obtain a second motion track;
judging whether the first motion track and the second motion track are crossed or not; if the intersection exists, acquiring a second time length for the vehicle to move to the intersection and acquiring a third time length for the moving object to move to the intersection;
calculating a fourth time length for the moving object to leave the first track according to the vehicle width of the vehicle, the length of the moving object and the angle for the moving object to enter the second motion track;
if the second duration is longer than the third duration and shorter than the fourth duration, the vehicle collides with the moving object, and the collision time is the sum of the second duration and the time difference between the first time the moving object enters the first motion track and the intersection of the moving object and the vehicle in the first track.
Optionally, the method further comprises the following steps:
whether the second point cloud data exists in the third area or not, and if the second point cloud data exists, identifying the third area as an effective area; identifying the third region as an invalid region if the second point cloud data does not exist;
converting the plurality of second point cloud data in the third area marked as the effective area into the same coordinate system, matching target objects segmented by each second point cloud data, and marking the target objects as static objects or moving objects;
and tracking the position change of the moving object according to time, and calculating the motion direction and the angle of the moving object.
Optionally, the method further comprises the following steps:
determining whether the static object and/or the dynamic object is a reflected object; if so, marking the static object or the dynamic object as a reflected object;
judging whether the reflected object is in an indoor or traffic jam section: and if so, deleting the static object or the moving object corresponding to the reflected object.
With the method and device, the detected obstacle position information is highly accurate, the braking response is faster, and processing completes earlier than the autonomous-driving perception and planning pipeline, thereby improving the safety and reliability of the unmanned vehicle.
Furthermore, an embodiment of the present invention further provides a computer-readable storage medium, where a collision detection program is stored on the computer-readable storage medium, and when executed by a processor, the collision detection program implements the following operations:
s1: acquiring first point cloud data and first time of a laser radar, and acquiring first vehicle pose information corresponding to the first time;
s2: filtering and/or performing motion compensation on the first point cloud data to obtain second point cloud data;
s3: establishing a target area, acquiring the second point cloud data in the target area to obtain third point cloud data, and labeling a target object on the third point cloud data;
s4: judging whether a vehicle collides with the target object or not according to the target object and the first vehicle pose information; if a collision occurs, a collision time is calculated.
Optionally, the filtering the first point cloud data includes:
and carrying out noise point filtering and/or vehicle body point cloud filtering and/or voxel filtering on the first point cloud data.
Optionally, the motion compensating the first point cloud data includes:
a1: initializing initial data of motion compensation, the initial data comprising: second vehicle pose information and a second time; constructing a first conversion matrix based on the second vehicle pose information;
a2: acquiring vehicle pose information of the current time to obtain third vehicle pose information; acquiring current time to obtain third time;
a3: acquiring points in the range from the second time to the third time, and performing matrix operation on the points one by using the first conversion matrix to obtain a motion-compensated point;
a4: and repeating the steps A2-A3, and processing all points of the first point cloud data to obtain the second point cloud data after motion compensation.
Optionally, the step S3 includes the following steps:
dividing the second point cloud data into regions according to horizontal angles to obtain a first region;
horizontally dividing the first area according to the line number of the laser radar to obtain a second area;
dividing the second area according to the cells in the designated range to obtain a third area;
judging whether the height values of the points of the second point cloud data and the ground in the third area are larger than a height threshold value one by one; and if so, marking the point as the target object and allocating a unique identifier to each target object.
Optionally, the step S4 includes the following steps:
constructing a second conversion matrix according to the first vehicle pose information corresponding to the plurality of second point cloud data and the variation of the vehicle pose information at the current moment;
performing matrix operation on the second conversion matrix to convert a coordinate system to obtain a fixed coordinate system;
predicting the motion track of the vehicle in a first time length range in the fixed coordinate system to obtain a first motion track; predicting the motion track of the moving object in the first time length range in the fixed coordinate system to obtain a second motion track;
judging whether the first motion track and the second motion track are crossed or not; if the intersection exists, acquiring a second time length for the vehicle to move to the intersection and acquiring a third time length for the moving object to move to the intersection;
calculating a fourth time length for the moving object to leave the first track according to the vehicle width of the vehicle, the length of the moving object and the angle for the moving object to enter the second motion track;
if the second duration is longer than the third duration and shorter than the fourth duration, the vehicle collides with the moving object, and the collision time is the sum of the second duration and the time difference between the first time the moving object enters the first motion track and the intersection of the moving object and the vehicle in the first track.
Optionally, the method further comprises the following steps:
whether the second point cloud data exists in the third area or not, and if the second point cloud data exists, identifying the third area as an effective area; identifying the third region as an invalid region if the second point cloud data does not exist;
converting the plurality of second point cloud data in the third area marked as the effective area into the same coordinate system, matching target objects segmented by each second point cloud data, and marking the target objects as static objects or moving objects;
and tracking the position change of the moving object according to time, and calculating the motion direction and the angle of the moving object.
Optionally, the method further comprises the following steps:
determining whether the static object and/or the dynamic object is a reflected object; if so, marking the static object or the dynamic object as a reflected object;
judging whether the reflected object is in an indoor or traffic jam section: and if so, deleting the static object or the moving object corresponding to the reflected object.
With the method and device, the detected obstacle position information is highly accurate, the braking response is faster, and processing completes earlier than the autonomous-driving perception and planning pipeline, thereby improving the safety and reliability of the unmanned vehicle.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or system in which the element is included.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) as described above and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, a controller, or a network device) to execute the method according to the embodiments of the present invention.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (10)

1. A collision detection method, characterized in that the method comprises the steps of:
s1: acquiring first point cloud data and first time of a laser radar, and acquiring first vehicle pose information corresponding to the first time;
s2: filtering and/or performing motion compensation on the first point cloud data to obtain second point cloud data;
s3: establishing a target area, acquiring the second point cloud data in the target area to obtain third point cloud data, and labeling a target object on the third point cloud data;
s4: judging whether a vehicle collides with the target object or not according to the target object and the first vehicle pose information; if a collision occurs, a collision time is calculated.
2. The method of claim 1, wherein the filtering the first point cloud data comprises:
and carrying out noise point filtering and/or vehicle body point cloud filtering and/or voxel filtering on the first point cloud data.
3. The method according to claim 1, wherein the motion compensating the first point cloud data comprises:
a1: initializing initial data for motion compensation, the initial data comprising: second vehicle pose information and a second time; constructing a first conversion matrix based on the second vehicle pose information;
a2: acquiring vehicle pose information of the current time to obtain third vehicle pose information; acquiring current time to obtain third time;
a3: acquiring points in the range from the second time to the third time, and performing matrix operation on the points one by using the first conversion matrix to obtain a motion-compensated point;
a4: and repeating the steps A2-A3, and processing all points of the first point cloud data to obtain the second point cloud data after motion compensation.
4. The method according to claim 1, wherein the step S3 includes the steps of:
dividing the second point cloud data into regions according to horizontal angles to obtain a first region;
horizontally dividing the first area according to the line number of the laser radar to obtain a second area;
dividing the second area according to the cells in the designated range to obtain a third area;
judging whether the height values of the points of the second point cloud data and the ground in the third area are larger than a height threshold value one by one; and if so, marking the point as the target object and allocating a unique identifier to each target object.
5. The method according to claim 1, wherein the step S4 includes the steps of:
constructing a second conversion matrix according to the first vehicle pose information corresponding to the plurality of second point cloud data and the variation of the vehicle pose information at the current moment;
performing matrix operation on the second conversion matrix to convert a coordinate system to obtain a fixed coordinate system;
predicting the motion track of the vehicle in a first time length range in the fixed coordinate system to obtain a first motion track; predicting the motion trail of the moving object in the first time length range in the fixed coordinate system to obtain a second motion trail;
judging whether the first motion track and the second motion track are crossed or not; if the intersection exists, acquiring a second time length for the vehicle to move to the intersection and acquiring a third time length for the moving object to move to the intersection;
calculating a fourth time length for the moving object to leave the first track according to the vehicle width of the vehicle, the length of the moving object and the angle for the moving object to enter the second motion track;
if the second duration is longer than the third duration and shorter than the fourth duration, the vehicle collides with the moving object, and the collision time is the sum of the second duration and the time difference between the first time the moving object enters the first motion track and the intersection of the moving object and the vehicle in the first track.
6. The method of claim 4, further comprising the steps of:
whether the second point cloud data exists in the third area or not, and if the second point cloud data exists, identifying the third area as an effective area; identifying the third region as an invalid region if the second point cloud data does not exist;
converting the plurality of second point cloud data in the third area marked as the effective area into the same coordinate system, matching target objects segmented by each second point cloud data, and marking the target objects as static objects or moving objects;
and tracking the position change of the moving object according to time, and calculating the motion direction and the angle of the moving object.
7. The method of claim 6, further comprising the steps of:
determining whether the static object and/or the dynamic object is a reflected object; if so, marking the static object or the dynamic object as a reflected object;
judging whether the reflected object is in an indoor or traffic jam section: and if so, deleting the static object or the moving object corresponding to the reflected object.
8. A collision detecting device, characterized by comprising:
the point cloud obtaining unit is used for obtaining first point cloud data and first time of the laser radar and obtaining first vehicle pose information corresponding to the first time;
the point cloud compensation unit is used for filtering and/or performing motion compensation on the first point cloud data to obtain second point cloud data;
the point cloud marking unit is used for establishing a target area, acquiring the second point cloud data in the target area to obtain third point cloud data, and marking a target object on the third point cloud data;
and the collision judging unit is used for judging whether the vehicle collides with the target object according to the target object and the first vehicle pose information, and if so, calculating the collision time.
9. A vehicle, characterized in that it comprises: memory, a processor and a collision detection program stored on the memory and executable on the processor, the collision detection program being configured to implement the steps of the collision detection method according to any of claims 1 to 7.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the collision detection method according to any one of claims 1 to 7.
CN202210630726.8A 2022-02-28 2022-06-06 Collision detection method and device, vehicle and storage medium Pending CN115056771A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210188308 2022-02-28
CN2022101883088 2022-02-28

Publications (1)

Publication Number Publication Date
CN115056771A true CN115056771A (en) 2022-09-16

Family

ID=83201352

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210630726.8A Pending CN115056771A (en) 2022-02-28 2022-06-06 Collision detection method and device, vehicle and storage medium

Country Status (1)

Country Link
CN (1) CN115056771A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115308771A (en) * 2022-10-12 2022-11-08 深圳市速腾聚创科技有限公司 Obstacle detection method and apparatus, medium, and electronic device
CN115308771B (en) * 2022-10-12 2023-03-14 深圳市速腾聚创科技有限公司 Obstacle detection method and apparatus, medium, and electronic device

Similar Documents

Publication Publication Date Title
US20200331476A1 (en) Automatic lane change with minimum gap distance
US10068485B2 (en) Platooning autonomous vehicle navigation sensory exchange
CN111976720B (en) Autonomous passenger-replacing parking method, device, equipment and storage medium
CN111665852B (en) Obstacle avoiding method and device, vehicle and storage medium
US20200307589A1 (en) Automatic lane merge with tunable merge behaviors
CN111976718B (en) Automatic parking control method and system
CN112339748B (en) Method and device for correcting vehicle pose information through environment scanning in automatic parking
CN109484399B (en) Vehicle driving auxiliary device and method
CN110588273B (en) Parking assistance method, system, device and storage medium based on road surface detection
CN115056771A (en) Collision detection method and device, vehicle and storage medium
CN114475593B (en) Travel track prediction method, vehicle, and computer-readable storage medium
CN109887321B (en) Unmanned vehicle lane change safety judgment method and device and storage medium
CN113771841A (en) Driving assistance system, method, computer device and storage medium for a fleet of vehicles
CN112249007A (en) Vehicle danger alarm method and related equipment
CN114419930B (en) Intelligent information processing system of automobile internet of things
CN116609777A (en) Multi-scan sensor fusion for object tracking
Oniga et al. A fast ransac based approach for computing the orientation of obstacles in traffic scenes
CN113479191B (en) Lane-line-free lane boundary detection system and method for parking and vehicle
CN111881245B (en) Method, device, equipment and storage medium for generating visibility dynamic map
US20240123973A1 (en) Apparatus and method for automatic parking based on parking area environment recognition
CN113065517A (en) Safety obstacle avoidance method, vehicle and computer storage medium
CN115092175A (en) Method and device for detecting collision based on object state and storage medium
CN114325756A (en) Short-distance obstacle avoidance method and device based on laser radar, vehicle and storage medium
CN117302191A (en) Parking path dynamic planning method, electronic equipment and storage medium
KR20230163790A (en) METHOD AND SYSTEM FOR SHARING PARKING INFORMATION USING IoT SERVICE

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination