CN113985405A - Obstacle detection method and obstacle detection equipment applied to vehicle - Google Patents

Obstacle detection method and obstacle detection equipment applied to vehicle

Info

Publication number
CN113985405A
Authority
CN
China
Prior art keywords
point cloud
grid
cloud data
target
obstacle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111089391.5A
Other languages
Chinese (zh)
Inventor
薛高茹
刘诗萌
刘嵩
郭志伟
秦屹
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Whst Co Ltd
Original Assignee
Whst Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Whst Co Ltd filed Critical Whst Co Ltd
Priority to CN202111089391.5A priority Critical patent/CN113985405A/en
Publication of CN113985405A publication Critical patent/CN113985405A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01S - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00 - Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/88 - Radar or analogous systems specially adapted for specific applications
    • G01S13/89 - Radar or analogous systems specially adapted for specific applications for mapping or imaging
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01S - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00 - Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/86 - Combinations of radar systems with non-radar systems, e.g. sonar, direction finder
    • G01S13/867 - Combination of radar systems with cameras
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01S - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00 - Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/88 - Radar or analogous systems specially adapted for specific applications
    • G01S13/93 - Radar or analogous systems specially adapted for specific applications for anti-collision purposes
    • G01S13/931 - Radar or analogous systems specially adapted for specific applications for anti-collision purposes of land vehicles

Abstract

The invention provides an obstacle detection method and obstacle detection equipment applied to a vehicle. The method comprises the following steps: determining target point cloud data according to point cloud data acquired by a radar sensor and image information acquired by an image sensor; respectively carrying out obstacle identification on the target point cloud data and the image information to obtain first obstacle information corresponding to the target point cloud data and second obstacle information corresponding to the image information; and performing obstacle fusion on the first obstacle information and the second obstacle information to determine a target obstacle. The invention can improve the detection precision of the target obstacle.

Description

Obstacle detection method and obstacle detection equipment applied to vehicle
Technical Field
The invention relates to the technical field of automatic driving, in particular to an obstacle detection method and obstacle detection equipment applied to a vehicle.
Background
Autonomous vehicles can provide greater safety, productivity, and traffic efficiency, and will play an important role in future urban traffic systems. In most automatic driving or assisted driving scenarios, perception of the surrounding environment is a vital task. A single sensor has different disadvantages in environment perception, so multi-sensor fusion has become a necessary means for improving the performance of a perception system.
At present, a multi-sensor fusion method is generally adopted for obstacle detection, namely, a data-level fusion obstacle detection method. The obstacle detection method of data level fusion is to transmit all raw data to a processor for data processing so as to determine obstacles.
However, the obstacle detection method using the data-level fusion has a problem of low obstacle detection accuracy.
Disclosure of Invention
The embodiment of the invention provides an obstacle detection method and obstacle detection equipment applied to a vehicle, and aims to solve the problem of low obstacle detection precision in the detection method in the prior art.
In a first aspect, an embodiment of the present invention provides an obstacle detection method, including:
determining target point cloud data according to the point cloud data acquired by the radar sensor and image information acquired by the image sensor;
respectively carrying out obstacle identification on the target point cloud data and the image information to obtain first obstacle information corresponding to the target point cloud data and second obstacle information corresponding to the image information;
and performing obstacle fusion on the first obstacle information and the second obstacle information to determine a target obstacle.
In a second aspect, an embodiment of the present invention provides an obstacle detection apparatus applied to a vehicle, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of the method according to the first aspect or any one of the possible implementation manners of the first aspect when executing the computer program.
In a third aspect, an embodiment of the present invention provides an obstacle detection apparatus, including:
the point cloud data determining module is used for determining target point cloud data according to the point cloud data acquired by the radar sensor and the image information acquired by the image sensor;
the obstacle information determination module is used for respectively carrying out obstacle identification on the target point cloud data and the image information to obtain first obstacle information corresponding to the target point cloud data and second obstacle information corresponding to the image information;
and the target obstacle determining module is used for performing obstacle fusion on the first obstacle information and the second obstacle information to determine a target obstacle.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium, which stores a computer program, and when the computer program is executed by a processor, the computer program implements the steps of the method according to the first aspect or any one of the possible implementation manners of the first aspect.
The embodiment of the invention provides an obstacle detection method and obstacle detection equipment applied to a vehicle. Point cloud data are acquired through a radar sensor and a target image is acquired through an image sensor; the target image and the point cloud data are fused at the data level to determine target point cloud data. Obstacle identification is then carried out on the target point cloud data to obtain first obstacle information, and obstacle identification is simultaneously carried out on the target image to obtain second obstacle information. The first obstacle information and the second obstacle information are then fused at the target level to jointly determine a target obstacle. Because the target image acquired by the image sensor not only assists the radar sensor in obstacle identification at the target level, but also assists the radar sensor at the data level in judging the point cloud data to determine the target point cloud data on which obstacle identification is based, the obtained detection result is more accurate and the detection precision of the target obstacle is improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the embodiments or the prior art descriptions will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without inventive exercise.
FIG. 1 is a centralized fusion architecture diagram provided by an embodiment of the present invention;
FIG. 2 is a block diagram of a distributed fusion architecture provided by an embodiment of the present invention;
FIG. 3 is a hybrid fusion architecture diagram provided by an embodiment of the present invention;
fig. 4 is a flowchart of an implementation of a method for detecting an obstacle according to an embodiment of the present invention;
FIG. 5 is a diagram of an improved hybrid fusion architecture provided by an embodiment of the present invention;
FIG. 6 is a schematic diagram of a positional relationship between a radar, a vehicle and a camera according to an embodiment of the present invention;
fig. 7 is a flowchart of an implementation of a method for detecting an obstacle according to another embodiment of the present invention;
FIG. 8 is a schematic diagram of the geometric relationship between data points and the vehicle coordinate system and the sensor coordinate system provided by the embodiment of the present invention;
FIG. 9 is a schematic view of a vehicle turning geometry provided by an embodiment of the present invention;
FIG. 10 is a schematic diagram of data time synchronization provided by an embodiment of the present invention;
FIG. 11 is a schematic diagram of a forward radar coordinate system provided by an embodiment of the present invention;
FIG. 12 is a schematic diagram of the positional relationship of the image coordinate system, the camera coordinate system and the vehicle coordinate system provided by the embodiment of the invention;
FIG. 13 is a schematic diagram of a position relationship between an image coordinate system and a pixel coordinate system according to an embodiment of the present invention;
FIG. 14 is a flow chart of an implementation of obstacle fusion provided by an embodiment of the present invention;
fig. 15 is a schematic structural diagram of an obstacle detection device according to an embodiment of the present invention;
fig. 16 is a schematic structural diagram of an obstacle detection device applied to a vehicle according to an embodiment of the present invention.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present invention with unnecessary detail.
In order to make the objects, technical solutions and advantages of the present invention more apparent, the following description is made by way of specific embodiments with reference to the accompanying drawings.
Autonomous vehicles can provide greater safety, productivity, and traffic efficiency, and will play an important role in future urban traffic systems. Ambient perception is a crucial task in most autonomous or assisted driving scenarios. Single sensors (such as laser radar, millimeter wave radar, cameras and ultrasonic sensors) have different disadvantages in environment perception, and therefore multi-sensor fusion has become a necessary means for improving the environment perception effect.
The fusion mode of the multiple sensors can be divided according to the data processing degree of the local sensor in the fusion of the multiple sensors, and the fusion mode is mainly divided into a centralized mode, a distributed mode and a mixed mode. The three fusion modes are described with reference to fig. 1 to 3, and specifically as follows:
Fig. 1 is a centralized fusion structure diagram. As can be seen from fig. 1, the centralized approach sends all sensor information to the domain controller, performs data association, measurement fusion and target tracking in sequence, finally obtains the position and state information of the target, and then performs decision making. The advantage of the centralized approach is high data-processing precision; its disadvantage is that the large amount of data easily causes an excessive communication load and places high demands on the processing performance of the controller. Fig. 2 is a distributed fusion structure diagram. As can be seen from fig. 2, in the distributed mode the target observations of each sensor are locally subjected to target detection and tracking, and the results are then sent to the domain controller to obtain the local track information for multi-target tracking. The distributed approach has the advantages of low communication bandwidth requirements and high computation speed, but its tracking accuracy falls short of the centralized approach. Fig. 3 is a hybrid fusion structure diagram. As can be seen from fig. 3, the hybrid structure is formed according to the different requirements on sensor data; it combines the advantages of the centralized and distributed structures and makes up for their respective disadvantages.
However, in the hybrid fusion structure shown in fig. 3, the fusion of different types of sensors, such as the fusion of an image sensor (camera) and a radar sensor (angle radar, forward radar), is mainly focused on the target level, and the detection accuracy of the obstacle is low, where 5R1V refers to a sensor configuration of 5 millimeter wave radars and 1 forward-looking multifunctional camera. The embodiment of the invention is based on the hybrid fusion structure shown in fig. 3, and provides an improved hybrid fusion structure and an obstacle detection method for further improving the obstacle detection precision.
Referring to fig. 4 and 5, fig. 4 is a flowchart of an implementation of an obstacle detection method according to an embodiment of the present invention, which is suitable for the improved hybrid fusion structure shown in fig. 5, in which the sensors for obstacle detection in the improved hybrid fusion structure shown in fig. 5 include a radar sensor and an image sensor, where the radar sensor includes a forward radar sensor and a lateral radar sensor, and the number of the lateral radar sensors may be multiple, such as 2, 4 or more. Preferably, the number of the lateral radar sensors is 4, and in this case, the hybrid fusion structure shown in fig. 5 may be referred to as a 5R1V fusion structure, where "5R" refers to 5 radar sensors, i.e., 1 forward radar sensor and 4 lateral radar sensors, and "1V" refers to 1 image sensor, and the structure can obtain more accurate and reliable obstacle detection results mainly for a high-speed driving scene facing the L3 level. In some embodiments, the radar sensor may be a millimeter wave radar sensor, and the image sensor is a camera.
The following will describe a specific flow of the obstacle detection method according to each embodiment of the present invention by taking the hybrid fusion structure shown in fig. 5 as an example. Which comprises the following steps:
step S101: determining target point cloud data according to the point cloud data acquired by the radar sensor and image information acquired by the image sensor;
step S102: respectively carrying out obstacle identification on the target point cloud data and the image information to obtain first obstacle information corresponding to the target point cloud data and second obstacle information corresponding to the image information;
step S103: and performing obstacle fusion on the first obstacle information and the second obstacle information to determine a target obstacle.
Specifically, fig. 6 shows the positional relationship of the radar sensors, the camera and the vehicle: 1 forward radar sensor and 1 camera are installed at the front end of the vehicle, and 4 high-resolution lateral radar sensors (angle radars) are installed at four lateral positions of the vehicle. With reference to fig. 6, the specific process of the invention is as follows. During vehicle driving, the forward radar sensor collects point cloud data corresponding to its detection area, the 4 lateral radar sensors each collect point cloud data corresponding to their detection areas, and the camera collects image information within its shooting range. Target point cloud data are obtained from the point cloud data collected by the radar sensors and the image information collected by the camera. Obstacle recognition is then carried out on the target point cloud data to determine first obstacle information, and obstacle recognition is carried out on the image information to determine second obstacle information. Finally, obstacle fusion is performed on the first obstacle information and the second obstacle information to obtain the target obstacle. Further, the steps of determining the first obstacle information and determining the second obstacle information are not restricted to any order and may be performed simultaneously.
Compared with the prior art, the obstacle detection method provided by the embodiment of the invention obtains point cloud data through the radar sensor, collects a target image through the image sensor, and fuses the target image and the point cloud data at the data level to determine target point cloud data. Obstacle identification is carried out on the target point cloud data to obtain one piece of obstacle information (namely, the first obstacle information), and obstacle identification is simultaneously carried out on the target image to obtain another piece of obstacle information (namely, the second obstacle information). The first obstacle information and the second obstacle information are then fused at the target level to jointly determine the target obstacle. Because the target image collected by the image sensor not only assists the radar sensor in identifying obstacles at the target level, but also assists the radar sensor at the data level in judging the point cloud data to determine the target point cloud data on which obstacle identification is based, the obtained detection result is more accurate and the detection precision of the target obstacle is improved.
In an embodiment, the step S101 may include the following steps:
step S201: carrying out synchronous processing on the point cloud data and the image information to obtain synchronized point cloud data and synchronized image information;
wherein the synchronization process includes a time synchronization process and a space synchronization process.
Step S202: rasterizing the synchronized point cloud data to obtain a first raster image, and rasterizing the synchronized image information to obtain a second raster image;
step S203: fusing the first grid image and the first grid image according to a preset fusion method to obtain a fused grid image;
step S204: and performing attribute correction on the point cloud data based on the fusion raster image to obtain target point cloud data.
Specifically, a specific implementation flow of the obstacle detection method is described below by taking the 5R1V hybrid fusion structure shown in fig. 5 as an example, in which the radar sensors are millimeter wave radar sensors and include a forward radar sensor and lateral radar sensors.
When obstacle detection is performed by a plurality of sensors, it is necessary that data of the plurality of sensors satisfy requirements of time synchronization and space synchronization, and therefore, before processing data of the sensors, it is necessary to perform synchronization processing on data of each sensor.
Fig. 10 is a schematic diagram of time synchronization of the data collected by the radar sensors and the image sensor according to an embodiment of the present invention. In the invention, unified GPS time service is applied to the radar sensors and the image sensor, and after the unified time service the radar sensors and the image sensor are time-synchronized by Lagrange interpolation. As can be seen from fig. 10, the data acquired by each sensor carries a GPS timestamp: the GPS timestamp of a radar sensor can be regarded as the time at which the domain controller acquires the point cloud data reported by that radar sensor in the current reporting period, and the GPS timestamp of the image sensor can be regarded as the time at which the domain controller acquires the image information reported by the image sensor in the current reporting period. After each sensor has a corresponding GPS timestamp, time synchronization is performed on each sensor by Lagrange interpolation. The process of time synchronizing the sensors is common knowledge and will not be described in detail herein.
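To make the time-alignment step concrete, the following sketch interpolates a radar measurement to a camera GPS timestamp using Lagrange interpolation; the three-point form, the data layout and the numeric values are illustrative assumptions rather than the exact implementation of the embodiment.

```python
# Hedged sketch: align radar samples to a camera GPS timestamp with
# Lagrange interpolation. The 3-point form and the data layout are
# illustrative assumptions, not the exact implementation of the patent.

def lagrange_interpolate(timestamps, values, t_query):
    """Evaluate the Lagrange interpolating polynomial through
    (timestamps[k], values[k]) at time t_query."""
    result = 0.0
    n = len(timestamps)
    for j in range(n):
        basis = 1.0
        for k in range(n):
            if k != j:
                basis *= (t_query - timestamps[k]) / (timestamps[j] - timestamps[k])
        result += values[j] * basis
    return result

# Example: radar range measurements stamped with GPS time, resampled to the
# camera frame time so that both sensors describe the same instant.
radar_t = [0.00, 0.05, 0.10]          # GPS timestamps of radar reports (s)
radar_range = [25.0, 24.6, 24.2]      # measured range of one detection (m)
camera_t = 0.07                        # GPS timestamp of the camera frame (s)

range_at_camera_time = lagrange_interpolate(radar_t, radar_range, camera_t)
print(round(range_at_camera_time, 3))  # interpolated range at the camera time
```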
After the time synchronization, the data of each sensor after the time synchronization is further subjected to spatial synchronization, where the spatial synchronization mainly maps the data collected by each sensor into a unified coordinate system, and in this embodiment, the unified coordinate system is a coordinate system centered on the rear axle of the vehicle (hereinafter, collectively described as a vehicle coordinate system). The process of spatial synchronization for different sensors is as follows:
(1) and for the forward millimeter wave radar, converting point cloud data corresponding to the forward millimeter wave radar into a vehicle coordinate system based on the conversion relation between the coordinate system of the forward millimeter wave radar and the vehicle coordinate system to obtain synchronous data corresponding to the forward millimeter wave radar.
The forward millimeter wave radar coordinate system XRYRZR-OR is shown in fig. 11. The mounting position of the forward millimeter wave radar is taken as the coordinate origin OR, the directions of the three coordinate axes are the same as those of the vehicle coordinate system, and the detection direction of the forward millimeter wave radar is the X-axis direction; XRORYR is the detection plane of the forward millimeter wave radar and YRORZR is the mounting plane. The target data output by the forward millimeter wave radar include distance, speed, relative angle and the like, and are two-dimensional information in the XRORYR plane of the forward millimeter wave radar coordinate system. The YRORZR plane is parallel to the YWOWZW plane at a distance X0, and the XRORYR plane is parallel to the XWOWYW plane at a distance H. For a forward millimeter wave radar target P(R, α), the conversion relation between the forward millimeter wave radar coordinate system and the vehicle coordinate system is:

XW = X0 + R·cosα
YW = R·sinα
ZW = H

where R represents the target distance and α represents the azimuth angle.
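As a small numerical illustration of this conversion, the sketch below maps a forward-radar detection into the vehicle coordinate system, assuming the relation reconstructed above from the stated geometry; the offset and height values are purely illustrative.

```python
import math

# Hedged sketch: map a forward-radar detection P(R, alpha) into the vehicle
# coordinate system, assuming the relation reconstructed above from the
# stated geometry (radar offset X0 along X, mounting height H).

def forward_radar_to_vehicle(r, alpha_deg, x0, h):
    alpha = math.radians(alpha_deg)
    x_w = x0 + r * math.cos(alpha)   # longitudinal position in the vehicle frame
    y_w = r * math.sin(alpha)        # lateral position in the vehicle frame
    z_w = h                          # radar reports 2-D data in its XY plane
    return x_w, y_w, z_w

# Example values (illustrative only): radar 3.6 m ahead of the rear axle,
# mounted 0.5 m high, target at 40 m range and 5 degrees azimuth.
print(forward_radar_to_vehicle(r=40.0, alpha_deg=5.0, x0=3.6, h=0.5))
```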
(2) And for the lateral millimeter wave radar, converting the point cloud data corresponding to the lateral millimeter wave radar into a vehicle coordinate system based on the conversion relation between the lateral millimeter wave radar coordinate system and the vehicle coordinate system to obtain synchronous data corresponding to the lateral millimeter wave radar.
The conversion relation between the lateral millimeter wave radar coordinate systems XRiYRiZRi-ORi (i = 1, 2, 3, 4) and the vehicle coordinate system is:

XW = Xi + Ri·cos(φPi + φi)·cos(θPi + ωi)
YW = Yi + Ri·cos(φPi + φi)·sin(θPi + ωi)
ZW = Zi + Ri·sin(φPi + φi)

where (Xi, Yi, Zi) is the mounting position of the ith lateral millimeter wave radar in the vehicle coordinate system, Ri is the distance of the target detected by the ith lateral millimeter wave radar, θPi and φPi are the azimuth angle and pitch angle of the target detected by the ith lateral millimeter wave radar, and ωi and φi are the installation azimuth and pitch angles of the lateral millimeter wave radar.
(3) For the image sensor, taking a camera as an example, when performing spatial synchronization, the image information after time synchronization is converted into a vehicle coordinate system based on a conversion relation between an image coordinate system and a pixel coordinate system, a conversion relation between a camera coordinate system and an image coordinate system, a conversion relation between a world coordinate system and a camera coordinate system, and a conversion relation between the world coordinate system and the pixel coordinate system, so as to obtain the image information after spatial synchronization.
The specific implementation process of performing spatial synchronization on the image information may refer to fig. 12 and 13, where fig. 12 is a schematic diagram of a position relationship between an image coordinate system, a camera coordinate system, and a vehicle coordinate system, fig. 13 is a schematic diagram of a position relationship between an image coordinate system and a pixel coordinate system, and based on a linear camera model, coordinates of each point in an image are determined through a conversion relationship between a camera coordinate system and the image coordinate system, and then coordinates corresponding to the coordinates of each point in the image projected into a world coordinate system are obtained through the conversion relationship between the image coordinate system and the pixel coordinate system and the conversion relationship between the pixel coordinate system and the world coordinate system in sequence, where the world coordinate system is the vehicle coordinate system. The conversion of the vehicle coordinate system and the camera coordinate system is completed through the conversion process so as to realize the three-dimensional reconstruction of the point coordinates in the plane image.
Image coordinate system xoy: the coordinate system of the imaging plane onto which the camera projects objects in the three-dimensional real environment by perspective projection. The intersection point of the optical axis and the imaging plane is defined as the coordinate origin O, and the imaging plane is the coordinate plane. The image information stored by a computer is based on the pixel coordinate system uO0v, whose origin is defined at the top-left vertex of the image, as shown in fig. 13. If the origin O of the image coordinate system is located at the pixel point (u0, v0) in the pixel coordinate system, the conversion relation between the image coordinate system and the pixel coordinate system is:

u = x/dx + u0
v = y/dy + v0

where dx and dy respectively represent the physical size of each pixel in the x and y directions of the image coordinate system.
Camera coordinate system XcYcZc-Oc: the coordinate system established with the center of the camera optical lens as the origin Oc and the optical axis of the camera as the Zc axis. Its coordinate axes are parallel to the image coordinate axes, and the conversion relation between the camera coordinate system and the image coordinate system (f is the focal length of the camera) is:

x = f·Xc/Zc
y = f·Yc/Zc
world coordinate system (X)WYWZW): as a reference coordinate system, it is used to describe the installation positions of the radar and the camera (i.e., the camera in the present application), and the positions of other objects in space. The conversion relation between the world coordinate system and the camera coordinate system is as follows:
Figure BDA0003266757020000093
the rotation matrix R is a 3 × 3 unit orthogonal matrix, and represents a rotation relationship of the camera coordinate system with respect to the world coordinate system. Translation vector TcA vector suitable for describing the translation relationship of the camera coordinate system with respect to the world coordinate system.
Finally, the conversion relation between the world coordinate system and the pixel coordinate system is obtained as:

Zc·[u, v, 1]^T = M1·M2·[XW, YW, ZW, 1]^T

where M1 is the camera intrinsic parameter matrix and M2 is the camera extrinsic parameter matrix.
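To make the chain of conversions concrete, the sketch below projects a point given in the vehicle (world) coordinate system into pixel coordinates through an intrinsic matrix M1 and an extrinsic matrix M2; all numeric parameters are illustrative, not calibrated values of any real camera.

```python
import numpy as np

# Hedged sketch of the pinhole projection chain described above:
# pixel = (1/Zc) * M1 * M2 * [Xw, Yw, Zw, 1]^T.
# All numeric parameters below are illustrative, not calibrated values.

f = 0.006            # focal length (m)
dx = dy = 4.8e-6     # physical pixel size (m)
u0, v0 = 640, 360    # pixel coordinates of the image-plane origin

# Intrinsic matrix M1 (3x3): camera frame -> pixel coordinates
M1 = np.array([[f / dx, 0.0,    u0],
               [0.0,    f / dy, v0],
               [0.0,    0.0,    1.0]])

# Extrinsic matrix M2 (3x4): world (vehicle) frame -> camera frame, M2 = [R | Tc]
R = np.eye(3)                       # rotation of the camera frame w.r.t. the vehicle frame
Tc = np.array([0.0, 0.0, 1.2])      # translation (e.g. camera 1.2 m above the origin)
M2 = np.hstack([R, Tc.reshape(3, 1)])

def vehicle_point_to_pixel(p_world):
    """Project a 3-D point in the vehicle coordinate system to pixel (u, v)."""
    p_h = np.append(np.asarray(p_world, dtype=float), 1.0)   # homogeneous point
    p_cam = M2 @ p_h                                          # camera frame
    uvw = M1 @ p_cam
    return uvw[0] / uvw[2], uvw[1] / uvw[2]                   # divide by Zc

print(vehicle_point_to_pixel([2.0, 0.5, 10.0]))
```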
After the data of each sensor are time-synchronized and space-synchronized, the data of the sensors need to be fused at the data level. When data fusion is performed, the synchronized point cloud data are first rasterized to obtain a first raster image and the synchronized image information is rasterized to obtain a second raster image; the first raster image and the second raster image are then fused according to a preset fusion method to obtain a fusion raster image, and the point cloud data are subjected to attribute correction based on the fusion raster image to obtain the target point cloud data. This realizes the process of correcting the point cloud data by using the image information, namely steps S202 to S204.
Before rasterizing the point cloud data, dynamic and static separation needs to be carried out on the point cloud data to obtain dynamic point cloud data and static point cloud data. The dynamic and static separation of the point cloud data is mainly realized based on the vehicle projection speed and the point cloud Doppler speed. Specifically, the implementation of the dynamic and static separation comprises the following steps:
(11) acquiring the actual measurement Doppler velocity of each data point in the point cloud data;
(12) calculating a target Doppler velocity of each data point according to the current vehicle velocity;
(13) calculating the difference value between the actually measured Doppler velocity and the target Doppler velocity;
(14) when the absolute value of the difference is greater than a preset threshold, the data point is marked as a moving point, and when the absolute value of the difference is less than or equal to the preset threshold, the data point is marked as a static point, so that the point cloud data is divided into dynamic point cloud data corresponding to the moving point and static point cloud data corresponding to the static point.
In the implementation process, the actually measured doppler velocity is velocity information carried in the point cloud data, the target doppler velocity is related to the current vehicle velocity, and the characterization modes of the vehicle velocity are different according to different vehicle running states. The vehicle running state in the present embodiment includes: the linear driving state and the non-linear driving state are different in the current vehicle speed representation mode based on different driving states, so that the target Doppler speed is also different in the calculation mode. The following describes the processes of moving and static separation in different vehicle driving states, respectively.
(1) When the vehicle is in a straight-line driving state, the vehicle speed is projected onto the direction of the line connecting the point and the radar center, that is, the point cloud Doppler velocity can be expressed as Vdi = Vego·cosθi, where Vdi is the point cloud Doppler velocity deduced back from the vehicle speed, namely the target Doppler velocity, Vego is the vehicle running speed, and θi is the sum of the azimuth angle and the installation angle of the ith point cloud point. Taking the right front lateral radar sensor as an example, the installation angle α is the included angle between the line connecting the vehicle coordinate system origin E and the right front radar sensor coordinate system origin S and the Y axis of the vehicle coordinate system, and the azimuth angle γ is the angle through which the line connecting the ith data point and the right front radar sensor coordinate system origin S rotates along the minimum path to the normal-vector Y axis of the right front radar sensor coordinate system origin S; the azimuth angle γ is positive when the rotation is counterclockwise and negative when the rotation is clockwise. The difference from the target Doppler velocity is then calculated as:

ΔVi = Vmi - Vdi

where Vmi is the point cloud Doppler velocity actually measured by the radar. If the absolute value of the speed difference is larger than a preset threshold, the point is a moving point; otherwise it is a static point.
(2) When the vehicle is in a non-straight-line driving state, Vdi is determined by calculating the linear velocity of the vehicle, and the linear velocity formula of the vehicle is V = ω·R, where ω is the angular velocity (yawRate), R is the turning radius, and V is the linear velocity. Because the turning radii of the left and right wheels are different, the linear speeds of the left and right wheels also differ when turning, where β is the wheel steering angle. Engineering experiments show that when the vehicle speed is taken as the inner wheel speed during turning, the error between the measured Doppler value and the theoretical value of the target point is small and stable. In conjunction with fig. 9, the velocity components Vx and Vy of the vehicle in the vehicle coordinate system can be obtained from the turning geometry, so Vdi = Vy·cosθi + Vx·sinθi, where Vx is the linear velocity of the vehicle along the X axis of the vehicle coordinate system and Vy is the linear velocity along the Y axis of the vehicle coordinate system.
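The separation rule for both driving states can be sketched as follows; θi is taken as the sum of the azimuth and installation angles as described above, and the threshold value used here is an illustrative assumption, not a value given by the disclosure.

```python
import math

# Hedged sketch of the moving/static split described above. theta_i is the
# azimuth plus installation angle of the i-th point; the 0.4 m/s threshold
# is an illustrative assumption, not a value given by the disclosure.

def expected_doppler(theta_i_rad, v_ego=None, v_x=None, v_y=None, straight=True):
    """Target (back-calculated) Doppler velocity V_di for one point."""
    if straight:
        return v_ego * math.cos(theta_i_rad)              # V_di = V_ego*cos(theta_i)
    return v_y * math.cos(theta_i_rad) + v_x * math.sin(theta_i_rad)

def split_points(points, threshold=0.4, **vehicle_state):
    """points: iterable of (theta_i_rad, measured_doppler). Returns (moving, static)."""
    moving, static = [], []
    for theta_i, v_meas in points:
        v_di = expected_doppler(theta_i, **vehicle_state)
        if abs(v_meas - v_di) > threshold:
            moving.append((theta_i, v_meas))
        else:
            static.append((theta_i, v_meas))
    return moving, static

# Straight-line example: ego speed 20 m/s, two radar points.
pts = [(math.radians(10.0), 19.6),   # close to V_ego*cos(theta): likely static
       (math.radians(10.0), 5.0)]    # far from it: likely moving
print(split_points(pts, straight=True, v_ego=20.0))
```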
Furthermore, because the accuracy of the dynamic and static separation of the point cloud data directly affects the detection precision of the travelable area and the judgment of the attributes of the target obstacle, dynamic and static separation is carried out separately on the point cloud data obtained by each radar sensor, that is, on the point cloud data obtained by the forward millimeter wave radar and on the point cloud data obtained by each lateral millimeter wave radar, so as to obtain the dynamic and static point cloud data corresponding to the forward millimeter wave radar and the dynamic and static point cloud data corresponding to each lateral millimeter wave radar. In an embodiment, the step of performing dynamic and static separation on the point cloud data is performed before the synchronization processing, that is, during the synchronization processing the obtained dynamic point cloud data and static point cloud data of each radar sensor are synchronized.
In step S202, the synchronized point cloud data is rasterized to obtain a first raster image, and the synchronized image information is rasterized to obtain a second raster image.
Rasterizing the point cloud data acquired by each radar sensor may include the following steps: the detection area of the radar sensor is rasterized and the number of static points in each grid is counted; when the number of static points contained in a grid is larger than a first target threshold, the grid is an occupied grid and its attribute is marked as occupied; when the number of static points contained in the grid is smaller than or equal to the first target threshold, the grid is an invalid grid and its attribute is marked as invalid; a first grid map composed of the occupied grids and the invalid grids is thereby obtained. In this procedure, the process of marking the attribute of each grid is the process of rasterizing the point cloud data.
When there are a plurality of radar sensors, there are correspondingly a plurality of first grid maps. Further, taking the example in which the radar sensors include a forward radar sensor and lateral radar sensors, the first raster images include a forward raster image and lateral raster images, where the forward raster image is the first raster image corresponding to the point cloud data obtained by the forward radar sensor, and a lateral raster image is the first raster image corresponding to the point cloud data obtained by a lateral radar sensor. Further, when the number of lateral radar sensors is 4, the number of lateral grid maps is also 4. For the forward radar sensor, the detection area of the forward radar sensor is rasterized, the number of static points in each grid is counted, a grid is determined to be an occupied grid when the number of static points it contains is greater than the first target threshold and an invalid grid otherwise, and the forward grid map is determined from the occupied grids and the invalid grids. Similarly, based on the above operation steps, the grid map corresponding to each lateral radar sensor, namely the lateral grid map, may be determined; the detailed steps are not repeated.
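A compact sketch of this rasterization step is given below; the grid extent, cell size and first target threshold are illustrative assumptions.

```python
import numpy as np

# Hedged sketch: rasterize static radar points into an occupancy grid.
# Grid extent, cell size (0.5 m) and the first target threshold are
# illustrative assumptions, not values from the disclosure.

CELL = 0.5                 # grid cell size (m)
X_RANGE = (-10.0, 50.0)    # longitudinal extent in the vehicle frame (m)
Y_RANGE = (-30.0, 30.0)    # lateral extent in the vehicle frame (m)
FIRST_TARGET_THRESHOLD = 3 # minimum static points for an "occupied" cell

def rasterize_static_points(static_points_xy):
    """static_points_xy: array of shape (N, 2) with (x, y) in the vehicle frame.
    Returns a boolean grid: True = occupied, False = invalid."""
    nx = int((X_RANGE[1] - X_RANGE[0]) / CELL)
    ny = int((Y_RANGE[1] - Y_RANGE[0]) / CELL)
    counts = np.zeros((nx, ny), dtype=int)
    for x, y in static_points_xy:
        ix = int((x - X_RANGE[0]) / CELL)
        iy = int((y - Y_RANGE[0]) / CELL)
        if 0 <= ix < nx and 0 <= iy < ny:
            counts[ix, iy] += 1
    return counts > FIRST_TARGET_THRESHOLD   # occupied where the count exceeds the threshold

grid = rasterize_static_points(np.array([[12.1, 0.2]] * 5 + [[30.0, 4.0]]))
print(grid.sum())   # number of occupied cells (here: 1)
```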
The process of rasterizing image information acquired by an image sensor comprises the following steps:
and rasterizing the camera acquisition area, and when a non-moving target falls into a plurality of grids corresponding to the camera acquisition area, marking the grid as an occupied grid if the non-moving target falls into the certain grid, or marking the grid as an invalid grid, thereby obtaining a second grid graph consisting of the occupied grid and the invalid grid. Wherein the non-moving object is a stationary object, such as a stationary vehicle, a bush, or a light pole. Further, before the step of rasterizing the image information, a target tracking process is also included, namely, the image information acquired by the camera is subjected to target tracking to obtain lane lines and target information. And then carrying out data time synchronization and data time synchronization on the lane line and the target information in sequence to obtain synchronized image information.
Optionally, the step S203 may include the following steps:
(1) determining the occupancy value of a grid according to the occupancy result of the grid in the first grid map together with the preset weight of the area to which the grid belongs, and the occupancy result of the grid in the second grid map together with the preset weight of the area to which the grid belongs;
(2) dividing the occupancy value of the grid by the number of grid maps to obtain the average occupancy value of the grid;
(3) when the average occupancy value of the grid is larger than a second target threshold, the grid is an occupied grid, otherwise it is an invalid grid, and the fusion grid map is determined from the occupied grids and the invalid grids.
Specifically, continuing with the example of the 5R1V improved hybrid fusion structure shown in fig. 5, the fusion structure includes 6 sensors in total, namely 1 forward millimeter wave radar, 4 lateral millimeter wave radars and 1 camera, so there are 5 first grid maps (denoted as 1 forward grid map and 4 lateral grid maps) and 1 second grid map. In grid fusion, first, according to the detection precision of each sensor, a weight weight[i] is set a priori for each sensor in different areas, where i is the sensor number. Then each grid j of the forward grid map, the lateral grid maps and the second grid map is traversed, the attribute (namely occupied or invalid) of each grid is counted, and the target occupancy value cellValue of each grid of the fusion grid map to be obtained is calculated based on the statistical result, where the calculation formula of the target occupancy value is:

cellValue = Σ weight[i]·cellj[i], summed over i = 1 to N

where N represents the number of sensors and cellj[i] represents the occupancy value of the jth grid in the grid map of the ith sensor; when the grid attribute is occupied, the corresponding occupancy value is 1, and when the grid attribute is invalid, the corresponding occupancy value is 0. Next, the average occupancy value is calculated as avgCellValue = cellValue/sensorNum, where sensorNum is the number of sensors. Finally, if avgCellValue is greater than the second target threshold, the jth grid of the fusion grid map is an occupied grid; otherwise it is an invalid grid, and the fusion grid map composed of the occupied grids and the invalid grids is obtained. The grid attributes in the fusion grid map are calculated from the first grid maps and the second grid map, and this process of calculating the grid attributes in the fusion grid map is the data-level fusion process.
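Using the same notation, the following sketch applies the fusion rule cellValue = Σ weight[i]·cellj[i] and the average-occupancy test; the weight values and the threshold are illustrative assumptions.

```python
import numpy as np

# Hedged sketch of the grid-fusion rule described above:
# cellValue_j = sum_i weight[i] * cell_j[i], avgCellValue_j = cellValue_j / sensorNum,
# occupied if avgCellValue_j exceeds the second target threshold.
# The weights and the threshold value are illustrative assumptions.

def fuse_grids(grids, weights, second_target_threshold=0.5):
    """grids: list of boolean occupancy grids of equal shape, one per sensor
    (5 radar first-grid maps + 1 camera second-grid map in the 5R1V setup).
    weights: per-sensor a-priori weights. Returns the fused boolean grid."""
    sensor_num = len(grids)
    cell_value = np.zeros(grids[0].shape, dtype=float)
    for grid, w in zip(grids, weights):
        cell_value += w * grid.astype(float)      # occupied -> 1, invalid -> 0
    avg_cell_value = cell_value / sensor_num
    return avg_cell_value > second_target_threshold

# Example with 3 tiny 2x2 grids (forward radar, one lateral radar, camera).
g_forward = np.array([[True, False], [False, False]])
g_lateral = np.array([[True, False], [True,  False]])
g_camera  = np.array([[True, True ], [False, False]])
print(fuse_grids([g_forward, g_lateral, g_camera], weights=[1.0, 0.8, 1.2]))
```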
Optionally, the step S204 may include the following steps:
(1) when the data point in the point cloud data corresponds to the grid in the fusion grid map as an occupied grid and the current attribute of the data point is a moving point, correcting the attribute of the data point into a static point;
(2) and the point cloud data corresponding to the corrected static point and the static point cloud data are jointly used as target point cloud data.
Specifically, if a data point in the point cloud data falls within an occupied grid of the fusion grid map, the data point is set as a static point regardless of its previous attribute, where the attributes of the point cloud points include static point and moving point.
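The attribute-correction rule can be sketched as follows; the point representation and the cell-lookup helper are illustrative, while the rule itself (a moving point falling in an occupied fused grid becomes static, and all static points form the target point cloud) follows the description above.

```python
import numpy as np

# Hedged sketch: promote moving points that fall into an occupied cell of the
# fused grid back to static points, then keep all static points as the target
# point cloud. Points are dicts here purely for illustration.

def correct_point_attributes(points, fused_grid, to_cell_index):
    """points: list of {'xy': (x, y), 'attr': 'moving' or 'static'}.
    to_cell_index: function mapping (x, y) to a grid index, or None if outside.
    Returns the target point cloud (all points whose final attribute is static)."""
    target_cloud = []
    for p in points:
        idx = to_cell_index(p['xy'])
        if idx is not None and fused_grid[idx] and p['attr'] == 'moving':
            p['attr'] = 'static'          # correction driven by the fused grid
        if p['attr'] == 'static':
            target_cloud.append(p)
    return target_cloud

fused = np.array([[True, False]])                        # toy 1x2 fused grid
cell = lambda xy: (0, 0) if xy[0] < 1.0 else (0, 1)       # toy cell lookup
pts = [{'xy': (0.5, 0.0), 'attr': 'moving'},              # in an occupied cell -> corrected
       {'xy': (1.5, 0.0), 'attr': 'moving'}]              # stays moving -> excluded
print(correct_point_attributes(pts, fused, cell))
```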
Through the steps S201 to S204, a process of fusing the image information acquired by the camera and the point cloud data acquired by the radar sensor on the data level and correcting the point cloud data by using the image information is realized.
In one embodiment, step S102 includes: performing clustering analysis on the target point cloud data to obtain obstacle point cloud data; and carrying out obstacle tracking on the obstacle point cloud data to obtain obstacle information corresponding to the target point cloud data. Specifically, the point cloud data acquired by the radar and the image data acquired by the camera are fused, and the target point cloud determined after fusion is subjected to cluster analysis and tracking, so that the identification precision of the obstacle can be effectively improved.
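The disclosure does not fix a particular clustering algorithm; as one plausible illustration, the sketch below groups target point cloud points with a simple Euclidean-distance clustering, which is an assumed stand-in rather than the claimed method.

```python
import math

# Hedged sketch: one possible cluster analysis of the target point cloud.
# A simple Euclidean connected-component grouping is used here as an
# illustrative stand-in; the disclosure does not specify the algorithm.

def euclidean_cluster(points, radius=1.0, min_points=2):
    """points: list of (x, y) in the vehicle frame. Returns a list of clusters,
    each a list of points; clusters smaller than min_points are dropped."""
    unvisited = set(range(len(points)))
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        queue, members = [seed], [seed]
        while queue:
            i = queue.pop()
            near = [j for j in unvisited
                    if math.dist(points[i], points[j]) <= radius]
            for j in near:
                unvisited.remove(j)
                queue.append(j)
                members.append(j)
        if len(members) >= min_points:
            clusters.append([points[k] for k in members])
    return clusters

# Two well-separated obstacles plus an isolated point that is discarded.
pts = [(10.0, 1.0), (10.3, 1.2), (10.1, 0.8), (25.0, -2.0), (25.2, -2.1), (40.0, 5.0)]
print(len(euclidean_cluster(pts)))   # -> 2 obstacle clusters
```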
In one embodiment, step S103 includes:
step S301: preprocessing the first obstacle information to obtain radar track information, and preprocessing the second obstacle information to obtain visual track information;
step S302: respectively carrying out association operation on the radar track information, the visual track information and the fusion track information to determine the fusion track information which is successfully associated;
step S303: updating the track state of the successfully associated fusion track information to obtain updated fusion track information;
step S304: and calculating the track confidence of the updated fusion track information to obtain the target barrier.
Specifically, with reference to fig. 14, first, the obstacle information corresponding to the target point cloud data and the obstacle information in the image information obtained by the camera are transmitted to the fusion module, and operations such as uniform data format adaptation, spatial synchronization, speed conversion, and the like are performed to obtain the millimeter wave radar track information and the visual track information. And secondly, associating the radar track information and the visual track information with the fusion track information respectively, wherein different association logics or association parameters and the like need to be considered during association due to different radar and visual characteristics. Thirdly, performing Kalman filtering updating operation on the fusion track information successfully associated with the millimeter wave radar track information and the visual track information, namely performing track state calculation, namely updating information such as the type, the motion state, the motion mode, the track source and the like of the fusion track. And finally, calculating the track confidence coefficient of the updated fusion track information.
In addition, the fusion track information is initially determined by the track management module from the radar track information or the visual track information. The specific implementation is as follows: when multi-frame data are received, it is identified whether the first frame of data is radar track information or visual track information; when the first frame of data is radar track information, the radar track information is started as the fusion track information, and when the first frame of data is visual track information, the visual track information is started as the fusion track information.
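As a rough illustration of this initialization rule, the following sketch starts the fused track list from whichever source supplies the first frame; the data structures and class name are illustrative only, and the later association and Kalman update steps are only indicated by a comment.

```python
# Hedged sketch of the initialization rule described above: the fused track
# list is started from whichever track information arrives in the first frame
# (radar or vision). Data structures are illustrative only.

class FusionTrackManager:
    def __init__(self):
        self.fused_tracks = []      # empty until the first frame arrives

    def on_frame(self, source, tracks):
        """source: 'radar' or 'vision'; tracks: list of track dicts."""
        if not self.fused_tracks:
            # First frame: start fused tracks directly from this source.
            self.fused_tracks = [dict(t, origin=source) for t in tracks]
            return self.fused_tracks
        # Later frames would associate 'tracks' with self.fused_tracks and
        # update the associated tracks (e.g. Kalman state update) -- omitted here.
        return self.fused_tracks

mgr = FusionTrackManager()
mgr.on_frame('radar', [{'id': 1, 'x': 30.0, 'y': 0.5}])
print(mgr.fused_tracks[0]['origin'])   # -> 'radar'
```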
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present invention.
The following are embodiments of the apparatus of the invention, reference being made to the corresponding method embodiments described above for details which are not described in detail therein.
Fig. 15 is a schematic structural diagram of an obstacle detection device according to an embodiment of the present invention, and for convenience of description, only the parts related to the embodiment of the present invention are shown, which are detailed as follows:
as shown in fig. 15, an obstacle detection device includes:
a point cloud data determining module 151, configured to determine target point cloud data according to point cloud data obtained by the radar sensor and image information acquired by the image sensor;
the obstacle information determining module 152 is configured to perform obstacle identification on the target point cloud data and the image information respectively to obtain first obstacle information corresponding to the target point cloud data and second obstacle information corresponding to the image information;
and the target obstacle determining module 153 is configured to perform obstacle fusion on the first obstacle information and the second obstacle information to determine a target obstacle.
In one possible implementation, the target obstacle determining module 153 includes:
the preprocessing submodule is used for respectively preprocessing the first obstacle information and the second obstacle information to obtain radar track information and visual track information;
the association submodule is used for respectively carrying out association operation on the radar track information, the visual track information and the fusion track information and determining the fusion track information which is successfully associated, wherein the fusion track information is initially determined through the radar track information or the visual track information;
the updating submodule is used for updating the track state of the successfully associated fusion track information to obtain updated fusion track information;
and the track estimation submodule is used for calculating the track confidence of the updated fusion track information to obtain the target barrier.
In one possible implementation, the point cloud data determining module 151 includes:
the synchronous processing submodule is used for carrying out synchronous processing on the point cloud data and the image information to obtain synchronized point cloud data and synchronized image information;
the rasterization sub-module is used for rasterizing the synchronized point cloud data to obtain a first raster image and rasterizing the synchronized image information to obtain a second raster image;
the fusion submodule is used for fusing the first grid image and the second grid image according to a preset fusion method to obtain a fusion grid image;
and the correction submodule is used for correcting the point cloud data according to the fusion raster image to obtain target point cloud data.
In a possible implementation manner, before rasterizing the sub-modules, the method further includes:
the dynamic and static separation submodule is used for performing dynamic and static separation on the point cloud data acquired by the radar sensor to obtain dynamic point cloud data and static point cloud data;
correspondingly, the rasterizing submodule comprises: and rasterizing a detection area of the radar sensor, counting the number of static points in each grid, determining the occupied grid when the number of static points contained in the grid is greater than a first target threshold value, otherwise determining the invalid grid, and determining a first grid map through the occupied grid and the invalid grid.
In one possible implementation, the merging submodule includes:
an occupancy value calculation unit for sequentially calculating target occupancy values of respective grids corresponding to the fusion grid map, based on the weights occupied by the first grid map and the second grid map in the fusion grid map, and the occupancy values of the respective grids in the first grid map and the second grid map;
the average value calculating unit is used for dividing the target occupation value by the sum of the number of the first grid graph and the second grid graph and sequentially calculating the average occupation value of each corresponding grid in the fusion grid graph;
and the fusion grid determining unit is used for marking the corresponding grid in the fusion grid map as an occupied grid when the average occupied value is larger than the second target threshold value, and marking the corresponding grid as an invalid grid if the average occupied value is not larger than the second target threshold value, so as to obtain the fusion grid map consisting of the occupied grid and the invalid grid.
In one possible implementation, the modification submodule includes:
the judging unit is used for correcting the attribute of the data point into a static point when the data point in the point cloud data corresponds to the occupied grid in the fusion grid map and the current attribute of the data point is a dynamic point;
and the target point cloud determining unit is used for taking the point cloud data corresponding to the corrected static point and the static point cloud data as target point cloud data together.
In one possible implementation, the dynamic-static separation submodule includes:
the actual measurement value acquisition unit is used for acquiring the actual measurement Doppler velocity of each data point in the point cloud data;
a target value calculation unit for calculating a target doppler velocity of each data point according to a current vehicle velocity;
a difference value calculating unit for calculating the difference value between the actual measurement Doppler velocity and the target Doppler velocity;
and the judging unit is used for marking the data points as moving points when the absolute value of the difference is greater than a preset threshold, marking the data points as static points when the absolute value of the difference is less than or equal to the preset threshold, and dividing the point cloud data into dynamic point cloud data corresponding to the moving points and static point cloud data corresponding to the static points.
In one possible implementation, the target value calculation unit includes:
when the vehicle is in a straight-line driving state, calculating the target doppler velocity of each data point according to the current vehicle velocity specifically comprises:
Vdi = Vego·cosθi

where Vdi is the target Doppler velocity, Vego is the vehicle running speed, and θi is the sum of the azimuth angle and the installation angle of the ith data point; the installation angle is the included angle between the line connecting the origin of the vehicle coordinate system and the origin of the radar sensor or image sensor coordinate system and the Y axis of the vehicle coordinate system, and the azimuth angle is the angle through which the line connecting the data point and the origin of the radar sensor or image sensor coordinate system rotates along the minimum path to the normal-vector Y axis of that origin, the azimuth angle being positive when the rotation is counterclockwise and negative when the rotation is clockwise;
when the vehicle is in a non-linear driving state, calculating the target doppler velocity of each data point according to the current vehicle velocity specifically comprises:
Vdi = Vy·cosθi + Vx·sinθi

where Vdi is the target Doppler velocity, Vx is the linear velocity of the vehicle running speed in the X-axis direction of the vehicle coordinate system, Vy is the linear velocity of the vehicle running speed in the Y-axis direction of the vehicle coordinate system, and θi is the sum of the azimuth angle and the installation angle of the ith data point.
In one possible implementation, the radar sensor is a millimeter wave radar sensor, and includes 1 forward radar sensor and 4 lateral radar sensors, and the image sensor includes 1 camera.
Fig. 16 is a schematic diagram of an obstacle detection device applied to a vehicle according to an embodiment of the present invention. As shown in fig. 16, the apparatus 16 of this embodiment includes: a processor 160, a memory 161, and a computer program 162 stored in the memory 161 and executable on the processor 160. The processor 160 executes the computer program 162 to implement the steps in the above-described embodiments of the obstacle detection method, such as the steps 101 to 103 shown in fig. 4. Alternatively, the processor 160 implements the functions of the modules/units in the above-described device embodiments, for example, the functions of the modules/units 151 to 153 shown in fig. 15, when executing the computer program 162.
Illustratively, the computer program 162 may be divided into one or more modules/units, which are stored in the memory 161 and executed by the processor 160 to carry out the invention. One or more of the modules/units may be a series of computer program instruction segments capable of performing specific functions that describe the execution of the computer program 162 in the device 16. For example, the computer program 162 may be divided into the modules/units 151 to 153 shown in fig. 15.
The device 16 may be a computing device such as a desktop computer, a laptop, a palmtop, and a cloud server. Device 16 may include, but is not limited to, a processor 160, a memory 161. Those skilled in the art will appreciate that fig. 16 is merely an example of a device 16 and does not constitute a limitation of device 16 and may include more or fewer components than shown, or some components may be combined, or different components, e.g., a terminal may also include input-output devices, network access devices, buses, etc.
The Processor 160 may be a Central Processing Unit (CPU), other general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic device, discrete hardware component, or the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 161 may be an internal storage unit of the device 16, such as a hard disk or a memory of the device 16. The memory 161 may also be an external storage device of the device 16, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card or a flash card provided on the device 16. Further, the memory 161 may include both an internal storage unit and an external storage device of the device 16. The memory 161 is used to store the computer program and other programs and data required by the device 16. The memory 161 may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules, so as to perform all or part of the functions described above. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus/terminal and method may be implemented in other ways. For example, the above-described apparatus/terminal embodiments are merely illustrative; the division into modules or units is only a division by logical function, and other division manners are possible in actual implementation, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated modules/units, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer readable storage medium. Based on such understanding, all or part of the flow in the methods of the above embodiments may be implemented by a computer program instructing related hardware; the computer program is stored in a computer readable storage medium and, when executed by a processor, implements the steps of the above embodiments of the obstacle detection method. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, etc. The computer readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content of the computer readable medium may be appropriately increased or decreased as required by legislation and patent practice in the jurisdiction; for example, in some jurisdictions, computer readable media do not include electrical carrier signals and telecommunications signals in accordance with legislation and patent practice.
The above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present invention, and are intended to be included within the scope of the present invention.

Claims (10)

1. An obstacle detection method, comprising:
determining target point cloud data according to the point cloud data acquired by the radar sensor and image information acquired by the image sensor;
respectively carrying out obstacle identification on the target point cloud data and the image information to obtain first obstacle information corresponding to the target point cloud data and second obstacle information corresponding to the image information;
and performing obstacle fusion on the first obstacle information and the second obstacle information to determine a target obstacle.
2. The method of claim 1, wherein performing obstacle fusion on the first obstacle information and the second obstacle information to determine a target obstacle comprises:
preprocessing the first obstacle information and the second obstacle information respectively to obtain radar track information and visual track information;
respectively associating the radar track information and the visual track information with fused track information, and determining the fused track information that is successfully associated, wherein the fused track information is initially established from the radar track information or the visual track information;
updating the track state of the successfully associated fused track information to obtain updated fused track information;
and calculating a track confidence for the updated fused track information to obtain the target obstacle.
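As an illustration of the association, update and confidence flow of claim 2, the following Python sketch uses a simple nearest-neighbour gate and a hit-count confidence. The gate size, confirmation count, smoothing update and all names are assumptions for the example; the claim does not prescribe a particular association or filtering algorithm.

```python
import math

GATE = 2.0          # association gate in metres (assumed value)
CONFIRM_HITS = 3    # hits needed before a fused track is reported (assumed)

class FusedTrack:
    def __init__(self, x, y):
        self.x, self.y = x, y
        self.hits = 1           # simple track-confidence counter

    def update(self, x, y):
        # Exponential smoothing stands in for the track-state update.
        self.x = 0.5 * self.x + 0.5 * x
        self.y = 0.5 * self.y + 0.5 * y
        self.hits += 1

def associate_and_update(fused_tracks, detections):
    """Associate radar / vision track detections with the fused tracks."""
    for dx, dy in detections:
        best, best_d = None, GATE
        for trk in fused_tracks:
            d = math.hypot(trk.x - dx, trk.y - dy)
            if d < best_d:
                best, best_d = trk, d
        if best is not None:
            best.update(dx, dy)                      # successful association
        else:
            fused_tracks.append(FusedTrack(dx, dy))  # start a new fused track
    # Only tracks whose confidence is high enough are reported as obstacles.
    return [t for t in fused_tracks if t.hits >= CONFIRM_HITS]

tracks = []
for frame in ([(10.0, 1.0)], [(10.2, 1.1)], [(10.4, 1.2)]):
    confirmed = associate_and_update(tracks, frame)
print(len(confirmed))   # 1 confirmed fused track after three consistent frames
```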
3. The method of claim 1, wherein determining target point cloud data from the point cloud data acquired by the radar sensor and the image information acquired by the image sensor comprises:
performing synchronization processing on the point cloud data and the image information to obtain synchronized point cloud data and synchronized image information;
rasterizing the synchronized point cloud data to obtain a first grid map, and rasterizing the synchronized image information to obtain a second grid map;
fusing the first grid map and the second grid map according to a preset fusion method to obtain a fused grid map;
and correcting the point cloud data according to the fused grid map to obtain the target point cloud data.
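The synchronization step of claim 3 can be illustrated, for example, by nearest-timestamp pairing of radar frames and camera frames. The tolerance value and the data layout below are assumptions made for the sketch and are not specified by the claim.

```python
def synchronize(point_cloud_frames, image_frames, max_dt=0.05):
    """Pair each radar frame with the closest-in-time image frame.

    point_cloud_frames / image_frames: lists of (timestamp, data) tuples.
    max_dt: maximum allowed time offset in seconds (assumed tolerance).
    """
    pairs = []
    for t_pc, pc in point_cloud_frames:
        t_img, img = min(image_frames, key=lambda f: abs(f[0] - t_pc))
        if abs(t_img - t_pc) <= max_dt:
            pairs.append((pc, img))   # synchronized point cloud / image pair
    return pairs

radar = [(0.00, "pc0"), (0.10, "pc1")]
camera = [(0.02, "img0"), (0.11, "img1"), (0.20, "img2")]
print(synchronize(radar, camera))     # [('pc0', 'img0'), ('pc1', 'img1')]
```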
4. The method of claim 3, wherein before rasterizing the synchronized point cloud data to obtain the first grid map, the method further comprises:
performing dynamic and static separation on the point cloud data acquired by the radar sensor to obtain dynamic point cloud data and static point cloud data;
correspondingly, the rasterizing the synchronized point cloud data to obtain a first raster image includes:
and rasterizing the detection area of the radar sensor, counting the number of static points in each grid, determining a grid as an occupied grid when the number of static points contained in the grid is greater than a first target threshold and otherwise determining it as an invalid grid, and determining the first grid map according to the occupied grids and the invalid grids.
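A minimal sketch of building the first grid map of claim 4 from the static point cloud is given below. The grid resolution, detection-area extents and first target threshold are assumed example values, not values disclosed in the text.

```python
import numpy as np

CELL = 0.5                   # grid resolution in metres (assumed)
X_MAX, Y_MAX = 40.0, 20.0    # detection-area half-extents in metres (assumed)
FIRST_THRESHOLD = 3          # first target threshold on static-point count (assumed)

def build_first_grid_map(static_points):
    """Occupancy grid of the radar detection area from static points only.

    static_points: iterable of (x, y) coordinates in the vehicle frame.
    Returns a boolean array: True = occupied grid, False = invalid grid.
    """
    nx, ny = int(2 * X_MAX / CELL), int(2 * Y_MAX / CELL)
    counts = np.zeros((nx, ny), dtype=int)
    for x, y in static_points:
        ix = int((x + X_MAX) / CELL)
        iy = int((y + Y_MAX) / CELL)
        if 0 <= ix < nx and 0 <= iy < ny:
            counts[ix, iy] += 1
    # A grid is occupied when its static-point count exceeds the threshold.
    return counts > FIRST_THRESHOLD

grid = build_first_grid_map([(5.2, 1.1)] * 5 + [(10.0, -2.0)])
print(int(grid.sum()))        # 1 occupied grid
```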
5. The method according to claim 3, wherein the fusing the first grid map and the second grid map according to a preset fusion method to obtain a fused grid map comprises:
sequentially calculating a target occupation value of each corresponding grid in the fused grid map according to the weights of the first grid map and the second grid map in the fused grid map and the occupation value of each grid in the first grid map and the second grid map;
sequentially calculating an average occupation value of each corresponding grid in the fused grid map by dividing the target occupation value by the sum of the number of first grid maps and the number of second grid maps;
and when the average occupation value is greater than a second target threshold, marking the corresponding grid in the fused grid map as an occupied grid, otherwise marking it as an invalid grid, so as to obtain the fused grid map formed by the occupied grids and the invalid grids.
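The grid fusion of claim 5 can be sketched as a cell-wise weighted sum followed by averaging and thresholding. The weights, the second target threshold, and the reading of "the sum of the number of first grid maps and second grid maps" as the total number of grid maps being fused (two here) are assumptions made for the example.

```python
import numpy as np

W_RADAR, W_CAMERA = 0.6, 0.4   # weights of the two grid maps (assumed)
SECOND_THRESHOLD = 0.25        # second target threshold on average occupancy (assumed)

def fuse_grid_maps(first_grid, second_grid):
    """Fuse the radar grid map and the camera grid map cell by cell.

    Both inputs are boolean occupancy arrays of the same shape
    (True = occupied grid, False = invalid grid).
    """
    # Weighted target occupation value of every corresponding grid.
    target = W_RADAR * first_grid.astype(float) + W_CAMERA * second_grid.astype(float)
    # Average occupation value: divide by the number of grid maps fused (1 + 1 = 2).
    average = target / 2.0
    # Occupied where the average exceeds the second target threshold.
    return average > SECOND_THRESHOLD

a = np.array([[True, False], [True, False]])
b = np.array([[True, True],  [False, False]])
print(fuse_grid_maps(a, b))
# [[ True False]
#  [ True False]]
```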
6. The method of claim 4, wherein the correcting the point cloud data according to the fused grid map to obtain the target point cloud data comprises:
when a data point in the point cloud data falls in a grid of the fused grid map that is an occupied grid and the current attribute of the data point is a moving point, correcting the attribute of the data point to a static point;
and taking the point cloud data corresponding to the corrected static point and the static point cloud data as the target point cloud data together.
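Under one reading of claim 6, the target point cloud consists of the originally static points plus the moving points re-labelled as static because they fall in occupied grids. The sketch below follows that reading, with invented field names and the same assumed grid geometry as in the earlier grid-map example.

```python
import numpy as np

CELL = 0.5                   # grid resolution in metres (assumed, as above)
X_MAX, Y_MAX = 40.0, 20.0    # detection-area half-extents in metres (assumed)

def correct_point_cloud(points, fused_grid):
    """Build the target point cloud from the fused grid map.

    points: list of dicts with keys 'x', 'y', 'attr' ('moving' / 'static');
            the field names are invented for this sketch.
    Moving points whose grid is occupied are re-labelled as static; the
    target point cloud is the static points plus these corrected points.
    """
    target = []
    for p in points:
        ix = int((p["x"] + X_MAX) / CELL)
        iy = int((p["y"] + Y_MAX) / CELL)
        in_grid = 0 <= ix < fused_grid.shape[0] and 0 <= iy < fused_grid.shape[1]
        if p["attr"] == "moving" and in_grid and fused_grid[ix, iy]:
            target.append(dict(p, attr="static"))   # correct the attribute
        elif p["attr"] == "static":
            target.append(p)
    return target

fused = np.zeros((160, 80), dtype=bool)
fused[90, 42] = True                                # grid containing (5.2, 1.1)
pts = [{"x": 5.2, "y": 1.1, "attr": "moving"},
       {"x": 20.0, "y": 3.0, "attr": "moving"},
       {"x": 8.0, "y": 0.0, "attr": "static"}]
print(len(correct_point_cloud(pts, fused)))         # 2 points in the target cloud
```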
7. The method of claim 4, wherein the dynamic and static separation of the point cloud data obtained by the radar sensor to obtain dynamic point cloud data and static point cloud data comprises:
acquiring the measured Doppler velocity of each data point in the point cloud data;
calculating a target Doppler velocity of each data point according to the current vehicle velocity;
calculating the difference value of the measured Doppler velocity and the target Doppler velocity;
when the absolute value of the difference is larger than a preset threshold, the data point is marked as a moving point, when the absolute value of the difference is smaller than or equal to the preset threshold, the data point is marked as a static point, and the point cloud data is divided into dynamic point cloud data corresponding to the moving point and static point cloud data corresponding to the static point.
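The dynamic/static split of claim 7 compares the measured Doppler velocity of each point with the target Doppler velocity expected for a stationary point. The sketch below assumes straight-line driving and an example preset threshold, with field names invented for the illustration.

```python
import math

PRESET_THRESHOLD = 0.5       # Doppler residual threshold in m/s (assumed)

def split_dynamic_static(points, v_ego):
    """Split points into dynamic / static by their Doppler residual.

    points: list of dicts with 'doppler' (measured Doppler velocity, m/s)
            and 'theta' (azimuth + setting angle, rad); field names are
            invented for this sketch.  Straight-line driving is assumed.
    """
    dynamic, static = [], []
    for p in points:
        expected = v_ego * math.cos(p["theta"])       # target Doppler velocity
        if abs(p["doppler"] - expected) > PRESET_THRESHOLD:
            dynamic.append(p)        # moving point
        else:
            static.append(p)         # static point
    return dynamic, static

pts = [{"doppler": 8.6, "theta": math.radians(30)},   # consistent with ego motion
       {"doppler": 3.0, "theta": math.radians(30)}]   # a genuinely moving target
dyn, stat = split_dynamic_static(pts, v_ego=10.0)
print(len(dyn), len(stat))                            # 1 1
```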
8. The method of claim 5, wherein said calculating a target Doppler velocity for each data point based on a current vehicle velocity comprises:
when the vehicle is in a straight-line driving state, the calculating the target Doppler velocity of each data point according to the current vehicle velocity specifically includes:
V_di = V_ego · cos θ_i
wherein V_di is the target Doppler velocity; V_ego is the vehicle running speed; θ_i is the sum of the azimuth angle and the setting angle of the i-th data point; the setting angle is the included angle between the line connecting the origin of the vehicle coordinate system with the origin of the radar sensor or image sensor coordinate system and the Y axis of the vehicle coordinate system; the azimuth angle is the angle through which the line connecting the i-th data point with the origin of the radar sensor or image sensor coordinate system rotates, along the minimum path, to the Y-axis normal vector of the radar sensor or image sensor coordinate system, wherein the azimuth angle is a positive value when the line rotates anticlockwise and a negative value when it rotates clockwise;
when the vehicle is in a non-linear driving state, the calculating the target Doppler velocity of each data point according to the current vehicle velocity specifically includes:
V_di = V_y · cos θ_i + V_x · sin θ_i
wherein V_di is the target Doppler velocity, V_x is the linear velocity of the vehicle running speed in the X-axis direction of the vehicle coordinate system, V_y is the linear velocity of the vehicle running speed in the Y-axis direction of the vehicle coordinate system, and θ_i is the sum of the azimuth angle and the setting angle of the i-th data point.
9. The method of any one of claims 1 to 8, wherein the radar sensor is a millimeter wave radar sensor and comprises 1 forward radar sensor and 4 side radar sensors, and the image sensor comprises 1 camera.
10. An obstacle detection apparatus for application to a vehicle, comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor, when executing the computer program, implements the steps of the obstacle detection method as set forth in any one of the preceding claims 1 to 9.
CN202111089391.5A 2021-09-16 2021-09-16 Obstacle detection method and obstacle detection equipment applied to vehicle Pending CN113985405A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111089391.5A CN113985405A (en) 2021-09-16 2021-09-16 Obstacle detection method and obstacle detection equipment applied to vehicle

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111089391.5A CN113985405A (en) 2021-09-16 2021-09-16 Obstacle detection method and obstacle detection equipment applied to vehicle

Publications (1)

Publication Number Publication Date
CN113985405A true CN113985405A (en) 2022-01-28

Family

ID=79735954

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111089391.5A Pending CN113985405A (en) 2021-09-16 2021-09-16 Obstacle detection method and obstacle detection equipment applied to vehicle

Country Status (1)

Country Link
CN (1) CN113985405A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114724116A (en) * 2022-05-23 2022-07-08 禾多科技(北京)有限公司 Vehicle traffic information generation method, device, equipment and computer readable medium
CN115616560A (en) * 2022-12-02 2023-01-17 广汽埃安新能源汽车股份有限公司 Vehicle obstacle avoidance method and device, electronic equipment and computer readable medium
CN117008122A (en) * 2023-08-04 2023-11-07 江苏苏港智能装备产业创新中心有限公司 Method and system for positioning surrounding objects of engineering mechanical equipment based on multi-radar fusion

Similar Documents

Publication Publication Date Title
CN109270534B (en) Intelligent vehicle laser sensor and camera online calibration method
WO2022022694A1 (en) Method and system for sensing automated driving environment
CN111553859B (en) Laser radar point cloud reflection intensity completion method and system
CN112396650B (en) Target ranging system and method based on fusion of image and laser radar
CN113985405A (en) Obstacle detection method and obstacle detection equipment applied to vehicle
CN111192295B (en) Target detection and tracking method, apparatus, and computer-readable storage medium
CN113989766A (en) Road edge detection method and road edge detection equipment applied to vehicle
EP4080248A1 (en) Method and apparatus for vehicle positioning, controller, smart car and system
CN112581612B (en) Vehicle-mounted grid map generation method and system based on fusion of laser radar and all-round-looking camera
CN109636837B (en) Method for evaluating calibration accuracy of external parameters of monocular camera and millimeter wave radar
CN117441113A (en) Vehicle-road cooperation-oriented perception information fusion representation and target detection method
US20220373353A1 (en) Map Updating Method and Apparatus, and Device
CN110378919B (en) Narrow-road passing obstacle detection method based on SLAM
WO2020215254A1 (en) Lane line map maintenance method, electronic device and storage medium
CN111209956A (en) Sensor data fusion method, and vehicle environment map generation method and system
CN112097732A (en) Binocular camera-based three-dimensional distance measurement method, system, equipment and readable storage medium
CN112748421A (en) Laser radar calibration method based on automatic driving of straight road section
CN111699410A (en) Point cloud processing method, device and computer readable storage medium
CN114495064A (en) Monocular depth estimation-based vehicle surrounding obstacle early warning method
CN110750153A (en) Dynamic virtualization device of unmanned vehicle
CN115410167A (en) Target detection and semantic segmentation method, device, equipment and storage medium
CN111382591B (en) Binocular camera ranging correction method and vehicle-mounted equipment
CN111538008B (en) Transformation matrix determining method, system and device
CN114140533A (en) Method and device for calibrating external parameters of camera
CN115359332A (en) Data fusion method and device based on vehicle-road cooperation, electronic equipment and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination