CN113989766A - Road edge detection method and road edge detection equipment applied to vehicle


Info

Publication number: CN113989766A
Application number: CN202111088074.1A
Authority: CN (China)
Prior art keywords: grid, point cloud, target, cloud data, road edge
Other languages: Chinese (zh)
Inventors: 薛高茹, 刘诗萌, 刘嵩, 郭志伟, 秦屹
Current Assignee: Whst Co Ltd
Original Assignee: Whst Co Ltd
Application filed by Whst Co Ltd
Priority to CN202111088074.1A
Publication of CN113989766A
Legal status: Pending (the legal status is an assumption and is not a legal conclusion)

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 13/00 Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S 13/88 Radar or analogous systems specially adapted for specific applications
    • G01S 13/93 Radar or analogous systems specially adapted for specific applications for anti-collision purposes
    • G01S 13/931 Radar or analogous systems specially adapted for specific applications for anti-collision purposes of land vehicles
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras

Abstract

The invention provides a road edge detection method and road edge detection equipment applied to a vehicle. The method comprises the following steps: determining target point cloud data according to the point cloud data acquired by the radar sensor and image information acquired by the image sensor; extracting the characteristics of the target point cloud data, and determining first road edge information corresponding to the target point cloud data; carrying out road edge identification on the image information to obtain second road edge information and lane line information corresponding to the image information; and determining the target road edge and the target lane line according to the first road edge information, the second road edge information and the lane line information. The invention can improve the detection precision of the road edge.

Description

Road edge detection method and road edge detection equipment applied to vehicle
Technical Field
The invention relates to the technical field of automatic driving, in particular to a road edge detection method and road edge detection equipment applied to vehicles.
Background
Autonomous vehicles can provide greater safety, higher productivity, and better traffic efficiency, and will play an important role in future urban traffic systems. In most automatic driving or assisted driving scenarios, perception of the surrounding environment is a vital task. A single sensor has its own shortcomings in environment perception, so multi-sensor fusion has become a necessary means of improving the performance of a perception system.
At present, road edges are generally detected by a multi-sensor fusion method, namely a data-level fusion road edge detection method, in which all the raw data are transmitted to a processor for processing so as to determine the road edge.
However, the road edge detection method using data-level fusion suffers from low detection accuracy.
Disclosure of Invention
The embodiment of the invention provides a road edge detection method and road edge detection equipment applied to a vehicle, aiming to solve the problem of low road edge detection precision in existing detection methods.
In a first aspect, an embodiment of the present invention provides a road edge detection method, including:
determining target point cloud data according to the point cloud data acquired by the radar sensor and image information acquired by the image sensor;
extracting the characteristics of the target point cloud data, and determining first road edge information corresponding to the target point cloud data;
carrying out road edge identification on the image information to obtain second road edge information and lane line information corresponding to the image information;
and determining the target road edge and the target lane line according to the first road edge information, the second road edge information and the lane line information.
In a second aspect, an embodiment of the present invention provides a road edge detection device applied to a vehicle, including a memory, a processor, and a computer program stored in the memory and operable on the processor, where the processor implements the steps of the method according to the first aspect or any one of the possible implementation manners of the first aspect when executing the computer program.
The embodiment of the invention provides a road edge detection method and road edge detection equipment applied to a vehicle. In the method, the data collected by the radar sensor and the data collected by the image sensor are combined to determine target point cloud data; road edge fusion is then performed on the first road edge information corresponding to the target point cloud data and the second road edge information corresponding to the image information; and the fused road edge information is corrected by means of the lane line information, so that the target road edge and the target lane line are determined and the detection accuracy of the target road edge and the target lane line is improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the embodiments or the prior art descriptions will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without inventive exercise.
FIG. 1 is a centralized fusion architecture diagram provided by an embodiment of the present invention;
FIG. 2 is a block diagram of a distributed fusion architecture provided by an embodiment of the present invention;
FIG. 3 is a hybrid fusion architecture diagram provided by an embodiment of the present invention;
fig. 4 is a flowchart of an implementation of a road edge detection method according to an embodiment of the present invention;
FIG. 5 is a diagram of an improved hybrid fusion architecture provided by an embodiment of the present invention;
FIG. 6 is a schematic diagram of a positional relationship between a radar, a vehicle and a camera according to an embodiment of the present invention;
fig. 7 is a flowchart illustrating an implementation of a road edge detection method according to another embodiment of the present invention;
FIG. 8 is a schematic diagram of the geometric relationship between data points and the vehicle coordinate system and the sensor coordinate system provided by the embodiment of the present invention;
FIG. 9 is a schematic view of a vehicle turning geometry provided by an embodiment of the present invention;
FIG. 10 is a schematic diagram of data time synchronization provided by an embodiment of the present invention;
FIG. 11 is a schematic diagram of a forward radar coordinate system provided by an embodiment of the present invention;
FIG. 12 is a schematic diagram of the positional relationship of the image coordinate system, the camera coordinate system and the vehicle coordinate system provided by the embodiment of the invention;
FIG. 13 is a schematic diagram of a position relationship between an image coordinate system and a pixel coordinate system according to an embodiment of the present invention;
FIG. 14 is a diagram of a grid corresponding to a target lane line and a grid corresponding to target road edge information according to an embodiment of the present invention;
fig. 15 is a schematic diagram of coincidence matching between the grid corresponding to the target lane line and the grid corresponding to the target road edge information according to an embodiment of the present invention;
FIG. 16 is a schematic diagram of a grid corresponding to an edge of a target road according to an embodiment of the present invention;
fig. 17 is a schematic structural diagram of a road edge detection device according to an embodiment of the present invention;
fig. 18 is a schematic structural diagram of a road edge detecting apparatus applied to a vehicle according to an embodiment of the present invention.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present invention with unnecessary detail.
In order to make the objects, technical solutions and advantages of the present invention more apparent, the following description is made by way of specific embodiments with reference to the accompanying drawings.
Autonomous vehicles can provide greater safety, higher productivity, and better traffic efficiency, and will play an important role in future urban traffic systems. Perception of the surrounding environment is a crucial task in most autonomous or assisted driving scenarios. A single sensor (such as a laser radar, millimeter wave radar, camera, or ultrasonic sensor) has its own shortcomings in environment perception; therefore, multi-sensor fusion has become a necessary means of improving the environment perception effect.
Multi-sensor fusion modes can be classified according to the degree of data processing performed at the local sensors, mainly into centralized, distributed, and hybrid modes. The three fusion modes are described with reference to figs. 1 to 3 as follows. Fig. 1 is a centralized fusion structure diagram; as can be seen from fig. 1, in the centralized mode all sensor information is sent to the domain controller, which performs data association, measurement fusion, and target tracking in sequence to obtain the position and state information of the target and finally make a decision. The centralized mode has the advantage of high data-processing precision, but the large amount of data easily causes an excessive communication load and places high demands on controller performance. Fig. 2 is a distributed fusion structure diagram; as can be seen from fig. 2, in the distributed mode the target observations of each sensor are first subjected to target detection and tracking locally and are then sent to the domain controller to obtain the local track information for multi-target tracking. The distributed mode has the advantages of a low communication bandwidth requirement and high computation speed, but its tracking accuracy falls well short of the centralized mode. Fig. 3 is a hybrid fusion structure diagram; as can be seen from fig. 3, the hybrid structure combines the two according to the different requirements on the sensor data, retaining the advantages of both the centralized and distributed structures and making up for their respective disadvantages.
However, in the hybrid fusion structure shown in fig. 3, the fusion of different types of sensors, such as the fusion of an image sensor (camera) and radar sensors (angle radars, forward radar), is mainly performed at the target level, while the 5R1V fusion solution is mainly performed at the data level, and the detection accuracy of the road edge is low; here 5R1V refers to a sensor configuration of 5 millimeter-wave radars and 1 forward-looking multifunctional camera. Based on the hybrid fusion structure shown in fig. 3, the embodiment of the invention provides an improved hybrid fusion structure and a road edge detection method to further improve the road edge detection precision.
Referring to fig. 4 and 5, fig. 4 is a flowchart of an implementation of a road edge detection method according to an embodiment of the present invention, which is applicable to the improved hybrid fusion structure shown in fig. 5. In the improved hybrid fusion structure shown in fig. 5, the sensors used for road edge detection include radar sensors and an image sensor, where the radar sensors include a forward radar sensor and lateral radar sensors, and the number of lateral radar sensors may be multiple, such as 2, 4 or more. Preferably, the number of lateral radar sensors is 4; in this case, the hybrid fusion structure shown in fig. 5 may be referred to as a 5R1V fusion structure, where "5R" refers to 5 radar sensors, i.e. 1 forward radar sensor and 4 lateral radar sensors, and "1V" refers to 1 image sensor. This structure is mainly aimed at L3-level high-speed driving scenarios and can obtain more accurate and reliable road edge detection results. In some embodiments, the radar sensors may be millimeter wave radar sensors, and the image sensor is a camera.
The following describes the specific flow of the road edge detection method according to the embodiments of the present invention, taking the hybrid fusion structure shown in fig. 5 as an example. The method comprises the following steps:
step S101: determining target point cloud data according to the point cloud data acquired by the radar sensor and image information acquired by the image sensor;
step S102: extracting the characteristics of the target point cloud data, and determining first road edge information corresponding to the target point cloud data;
step S103: carrying out road edge identification on the image information to obtain second road edge information and lane line information corresponding to the image information;
step S104: and determining the target road edge and the target lane line according to the first road edge information, the second road edge information and the lane line information.
Specifically, fig. 6 shows the positional relationship of the radars, the camera, and the vehicle: 1 forward radar sensor and 1 camera are installed at the front end of the vehicle, and 4 high-resolution lateral radar sensors (angle radars) are installed in the 4 lateral directions of the vehicle. With reference to fig. 6, the specific process of the invention is as follows. While the vehicle is driving, the forward radar sensor collects point cloud data for its detection area, the 4 lateral radar sensors each collect point cloud data for their respective detection areas, and the camera collects image information within its shooting range. Target point cloud data are determined from the point cloud data obtained by the radar sensors and the image information collected by the image sensor; feature extraction is performed on the target point cloud data to determine the first road edge information corresponding to the target point cloud data; road edge identification is performed on the image information to obtain the second road edge information and the lane line information corresponding to the image information; and finally, the target road edge and the target lane line are determined from the first road edge information, the second road edge information and the lane line information. The second road edge information is determined by detecting and tracking the image information acquired by the camera. Further, no order is imposed on the execution of step S102 and step S103; they may also be executed simultaneously.
Compared with the prior art, in the road edge detection method provided by the embodiment of the invention the point cloud data are obtained by the radar sensor and the target image is collected by the image sensor; the target image and the point cloud data are fused at the data level to determine the target point cloud data; feature extraction is then performed on the target point cloud data to obtain one piece of road edge information (namely the first road edge information), while road edge identification is performed on the target image to obtain another piece of road edge information (namely the second road edge information) and the lane line information; the first road edge information and the second road edge information are then fused at the target level, and the fused road edge information is corrected by means of the lane line information. The target image collected by the image sensor therefore not only assists the radar sensor at the target level in identifying the road edge and the lane line, but also assists the radar sensor at the data level in screening the point cloud data to determine the target point cloud data. Because road edge identification is carried out on the basis of the target point cloud data, the obtained detection result is more accurate, and the detection precision of the target road edge and the target lane line is improved.
In an embodiment, the step S101 may include the following steps:
step S201: carrying out synchronous processing on the point cloud data and the image information to obtain synchronized point cloud data and synchronized image information;
wherein the synchronization process includes a time synchronization process and a space synchronization process.
Step S202: rasterizing static point cloud data in the synchronized point cloud data to obtain a first raster image, and rasterizing the synchronized image information to obtain a second raster image;
step S203: fusing the first grid image and the second grid image according to a preset fusion method to obtain a fused grid image;
step S204: and performing attribute correction on the point cloud data based on the fusion raster image to obtain target point cloud data.
Specifically, a specific implementation flow of the above road edge detection method is described below by taking the 5R1V hybrid fusion structure shown in fig. 5 as an example, i.e. the case where the radar sensors include a forward radar sensor and lateral radar sensors, all of which are millimeter wave radar sensors.
When detection is performed by a plurality of sensors, the data of the plurality of sensors must satisfy the requirements of time synchronization and space synchronization; therefore, before the sensor data are processed, the data of each sensor need to be synchronized.
Fig. 10 is a schematic diagram of time synchronization of the data collected by the radar sensors and the image sensor according to an embodiment of the present invention. In the invention, unified GPS time service is applied to the radar sensors and the image sensor, and after the unified time service the radar sensors and the image sensor are time-synchronized by Lagrange interpolation. As can be seen from fig. 10, the data acquired by each sensor carries a GPS timestamp: the GPS timestamp of a radar sensor may be regarded as the time at which the domain controller acquires the point cloud data collected by that radar sensor in the current reporting period, and the GPS timestamp of the image sensor may be regarded as the time at which the domain controller acquires the image information collected by the image sensor in the current reporting period. After each sensor has a corresponding GPS timestamp, the sensors are time-synchronized by Lagrange interpolation. The process of time-synchronizing the sensors is common knowledge and is not described in detail here.
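For illustration only, the following sketch shows one possible way to interpolate radar measurements to the camera GPS timestamp using Lagrange interpolation; the function names, the three-sample window, and the example values are assumptions and are not part of the original disclosure.

```python
# Minimal sketch of Lagrange-interpolation time synchronization, assuming each
# radar report carries a GPS timestamp and per-point (x, y) measurements.
# Function names and the 3-sample window are illustrative assumptions.
from typing import List, Tuple

def lagrange_interp(ts: List[float], vals: List[float], t: float) -> float:
    """Evaluate the Lagrange polynomial through (ts[i], vals[i]) at time t."""
    result = 0.0
    for i, (ti, vi) in enumerate(zip(ts, vals)):
        li = 1.0
        for j, tj in enumerate(ts):
            if j != i:
                li *= (t - tj) / (ti - tj)
        result += vi * li
    return result

def sync_point_to_camera_time(radar_stamps: List[float],
                              radar_xy: List[Tuple[float, float]],
                              camera_stamp: float) -> Tuple[float, float]:
    """Interpolate a tracked point's position to the camera GPS timestamp."""
    xs = [p[0] for p in radar_xy]
    ys = [p[1] for p in radar_xy]
    return (lagrange_interp(radar_stamps, xs, camera_stamp),
            lagrange_interp(radar_stamps, ys, camera_stamp))

if __name__ == "__main__":
    # Three consecutive radar reports (t in seconds) and a camera timestamp in between.
    stamps = [0.00, 0.05, 0.10]
    positions = [(10.0, 1.0), (10.5, 1.1), (11.0, 1.2)]
    print(sync_point_to_camera_time(stamps, positions, 0.033))
```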
After time synchronization, the time-synchronized data of each sensor are further spatially synchronized. Spatial synchronization mainly maps the data collected by each sensor into a unified coordinate system; in this embodiment, the unified coordinate system is a coordinate system centered on the rear axle of the vehicle (hereinafter referred to as the vehicle coordinate system). The spatial synchronization process for the different sensors is as follows:
(1) For the forward millimeter wave radar, the corresponding point cloud data are converted into the vehicle coordinate system based on the conversion relation between the forward millimeter wave radar coordinate system and the vehicle coordinate system, so as to obtain the synchronized data corresponding to the forward millimeter wave radar.
The forward millimeter wave radar coordinate system X_R Y_R Z_R - O_R is shown in fig. 11. The mounting position of the forward millimeter wave radar is defined as the coordinate origin O_R, the directions of the three coordinate axes are the same as those of the vehicle coordinate system, and the detection direction of the forward millimeter wave radar is the X-axis direction; X_R O_R Y_R is the detection plane of the forward millimeter wave radar and Y_R O_R Z_R is the mounting plane. The target data output by the forward millimeter wave radar (distance, speed, relative angle, etc.) are two-dimensional information in the X_R O_R Y_R plane of the forward millimeter wave radar coordinate system. The Y_R O_R Z_R plane is parallel to the Y_W O_W Z_W plane at a distance X_0, and the X_R O_R Y_R plane is parallel to the X_W O_W Y_W plane at a distance H. For a forward millimeter wave radar target P(R, α), the conversion relation between the forward millimeter wave radar coordinate system and the vehicle coordinate system is:
X_W = R·cos α + X_0
Y_W = R·sin α
Z_W = H
where R represents the target distance and α represents the azimuth angle.
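A minimal sketch of this conversion follows, using the relation above; the mounting offset X_0, the mounting height H, and the sign conventions are illustrative assumptions.

```python
# Sketch of converting a forward-radar measurement P(R, alpha) into the vehicle
# coordinate system using X_W = R*cos(alpha) + X_0, Y_W = R*sin(alpha), Z_W = H.
# X_0 and H are placeholder values; the sign convention of alpha is an assumption.
import math

X_0 = 3.6   # assumed distance from the rear axle to the radar along X, in metres
H = 0.5     # assumed radar mounting height above the X_W O_W Y_W plane, in metres

def forward_radar_to_vehicle(distance_r: float, azimuth_deg: float):
    alpha = math.radians(azimuth_deg)
    x_w = distance_r * math.cos(alpha) + X_0
    y_w = distance_r * math.sin(alpha)
    z_w = H
    return x_w, y_w, z_w

if __name__ == "__main__":
    print(forward_radar_to_vehicle(25.0, 5.0))  # target 25 m away, 5 degrees off axis
```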
(2) For each lateral millimeter wave radar, the corresponding point cloud data are converted into the vehicle coordinate system based on the conversion relation between the lateral millimeter wave radar coordinate system and the vehicle coordinate system, so as to obtain the synchronized data corresponding to that lateral millimeter wave radar.
The conversion relation between the lateral millimeter wave radar coordinate system X_Ri Y_Ri Z_Ri - O_Ri, i = 1, 2, 3, 4, and the vehicle coordinate system converts the measured range together with the azimuth and pitch angles of the target detected by the i-th lateral millimeter wave radar into Cartesian coordinates using the installation azimuth angle ω_i and installation pitch angle φ_i, and then translates the result by (X_i, Y_i, Z_i), the installation position of the i-th lateral millimeter wave radar in the vehicle coordinate system.
(3) For the image sensor, taking a camera as an example, when performing spatial synchronization, the image information after time synchronization is converted into a vehicle coordinate system based on a conversion relation between an image coordinate system and a pixel coordinate system, a conversion relation between a camera coordinate system and an image coordinate system, a conversion relation between a world coordinate system and a camera coordinate system, and a conversion relation between the world coordinate system and the pixel coordinate system, so as to obtain the image information after spatial synchronization.
The specific implementation process of spatially synchronizing the image information may refer to figs. 12 and 13, where fig. 12 is a schematic diagram of the positional relationship between the image coordinate system, the camera coordinate system, and the vehicle coordinate system, and fig. 13 is a schematic diagram of the positional relationship between the image coordinate system and the pixel coordinate system. Based on the linear camera model, the coordinates of each point in the image are determined through the conversion relation between the camera coordinate system and the image coordinate system, and the coordinates of each image point projected into the world coordinate system are then obtained through the conversion relation between the image coordinate system and the pixel coordinate system and the conversion relation between the pixel coordinate system and the world coordinate system in sequence, where the world coordinate system is the vehicle coordinate system. The conversion between the vehicle coordinate system and the camera coordinate system is completed through this conversion process so as to realize the three-dimensional reconstruction of point coordinates in the planar image.
Image coordinate system xOy: the coordinate system of the imaging plane onto which the camera projects objects in the three-dimensional real environment by perspective projection. The intersection point of the optical axis and the imaging plane is defined as the coordinate origin O, and the imaging plane is the coordinate plane. The image information stored by the computer is based on a pixel coordinate system uO_0v whose origin O_0 is defined at the top-left vertex of the image, as shown in fig. 13. If the origin O of the image coordinate system is located at the pixel point (u_0, v_0) in the pixel coordinate system, the conversion relation between the image coordinate system and the pixel coordinate system is:
u = x/dx + u_0
v = y/dy + v_0
where dx and dy respectively represent the physical size of each pixel in the x and y directions of the image coordinate system.
Camera coordinate system X_c Y_c Z_c - O_c: the coordinate system established with the center of the camera optical lens as the origin O_c and the optical axis of the camera as the Z_c axis; the remaining coordinate axes are parallel to the image coordinate axes. The conversion relation between the camera coordinate system and the image coordinate system (f being the focal length of the camera) is:
x = f·X_c / Z_c
y = f·Y_c / Z_c
world coordinate system (X)WYWZW): as a reference coordinate system, it is used to describe the installation positions of the radar and the camera (i.e., the camera in the present application), and the positions of other objects in space. The conversion relation between the world coordinate system and the camera coordinate system is as follows:
Figure BDA0003266272370000101
the rotation matrix R is a 3 × 3 unit orthogonal matrix, and represents a rotation relationship of the camera coordinate system with respect to the world coordinate system. Translation vector TcA vector suitable for describing the translation relationship of the camera coordinate system with respect to the world coordinate system.
Finally, the conversion relation between the world coordinate system and the pixel coordinate system is obtained as:
Z_c · [u, v, 1]^T = M_1 · M_2 · [X_W, Y_W, Z_W, 1]^T
with
M_1 = [[f/dx, 0, u_0], [0, f/dy, v_0], [0, 0, 1]] and M_2 = [R  T_c]
where M_1 is the camera intrinsic parameter matrix and M_2 is the camera extrinsic parameter matrix.
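The projection chain above can be illustrated with the sketch below, which applies M_1 and M_2 to a vehicle-frame point; the numerical intrinsic and extrinsic parameters (focal length, pixel sizes, principal point, rotation, translation) are placeholder assumptions, not calibration values from the disclosure.

```python
# Sketch of the world-to-pixel projection Z_c * [u, v, 1]^T = M1 * M2 * [X_W, Y_W, Z_W, 1]^T
# using an intrinsic matrix M1 and extrinsic matrix M2 as described above.
# All numeric parameters below are placeholder assumptions.
import numpy as np

f, dx, dy = 0.004, 2e-6, 2e-6        # assumed focal length and pixel sizes (metres)
u0, v0 = 960.0, 540.0                # assumed principal point (pixels)

M1 = np.array([[f / dx, 0.0,    u0],
               [0.0,    f / dy, v0],
               [0.0,    0.0,    1.0]])           # intrinsic parameters

R = np.eye(3)                                    # assumed camera rotation (identity)
Tc = np.array([[0.0], [0.0], [1.5]])             # assumed translation of the camera frame
M2 = np.hstack([R, Tc])                          # extrinsic parameters, 3 x 4

def world_to_pixel(xw: float, yw: float, zw: float):
    """Project a world/vehicle-frame point into pixel coordinates."""
    pw = np.array([xw, yw, zw, 1.0]).reshape(4, 1)
    uvw = M1 @ M2 @ pw          # equals Z_c * [u, v, 1]^T
    zc = uvw[2, 0]
    return uvw[0, 0] / zc, uvw[1, 0] / zc

if __name__ == "__main__":
    print(world_to_pixel(2.0, 0.5, 10.0))
```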
After the data of each sensor have been time-synchronized and spatially synchronized, the data of the sensors need to be fused at the data level. During data fusion, the static point cloud data in the synchronized point cloud data are first rasterized to obtain the first raster image, and the synchronized image information is rasterized to obtain the second raster image; the first raster image and the second raster image are then fused according to a preset fusion method to obtain the fused raster image; and attribute correction is performed on the point cloud data based on the fused raster image to obtain the target point cloud data, thereby realizing the correction of the point cloud data by means of the image information, i.e. the process from step S202 to step S204.
Before rasterizing the point cloud data, dynamic and static separation needs to be carried out on the point cloud data to obtain dynamic point cloud data and static point cloud data. The dynamic and static separation of the point cloud data is mainly realized based on the vehicle projection speed and the point cloud Doppler speed. Specifically, the implementation of the dynamic and static separation comprises the following steps:
(11) acquiring the actual measurement Doppler velocity of each data point in the point cloud data;
(12) calculating a target Doppler velocity of each data point according to the current vehicle velocity;
(13) calculating the difference value between the actually measured Doppler velocity and the target Doppler velocity;
(14) when the absolute value of the difference is greater than a preset threshold, the data point is marked as a moving point, and when the absolute value of the difference is less than or equal to the preset threshold, the data point is marked as a static point, so that the point cloud data is divided into dynamic point cloud data corresponding to the moving point and static point cloud data corresponding to the static point.
In the above implementation, the measured Doppler velocity is the velocity information carried in the point cloud data, while the target Doppler velocity is related to the current vehicle speed, and the way the vehicle speed is represented differs with the vehicle running state. The vehicle running states in this embodiment include a straight-line driving state and a non-straight-line driving state; because the current vehicle speed is represented differently in these two states, the target Doppler velocity is also calculated differently. The dynamic and static separation process in each of the two vehicle running states is described below.
(1) When the vehicle is in a straight-line driving state, the vehicle speed is projected onto the direction of the line connecting the point and the radar center, i.e. the point cloud Doppler velocity can be expressed as V_di = V_ego·cos θ_i, where V_di is the point cloud Doppler velocity deduced inversely from the vehicle speed, i.e. the target Doppler velocity, V_ego is the vehicle running speed, and θ_i is the sum of the azimuth angle and the installation angle of the i-th point cloud. Taking the right front side radar sensor among the radar sensors as an example with reference to fig. 8, the installation angle α is the included angle between the line connecting the vehicle coordinate system origin E with the right front side radar sensor coordinate system origin S and the Y axis of the vehicle coordinate system, and the azimuth angle γ is the angle through which the line connecting the i-th data point with the right front side radar sensor coordinate system origin S rotates, along the minimum path, to the normal-vector y axis of the right front side radar sensor coordinate system origin S; during counterclockwise rotation the azimuth angle γ is a positive value, and during clockwise rotation it is a negative value. The difference ΔV_i between the target Doppler velocity V_di and the point cloud Doppler velocity actually measured by the radar is then calculated; if the absolute value of this speed difference is larger than the preset threshold, the point is a moving point, otherwise it is a static point.
(2) When the vehicle is in a non-straight-line driving state, V_di is determined by calculating the linear velocity of the vehicle, the linear velocity formula being V = ω·R, where ω is the angular velocity (yaw rate), R is the turning radius, and V is the linear velocity. The turning radii of the left and right wheels are different, so the linear velocities of the left and right wheels also differ when turning; β is the wheel turning angle. Engineering experiments show that when the inner wheel speed during turning is taken as the vehicle speed, the error between the measured Doppler value of a target point and its theoretical value is small and stable. In conjunction with fig. 9, the velocity components V_x and V_y of the vehicle can be obtained, so V_di = V_y·cos θ_i + V_x·sin θ_i, where V_x is the linear velocity in the X-axis direction of the vehicle coordinate system and V_y is the linear velocity in the Y-axis direction of the vehicle coordinate system.
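As an illustration of the dynamic and static separation described in steps (11) to (14) and the two driving states above, the sketch below compares the measured Doppler velocity of each point with the predicted target Doppler velocity; the data structure, field names, threshold value, and sign convention of the measured Doppler are assumptions.

```python
# Sketch of dynamic/static separation: compare the measured Doppler velocity of
# each point with the target Doppler velocity predicted from the vehicle motion.
# The RadarPoint fields, the threshold and the Doppler sign convention are assumptions.
import math
from dataclasses import dataclass

DOPPLER_THRESHOLD = 0.4  # m/s, assumed preset threshold

@dataclass
class RadarPoint:
    azimuth_deg: float      # azimuth of the point in the radar frame
    doppler: float          # measured Doppler velocity (m/s)

def target_doppler(theta_rad: float, v_ego: float,
                   turning: bool = False, v_x: float = 0.0, v_y: float = 0.0) -> float:
    """Predicted Doppler for a static point at angle theta (azimuth + installation angle)."""
    if not turning:
        return v_ego * math.cos(theta_rad)                          # straight-line driving
    return v_y * math.cos(theta_rad) + v_x * math.sin(theta_rad)    # non-straight-line driving

def split_dynamic_static(points, install_angle_deg: float, v_ego: float):
    static, dynamic = [], []
    for p in points:
        theta = math.radians(p.azimuth_deg + install_angle_deg)
        diff = abs(p.doppler - target_doppler(theta, v_ego))
        (dynamic if diff > DOPPLER_THRESHOLD else static).append(p)
    return static, dynamic

if __name__ == "__main__":
    pts = [RadarPoint(10.0, 13.7), RadarPoint(-5.0, 2.0)]
    print(split_dynamic_static(pts, 0.0, 13.9))  # first point static, second dynamic
```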
Furthermore, because the accuracy of the dynamic and static separation of the point cloud data directly affects the detection precision of the drivable area and the judgment of the attributes of target obstacles, dynamic and static separation is performed separately on the point cloud data obtained by each radar sensor, i.e. separately on the point cloud data obtained by the forward millimeter wave radar and on the point cloud data obtained by each lateral millimeter wave radar, so as to obtain the dynamic and static point cloud data corresponding to the forward millimeter wave radar and the dynamic and static point cloud data corresponding to each lateral millimeter wave radar. In an embodiment, the step of performing dynamic and static separation on the point cloud data is performed before the synchronization processing, i.e. during the synchronization processing the dynamic point cloud data and static point cloud data already obtained for each radar sensor are synchronized.
In step S202, static point cloud data in the synchronized point cloud data is rasterized to obtain a first raster image, and the synchronized image information is rasterized to obtain a second raster image.
The rasterization of the point cloud data acquired by each radar sensor may include the following steps: the detection area of the radar sensor is rasterized and the number of static points in each grid is counted; when the number of static points contained in a grid is larger than a first target threshold, the grid is an occupied grid and its attribute is marked as occupied; when the number of static points contained in the grid is smaller than or equal to the first target threshold, the grid is an invalid grid and its attribute is marked as invalid; a first grid map composed of the occupied grids and the invalid grids is thus obtained. In this process, marking the attribute of each grid is the rasterization of the point cloud data.
When a plurality of radar sensors are included, there are a plurality of first grid maps. Taking the case where the radar sensors include a forward radar sensor and lateral radar sensors as an example, the first grid maps then include a forward grid map and lateral grid maps, where the forward grid map is the first grid map corresponding to the point cloud data obtained by the forward radar sensor and a lateral grid map is the first grid map corresponding to the point cloud data obtained by a lateral radar sensor. Further, when the number of lateral radar sensors is 4, the number of lateral grid maps is also 4. For the forward radar sensor, its detection area is rasterized and the number of static points in each grid is counted; when the number of static points contained in a grid is greater than the first target threshold the grid is determined to be occupied, otherwise it is invalid, and the forward grid map is determined from the occupied grids and the invalid grids. Similarly, based on the above steps, the grid map corresponding to each lateral radar sensor, i.e. the lateral grid map, may be determined; the detailed steps are not repeated.
The rasterization of the image information acquired by the image sensor includes: the camera acquisition area is rasterized; among the grids corresponding to the camera acquisition area, if a non-moving target falls into a grid, that grid is marked as an occupied grid, otherwise it is marked as an invalid grid, thereby obtaining a second grid map composed of the occupied grids and the invalid grids. A non-moving target is a stationary object, such as a stationary vehicle, a bush, or a light pole. Further, before the step of rasterizing the image information, a target tracking process is also included, i.e. target tracking is performed on the image information acquired by the camera to obtain the lane lines and the target information, and the lane lines and the target information are then subjected to time synchronization and spatial synchronization in sequence to obtain the synchronized image information.
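A minimal sketch of rasterizing static radar points into a first grid map follows; the grid extent, cell size and the value of the first target threshold are illustrative assumptions.

```python
# Sketch of rasterizing static radar points: divide the detection area into grids,
# count static points per grid, and mark a grid as occupied when its count exceeds
# the first target threshold. Extent, cell size and threshold are assumptions.
import numpy as np

CELL_SIZE = 0.5          # metres per grid (assumed)
X_RANGE = (0.0, 100.0)   # detection area in vehicle X (assumed)
Y_RANGE = (-20.0, 20.0)  # detection area in vehicle Y (assumed)
FIRST_TARGET_THRESHOLD = 2

def rasterize_static_points(points_xy) -> np.ndarray:
    """Return a boolean grid map: True = occupied, False = invalid."""
    nx = int((X_RANGE[1] - X_RANGE[0]) / CELL_SIZE)
    ny = int((Y_RANGE[1] - Y_RANGE[0]) / CELL_SIZE)
    counts = np.zeros((nx, ny), dtype=int)
    for x, y in points_xy:
        ix = int((x - X_RANGE[0]) / CELL_SIZE)
        iy = int((y - Y_RANGE[0]) / CELL_SIZE)
        if 0 <= ix < nx and 0 <= iy < ny:
            counts[ix, iy] += 1
    return counts > FIRST_TARGET_THRESHOLD

if __name__ == "__main__":
    pts = [(10.2, 3.1), (10.3, 3.2), (10.4, 3.3), (50.0, -2.0)]
    grid = rasterize_static_points(pts)
    print(int(grid.sum()), "occupied grids")
```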
Optionally, the step S203 may include the following steps:
(1) determining the occupancy value of the grid according to the occupancy result of the grid in the first grid map and the preset weight of the area to which the grid belongs and the occupancy result of the grid in the second grid map and the preset weight of the area to which the grid belongs;
(2) dividing the occupancy value of the grid by the number of grid maps to obtain the average occupancy value of the grid;
(3) and when the average occupancy value of the grid is larger than the second target threshold value, the grid is an occupied grid, otherwise, the grid is an invalid grid, and the fusion grid map is determined through the occupied grid and the invalid grid.
Specifically, continuing with the example of the 5R1V improved hybrid fusion structure shown in fig. 5, there are 6 sensors in total, namely 1 forward millimeter wave radar, 4 lateral millimeter wave radars and 1 camera, giving 5 first grid maps (denoted as 1 forward grid map and 4 lateral grid maps) and 1 second grid map. In grid fusion, firstly, according to the detection precision of each sensor, a weight weight[i] is set a priori for each sensor in different areas, i being the sensor number. Then, each grid j of the forward grid map, the lateral grid maps and the second grid map is traversed, the attributes (i.e. occupied or invalid) of each grid in these grid maps are counted, and the target occupancy value cellValue of each grid in the fused grid map to be obtained is calculated based on the statistical result. The calculation formula of the target occupancy value is:
cellValue = Σ_{i=1}^{N} weight[i]·cell_j[i]
wherein N represents the number of sensors and cell_j[i] indicates the occupancy value of the j-th grid in the grid map of the i-th sensor; the occupancy value of a grid is 1 when its attribute is occupied and 0 when its attribute is invalid. Next, the average occupancy value is calculated as avgCellValue = cellValue / sensorNum, where sensorNum is the number of sensors. Finally, if the average occupancy value avgCellValue is greater than the second target threshold, the j-th grid of the fused grid map is occupied, otherwise it is invalid, and the fused grid map composed of the occupied grids and the invalid grids is obtained. The grid attributes in the fused grid map are thus calculated from the first grid maps and the second grid map, and this calculation of the grid attributes in the fused grid map is the data fusion process.
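The weighted fusion and averaging described above can be sketched as follows; the example weights and threshold are assumptions, and boolean grids stand in for the per-sensor grid maps.

```python
# Sketch of grid fusion: per-sensor occupancy grids are combined with a-priori
# weights, averaged over the number of sensors, and thresholded.
# The weights and the threshold value are assumptions.
import numpy as np

def fuse_grids(grids, weights, threshold: float) -> np.ndarray:
    """grids: list of boolean occupancy grids (True = occupied) of equal shape."""
    cell_value = np.zeros(grids[0].shape, dtype=float)
    for grid, weight in zip(grids, weights):
        cell_value += weight * grid.astype(float)   # cellValue = sum_i weight[i] * cell_j[i]
    avg_cell_value = cell_value / len(grids)        # divide by the number of grid maps
    return avg_cell_value > threshold               # occupied where the average exceeds it

if __name__ == "__main__":
    forward = np.array([[1, 0], [0, 1]], dtype=bool)
    lateral = np.array([[1, 0], [0, 0]], dtype=bool)
    camera  = np.array([[1, 1], [0, 1]], dtype=bool)
    fused = fuse_grids([forward, lateral, camera],
                       weights=[1.0, 0.8, 0.9],     # assumed a-priori sensor weights
                       threshold=0.5)
    print(fused)
```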
Optionally, the step S204 may include the following steps:
(1) when the grid in the fused grid map corresponding to a data point in the point cloud data is an occupied grid and the current attribute of the data point is a moving point, correcting the attribute of the data point to a static point;
(2) and the point cloud data corresponding to the corrected static point and the static point cloud data are jointly used as target point cloud data.
Specifically, if a data point in the point cloud data falls within an occupancy grid in the fused grid map, the data point is set as a static point regardless of the attributes preceding the data point, wherein the attributes of the point cloud include a static point and a dynamic point.
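A short sketch of this attribute correction follows, assuming a simple point representation and a caller-supplied mapping from coordinates to grid indices; both are illustrative assumptions.

```python
# Sketch of step S204: any point whose grid in the fused grid map is occupied is
# relabelled as static, and the corrected points plus the original static points
# form the target point cloud. Field names and the index mapping are assumptions.
def correct_attributes(points, fused_grid, to_cell):
    """points: list of dicts with 'x', 'y', 'is_static'.
    to_cell: maps (x, y) to a grid index, or None if outside the grid."""
    target_cloud = []
    for p in points:
        cell = to_cell(p["x"], p["y"])
        if cell is not None and fused_grid[cell]:
            p = dict(p, is_static=True)      # moving point inside an occupied grid -> static
        if p["is_static"]:
            target_cloud.append(p)           # keep static (including corrected) points
    return target_cloud

if __name__ == "__main__":
    import numpy as np
    grid = np.array([[False, True]])
    pts = [{"x": 0.7, "y": 0.0, "is_static": False},   # falls in an occupied grid -> corrected
           {"x": 0.2, "y": 0.0, "is_static": False}]   # stays dynamic -> excluded
    print(correct_attributes(pts, grid, lambda x, y: (0, int(x > 0.5))))
```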
Through the steps S201 to S204, a process of fusing the image information acquired by the camera and the point cloud data acquired by the radar sensor on the data level and correcting the point cloud data by using the image information is realized.
In one embodiment, step S102 includes: accumulating the target point cloud data, and determining a drivable area of a lane where the vehicle is located; and performing feature extraction in the travelable area to obtain road edge information and lane line information corresponding to the target point cloud data.
Specifically, the point cloud data acquired by the radar sensors and the image information acquired by the image sensor are fused, the target point cloud data determined after fusion are accumulated so that the drivable area of the lane where the vehicle is located can be determined intuitively, and the data with different characteristics in the drivable area are then distinguished to extract the road edge information and the lane line information, which can effectively improve the accuracy of lane information identification.
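The section does not detail the accumulation and feature-extraction algorithm, so the sketch below is only one possible reading: per-frame occupancy grids of the target point cloud are accumulated, and the outermost persistently occupied grid in each row is taken as a road edge candidate. The accumulation rule and the edge-picking rule are assumptions, not the patented method.

```python
# Illustrative sketch only: accumulate per-frame occupancy grids of the target
# point cloud and pick, for each longitudinal row, the left/right-most cell that
# is persistently occupied. This boundary-picking rule is an assumption.
import numpy as np

def accumulate_frames(frames, shape):
    """Sum per-frame boolean occupancy grids over time."""
    acc = np.zeros(shape, dtype=int)
    for f in frames:
        acc += f.astype(int)
    return acc

def edge_candidates(acc, min_hits: int = 2):
    """Return (row, leftmost column, rightmost column) for persistently occupied rows."""
    edges = []
    for ix in range(acc.shape[0]):
        cols = np.where(acc[ix] >= min_hits)[0]
        if cols.size:
            edges.append((ix, int(cols[0]), int(cols[-1])))
    return edges

if __name__ == "__main__":
    f1 = np.array([[1, 0, 0, 1], [1, 0, 0, 1]], dtype=bool)
    f2 = np.array([[1, 0, 0, 1], [1, 0, 0, 0]], dtype=bool)
    acc = accumulate_frames([f1, f2], f1.shape)
    print(edge_candidates(acc))
```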
In an embodiment, the invention determines the target road edge by correcting the grid corresponding to the target road edge information with the grid corresponding to the target lane line. Specifically, executing step S104 includes:
step S301: extracting a target lane line corresponding to lane line information, and based on a preset lane line equation, performing vertical coordinate sampling on the target lane line at a preset distance to obtain a plurality of vertical coordinates, wherein the preset lane line equation is an equation corresponding to the target lane line;
step S302: and determining a plurality of sampling points according to the plurality of vertical coordinates and a preset lane line equation, and rounding the horizontal coordinate and the vertical coordinate of each sampling point in the plurality of sampling points to obtain a grid corresponding to the target lane line.
Specifically, the target lane line is determined first, and the grid corresponding to the target lane line is then determined based on the target lane line. With reference to fig. 14, the lane line information is preprocessed to determine the target lane line directly (the dotted line on the left of the Y axis in the figure), and vertical-coordinate sampling is then performed on the target lane line at a preset distance to obtain a plurality of vertical coordinates (Y values). Because the target lane line has been determined, the plurality of vertical coordinates can be substituted into the curve equation corresponding to the target lane line to obtain the horizontal coordinate corresponding to each vertical coordinate, and the corresponding sampling points (the dots on the left of the Y axis) are determined from the horizontal and vertical coordinates. The abscissa and ordinate of each sampling point are then rounded to obtain the grid corresponding to the target lane line (the grid in the left part of the figure); the grid on the right side is the grid corresponding to the target road edge information.
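A minimal sketch of the sampling and rounding just described follows; the cubic lane-line form, the sampling step and the cell size are assumptions (the embodiment only requires a preset lane line equation).

```python
# Sketch of converting the target lane line into grid cells: sample Y values at a
# fixed step, evaluate the lane-line equation to get X, then round both
# coordinates to grid indices. The cubic form, step and cell size are assumptions.
def lane_line_to_grid(coeffs, y_min: float, y_max: float,
                      step: float = 1.0, cell: float = 0.5):
    """coeffs = (c0, c1, c2, c3) for x = c0 + c1*y + c2*y^2 + c3*y^3."""
    cells = []
    y = y_min
    while y <= y_max:
        x = sum(c * y ** k for k, c in enumerate(coeffs))   # lane-line equation
        cells.append((round(x / cell), round(y / cell)))    # round to grid indices
        y += step
    return sorted(set(cells))                               # drop duplicate grids

if __name__ == "__main__":
    # Gently curving lane line, sampled every metre from 0 to 30 m ahead.
    print(lane_line_to_grid((3.5, 0.0, 0.002, 0.0), 0.0, 30.0))
```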
Step S303: carrying out coincidence matching on the grids corresponding to the target lane line and the grids corresponding to the target road edge information, and determining the coincided grids, the grid similarity and the grid coincidence degree, wherein the grids corresponding to the target road edge information are subjected to road edge fusion determination through the first road edge information and the second road edge information;
step S304: and correcting the superposed grids according to the grid similarity and the grid contact ratio to obtain the target road edge.
Specifically, the determination of the target road edge is described with reference to figs. 14 to 16. The grid on the right side in fig. 14 is the grid corresponding to the target road edge information determined by road edge fusion of the first road edge information and the second road edge information. The grid in the left part of fig. 14 is shifted to the right and superposed on the grid on the right, yielding the superposed grid, i.e. the grid on the right of the arrow in fig. 15. The similarity and the coincidence degree between the grid corresponding to the target lane line and the grid corresponding to the target road edge information are then calculated, and the superposed grid is corrected according to the similarity and the coincidence degree to obtain the grid corresponding to the target road edge (the grid corresponding to the dark line frame in fig. 16); the target road edge can then be determined from this grid. Optionally, the correction may add grids to or delete grids from the superposed grid.
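The coincidence matching and correction can be sketched as follows; the definitions of grid similarity and coincidence degree, the thresholds, and the add/delete rule are assumptions, since the embodiment does not fix these formulas.

```python
# Sketch of coincidence matching between the lane-line grid and the road-edge
# grid: shift the lane-line grids onto the road-edge grids, compute similarity
# and coincidence degree, then add or delete grids. Definitions and thresholds
# below are assumptions.
def match_and_correct(lane_cells, edge_cells, shift,
                      sim_threshold: float = 0.6, coin_threshold: float = 0.5):
    """lane_cells/edge_cells: sets of (ix, iy) grid indices; shift: (dx, dy)."""
    shifted = {(x + shift[0], y + shift[1]) for (x, y) in lane_cells}
    overlap = shifted & edge_cells
    similarity = len(overlap) / max(len(shifted), 1)        # assumed definition
    coincidence = len(overlap) / max(len(edge_cells), 1)    # assumed definition
    corrected = set(edge_cells)
    if similarity >= sim_threshold and coincidence >= coin_threshold:
        corrected |= shifted                      # add lane-supported grids to the road edge
    else:
        corrected -= (edge_cells - shifted)       # delete edge grids not supported by the lane line
    return corrected, similarity, coincidence

if __name__ == "__main__":
    lane = {(0, 0), (0, 1), (1, 2)}
    edge = {(3, 0), (3, 1), (4, 2), (5, 5)}
    print(match_and_correct(lane, edge, shift=(3, 0)))
```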
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present invention.
The following are embodiments of the apparatus of the invention, reference being made to the corresponding method embodiments described above for details which are not described in detail therein.
Fig. 17 is a schematic structural diagram of a road edge detection device according to an embodiment of the present invention, which only shows parts related to the embodiment of the present invention for convenience of description, and the details are as follows:
as shown in fig. 17, a road edge detecting apparatus includes:
a point cloud data determining module 171, configured to determine target point cloud data according to the point cloud data obtained by the radar sensor and the image information acquired by the image sensor;
the feature extraction module 172 is configured to perform feature extraction on the target point cloud data, and determine first road edge information corresponding to the target point cloud data;
a road edge recognition module 173, configured to perform road edge recognition on the image information to obtain second road edge information and lane line information corresponding to the image information;
and a target lane information determining module 174 configured to determine a target road edge and a target lane line according to the first road edge information, the second road edge information, and the lane line information.
In one possible implementation, the target road-edge determination module 174 includes: the vertical coordinate selection submodule, configured to extract the target lane line corresponding to the lane line information and to sample vertical coordinates on the target lane line at a preset distance based on a preset lane line equation to obtain a plurality of vertical coordinates, wherein the preset lane line equation is the equation corresponding to the target lane line; the lane line grid determining submodule, configured to determine a plurality of sampling points according to the plurality of vertical coordinates and the preset lane line equation, and to round the horizontal coordinate and the vertical coordinate of each of the sampling points to obtain the grid corresponding to the target lane line; the grid coincidence matching submodule, configured to perform coincidence matching on the grid corresponding to the target lane line and the grid corresponding to the target road edge information and to determine the coincided grid, the grid similarity and the grid coincidence degree, wherein the grid corresponding to the target road edge information is determined by road edge fusion of the first road edge information and the second road edge information; and the grid correction submodule, configured to correct the superposed grid according to the grid similarity and the grid coincidence degree to obtain the target road edge.
In one possible implementation, the point cloud data determining module 171 includes: the synchronous processing submodule is used for carrying out synchronous processing on the point cloud data and the image information to obtain synchronized point cloud data and synchronized image information; the rasterization sub-module is used for rasterizing static point cloud data in the synchronized point cloud data to obtain a first raster image, and rasterizing the synchronized image information to obtain a second raster image; the fusion submodule is used for fusing the first grid image and the second grid image according to a preset fusion method to obtain a fusion grid image; and the correction submodule is used for correcting the point cloud data according to the fusion raster image to obtain target point cloud data.
In a possible implementation manner, before the rasterization submodule the device further includes: the dynamic and static separation submodule, configured to perform dynamic and static separation on the point cloud data acquired by the radar sensor to obtain dynamic point cloud data and static point cloud data. Correspondingly, the rasterization submodule is configured to: rasterize the detection area of the radar sensor, count the number of static points in each grid, determine a grid to be occupied when the number of static points contained in it is greater than the first target threshold and invalid otherwise, and determine the first grid map from the occupied grids and the invalid grids.
In one possible implementation, the fusion submodule includes: an occupancy value calculation unit, configured to determine, for each grid, the occupancy value of the grid according to the occupancy result of the grid in the first grid map and the preset weight of the area to which it belongs, and the occupancy result of the grid in the second grid map and the preset weight of the area to which it belongs; an average value calculation unit, configured to divide the occupancy value of the grid by the number of grid maps to obtain the average occupancy value of the grid; and a fused grid determining unit, configured to determine the grid as an occupied grid when the average occupancy value of the grid is greater than the second target threshold and as an invalid grid otherwise, and to determine the fused grid map from the occupied grids and the invalid grids.
In one possible implementation, the modification submodule includes: the judging unit is used for correcting the attribute of the data point into a static point when the data point in the point cloud data corresponds to the occupied grid in the fusion grid map and the current attribute of the data point is a dynamic point; and the target point cloud determining unit is used for taking the point cloud data corresponding to the corrected static point and the static point cloud data as target point cloud data together.
In one possible implementation, the dynamic-static separation submodule includes: the actual measurement value acquisition unit is used for acquiring the actual measurement Doppler velocity of each data point in the point cloud data; a target value calculation unit for calculating a target doppler velocity of each data point according to a current vehicle velocity; a difference value calculating unit for calculating the difference value between the actual measurement Doppler velocity and the target Doppler velocity; and the judging unit is used for marking the data points as moving points when the absolute value of the difference is greater than a preset threshold, marking the data points as static points when the absolute value of the difference is less than or equal to the preset threshold, and dividing the point cloud data into dynamic point cloud data corresponding to the moving points and static point cloud data corresponding to the static points.
In one possible implementation, the target value calculation unit includes: when the vehicle is in a straight-line driving state, calculating the target doppler velocity of each data point according to the current vehicle velocity specifically comprises:
V_di = V_ego·cos θ_i
wherein V_di is the target Doppler velocity; V_ego is the vehicle running speed; and θ_i is the sum of the azimuth angle and the installation angle of the i-th data point, the installation angle being the included angle between the line connecting the origin of the vehicle coordinate system with the origin of the coordinate system of the radar sensor or image sensor and the Y axis of the vehicle coordinate system, and the azimuth angle being the angle through which the line connecting the i-th data point with the origin of the coordinate system of the radar sensor or image sensor rotates, along the minimum path, to the normal-vector y axis of that coordinate system origin, the azimuth angle being a positive value during counterclockwise rotation and a negative value during clockwise rotation; when the vehicle is in a non-linear driving state, calculating the target Doppler velocity of each data point according to the current vehicle speed specifically comprises:
V_di = V_y·cos θ_i + V_x·sin θ_i
wherein V_di is the target Doppler velocity, V_x is the linear velocity of the vehicle running speed in the X-axis direction of the vehicle coordinate system, V_y is the linear velocity of the vehicle running speed in the Y-axis direction of the vehicle coordinate system, and θ_i is the sum of the azimuth angle and the installation angle of the i-th data point.
In one possible implementation, the radar sensor is a millimeter wave radar sensor, and includes 1 forward radar sensor and 4 lateral radar sensors, and the image sensor includes 1 camera.
Fig. 18 is a schematic diagram of a road edge detecting apparatus applied to a vehicle according to an embodiment of the present invention. As shown in fig. 18, the road edge detecting device 18 applied to a vehicle of this embodiment includes: a processor 180, a memory 181, and a computer program 182 stored in memory 181 and executable on processor 180. The processor 180, when executing the computer program 182, implements the steps in the above-described embodiments of the road edge detection method, such as the steps 101 to 104 shown in fig. 4. Alternatively, the processor 180, when executing the computer program 182, implements the functions of the respective modules/units in the above-described respective apparatus embodiments, for example, the functions of the modules/units 171 to 174 shown in fig. 17.
Illustratively, the computer program 182 may be divided into one or more modules/units, which are stored in the memory 181 and executed by the processor 180 to implement the present invention. One or more of the modules/units may be a series of computer program instruction segments capable of performing specific functions that describe the execution of the computer program 182 in the road-edge detecting device 18 applied to the vehicle. For example, the computer program 182 may be divided into the modules/units 171 to 174 shown in fig. 17.
The road edge detection device 18 applied to the vehicle may be a computing device such as a desktop computer, a notebook computer, a palmtop computer or a cloud server. The road edge detection device 18 applied to the vehicle may include, but is not limited to, the processor 180 and the memory 181. Those skilled in the art will appreciate that fig. 18 is merely an example of the road edge detection device 18 applied to the vehicle and does not constitute a limitation on it; the device may include more or fewer components than those shown, a combination of certain components, or different components; for example, it may also include an input/output device, a network access device, a bus and the like.
The processor 180 may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 181 may be an internal storage unit of the road edge detection device 18 applied to the vehicle, such as a hard disk or an internal memory of the device. The memory 181 may also be an external storage device of the road edge detection device 18 applied to the vehicle, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card or a flash card (Flash Card) provided on the device. Further, the memory 181 may include both an internal storage unit and an external storage device of the road edge detection device 18 applied to the vehicle. The memory 181 is used to store the computer program and other programs and data required by the device, and may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules, so as to perform all or part of the functions described above. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiments provided by the present invention, it should be understood that the disclosed apparatus/terminal and method may be implemented in other ways. For example, the above-described apparatus/terminal embodiments are merely illustrative; the division of the modules or units is merely a division by logical function, and there may be other division manners in actual implementation, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the shown or discussed mutual coupling, direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated modules/units, if implemented in the form of software functional units and sold or used as independent products, may be stored in a computer-readable storage medium. Based on such understanding, all or part of the processes in the methods of the above embodiments may be implemented by a computer program instructing related hardware; the computer program may be stored in a computer-readable storage medium, and when the computer program is executed by a processor, the steps of the above embodiments of the road edge detection method can be implemented. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content contained in the computer-readable medium may be appropriately increased or decreased as required by legislation and patent practice in the jurisdiction; for example, in some jurisdictions, computer-readable media do not include electrical carrier signals and telecommunications signals in accordance with legislation and patent practice.
The above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present invention, and are intended to be included within the scope of the present invention.

Claims (10)

1. A road edge detection method, comprising:
determining target point cloud data according to the point cloud data acquired by the radar sensor and image information acquired by the image sensor;
extracting the characteristics of the target point cloud data, and determining first road edge information corresponding to the target point cloud data;
performing road edge identification on the image information to obtain second road edge information and lane line information corresponding to the image information;
and determining a target road edge and a target lane line according to the first road edge information, the second road edge information and the lane line information.
2. The method of claim 1, wherein determining a target road edge and a target lane line based on the first road edge information, the second road edge information, and the lane line information comprises:
extracting a target lane line corresponding to the lane line information, and based on a preset lane line equation, performing vertical coordinate sampling on the target lane line at a preset distance to obtain a plurality of vertical coordinates, wherein the preset lane line equation is an equation corresponding to the target lane line;
determining a plurality of sampling points according to the plurality of vertical coordinates and the preset lane line equation, and rounding the horizontal coordinate and the vertical coordinate of each of the plurality of sampling points to obtain grids corresponding to the target lane line;
carrying out coincidence matching on the grids corresponding to the target lane line and the grids corresponding to target road edge information, and determining the coincided grids, a grid similarity and a grid coincidence degree, wherein the grids corresponding to the target road edge information are determined by road edge fusion of the first road edge information and the second road edge information;
and correcting the coincided grids according to the grid similarity and the grid coincidence degree to obtain the target road edge.
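For illustration only (not part of the claims), a minimal Python sketch of the sampling and coincidence matching in claim 2; the polynomial form of the lane line equation, the sampling step and the similarity/coincidence definitions are assumptions.

```python
import numpy as np

def lane_line_grids(coeffs, y_min, y_max, step=1.0):
    """Sample the lane line x = f(y) every `step` metres of ordinate and round
    both coordinates to integer grid indices (the polynomial form of the
    'preset lane line equation' is an assumption)."""
    ys = np.arange(y_min, y_max + step, step)
    xs = np.polyval(coeffs, ys)
    return {(int(round(x)), int(round(y))) for x, y in zip(xs, ys)}

def coincidence_match(lane_grids, edge_grids):
    """Grids occupied by both sets, plus simple similarity/coincidence ratios
    (these two ratio definitions are illustrative, not taken from the patent)."""
    common = lane_grids & edge_grids
    similarity = len(common) / max(len(lane_grids | edge_grids), 1)
    coincidence = len(common) / max(len(edge_grids), 1)
    return common, similarity, coincidence
```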
3. The method of claim 1, wherein determining target point cloud data from the point cloud data acquired by the radar sensor and the image information acquired by the image sensor comprises:
performing synchronization processing on the point cloud data and the image information to obtain synchronized point cloud data and synchronized image information;
rasterizing static point cloud data in the synchronized point cloud data to obtain a first grid map, and rasterizing the synchronized image information to obtain a second grid map;
fusing the first grid map and the second grid map according to a preset fusion method to obtain a fused grid map;
and correcting the point cloud data according to the fused grid map to obtain the target point cloud data.
4. The method according to claim 3, wherein before the rasterizing the static point cloud data in the synchronized point cloud data to obtain the first grid map, the method further comprises:
performing dynamic and static separation on the point cloud data acquired by the radar sensor to obtain dynamic point cloud data and static point cloud data;
correspondingly, the rasterizing the static point cloud data in the synchronized point cloud data to obtain a first grid map comprises:
rasterizing a detection area of the radar sensor, counting the number of static points in each grid, determining a grid as an occupied grid when the number of static points contained in the grid is greater than a first target threshold, otherwise determining the grid as an invalid grid, and determining the first grid map according to the occupied grids and the invalid grids.
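For illustration only (not part of the claims), a minimal Python sketch of building the first grid map from static points as in claim 4; the cell size, grid extent, origin and threshold values are assumptions.

```python
import numpy as np

def build_first_grid_map(static_xy, cell=0.2, shape=(200, 200),
                         origin=(-20.0, 0.0), first_threshold=3):
    """Rasterize the radar detection area: count static points per cell and
    mark a cell as occupied when the count exceeds the first target threshold.
    All parameter values here are illustrative."""
    counts = np.zeros(shape, dtype=int)
    for x, y in static_xy:
        col = int((x - origin[0]) / cell)
        row = int((y - origin[1]) / cell)
        if 0 <= row < shape[0] and 0 <= col < shape[1]:
            counts[row, col] += 1
    return counts > first_threshold   # True = occupied grid, False = invalid grid
```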
5. The method according to claim 3, wherein the fusing the first grid map and the second grid map according to a preset fusion method to obtain a fused grid map comprises:
sequentially calculating a target occupation value of each corresponding grid in the fused grid map according to the weights of the first grid map and the second grid map in the fused grid map and the occupation value of each grid in the first grid map and the second grid map;
dividing the target occupation value by the total number of grid maps (namely the first grid map and the second grid map) to sequentially calculate an average occupation value of each corresponding grid in the fused grid map;
and when the average occupation value is greater than a second target threshold, marking the corresponding grid in the fused grid map as an occupied grid, otherwise marking the corresponding grid as an invalid grid, so as to obtain the fused grid map formed by the occupied grids and the invalid grids.
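For illustration only (not part of the claims), a minimal Python sketch of the weighted grid-map fusion in claim 5; the weights and the second target threshold are assumed values.

```python
import numpy as np

def fuse_grid_maps(first_map, second_map, w_first=0.6, w_second=0.4,
                   second_threshold=0.5):
    """Weighted fusion of two boolean occupancy grids of equal shape.  The
    target occupation value of a cell is the weighted sum of its per-map
    occupation values (1 = occupied, 0 = invalid); dividing by the number of
    grid maps (two) gives the average occupation value, which is compared
    with the second target threshold."""
    target = w_first * first_map.astype(float) + w_second * second_map.astype(float)
    average = target / 2.0
    return average > second_threshold   # fused grid map: True = occupied grid
```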
6. The method of claim 4, wherein the correcting the point cloud data according to the fused grid map to obtain target point cloud data comprises:
when the grid corresponding to a data point of the point cloud data in the fused grid map is an occupied grid and the current attribute of the data point is a moving point, correcting the attribute of the data point to a static point;
and taking the point cloud data corresponding to the corrected static points, together with the static point cloud data, as the target point cloud data.
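For illustration only (not part of the claims), a minimal Python sketch of the point cloud correction in claim 6; the `to_cell` mapping from a point to its grid index is assumed to be provided by the rasterization step.

```python
def correct_point_cloud(moving_points, static_points, fused_map, to_cell):
    """Re-label moving points whose cell in the fused grid map is occupied and
    merge them with the static points to obtain the target point cloud."""
    corrected = []
    for p in moving_points:
        row, col = to_cell(p)
        inside = 0 <= row < fused_map.shape[0] and 0 <= col < fused_map.shape[1]
        if inside and fused_map[row, col]:   # occupied grid -> attribute becomes static
            corrected.append(p)
    return list(static_points) + corrected   # target point cloud
```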
7. The method of claim 4, wherein the dynamic and static separation of the point cloud data obtained by the radar sensor to obtain dynamic point cloud data and static point cloud data comprises:
acquiring the actual measurement Doppler velocity of each data point in the point cloud data;
calculating a target Doppler velocity of each data point according to the current vehicle velocity;
calculating the difference value of the measured Doppler velocity and the target Doppler velocity;
when the absolute value of the difference is larger than a preset threshold, the data point is marked as a moving point, when the absolute value of the difference is smaller than or equal to the preset threshold, the data point is marked as a static point, and the point cloud data is divided into dynamic point cloud data corresponding to the moving point and static point cloud data corresponding to the static point.
8. The method of claim 7, wherein the calculating a target Doppler velocity of each data point according to the current vehicle velocity comprises:
when the vehicle is in a straight-line driving state, the calculating the target Doppler velocity of each data point according to the current vehicle velocity specifically includes:
V_di = V_ego · cos(θ_i)
wherein, VdiIs the target doppler velocity; vegoAs the vehicle running speed, thetaiThe azimuth angle is the sum of the azimuth angle and the installation angle of the ith data point, the installation angle is the included angle between the connecting line of the origin of the coordinate system of the vehicle and the origin of the coordinate system of the radar sensor or the image sensor and the Y axis of the coordinate system of the vehicle, and the azimuth angle is the minimum path rotation of the connecting line of the ith data point and the origin of the coordinate system of the radar sensor or the image sensor to the radarThe angle passed by the y axis of the normal vector of the origin of the coordinate system of the sensor or the image sensor is an angle, wherein when the sensor or the image sensor rotates anticlockwise, the azimuth angle is a positive value, and when the sensor or the image sensor rotates clockwise, the azimuth angle is a negative value;
when the vehicle is in a non-straight-line driving state, the calculating the target Doppler velocity of each data point according to the current vehicle velocity specifically includes:
V_di = V_y · cos(θ_i) + V_x · sin(θ_i)
wherein V_di is the target Doppler velocity; V_x is the component of the vehicle running speed along the X axis of the vehicle coordinate system; V_y is the component of the vehicle running speed along the Y axis of the vehicle coordinate system; and θ_i is the sum of the azimuth angle and the installation angle of the i-th data point.
9. The method of any one of claims 1 to 8, wherein the radar sensor is a millimeter wave radar sensor and comprises 1 forward radar sensor and 4 side radar sensors, and the image sensor comprises 1 camera.
10. A road edge detection device for a vehicle, comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor, when executing the computer program, implements the steps of the road edge detection method as claimed in any one of the preceding claims 1 to 9.
CN202111088074.1A 2021-09-16 2021-09-16 Road edge detection method and road edge detection equipment applied to vehicle Pending CN113989766A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111088074.1A CN113989766A (en) 2021-09-16 2021-09-16 Road edge detection method and road edge detection equipment applied to vehicle

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111088074.1A CN113989766A (en) 2021-09-16 2021-09-16 Road edge detection method and road edge detection equipment applied to vehicle

Publications (1)

Publication Number Publication Date
CN113989766A true CN113989766A (en) 2022-01-28

Family

ID=79735983

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111088074.1A Pending CN113989766A (en) 2021-09-16 2021-09-16 Road edge detection method and road edge detection equipment applied to vehicle

Country Status (1)

Country Link
CN (1) CN113989766A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114526746A (en) * 2022-03-15 2022-05-24 智道网联科技(北京)有限公司 Method, device and equipment for generating high-precision map lane line and storage medium
CN114724116A (en) * 2022-05-23 2022-07-08 禾多科技(北京)有限公司 Vehicle traffic information generation method, device, equipment and computer readable medium
CN115840227A (en) * 2023-02-27 2023-03-24 福思(杭州)智能科技有限公司 Road edge detection method and device
CN116503383A (en) * 2023-06-20 2023-07-28 上海主线科技有限公司 Road curve detection method, system and medium
CN116503383B (en) * 2023-06-20 2023-09-12 上海主线科技有限公司 Road curve detection method, system and medium

Similar Documents

Publication Publication Date Title
CN109270534B (en) Intelligent vehicle laser sensor and camera online calibration method
CN113989766A (en) Road edge detection method and road edge detection equipment applied to vehicle
CN112396650B (en) Target ranging system and method based on fusion of image and laser radar
CN108805934B (en) External parameter calibration method and device for vehicle-mounted camera
CN112581612B (en) Vehicle-mounted grid map generation method and system based on fusion of laser radar and all-round-looking camera
CN111192295B (en) Target detection and tracking method, apparatus, and computer-readable storage medium
CN103065323B (en) Subsection space aligning method based on homography transformational matrix
CN113985405A (en) Obstacle detection method and obstacle detection equipment applied to vehicle
CN117441113A (en) Vehicle-road cooperation-oriented perception information fusion representation and target detection method
US11783507B2 (en) Camera calibration apparatus and operating method
CN113034586B (en) Road inclination angle detection method and detection system
CN110555407A (en) pavement vehicle space identification method and electronic equipment
CN112097732A (en) Binocular camera-based three-dimensional distance measurement method, system, equipment and readable storage medium
CN112748421A (en) Laser radar calibration method based on automatic driving of straight road section
CN112740225A (en) Method and device for determining road surface elements
CN111736613A (en) Intelligent driving control method, device and system and storage medium
CN114413958A (en) Monocular vision distance and speed measurement method of unmanned logistics vehicle
CN111538008B (en) Transformation matrix determining method, system and device
CN111382591B (en) Binocular camera ranging correction method and vehicle-mounted equipment
US20190003822A1 (en) Image processing apparatus, device control system, imaging apparatus, image processing method, and recording medium
CN114792416A (en) Target detection method and device
CN114140533A (en) Method and device for calibrating external parameters of camera
CN115618602A (en) Lane-level scene simulation method and system
WO2022133986A1 (en) Accuracy estimation method and system
CN115239822A (en) Real-time visual identification and positioning method and system for multi-module space of split type flying vehicle

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination