CN111505623B - Method and system for detecting obstacle in driving process of unmanned vehicle and vehicle - Google Patents
- Publication number
- CN111505623B (application CN202010333455.0A)
- Authority
- CN
- China
- Prior art keywords
- target
- sensor
- obstacle
- type
- original
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S13/00—Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
- G01S13/86—Combinations of radar systems with non-radar systems, e.g. sonar, direction finder
- G01S13/867—Combination of radar systems with cameras
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S13/00—Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
- G01S13/88—Radar or analogous systems specially adapted for specific applications
- G01S13/93—Radar or analogous systems specially adapted for specific applications for anti-collision purposes
- G01S13/931—Radar or analogous systems specially adapted for specific applications for anti-collision purposes of land vehicles
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0212—Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
- G05D1/0214—Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory in accordance with safety or protection criteria, e.g. avoiding hazardous areas
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Abstract
The invention discloses a method, a system and a vehicle for detecting obstacles in the running process of an unmanned vehicle. A radar coordinate system and a camera coordinate system are unified into a vehicle body coordinate system; original message data of a first sensor and a second sensor are read and analyzed to obtain original obstacle targets under the vehicle body coordinate system; whether the original obstacle targets detected by the first sensor and the second sensor are the same obstacle is judged; and the targets judged to be the same obstacle are fused to obtain the final obstacle target information. The invention fuses the data of the first sensor and the second sensor, solves the problems that existing methods cannot meet the real-time requirement, have low detection efficiency and high cost, and are easily influenced by the environment, and improves the detection reliability and detection efficiency.
Description
Technical Field
The invention relates to the field of target perception of unmanned vehicles, in particular to a method, a system and a vehicle for detecting obstacles in the driving process of an unmanned vehicle.
Background
With the boom of artificial intelligence, unmanned driving, as an important branch of it, is also developing rapidly. An automatic driving automobile comprises perception, decision, control and other modules, among which environment perception involves various sensors and is an important guarantee of safety for the automatic driving automobile; the target detection task is the most critical link of the perception task. Target detection refers to the task of giving information about the various obstacles, such as vehicles, in an automatic driving scene. To better represent the different degrees of automation of a vehicle, the Society of Automotive Engineers divides driving automation into levels 0-5. As the automation level increases, the requirements of the automobile on target detection become higher and higher, so studying target detection for automatic driving is important, and it places correspondingly stringent demands on the perception system of the unmanned vehicle. Improving perception performance is imminent for the further development of unmanned driving technology.
At present, the environment sensing module in an unmanned system depends on various sensors, of which the camera, the millimeter wave radar and the laser radar are the three most important and widely used types. Compared with other sensors, the visual sensor is cheaper and technically less difficult; it provides abundant information, the obtained targets are easy to classify, and short-distance measurement results are accurate. However, it is sensitive to light and weather conditions and requires complex calculation to ensure accuracy. The millimeter wave radar sensor detects objects by emitting millimeter wave signals and analyzing the frequency shift of the reflection; it operates stably under different environmental conditions, measures obstacle speed with high accuracy, and can do so at long range. The millimeter wave radars launched by Delphi, Continental and Bosch are stable and reliable, so they are used in research at many universities and enterprises. However, the targets obtained by radar are difficult to classify, the spatial resolution is limited, and false alarms and missed detections sometimes occur. The laser radar has accurate 3D perception capability, is insensitive to light changes and provides rich information; relying on traditional feature extraction methods or advanced point cloud deep learning networks, it can perform very accurate 3D perception of a target. Its disadvantages are sensitivity to weather such as rain and fog, and high price. Moreover, relying on vision alone cannot meet the real-time requirement, while relying on radar alone the detection rate is not high.
Therefore, a truly reliable recognition system for obstacles in front of the vehicle is difficult to achieve with a single sensor alone; multiple sensors need to be fused to cope with the complex road traffic environment.
Disclosure of Invention
The technical problem to be solved by the invention is to overcome the defects in the prior art: the invention provides a method, a system and a vehicle for detecting obstacles in the driving process of an unmanned vehicle, which effectively detect obstacle targets during the driving of the unmanned vehicle.
In order to solve the technical problems, the technical scheme adopted by the invention is as follows: an obstacle detection method in the driving process of an unmanned vehicle comprises the following steps:
1) Unifying a radar coordinate system and a camera coordinate system into a vehicle body coordinate system;
2) Reading and analyzing original message data of a first sensor and a second sensor to obtain an original obstacle target under the vehicle body coordinate system;
3) Judging whether the original obstacle targets detected by the first sensor and the second sensor are the same obstacle or not;
4) Fusing the targets judged to be the same obstacle to obtain the final obstacle target information;
wherein, the first sensor and the second sensor are different kinds of sensors.
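The four steps above can be sketched as a minimal processing pipeline. All function names and the frame pairing are illustrative assumptions, not part of the patent:

```python
# Minimal sketch of the four-step detection pipeline described above.
# All names are illustrative assumptions; the patent does not specify an API.

def detect_obstacles(radar_frames, camera_frames, to_body_frame, associate, fuse):
    """Run the detection cycle over paired radar/camera frames."""
    fused = []
    for r_raw, c_raw in zip(radar_frames, camera_frames):
        # Steps 1-2: parse raw targets and unify them into the body frame
        r_targets = [to_body_frame(t) for t in r_raw]
        c_targets = [to_body_frame(t) for t in c_raw]
        # Step 3: decide which radar/camera targets are the same obstacle
        for r in r_targets:
            for c in c_targets:
                if associate(r, c):
                    # Step 4: fuse the matched pair into one obstacle
                    fused.append(fuse(r, c))
    return fused
```

The `to_body_frame`, `associate` and `fuse` callables correspond to steps 1), 3) and 4) respectively and are detailed later in the description.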
The invention fuses the data of two sensors, solving the problems that detection relying on a single type of sensor in the prior art is not reliable enough and can hardly cope with the complex road traffic environment; it improves the real-time performance and efficiency of detection and is not easily influenced by the environment.
In step 2), the original message data of the first sensor and the second sensor can be read in an online or an offline mode: the real-time online mode is suitable for unmanned vehicle operation and debugging, while the offline mode is suitable for demonstration and experiments.
To further improve resistance to white noise from the first and second sensors and reduce the probability of false targets, the following operation is performed between step 2) and step 3): screening the original obstacle targets to eliminate false targets and white noise.
The life cycle method has the advantage that a relatively simple algorithm achieves high reliability: it screens out effective targets, adapts to various measuring conditions, and improves the robustness of sensor target identification. The specific implementation of screening the original obstacle targets includes:
a) When an original obstacle target is first decided to be an effective target in a certain period, a life threshold is set for it. Too small a life threshold cannot effectively eliminate transient data interference and loss, while too large a threshold delays the switching of effective targets; the life threshold is therefore generally taken as 5-8 frames (one frame corresponds to one sampling period). The invention sets the life threshold to 6 frames and performs tracking measurement on the effective target;
b) Starting from that period, if the target newly detected by the first sensor and the second sensor is inconsistent with the effective target for N consecutive times, and from the (N+1)th period onward becomes consistent with the effective target again, the original effective target is kept unchanged, where N is less than the life threshold. If the number of consecutive inconsistencies between the newly detected target and the effective target exceeds the life threshold, the original effective target is abandoned, the target newly detected by the first sensor and the second sensor is taken as the new effective target, and tracking measurement is performed on the new effective target;
c) Steps a) and b) are executed cyclically to obtain the effective target output sets screened from the first sensor and the second sensor respectively, providing the input data for step 3).
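As a rough illustration, the life-cycle screening in steps a)-c) can be sketched as follows; the class and function names are assumptions, and the exact switch condition at the threshold is not fully specified in the text:

```python
# Sketch of the life-cycle screening (steps a-c): keep the current effective
# target through short runs of inconsistent detections, switch only after
# the run exceeds the life threshold. Names are illustrative assumptions.

LIFE_THRESHOLD = 6  # frames (one frame = one sampling period)

class LifeCycleFilter:
    def __init__(self):
        self.effective_target = None  # currently tracked effective target
        self.miss_count = 0           # consecutive inconsistent frames

    def update(self, detected, same_target):
        """Feed one frame's detection; return the current effective target."""
        if self.effective_target is None:
            self.effective_target = detected
            self.miss_count = 0
        elif same_target(detected, self.effective_target):
            # Consistent again after N < threshold misses: keep the original
            self.miss_count = 0
        else:
            self.miss_count += 1
            if self.miss_count >= LIFE_THRESHOLD:
                # Run of misses reached the life threshold: switch targets
                self.effective_target = detected
                self.miss_count = 0
        return self.effective_target
```

Transient interference (a few frames of a different or missing target) thus never disturbs the output, while a persistent new target takes over after the threshold is reached.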
In step 3), the specific implementation of determining whether the original obstacle targets detected by the first sensor and the second sensor are the same obstacle includes:
1) Marking the obstacle target detected by the first sensor as an R-type target and the obstacle target detected by the second sensor as a V-type target; the position coordinate of the R-type target is (x_R, y_R) and its speed is V_R; the position coordinate of the V-type target is (x_V, y_V) and its speed is V_V;
2) Judging the relationship between the weighted value of the distance and speed information of the R-type and V-type targets and a threshold: if the weighted value W' is greater than or equal to the threshold W, the two are judged not to be the same obstacle; if W' is less than W, they are judged to be the same obstacle. The threshold is W = λ·0.1·ΔL_R + (1−λ)·ρ·V_R and the weighted value is W' = λ·ΔL + (1−λ)·ΔV, wherein λ is a weight; ρ is a speed error coefficient; ΔL = √((x_R − x_V)² + (y_R − y_V)²); ΔV = |V_R − V_V|.
The judging process fully exploits the high accuracy with which the sensors detect position information: it considers both the position and speed information of the targets and, since the error grows as the detected obstacle speed increases, sets a corresponding speed error coefficient, giving the judging method higher robustness.
λ is a weight used for weighting the distance difference and the velocity difference of the two types of targets, and considering that the measurement accuracy of the two types of sensors on the position information is greater than that of the velocity information, the invention takes λ =0.7.
Rho is a speed error coefficient, and a corresponding speed error coefficient can be set according to the value of the speed, so that the error increase caused by the speed increase is avoided. The speed error coefficient of the invention is specifically set as follows:
when V_R < 40 km/h: ρ = 0.1;
when 40 km/h ≤ V_R < 80 km/h: ρ = 0.09;
when 80 km/h ≤ V_R < 120 km/h: ρ = 0.07;
when V_R ≥ 120 km/h: ρ = 0.05.
Since the detection accuracy of the first sensor for obstacles is higher than that of the second sensor, the threshold is established based on the first sensor; to further improve the detection accuracy, the threshold is set as W = λ·0.1·ΔL_R + (1−λ)·ρ·V_R, as given above.
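Under the definitions above, the same-obstacle decision can be sketched as follows. The Euclidean form of ΔL and its reuse as ΔL_R in the threshold are assumptions where the original figures are not reproduced; λ = 0.7 is as stated in the text:

```python
# Sketch of the same-obstacle decision: weighted distance/speed value W'
# compared against the speed-dependent threshold W. The Euclidean ΔL and
# its use inside the threshold are assumptions; λ = 0.7 per the text.
import math

LAMBDA = 0.7  # weight of the distance term over the speed term

def speed_error_coeff(v_r):
    """ρ as tabulated above, keyed on the radar-measured speed (km/h)."""
    if v_r < 40:
        return 0.10
    if v_r < 80:
        return 0.09
    if v_r < 120:
        return 0.07
    return 0.05

def same_obstacle(xr, yr, vr, xv, yv, vv):
    """True if radar target (R) and camera target (V) are the same obstacle."""
    dl = math.hypot(xr - xv, yr - yv)   # ΔL: position difference
    dv = abs(vr - vv)                   # ΔV: speed difference
    w_prime = LAMBDA * dl + (1 - LAMBDA) * dv
    rho = speed_error_coeff(vr)
    w = LAMBDA * 0.1 * dl + (1 - LAMBDA) * rho * vr  # threshold W
    return w_prime < w
```

Because ρ shrinks as V_R grows, the speed term of the threshold grows sub-linearly with speed, which is the stated intent of the coefficient table.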
In step 4), the targets judged to be the same obstacle are fused using a Kalman filtering method. Classical Kalman filtering is adopted: the algorithm has good robustness and real-time performance, and compared with fusion algorithms such as neural networks and D-S evidence theory it is lighter and more widely applicable. Preferably, the specific implementation of fusing the targets judged to be the same obstacle with the Kalman filtering method includes:
A) Initialization: after receiving the first effective measured value of the camera, the system state is initialized as
X(0) = [x_V, y_V, V_V-X, V_V-Y]ᵀ,
wherein x_V is the V-type target lateral coordinate; y_V is the V-type target longitudinal coordinate; V_V-X is the V-type target lateral velocity; V_V-Y is the V-type target longitudinal velocity; these are obtained from the Euclidean distance of the V-type target, the absolute value of its speed, and its orientation angle φ.
B) Prediction: substituting the position and speed information of the V-type target at time k−1 into the state prediction formula gives the predicted value at time k:
X(k|k−1) = F·X(k−1).
The covariance P indicates the degree of uncertainty of the system state and is initialized to a preset covariance P(0); the covariance of the prediction is updated by
P(k|k−1) = F·P(k−1)·Fᵀ + Q,
wherein P(k|k−1) is the error covariance of the predicted state X(k|k−1); P(k−1) is the error covariance at time k−1; F is the state transition matrix, i.e. the model of target state transition; Q is the inherent noise interference of the state transition model.
The specific updating steps are as follows: from the initialized covariance P(0), the prediction covariance P(1|0) at time 1 is calculated; from P(1|0), the corrected covariance P(1) at time 1; from P(1), the prediction covariance P(2|1) at time 2; and so on, until P(k−1) gives the prediction covariance P(k|k−1) and, from it, the corrected error covariance P(k) at time k.
From the predicted value X(k|k−1) at time k, the sensor observation z(k) and the Kalman gain K(k), the prediction estimate of the system state is updated:
K(k) = P(k|k−1)·Hᵀ·(H·P(k|k−1)·Hᵀ + R)⁻¹,
X(k) = X(k|k−1) + K(k)·(z(k) − H·X(k|k−1)),
P(k) = (I − K(k)·H)·P(k|k−1),
wherein K(k) is the Kalman gain; H is the conversion matrix from the state variables to the observation variables; R is the measurement noise covariance; z(k) is the first-sensor observation or the second-sensor observation; X(k) is the optimal state corrected through measurement.
C) The V-type target is substituted as the observation z(k) at time k, and the optimal system state X(k) is obtained from the update formula in step B). X(k) is then substituted into the state prediction of step B) to obtain the predicted system state at time k+1, and the R-type target judged to be the same obstacle is substituted as the observation z(k+1); updating the prediction estimate yields the fused effective obstacle information X(k+1).
D) Steps B) and C) are cycled in time sequence, and the measurement-corrected optimal state of each cycle forms the fused obstacle target output set.
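A minimal numerical sketch of this predict/correct cycle follows, using a constant-velocity transition matrix F. The concrete values of F, Q, R, the sampling period and the initial covariance are assumptions, since the patent's matrix figures are not reproduced here:

```python
# Minimal numpy sketch of the sequential Kalman fusion described above:
# predict with F, then correct first with the camera observation and then
# with the radar observation of the same obstacle. All matrix values are
# illustrative assumptions (constant-velocity model).
import numpy as np

DT = 0.05  # sampling period in seconds (assumed)

# State: [x, y, vx, vy]; constant-velocity transition model
F = np.array([[1, 0, DT, 0],
              [0, 1, 0, DT],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.eye(4)          # both sensors observe position and velocity here
Q = np.eye(4) * 1e-3   # process noise (assumed)
R = np.eye(4) * 1e-1   # measurement noise (assumed)

def predict(x, P):
    """State and covariance prediction: X(k|k-1), P(k|k-1)."""
    return F @ x, F @ P @ F.T + Q

def correct(x_pred, P_pred, z):
    """Kalman gain, state correction and covariance correction."""
    K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)
    x = x_pred + K @ (z - H @ x_pred)
    P = (np.eye(4) - K @ H) @ P_pred
    return x, P

def fuse_step(x, P, z_camera, z_radar):
    """One cycle: predict, then sequentially correct with both sensors."""
    x, P = predict(x, P)
    x, P = correct(x, P, z_camera)  # camera (V-type) observation
    x, P = correct(x, P, z_radar)   # radar (R-type) observation, same obstacle
    return x, P
```

Each call to `fuse_step` performs one pass of the cycle: the camera measurement plays the role of z(k) and the radar measurement of z(k+1) in the description above.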
Correspondingly, the invention also provides an obstacle detection system for the driving process of an unmanned vehicle, comprising: a coordinate conversion module, for unifying the first sensor coordinate system and the second sensor coordinate system into a vehicle body coordinate system; and a data analysis module, for reading and analyzing the original message data of the first sensor and the second sensor to obtain the original obstacle targets under the vehicle body coordinate system;
the judging module is used for judging whether the original obstacle targets detected by the first sensor and the second sensor are the same obstacle or not;
and the fusion module is used for fusing the targets judged as the same obstacle to obtain the final obstacle target information.
The system integrates the first sensor data and the second sensor data, solves the problems that in the prior art, when the single type of sensor is used for detection, the reliability is not high, and the complex road traffic environment is difficult to meet, improves the real-time performance and the detection efficiency of detection, and is not easy to be influenced by the environment.
In order to further improve the detection precision, the system of the invention further comprises a screening module, wherein the screening module is used for screening the original obstacle target output by the data analysis module, eliminating the false target and the white noise, and transmitting the original obstacle target after eliminating the false target and the white noise to the judgment module.
The screening module screens the original obstacle target by using an effective life cycle method.
The judging module comprises:
A marking unit, for marking the obstacle target detected by the first sensor as an R-type target and the obstacle target detected by the second sensor as a V-type target; the position coordinate of the R-type target is (x_R, y_R) and its speed is V_R; the position coordinate of the V-type target is (x_V, y_V) and its speed is V_V;
A comparison unit, for judging the relationship between the weighted value of the distance and speed information of the R-type and V-type targets and the threshold: a weighted value W' greater than or equal to the threshold W is judged not to be the same obstacle, and a weighted value less than the threshold is judged to be the same obstacle; wherein the threshold is W = λ·0.1·ΔL_R + (1−λ)·ρ·V_R and the weighted value is W' = λ·ΔL + (1−λ)·ΔV, with λ a weight, ρ a speed error coefficient, ΔL = √((x_R − x_V)² + (y_R − y_V)²), and ΔV = |V_R − V_V|. Preferably, the speed error coefficient is specifically set as follows:
when V_R < 40 km/h: ρ = 0.1;
when 40 km/h ≤ V_R < 80 km/h: ρ = 0.09;
when 80 km/h ≤ V_R < 120 km/h: ρ = 0.07;
when V_R ≥ 120 km/h: ρ = 0.05.
Preferably, the threshold is specifically set as W = λ·0.1·ΔL_R + (1−λ)·ρ·V_R, as above.
the invention also provides, as an inventive concept, an unmanned vehicle comprising a processor for performing the above method steps.
The processor can control the unmanned vehicle to avoid the detected obstacle according to the final obstacle target information, ensuring the driving safety of the unmanned vehicle.
The processor is in communication with the display module for facilitating visual display of the obstacle target.
The processor of the invention is communicated with the data acquisition module; the data acquisition module comprises at least one first sensor and at least one second sensor which are arranged on the front part of a vehicle body; the first sensor and the second sensor are heterogeneous sensors.
To save cost and for convenience of use, the first sensor of the invention is a radar and the second sensor is a camera.
Compared with the prior art, the invention has the following beneficial effects:
1) The invention integrates two types of sensor data, solves the problems that the existing method can not meet the real-time requirement, has low detection efficiency, high cost and is easily influenced by the environment, and improves the detection reliability and the detection efficiency;
2) The method can avoid white noise interference when the target of the unmanned vehicle is detected in the driving process, eliminate false targets and further improve the detection reliability and detection precision;
3) The method has the advantages that two analysis modes of online analysis and offline analysis can be selected, the application range is wide, and effective barrier targets on the road can be accurately detected;
4) The invention can intuitively display the coordinate position information of the effective obstacle target and provide the coordinate position information to a decision module (processor) of the unmanned automobile, and has wider application prospect.
Drawings
FIG. 1 is a flowchart of a method according to an embodiment of the present invention.
FIG. 2 is a diagram of millimeter wave radar and camera coordinate transformation in accordance with an embodiment of the present invention;
FIG. 3 is a flow chart of target Kalman filtering data fusion according to an embodiment of the present invention;
FIG. 4 is a diagram illustrating an exemplary target detection result according to an embodiment of the present invention;
FIG. 5 is a block diagram of the system architecture of the present invention;
FIG. 6 is a block diagram of a determining module according to the present invention.
Detailed Description
This embodiment takes a 12-meter intelligent driving passenger car adopting the scheme as an example. As shown in fig. 1, the method comprises the following steps: S1, unifying the coordinates of the millimeter wave radar and the camera into the vehicle body coordinate system. As shown in fig. 2, the positions of the millimeter wave radar and the camera relative to the vehicle body are fixed on the intelligent driving passenger car. The detection angle of the millimeter wave radar in the Z direction is generally only ±5 degrees: too high an installation height increases the blind area, while too low a height directs the radar beam at the ground, whose reflection brings clutter interference and affects the radar's judgment. Therefore, the mounting height of the millimeter wave radar (i.e., the distance from the ground to the center point of the radar module) is generally recommended to be between 500 mm (full load) and 800 mm (no load). Here the millimeter wave radar is mounted at the center of the front of the vehicle body, 0.75 m above the ground, and, to avoid occlusion of its view, the camera is mounted above the millimeter wave radar, 1.2 m above the ground. According to the translation and rotation relation between the millimeter wave radar coordinate system and the camera coordinate system, a coordinate conversion formula between them can be obtained:
wherein (x_c, y_c, z_c) are coordinates in the camera coordinate system; (x_rc, y_rc, z_rc) are coordinates in the millimeter wave radar coordinate system; θ is the inclination angle between the camera and the millimeter wave radar; h is the rotation coefficient between the camera and the millimeter wave radar.
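The conversion formula itself is given as a figure in the original document and is not reproduced here. Purely as an illustration, a rigid rotation-plus-translation of the kind described might look like the following; the rotation axis, the θ value and the vertical offset are all assumptions derived from the mounting heights stated above:

```python
# Illustrative radar-to-camera mapping: rotate by the tilt angle θ about the
# x-axis, then translate by the vertical offset between the two mounts.
# The specific form, θ and the offset are assumptions, not the patent's formula.
import math

THETA = math.radians(5.0)   # tilt between camera and radar (assumed)
H_OFFSET = 0.45             # vertical offset: 1.2 m camera - 0.75 m radar

def radar_to_camera(x_rc, y_rc, z_rc):
    """Map a radar-frame point into the camera frame (illustrative)."""
    y_c = y_rc * math.cos(THETA) - z_rc * math.sin(THETA)
    z_c = y_rc * math.sin(THETA) + z_rc * math.cos(THETA) - H_OFFSET
    return x_rc, y_c, z_c
```

Both frames would then be mapped into the common vehicle body coordinate system by a second, analogous transform.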
and S2, reading and analyzing original message data of the millimeter wave radar and the camera to obtain an original obstacle target in the same coordinate system (a vehicle body coordinate system).
In order to facilitate debugging and interface demonstration, the invention provides two data analysis modes of online and offline:
In the off-line mode, original message data of the millimeter wave radar are read and analyzed from a selectable path; the off-line files analyzed in this embodiment come from data acquired during real-time operation of the intelligent passenger car and are stored as CSV files. With reference to fig. 4, the user can select the millimeter wave radar and camera original message data files to be analyzed at the lower right corner of the interface. The millimeter wave radar adopted is a Continental ARS408-21 and the camera is an IFVS-400 from the Shanghai Zhihuan company, so the original data are read and analyzed strictly according to the corresponding model's message protocol manual;
In the real-time online mode, the corresponding original message data can be read and analyzed from the CAN-USB interfaces of the running millimeter wave radar and camera.
And S3, screening the original obstacle target by using an effective life cycle method, and eliminating a false target and white noise. The effective life cycle method comprises the following specific steps:
Step one: when the millimeter wave radar or camera processing module has resolved an obstacle target from the original data for 6 continuous frames, the target is decided to be an effective target. A life threshold (6 continuous frames) is set for the effective target, and tracking measurement is performed on it.
Step two: if, from a certain period, the newly detected target is inconsistent with the effective target (i.e., not the same target) for N consecutive times but becomes consistent again in the (N+1)th and subsequent detection periods, where N is smaller than the life threshold (i.e., only 1-5 frames, N < 6), then the N consecutive inconsistencies were caused by interference under working conditions such as vehicle bump, vehicle swing, missed detection or false targets, and the inconsistency is temporary. The processing in this case is to keep the original effective target unchanged; to ensure the continuity of the output of the original effective target's running-state information over the middle N periods, no switching to a new target is required.
Step three: if the number of times the newly detected target is continuously inconsistent with the effective target (i.e., not the same target) from a certain frame exceeds the life threshold (6 continuous frames), the old effective target is no longer the most dangerous target and the new target is more threatening. In this case target switching is performed: the original effective target is abandoned, the newly detected target is taken as the new effective target, its measurement information is taken as the initial state of the effective target, and tracking measurement is performed on the new effective target.
S4, judging whether the effective targets detected by the millimeter wave radar and the camera are the same obstacle or not
The detailed steps are as follows:
The method comprises the following steps. Step one: record an effective target point detected by the millimeter wave radar as an R-type target, and an effective target point detected by the camera as a V-type target. The position coordinates of the R-type obstacle target are (x_R, y_R) and its velocity is V_R; the position coordinates of the V-type obstacle target are (x_V, y_V) and its velocity is V_V.
Step two: set the threshold as:
W = λ·0.1·L_R + (1 − λ)·ρ·V_R;
wherein λ is a weight used to balance the distance difference and the speed difference of the two types of targets; considering that both sensors measure position more accurately than speed, λ = 0.7 is taken;
ΔL = √((x_R − x_V)² + (y_R − y_V)²) is the distance difference between the effective targets detected by the millimeter wave radar and the camera;
ΔV = |V_R − V_V| is the speed difference between the effective targets detected by the millimeter wave radar and the camera;
ρ is a speed error coefficient, set per speed range so that the error term does not grow unboundedly as speed increases. In the invention it is specifically set as follows:
when V_R < 40 km/h: ρ = 0.1;
when 40 km/h ≤ V_R < 80 km/h: ρ = 0.09;
when 80 km/h ≤ V_R < 120 km/h: ρ = 0.07;
when V_R ≥ 120 km/h: ρ = 0.05.
Considering that the millimeter wave radar detects obstacles more accurately than the camera, the threshold is referenced to the radar's own measurements, L_R = √(x_R² + y_R²) and V_R.
step three: and substituting the distance information and the speed information of the R-type target and the V-type target into a weighted value formula W ' = lambda delta L + (1-lambda) delta V, and if the weighted value (W ') is larger than or equal to the threshold (W), judging that the R-type target and the V-type target are not the same obstacle, and if the weighted value (W ') is smaller than the threshold, judging that the R-type target and the V-type target are the same obstacle.
And S5, performing data fusion on the targets judged to be the same obstacle by using Kalman filtering.
The specific implementation steps are as follows:
A) Initialization: after the first valid measurement value from the camera is received, the system state is initialized as x̂_0 = [x_V, y_V, V_{V-X}, V_{V-Y}]^T;
wherein: x_V is the V-type target horizontal coordinate; y_V is the V-type target longitudinal coordinate; V_{V-X} is the V-type target transverse velocity; V_{V-Y} is the V-type target longitudinal velocity; α is the Euclidean distance of the V-type target; |v| is the absolute value of the V-type target velocity; φ is the orientation angle of the V-type target.
B) Substitute the position and speed information of the V-type target initialized at time k−1 into the state prediction formula to obtain the predicted value at time k: x̂_k^- = F·x̂_{k-1}.
P_k indicates the degree of uncertainty of the system state and is represented by the covariance of x̂_k; an initial covariance P_0 is set at initialization. The covariance is updated by the prediction equation: P_k^- = F·P_{k-1}·F^T + Q;
wherein P_k^- is the error covariance of the predicted state x̂_k^-; P_{k-1} is the covariance of x̂_{k-1}; x̂_k^- is the Kalman filter fusion system state predicted value at time k; F is the state transition matrix, i.e., a model of the target state transition. For the linear model of the embodiments of the invention, with cycle time T:
F = [1 0 T 0; 0 1 0 T; 0 0 1 0; 0 0 0 1];
Q is the inherent noise interference of the target state transition model; because the obstacles have already been screened and the model's inherent white noise filtered out, Q is taken as zero in this embodiment.
The covariance is updated step by step as follows: from the initial covariance P_0, the predicted covariance P_1^- at time 1 is computed; from P_1^-, the covariance P_1 at time 1 is computed; from P_1, the predicted covariance P_2^- at time 2 is computed; and so on, until P_{k-1} at time k−1 gives the predicted covariance P_k^- at time k, from which the error covariance P_k at time k is computed.
From the error covariance P_k^- of the predicted state, the Kalman gain at time k is calculated: K_k = P_k^-·H^T·(H·P_k^-·H^T + R)^{-1}. Based on the predicted value x̂_k^- at time k, the sensor observation z_k, and the Kalman gain K_k, the prediction estimate of the system state is updated: x̂_k = x̂_k^- + K_k·(z_k − H·x̂_k^-);
wherein K_k is the Kalman gain; H is the conversion matrix from the state variables to the observation variables; x̂_k is the optimal state corrected by the measurement; R is the measurement noise covariance; z_k is the first-sensor or second-sensor observation;
C) Substitute the V-type target as the observation z_k at time k into the updated prediction estimate of step B) to obtain the optimal system state x̂_k. Substitute the optimal system state x̂_k at time k into the state prediction of step B) to obtain the predicted system state at time k+1; then substitute the R-type target judged to be the same obstacle as the observation z_{k+1} at time k+1 and update the prediction estimate to obtain the fused effective obstacle information x̂_{k+1}.
D) Cycle steps B) and C) in time order; the measurement-corrected optimal state x̂_k obtained at each step forms the fused obstacle target output set.
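A minimal numpy sketch of the initialization, prediction, and update cycle described in steps A) to D). The cycle time `dt`, the measurement noise covariance `R`, the initial state values, and the observation matrix H = I are assumed values; the text only fixes Q = 0.

```python
# Kalman fusion sketch: 4-state constant-velocity model [x, y, vx, vy],
# camera (V-type) and radar (R-type) observations alternated as z_k.
import numpy as np

dt = 0.05  # assumed sensor cycle time in seconds

F = np.array([[1, 0, dt, 0],   # constant-velocity state transition
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)
H = np.eye(4)                  # both sensors assumed to report position and velocity
Q = np.zeros((4, 4))           # inherent model noise taken as zero, as in the text
R = np.eye(4) * 0.1            # assumed measurement noise covariance

def kalman_step(x, P, z):
    # B) state and covariance prediction
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Kalman gain and measurement update
    K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(4) - K @ H) @ P_pred
    return x_new, P_new

# A) initialise from the first valid camera measurement
x = np.array([5.0, 20.0, 0.0, 8.0])  # [x_V, y_V, V_V-X, V_V-Y]
P = np.eye(4)

# C)/D) cycle: camera observation at time k, then radar at time k+1
for z in [np.array([5.1, 20.4, 0.0, 8.1]),   # camera z_k
          np.array([5.0, 20.8, 0.1, 8.0])]:  # radar z_{k+1}
    x, P = kalman_step(x, P, z)
```

Each pass through `kalman_step` yields the measurement-corrected optimal state; collected over time, these form the fused obstacle target output set of step D).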
The control decision module of the intelligent driving passenger vehicle can then execute the corresponding driving behavior, such as changing lanes or stopping, according to the fused, reliable obstacle target.
As shown in fig. 5, corresponding to the method of the foregoing embodiment, the system of this embodiment includes:
the coordinate conversion module is used for unifying a radar coordinate system and a camera coordinate system into a vehicle body coordinate system;
the data analysis module is used for reading and analyzing original message data of the radar and the camera to obtain an original obstacle target under the vehicle body coordinate system;
the judging module is used for judging whether the original obstacle targets detected by the radar and the camera are the same obstacle;
the fusion module is used for fusing the targets judged to be the same obstacle to obtain final obstacle target information;
the screening module is used for screening the original obstacle targets output by the data analysis module, eliminating false targets and white noise, and transmitting the screened original obstacle targets to the judging module; preferably, the screening module screens the original obstacle targets using the effective life-cycle method.
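The data flow through these modules can be illustrated as a simple function composition; every name below is hypothetical, and each module is reduced to a callable for brevity.

```python
# Illustrative pipeline composing the modules of the system (names assumed).
def detect_obstacles(radar_msgs, camera_msgs,
                     to_body_frame, parse, screen, is_same, fuse):
    # coordinate conversion + data analysis modules
    r_targets = [to_body_frame(t) for t in parse(radar_msgs)]
    v_targets = [to_body_frame(t) for t in parse(camera_msgs)]
    # screening module: effective life-cycle filtering of raw targets
    r_targets, v_targets = screen(r_targets), screen(v_targets)
    # judging module + fusion module: fuse pairs judged to be the same obstacle
    return [fuse(r, v) for r in r_targets for v in v_targets if is_same(r, v)]
```

With identity stand-ins for conversion, parsing, and screening, an equality test for judging, and averaging for fusion, the pipeline returns the fused targets for every radar/camera pair judged identical.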
As shown in fig. 6, the determining module of this embodiment includes:
the marking unit is used for marking the obstacle target detected by the radar as an R-type target and the obstacle target detected by the camera as a V-type target; the position coordinates of the R-type obstacle target are (x_R, y_R) and its velocity is V_R; the position coordinates of the V-type obstacle target are (x_V, y_V) and its velocity is V_V;
the comparison unit is used for judging the relationship between the weighted value of the distance information and speed information of the R-type and V-type targets and the threshold: if the weighted value W′ is greater than or equal to the threshold W the targets are judged not to be the same obstacle, and if W′ is smaller than the threshold they are judged to be the same obstacle; wherein the threshold W = λ·0.1·L_R + (1 − λ)·ρ·V_R; W′ = λ·ΔL + (1 − λ)·ΔV is the weighted value of the distance information and speed information of the R-type and V-type targets; λ is a weight; ρ is a speed error coefficient; ΔL = √((x_R − x_V)² + (y_R − y_V)²); ΔV = |V_R − V_V|. In the comparison unit, the speed error coefficient is specifically set as follows:
when V_R < 40 km/h: ρ = 0.1;
when 40 km/h ≤ V_R < 80 km/h: ρ = 0.09;
when 80 km/h ≤ V_R < 120 km/h: ρ = 0.07;
when V_R ≥ 120 km/h: ρ = 0.05.
the present embodiments also provide an unmanned vehicle comprising a processor for performing the steps of the method of an embodiment of the invention. The processor controls the unmanned vehicle to avoid the final obstacle target information according to the final obstacle target information; in order to visually display the obstacle target information, in the present embodiment, the processor communicates with the display module. The display module may be a terminal device, such as a mobile terminal or the like.
Claims (13)
1. A method for detecting an obstacle during the driving of an unmanned vehicle is characterized by comprising the following steps:
1) Unifying a first sensor coordinate system and a second sensor coordinate system into a vehicle body coordinate system;
2) Reading and analyzing original message data of a first sensor and a second sensor to obtain an original obstacle target under the vehicle body coordinate system;
3) Judging whether the original obstacle targets detected by the first sensor and the second sensor are the same obstacle or not;
4) Fusing the targets judged to be the same obstacle to obtain final obstacle target information;
the first sensor and the second sensor are different sensors;
in step 4), the specific implementation process of determining whether the original obstacle targets detected by the first sensor and the second sensor are the same obstacle includes:
i) Marking the obstacle target detected by the first sensor as an R-type target and the obstacle target detected by the second sensor as a V-type target; the position coordinates of the R-type obstacle target being (x_R, y_R) and its velocity V_R; the position coordinates of the V-type obstacle target being (x_V, y_V) and its velocity V_V;
ii) judging the relationship between the weighted value of the distance information and speed information of the R-type and V-type targets and the threshold, judging that they are not the same obstacle if the weighted value is greater than or equal to the threshold W, and judging that they are the same obstacle if the weighted value is smaller than the threshold W; wherein the threshold W = λ·0.1·L_R + (1 − λ)·ρ·V_R; W′ = λ·ΔL + (1 − λ)·ΔV, W′ being the weighted value of the distance information and speed information of the R-type and V-type targets; λ is a weight; ρ is a speed error coefficient; ΔL = √((x_R − x_V)² + (y_R − y_V)²); ΔV = |V_R − V_V|; λ = 0.7;
the threshold is specifically set as follows:
when 80 km/h ≤ V_R < 120 km/h: ρ = 0.07;
2. the method for detecting the obstacle during the driving of the unmanned vehicle as claimed in claim 1, wherein in step 2), the raw message data of the first sensor and the second sensor are read in an online or offline manner.
3. The method for detecting the obstacle during the driving of the unmanned vehicle according to claim 1, wherein between the step 2) and the step 3), the following operations are further performed: and screening the original obstacle target to eliminate false targets and white noise.
4. The method for detecting the obstacle during the driving process of the unmanned vehicle according to claim 3, wherein the step of screening the original obstacle target comprises:
A) when an original obstacle target is decided as an effective target for the first time in a certain cycle, setting a life threshold for the effective target and performing tracking measurement on the effective target;
b) Starting from the period, if the target newly detected by the first sensor and the second sensor is inconsistent with the corresponding effective target continuously for N times, and the target newly detected in the period from the (N + 1) th time to the effective target is consistent with the effective target again, keeping the original effective target unchanged; if the number of times of continuous inconsistency between the newly detected target and the effective target from the period exceeds a life threshold, abandoning the original effective target, taking the newly detected target of the first sensor and the second sensor as a new effective target, and carrying out tracking measurement on the new effective target; wherein N is less than a life threshold;
c) And circularly executing the steps A) and B) to respectively obtain effective target output sets screened by the first sensor and the second sensor.
5. The method according to claim 1, wherein in step 4), the targets judged to be the same obstacle are fused using the Kalman filtering method.
6. The method according to claim 5, wherein the step of fusing the targets determined as the same obstacle by using the Kalman filtering method comprises:
A) after receiving the first valid measurement from the second sensor, initializing the system state as x̂_0 = [x_V, y_V, V_{V-X}, V_{V-Y}]^T;
wherein: x_V is the V-type target horizontal coordinate; y_V is the V-type target longitudinal coordinate; V_{V-X} is the V-type target transverse velocity; V_{V-Y} is the V-type target longitudinal velocity; α is the Euclidean distance of the V-type target; |v| is the absolute value of the V-type target velocity; φ is the orientation angle of the V-type target;
B) substituting the position information and speed information of the V-type target initialized at time k−1 into the state prediction formula to obtain the Kalman filter fusion system state predicted value at time k: x̂_k^- = F·x̂_{k-1};
C) updating the prediction estimate of the Kalman filter fusion system state using: P_k^- = F·P_{k-1}·F^T + Q; K_k = P_k^-·H^T·(H·P_k^-·H^T + R)^{-1}; x̂_k = x̂_k^- + K_k·(z_k − H·x̂_k^-);
wherein K_k is the Kalman gain; H is the conversion matrix from the state variables to the observation variables, and superscript T denotes matrix transposition; x̂_k is the optimal state corrected by the measurement; R is the measurement noise covariance; z_k is the first-sensor or second-sensor observation at time k; P_k is the covariance of x̂_k and represents the uncertainty of the state; P_k^- is the error covariance of x̂_k^-; P_{k-1} is the covariance of x̂_{k-1}; F is the state transition matrix, a model of the target state transition; Q is the inherent noise interference of the target state transition model;
D) substituting the V-type target as the observation z_k at time k and updating the prediction estimate of step C) to obtain the optimal system state x̂_k; substituting the optimal system state x̂_k at time k into the state prediction formula of step B) to obtain the predicted system state at time k+1; and substituting the R-type target judged to be the same obstacle as the observation z_{k+1} at time k+1 and updating the prediction estimate of the Kalman filter fusion system state to obtain fused effective obstacle information x̂_{k+1}.
7. An obstacle detection system during travel of an unmanned vehicle, comprising:
the coordinate conversion module is used for unifying the first sensor coordinate system and the second sensor coordinate system into a vehicle body coordinate system;
the data analysis module is used for reading and analyzing original message data of the first sensor and the second sensor to obtain an original obstacle target under the vehicle body coordinate system;
the judging module is used for judging whether the original obstacle targets detected by the first sensor and the second sensor are the same obstacle or not;
the fusion module is used for fusing the targets judged to be the same obstacle to obtain final obstacle target information;
the first sensor and the second sensor are different sensors;
the judging module comprises:
the marking unit is used for marking the obstacle target detected by the first sensor as an R-type target and the obstacle target detected by the second sensor as a V-type target; the position coordinates of the R-type obstacle target are (x_R, y_R) and its velocity is V_R; the position coordinates of the V-type obstacle target are (x_V, y_V) and its velocity is V_V;
the comparison unit is used for judging the relationship between the weighted value of the distance information and speed information of the R-type and V-type targets and the threshold: if the weighted value W′ is greater than or equal to the threshold W the targets are judged not to be the same obstacle, and if the weighted value W′ is smaller than the threshold they are judged to be the same obstacle; wherein the threshold W = λ·0.1·L_R + (1 − λ)·ρ·V_R; W′ = λ·ΔL + (1 − λ)·ΔV is the weighted value of the distance information and speed information of the R-type and V-type targets; λ is a weight; ρ is a speed error coefficient; ΔL = √((x_R − x_V)² + (y_R − y_V)²); ΔV = |V_R − V_V|; the speed error coefficient being specifically set as follows:
when 40 km/h ≤ V_R < 80 km/h: ρ = 0.09;
when 80 km/h ≤ V_R < 120 km/h: ρ = 0.07.
8. the system as claimed in claim 7, further comprising a screening module for screening the original obstacle targets outputted from the data analysis module, eliminating false targets and white noise, and transmitting the original obstacle targets after eliminating false targets and white noise to the determination module.
9. The system of claim 8, wherein the screening module screens the original obstacle targets using an active life-cycle method.
10. An unmanned vehicle comprising a processor for performing the method steps of any one of claims 1 to 6.
11. The unmanned vehicle of claim 10, wherein the processor controls the unmanned vehicle to avoid the final obstacle target information based on the final obstacle target information.
12. The unmanned vehicle of claim 10, wherein the processor is in communication with a display module.
13. The unmanned vehicle of claim 10, wherein the processor is in communication with a data acquisition module; the data acquisition module comprises at least one first sensor and at least one second sensor which are arranged on the front part of the vehicle body; the first sensor and the second sensor are heterogeneous sensors.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010333455.0A CN111505623B (en) | 2020-04-24 | 2020-04-24 | Method and system for detecting obstacle in driving process of unmanned vehicle and vehicle |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111505623A CN111505623A (en) | 2020-08-07 |
CN111505623B true CN111505623B (en) | 2023-04-18 |
Family
ID=71874777
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010333455.0A Active CN111505623B (en) | 2020-04-24 | 2020-04-24 | Method and system for detecting obstacle in driving process of unmanned vehicle and vehicle |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111505623B (en) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112462368B (en) * | 2020-11-25 | 2022-07-12 | 中国第一汽车股份有限公司 | Obstacle detection method and device, vehicle and storage medium |
CN112924960B (en) * | 2021-01-29 | 2023-07-18 | 重庆长安汽车股份有限公司 | Target size real-time detection method, system, vehicle and storage medium |
CN113296088A (en) * | 2021-05-11 | 2021-08-24 | 雄狮汽车科技(南京)有限公司 | Dynamic target tracking method and device for vehicle and vehicle |
CN114148301B (en) * | 2021-12-20 | 2022-11-18 | 岚图汽车科技有限公司 | Vehicle brake control method, device, equipment and readable storage medium |
CN117369507A (en) * | 2023-10-31 | 2024-01-09 | 河海大学 | Unmanned aerial vehicle dynamic path planning method of self-adaptive particle swarm algorithm |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP3661411B2 (en) * | 1998-06-08 | 2005-06-15 | 日産自動車株式会社 | Vehicle driving force control device |
WO2018154367A1 (en) * | 2017-02-23 | 2018-08-30 | Kpit Technologies Limited | System and method for target track management of an autonomous vehicle |
CN106909148A (en) * | 2017-03-10 | 2017-06-30 | 南京沃杨机械科技有限公司 | Based on the unmanned air navigation aid of agricultural machinery that farm environment is perceived |
CN107632308B (en) * | 2017-08-24 | 2021-02-05 | 吉林大学 | Method for detecting contour of obstacle in front of vehicle based on recursive superposition algorithm |
US11037453B2 (en) * | 2018-10-12 | 2021-06-15 | Aurora Flight Sciences Corporation | Adaptive sense and avoid system |
CN109886308B (en) * | 2019-01-25 | 2023-06-23 | 中国汽车技术研究中心有限公司 | Target level-based dual-sensor data fusion method and device |
CN110850413A (en) * | 2019-11-26 | 2020-02-28 | 奇瑞汽车股份有限公司 | Method and system for detecting front obstacle of automobile |
2020-04-24 CN CN202010333455.0A patent/CN111505623B/en active Active
Non-Patent Citations (1)
Title |
---|
Liu Zhiqiang; Wang Yingying; Ni Jie; Liu Heng. Research on the detection method of obstacles ahead of the vehicle. Machinery Design & Manufacture, No. 07. *
Also Published As
Publication number | Publication date |
---|---|
CN111505623A (en) | 2020-08-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111505623B (en) | Method and system for detecting obstacle in driving process of unmanned vehicle and vehicle | |
US11155258B2 (en) | System and method for radar cross traffic tracking and maneuver risk estimation | |
US20230137183A1 (en) | Vehicular environment estimation device | |
US10489664B2 (en) | Image processing device, device control system, and computer-readable storage medium | |
EP3477616A1 (en) | Method for controlling a vehicle using a machine learning system | |
US20210117787A1 (en) | Method for determining a quality grade of data sets of sensors | |
US20150239472A1 (en) | Vehicle-installed obstacle detection apparatus having function for judging motion condition of detected object | |
CN110929796B (en) | Multi-source sensor-based decision layer data fusion method and system and storage medium | |
Rawashdeh et al. | Collaborative automated driving: A machine learning-based method to enhance the accuracy of shared information | |
US20200019792A1 (en) | Self-Position Estimation Method and Self-Position Estimation Device | |
CN112154455A (en) | Data processing method, equipment and movable platform | |
CN112285714A (en) | Obstacle speed fusion method and device based on multiple sensors | |
WO2022080049A1 (en) | Object recognition device | |
CN111959515A (en) | Forward target selection method, device and system based on visual detection | |
Domhof et al. | Multi-sensor object tracking performance limits by the cramer-rao lower bound | |
CN114298142A (en) | Multi-source heterogeneous sensor information fusion method and device for camera and millimeter wave radar | |
Nguyen et al. | Optimized grid-based environment perception in advanced driver assistance systems | |
Matthaei et al. | Motion classification for cross traffic in urban environments using laser and radar | |
CN114084129A (en) | Fusion-based vehicle automatic driving control method and system | |
Wei et al. | Multi-sensor environmental perception and adaptive cruise control of intelligent vehicles using kalman filter | |
CN109932721B (en) | Error and detection probability analysis method applied to multi-sensor fusion | |
CN114730495A (en) | Method for operating an environment detection device with grid-based evaluation and with fusion, and environment detection device | |
KR102643361B1 (en) | Adaptive intervention determination method of vehicle accident avoidance system | |
CN111797701B (en) | Road obstacle sensing method and system for vehicle multi-sensor fusion system | |
US20230168352A1 (en) | Method for assessing a measuring inaccuracy of an environment detection sensor |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||