CN108983219B - Fusion method and system for image information and radar information of traffic scene - Google Patents

Fusion method and system for image information and radar information of traffic scene

Info

Publication number
CN108983219B
CN108983219B (application CN201810939902.XA)
Authority
CN
China
Prior art keywords
information
image information
radar
target
fusion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810939902.XA
Other languages
Chinese (zh)
Other versions
CN108983219A (en)
Inventor
余贵珍
张思佳
王章宇
张艳飞
吴新开
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Tage Idriver Technology Co Ltd
Original Assignee
Beihang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beihang University filed Critical Beihang University
Priority to CN201810939902.XA priority Critical patent/CN108983219B/en
Publication of CN108983219A publication Critical patent/CN108983219A/en
Application granted granted Critical
Publication of CN108983219B publication Critical patent/CN108983219B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/88Radar or analogous systems specially adapted for specific applications
    • G01S13/93Radar or analogous systems specially adapted for specific applications for anti-collision purposes
    • G01S13/931Radar or analogous systems specially adapted for specific applications for anti-collision purposes of land vehicles
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/86Combinations of radar systems with non-radar systems, e.g. sonar, direction finder
    • G01S13/867Combination of radar systems with cameras
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/02Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00
    • G01S7/41Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section
    • G01S7/417Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section involving the use of neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G06F18/24133Distances to prototypes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/254Fusion techniques of classification results, e.g. of results related to same input data
    • G06F18/256Fusion techniques of classification results, e.g. of results related to same input data of results relating to different input data, e.g. multimodal recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/443Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
    • G06V10/449Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters
    • G06V10/451Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters with interaction between the filter responses, e.g. cortical complex cells
    • G06V10/454Integrating the filters into a hierarchical structure, e.g. convolutional neural networks [CNN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/88Radar or analogous systems specially adapted for specific applications
    • G01S13/93Radar or analogous systems specially adapted for specific applications for anti-collision purposes
    • G01S13/931Radar or analogous systems specially adapted for specific applications for anti-collision purposes of land vehicles
    • G01S2013/9327Sensor installation details
    • G01S2013/93271Sensor installation details in the front of the vehicles

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Biodiversity & Conservation Biology (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Electromagnetism (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a method for fusing image information and radar information of a traffic scene, which comprises the following steps: preprocessing the image information in front of the vehicle obtained by a camera; extracting feature information from the image information and comparing it with prestored traffic scene information; and classifying the current traffic scene according to the comparison result, executing a corresponding fusion algorithm according to a preset fusion method of image information and radar information matched with the current traffic scene category, and outputting the result of the fusion algorithm. The invention judges the scene from the collected image information and switches among different fusion algorithms, which uses resources effectively and improves scene adaptability; it makes full use of the redundancy and complementarity among data from different sensors, improving the robustness and reliability of the system; and, for image information processing, it adopts a deep learning algorithm, so the real-time performance is higher and target recognition is more accurate.

Description

Fusion method and system for image information and radar information of traffic scene
Technical Field
The invention relates to the field of safe driving, in particular to a fusion method and system of image information and radar information of a traffic scene.
Background
With the continuous improvement of the electrification, intelligence and connectivity of automobiles, advanced driver assistance systems (ADAS) have become a key research direction for enterprises, universities and research institutes, and environment perception is the most fundamental key technology in an ADAS system. Accurately acquiring effective target information ahead on the road can provide powerful technical support for active safety technologies such as adaptive cruise control (ACC) and automatic emergency braking (AEB), and is of great significance to the development of ADAS. Existing environment perception technology mostly uses a single sensor or simply superposes the data of multiple sensors; this cannot meet the intelligent vehicle's requirements for high-precision, all-weather perception and can hardly reflect the detected object comprehensively. Multi-sensor fusion is a method that synthesizes the information collected by several sensors, such as vision sensors and radar sensors, into a comprehensive description of the environmental characteristics, and it can make full use of the redundancy and complementarity among the data of multiple sensors to obtain the complete and sufficient information required by an intelligent vehicle. The camera, with its rich information content, and the millimeter wave radar, with its good weather adaptability, complement each other well and have become the two most widely applied sensors in the field of information fusion.
In early research, multi-sensor information fusion simply superposed the information of multiple sensors at the data level; with the continuous progress of radar information processing algorithms, vision algorithms and the like, fusion methods based on features and decisions have attracted more and more scholars. In 2012, R. Omar Chavez-Garcia et al. proposed using radar and monocular vision for forward target perception, taking the raw data of the radar and the camera as input to detect moving objects and then fusing the information of these moving objects with D-S evidence theory. Wu et al. proposed obtaining a target contour from three-dimensional depth information, finding the point closest to the vision sensor, fusing that point with the information detected by the radar to obtain a fused closest point, and then determining the fused contour. Although the detection accuracy of these methods is better than that of a single sensor, the images must be processed by traversal with traditional image processing algorithms, so the visual computation is heavy and it is difficult to meet real-time requirements. Liu et al. proposed classifying road vehicles with an SVM-based classifier; Chavez-Garcia et al. and Vu et al. used HOG features and Boosting classifiers to classify vehicles. These machine learning approaches improve detection accuracy while reducing computational intensity, but they rely heavily on the training data set of the experimental environment. In 2015, Alencar used a millimeter wave radar and a camera for data fusion to identify and classify multiple road targets, analysing the camera data and millimeter wave radar data with k-means clustering, a support vector machine and kernel principal component analysis; the accuracy is high, but the method is only suitable for identifying close-range targets in good weather.
In summary, most existing research only discusses detection in a specific traffic scene; how to exploit the advantages of different sensors in different traffic scenes is not considered, the fusion methods adapt poorly to changes in the application scene, and little attention is paid to the construction of the fusion structure and the optimization of overall performance.
Disclosure of Invention
The invention aims to provide a method and a system for fusing image information and radar information of a traffic scene, which have the advantages of high accuracy, high processing speed and strong adaptability.
In order to achieve the above object, a technical solution of the present invention is to provide a method for fusing image information and radar information of a traffic scene, comprising the following steps: preprocessing the image information in front of the vehicle obtained by a camera; extracting feature information from the image information and comparing it with prestored traffic scene information; and classifying the current traffic scene according to the comparison result, executing a corresponding fusion algorithm according to a preset fusion method of image information and radar information matched with the current traffic scene category, and outputting the result of the fusion algorithm.
Further, before the step of preprocessing the image information in front of the vehicle obtained by the camera, the method further comprises the following steps: and classifying traffic scenes by using a deep learning method, and establishing a corresponding fusion method of image information and radar information aiming at different classifications.
Further, before the step of preprocessing the image information in front of the vehicle obtained by the camera, the method further comprises the following steps: and installing the two sensors on the vehicle according to the installation criteria of the camera and the millimeter wave radar, and respectively calibrating and jointly calibrating the two sensors to obtain the related parameters.
Furthermore, the camera is installed at a position 1-3 cm below the base of the rearview mirror inside the vehicle, and the millimeter wave radar is installed at the center of the license plate at the front end of the vehicle.
Further, the step of executing a corresponding fusion algorithm according to a preset fusion method of image information and radar information adapted to the current traffic scene specifically includes: processing the acquired image information and radar information according to the scene classification result, including matrix conversion between coordinate systems, effective target screening, target recognition and monocular distance measurement, and executing the corresponding fusion algorithm.
Further, the fusion methods of image information and radar information include: a fusion method based mainly on radar information, a fusion method based mainly on image information, and a fusion method in which radar information and image information decide jointly.
Specifically, the fusion method based mainly on radar information comprises the following steps: the position information of the effective targets obtained by the radar is converted into the pixel coordinate system of the image through projection transformation, forming regions of interest in the image; target recognition is carried out with a deep learning method; the effective target information is processed with an information fusion algorithm; and the position, speed, type and other information of the fused targets is output.
Specifically, the fusion method based mainly on image information comprises the following steps: starting from the image information, targets are recognized with a deep learning algorithm; the image information of a target is matched against the radar information of the target; if they match, the two are fused and the position, speed, type and other information of the fused target is output; if they do not match, the radar information is rejected and the position, speed, type and other information of the target is output from the image information alone.
Specifically, the fusion method of joint decision comprises the following steps: a target screening algorithm completes the primary selection of radar targets and outputs effective target information; a deep learning algorithm completes target recognition in the images returned by the camera, and a monocular ranging algorithm obtains the lateral and longitudinal distance position information of the targets; the Mahalanobis distance is applied to match the observations from the radar information and the image information; and after matching, data fusion is completed with a joint probability density algorithm, and the position, speed, type and other information of the targets is output.
In order to achieve the above object, another technical solution of the present invention is to provide a system for fusing image information and radar information of a traffic scene, comprising: a processor, a memory and a communication circuit, the processor being coupled to the memory and the communication circuit; the memory stores communication data information, image information, traffic scene classification information and the working program data of the processor, the communication circuit is used for information transmission, and when working the processor executes the program data to implement any one of the above methods for fusing image information and radar information of a traffic scene.
The invention has the following beneficial effects:
(1) the invention provides a fusion method and a fusion system of image information and radar information of a traffic scene, which are used for judging the scene according to the acquired image information and switching among different fusion algorithms, thereby effectively utilizing resources and improving the scene adaptability.
(2) The invention provides a fusion method and a fusion system of image information and radar information of a traffic scene, which fully utilize the redundancy and complementary characteristics among different sensor data and improve the robustness and reliability of the system.
(3) The invention provides a fusion method and a fusion system of image information and radar information of a traffic scene, which adopt a deep learning algorithm in the aspect of image information processing, and have higher real-time performance and more accurate target identification compared with the traditional image processing algorithm.
Drawings
In order to more clearly illustrate the embodiments or technical solutions of the invention, the drawings needed in the description of the embodiments are briefly introduced below. It is apparent that the drawings described below show only some embodiments of the invention, and that a person skilled in the art can derive other drawings from them without inventive effort, wherein:
FIG. 1 is a schematic diagram of a schematic framework of an embodiment of a method and system for fusing image information and radar information of a traffic scene according to the present invention;
FIG. 2 is a flowchart of a fusion algorithm based on millimeter wave radar information in an embodiment of a fusion method of image information and radar information of a traffic scene according to the present invention;
FIG. 3 is a flowchart of a fusion algorithm based on image information in an embodiment of a fusion method of image information and radar information of a traffic scene according to the present invention;
fig. 4 is a flowchart of the fusion algorithm in which the millimeter wave radar and the camera decide jointly, in the embodiment of the fusion method of image information and radar information of a traffic scene according to the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below, and it should be understood that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, fig. 1 is a schematic diagram of the principle framework of an embodiment of the method and system for fusing image information and radar information of a traffic scene according to the present invention. First, typical traffic scenes extracted from everyday driving are classified with a deep learning method, for example: a straight road on a sunny day, a straight road on a rainy day, a ramp on a sunny day, a curve on a clear night, a curve on a rainy night, and so on; a corresponding fusion method of image information and radar information is then established for each class. In the present embodiment, three fusion methods are included: a fusion method based mainly on radar information, a fusion method based mainly on image information, and a fusion method in which radar information and image information decide jointly. For the method based mainly on radar information, whose algorithm flowchart is shown in fig. 2, a region of interest (ROI) is preliminarily determined from the detection target information of the millimeter wave radar, projection transformation is carried out, and target classification, detection and feature extraction are performed on the ROI with an image processing algorithm. For the method based mainly on image information, whose algorithm flowchart is shown in fig. 3, a CNN-based target recognition algorithm is established, the relevant information of effective targets in the image is extracted, and the target information is supplemented with the radar information. For the joint decision method, whose algorithm flowchart is shown in fig. 4, the camera and the radar make decisions separately; after joint spatio-temporal calibration, observation matching is completed with the Mahalanobis distance, the weight assigned to each sensor is determined with a joint probability density algorithm, and data fusion is completed, thereby determining the speed, type, position and other information of the forward dangerous target. Selecting the most appropriate fusion method for each scene improves the detection precision and reliability of forward objects and the adaptability to different scenes.
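For illustration, the scene-based switching can be organized as a simple dispatch, sketched below in Python; the scene labels, thresholds and function names are placeholders rather than terms from the patent.

```python
# Illustrative sketch of the scene-based switching between fusion algorithms
# described above. Scene labels and function bodies are placeholders.
from enum import Enum

class Scene(Enum):
    NORMAL = 0           # ordinary conditions: joint decision (scene three)
    LOW_VISIBILITY = 1   # haze, heavy rain/snow, night: radar-primary (scene one)
    SLOPE_OR_CURVE = 2   # uphill/downhill or curve: image-primary (scene two)

def classify_scene(image) -> Scene:
    """Placeholder for the SENet-based scene classifier of step (2)."""
    return Scene.NORMAL

def fuse_radar_primary(image, radar_targets): ...
def fuse_image_primary(image, radar_targets): ...
def fuse_joint_decision(image, radar_targets): ...

def fuse(image, radar_targets):
    scene = classify_scene(image)
    if scene is Scene.LOW_VISIBILITY:
        return fuse_radar_primary(image, radar_targets)
    if scene is Scene.SLOPE_OR_CURVE:
        return fuse_image_primary(image, radar_targets)
    return fuse_joint_decision(image, radar_targets)
```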
In a more specific embodiment, the present invention comprises the steps of:
(1) The two sensors are mounted on the vehicle according to the installation criteria of the camera and the millimeter wave radar, and are calibrated separately and jointly to obtain the relevant parameters. The millimeter wave radar is mounted at the center of the front end of the vehicle at a height of 35 cm to 65 cm above the ground; its mounting plane is kept as perpendicular as possible to the ground and to the longitudinal plane of the vehicle body, with the pitch angle and yaw angle close to 0°. The camera is mounted 1-3 cm below the base of the rearview mirror inside the vehicle, and its pitch angle is adjusted so that, when the scene is a straight road and the vehicle body is parallel to the road, the lower 2/3 of the image is road. The intrinsic parameters of the camera are calibrated with a checkerboard calibration method, and the two sensors are jointly calibrated by combining the respective position information of the camera and the radar with the angle information of the checkerboard calibration board, to obtain the required parameters.
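A minimal sketch of the intrinsic calibration step with OpenCV's checkerboard routines follows; the board size, image paths and printed quantities are assumptions for illustration, not values from the patent.

```python
# Sketch of camera intrinsic calibration with a checkerboard (OpenCV).
# Board dimensions (9x6 inner corners) and the image folder are assumptions.
import glob
import cv2
import numpy as np

pattern = (9, 6)                                   # inner corners per row / column
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)  # board frame, unit squares

obj_points, img_points = [], []
for path in glob.glob("calib/*.png"):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# K contains f/dx, f/dy, u0, v0 used by the projection transformation later on
ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("reprojection error:", ret)
print("intrinsic matrix:\n", K)
```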
(2) The image information in front of the vehicle obtained by the camera is preprocessed, feature information is extracted from the image information and compared with prestored traffic scene information, and the current traffic scene is classified according to the result of the comparison. Information is acquired with the sensor that has the lower sampling frame rate as the reference, so that the acquisition times of the two sensors are aligned. The images collected by the camera are preprocessed by filtering, graying, normalization and the like, and after preprocessing the images are input into an SENet for classification. Compared with an ordinary convolutional neural network, the SENet adopts a feature recalibration strategy: the importance of each feature channel is acquired automatically through learning, and useful features are then promoted and features that are of little use for the current task are suppressed according to that importance. The recalibration mainly comprises three steps. The first is the Squeeze operation: features are compressed along the spatial dimensions, turning each two-dimensional feature channel into a real number that has, to some extent, a global receptive field, with an output dimension matching the number of input feature channels; it characterizes the global distribution of responses over the feature channels, so that even shallow layers obtain a global receptive field. The second is the Excitation operation, a mechanism similar to the gate in a recurrent neural network: a weight is generated for each feature channel by learned parameters that explicitly model the correlation between feature channels. The last is the Reweight operation: the weights output by the Excitation step are regarded as the importance of each feature channel after feature selection and are multiplied channel by channel onto the previous features, recalibrating the original features along the channel dimension. Before input images can be tested, a large number of pictures must be used for training to obtain the corresponding network structure. Because the core of the SENet is the SE module, which can be embedded into almost all existing network structures, during training the SE module is embedded into the building-block units of the ResNet, BN-Inception and Inception-ResNet-v2 structures, the model results are compared, and the best model is kept. The network parameters can be adjusted according to the training results until a satisfactory result is obtained, and the final model is output. After a picture is input into the trained model, the network automatically extracts the picture features and completes the scene classification.
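A minimal PyTorch sketch of the Squeeze, Excitation and Reweight operations is given below; the reduction ratio r = 16 is a common default and an assumption here, not a value from the text.

```python
# Minimal squeeze-and-excitation (SE) block sketch in PyTorch.
# The reduction ratio r = 16 is an illustrative assumption.
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels: int, r: int = 16):
        super().__init__()
        self.squeeze = nn.AdaptiveAvgPool2d(1)       # Squeeze: global spatial average per channel
        self.excite = nn.Sequential(                 # Excitation: learn per-channel weights
            nn.Linear(channels, channels // r),
            nn.ReLU(inplace=True),
            nn.Linear(channels // r, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        s = self.squeeze(x).view(b, c)               # one real number per channel
        w = self.excite(s).view(b, c, 1, 1)          # per-channel importance in (0, 1)
        return x * w                                 # Reweight: rescale the input channel by channel

# The block can be inserted into existing building blocks (e.g. ResNet) before the residual add.
```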
(3) According to the scene classification result, the corresponding fusion algorithm is executed following the preset fusion method of image information and radar information matched with the current traffic scene category, and the result of the fusion algorithm is output. More specifically, the acquired image information and radar information are processed, including matrix conversion between coordinate systems, effective target screening, target recognition and monocular distance measurement, the corresponding fusion algorithm is executed, and finally the fusion result is output.
Scene one
For severe environments such as haze, rainstorm and snowstorm, or environments with poor illumination such as night, the performance of the camera is affected and the detection reliability decreases, so a multi-sensor fusion method based mainly on radar is adopted. With reference to fig. 2, the fusion method based mainly on radar information comprises the following steps: the position information of the effective targets obtained by the radar is converted into the pixel coordinate system of the image through projection transformation, forming regions of interest in the image; target recognition is carried out with a deep learning method; the effective target information is processed with an information fusion algorithm; and the position, speed, type and other information of the fused targets is output.
The radar effective-target screening involved in the method is as follows. First, the information output by the radar is checked against the detection range of the vehicle-mounted radar, together with technical parameters such as its measurement accuracy and resolution, and unreasonable target information is removed. Second, while the vehicle is driving the number of nearby targets is relatively small, so many radar channels detect no effective obstacle target and return only the radar's most primitive signals; these signals are removed according to conditions set from the definition of each radar type. At the same time, false signals produced when radar vibration makes the echo energy uneven are also filtered out.
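A possible form of this screening is sketched below in Python; the field names and numeric limits are illustrative assumptions and do not come from the patent.

```python
# Sketch of the radar target pre-screening described above.
# Field names and the limits below are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class RadarTarget:
    target_id: int
    range_m: float      # longitudinal distance
    lateral_m: float    # lateral offset
    speed_mps: float
    rcs_dbsm: float     # echo strength / radar cross-section

MAX_RANGE_M = 170.0     # beyond the sensor's rated detection range -> unreasonable
MAX_LATERAL_M = 10.0    # far outside the forward corridor of interest
MIN_RCS_DBSM = -10.0    # very weak or uneven echoes treated as false returns

def screen(targets: list[RadarTarget]) -> list[RadarTarget]:
    valid = []
    for t in targets:
        if not (0.0 < t.range_m <= MAX_RANGE_M):
            continue                      # empty channel or out-of-range reading
        if abs(t.lateral_m) > MAX_LATERAL_M:
            continue                      # not a forward target of interest
        if t.rcs_dbsm < MIN_RCS_DBSM:
            continue                      # weak echo, likely vibration-induced false signal
        valid.append(t)
    return valid
```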
The projective transformation involved in the method is:

$$z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} f/dx & 0 & u_0 \\ 0 & f/dy & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} R & t \end{bmatrix} \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix}$$

where $(x_w, y_w, z_w)$ are the world coordinate system coordinates, $(u, v)$ the image pixel coordinate system coordinates, and $(x_c, y_c, z_c)$ the camera coordinate system coordinates; $R$ denotes the rotation matrix and $t$ the translation matrix; $f$ denotes the focal length; $dx$ and $dy$ denote the length occupied by one pixel in the x and y directions of the image physical coordinate system; and $(u_0, v_0)$ denote the numbers of horizontal and vertical pixels between the image center pixel coordinates ($O_1$) and the image origin pixel coordinates ($O_0$).
The size of the region of interest involved in the method is not fixed; it is inversely proportional to the distance of the target vehicle from the millimeter wave radar. The coordinates acquired by the radar are generally the vehicle centroid coordinates; these are taken as the center of the region of interest, and the region of interest is drawn with an adaptive threshold method.
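The projection and the adaptive region of interest can be sketched as follows; the intrinsic matrix, extrinsic parameters and scaling constants are assumed calibration results, not values from the patent.

```python
# Sketch: project a radar target (world coordinates) into the image and
# draw a region of interest whose side length shrinks with distance.
# K, R, t and the ROI constants are illustrative assumptions.
import numpy as np

K = np.array([[1000.0,    0.0, 640.0],    # [f/dx, 0, u0; 0, f/dy, v0; 0, 0, 1]
              [   0.0, 1000.0, 360.0],
              [   0.0,    0.0,   1.0]])
R = np.eye(3)                             # rotation world -> camera (assumed)
t = np.array([0.0, 1.2, 0.0])             # translation in metres (assumed)

def project(p_world: np.ndarray) -> tuple[int, int]:
    p_cam = R @ p_world + t               # world frame -> camera frame
    u, v, w = K @ p_cam                   # camera frame -> homogeneous pixel coordinates
    return int(u / w), int(v / w)

def roi_for(p_world: np.ndarray, base_px: float = 6000.0, min_px: int = 24):
    """ROI side length inversely proportional to the target distance."""
    dist = float(np.linalg.norm(p_world))
    side = max(min_px, int(base_px / max(dist, 1.0)))
    u, v = project(p_world)               # radar reports the vehicle centroid -> ROI centre
    return (u - side // 2, v - side // 2, side, side)

print(roi_for(np.array([2.0, 0.0, 40.0])))  # target 40 m ahead, 2 m to the side
```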
The deep learning algorithm involved in the method takes into account the characteristics of traffic scenes: target features are obvious, and targets may occlude each other. A Caffe-Net model is selected, and the model is fine-tuned according to the recognition results during training.
The information fusion algorithm involved in the method takes into account that the confidence of the radar information is high, so a simpler weighted-average information fusion algorithm can be adopted, giving a high weight to the radar information and a low weight to the image information.
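A minimal sketch of such a radar-dominant weighted average, with an assumed 0.8/0.2 weight split:

```python
# Sketch of the weighted-average fusion used in scene one.
# The radar-dominant weights are illustrative assumptions.
W_RADAR, W_IMAGE = 0.8, 0.2

def fuse_position(radar_xy, image_xy):
    """Weighted average of the longitudinal/lateral position estimates."""
    return tuple(W_RADAR * r + W_IMAGE * c for r, c in zip(radar_xy, image_xy))

fused = fuse_position(radar_xy=(42.3, 1.8), image_xy=(41.1, 2.1))
print(fused)   # result dominated by the radar measurement
```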
Scene two
Because the radar detection plane is horizontal and the azimuth angle is small, the detection capability is limited to a certain extent in scenes such as uphill and downhill roads and curves, so a fusion method based mainly on image information is adopted. It mainly comprises the following steps: starting from the image information, targets are recognized with a deep learning algorithm; the image information of a target is matched against the radar information of the target; if they match, the two are fused and the position, speed, type and other information of the fused target is output; if they do not match, the radar information is rejected and the position, speed, type and other information of the target is output from the image information alone.
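A sketch of this matching and fallback logic, assuming a simple pixel-distance gate (the threshold and dictionary fields are illustrative):

```python
# Sketch of the image-primary fusion in scene two: CNN detections are matched
# against radar targets projected into the image; unmatched detections keep
# image-only information. The gating threshold is an illustrative assumption.
def match_and_fuse(detections, radar_targets, project, max_px_gap=40.0):
    fused = []
    for det in detections:                    # det: dict with 'centre', 'cls', 'pos_img'
        best, best_gap = None, max_px_gap
        for tgt in radar_targets:             # tgt: dict with 'world', 'speed'
            u, v = project(tgt['world'])
            gap = ((u - det['centre'][0]) ** 2 + (v - det['centre'][1]) ** 2) ** 0.5
            if gap < best_gap:
                best, best_gap = tgt, gap
        if best is not None:                  # matched: fuse radar position/speed with image class
            fused.append({'cls': det['cls'], 'pos': best['world'], 'speed': best['speed']})
        else:                                 # unmatched: reject radar, fall back to image only
            fused.append({'cls': det['cls'], 'pos': det['pos_img'], 'speed': None})
    return fused
```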
Scene three
Under ordinary conditions, the performance of both the radar and the camera can be maintained in a good state, and a fusion method in which radar information and image information decide jointly is adopted. The method specifically comprises the following steps. A target screening algorithm completes the primary selection of radar targets and outputs effective target information. A deep learning algorithm completes target recognition in the images returned by the camera, and a monocular ranging algorithm obtains the lateral and longitudinal distance position information of each target. The Mahalanobis distance is applied to match the observations from the radar information and the image information; specifically, $V_k$ is defined as the most likely region of the current target's observation:

$$V_k = \left\{\, z : (z - \hat z_{k|k-1})^T S_k^{-1} (z - \hat z_{k|k-1}) \le c^2 \,\right\}$$

where $\hat z_{k|k-1}$ is the predicted observation and $S_k$ the innovation covariance. According to the statistics, when $c = 3$ the probability that a valid observation falls in this region is 99.8%. After matching is completed, data fusion is completed with a joint probability density algorithm, and the position, speed, type and other information of the target is output.
The more detailed procedure is as follows:
A. Establish the system state equation and observation equation:

$$x_{i,k} = F_k x_{i,k-1} + v_k, \quad i = 1, 2, 3, \ldots$$
$$z_{ij,k} = H_j x_{i,k} + w_{j,k}, \quad j = 1, 2$$

where $x_{i,k}$ denotes the state vector of the $i$-th target at time $k$; $v_k$ is Gaussian white noise with mean 0 and covariance matrix $E(v_k v_k^T) = Q_k$; $z_{ij,k}$ denotes the observation of the $i$-th target detected and output by the $j$-th sensor at time $k$; $H_j$ is the transformation (observation) matrix; and $w_{j,k}$ is Gaussian white noise, also with mean 0, whose covariance satisfies $E(w_{j,k} w_{j,k}^T) = R_{j,k}$ and depends on the type of sensor.

B. Predict the state value and the observation value one step ahead with Kalman filtering:

$$x'_{i,k|k-1} = F_k x'_{i,k-1}, \qquad z'_{k|k-1} = H_j x'_{i,k|k-1}$$

The state of this cycle (time $k$) is updated as:

$$x'_{ij,k} = x'_{i,k|k-1} + K_{ij}\,(z_{ij,k} - z'_{k|k-1})$$

where $x'_{ij,k}$ denotes the state of the $i$-th target updated with the observation output by the $j$-th sensor, and $K_{ij}$ is the Kalman gain matrix of the system.

C. Update the covariance matrices of the predicted value and of the observation:

$$P_{i,k|k-1} = F_k P_{i,k-1} F_k^T + Q_k, \qquad S_{ij,k} = H_j P_{i,k|k-1} H_j^T + R_{j,k}$$

D. Update the Kalman gain matrix:

$$K_{ij} = P_{i,k|k-1} H_j^T S_{ij,k}^{-1}$$

E. Update the estimated value with a weighted average:

$$x'_{i,k|k} = \sum_{j} \beta_{ij}\, x'_{ij,k}$$

where $\beta_{ij}$ is the probability that the observation of the $j$-th sensor was generated by the $i$-th target. The state covariance is then updated with the same weights and the hypothesis deviations $\eta_{ij}$, which are defined as:

$$\eta_{ij} = (x'_{ij} - x'_{i,k|k})(x'_{ij} - x'_{i,k|k})^T$$

F. Solve for $\beta_{ij}$ according to Poisson distribution theory from the residual vectors between the observations and the predictions:

$$\gamma_{ij} = z_{ij,k} - z'_{k|k-1}$$
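Steps A to E can be sketched compactly for a single target observed by two sensors; the motion model, noise covariances and the fixed association weights below are illustrative assumptions, and the Poisson-based computation of β (step F) is not reproduced.

```python
# Sketch of steps A-E for one target tracked by two sensors (j = 1, 2).
# F, Q, R and the association weights beta are illustrative assumptions.
import numpy as np

dt = 0.05
F = np.array([[1, dt], [0, 1]])                        # constant-velocity state model (A)
H = [np.array([[1.0, 0.0]]), np.array([[1.0, 0.0]])]   # both sensors observe position
Q = np.diag([0.01, 0.1])
R = [np.array([[0.2]]), np.array([[1.0]])]             # radar assumed more accurate than camera

def fuse_step(x, P, z, beta=(0.7, 0.3)):
    x_pred = F @ x                                     # B. one-step state prediction
    P_pred = F @ P @ F.T + Q                           # C. predicted covariance
    x_upd, P_upd = [], []
    for j in range(2):
        S = H[j] @ P_pred @ H[j].T + R[j]              # innovation covariance
        K = P_pred @ H[j].T @ np.linalg.inv(S)         # D. Kalman gain for sensor j
        x_upd.append(x_pred + (K @ (z[j] - H[j] @ x_pred)).ravel())
        P_upd.append((np.eye(2) - K @ H[j]) @ P_pred)
    x_fused = sum(b * xi for b, xi in zip(beta, x_upd))        # E. weighted average
    P_fused = sum(b * (Pi + np.outer(xi - x_fused, xi - x_fused))
                  for b, Pi, xi in zip(beta, P_upd, x_upd))    # covariance with deviations
    return x_fused, P_fused

x0, P0 = np.array([40.0, -5.0]), np.eye(2)
z_k = [np.array([39.6]), np.array([40.4])]             # radar and camera range observations
print(fuse_step(x0, P0, z_k))
```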
The scene-based vision and millimeter wave radar information fusion system and method of the invention mainly comprise three parts: sensor installation and calibration, scene classification, and fusion algorithm selection with result output. The installation and calibration of the camera and the millimeter wave radar are completed by combining the characteristics of the sensors with a checkerboard calibration method; scenes are classified by the best of the ResNet, BN-Inception and Inception-ResNet-v2 models with an embedded SE module; and an appropriate fusion algorithm is selected according to the scene classification result (a fusion algorithm based on radar information, a fusion algorithm based on image information, or a fusion algorithm of joint decision), and the final fusion result is output.
The invention also provides a system for fusing the image information and the radar information of the traffic scene, which comprises the following steps: a processor, a memory, and communication circuitry, the processor coupling the memory and the communication circuitry; the memory stores communication data information, image information, traffic scene classification information and working program data of the processor, the communication circuit is used for information transmission, and the processor executes the program data when working so as to realize any one of the fusion methods of the image information and the radar information of the traffic scene. For a detailed description of related contents, please refer to the above method section, which is not described herein again.
The invention further provides a device with a storage function, wherein program data are stored on the device, and when the program data are executed by a processor, the method for fusing the image information and the radar information of the traffic scene is implemented.
The device with storage function may be at least one of a server, a floppy disk drive, a hard disk drive, a CD-ROM reader, a magneto-optical disk reader, and the like.
The above description is only an embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by the present specification, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (6)

1. A fusion method of image information and radar information based on traffic scenes is characterized by comprising the following steps:
preprocessing the image information in front of the vehicle obtained by the camera;
extracting characteristic information in the image information, and comparing and judging the characteristic information with prestored traffic scene information;
classifying the current traffic scene according to the comparison and judgment result,
executing a corresponding fusion algorithm according to a preset fusion method of image information and radar information which is matched with the current traffic scene category, and outputting the result of the fusion algorithm;
the step of executing the corresponding fusion algorithm according to the preset fusion method of the image information and the radar information which are matched with the current traffic scene category specifically comprises the following steps: processing the collected image information and radar information according to the scene classification result, including matrix conversion between coordinate systems, effective target screening, target identification and monocular distance measurement, and simultaneously executing a corresponding fusion algorithm;
the fusion method of the image information and the radar information comprises the following steps: a fusion method mainly based on radar information, a fusion method mainly based on image information and a fusion method for jointly deciding radar information and image information;
the fusion method mainly based on image information comprises the following steps:
starting from image information, the target is identified by applying a deep learning algorithm,
the image information of the target and the radar information of the target are subjected to matching judgment,
if the image information of the target matches the radar information of the target, the image information of the target and the radar information of the target are fused, and the position, speed and type information of the fused target is output,
and if the image information of the target does not match the radar information of the target, the radar information is rejected and the position, speed and type information of the target is output according to the image information alone.
2. The method for fusing image information and radar information based on traffic scenes according to claim 1, wherein, before the step of preprocessing the image information in front of the vehicle obtained by the camera, the method further comprises the following steps:
and classifying traffic scenes by using a deep learning method, and establishing a corresponding fusion method of image information and radar information aiming at different classifications.
3. The method for fusing image information and radar information based on traffic scenes according to claim 2, wherein, before the step of preprocessing the image information in front of the vehicle obtained by the camera, the method further comprises the following steps: installing the two sensors on the vehicle according to the installation criteria of the camera and the millimeter wave radar, and respectively calibrating and jointly calibrating the two sensors to obtain the relevant parameters.
4. The fusion method of image information and radar information based on traffic scenes according to claim 3, wherein the camera is installed at a position 1-3 cm under the base of the rearview mirror inside the vehicle, and the millimeter wave radar is installed at the center of the front license plate of the vehicle.
5. The fusion method of image information and radar information based on traffic scene according to claim 1, wherein the fusion method based on radar information comprises the following steps:
converting the position information of the effective target obtained by the radar into a pixel coordinate system of an image through projection transformation to form a region of interest in the image,
the target recognition is carried out by a deep learning method,
and processing the effective target information by using an information fusion algorithm, and outputting the position, speed and type information of the fused target.
6. The fusion method of image information and radar information based on traffic scene as claimed in claim 1, wherein the fusion method of radar information and image information decision making together comprises the following steps:
completing the primary selection of radar targets by applying a target screening algorithm, and outputting effective target information,
completing target recognition in the image returned by the camera by applying a deep learning algorithm, and obtaining the lateral and longitudinal distance position information of the target by applying a monocular ranging algorithm,
completing observation matching between the radar information and the image information by applying the Mahalanobis distance,
and after matching is completed, completing data fusion by applying a joint probability density algorithm, and outputting the position, speed and type information of the target.
CN201810939902.XA 2018-08-17 2018-08-17 Fusion method and system for image information and radar information of traffic scene Active CN108983219B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810939902.XA CN108983219B (en) 2018-08-17 2018-08-17 Fusion method and system for image information and radar information of traffic scene

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810939902.XA CN108983219B (en) 2018-08-17 2018-08-17 Fusion method and system for image information and radar information of traffic scene

Publications (2)

Publication Number Publication Date
CN108983219A CN108983219A (en) 2018-12-11
CN108983219B true CN108983219B (en) 2020-04-07

Family

ID=64553993

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810939902.XA Active CN108983219B (en) 2018-08-17 2018-08-17 Fusion method and system for image information and radar information of traffic scene

Country Status (1)

Country Link
CN (1) CN108983219B (en)

Families Citing this family (37)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109633621A (en) * 2018-12-26 2019-04-16 杭州奥腾电子股份有限公司 A kind of vehicle environment sensory perceptual system data processing method
CN109693672B (en) * 2018-12-28 2020-11-06 百度在线网络技术(北京)有限公司 Method and device for controlling an unmanned vehicle
CN109613537A (en) * 2019-01-16 2019-04-12 南京奥杰智能科技有限公司 A kind of hologram radar
CN109871385B (en) * 2019-02-28 2021-07-27 北京百度网讯科技有限公司 Method and apparatus for processing data
CN109720280A (en) * 2019-03-01 2019-05-07 山东华宇信息空间技术有限公司 A kind of exact image information transmission system combined based on radar with camera
CN110095770A (en) * 2019-04-26 2019-08-06 东风柳州汽车有限公司 The detection method of vehicle-surroundings object
CN110068818A (en) * 2019-05-05 2019-07-30 中国汽车工程研究院股份有限公司 The working method of traffic intersection vehicle and pedestrian detection is carried out by radar and image capture device
CN110135387B (en) * 2019-05-24 2021-03-02 李子月 Image rapid identification method based on sensor fusion
CN110217271A (en) * 2019-05-30 2019-09-10 成都希格玛光电科技有限公司 Fast railway based on image vision invades limit identification monitoring system and method
CN110378946B (en) * 2019-07-11 2021-10-01 Oppo广东移动通信有限公司 Depth map processing method and device and electronic equipment
CN110532896B (en) * 2019-08-06 2022-04-08 北京航空航天大学 Road vehicle detection method based on fusion of road side millimeter wave radar and machine vision
CN110428626A (en) * 2019-08-13 2019-11-08 舟山千眼传感技术有限公司 A kind of wagon detector and its installation method of microwave and video fusion detection
CN110412986A (en) * 2019-08-19 2019-11-05 中车株洲电力机车有限公司 A kind of vehicle barrier detection method and system
CN110568437A (en) * 2019-09-27 2019-12-13 中科九度(北京)空间信息技术有限责任公司 Precise environment modeling method based on radar assistance
CN110987463B (en) * 2019-11-08 2020-12-01 东南大学 Multi-scene-oriented intelligent driving autonomous lane change performance test method
CN113257021B (en) * 2020-02-13 2022-12-23 宁波吉利汽车研究开发有限公司 Vehicle safety early warning method and system
CN111401208B (en) * 2020-03-11 2023-09-22 阿波罗智能技术(北京)有限公司 Obstacle detection method and device, electronic equipment and storage medium
CN111090096B (en) * 2020-03-19 2020-07-10 南京兆岳智能科技有限公司 Night vehicle detection method, device and system
CN111327790B (en) * 2020-03-27 2022-02-08 武汉烛照科技有限公司 Video processing chip
CN111582130B (en) * 2020-04-30 2023-04-28 长安大学 Traffic behavior perception fusion system and method based on multi-source heterogeneous information
CN111780981B (en) * 2020-05-21 2022-02-18 东南大学 Intelligent vehicle formation lane change performance evaluation method
CN111666989A (en) * 2020-05-26 2020-09-15 三一专用汽车有限责任公司 Construction vehicle and object recognition method
CN111568437B (en) * 2020-06-01 2021-07-09 浙江大学 Non-contact type bed leaving real-time monitoring method
CN113759363B (en) * 2020-06-02 2023-09-19 杭州海康威视数字技术股份有限公司 Target positioning method, device, monitoring system and storage medium
CN111856441B (en) * 2020-06-09 2023-04-25 北京航空航天大学 Train positioning method based on vision and millimeter wave radar fusion
CN111753757B (en) * 2020-06-28 2021-06-18 浙江大华技术股份有限公司 Image recognition processing method and device
CN111953934B (en) * 2020-07-03 2022-06-10 北京航空航天大学杭州创新研究院 Target marking method and device
CN111845709B (en) * 2020-07-17 2021-09-10 燕山大学 Road adhesion coefficient estimation method and system based on multi-information fusion
CN111967525A (en) * 2020-08-20 2020-11-20 广州小鹏汽车科技有限公司 Data processing method and device, server and storage medium
CN112085952B (en) * 2020-09-07 2022-06-03 平安科技(深圳)有限公司 Method and device for monitoring vehicle data, computer equipment and storage medium
CN113033684A (en) * 2021-03-31 2021-06-25 浙江吉利控股集团有限公司 Vehicle early warning method, device, equipment and storage medium
CN113269121B (en) * 2021-06-08 2023-02-10 兰州大学 Fishing boat fishing state identification method based on fusion CNN model
CN113487529B (en) * 2021-07-12 2022-07-26 吉林大学 Cloud map target detection method for meteorological satellite based on yolk
CN113658427A (en) * 2021-08-06 2021-11-16 深圳英飞拓智能技术有限公司 Road condition monitoring method, system and equipment based on vision and radar
CN113807471B (en) * 2021-11-18 2022-03-15 浙江宇视科技有限公司 Radar and vision integrated vehicle identification method, device, equipment and medium
WO2023087248A1 (en) * 2021-11-19 2023-05-25 华为技术有限公司 Information processing method and apparatus
CN115379408B (en) * 2022-10-26 2023-01-13 斯润天朗(北京)科技有限公司 Scene perception-based V2X multi-sensor fusion method and device

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102231205A (en) * 2011-06-24 2011-11-02 北京戎大时代科技有限公司 Multimode monitoring device and method
US9429650B2 (en) * 2012-08-01 2016-08-30 Gm Global Technology Operations Fusion of obstacle detection using radar and camera
US9568611B2 (en) * 2014-08-20 2017-02-14 Nec Corporation Detecting objects obstructing a driver's view of a road
CN105205805A (en) * 2015-08-19 2015-12-30 奇瑞汽车股份有限公司 Vision-based intelligent vehicle transverse control method
CN108062864A (en) * 2016-11-09 2018-05-22 奥迪股份公司 A kind of traffic scene visualization system and method and vehicle for vehicle
CN107202983B (en) * 2017-05-19 2020-11-13 深圳佑驾创新科技有限公司 Automatic braking method and system based on image recognition and millimeter wave radar fusion
CN107235044B (en) * 2017-05-31 2019-05-28 北京航空航天大学 A kind of restoring method realized based on more sensing datas to road traffic scene and driver driving behavior
CN107807355A (en) * 2017-10-18 2018-03-16 轩辕智驾科技(深圳)有限公司 It is a kind of based on infrared and millimetre-wave radar technology vehicle obstacle-avoidance early warning system

Also Published As

Publication number Publication date
CN108983219A (en) 2018-12-11

Similar Documents

Publication Publication Date Title
CN108983219B (en) Fusion method and system for image information and radar information of traffic scene
CN111369541B (en) Vehicle detection method for intelligent automobile under severe weather condition
CN110487562B (en) Driveway keeping capacity detection system and method for unmanned driving
CN112215306B (en) Target detection method based on fusion of monocular vision and millimeter wave radar
JP4723582B2 (en) Traffic sign detection method
EP2574958B1 (en) Road-terrain detection method and system for driver assistance systems
CN107463890B (en) A kind of Foregut fermenters and tracking based on monocular forward sight camera
CN113156421A (en) Obstacle detection method based on information fusion of millimeter wave radar and camera
CN112396650A (en) Target ranging system and method based on fusion of image and laser radar
KR101569919B1 (en) Apparatus and method for estimating the location of the vehicle
CN108645375B (en) Rapid vehicle distance measurement optimization method for vehicle-mounted binocular system
CN108764108A (en) A kind of Foregut fermenters method based on Bayesian inference
CN113822221A (en) Target detection method based on antagonistic neural network and multi-sensor fusion
CN114495064A (en) Monocular depth estimation-based vehicle surrounding obstacle early warning method
CN114118252A (en) Vehicle detection method and detection device based on sensor multivariate information fusion
CN112183330B (en) Target detection method based on point cloud
CN117058646B (en) Complex road target detection method based on multi-mode fusion aerial view
CN116738211A (en) Road condition identification method based on multi-source heterogeneous data fusion
CN113449650A (en) Lane line detection system and method
CN115166717A (en) Lightweight target tracking method integrating millimeter wave radar and monocular camera
CN105160324B (en) A kind of vehicle checking method based on space of components relationship
JP4969359B2 (en) Moving object recognition device
JP2018124963A (en) Image processing device, image recognition device, image processing program, and image recognition program
CN116587978A (en) Collision early warning method and system based on vehicle-mounted display screen
WO2018143278A1 (en) Image processing device, image recognition device, image processing program, and image recognition program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20211123

Address after: 100176 901, 9th floor, building 2, yard 10, KEGU 1st Street, Beijing Economic and Technological Development Zone, Daxing District, Beijing

Patentee after: BEIJING TAGE IDRIVER TECHNOLOGY CO.,LTD.

Address before: 100191 No. 37, Haidian District, Beijing, Xueyuan Road

Patentee before: BEIHANG University