CN113156421A - Obstacle detection method based on information fusion of millimeter wave radar and camera - Google Patents

Obstacle detection method based on information fusion of millimeter wave radar and camera

Info

Publication number
CN113156421A
CN113156421A
Authority
CN
China
Prior art keywords
camera
target
gaussian distribution
coordinate system
millimeter wave
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110371841.3A
Other languages
Chinese (zh)
Inventor
陆音
张宇婷
杨楚瀛
杨佩佩
李清远
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Posts and Telecommunications
Original Assignee
Nanjing University of Posts and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Posts and Telecommunications filed Critical Nanjing University of Posts and Telecommunications
Priority to CN202110371841.3A priority Critical patent/CN113156421A/en
Publication of CN113156421A publication Critical patent/CN113156421A/en
Pending legal-status Critical Current

Classifications

    • G — PHYSICS
    • G01 — MEASURING; TESTING
    • G01S — RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 13/00 — Systems using the reflection or reradiation of radio waves, e.g. radar systems; analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S 13/86 — Combinations of radar systems with non-radar systems, e.g. sonar, direction finder
    • G01S 13/867 — Combination of radar systems with cameras
    • G01S 13/88 — Radar or analogous systems specially adapted for specific applications
    • G01S 13/93 — Radar or analogous systems specially adapted for anti-collision purposes
    • G01S 13/931 — Radar or analogous systems specially adapted for anti-collision purposes of land vehicles

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • General Physics & Mathematics (AREA)
  • Electromagnetism (AREA)
  • Radar Systems Or Details Thereof (AREA)

Abstract

The invention provides an obstacle detection method based on information fusion of a millimeter wave radar and a camera, which comprises: performing joint calibration of the millimeter wave radar and the camera to establish the conversion between the coordinate systems of the two sensors; calibrating the camera to obtain its internal and external parameters, and projecting the target position obtained by the radar onto the image obtained by the camera; fusing the millimeter wave radar data and the camera data in time; preprocessing the target signals returned by the radar to obtain effective targets, and generating a target region of interest for each effective target; detecting moving target obstacles from the image acquired by the camera; and calculating the coincidence degree between the target region of interest obtained from the millimeter wave radar and the moving-target-obstacle result obtained from the camera, finally determining whether a target obstacle exists according to this coincidence degree. Experimental results show that the fusion scheme correctly identifies obstacles, with high precision and good robustness.

Description

Obstacle detection method based on information fusion of millimeter wave radar and camera
Technical Field
The invention belongs to the technical field of target detection, and particularly relates to a method for detecting an obstacle based on information fusion of a millimeter wave radar and a camera.
Background
In recent years, with the development of transportation and technology, the intensity and density of road traffic have increased rapidly, leading to frequent traffic accidents. Research shows that more than 90% of traffic accidents could be avoided if a warning were effectively delivered to the driver 1 second in advance. Advanced Driver Assistance Systems (ADAS) have therefore emerged. The most important research direction within driver assistance is intelligent vehicle obstacle detection: the vehicle senses its surroundings by means of sensors, acquires environmental information in real time, and judges whether an obstacle exists, thereby avoiding collision accidents. Vehicles and pedestrians are the main obstacles in front of a vehicle traveling on a highway; obstacle detection is a research hotspot in the intelligent automobile field, and efficiently and accurately identifying pedestrians and vehicles is the key technical problem. Obstacle detection methods mainly comprise machine vision detection and radar detection.
Detection methods based on machine vision mainly include motion-feature detection, shape-feature detection, and feature and template matching. With the continued progress of autonomous driving and intelligent driver-assistance technologies, radar has also been widely applied to obstacle detection.
Machine vision offers a wide detection range and complete target information, but is easily affected by illumination changes; the millimeter wave radar penetrates rain and fog well, works stably, and can accurately acquire the depth information of an obstacle, but is insensitive to target characteristics such as texture and shape.
Disclosure of Invention
Aiming at the defects in the prior art, the invention provides an obstacle detection method based on information fusion of a millimeter wave radar and a camera.
In order to achieve the technical purpose, the invention adopts the following technical scheme.
In one aspect, the invention provides a method for detecting an obstacle based on information fusion of a millimeter wave radar and a camera, comprising the following steps: performing joint calibration of the millimeter wave radar and the camera to establish the conversion between their coordinate systems; calibrating the camera to obtain its internal and external parameters, and projecting the target position obtained by the radar onto the image obtained by the camera;
fusing millimeter wave radar data and camera data in time;
preprocessing a target signal returned by the radar to obtain an effective target and generating a target region of interest;
detecting a moving target obstacle based on the image acquired by the camera; and calculating the coincidence degree between the target region of interest and the moving-target-obstacle result, finally determining whether a target obstacle exists according to the coincidence degree.
Further, a specific method for detecting a moving target obstacle based on the image acquired by the camera is as follows: each pixel point in the background image is represented by K Gaussian distribution functions; for a pixel point X_t, its probability density function is represented by the K Gaussian distribution functions as:

P(X_t) = \sum_{i=1}^{K} \omega_{i,t} \, \eta(X_t, \mu_{i,t}, \Sigma_{i,t})    (1)

where P(X_t) is the probability of observing pixel point X_t, K is the number of Gaussian distribution functions, \omega_{i,t} is the weight of the i-th Gaussian distribution function at time t, and \eta(X_t, \mu_{i,t}, \Sigma_{i,t}) is the probability density of the i-th Gaussian distribution function at time t, defined by equation (2):

\eta(X_t, \mu_{i,t}, \Sigma_{i,t}) = \frac{1}{(2\pi)^{n/2} |\Sigma_{i,t}|^{1/2}} \exp\!\left(-\frac{1}{2}(X_t - \mu_{i,t})^T \Sigma_{i,t}^{-1} (X_t - \mu_{i,t})\right)    (2)

where \mu_{i,t} is the mean of the i-th Gaussian distribution function at time t; \Sigma_{i,t} is the covariance matrix of the i-th Gaussian distribution function at time t; X_t is an n-dimensional vector: when n = 1, X_t represents the gray value of the pixel point, and when n = 3, X_t represents its RGB values; and T denotes transposition;
The weight \omega, mean \mu, and covariance \Sigma of the K Gaussian distribution functions are initialized, and the pixel value X_t of each newly captured frame is matched against the K Gaussian distribution functions using the matching criterion:

|X_t - \mu_{i,t-1}| \le 2.5\,\sigma_{i,t-1}    (3)

where \sigma_{i,t-1} is the standard deviation of the i-th Gaussian distribution function at time t-1 and \mu_{i,t-1} is its mean at time t-1;
When expression (3) is satisfied, the pixel value matches one of the K Gaussian distribution functions, and the weight \omega, mean \mu, and variance \sigma^2 of that Gaussian distribution function are updated according to:

\omega_{i,t} = (1 - \alpha)\,\omega_{i,t-1} + \alpha
\mu_{i,t} = (1 - \rho)\,\mu_{i,t-1} + \rho X_t    (4)
\sigma_{i,t}^2 = (1 - \rho)\,\sigma_{i,t-1}^2 + \rho\,(X_t - \mu_{i,t})^T (X_t - \mu_{i,t})

where \omega_{i,t} is the weight of the i-th Gaussian distribution function at time t; \alpha is the set learning rate, 0 \le \alpha \le 1, whose size determines how quickly the background is updated; and \rho is the parameter learning rate, \rho_{i,t} \approx \alpha/\omega_{i,t}.
If no Gaussian distribution function matches X_t, the Gaussian distribution function with the minimum weight is replaced by a new one; the means and variances of the other Gaussian distribution functions remain unchanged, and their weights decay according to:

\omega_{i,t} = (1 - \alpha)\,\omega_{i,t-1}    (5)
After the weights of all Gaussian distribution functions are updated, they are normalized:

\omega_{i,t} = \omega_{i,t} \Big/ \sum_{j=1}^{K} \omega_{j,t}    (6)
the K Gaussian distribution functions are arranged according to omegai,ti,tSorting is carried out, and pixels represented by a set number of Gaussian distribution functions are selected from large to small in the sorting to serve as background pixels;
to pixel X againtMatching with the selected set number of Gaussian distribution functions according to the formula (3), if X istAnd matching with one of the selected Gaussian distribution functions with the set number, wherein the pixel point is a background pixel, and otherwise, the pixel point is a foreground pixel.
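The mixture-of-Gaussians background subtraction procedure above can be sketched in Python as follows. This is a simplified grayscale (n = 1) implementation with assumed parameter values, and the parameter learning rate ρ is approximated by α for brevity:

```python
import numpy as np

class MixtureBackgroundModel:
    """Per-pixel mixture-of-Gaussians background model for grayscale frames;
    parameter values (K, alpha, initial variance, T_w) are illustrative."""

    def __init__(self, shape, K=3, alpha=0.05, init_var=36.0, T_w=0.7):
        self.K, self.alpha, self.T_w = K, alpha, T_w
        self.w = np.full(shape + (K,), 1.0 / K)     # weights omega_{i,t}
        self.mu = np.zeros(shape + (K,))            # means mu_{i,t}
        self.var = np.full(shape + (K,), init_var)  # variances sigma^2_{i,t}

    def apply(self, frame):
        x = np.asarray(frame, dtype=np.float64)[..., None]
        # Eq. (3): |X_t - mu_{i,t-1}| <= 2.5 * sigma_{i,t-1}
        match = np.abs(x - self.mu) <= 2.5 * np.sqrt(self.var)
        match &= np.cumsum(match, axis=-1) == 1     # keep first match only
        matched_any = match.any(axis=-1)

        rho = self.alpha                            # simplified: rho ~ alpha
        # Eq. (4) for the matched component; Eq. (5) decay for the rest
        self.w = (1 - self.alpha) * self.w + self.alpha * match
        self.mu = np.where(match, (1 - rho) * self.mu + rho * x, self.mu)
        self.var = np.where(match,
                            (1 - rho) * self.var + rho * (x - self.mu) ** 2,
                            self.var)

        # no match at all: replace the lowest-weight Gaussian with a new one
        worst = np.argmin(self.w, axis=-1)[..., None] == np.arange(self.K)
        repl = ~matched_any[..., None] & worst
        self.mu = np.where(repl, x, self.mu)
        self.var = np.where(repl, 36.0, self.var)
        self.w = np.where(repl, 0.05, self.w)

        self.w /= self.w.sum(axis=-1, keepdims=True)  # normalise the weights

        # sort by omega/sigma and keep the top components as background
        order = np.argsort(-self.w / np.sqrt(self.var), axis=-1)
        w_sorted = np.take_along_axis(self.w, order, axis=-1)
        keep = np.cumsum(w_sorted, axis=-1) - w_sorted < self.T_w
        m_sorted = np.take_along_axis(match, order, axis=-1)
        return ~(m_sorted & keep).any(axis=-1)      # True = foreground pixel
```

Feeding a stable background yields an empty foreground mask; a sudden intensity change is flagged as foreground until the model adapts.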
Still further, the set number is determined by selecting, in the descending ordering, the smallest number of Gaussian distribution functions whose summed weights exceed a set threshold:

B = \arg\min_{b} \left( \sum_{k=1}^{b} \omega_{k,t} > T_w \right)    (7)

where \omega_{k,t} is the weight of the k-th Gaussian distribution function at time t and T_w is the weight threshold.
Further, the coincidence degree between the target region of interest and the moving-target-obstacle result is calculated as:

c = S(R_{coincide}) / S(R_{ROI}),\qquad R_{coincide} = R_{ROI} \cap R_c    (8)

where R_{coincide} is the overlap of the radar detection area and the machine vision detection area, R_{ROI} is the region of interest, R_c is the machine vision detection area, and c is the coincidence degree.
Further, the method for jointly calibrating the millimeter wave radar and the camera to establish the coordinate system conversion of the two sensors comprises the following steps:
establishing 5 coordinate systems which are respectively a millimeter wave radar coordinate system, a three-dimensional world coordinate system, a camera coordinate system, an image coordinate system and a camera pixel coordinate system;
establishing a conversion relation between a millimeter wave radar coordinate system and a world coordinate system, establishing a conversion relation between a camera coordinate system and the world coordinate system, establishing a coordinate conversion relation between the camera coordinate system and an image coordinate system, establishing a conversion relation between the image coordinate system and a pixel coordinate system, and establishing a conversion relation between the world coordinate system and the pixel coordinate system.
Further, the method of generating the target region of interest is as follows:

h = fH / (L\cos\theta)
x_1 = x - (W/H)\,h/2,\quad y_1 = y - h/2    (9)
x_2 = x + (W/H)\,h/2,\quad y_2 = y + h/2

where (x_1, y_1) and (x_2, y_2) are the image coordinates of the upper-left and lower-right corners of the dynamic region of interest, (x, y) are the pixel coordinates of the vehicle centroid, W/H is the common aspect ratio of a vehicle, H is the vehicle height, h is the height of the region of interest, L is the relative distance between the target obstacle and the radar, \theta is the relative angle between the target obstacle and the radar, and f is the focal length of the camera.
The invention also provides a computer-readable storage medium storing a computer program which, when executed by a processor, carries out the steps of the method according to any one of the possible embodiments of the above solution.
The invention has the following beneficial technical effects:
the obstacle detection method based on the information fusion of the millimeter wave radar and the camera provided by the invention firstly uses time synchronization to enable the two sensors to obtain target information at the same time according to different working frequencies of the two sensors, and then projects the target information obtained by the millimeter wave radar to an image at the same time through coordinate conversion of joint calibration to form an interested area. And detecting the moving target by using a moving target detection method of a Gaussian mixture model, finally calculating the contact ratio of the two interested areas, and judging whether the target is a pedestrian or a vehicle. The experimental result shows that the method provided by the invention makes up the defects of a single sensor in the obstacle detection, can correctly identify the obstacle, and has higher precision and good robustness.
Drawings
FIG. 1 is a diagram of the world coordinate system, camera coordinate system, and radar coordinate system position relationships in an embodiment;
FIG. 2 is a schematic diagram illustrating a transformation between a world coordinate system and a millimeter wave radar coordinate system according to an embodiment;
FIG. 3 is a schematic diagram of a camera calibration checkerboard in an exemplary embodiment;
FIG. 4 is a schematic diagram of spatial synchronization between a camera and a millimeter wave radar in an exemplary embodiment;
FIG. 5 is a flow diagram illustrating millimeter wave radar and machine vision time synchronization in an exemplary embodiment;
FIG. 6 is a diagram illustrating radar-based target detection results in an exemplary embodiment, where 6(a) is an original target signal detected by a radar and 6(b) is a filtered valid target;
FIG. 7 is a schematic illustration of a region of interest of a radar target in an exemplary embodiment, wherein 7(a) shows the target as a pedestrian and 7(b) shows the target as a vehicle;
FIG. 8 is the image target detection result in the embodiment, in which 8(a) is the 95 th frame vehicle original image, 8(b) is the 117 th frame pedestrian original image, 8(c) is the 95 th frame GMM detection foreground image, 8(d) is the 117 th frame GMM detection foreground image, 8(e) is the 95 th frame detection result, and 8(f) is the 117 th frame detection result;
fig. 9 is a diagram illustrating a single target pedestrian detection result according to the embodiment, where 9(a) is a 144 th frame machine vision detection map, 9(b) is a 144 th frame radar detection map, and 9(c) is a 144 th frame fusion detection map;
FIG. 10 is a diagram of multiple target pedestrian detection results in an exemplary embodiment, wherein 10(a) is a 174 th machine vision detection diagram; 10(b) is a 174 th frame radar detection map, and 10(c) is a 174 th frame fusion detection map;
FIG. 11 is a diagram illustrating the results of a detection of a target vehicle in an exemplary embodiment, where 11(a) is a machine vision detection map at frame 87, 11(b) is a radar detection map at frame 87, and 11(c) is a fusion detection map at frame 87;
fig. 12 is a diagram illustrating a target detection result of a vehicle in an embodiment, where 12(a) is a 107 th frame radar detection map, and 12(b) is a 107 th frame fusion detection map.
Detailed Description
The invention is further described below with reference to the figures and the specific examples.
Example 1: the obstacle detection method based on the information fusion of the millimeter wave radar and the camera comprises the following steps:
(1) performing combined calibration on the millimeter wave radar and the camera to realize the establishment of the conversion of the coordinate systems where the two sensors are located; calibrating a camera to obtain internal and external parameters of the camera;
(2) fusing millimeter wave radar data and camera data in time;
(3) preprocessing a target signal returned by the radar to obtain an effective target, and generating a target region of interest aiming at the effective target;
(4) detecting a moving target obstacle based on an image acquired by a camera;
(5) calculating the coincidence degree between the target region of interest obtained from the millimeter wave radar and the moving-target-obstacle result obtained from the camera, and finally determining whether a target obstacle exists according to the coincidence degree.
Wherein the step (1) specifically comprises the following steps: the purpose of spatial fusion of the millimeter wave radar and the camera is to make an object in the three-dimensional world detected by the radar correspond to the same object in the image; since the radar and the camera are sensors with different coordinate systems, a conversion model between the coordinate systems of the two sensors must be established to realize their spatial fusion. The invention involves 5 coordinate systems in total: the millimeter wave radar coordinate system, the three-dimensional world coordinate system, the camera coordinate system, the image coordinate system, and the camera pixel coordinate system. FIG. 1 shows the positional relationship between the world coordinate system O_w-X_wY_wZ_w, the camera coordinate system O_c-X_cY_cZ_c, and the millimeter wave radar coordinate system O_r-X_rY_rZ_r. As shown, the millimeter wave radar coordinate system coincides with the XOY plane of the camera coordinate system; the X_rO_rY_r plane of the radar coordinate system is parallel to the X_wO_wY_w plane of the world coordinate system, with a height difference Y_1 in the Y-axis direction. The world coordinate system coincides with the XOZ plane of the camera coordinate system; the X_rO_rZ_r plane of the radar coordinate system is parallel to the X_cO_cZ_c plane of the camera coordinate system, at a distance Z_1 in the Z-axis direction.
The conversion relation between the millimeter wave radar coordinate system and the world coordinate system is:

X_w = L\sin\theta,\quad Y_w = Y_1,\quad Z_w = L\cos\theta + Z_1    (10)

where L is the relative distance between the target obstacle and the radar, Y_1 is the height difference in the Y-axis direction between the X_rO_rY_r plane of the radar coordinate system and the X_wO_wY_w plane of the world coordinate system, Z_1 is the distance in the Z-axis direction between the X_rO_rZ_r plane of the radar coordinate system and the X_cO_cZ_c plane of the camera coordinate system, and \theta is the relative angle between the target obstacle and the radar.
Fig. 2 shows a schematic diagram of the conversion between the world coordinate system and the millimeter-wave radar coordinate system.
The transformation between the camera coordinate system and the world coordinate system can be expressed by a rotation matrix R and a translation vector Q. For any point P in three-dimensional space, with coordinates (X_c, Y_c, Z_c) in the camera coordinate system and (X_w, Y_w, Z_w) in the world coordinate system, the coordinate transformation between the camera coordinate system and the world coordinate system is:

[X_c, Y_c, Z_c, 1]^T = \begin{bmatrix} R & Q \\ 0^T & 1 \end{bmatrix} [X_w, Y_w, Z_w, 1]^T = M_2\,[X_w, Y_w, Z_w, 1]^T    (11)

where M_2 is the external parameter matrix; R is a 3×3 orthonormal rotation matrix; Q is a three-dimensional translation vector; and the vector 0^T = [0\ 0\ 0].
The camera coordinate system is a three-dimensional space while the image coordinate system is a two-dimensional space, so the coordinate transformation between them is many-to-one; it can be expressed as:

x = f X_c / Z_c,\quad y = f Y_c / Z_c    (12)

This is the conversion from the camera coordinate system to the image coordinate system, where (x, y) are the coordinates in the image coordinate system, (X_c, Y_c, Z_c) are the camera coordinates, and f is the focal length of the camera.
The conversion between the image coordinate system and the pixel coordinate system is a matrix operation, expressed in matrix form as:

[\mu, v, 1]^T = \begin{bmatrix} 1/d_x & 0 & \mu_0 \\ 0 & 1/d_y & v_0 \\ 0 & 0 & 1 \end{bmatrix} [x, y, 1]^T    (13)

where d_x and d_y are the physical dimensions of a pixel along the x and y axes and (\mu_0, v_0) is the principal point.
one point P (X) in the three-dimensional world coordinate system can be obtained by integrating the conversion relationship between the millimeter wave radar coordinate system and the world coordinate system (equation (10)), the conversion relationship between the camera coordinate system and the world coordinate system (equation (11)), the coordinate conversion relationship between the camera coordinate system and the image coordinate system (equation (12)), and the conversion relationship between the image coordinate system and the pixel coordinate system (equation (13))w,Yw,Zw) The transformation relationship of the projection point P' (μ, v) to the pixel coordinate system of the two-dimensional imaging plane is:
Figure BDA0003009582970000102
in the formula, ax=f/dx,ay=f/dy,(μ0,v0) Is a representation of the principal point of the imaging plane in a pixel coordinate system, using a matrix M1Representing these several parameters, and all parameters in the matrixThe number is an internal parameter of the camera, and is determined by the internal structure of the camera. Using M as matrix containing rotation matrix R and translation vector Q2It is shown that R and Q are external parameters of the camera, determined by the mounting position of the camera in the world coordinate system. M is a projection matrix, and both the internal parameters and the external parameters of the camera can be obtained by a camera calibration method.
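As an illustrative sketch of this projection chain (an assumed implementation: the radar-to-world step uses X_w = L sin θ, Z_w = L cos θ with the offsets Y_1 and Z_1, and M_1, M_2 are the intrinsic and extrinsic matrices described above):

```python
import numpy as np

def radar_to_pixel(L, theta, M1, M2, Y1=0.0, Z1=0.0):
    """Project a radar return (range L, azimuth theta) to pixel coordinates
    by chaining the conversions radar -> world -> camera -> pixel.
    M1 is the 3x4 intrinsic matrix, M2 the 4x4 extrinsic matrix [R Q; 0 1];
    the radar-to-world form below is an assumption consistent with the text."""
    Pw = np.array([L * np.sin(theta), Y1, L * np.cos(theta) + Z1, 1.0])
    uvw = M1 @ M2 @ Pw           # equals Z_c * [u, v, 1]^T
    return uvw[:2] / uvw[2]      # divide out the depth Z_c

# Example with identity extrinsics and an assumed 800 px focal length:
M1 = np.array([[800.0, 0, 320, 0], [0, 800.0, 240, 0], [0, 0, 1, 0]])
M2 = np.eye(4)
u, v = radar_to_pixel(10.0, 0.0, M1, M2)   # target dead ahead at 10 m
```

With a target dead ahead, the projection lands on the assumed principal point (320, 240); a positive azimuth moves it to the right of the image center.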
The internal and external parameters of the camera are obtained by calibrating the camera. The invention adopts Zhang Zhengyou's calibration method, which is simple to operate, highly adaptable, and accurate. It is a linear calibration method for a nonlinear camera model that uses a two-dimensional planar target imaged from several different viewpoints to calibrate the camera. In this embodiment, a 12 × 12 checkerboard calibration board with 60 mm × 60 mm squares is used (the calibration checkerboard is shown in FIG. 3). First, 15 checkerboard images in different directions and poses are captured by the camera; calibration is then completed simply, accurately, and effectively with the MATLAB calibration toolbox based on Zhang's calibration principle, yielding the internal and external parameters of the camera.
The internal and external parameters of the camera are obtained as follows:
Internal parameters: [intrinsic parameter matrix; given as an image in the original]

External parameters:

Rotation matrix: [3 × 3 matrix R; given as an image in the original]

Translation vector: Q = [221.901074, 146.508139, 6532.543130]^T
After the internal and external parameters of the camera are obtained and substituted into equations (10) and (14), the radar coordinates can be converted into camera pixel coordinates, and the target position acquired by the radar can be projected into the image. As shown in FIG. 4, the red points in the figure are the radar target positions projected onto the image.
The step (2) specifically comprises the following steps: fusing the millimeter wave radar and camera data in time means that the millimeter wave radar and the camera process obstacle information from the same moment, i.e., the sensors are synchronized in time. Because different sensors have different sampling frequencies, the information they collect is not from the same moment. The sampling frame rate of the selected camera is 10 frames/second and the sampling rate of the millimeter wave radar is 20 frames/second, so the two collected data streams are not simultaneous. The invention takes the camera sensor, which has the lower working frequency, as the reference, samples the information of the two sensors synchronously, and realizes the temporal fusion of their data in a multithreaded synchronization mode. FIG. 5 is a schematic diagram of the time fusion of the two sensors. As shown in FIG. 5, with the camera's data acquisition time as the reference, after the camera is triggered, the information acquired by the camera at the current moment is collected, and the acquisition thread of the millimeter wave radar is then triggered. Similarly, after the radar is triggered, the information detected by the radar at the current moment is collected; the image and millimeter wave radar data of the current moment are then combined and appended to the tail of the buffer queue, and the data-processing main thread synchronously processes the image and radar data.
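A minimal Python sketch of this multithreaded time synchronization (all function and variable names are illustrative): the camera thread paces acquisition at the lower rate, each camera trigger also reads the latest radar scan, and a buffer queue feeds the data-processing thread.

```python
import queue
import threading
import time

frame_queue = queue.Queue()  # buffer of time-aligned (timestamp, image, radar) tuples

def camera_loop(get_image, get_radar, period=0.1, n_frames=5):
    # Camera trigger (10 fps -> period 0.1 s) also triggers a radar read,
    # so both samples share one acquisition time.
    for _ in range(n_frames):
        t = time.monotonic()
        image = get_image()
        radar = get_radar()          # latest scan from the faster 20 fps radar
        frame_queue.put((t, image, radar))
        time.sleep(period)

def fusion_loop(results, n_frames=5):
    # Data-processing main thread consumes synchronized pairs from the queue.
    for _ in range(n_frames):
        t, image, radar = frame_queue.get()
        results.append((image, radar))

results = []
cam = threading.Thread(target=camera_loop,
                       args=(lambda: "img", lambda: "scan", 0.001, 5))
fuse = threading.Thread(target=fusion_loop, args=(results, 5))
cam.start(); fuse.start()
cam.join(); fuse.join()
```

The queue decouples acquisition from processing, so a slow processing step does not stall either sensor thread.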
The step (3) specifically comprises the following steps:
in an actual driving environment, due to unstable working performance of a radar or uneven distribution of target echo energy and random jolt and swing in a vehicle driving process, noise and interference signals exist in signals returned by the radar, so that output target information contains empty targets and false targets or target frame break conditions cause target discontinuity, all target signals output by the radar are projected onto an image, as shown in fig. 6(a), it can be seen that original target signals detected by the radar contain all detected targets including obstacles such as trees, guardrails and the like, and therefore the target signals returned by the radar must be preprocessed first to remove the noise and obtain effective and stable target information. For an original target detected by a radar, firstly, a vehicle transverse and longitudinal distance threshold value is set, the transverse distance threshold value is set to be 10m wide, a rectangular range which is 100m away from the longitudinal distance threshold value is used as a target identification area, the identification area is analyzed according to the speed and relative position relation, a target with the absolute value of the speed v smaller than 0.2 is regarded as a static target, static targets such as isolation guardrails and trees are removed accordingly, and effective target information obtained by the radar is obtained, the table 1 is initially selected effective target part information, in the table, ID is a target number, L is a relative distance between a target obstacle and the radar, theta is a relative angle between the target obstacle and the radar, v is a relative speed between the target obstacle and the radar, and p is the reflection intensity of a front target. The effective target information is finally obtained as shown in fig. 6 (b).
TABLE 1 Primary selection of valid target signals
[Table 1 is given as an image in the original]
In actual processing, due to the large data volume of the acquired image, the whole image cannot be searched globally, and a region of interest needs to be established to narrow the search range. The millimeter wave radar returns the position of a target signal in a point form, and a target region of interest is generated according to the position of the point in practical application. According to the invention, the input of the centroid point of the front vehicle is obtained by the radar, the conversion relation between the radar coordinate of the centroid point and the image pixel coordinate is established comprehensively, then the vehicle is projected to the pixel plane according to the aspect ratio of the common vehicle, the dynamic region of interest is established, the searching time on the image is shortened, and the calculated amount is reduced.
The dynamic region of interest is generated as follows:

h = fH / (L\cos\theta)
x_1 = x - (W/H)\,h/2,\quad y_1 = y - h/2
x_2 = x + (W/H)\,h/2,\quad y_2 = y + h/2

In the formula, (x_1, y_1) and (x_2, y_2) are the image coordinates of the upper-left and lower-right corners of the dynamic region of interest, and (x, y) are the pixel coordinates of the vehicle centroid point. W/H is the common aspect ratio of a vehicle: statistically, vehicle aspect ratios lie in the range 0.7–2.0, with the most common vehicle types in the range 0.7–1.3, so the maximum width-to-height ratio W/H = 1.3 of the most common vehicle types is adopted. The regions of interest formed on the image by coordinate transformation of targets at different distances in the radar coordinate system are shown in FIG. 7.
The step (4) specifically comprises the following steps:
After the radar-detected target region of interest is obtained, the moving target obstacle must also be detected by image processing; the two detection results are then compared, and finally whether a target obstacle exists is determined.
The moving target obstacle detection based on the image information mainly detects pedestrians and vehicles on the road surface, and the pedestrians and vehicles are generally represented by rectangular frames. The commonly used moving object detection methods mainly include an inter-frame difference method, an optical flow method, a background subtraction method and the like. The background subtraction method is the most common method in the field of moving object detection due to its simple principle and wide applicability. Background subtraction is an effective algorithm for detecting moving objects, and the basic idea is to approximate the pixel value of a background image by using a parameter model of the background, and perform differential comparison between a current frame and the background image to detect a moving region, wherein a pixel region with a larger difference is considered as a moving region, and a pixel region with a smaller difference is considered as a background region. The key to detect a moving object by using a background subtraction method is to select a suitable background modeling method, and several commonly used background modeling methods currently include: mean filtering, statistical averaging, Gaussian mixture model.
The step (5) specifically comprises the following steps: for the same frame of image, a target region of interest (ROI) can be detected by the millimeter wave radar, while a target rectangular region can also be obtained through machine vision. Whether the two rectangular regions detected by radar and machine vision overlap is verified; if they overlap, both the radar and machine vision have detected a target in the region. For the overlapped region, the contact ratio, i.e. the percentage of the overlapped area to the target region of interest, needs to be calculated. The contact ratio is calculated as follows:
c = S(R_coincide) / S(R_ROI) × 100%
wherein R_coincide is the part where the radar detection area overlaps the machine vision detection area, R_ROI is the target region of interest (i.e. the radar detection area), R_c is the machine vision detection area, and c is the contact ratio. Whether the target obstacle is a pedestrian or a vehicle is judged from the contact ratio: according to the experimental results, if the contact ratio is larger than a set threshold, the target is judged to be a vehicle; if it is less than the set threshold, it is judged to be a pedestrian.
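A minimal sketch of the contact-ratio calculation and the pedestrian/vehicle decision might look as follows. The 0.5 decision threshold is an assumed placeholder: the text only states that a threshold chosen from experimental results is used.

```python
def coincidence(roi, vis, vehicle_thresh=0.5):
    """Contact ratio between the radar ROI and the machine-vision
    rectangle, as the share of the ROI area covered by their overlap.
    Rectangles are (x1, y1, x2, y2)."""
    ox1, oy1 = max(roi[0], vis[0]), max(roi[1], vis[1])
    ox2, oy2 = min(roi[2], vis[2]), min(roi[3], vis[3])
    ow, oh = max(0, ox2 - ox1), max(0, oy2 - oy1)   # clamp: no overlap -> 0
    roi_area = (roi[2] - roi[0]) * (roi[3] - roi[1])
    c = (ow * oh) / roi_area
    label = "vehicle" if c > vehicle_thresh else "pedestrian"
    return c, label

print(coincidence((0, 0, 100, 100), (40, 0, 140, 100)))  # (0.6, 'vehicle')
```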
Example 2: on the basis of the above embodiments, in this embodiment, a background subtraction method based on a gaussian mixture model is used to distinguish a background from a foreground, detect a moving object, and determine whether an obstacle exists in front of a vehicle.
In the background subtraction method, the gaussian mixture model method is the most widely used background modeling method, and can adaptively update the background.
The background modeling process of the Gaussian mixture model comprises background model initialization, background model parameter updating, background model selection, and moving object extraction. The core idea of the Gaussian mixture model is that each pixel point in the background image is represented by K Gaussian distributions, where K is generally 3 to 5. For a given pixel point X_t, its mathematical model is represented by the K Gaussian distributions as:
P(X_t) = Σ_{i=1}^{K} ω_{i,t} · η(X_t, μ_{i,t}, Σ_{i,t})    (1)
wherein η(X_t, μ_{i,t}, Σ_{i,t}) is the density function of the i-th Gaussian distribution at time t, defined by equation (2):
η(X_t, μ_{i,t}, Σ_{i,t}) = (2π)^{-n/2} |Σ_{i,t}|^{-1/2} exp( -(1/2) (X_t - μ_{i,t})^T Σ_{i,t}^{-1} (X_t - μ_{i,t}) )    (2)
In the formula, K is the number of Gaussian distributions; ω_{i,t} is the weight of the i-th Gaussian distribution at time t; μ_{i,t} is the mean of the i-th Gaussian distribution at time t; Σ_{i,t} is the covariance matrix of the i-th Gaussian distribution at time t. X_t is an n-dimensional vector: when n = 1, X_t represents the gray value of the pixel point, and when n = 3, X_t represents its RGB values.
The weights ω, means μ and covariances Σ of the K Gaussian models are initialized, and the pixel value X_t of each newly captured frame is matched against the K Gaussian distributions of the mixture model respectively, with the matching formula:
|X_t - μ_{i,t-1}| ≤ 2.5 σ_{i,t-1}    (3)
When equation (3) is satisfied, the pixel value matches one of the K models, and the weight ω, mean μ and variance σ² of the matched Gaussian distribution are updated according to the following formulas:
ω_{i,t} = (1 - α)ω_{i,t-1} + α
μ_{i,t} = (1 - ρ)μ_{i,t-1} + ρ X_t    (4)
σ²_{i,t} = (1 - ρ)σ²_{i,t-1} + ρ (X_t - μ_{i,t})^T (X_t - μ_{i,t})
Here α is the user-defined learning rate, 0 ≤ α ≤ 1; the size of α determines the update speed of the background. ρ is the parameter learning rate, with ρ_{i,t} ≈ α/ω_{i,t}.
If no Gaussian distribution matches X_t, the Gaussian function with the minimum weight is replaced by a new Gaussian function; the means and variances of the other Gaussian functions are kept unchanged, and the weights are attenuated according to the following formula:
ω_{i,t} = (1 - α)ω_{i,t-1}    (5)
After updating all the weights of the Gaussian distributions, the weights are normalized:
ω_{i,t} = ω_{i,t} / Σ_{j=1}^{K} ω_{j,t}    (6)
The K Gaussian distributions are sorted by ω_{i,t}/σ_{i,t} from large to small; the larger ω_{i,t}/σ_{i,t} is, the more likely the distribution is to represent background. In this embodiment, to improve recognition accuracy, the value of B is obtained from equation (7), and the first B distributions in this order are taken as background. X_t is then matched against these B Gaussian distributions according to formula (3); if X_t matches one of the first B Gaussian distributions, the pixel point is determined to be a background point, otherwise a foreground point. Moving object detection is thus completed.
B = argmin_b ( Σ_{k=1}^{b} ω_{k,t} > T_w )    (7)
In the formula, T_w is the weight threshold, T_w ∈ (0.5, 1).
In other embodiments, the number of gaussian distribution functions may be selected by other methods, which are not limited herein.
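As an illustration of equations (3) to (7), a per-pixel mixture model for grayscale values (n = 1) can be sketched as below. The parameter values (K, α, T_w, the initial variance) and the weight handling of a replaced component are assumptions for the sketch, not the patent's exact choices:

```python
import numpy as np

class PixelGMM:
    """Per-pixel mixture of K Gaussians following equations (3)-(7):
    match if |x - mu| <= 2.5*sigma, update the matched component,
    replace the lightest component on no match, renormalise weights,
    and call the pixel background if it matches one of the first B
    components ranked by omega/sigma."""

    def __init__(self, K=3, alpha=0.01, T_w=0.7, init_sigma=30.0):
        self.K, self.alpha, self.T_w = K, alpha, T_w
        self.w = np.full(K, 1.0 / K)
        self.mu = np.zeros(K)
        self.sigma = np.full(K, init_sigma)

    def update(self, x):
        d = np.abs(x - self.mu)
        matched = d <= 2.5 * self.sigma            # equation (3)
        if matched.any():
            i = int(np.argmax(matched))            # first matching component
            rho = self.alpha / max(self.w[i], 1e-6)
            self.w = (1 - self.alpha) * self.w
            self.w[i] += self.alpha                # matched weight gains alpha
            self.mu[i] = (1 - rho) * self.mu[i] + rho * x          # eq. (4)
            var = (1 - rho) * self.sigma[i] ** 2 + rho * (x - self.mu[i]) ** 2
            self.sigma[i] = np.sqrt(var)
        else:                                      # replace lightest Gaussian
            i = int(np.argmin(self.w))
            self.mu[i], self.sigma[i] = x, 30.0
            self.w = (1 - self.alpha) * self.w     # equation (5)
        self.w /= self.w.sum()                     # equation (6)

        order = np.argsort(-self.w / self.sigma)   # rank by omega/sigma
        csum = np.cumsum(self.w[order])
        B = int(np.searchsorted(csum, self.T_w)) + 1   # equation (7)
        background = order[:B]
        return bool(matched.any() and int(np.argmax(matched)) in background)

gmm = PixelGMM()
for _ in range(100):            # a stable gray value becomes background
    is_bg = gmm.update(100.0)
print(is_bg)  # True
```

Note the replaced component here simply keeps the smallest weight; other formulations initialise it with a fixed low weight and high variance.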
The GMM-based background subtraction method can describe a bimodal or multimodal pixel-value distribution, and can accurately model a relatively complex background with slow illumination change and small-amplitude repetitive motion. This method is adopted to segment the foreground and background in the video image and extract the moving foreground objects. The invention selects the 95th and 117th frames of the video for experiments: figures 8(a) and 8(b) are the original images; foreground extraction with the Gaussian mixture model algorithm yields the GMM binarized foreground detection results shown in figures 8(c) and 8(d); finally, image morphology operations fit the extracted target contours into the machine-vision rectangular frames shown in figures 8(e) and 8(f).
In order to verify the effectiveness of the algorithm of the invention in detecting obstacles, experimental verification was carried out. The experimental equipment consists of a Delphi ESR (electronically scanning radar) millimeter wave radar with a working frequency of 20 Hz; a Hikvision DS-2CD5032FWD-(A)(P) camera with a sampling frequency of 10 Hz; and a computer with an Intel(R) Core(TM) i5-8250U 1.6 GHz CPU. The method is developed on the OpenCV platform.
Obstacle information and two video segments at the same moment are first obtained through the millimeter wave radar and the camera; the algorithm of the invention fuses this information to obtain fused projection images, and the contact ratio is finally calculated from the projection images. The 144th and 174th frames of the first video segment and the 87th and 107th frames of the second segment are selected for the fusion experiment, yielding detection results based on machine vision alone, millimeter wave radar alone, and millimeter wave radar fused with machine vision:
the first-stage video detection results are shown in fig. 9 and 10, and the second-stage video detection results are shown in fig. 11 and 12.
The results of the contact ratio calculation are shown in the following table, in which 10 experiments are performed at different distances with human and vehicle targets, respectively.
TABLE 2 coincidence degree of radar and machine vision detection area
[Table 2 image not reproduced: it lists the contact ratios measured over 10 experiments at different distances for pedestrian and vehicle targets.]
As Table 2 shows, when the obstacle target is a vehicle, the contact ratio of the two sensors' detection areas is 55%-72%; when the obstacle target is a pedestrian, it is 25%-32%. Both groups of data are relatively stable, and their contact ratios differ markedly, so whether the obstacle target is a pedestrian or a vehicle can be judged from the value of the contact ratio.
The invention provides an obstacle detection method based on the fusion of millimeter wave radar and machine vision information. First, time synchronization is used to account for the different working frequencies of the two sensors, so that both obtain target information at the same moment; the target information obtained by the millimeter wave radar is then projected onto the image at the same moment through the coordinate conversion of the joint calibration, forming a region of interest. The moving target is detected by the Gaussian-mixture-model moving target detection method, the contact ratio of the two regions of interest is finally calculated, and the target is judged to be a pedestrian or a vehicle. Experimental results show that the fusion scheme can correctly identify obstacles, with high precision and good robustness.
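For the time-synchronization step, one simple approach consistent with the description is to pair each 10 Hz camera frame with the nearest 20 Hz radar measurement; the tolerance value below is an assumed parameter:

```python
def synchronize(radar_stamps, camera_stamps, tol=0.05):
    """Pair each camera frame with the nearest-in-time radar measurement.

    Matching each (slower) camera frame to the closest radar timestamp is
    one simple way to align a 20 Hz radar with a 10 Hz camera. `tol`
    (seconds) is an assumed maximum allowed time offset.
    """
    pairs = []
    for t_cam in camera_stamps:
        t_rad = min(radar_stamps, key=lambda t: abs(t - t_cam))
        if abs(t_rad - t_cam) <= tol:
            pairs.append((t_rad, t_cam))
    return pairs

radar = [i * 0.05 for i in range(8)]    # 20 Hz timestamps
camera = [i * 0.10 for i in range(4)]   # 10 Hz timestamps
print(synchronize(radar, camera))       # every camera frame finds a radar hit
```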
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (7)

1. The obstacle detection method based on the information fusion of the millimeter wave radar and the camera is characterized by comprising the following steps of:
carrying out combined calibration on the millimeter wave radar and the camera to realize the conversion of the coordinate systems of the millimeter wave radar and the camera; calibrating a camera to obtain internal and external parameters of the camera, and projecting a target position obtained by a radar onto an image obtained by the camera;
fusing millimeter wave radar data and camera data in time;
preprocessing a target signal returned by the radar to obtain an effective target and generate a target region of interest;
detecting a moving target obstacle based on an image acquired by a camera; and calculating the target contact ratio in the target region of interest and moving target obstacle result, and finally determining whether the obstacle is the target obstacle according to the target contact ratio.
2. The obstacle detection method based on the information fusion of the millimeter wave radar and the camera as claimed in claim 1, wherein the specific method for detecting the moving target obstacle based on the image obtained by the camera is as follows: each pixel point in the background image is expressed by K Gaussian distribution functions; for a certain pixel point X_t, the probability density function is represented by the K Gaussian distribution functions as:
P(X_t) = Σ_{i=1}^{K} ω_{i,t} · η(X_t, μ_{i,t}, Σ_{i,t})    (1)
wherein P(X_t) represents the probability distribution of the pixel point X_t, K is the number of Gaussian distribution functions, ω_{i,t} is the weight of the i-th Gaussian distribution function at time t, and η(X_t, μ_{i,t}, Σ_{i,t}) is the probability density function of the i-th Gaussian distribution function at time t, defined by equation (2):
η(X_t, μ_{i,t}, Σ_{i,t}) = (2π)^{-n/2} |Σ_{i,t}|^{-1/2} exp( -(1/2) (X_t - μ_{i,t})^T Σ_{i,t}^{-1} (X_t - μ_{i,t}) )    (2)
wherein μ_{i,t} is the mean of the i-th Gaussian distribution function at time t; Σ_{i,t} is the covariance matrix of the i-th Gaussian distribution function at time t; X_t is an n-dimensional vector: when n = 1, X_t represents the gray value of the pixel point, and when n = 3, X_t represents the RGB values of the pixel point; T denotes transposition;
initializing the weights ω, means μ and covariances Σ of the K Gaussian distribution functions, and matching the pixel value X_t of each newly captured frame against the K Gaussian distribution functions respectively, wherein the matching formula is:
|X_t - μ_{i,t-1}| ≤ 2.5 σ_{i,t-1}    (3)
wherein σ_{i,t-1} is the variance of the i-th Gaussian distribution function at time t-1, and μ_{i,t-1} is the mean of the i-th Gaussian distribution function at time t-1;
when equation (3) is satisfied, the pixel value matches one of the K Gaussian distribution functions, and the weight ω, mean μ and variance σ² of the matched Gaussian distribution function are updated according to the following formulas:
ω_{i,t} = (1 - α)ω_{i,t-1} + α
μ_{i,t} = (1 - ρ)μ_{i,t-1} + ρ X_t    (4)
σ²_{i,t} = (1 - ρ)σ²_{i,t-1} + ρ (X_t - μ_{i,t})^T (X_t - μ_{i,t})
wherein ω_{i,t} is the weight of the i-th Gaussian distribution function at time t; α is the set learning rate, 0 ≤ α ≤ 1, and the size of α determines the update speed of the background; ρ is the parameter learning rate, with ρ_{i,t} ≈ α/ω_{i,t};
if no Gaussian distribution function matches X_t, the Gaussian distribution function with the minimum weight is replaced by a new Gaussian distribution function, the means and variances of the other Gaussian distribution functions are kept unchanged, and the weights are attenuated according to the following formula:
ω_{i,t} = (1 - α)ω_{i,t-1}    (5)
after updating the weights of all Gaussian distribution functions, carrying out normalization treatment on the weights:
ω_{i,t} = ω_{i,t} / Σ_{j=1}^{K} ω_{j,t}    (6)
the K Gaussian distribution functions are arranged according to omegai,ti,tSorting, wherein pixels represented by a set number of Gaussian distribution functions are selected as background pixels in the sorting from big to small;
the pixel X_t is then matched against the selected set number of Gaussian distribution functions according to formula (3); if X_t matches one of them, the pixel point is a background pixel, otherwise it is a foreground pixel.
3. The obstacle detection method based on the information fusion of the millimeter wave radar and the camera according to claim 2, wherein the set number is determined by selecting, in the descending order of the sorting, the minimum number of Gaussian distribution functions whose summed weights exceed a set threshold, expressed as:
B = argmin_b ( Σ_{k=1}^{b} ω_{k,t} > T_w )    (7)
wherein ω_{k,t} is the weight of the k-th Gaussian distribution function at time t, and T_w is the weight threshold.
4. The obstacle detection method based on the information fusion of the millimeter wave radar and the camera as claimed in claim 1, wherein the method for calculating the target contact ratio in the target region of interest and moving target obstacle results is as follows:
c = S(R_coincide) / S(R_ROI) × 100%
wherein R_coincide is the part where the radar detection area overlaps the machine vision detection area, R_ROI is the region of interest, R_c is the machine vision detection area, and c is the contact ratio.
5. The obstacle detection method based on the information fusion of the millimeter wave radar and the camera as claimed in claim 1, wherein the method for implementing the coordinate system conversion of the two sensors by jointly calibrating the millimeter wave radar and the camera comprises:
establishing 5 coordinate systems which are respectively a millimeter wave radar coordinate system, a three-dimensional world coordinate system, a camera coordinate system, an image coordinate system and a pixel coordinate system;
establishing a conversion relation between a millimeter wave radar coordinate system and a world coordinate system, establishing a conversion relation between a camera coordinate system and the world coordinate system, establishing a coordinate conversion relation between the camera coordinate system and an image coordinate system, establishing a conversion relation between the image coordinate system and a pixel coordinate system, and establishing a conversion relation between the world coordinate system and the pixel coordinate system.
6. The obstacle detection method based on the information fusion of the millimeter wave radar and the camera as claimed in claim 1, wherein the method for generating the target region of interest is as follows:
x1 = x - (W/H)·h/2,  y1 = y - h/2
x2 = x + (W/H)·h/2,  y2 = y + h/2,  with  h = f·H / (L·cosθ)
wherein (x1, y1) and (x2, y2) are respectively the image coordinates of the upper-left and lower-right corners of the dynamic region of interest, (x, y) are the pixel coordinates of the vehicle centroid, W/H is the common vehicle aspect ratio, H is the height of the vehicle, h is the height of the region of interest, L is the relative distance between the target obstacle and the radar, θ is the relative angle between the obstacle target and the radar, and f is the focal length of the camera.
7. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 6.
CN202110371841.3A 2021-04-07 2021-04-07 Obstacle detection method based on information fusion of millimeter wave radar and camera Pending CN113156421A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110371841.3A CN113156421A (en) 2021-04-07 2021-04-07 Obstacle detection method based on information fusion of millimeter wave radar and camera

Publications (1)

Publication Number Publication Date
CN113156421A true CN113156421A (en) 2021-07-23

Family

ID=76888868

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110371841.3A Pending CN113156421A (en) 2021-04-07 2021-04-07 Obstacle detection method based on information fusion of millimeter wave radar and camera

Country Status (1)

Country Link
CN (1) CN113156421A (en)

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113570622A (en) * 2021-07-26 2021-10-29 北京全路通信信号研究设计院集团有限公司 Obstacle determination method and device, electronic equipment and storage medium
CN113682259A (en) * 2021-09-22 2021-11-23 海南大学 Vehicle door opening early warning anti-collision system and control method
CN113744348A (en) * 2021-08-31 2021-12-03 南京慧尔视智能科技有限公司 Parameter calibration method and device and radar vision fusion detection equipment
CN113838125A (en) * 2021-09-17 2021-12-24 中国第一汽车股份有限公司 Target position determining method and device, electronic equipment and storage medium
CN113899405A (en) * 2021-10-20 2022-01-07 中煤科工集团沈阳设计研究院有限公司 Integrated online slope intelligent monitoring and early warning system and early warning method
CN114170809A (en) * 2021-12-03 2022-03-11 智道网联科技(北京)有限公司 Overspeed detection method, device, system, electronic device and medium
CN114236528A (en) * 2022-02-23 2022-03-25 浙江高信技术股份有限公司 Target detection method and system based on millimeter wave radar and video fusion
CN114494806A (en) * 2021-12-17 2022-05-13 湖南国天电子科技有限公司 Target identification method, system, device and medium based on multivariate information fusion
CN114782923A (en) * 2022-05-07 2022-07-22 厦门瑞为信息技术有限公司 Vehicle blind area detection system
CN114814826A (en) * 2022-04-08 2022-07-29 苏州大学 Radar rail-mounted area environment sensing method based on target grid
CN115082924A (en) * 2022-04-26 2022-09-20 电子科技大学 Three-dimensional target detection method based on monocular vision and radar pseudo-image fusion
CN115169452A (en) * 2022-06-30 2022-10-11 北京中盛国芯科技有限公司 System and method for fusing target information based on space-time synchronization queue characteristics
CN115184917A (en) * 2022-09-13 2022-10-14 湖南华诺星空电子技术有限公司 Regional target tracking method integrating millimeter wave radar and camera
CN115204221A (en) * 2022-06-28 2022-10-18 深圳市华屹医疗科技有限公司 Method and device for detecting physiological parameters and storage medium
CN115223133A (en) * 2022-02-14 2022-10-21 广州汽车集团股份有限公司 Parking obstacle detection method, parking obstacle detection device, vehicle, and storage medium
CN115346368A (en) * 2022-07-30 2022-11-15 东南大学 Traffic roadside sensing system and method based on integration of far and near view multiple sensors
CN115436899A (en) * 2022-08-31 2022-12-06 中国第一汽车股份有限公司 Method, device, equipment and storage medium for processing millimeter wave radar detection data
CN116148801A (en) * 2023-04-18 2023-05-23 深圳市佰誉达科技有限公司 Millimeter wave radar-based target detection method and system
CN117022264A (en) * 2023-10-09 2023-11-10 广州市德赛西威智慧交通技术有限公司 Obstacle detection method and device based on radar fusion
CN117214966A (en) * 2023-08-01 2023-12-12 珠海微度芯创科技有限责任公司 Image mapping method, device, equipment and medium of millimeter wave security inspection imaging equipment

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005175603A (en) * 2003-12-08 2005-06-30 Suzuki Motor Corp Method and system for displaying obstacle using radar
CN104637059A (en) * 2015-02-09 2015-05-20 吉林大学 Night preceding vehicle detection method based on millimeter-wave radar and machine vision
CN106908783A (en) * 2017-02-23 2017-06-30 苏州大学 Obstacle detection method based on multi-sensor information fusion
CN107991671A (en) * 2017-11-23 2018-05-04 浙江东车智能科技有限公司 A kind of method based on radar data and vision signal fusion recognition risk object
CN109143241A (en) * 2018-07-26 2019-01-04 清华大学苏州汽车研究院(吴江) The fusion method and system of radar data and image data
WO2020103533A1 (en) * 2018-11-20 2020-05-28 中车株洲电力机车有限公司 Track and road obstacle detecting method
CN112418040A (en) * 2020-11-16 2021-02-26 南京邮电大学 Binocular vision-based method for detecting and identifying fire fighting passage occupied by barrier


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
孙青: "基于视觉与雷达信息融合的障碍物检测识别方法研究", 中国优秀博硕士学位论文全文数据库 *
杨欣; 刘加;费树岷;周大可: "一种自适应的基于混合高斯模型的运动目标检测算法", JOURNAL OF SOUTHEAST UNIVERSITY(ENGLISH EDITION) *
翟光耀;陈蓉;张剑锋;张继光;吴澄;汪一鸣;: "基于毫米波雷达和机器视觉信息融合的障碍物检测", 物联网学报 *

Cited By (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113570622A (en) * 2021-07-26 2021-10-29 北京全路通信信号研究设计院集团有限公司 Obstacle determination method and device, electronic equipment and storage medium
CN113744348A (en) * 2021-08-31 2021-12-03 南京慧尔视智能科技有限公司 Parameter calibration method and device and radar vision fusion detection equipment
CN113838125A (en) * 2021-09-17 2021-12-24 中国第一汽车股份有限公司 Target position determining method and device, electronic equipment and storage medium
CN113682259A (en) * 2021-09-22 2021-11-23 海南大学 Vehicle door opening early warning anti-collision system and control method
CN113899405A (en) * 2021-10-20 2022-01-07 中煤科工集团沈阳设计研究院有限公司 Integrated online slope intelligent monitoring and early warning system and early warning method
CN114170809A (en) * 2021-12-03 2022-03-11 智道网联科技(北京)有限公司 Overspeed detection method, device, system, electronic device and medium
CN114494806A (en) * 2021-12-17 2022-05-13 湖南国天电子科技有限公司 Target identification method, system, device and medium based on multivariate information fusion
CN115223133A (en) * 2022-02-14 2022-10-21 广州汽车集团股份有限公司 Parking obstacle detection method, parking obstacle detection device, vehicle, and storage medium
CN114236528A (en) * 2022-02-23 2022-03-25 浙江高信技术股份有限公司 Target detection method and system based on millimeter wave radar and video fusion
CN114814826B (en) * 2022-04-08 2023-06-16 苏州大学 Radar orbit area environment sensing method based on target grid
CN114814826A (en) * 2022-04-08 2022-07-29 苏州大学 Radar rail-mounted area environment sensing method based on target grid
CN115082924A (en) * 2022-04-26 2022-09-20 电子科技大学 Three-dimensional target detection method based on monocular vision and radar pseudo-image fusion
CN115082924B (en) * 2022-04-26 2024-03-29 电子科技大学 Three-dimensional target detection method based on monocular vision and radar pseudo-image fusion
CN114782923B (en) * 2022-05-07 2024-05-03 厦门瑞为信息技术有限公司 Detection system for dead zone of vehicle
CN114782923A (en) * 2022-05-07 2022-07-22 厦门瑞为信息技术有限公司 Vehicle blind area detection system
CN115204221A (en) * 2022-06-28 2022-10-18 深圳市华屹医疗科技有限公司 Method and device for detecting physiological parameters and storage medium
CN115169452A (en) * 2022-06-30 2022-10-11 北京中盛国芯科技有限公司 System and method for fusing target information based on space-time synchronization queue characteristics
CN115169452B (en) * 2022-06-30 2023-04-28 北京中盛国芯科技有限公司 Target information system and method based on space-time synchronous queue characteristic radar fusion
CN115346368B (en) * 2022-07-30 2024-01-05 东南大学 Traffic road side sensing system and method based on integrated fusion of far-view and near-view multiple sensors
CN115346368A (en) * 2022-07-30 2022-11-15 东南大学 Traffic roadside sensing system and method based on integration of far and near view multiple sensors
CN115436899A (en) * 2022-08-31 2022-12-06 中国第一汽车股份有限公司 Method, device, equipment and storage medium for processing millimeter wave radar detection data
CN115184917A (en) * 2022-09-13 2022-10-14 湖南华诺星空电子技术有限公司 Regional target tracking method integrating millimeter wave radar and camera
CN116148801A (en) * 2023-04-18 2023-05-23 深圳市佰誉达科技有限公司 Millimeter wave radar-based target detection method and system
CN117214966A (en) * 2023-08-01 2023-12-12 珠海微度芯创科技有限责任公司 Image mapping method, device, equipment and medium of millimeter wave security inspection imaging equipment
CN117214966B (en) * 2023-08-01 2024-04-05 珠海微度芯创科技有限责任公司 Image mapping method, device, equipment and medium of millimeter wave security inspection imaging equipment
CN117022264B (en) * 2023-10-09 2024-02-02 广州市德赛西威智慧交通技术有限公司 Obstacle detection method and device based on radar fusion
CN117022264A (en) * 2023-10-09 2023-11-10 广州市德赛西威智慧交通技术有限公司 Obstacle detection method and device based on radar fusion

Similar Documents

Publication Publication Date Title
CN113156421A (en) Obstacle detection method based on information fusion of millimeter wave radar and camera
CN111369541B (en) Vehicle detection method for intelligent automobile under severe weather condition
Teoh et al. Symmetry-based monocular vehicle detection system
US9846946B2 (en) Objection recognition in a 3D scene
JP5822255B2 (en) Object identification device and program
US8744122B2 (en) System and method for object detection from a moving platform
CN111260683A (en) Target detection and tracking method and device for three-dimensional point cloud data
CN114022830A (en) Target determination method and target determination device
CN115049700A (en) Target detection method and device
CN112825192B (en) Object identification system and method based on machine learning
CN108645375B (en) Rapid vehicle distance measurement optimization method for vehicle-mounted binocular system
CN110674705A (en) Small-sized obstacle detection method and device based on multi-line laser radar
Li et al. Road markings extraction based on threshold segmentation
CN114118252A (en) Vehicle detection method and detection device based on sensor multivariate information fusion
CN114399675A (en) Target detection method and device based on machine vision and laser radar fusion
Qing et al. A novel particle filter implementation for a multiple-vehicle detection and tracking system using tail light segmentation
CN115327572A (en) Method for detecting obstacle in front of vehicle
Dai et al. A vehicle detection method via symmetry in multi-scale windows
CN112733678A (en) Ranging method, ranging device, computer equipment and storage medium
JP2005156199A (en) Vehicle detection method and vehicle detector
CN114792416A (en) Target detection method and device
Giosan et al. Superpixel-based obstacle segmentation from dense stereo urban traffic scenarios using intensity, depth and optical flow information
CN116978009A (en) Dynamic object filtering method based on 4D millimeter wave radar
Michael et al. Fast change detection for camera-based surveillance systems
Liu et al. Obstacle recognition for ADAS using stereovision and snake models

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210723