CN116794650A - Millimeter wave radar and camera data fusion target detection method and device

Info

Publication number: CN116794650A
Application number: CN202310332750.8A
Authority: CN (China)
Prior art keywords: millimeter wave radar, target, data, camera
Legal status: Pending
Other languages: Chinese (zh)
Inventors: 华炜, 苏志毅, 明彬彬, 冯权
Current Assignee: Zhejiang Lab
Original Assignee: Zhejiang Lab
Application filed by Zhejiang Lab; priority to CN202310332750.8A

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00 Systems using the reflection or reradiation of radio waves, e.g. radar systems; analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/86 Combinations of radar systems with non-radar systems, e.g. sonar, direction finder
    • G01S13/867 Combination of radar systems with cameras
    • G01S13/88 Radar or analogous systems specially adapted for specific applications
    • G01S13/93 Radar or analogous systems specially adapted for anti-collision purposes
    • G01S13/931 Radar or analogous systems specially adapted for anti-collision purposes of land vehicles


Abstract

A method and a device for target detection by fusing millimeter wave radar and camera data are provided. The method comprises the following steps: mounting the millimeter wave radar and the camera at fixed relative positions and collecting data from both sensors; aligning the millimeter wave radar and camera data by acquisition time; performing target detection on the image data acquired by the camera to obtain the pixel region and category of each target; projecting the millimeter wave radar data points to pixel positions in the image plane and establishing a correspondence between targets and millimeter wave radar data points; filtering and clustering the millimeter wave radar data points using the image-based target category information; determining the position and velocity of each target from the position and velocity information of its millimeter wave radar data points; and finally outputting the complete three-dimensional detection information of the target. The invention applies image-based target detection information and millimeter wave radar-based target detection information so that each assists the other's recognition.

Description

Millimeter wave radar and camera data fusion target detection method and device
Technical Field
The invention relates to the field of multi-sensor fusion perception, and in particular to a method, a device and equipment for fusing millimeter wave radar and camera data.
Background
Camera sensor technology is mature and inexpensive; in particular compared with the lidars popular in the current market, a vision scheme built mainly around cameras is the first choice for mass-produced autonomous vehicles. Moreover, the images collected by a camera contain rich color, texture, contour and brightness information that sensors such as lidar and millimeter wave radar cannot match; for example, traffic light monitoring and traffic sign recognition can only be realized with a camera. The defects are equally obvious: the camera is a passive sensor that is very sensitive to illumination changes, and its imaging quality degrades greatly under rain, fog, night and similar conditions, making it difficult for a perception algorithm to detect and identify objects. In addition, as a passive sensor, the camera is inferior to lidar and millimeter wave radar in ranging and velocity measurement. Compared with optical sensors such as infrared sensors, laser sensors and cameras, millimeter wave radar has a strong ability to penetrate fog, smoke and dust, can work around the clock in all weather, and can provide high-precision range, velocity and angle measurements, but its defect is just as obvious: its data points are sparse, so it can hardly provide semantic information such as the texture, contour and color of a target.
Millimeter wave radar and cameras are the two mainstream sensors and both are low-cost, but each is lacking in functionality, accuracy or reliability when used alone. Their performance is strongly complementary; however, existing millimeter wave radar and camera fusion algorithms generally stop at "target-level" or "decision-level" fusion and cannot make good use of the complementary characteristics of the two data sources, so the information is underused and the accuracy and stability fall short of practical demands.
Disclosure of Invention
The invention aims to overcome the above defects in the prior art, realize the complementary advantages of millimeter wave radar and camera data, and provide a target detection method based on the fusion of millimeter wave radar and camera data.
The invention adopts the following technical scheme:
a method for detecting a target by fusing millimeter wave radar and camera data comprises the following steps:
s1, mounting a millimeter wave radar and a camera at fixed relative positions, and calibrating the relative position relationship between the millimeter wave radar and the camera in order to effectively implement data fusion, wherein the fields of view of the millimeter wave radar and the camera are coincident as much as possible;
s2, acquiring millimeter wave Lei Dadian cloud data in real time, wherein each frame of millimeter wave Lei Dadian cloud data is a set of time and data pointsWherein t is r For the time when the previous frame of millimeter wave radar data, < >>The number of radar data points in the current frame is N. Each data point contains position, velocity and radar cross-sectional area (RCS) information, typically the position and velocity information in turn contains longitudinal and transverse components, described belowWherein (1)>Represents the lateral position +.>Indicates the longitudinal position +.>Represents lateral speed, +.>Representing the longitudinal speed;
s3, acquiring camera data in real time, wherein each frame of camera data is time and a two-dimensional image,(t c M), wherein t c For the time of the previous frame of the camera data, M is a two-dimensional matrix representing the image, the length and width of M are the resolution of the image, and each element M of M x,y Is one pixel in the image;
S4, matching the millimeter wave radar data with the camera data by time. In general, the sampling frequency of the camera is higher than that of the millimeter wave radar; for each frame of image data, the frame of millimeter wave radar data whose timestamp t_r is nearest to t_c is found and used as its paired data;
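A minimal sketch of the pairing rule in S4, reusing the hypothetical frame types above: each camera frame is paired with the radar frame whose timestamp is nearest to it.

```python
def pair_frames(camera_frames, radar_frames):
    """Pair each camera frame with the radar frame whose t_r is nearest to t_c (S4)."""
    return [(min(radar_frames, key=lambda r: abs(r.t_r - cam.t_c)), cam)
            for cam in camera_frames]
```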
s5, millimeter wave radar data and camera data paired for each frameThe following steps are performed:
s5.1, extracting the characteristics of the picture to obtain a characteristic tensor, wherein the method specifically comprises the following steps: accessing each characteristic tensor into a target detection head, and outputting a two-dimensional bounding box of each target and class information of the target, wherein the total number of classes which can be detected by the target detection head is C;
s5.2, adjusting the range of the ROI (Region Of Interest ) according to a specific scene, filtering the ROI, and deleting the millimeter wave radar data points beyond the ROI; correlating the results of target identification and classification in S5.1, setting a corresponding RCS (Radar Cross Section, radar cross-sectional area) range according to the result of each target classification, performing RCS filtering on millimeter wave radar data points, and deleting millimeter wave radar data points exceeding the RCS normal range; meanwhile, the millimeter wave radar data at the previous moment are correlated, and the steps of matching, kalman filtering, trust value updating and the like are carried out on the millimeter wave radar data points at the moment; finally filtering trust values of the millimeter wave radar data points, and deleting the millimeter wave radar data points lower than a trust value threshold;
s5.3, acquiring a camera internal reference matrix and a distortion coefficient thereof, calculating the coordinates of each point of the millimeter wave radar under a camera coordinate system through the relative positions of the millimeter wave radar and the camera, and then passing through the camera internal reference matrix and the distortion coefficient thereofCount the data points of each millimeter wave radarPixel locations (x i ,y i ) Then, screening millimeter wave radar data by applying the result of the pixel area range of each target identified and classified by the targets in S5.1, deleting millimeter wave radar data points exceeding the pixel area range, directly outputting the result if 1 or no millimeter wave radar data point exists in the pixel area range of a certain target, and presetting the clustering radius of each target class by the result of the target classification in S5.1 and the common size of the target class if 2 or more millimeter wave radar data points exist in the pixel area range of the certain target, so as to cluster the millimeter wave radar data points in the pixel area range of the target; and constructing three-dimensional models of different types of targets, so as to obtain the size information of the different targets. Combining the feature tensor in the two-dimensional bounding box with the millimeter wave radar data point cloud obtained by matching in the S5.2 to obtain fusion features, wherein the method specifically comprises the following steps: acquiring the category of the target, and obtaining a three-dimensional model under the category; constructing a regression model to obtain size information of the target, and finally outputting a three-dimensional detection result of the target, wherein the three-dimensional detection result comprises a category, a three-dimensional coordinate and a three-dimensional model by combining the matched millimeter wave radar data point information;
s6, calculating the position, speed and the like of the target through the clustered point cloud distribution and the corresponding information, and if no millimeter wave radar data point exists in the pixel area range of a certain target, determining that the image-based recognition algorithm is misjudged; if 1 millimeter wave radar data point exists in the pixel area range of a certain target, the speed of the data point is used for assigning a value to the speed of the target; if the pixel area range of a certain target corresponds to more than one millimeter wave radar data point, using the average positions of the data points to assign a value to the position of the target, selecting the average speed or the maximum speed of the data points to assign a value to the speed of the target according to different decision requirements, and finally combining the three-dimensional model generated in the step S5.3 to finish the fusion of the millimeter wave radar data and the camera data.
The invention also relates to a target detection device for fusing millimeter wave radar and camera data, which comprises one or more processors, a millimeter wave radar, a camera and signal connection lines; the millimeter wave radar data and camera data are transmitted to the processors through their corresponding signal connection lines, and the above method for fusing millimeter wave radar and camera data is implemented in the processors.
The present invention also relates to a computer-readable storage medium having stored thereon a program which, when executed by a processor, implements the object detection method of data fusion of a millimeter wave radar and a camera as set forth in any one of claims 1 to 5.
The invention also relates to a computing device comprising a memory and a processor, wherein the memory has stored therein executable code which, when executed by the processor, implements the method of any of claims 1-5.
According to the method for fusing millimeter wave radar and camera data, the millimeter wave radar and the camera are mounted at fixed relative positions while data from both are collected. The millimeter wave radar and camera data are aligned by acquisition time; target detection is performed on the image data acquired by the camera to obtain each target's pixel region and category; the millimeter wave radar data points are projected to pixel positions in the image plane according to the relative position relationship between the millimeter wave radar and the camera and the camera intrinsics, and a correspondence between targets and millimeter wave radar data points is established from the relationship between each pixel and the targets' pixel regions; the millimeter wave radar data points are filtered and clustered using the image-based target category information; the target's position and velocity are determined from the position and velocity information of the millimeter wave radar data points; and the complete three-dimensional detection information of the target is finally output, including category, position, velocity and three-dimensional model.
The invention has the following advantages: image-based target detection information and millimeter wave radar-based target detection information assist each other's recognition, which raises the dimensionality and accuracy of target perception, achieves better performance in recognizing both high-speed and low-speed targets, and provides more reliable perception information.
Drawings
FIG. 1 is a step diagram of a millimeter wave radar data and image fusion method in one embodiment of the invention;
FIG. 2 is a mounting location of a millimeter wave radar and camera in one embodiment of the invention;
FIG. 3 is a view, in the BEV (Bird's Eye View) plane, of one frame of data acquired by the millimeter wave radar in one embodiment of the invention, each point representing one millimeter wave radar data point;
FIG. 4 is a frame of data collected by a camera in one embodiment of the invention;
FIG. 5 is a diagram of the preliminary fusion effect of the millimeter wave radar and camera data fusion method in one embodiment of the invention;
FIG. 6 is a diagram of the final fusion effect of the millimeter wave radar and camera data fusion method in one embodiment of the invention.
Detailed Description
The objects and effects of the present invention will become more apparent from the following detailed description of the preferred embodiments and the accompanying drawings. It should be understood that the specific embodiments described herein are merely illustrative of the invention and do not limit it.
Example 1
In one embodiment, as shown in fig. 1, a method for fusing millimeter wave radar and camera data is provided. A deep learning model for target recognition and classification is trained in advance; then, as labeled in fig. 2, a camera and a millimeter wave radar are installed at position 12 of the unmanned vehicle in the picture; data from the millimeter wave radar and the camera are collected in real time; the timestamps of the millimeter wave radar and camera data are matched; the millimeter wave radar data are screened and filtered; target recognition and classification are performed on the camera data; and finally the fusion of the millimeter wave radar and camera data is completed by projection.
The method specifically comprises the following steps:
step 1, training a deep learning model by using 50000 pictures with target types and target pixel rectangular ranges marked in advance, in this embodiment, marking the types of the targets as six common traffic participants such as pedestrians, non-motor vehicles, automobiles, trucks, centroids and others in advance, marking the pixel area ranges of the targets by using a rectangular frame, and obtaining the deep learning model through training:
(a) Extracting features from the example pictures to obtain feature tensors Image_F, each of shape C_I x H_I x W_I, where C_I, H_I and W_I are respectively the preset number of channels, height and width of the feature tensor;
(b) Feeding each feature tensor into a target detection head, which outputs a two-dimensional bounding box for each target together with the target's category information, where C is the total number of categories the detection head can detect;
step 2, selecting the types of millimeter wave radars and cameras, wherein the working frequency band of the millimeter wave radars selected in the embodiment is 79GHz-81 GHz, the ranging range is 1.5 m-100 m, and a horizontal viewing angle of-45 degrees and a pitching viewing angle of-12 degrees are provided; the camera model selected in this embodiment is H100, with pixels of 1280 x 720, providing a horizontal viewing angle of-67.5 degrees to 67.5 degrees and a pitch viewing angle of-10 degrees to 10 degrees.
The millimeter wave radar and camera are mounted at position 12 on the unmanned vehicle with the same orientation, and the positions of the millimeter wave radar and the camera relative to the vehicle's central axis are each calibrated, yielding the relative position relationship between the millimeter wave radar and the camera;
step 3, the millimeter wave radar and the camera are started at the same time, and data of the millimeter wave radar and the camera are obtained, as shown in fig. 3 and fig. 4, fig. 3 is a view of one frame of data collected by the millimeter wave radar under BEV in one embodiment, and fig. 4 is one frame of data collected by the camera in one embodiment. Collecting real scene as intersection near dining hall of new park of certain enterprise, collecting time from 10 am to 14 pm, collecting object as traffic facilities such as staff of certain enterprise and vehicles in park, and collecting data of point cloud using binary systemEncoding. Millimeter wave Lei Dadian cloud data per frame is a collection of time and data pointsWherein t is r For the time of the current frame, +.>The number of radar data points in the current frame is N. Each data point contains position, velocity and radar cross sectional area (RCS) information, typically the position and velocity information in turn contains longitudinal and transverse components, described below +.>Wherein (1)>Represents the lateral position +.>Indicates the longitudinal position +.>Represents lateral speed, +.>Representing longitudinal speed, acquiring camera data in real time, wherein each frame of camera data is time and a two-dimensional image, (t) c M), wherein t c For the time of the current frame, M is a two-dimensional matrix representing the image, the length and width of M are the resolution of the image, and each element M of M x,y Is one pixel in the image;
step 4, reading and comparing the real-time frame rate of the millimeter wave radar and the camera in the industrial personal computer, determining the data used as the reference in real time according to the frame numbers of the millimeter wave Lei Dadian cloud data and the camera image data, selecting a sensor with lower frame rate in the millimeter wave radar and the camera as the reference data for fusing the millimeter wave radar and the camera data, in the embodiment, taking the camera data as the matched reference data if the frame rate of the camera is lower, and taking the time stamp interval of the millimeter wave radar and the camera data as the millimeter time stamp interval of the millimeter wave radar and the camera dataPerforming association matching on the wave radar data and the image data, setting a threshold value of the maximum time of the difference between the millimeter wave radar data and the current frame, and determining that the frame is not matched with the data if the difference is higher than the threshold value, and finding a difference between the difference and t for each frame of the image data c Nearest t r Using the frame millimeter wave radar data as data paired therewith;
step 5, for each paired pair of millimeter wave radar data and camera dataThe following steps are performed:
(a) Inputting the picture into the pre-prepared deep learning model, which computes the category of each target in the picture and the corresponding pixel region of each target on the picture. The output of the deep learning model is temporarily stored as text named by its timestamp; each line of the text contains one target's category and the rectangle vertex coordinates describing its pixel region;
(b) Adjusting the ROI range to the specific scene: the current test scene is an internal enterprise road, so from the road width and vehicle speed the ROI is set to a longitudinal distance of 0 m to 50 m and a lateral distance of -8 m to 8 m, where the minus sign in -8 m indicates a target to the left of the area straight ahead of the millimeter wave radar. ROI filtering is performed on the millimeter wave radar data, deleting the millimeter wave radar data points outside the ROI. The target recognition and classification results of step (a) are then associated: a corresponding RCS range is set from each target's classification, RCS filtering is performed on the millimeter wave radar data, and points whose RCS falls outside the normal range are deleted. Meanwhile, the millimeter wave radar data of the previous time step are associated, and matching, Kalman filtering and confidence-value updating are applied to the current millimeter wave radar data points (a sketch of the confidence update follows below). Finally, the millimeter wave radar data are filtered by confidence value: in this embodiment the maximum target confidence is 5 and the minimum is 0; points whose confidence would exceed the maximum are held at 5, and points below the minimum confidence threshold are deleted. The processed millimeter wave radar data points are written back to a text file in the same format as before filtering;
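A minimal sketch of the confidence-value update in step 5(b), with the bounds 0 and 5 from this embodiment; the increment and decrement step of 1 is an assumption, since the patent does not specify the update rule.

```python
def update_confidence(conf, matched, max_conf=5.0, min_conf=0.0, step=1.0):
    """Raise a radar point's confidence when it matches a point from the
    previous frame, lower it otherwise, clamping to [min_conf, max_conf].
    Points at or below min_conf are then deleted by the confidence filter."""
    conf = conf + step if matched else conf - step
    return max(min_conf, min(max_conf, conf))
```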
(c) Acquiring the camera intrinsics and distortion coefficients through checkerboard calibration, reading the filtered millimeter wave radar data, computing the coordinates of each millimeter wave radar data point in the camera coordinate system from the relative positions of the millimeter wave radar and the camera, and projecting the filtered millimeter wave radar data into image space through the camera intrinsics and distortion coefficients. The pixel region of each target identified and classified in step (a) is used to screen the millimeter wave radar data, deleting the data that fall outside the pixel regions. If a target's pixel region contains one or zero millimeter wave radar data points, the result is output directly; otherwise the clustering radius is determined from the object classification of step (a): a traffic cone's clustering radius is 0.3 m, a pedestrian's 0.5 m, a non-motor vehicle's 1 m, a vehicle's 10 m, and so on, and the millimeter wave radar data points within the target's pixel region are clustered accordingly (see the sketch below);
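The per-class radii of this embodiment, with a simple single-linkage radius clustering as a stand-in for the clustering algorithm, which the patent leaves unspecified; the function reuses the hypothetical RadarPoint fields introduced earlier.

```python
# Per-class clustering radii from this embodiment (meters).
CLUSTER_RADIUS = {"traffic_cone": 0.3, "pedestrian": 0.5,
                  "non_motor_vehicle": 1.0, "vehicle": 10.0}

def radius_cluster(points, radius):
    """Greedy single-linkage clustering: a point joins a cluster if it lies
    within `radius` of any member. A stand-in for the unspecified step."""
    clusters, unassigned = [], list(points)
    while unassigned:
        seed = unassigned.pop()
        cluster, frontier = [seed], [seed]
        while frontier:
            q = frontier.pop()
            near = [p for p in unassigned
                    if (p.x - q.x) ** 2 + (p.y - q.y) ** 2 <= radius ** 2]
            for p in near:
                unassigned.remove(p)
            cluster.extend(near)
            frontier.extend(near)
        clusters.append(cluster)
    return clusters
```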
step 6, generating a cube bounding box through the clustering result and the target identification result:
(a) Constructing three-dimensional models {Models[c][p] | c >= 0, 0 <= p < P} of the different target categories, where c is the category index, p indexes the model parameters and P is the number of model parameters, thereby obtaining the size information of the different targets.
(b) Fusing the feature tensor inside each two-dimensional bounding box with the point cloud matched to it in step 5 to obtain the fusion feature Fusion_F[k], where k is the index of the target; obtaining the target's category; and retrieving the three-dimensional model Models[c1][p] of that category, where c1 is the category index of the target's class in Models[c][p].
(c) Constructing a regression model R whose inputs are Fusion_F[k1] and Models[c1][p] and whose outputs are adjustment factors for all model parameters in Models[c1][p]; combining these factors with the original three-dimensional model Models[c1][p] yields the target's size information Dimension[k1], where k1 indexes the different targets. Combining the point cloud information, the final output is the three-dimensional detection result {Prediction[k] | k >= 0}, containing the categories, three-dimensional coordinates and three-dimensional models of the different targets.
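Step 6(c) fixes only the interface of the regression model (fusion feature and base model in, per-parameter adjustment factors out); how the factors combine with the base model is not specified, so the multiplicative form below is an assumption.

```python
def apply_adjustment(base_model, factors):
    """Combine the regression model's adjustment factors with the original
    three-dimensional model Models[c1][p] to get Dimension[k1].
    Multiplicative combination is an assumed form."""
    return [param * f for param, f in zip(base_model, factors)]

# Hypothetical usage: a base car model (length, width, height in meters)
# rescaled by factors regressed from Fusion_F[k1].
dimension_k1 = apply_adjustment([4.5, 1.8, 1.5], [1.05, 0.97, 1.00])
```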
Step 7, judging the position, velocity and other attributes of each target from the clustered millimeter wave radar point distribution. In this embodiment, if a target's pixel region contains no millimeter wave radar data point, it is judged to be an error of the image-based target recognition model; if it contains exactly one millimeter wave radar data point, the velocity of that point is assigned to the target's velocity; if it corresponds to a set of millimeter wave radar data points, the mean position of all points in the set is assigned to the target's position, either the mean velocity or the maximum velocity of the points in the set is assigned to the target's velocity according to the decision requirements, and the target's position is further fitted from the distribution of the data points. Combining the target recognition and classification results of step 5 completes the fusion of the millimeter wave radar and camera data;
Step 8, visualizing the fused data, displaying in the picture the pixel region of each target together with the target's category, its distance from the millimeter wave radar, and its radial velocity relative to the millimeter wave radar and camera.
In one embodiment, the effect of the millimeter wave radar and camera data fusion is shown in fig. 5: the deep learning model recognizes and classifies the targets while the millimeter wave radar data points are projected onto the targets, and finally the fusion result is obtained.
Example 2
Corresponding to the foregoing embodiment, the invention also provides an embodiment of a device for fusing millimeter wave radar and camera data. The device comprises a millimeter wave radar, a camera and signal connection lines; the millimeter wave radar data and camera data are transmitted to a processor through their corresponding signal connection lines, and the method for fusing millimeter wave radar and camera data described in embodiment 1 is implemented in the processor.
The embodiment of millimeter wave radar and camera data fusion can be applied to any device with image acquisition, millimeter wave data acquisition and data transmission capability; such a device may comprise components such as a CPU, an MCU, memory and a hard disk. In this embodiment, real-time processing with the combined device is used only to illustrate the technical scheme of the invention and is not limiting: the millimeter wave radar data and camera data required for the fusion may be acquired in real time, or may be acquired in advance and then processed offline locally.
For the implementation of the functions and roles of each unit in the above device, see the implementation of the corresponding steps in the above method; details are not repeated here.
Since the device embodiments essentially correspond to the method embodiments, reference is made to the description of the method embodiments for the relevant points. The device embodiments described above are merely illustrative: units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purposes of the invention. Those of ordinary skill in the art can understand and implement the invention without undue burden.
Example 3
The present embodiment relates to a computer-readable storage medium having stored thereon a program which, when executed by a processor, implements the object detection method of data fusion of a millimeter wave radar and a camera as set forth in any one of claims 1 to 5.
Example 4
The present embodiment relates to a computing device comprising a memory and a processor, wherein the memory has executable code stored therein, which when executed by the processor, implements the method of any of claims 1-5.
The above embodiments are only for illustrating the technical solution of the present invention and are not limiting. Although the invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical schemes described in the foregoing embodiments can still be modified, or some or all of their technical features can be replaced with equivalents; such modifications and substitutions do not depart from the spirit of the technical solutions of the embodiments of the present invention.

Claims (8)

1. A target detection method for fusing millimeter wave radar and camera data, characterized by comprising the following steps:
s1, mounting a millimeter wave radar and a camera at fixed relative positions, wherein the fields of View (FOV) of the millimeter wave radar and the camera are overlapped as much as possible for effectively implementing data fusion;
s2, acquiring millimeter wave Lei Dadian cloud data in real time, wherein each frame of millimeter wave Lei Dadian cloud data is a set of time and data pointsWherein t is r For the time when the previous frame of millimeter wave radar data, < >>N is the number of radar data points in the current frame; each data point contains position, velocity and radar cross-sectional area (RCS) information, typically the position and velocity information in turn contains longitudinal and transverse components, described belowWherein (1)>Represents the lateral position +.>Indicates the longitudinal position +.>Represents lateral speed, +.>Representing the longitudinal speed;
s3, acquiring camera data in real time, wherein each frame of camera data is time and a two-dimensional image, (t) c M), wherein t c For the time of the previous frame of the camera data, M is a two-dimensional matrix representing the image, the length and width of M are the resolution of the image, and each element M of M x,y Is one pixel in the image;
s4, matching millimeter wave radar data with camera data according to time; in general, the sampling frequency of the camera is higher than that of the millimeter wave radar, and for each frame of image data, the sampling frequency is found out to be equal to t c Nearest t r Using the frame millimeter wave radar data as data paired therewith;
s5, millimeter wave radar data and camera data paired for each pairThe following steps are performed:
s5.1, performing target detection on the image by using an algorithm based on computer vision, and outputting the pixel area range of each target and the category information of the target;
s5.2, according to the relative positions of the millimeter wave radar and the camera and the internal and external parameter information of the camera, the data point of each millimeter wave radar is obtainedPixel locations (x i ,y i );
S5.3, if the pixel location (u_i, v_i) corresponding to a millimeter wave radar data point P_i lies within the pixel region of a target, that data point is considered to be generated by that target, so that each target identified in each image corresponds to several millimeter wave radar data points;
s6, filtering millimeter wave radar data points corresponding to the targets according to the category information of the targets;
S7, assigning the target's position and velocity from the filtered millimeter wave radar data points, and outputting the complete information of the target in combination with the target's category.
2. The target detection method for fusing millimeter wave radar and camera data according to claim 1, wherein in step S5.1 the targets are detected and classified: each recognized target is assigned to a category such as pedestrian, bicycle, car or truck, and the pixel region occupied by the target is marked on the image.
3. The method for detecting a target by fusing millimeter wave radar and camera data according to claim 1, wherein the filtering of the corresponding millimeter wave radar data points according to the class information of the target in step S6 comprises the following sub-steps:
s6.1, presetting different radius information r according to the category information of the target j J=1, 2, …, C being the total number of categories that can be identified by the two-dimensional image-based target detection algorithm;
s6.2, if the millimeter wave radar data point number corresponding to a certain target is 0, not filtering the data point of the target; if the millimeter wave radar data point corresponding to a certain target is 1, the data point of the target is not filtered; if the millimeter wave radar data point number corresponding to a certain target is 2 and the target class is j,then a determination is made: if the distance between two data points is less than or equal to r j Two data points are reserved; if the distance between two data points is greater than r j Only the data points close to the millimeter wave radar and the camera are reserved, and the other data points are deleted; if the millimeter wave radar data point number corresponding to a certain target point is more than 2 and the target class is j, r is used j Clustering all data points for the radius, and judging: if a point set is obtained through clustering, all data points in the point set are reserved; if a plurality of point sets are obtained through clustering, only the data points in the point set which is closer to the millimeter wave radar and the camera are reserved, and the rest data points are deleted.
4. The target detection method for fusing millimeter wave radar and camera data according to claim 1, wherein in step S7 the target's position and velocity are assigned from the filtered millimeter wave radar data points: if the target corresponds to only one millimeter wave radar data point, the position of that data point is assigned to the target's position and its velocity to the target's velocity; if the target corresponds to a point set, the mean position of all data points in the set is assigned to the target's position, and either the mean velocity or the maximum velocity of all data points in the set can be selected for the target's velocity according to the decision requirements.
5. The target detection method for fusing millimeter wave radar and camera data according to claim 1, wherein outputting the complete information of the target in combination with the target's category in step S7 comprises the following sub-steps:
s7.1, presetting different three-dimensional models according to category information of the target, wherein the three-dimensional models are defined by length, width and height, (l) j ,w j ,h j ) J=1, 2, …, C being the total number of categories that can be identified by the two-dimensional image-based target detection algorithm;
s7.2, outputting complete information of the target, wherein the complete information comprises the type, the position, the speed and the three-dimensional model of the target.
6. A target detection device for fusing millimeter wave radar and camera data, characterized by comprising one or more processors, a millimeter wave radar, a camera and signal connection lines, wherein the millimeter wave radar data and camera data are transmitted to the processors through their corresponding signal connection lines, and the millimeter wave radar and camera data fusion method of any one of claims 1-5 is implemented in the processors.
7. A computer-readable storage medium, having stored thereon a program which, when executed by a processor, implements the object detection method of data fusion of a millimeter wave radar and a camera as claimed in any one of claims 1 to 5.
8. A computing device comprising a memory and a processor, wherein the memory has executable code stored therein, which when executed by the processor, implements the method of any of claims 1-5.
CN202310332750.8A 2023-03-27 2023-03-27 Millimeter wave radar and camera data fusion target detection method and device Pending CN116794650A (en)

Priority Applications (1)

Application Number: CN202310332750.8A; Priority Date: 2023-03-27; Filing Date: 2023-03-27; Title: Millimeter wave radar and camera data fusion target detection method and device

Publications (1)

Publication Number: CN116794650A; Publication Date: 2023-09-22

Family ID: 88035340

Cited By (2)

* Cited by examiner, † Cited by third party

CN117093872A *: priority date 2023-10-19, published 2023-11-21, 四川数字交通科技股份有限公司, Self-training method and system for radar target classification model
CN117093872B *: priority date 2023-10-19, published 2024-01-02, 四川数字交通科技股份有限公司, Self-training method and system for radar target classification model


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination