CN116363623A - Vehicle detection method based on millimeter wave radar and vision fusion - Google Patents
- Publication number
- CN116363623A (publication number; application CN202310042195.5A)
- Authority
- CN
- China
- Prior art keywords
- millimeter wave
- radar
- coordinate system
- data
- camera
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/58—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/80—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
- G06V10/806—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/60—Type of objects
- G06V20/64—Three-dimensional objects
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/01—Detecting movement of traffic to be counted or controlled
- G08G1/017—Detecting movement of traffic to be counted or controlled identifying vehicles
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/07—Target detection
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Abstract
The invention discloses a vehicle detection method based on millimeter wave radar and vision fusion, which relates to the field of vehicle identification and comprises the following steps: synchronous data acquisition of traffic scene targets using millimeter wave radar and a vision sensor; extracting and marking vehicles and pedestrians in the video acquired by the vision sensor; calculating the conversion relation from the radar projection coordinate system to the pixel coordinate system; testing the fusion precision; filtering the millimeter wave radar point cloud; detecting vehicles in the image data, mixing daytime urban road data captured in clear light with nighttime urban road data, fusing the millimeter wave radar data with the visual data, and training to obtain a real-time traffic-scene vehicle and pedestrian detection model. The millimeter wave radar assists detection in the visual image, better overcoming interference from weather, small targets, and the like.
Description
Technical Field
The invention relates to the field of vehicle identification, in particular to a vehicle detection method based on millimeter wave radar and vision fusion.
Background
In the field of vehicle identification, the mainstream detection methods are still vision-based, and traditional target detection has transitioned to target detection based on deep learning. Traditional detection methods can accomplish the target detection task to a certain extent, but their feature extraction process is too cumbersome, so detection is very slow. Deep-learning target detection is mainly divided into one-stage and two-stage models; one-stage models, starting from YOLO, combine the classification and regression problems and greatly improve detection speed.
Millimeter wave radars are widely applied in radar, communications, precision guidance, remote sensing, radio astronomy, medicine, biology, and other fields. In radar applications they are mainly used for search and target interception, guidance, fire control and tracking, testing, collision avoidance, mapping, imaging, and outer-space applications.
Target detection technology fusing vision and millimeter wave radar can effectively improve the detection rate of vehicles or pedestrians over a single sensor. The raw data of the camera and the millimeter wave radar are preprocessed separately, and the ranging advantage of the millimeter wave radar is exploited to obtain target distance and boundary information through data-layer fusion, feature-layer fusion, decision-layer fusion, or multi-level fusion, thereby improving detection accuracy.
However, the technical problems existing at present are as follows:
Purely visual object detection struggles with complex scenes such as overlapping objects, dense pedestrian traffic, and rainy or foggy weather. Target marking is a difficulty in low-resolution radar applications, and using only a radar sensor as the data source has certain shortcomings in reliability.
The fusion of vision and millimeter wave radar still has some shortcomings. Due to the data characteristics of millimeter wave radar, its overall performance on target classification is poor, its data are difficult to analyze and process, and missed detections occur easily at long range. Data matching is also a major difficulty in fusion, for example the problem of coordinate unification: radar data are generally bird's-eye-view (BEV) data, and converting image data into BEV data distorts the image and hinders feature extraction, so at present radar data are instead converted into a point cloud and projected onto image coordinates. Because the two kinds of data differ enormously, different network structures are used to keep their statistical distributions as consistent as possible, so that fusion can improve performance. Temporal consistency must also be considered: because the sensors sample at different frequencies, a certain time difference exists between their data, which poses a great challenge to fusion.
Disclosure of Invention
The technical problem to be solved by the invention is to provide a vehicle detection method based on millimeter wave radar and vision fusion. Based on a data fusion model of the millimeter wave radar and a camera, a relative coordinate conversion relation is established, realizing fusion of radar data and camera data in the spatial and temporal dimensions. According to the target recognition result of the image, the radar data are preprocessed and then filtered; with the image data as primary and the millimeter wave data as auxiliary, a deep-learning detection algorithm realizes target recognition of vehicles and pedestrians, improving the accuracy and efficiency of vehicle detection.
In order to achieve the above purpose, the invention adopts the following technical scheme:
the vehicle detection method based on millimeter wave radar and vision fusion comprises the following steps:
step one: synchronous data acquisition is carried out on traffic scene targets by adopting millimeter wave radars and vision sensors;
step two: extracting vehicles and pedestrians from videos acquired by the vision sensor for marking, and acquiring calibration parameters to obtain the focal length and principal point of the camera;
step three: calculating to obtain a conversion relation from a radar projection coordinate system to a pixel coordinate system;
step four: testing the fusion precision, comparing the precision by using a corner reflector, obtaining a radar measurement value through a target detection algorithm, and calculating to obtain a camera measurement value according to the corner reflector pixel points so that the error meets the data fusion requirement;
step five: filtering the millimeter wave radar point cloud; meanwhile, supplementing the radar data with the image data;
step six: detecting vehicles in the image data, mixing daytime urban road data captured in clear light with nighttime urban road data, extracting features from the millimeter wave radar data and the visual data, fusing them at the feature layer, and training to obtain a real-time traffic-scene vehicle and pedestrian detection model.
Further, in the first step, the millimeter wave radar is kept parallel to the cross section of the camera, and the cross section is perpendicular to the ground.
In the first step, the millimeter wave radar and the camera are ensured to start to collect simultaneously through the data collection software of the upper computer, so that the time synchronization of the radar and the camera is realized.
Further, in the third step, the conversion formula from the radar projection coordinate system to the pixel coordinate system is given (the formula itself appears only as an image in the source and is not reproduced here), where H and θ are respectively the translation distance and rotation angle from the camera projection coordinate system to the camera coordinate system, and L_x and L_y are respectively the x-axis and y-axis distances between the radar projection coordinate system and the camera projection coordinate system.
In the fifth step, extended Kalman filtering is adopted as the filtering model for the millimeter wave radar detections.
In the sixth step, the YOLOv5 algorithm is adopted to detect vehicles in the image data.
Further, in step six, an attention mechanism is added.
Thus, the advantages over the prior art are:
the invention realizes the calibration of millimeter wave radar data and visual data and the time synchronization, solves the synchronization problem by converting a coordinate system and other methods, and improves the verification effect by using the corner reflector for verification.
Because the millimeter wave radar point cloud contains some noisy data, the invention filters the point cloud; the filtering method adopted is extended Kalman filtering, which greatly improves the detection result.
Compared with the disadvantages of a single modality, the millimeter wave radar assists detection in the visual image, and the fusion model better overcomes interference from weather, small targets, and the like.
Based on the above, the invention has the advantages that:
1. the problem of poor detection of small targets and of blurred targets in rain and fog environments can be solved;
2. the millimeter wave radar point cloud data is used for assisting in image detection, so that the accuracy of target detection is improved;
3. the coverage is wide, the detection distance is long, and the method is suitable for the road sections with mixed running of people and vehicles, dense traffic flow and easy congestion, and can simultaneously identify pedestrians, non-motor vehicles and the like;
4. the applicability is wide: the method can serve intelligent traffic management, such as adaptive signal control at urban intersections, violation snapshot assistance, traffic flow statistics, non-motor-vehicle detection, road-section flow detection, vehicle identification, accident detection, speed measurement, congestion detection, spillage detection, and urban and rural traffic-flow statistics and early-warning systems. It can also be used in V2X vehicle-networking roadside sensing scenes to realize wireless communication between vehicles, between vehicles and roadside infrastructure, and between vehicles and passers-by, sensing the surroundings of vehicles in real time for timely early warning;
5. the industrial deployment is easy, and the method is completely suitable for outdoor scenes.
Drawings
FIG. 1 is a flow chart of the conversion of the radar coordinate system and the pixel coordinate system of the present invention;
FIG. 2 is a schematic flow chart of the detection method of the present invention.
Detailed Description
The following describes specific embodiments of the present invention with reference to the drawings.
As shown in fig. 1 and 2, the present invention mainly includes the following steps:
step one: based on millimeter wave radar and vision sensor carry out data acquisition to traffic scene target, keep millimeter wave radar and camera's cross section parallel, and perpendicular with ground, guarantee through host computer data acquisition software that millimeter wave radar and camera begin to gather simultaneously, realize radar and camera synchronization in time.
Step two: extracting vehicles and pedestrians from the video in the scene to mark, and simultaneously obtaining calibration parameters to obtain the focal length and the principal point of the camera.
Step three: as shown in fig. 1, projection coordinate systems of the radar and the camera are established respectively. Through calibration, the cross sections of the millimeter wave radar and the camera are parallel to each other and perpendicular to the ground; the x-axis of each projection coordinate system points along the transverse section of the radar or camera, the y-axis along the normal of that section, and the z-axis vertically upward relative to the ground. The camera coordinate system is obtained from the camera projection coordinate system by translation and rotation; its x-axis and y-axis are parallel to those of the pixel coordinate system, and its z-axis is the optical axis of the camera. The conversion formula from the radar projection coordinate system to the pixel coordinate system is then obtained by calculation (the formula itself appears only as an image in the source and is not reproduced here), where H and θ are respectively the translation distance and rotation angle from the camera projection coordinate system to the camera coordinate system, and L_x and L_y are respectively the x-axis and y-axis distances between the radar projection coordinate system and the camera projection coordinate system.
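Since the conversion formula survives only as an image, the chain it describes — radar projection frame, offset by (L_x, L_y) to the camera projection frame, rotated/translated by (θ, H) into the camera frame, then projected by the pinhole model — can be sketched generically. Every numeric value and the exact rotation model below are assumptions, not values from the patent:

```python
import math

def radar_to_pixel(x_r, y_r, Lx=0.0, Ly=0.0, H=1.5, theta=0.0,
                   fx=800.0, fy=800.0, cx=320.0, cy=240.0):
    """Project a ground-plane radar point (x_r lateral, y_r forward, metres)
    to pixel coordinates (u, v). Illustrative pinhole model only:
    Lx, Ly: offsets between the radar and camera projection frames;
    H, theta: camera mounting height and pitch (assumed rotation model);
    fx, fy, cx, cy: camera intrinsics (focal lengths, principal point).
    """
    # radar projection frame -> camera projection frame (planar offset)
    X = x_r - Lx          # lateral position in the camera projection frame
    Z = y_r - Ly          # forward distance (depth before rotation)
    Y = H                 # radar targets assumed on the ground, camera at height H
    # camera projection frame -> camera frame: pitch rotation by theta
    Zc = Z * math.cos(theta) + Y * math.sin(theta)
    Yc = -Z * math.sin(theta) + Y * math.cos(theta)
    # camera frame -> pixel frame: pinhole projection with intrinsics
    u = fx * X / Zc + cx
    v = fy * Yc / Zc + cy
    return u, v
```

With θ = 0 and a target 10 m ahead on the radar boresight, the point lands on the principal column (u = cx) and below the principal row, as expected for a downward-looking ground point.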
Step four: and testing the fusion precision, comparing the precision by using the corner reflector, obtaining a radar measurement value by using a target detection algorithm, and obtaining a camera measurement value by calculating according to the corner reflector pixel points so that the error meets the data fusion requirement.
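The corner-reflector check above amounts to comparing the radar-measured position with the camera-derived position and verifying the residual error. A minimal sketch, assuming a Euclidean error metric and a hypothetical 0.5 m tolerance (the patent states only that the error must meet the data fusion requirement):

```python
import math

def fusion_error(radar_xy, camera_xy):
    """Euclidean distance (metres) between the radar measurement and the
    camera-derived measurement of the same corner reflector."""
    return math.hypot(radar_xy[0] - camera_xy[0], radar_xy[1] - camera_xy[1])

def meets_requirement(errors, threshold=0.5):
    """Check that the mean error over repeated corner-reflector placements
    stays within a tolerance (0.5 m is an assumed value, not from the patent)."""
    return sum(errors) / len(errors) <= threshold
```

In practice the reflector would be placed at several known ranges and the per-placement errors collected before applying the check.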
Step five: the millimeter wave radar point cloud is filtered, adopting extended Kalman filtering as the filtering model for the millimeter wave radar detections. Meanwhile, the radar data are supplemented with the image data.
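The patent specifies extended Kalman filtering but not the state or measurement model. A common choice — assumed here — is a constant-velocity Cartesian state with a nonlinear polar (range, azimuth) radar measurement, which is exactly the case where the EKF's Jacobian linearization is needed; all noise levels are illustrative:

```python
import numpy as np

def ekf_step(x, P, z, dt=0.1, q=0.5, r_range=0.3, r_az=0.02):
    """One predict/update cycle of an extended Kalman filter.

    x: state [px, py, vx, vy] (metres, m/s); P: 4x4 covariance;
    z: radar measurement [range, azimuth] in polar coordinates.
    dt, q, r_range, r_az are assumed values, not from the patent.
    """
    # --- predict: linear constant-velocity motion model ---
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], float)
    x = F @ x
    P = F @ P @ F.T + q * np.eye(4)
    # --- update: nonlinear polar measurement, linearized via Jacobian ---
    px, py = x[0], x[1]
    rng = np.hypot(px, py)
    h = np.array([rng, np.arctan2(py, px)])      # predicted measurement
    Hj = np.array([[px / rng, py / rng, 0, 0],   # Jacobian of h wrt state
                   [-py / rng**2, px / rng**2, 0, 0]])
    R = np.diag([r_range**2, r_az**2])
    y = z - h                                    # innovation
    S = Hj @ P @ Hj.T + R
    K = P @ Hj.T @ np.linalg.inv(S)              # Kalman gain
    x = x + K @ y
    P = (np.eye(4) - K @ Hj) @ P
    return x, P
```

Run per tracked detection each radar frame; points whose innovations stay large can be treated as the noise the filtering step is meant to suppress.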
Step six: vehicles in the image data are detected with the YOLOv5 algorithm, to which an attention mechanism is added. Daytime urban road data captured in clear light are mixed with nighttime urban road data; features are extracted from the millimeter wave radar data and the visual data and fused at the feature layer, and a real-time traffic-scene vehicle and pedestrian detection model is obtained through training.
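The patent fuses radar and visual features inside the network; as a lighter decision-level illustration of how projected radar points can supplement image detections with range, a simple point-in-box association can be sketched (the box and point formats here are assumptions):

```python
def attach_radar_to_boxes(boxes, radar_pts):
    """Associate projected radar points with detection boxes.

    boxes: list of (x1, y1, x2, y2) pixel boxes from the image detector.
    radar_pts: list of (u, v, range_m) radar points already projected
    into pixel coordinates.
    Returns, per box, the minimum radar range among points falling
    inside it, or None if no point hits the box.
    """
    out = []
    for (x1, y1, x2, y2) in boxes:
        ranges = [r for (u, v, r) in radar_pts
                  if x1 <= u <= x2 and y1 <= v <= y2]
        out.append(min(ranges) if ranges else None)
    return out
```

Taking the minimum range picks the closest return inside the box, a reasonable choice when a vehicle produces several radar reflections.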
As shown in fig. 2, before the space-time calibration of step one, the method further demodulates the radar echo signal, removes distortion points and the DC component, computes differences, and performs a two-dimensional FFT to obtain a complete range-Doppler map (RDM). After camera calibration, because the camera lens angle is too large or physical deviations arise in manufacturing, the camera cannot image according to the ideal pinhole model, producing radial and tangential distortion, so distortion calibration is also performed.
As an embodiment, table 1 compares the detection results of the fusion model under the method of the present invention with those of the single-modality camera; the detection effect of the present invention is obviously improved:
Table 1. Comparison of detection metrics (the table appears only as an image in the source)
There is a trade-off between precision and recall. When higher precision, i.e. a lower false-detection rate, is required, a higher prediction score threshold is needed; as the score threshold increases, the number of detected true-positive (TP) targets decreases, so recall decreases. The P-R curve is therefore used to balance precision against recall, reflecting the relationship between the two for a given algorithm; the curve and the average precision (AP) are obtained from statistics of the algorithm predictions against the ground truth. The mean average precision (mAP) is obtained by averaging the AP over every detected class.
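The AP definition above — sweeping the score threshold along the P-R curve and integrating precision over recall — can be made concrete; this sketch uses the standard all-point interpolation, with a toy input rather than the patent's data:

```python
def average_precision(scored, total_gt):
    """AP from (score, is_true_positive) prediction pairs.

    scored: list of (confidence_score, bool) per detection.
    total_gt: number of ground-truth objects (denominator of recall).
    Sweeps the score threshold, builds the P-R curve, applies the
    monotone precision envelope, and integrates over recall.
    """
    scored = sorted(scored, key=lambda s: -s[0])
    tp = fp = 0
    recalls, precisions = [], []
    for _, is_tp in scored:
        if is_tp:
            tp += 1
        else:
            fp += 1
        recalls.append(tp / total_gt)
        precisions.append(tp / (tp + fp))
    # make precision non-increasing from right to left (envelope)
    for i in range(len(precisions) - 2, -1, -1):
        precisions[i] = max(precisions[i], precisions[i + 1])
    ap, prev_r = 0.0, 0.0
    for r, p in zip(recalls, precisions):
        ap += (r - prev_r) * p
        prev_r = r
    return ap
```

mAP is then simply the mean of `average_precision` over the detected classes (vehicle, pedestrian, and so on).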
Table 2 compares the average precision (AP) of different algorithms by category:
Table 2. AP comparison (the table appears only as an image in the source)
It can be seen that the fusion-based object detection algorithm has detection capability superior to the monocular-camera object detection algorithm in all categories.
The foregoing is only a preferred embodiment of the present invention, but the scope of the invention is not limited thereto; any equivalent substitution or modification made by a person skilled in the art, within the technical scope disclosed herein and according to the technical solution and inventive concept of the invention, shall be covered by the protection scope of the invention.
Claims (7)
1. The vehicle detection method based on millimeter wave radar and vision fusion is characterized by comprising the following steps:
step one: synchronous data acquisition is carried out on traffic scene targets by adopting millimeter wave radars and vision sensors;
step two: extracting vehicles and pedestrians from videos acquired by the vision sensor for marking, and acquiring calibration parameters to obtain the focal length and principal point of the camera;
step three: calculating to obtain a conversion relation from a radar projection coordinate system to a pixel coordinate system;
step four: testing the fusion precision, comparing the precision by using a corner reflector, obtaining a radar measurement value through a target detection algorithm, and calculating to obtain a camera measurement value according to the corner reflector pixel points so that the error meets the data fusion requirement;
step five: filtering the millimeter wave radar point cloud; meanwhile, supplementing the radar data with the image data;
step six: detecting vehicles in the image data, mixing daytime urban road data captured in clear light with nighttime urban road data, extracting features from the millimeter wave radar data and the visual data, fusing them at the feature layer, and training to obtain a real-time traffic-scene vehicle and pedestrian detection model.
2. The vehicle detection method based on the fusion of the millimeter wave radar and the vision according to claim 1, wherein in the first step, the millimeter wave radar is kept parallel to the cross section of the camera, and the cross section is perpendicular to the ground.
3. The vehicle detection method based on millimeter wave radar and vision fusion according to claim 1 or 2, wherein in the first step, the data acquisition software of the upper computer ensures that the millimeter wave radar and the camera start to acquire simultaneously, so that the time synchronization of the radar and the camera is realized.
4. The vehicle detection method based on millimeter wave radar and vision fusion according to claim 1, wherein in the third step, the conversion formula from the radar projection coordinate system to the pixel coordinate system is given (the formula itself appears only as an image in the source), where H and θ are respectively the translation distance and rotation angle from the camera projection coordinate system to the camera coordinate system;
5. The vehicle detection method based on millimeter wave radar and vision fusion according to claim 1, wherein in the fifth step, extended Kalman filtering is adopted as the filtering model for the millimeter wave radar detections.
6. The vehicle detection method based on millimeter wave radar and vision fusion according to claim 1, wherein in the sixth step, vehicles in the image data are detected using the YOLOv5 algorithm.
7. The vehicle detection method based on millimeter wave radar and vision fusion according to claim 1, wherein in step six, an attention mechanism is added.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310042195.5A CN116363623B (en) | 2023-01-28 | 2023-01-28 | Vehicle detection method based on millimeter wave radar and vision fusion |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310042195.5A CN116363623B (en) | 2023-01-28 | 2023-01-28 | Vehicle detection method based on millimeter wave radar and vision fusion |
Publications (2)
Publication Number | Publication Date |
---|---|
CN116363623A true CN116363623A (en) | 2023-06-30 |
CN116363623B CN116363623B (en) | 2023-10-20 |
Family
ID=86940226
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310042195.5A Active CN116363623B (en) | 2023-01-28 | 2023-01-28 | Vehicle detection method based on millimeter wave radar and vision fusion |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116363623B (en) |
Citations (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106289159A (en) * | 2016-07-28 | 2017-01-04 | 北京智芯原动科技有限公司 | The vehicle odometry method and device compensated based on range finding |
CN109492507A (en) * | 2017-09-12 | 2019-03-19 | 百度在线网络技术(北京)有限公司 | The recognition methods and device of the traffic light status, computer equipment and readable medium |
KR102061461B1 (en) * | 2019-10-08 | 2019-12-31 | 공간정보기술 주식회사 | Stereo camera system using vari-focal lens and operating method thereof |
CN111368706A (en) * | 2020-03-02 | 2020-07-03 | 南京航空航天大学 | Data fusion dynamic vehicle detection method based on millimeter wave radar and machine vision |
CN111812649A (en) * | 2020-07-15 | 2020-10-23 | 西北工业大学 | Obstacle identification and positioning method based on fusion of monocular camera and millimeter wave radar |
CN112560972A (en) * | 2020-12-21 | 2021-03-26 | 北京航空航天大学 | Target detection method based on millimeter wave radar prior positioning and visual feature fusion |
CN113359097A (en) * | 2021-06-21 | 2021-09-07 | 北京易航远智科技有限公司 | Millimeter wave radar and camera combined calibration method |
CN113822892A (en) * | 2021-11-24 | 2021-12-21 | 腾讯科技(深圳)有限公司 | Evaluation method, device and equipment of simulated radar and computer program product |
CN114137512A (en) * | 2021-11-29 | 2022-03-04 | 湖南大学 | Front multi-vehicle tracking method based on fusion of millimeter wave radar and deep learning vision |
CN114236528A (en) * | 2022-02-23 | 2022-03-25 | 浙江高信技术股份有限公司 | Target detection method and system based on millimeter wave radar and video fusion |
CN114280611A (en) * | 2021-11-08 | 2022-04-05 | 上海智能网联汽车技术中心有限公司 | Road side sensing method integrating millimeter wave radar and camera |
CN114708585A (en) * | 2022-04-15 | 2022-07-05 | 电子科技大学 | Three-dimensional target detection method based on attention mechanism and integrating millimeter wave radar with vision |
CN115032651A (en) * | 2022-06-06 | 2022-09-09 | 合肥工业大学 | Target detection method based on fusion of laser radar and machine vision |
CN115272452A (en) * | 2022-06-30 | 2022-11-01 | 深圳市镭神智能系统有限公司 | Target detection positioning method and device, unmanned aerial vehicle and storage medium |
CN115267762A (en) * | 2022-08-03 | 2022-11-01 | 电子科技大学重庆微电子产业技术研究院 | Low-altitude slow-speed small target tracking method integrating millimeter wave radar and visual sensor |
CN115372958A (en) * | 2022-08-17 | 2022-11-22 | 苏州广目汽车科技有限公司 | Target detection and tracking method based on millimeter wave radar and monocular vision fusion |
CN115457138A (en) * | 2021-05-20 | 2022-12-09 | 南京隼眼电子科技有限公司 | Position calibration method, system and storage medium based on radar and camera |
CN115565097A (en) * | 2022-08-29 | 2023-01-03 | 苏州飞搜科技有限公司 | Method and device for detecting compliance of personnel behaviors in transaction scene |
- 2023-01-28: CN202310042195.5A filed in CN; patent CN116363623B granted, status Active
Patent Citations (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106289159A (en) * | 2016-07-28 | 2017-01-04 | 北京智芯原动科技有限公司 | The vehicle odometry method and device compensated based on range finding |
CN109492507A (en) * | 2017-09-12 | 2019-03-19 | 百度在线网络技术(北京)有限公司 | The recognition methods and device of the traffic light status, computer equipment and readable medium |
KR102061461B1 (en) * | 2019-10-08 | 2019-12-31 | 공간정보기술 주식회사 | Stereo camera system using vari-focal lens and operating method thereof |
CN111368706A (en) * | 2020-03-02 | 2020-07-03 | 南京航空航天大学 | Data fusion dynamic vehicle detection method based on millimeter wave radar and machine vision |
CN111812649A (en) * | 2020-07-15 | 2020-10-23 | 西北工业大学 | Obstacle identification and positioning method based on fusion of monocular camera and millimeter wave radar |
CN112560972A (en) * | 2020-12-21 | 2021-03-26 | 北京航空航天大学 | Target detection method based on millimeter wave radar prior positioning and visual feature fusion |
CN115457138A (en) * | 2021-05-20 | 2022-12-09 | 南京隼眼电子科技有限公司 | Position calibration method, system and storage medium based on radar and camera |
CN113359097A (en) * | 2021-06-21 | 2021-09-07 | 北京易航远智科技有限公司 | Millimeter wave radar and camera combined calibration method |
CN114280611A (en) * | 2021-11-08 | 2022-04-05 | 上海智能网联汽车技术中心有限公司 | Road side sensing method integrating millimeter wave radar and camera |
CN113822892A (en) * | 2021-11-24 | 2021-12-21 | 腾讯科技(深圳)有限公司 | Evaluation method, device and equipment of simulated radar and computer program product |
CN114137512A (en) * | 2021-11-29 | 2022-03-04 | 湖南大学 | Front multi-vehicle tracking method based on fusion of millimeter wave radar and deep learning vision |
CN114236528A (en) * | 2022-02-23 | 2022-03-25 | 浙江高信技术股份有限公司 | Target detection method and system based on millimeter wave radar and video fusion |
CN114708585A (en) * | 2022-04-15 | 2022-07-05 | 电子科技大学 | Three-dimensional target detection method based on attention mechanism and integrating millimeter wave radar with vision |
CN115032651A (en) * | 2022-06-06 | 2022-09-09 | 合肥工业大学 | Target detection method based on fusion of laser radar and machine vision |
CN115272452A (en) * | 2022-06-30 | 2022-11-01 | 深圳市镭神智能系统有限公司 | Target detection positioning method and device, unmanned aerial vehicle and storage medium |
CN115267762A (en) * | 2022-08-03 | 2022-11-01 | 电子科技大学重庆微电子产业技术研究院 | Low-altitude slow-speed small target tracking method integrating millimeter wave radar and visual sensor |
CN115372958A (en) * | 2022-08-17 | 2022-11-22 | 苏州广目汽车科技有限公司 | Target detection and tracking method based on millimeter wave radar and monocular vision fusion |
CN115565097A (en) * | 2022-08-29 | 2023-01-03 | 苏州飞搜科技有限公司 | Method and device for detecting compliance of personnel behaviors in transaction scene |
Non-Patent Citations (2)
Title |
---|
JUNJI QIN; CONG LI; HUI JING: "Research on Vision/Radar Sensor Fusion Positioning Algorithm Based on Strong Tracking Filter and Unscented Kalman Filter", 2022 IEEE 4th International Conference on Civil Aviation Safety and Information Technology (ICCASIT) * |
CHEN ZHOUQUAN; HUANG JUN; ZHENG YUANJIE: "Target detection algorithm based on an attention mechanism for millimeter wave radar and vision fusion", Telecommunication Engineering (电讯技术) * |
Also Published As
Publication number | Publication date |
---|---|
CN116363623B (en) | 2023-10-20 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108960183B (en) | Curve target identification system and method based on multi-sensor fusion | |
CN111368706B (en) | Data fusion dynamic vehicle detection method based on millimeter wave radar and machine vision | |
CN114282597B (en) | Method and system for detecting vehicle travelable area and automatic driving vehicle adopting system | |
CN109920246B (en) | Collaborative local path planning method based on V2X communication and binocular vision | |
CN112215306B (en) | Target detection method based on fusion of monocular vision and millimeter wave radar | |
CN112509333A (en) | Roadside parking vehicle track identification method and system based on multi-sensor sensing | |
WO2020052530A1 (en) | Image processing method and device and related apparatus | |
Mu et al. | Traffic light detection and recognition for autonomous vehicles | |
EP2282295B1 (en) | Object recognizing device and object recognizing method | |
US9154741B2 (en) | Apparatus and method for processing data of heterogeneous sensors in integrated manner to classify objects on road and detect locations of objects | |
CN111554088A (en) | Multifunctional V2X intelligent roadside base station system | |
KR101569919B1 (en) | Apparatus and method for estimating the location of the vehicle | |
Choi et al. | A sensor fusion system with thermal infrared camera and LiDAR for autonomous vehicles and deep learning based object detection | |
CN108594244B (en) | Obstacle recognition transfer learning method based on stereoscopic vision and laser radar | |
CN110188606B (en) | Lane recognition method and device based on hyperspectral imaging and electronic equipment | |
CN112740225B (en) | Method and device for determining road surface elements | |
CN113885062A (en) | Data acquisition and fusion equipment, method and system based on V2X | |
CN112598899A (en) | Data processing method and device | |
WO2021166169A1 (en) | Vehicle condition estimation method, vehicle condition estimation device and vehicle condition estimation program | |
US20230177724A1 (en) | Vehicle to infrastructure extrinsic calibration system and method | |
Song et al. | Automatic detection and classification of road, car, and pedestrian using binocular cameras in traffic scenes with a common framework | |
CN116363623B (en) | Vehicle detection method based on millimeter wave radar and vision fusion | |
CN116699602A (en) | Target detection system and method based on millimeter wave radar and camera fusion | |
CN115790568A (en) | Map generation method based on semantic information and related equipment | |
CN115188195A (en) | Method and system for extracting vehicle track of urban omnidirectional intersection in real time |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant | ||
CB03 | Change of inventor or designer information | ||
CB03 | Change of inventor or designer information |
Inventor after: Bai Hongliang; Gou Huanshu; Zhang Wenxiao; Xiong Fengye |
Inventor before: Bai Hongliang; Gou Shuhuan; Zhang Wenxiao; Xiong Fengye |