CN110865367B - Intelligent radar video data fusion method - Google Patents

Intelligent radar video data fusion method

Info

Publication number
CN110865367B
CN110865367B (application CN201911207198.XA)
Authority
CN
China
Prior art keywords
target object
image
distance
reference line
video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911207198.XA
Other languages
Chinese (zh)
Other versions
CN110865367A (en)
Inventor
邓韶辉 (Deng Shaohui)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanxi Heyuan Technology Co ltd
Original Assignee
Shanxi Heyuan Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanxi Heyuan Technology Co ltd
Priority to CN201911207198.XA
Publication of CN110865367A
Application granted
Publication of CN110865367B
Legal status: Active
Anticipated expiration

Links

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01S: RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00: Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/86: Combinations of radar systems with non-radar systems, e.g. sonar, direction finder
    • G01S13/867: Combination of radar systems with cameras
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01S: RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00: Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/02: Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00
    • G01S7/41: Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section
    • G01S7/415: Identification of targets based on measurements of movement associated with the target

Abstract

The invention relates to an intelligent radar video data fusion method, belonging to the technical field of radar and video processing. The method specifically comprises: drawing a reference line in the acquisition area that crosses the whole acquisition area, and placing a reference target at each end of the reference line; acquiring a video image and a radar monitoring image; determining the distances between the target objects and the reference line at the same time point, namely calculating, on the video image, the distance between each target object and the reference line and the distance between the foot point of each target object on the reference line and the reference target, and calculating the same two distances on the radar monitoring image; and finally determining, according to these distances and directions, whether the targets on the two images are the same target, and fusing them if they are.

Description

Intelligent radar video data fusion method
Technical Field
The invention relates to an intelligent fusion method for radar video data, and belongs to the technical field of radar and video processing.
Background
With the development of science and technology, radar and video sensing technologies are increasingly applied to intelligent traffic. A radar sensor measures the distance, speed and angle of surrounding objects by transmitting high-frequency electromagnetic waves and receiving their echoes, while a video sensor detects the type and angle of surrounding objects from the video images monitored through its lens. However, both radar sensors and video sensors have limitations in practical applications. The limitations of radar technology are: first, its detail resolution of the environment and obstacles is not high, especially its angular resolution; second, it cannot identify the object type. The limitations of video technology are: first, it is strongly affected by illumination and by environmental conditions such as fog, rain and snow; second, it cannot accurately obtain the distance and speed of a target. It is therefore necessary to fuse video and radar data effectively.
Disclosure of Invention
In order to solve the above technical problems in the prior art, the invention provides an intelligent radar video data fusion method that can dynamically fuse radar data with video data and improve the accuracy of target object identification.
In order to achieve this aim, the technical scheme adopted by the invention is an intelligent radar video data fusion method that uses a video collector and a radar sensor arranged at the same position and is specifically carried out according to the following steps:
S1, establishing a reference line: within the collection area of the video collector, at a position at distance X from the video collector, draw a reference line across the whole collection area and place a reference target at each end of the reference line; the image of the reference line collected by the video collector forms the reference line ab of the video image, and the line connecting the positions of the two reference targets collected by the radar sensor forms the reference line a'b' of the radar image;
S2, acquiring a video image and a radar monitoring image: acquire the real-time video images of the video collector and the real-time radar monitoring images of the radar sensor, and select one frame of video image and one frame of radar monitoring image taken at the same time point; determine the scale of the video image by calculating the ratio of a target object on each frame of video image to the actual object, and determine the scale of the radar monitoring image by calculating the ratio of the imaged distance of a target object from the origin on each frame of radar monitoring image to its actual distance;
S3, determining the distance between each target object and the reference line at the same time point: on the video image, calculate the distances L1, L2, L3, … between each target object and the reference line ab, while on the radar monitoring image calculating the distances M1, M2, M3, … between each target object and the reference line a'b'; calculate the distances K1, K2, K3, … between the foot point of each target object on the video image on the reference line ab and the reference target, and the distances N1, N2, N3, … between the foot point of each target object on the radar monitoring image on the reference line a'b' and the reference target; then convert the distances of each target object on the two images into actual distances using the image scales, namely the actual distances L'1, L'2, L'3, … between each target object on the video image and the reference line ab, the actual distances K'1, K'2, K'3, … between the foot point of each video target object on the reference line ab and the reference target, the actual distances M'1, M'2, M'3, … between each target object on the radar monitoring image and the reference line a'b', and the actual distances N'1, N'2, N'3, … between the foot point of each radar target object on the reference line a'b' and the reference target;
S4, image fusion: compare the actual distance between each target object on the video image and the reference line ab, the actual distance between the foot point of each target object on the video image and the reference target on the reference line ab, the actual distance between each target object on the radar monitoring image and the reference line a'b', and the actual distance between the foot point of each target object on the radar monitoring image and the reference target on the reference line a'b'. When the actual distance between a target object on the video image and the reference line ab is equal to the actual distance between a target object on the radar monitoring image and the reference line a'b', and the actual distance between the foot point of the video target object and the reference target on the reference line ab is equal to the actual distance between the foot point of the radar target object and the reference target on the reference line a'b', the target object on the video image and the target object on the radar monitoring image are the same target object; the real-time data of the target object on the radar monitoring image are then fused with the real-time data of the target object in the video image, each frame of data on the video image and the radar monitoring image is compared and analyzed, and finally a video image carrying speed, moving direction and distance is output.
Compared with the prior art, the invention has the following technical effects. The method acquires each simultaneous frame of the radar and video images and establishes a reference line in the common acquisition area of the video collector and the radar. For each frame it determines the distance from each target object to the reference line and the distance between the foot point of the target object on the reference line and the reference target, and compares these two distances between the video image and the radar monitoring image; if the two distances are equal, the target objects on the two images are identified as the same object and the video and radar data are fused. Dynamic intelligent fusion of video and radar data is thereby realized, and a new video image containing speed, moving direction and distance is finally output, improving the identification accuracy of the target object.
Detailed Description
In order to make the technical problems to be solved, the technical schemes and the beneficial effects clearer, the invention is further described in detail below with reference to the embodiments. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
An intelligent radar video data fusion method uses a video collector and a radar sensor arranged at the same position and is carried out according to the following steps:
S1, establishing a reference line: within the collection area of the video collector, at a position at distance X from the video collector, draw a reference line across the whole collection area and place a reference target at each end of the reference line; the image of the reference line collected by the video collector forms the reference line ab of the video image, and the line connecting the positions of the two reference targets collected by the radar sensor forms the reference line a'b' of the radar image;
S2, acquiring a video image and a radar monitoring image: acquire the real-time video images of the video collector and the real-time radar monitoring images of the radar sensor, and select one frame of video image and one frame of radar monitoring image taken at the same time point; determine the scale of the video image by calculating the ratio of a target object on each frame of video image to the actual object, and determine the scale of the radar monitoring image by calculating the ratio of the imaged distance of a target object from the origin on each frame of radar monitoring image to its actual distance;
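As an illustration of step S2, the sketch below shows one way the two image scales could be computed. It is a minimal sketch, not the patent's implementation: the function names, the assumption that the true ground separation of the two reference targets is measured when the reference line is laid out, and the sample coordinates are all ours.

```python
import math

def video_scale(a_px: tuple, b_px: tuple, ref_span_m: float) -> float:
    """Metres represented by one pixel along the video reference line ab,
    given the measured ground separation of the two reference targets."""
    return ref_span_m / math.dist(a_px, b_px)

def radar_scale(imaged_range: float, actual_range_m: float) -> float:
    """Ratio converting a target's distance-from-origin on the radar
    monitoring image to its actual distance (1.0 for a metric plot)."""
    return actual_range_m / imaged_range

# Example: reference targets imaged 800 px apart, 20 m apart on the ground.
s_video = video_scale((120.0, 540.0), (920.0, 540.0), 20.0)  # 0.025 m/px
s_radar = radar_scale(20.0, 20.0)                            # 1.0
```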
S3, determining the distance between each target object and the reference line at the same time point: on the video image, calculate the distances L1, L2, L3, … between each target object and the reference line ab, while on the radar monitoring image calculating the distances M1, M2, M3, … between each target object and the reference line a'b'; calculate the distances K1, K2, K3, … between the foot point of each target object on the video image on the reference line ab and the reference target, and the distances N1, N2, N3, … between the foot point of each target object on the radar monitoring image on the reference line a'b' and the reference target; then convert the distances of each target object on the two images into actual distances using the image scales, namely the actual distances L'1, L'2, L'3, … between each target object on the video image and the reference line ab, the actual distances K'1, K'2, K'3, … between the foot point of each video target object on the reference line ab and the reference target, the actual distances M'1, M'2, M'3, … between each target object on the radar monitoring image and the reference line a'b', and the actual distances N'1, N'2, N'3, … between the foot point of each radar target object on the reference line a'b' and the reference target;
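Step S3 reduces to point-to-line geometry, sketched below for one target on one image; the coordinates are illustrative, and multiplying by the step-S2 scale to obtain the actual distances is our reading of the scaling step.

```python
import math

def distances_to_reference(p, a, b):
    """Return (L, K) for target point p against reference line ab:
    L is the perpendicular distance from p to the line through a and b,
    K is the distance from p's foot point on that line to target a."""
    abx, aby = b[0] - a[0], b[1] - a[1]
    apx, apy = p[0] - a[0], p[1] - a[1]
    ab_len = math.hypot(abx, aby)
    K = (apx * abx + apy * aby) / ab_len     # projection of ap onto ab
    L = abs(apx * aby - apy * abx) / ab_len  # |cross product| / |ab|
    return L, K

# One video target at pixel (400, 300) against ab; scale by the assumed
# 0.025 m/px from step S2 to obtain the actual distances L'1 and K'1.
L1, K1 = distances_to_reference((400.0, 300.0), (120.0, 540.0), (920.0, 540.0))
L1_actual, K1_actual = L1 * 0.025, K1 * 0.025  # 6.0 m, 7.0 m
```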
S4, image fusion: compare the actual distance between each target object on the video image and the reference line ab, the actual distance between the foot point of each target object on the video image and the reference target on the reference line ab, the actual distance between each target object on the radar monitoring image and the reference line a'b', and the actual distance between the foot point of each target object on the radar monitoring image and the reference target on the reference line a'b'. When the actual distance between a target object on the video image and the reference line ab is equal to the actual distance between a target object on the radar monitoring image and the reference line a'b', and the actual distance between the foot point of the video target object and the reference target on the reference line ab is equal to the actual distance between the foot point of the radar target object and the reference target on the reference line a'b', the target object on the video image and the target object on the radar monitoring image are the same target object; the real-time data of the target object on the radar monitoring image are then fused with the real-time data of the target object in the video image, each frame of data on the video image and the radar monitoring image is compared and analyzed, and finally a video image carrying speed, moving direction and distance is output.
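The matching rule of step S4 can be sketched as below. The patent states that the distances are compared for equality; since real measurements never agree exactly, the tolerance here, together with the layout of the fused record, is our assumption.

```python
def fuse_targets(video_targets, radar_targets, tol=0.5):
    """video_targets: list of (L', K') actual distances from the video image;
    radar_targets: list of (M', N', speed, heading, range) from the radar.
    Pair targets whose distances agree within tol metres and attach the
    radar kinematics to the matching video detection."""
    fused, used = [], set()
    for i, (lv, kv) in enumerate(video_targets):
        for j, (lm, km, speed, heading, rng) in enumerate(radar_targets):
            if j not in used and abs(lv - lm) <= tol and abs(kv - km) <= tol:
                # Same target object on both images: fuse the records.
                fused.append({"video_idx": i, "speed": speed,
                              "heading": heading, "range": rng})
                used.add(j)
                break
    return fused

# Example: the video target at (6.0 m, 7.0 m) matches the radar target at
# (6.1 m, 7.05 m) and inherits its speed, heading and range.
print(fuse_targets([(6.0, 7.0)], [(6.1, 7.05, 13.9, 85.0, 42.3)]))
```

Repeating this pairing for every synchronized frame pair yields the annotated output video carrying speed, moving direction and distance described in step S4.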
The foregoing description of the preferred embodiment of the invention is not intended to be limiting, but rather is intended to cover all modifications, equivalents, and alternatives falling within the spirit and principles of the invention.

Claims (1)

1. An intelligent radar video data fusion method using a video collector and a radar sensor arranged at the same position, characterized in that it is specifically carried out according to the following steps:
S1, establishing a reference line: within the collection area of the video collector, at a position at distance X from the video collector, draw a reference line across the whole collection area and place a reference target at each end of the reference line; the image of the reference line collected by the video collector forms the reference line ab of the video image, and the line connecting the positions of the two reference targets collected by the radar sensor forms the reference line a'b' of the radar image;
S2, acquiring a video image and a radar monitoring image: acquire the real-time video images of the video collector and the real-time radar monitoring images of the radar sensor, and select one frame of video image and one frame of radar monitoring image taken at the same time point; determine the scale of the video image by calculating the ratio of a target object on each frame of video image to the actual object, and determine the scale of the radar monitoring image by calculating the ratio of the imaged distance of a target object from the origin on each frame of radar monitoring image to its actual distance;
S3, determining the distance between each target object and the reference line at the same time point: on the video image, calculate the distances L1, L2, L3, … between each target object and the reference line ab, while on the radar monitoring image calculating the distances M1, M2, M3, … between each target object and the reference line a'b'; calculate the distances K1, K2, K3, … between the foot point of each target object on the video image on the reference line ab and the reference target, and the distances N1, N2, N3, … between the foot point of each target object on the radar monitoring image on the reference line a'b' and the reference target; then convert the distances of each target object on the two images into actual distances using the image scales, namely the actual distances L'1, L'2, L'3, … between each target object on the video image and the reference line ab, the actual distances K'1, K'2, K'3, … between the foot point of each video target object on the reference line ab and the reference target, the actual distances M'1, M'2, M'3, … between each target object on the radar monitoring image and the reference line a'b', and the actual distances N'1, N'2, N'3, … between the foot point of each radar target object on the reference line a'b' and the reference target;
S4, image fusion: compare the actual distance between each target object on the video image and the reference line ab, the actual distance between the foot point of each target object on the video image and the reference target on the reference line ab, the actual distance between each target object on the radar monitoring image and the reference line a'b', and the actual distance between the foot point of each target object on the radar monitoring image and the reference target on the reference line a'b'. When the actual distance between a target object on the video image and the reference line ab is equal to the actual distance between a target object on the radar monitoring image and the reference line a'b', and the actual distance between the foot point of the video target object and the reference target on the reference line ab is equal to the actual distance between the foot point of the radar target object and the reference target on the reference line a'b', the target object on the video image and the target object on the radar monitoring image are the same target object; the real-time data of the target object on the radar monitoring image are then fused with the real-time data of the target object in the video image, each frame of data on the video image and the radar monitoring image is compared and analyzed, and finally a video image carrying speed, moving direction and distance is output.
CN201911207198.XA 2019-11-30 2019-11-30 Intelligent radar video data fusion method Active CN110865367B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911207198.XA 2019-11-30 2019-11-30 Intelligent radar video data fusion method (granted as CN110865367B)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911207198.XA 2019-11-30 2019-11-30 Intelligent radar video data fusion method (granted as CN110865367B)

Publications (2)

Publication Number Publication Date
CN110865367A CN110865367A (en) 2020-03-06
CN110865367B 2023-05-05

Family

ID=69658310

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911207198.XA Active CN110865367B (en) 2019-11-30 2019-11-30 Intelligent radar video data fusion method

Country Status (1)

Country Link
CN (1) CN110865367B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111753757B * 2020-06-28 2021-06-18 浙江大华技术股份有限公司 (Zhejiang Dahua Technology Co., Ltd.) Image recognition processing method and device
CN116047112A * 2021-10-28 2023-05-02 华为技术有限公司 (Huawei Technologies Co., Ltd.) Method, device and storage medium for measuring surface flow velocity of fluid

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101825442A (en) * 2010-04-30 2010-09-08 北京理工大学 Mobile platform-based color laser point cloud imaging system
JP2015206797A (en) * 2012-11-22 2015-11-19 株式会社デンソー Target detection device
CN106408940A (en) * 2016-11-02 2017-02-15 南京慧尔视智能科技有限公司 Microwave and video data fusion-based traffic detection method and device
CN109490890A (en) * 2018-11-29 2019-03-19 重庆邮电大学 A kind of millimetre-wave radar towards intelligent vehicle and monocular camera information fusion method
CN109886308A (en) * 2019-01-25 2019-06-14 中国汽车技术研究中心有限公司 One kind being based on the other dual sensor data fusion method of target level and device
CN109901156A (en) * 2019-01-25 2019-06-18 中国汽车技术研究中心有限公司 A kind of subject fusion method and apparatus of vehicle millimetre-wave radar and camera
CN109948523A (en) * 2019-03-18 2019-06-28 中国汽车工程研究院股份有限公司 A kind of object recognition methods and its application based on video Yu millimetre-wave radar data fusion

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10852419B2 (en) * 2017-10-20 2020-12-01 Texas Instruments Incorporated System and method for camera radar fusion


Also Published As

Publication number Publication date
CN110865367A (en) 2020-03-06

Similar Documents

Publication Publication Date Title
CN110532896B (en) Road vehicle detection method based on fusion of road side millimeter wave radar and machine vision
CN111368706B (en) Data fusion dynamic vehicle detection method based on millimeter wave radar and machine vision
CN112558023B (en) Calibration method and device of sensor
CN109615880B (en) Vehicle flow measuring method based on radar image processing
CN108692701B (en) Mobile robot multi-sensor fusion positioning method based on particle filter
CN111045000A (en) Monitoring system and method
CN110865367B (en) Intelligent radar video data fusion method
CN105116886A (en) Robot autonomous walking method
CN113850102A (en) Vehicle-mounted vision detection method and system based on millimeter wave radar assistance
Wang et al. A roadside camera-radar sensing fusion system for intelligent transportation
CN115205559A (en) Cross-domain vehicle weight recognition and continuous track construction method
CN115690713A (en) Binocular camera-based radar-vision fusion event detection method
WO2024078265A1 (en) Multi-layer high-precision map generation method and apparatus
CN115166721B (en) Radar and GNSS information calibration fusion method and device in roadside sensing equipment
CN111323771A (en) Fixed-distance-based millimeter wave radar and video data fusion method
CN117310627A (en) Combined calibration method applied to vehicle-road collaborative road side sensing system
Chenchen et al. A camera calibration method for obstacle distance measurement based on monocular vision
CN115471526A (en) Automatic driving target detection and tracking method based on multi-source heterogeneous information fusion
CN110865368A (en) Radar video data fusion method based on artificial intelligence
CN113947141B (en) Roadside beacon sensing system of urban intersection scene
CN115166722A (en) Non-blind-area single-rod multi-sensor detection device for road side unit and control method
Reulke et al. Situation analysis and atypical event detection with multiple cameras and multi-object tracking
Kyutoku et al. Ego-localization robust for illumination condition changes based on far-infrared camera and millimeter-wave radar fusion
CN110375654A (en) The monitoring method of real-time detection bridge three-D displacement
CN201654235U (en) Multiple sensor fusion device for realizing absolute positioning of measurement target

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant