CN110865367A - Intelligent fusion method for radar video data - Google Patents


Info

Publication number
CN110865367A
CN110865367A (application number CN201911207198.XA)
Authority
CN
China
Prior art keywords
target object
reference line
radar
image
distance
Prior art date
Legal status (an assumption, not a legal conclusion; Google has not performed a legal analysis): Granted
Application number
CN201911207198.XA
Other languages
Chinese (zh)
Other versions
CN110865367B (en)
Inventor
邓韶辉 (Deng Shaohui)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanxi Heyuan Polytron Technologies Inc
Original Assignee
Shanxi Heyuan Polytron Technologies Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanxi Heyuan Polytron Technologies Inc filed Critical Shanxi Heyuan Polytron Technologies Inc
Priority to CN201911207198.XA priority Critical patent/CN110865367B/en
Publication of CN110865367A publication Critical patent/CN110865367A/en
Application granted granted Critical
Publication of CN110865367B publication Critical patent/CN110865367B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/86Combinations of radar systems with non-radar systems, e.g. sonar, direction finder
    • G01S13/867Combination of radar systems with cameras
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/02Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00
    • G01S7/41Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section
    • G01S7/415Identification of targets based on measurements of movement associated with the target

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Radar Systems Or Details Thereof (AREA)

Abstract

The invention relates to an intelligent fusion method for radar video data, belonging to the technical field of radar and video processing. The method comprises: drawing a reference line across the whole acquisition area and placing reference targets at its two ends; acquiring video images and radar monitoring images; for the same time point, calculating the distance of each target object from the reference line on the video image and the distance of its foot point on the reference line from the reference target, and likewise calculating both distances on the radar monitoring image; and finally determining, from these distances and directions, whether the targets on the two images are the same target, fusing them if so. The invention can dynamically fuse radar data and video data and improve the identification precision of the target object.

Description

Intelligent fusion method for radar video data
Technical Field
The invention relates to an intelligent fusion method of radar video data, and belongs to the technical field of radar and video processing.
Background
With the development of science and technology, radar and video sensing technologies are increasingly applied to intelligent traffic. A radar sensor measures the distance, speed and angle of surrounding objects by transmitting high-frequency electromagnetic waves and receiving the echoes, while a video sensor detects the type and angle of surrounding objects by monitoring the video image in the lens. However, both radar sensors and video sensors have limitations in practical applications. The limitations of radar technology are: firstly, its detail resolution of the environment and obstacles is not high, particularly in terms of angular resolution; secondly, it cannot identify the type of an object. The limitations of video technology are: firstly, it is strongly affected by illumination and by environmental conditions such as fog, rain and snow; secondly, it cannot accurately acquire the distance and speed of a target. How to effectively fuse video and radar data is therefore an essential problem.
Disclosure of Invention
In order to solve the technical problems in the prior art, the invention provides an intelligent fusion method for radar video data, which can dynamically fuse the radar data and the video data and improve the identification precision of a target object.
To achieve this purpose, the technical scheme adopted by the invention is an intelligent fusion method for radar video data that uses a video collector and a radar sensor arranged at the same position and specifically comprises the following steps.
S1, establishing a reference line: within the collection area of the video collector, select a position at a distance X from the video collector and draw a reference line across the whole collection area, placing a reference target at each end of the line; the image of this line as collected by the video collector is the reference line ab of the video image, and the line connecting the positions of the two reference targets as collected by the radar sensor is the reference line a'b' of the radar image;
S2, acquiring a video image and a radar monitoring image: acquire the real-time video image from the video collector and the real-time radar monitoring image from the radar sensor, then select one frame of video image and one frame of radar monitoring image at the same time point; on each frame of video image, compute the ratio between the size of a target object and the size of the actual object to determine the scale of the video image; on each frame of radar monitoring image, compute the ratio between the distance of a target object from the origin and the actual distance of the actual object from the monitoring point to determine the scale of the radar monitoring image;
S3, determining the distance between each target object and the reference line at the same time point: on the video image, calculate the distance L1, L2, L3, … of each target object from the reference line ab, and the distance K1, K2, K3, … of each target object's foot point on the reference line ab from the reference target; on the radar monitoring image, calculate the distance M1, M2, M3, … of each target object from the reference line a'b', and the distance N1, N2, N3, … of each target object's foot point on the reference line a'b' from the reference target; then convert the distances on both images to actual distances using the scales, obtaining the actual distance L'1, L'2, L'3, … of each target object from the reference line ab and the actual distance K'1, K'2, K'3, … of each foot point from the reference target on the video image, and the actual distance M'1, M'2, M'3, … of each target object from the reference line a'b' and the actual distance N'1, N'2, N'3, … of each foot point from the reference target on the radar monitoring image;
S4, fusing images: compare the actual distance of each target object from the reference line ab and the actual distance of its foot point on ab from the reference target on the video image against the actual distance of each target object from the reference line a'b' and the actual distance of its foot point on a'b' from the reference target on the radar monitoring image. When the actual distance of a target object from the reference line ab on the video image equals the actual distance of a target object from the reference line a'b' on the radar monitoring image, and the actual distances of their foot points from the reference target are also equal, the two are the same target object; the real-time data of that target object monitored by the radar is then fused with its real-time data in the video image, each frame of the video image and the radar monitoring image is compared and analyzed, and finally a video image annotated with speed, moving direction and distance is output.
Compared with the prior art, the invention has the following technical effects. Each frame of the radar and video images at the same time is obtained, a reference line is established in the acquisition area of both sensors, and the distance of each target object from the reference line and the distance of its foot point on the reference line from the reference target are determined on each frame. By comparing these two distances between the video image and the radar monitoring image, the data are fused: if the two distances are equal, the target object on the video image and the target object on the radar monitoring image are identified as the same target. This realizes dynamic intelligent fusion of video and radar data, outputs a new video image containing speed, moving direction and distance, and further improves the identification precision of the target object.
Detailed Description
To make the technical problems to be solved, the technical solutions, and the advantageous effects of the present invention more apparent, the invention is further described in detail below with reference to the following embodiments. It should be understood that the specific embodiments described here merely illustrate the invention and are not intended to limit it.
An intelligent fusion method of radar video data uses a video collector and a radar sensor arranged at the same position and is specifically carried out according to the following steps.
S1, establishing a reference line: within the collection area of the video collector, select a position at a distance X from the video collector and draw a reference line across the whole collection area, placing a reference target at each end of the line; the image of this line as collected by the video collector is the reference line ab of the video image, and the line connecting the positions of the two reference targets as collected by the radar sensor is the reference line a'b' of the radar image;
S2, acquiring a video image and a radar monitoring image: acquire the real-time video image from the video collector and the real-time radar monitoring image from the radar sensor, then select one frame of video image and one frame of radar monitoring image at the same time point; on each frame of video image, compute the ratio between the size of a target object and the size of the actual object to determine the scale of the video image; on each frame of radar monitoring image, compute the ratio between the distance of a target object from the origin and the actual distance of the actual object from the monitoring point to determine the scale of the radar monitoring image;
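Step S2 amounts to estimating one scale factor per sensor from a reference of known size or range. A minimal Python sketch of this idea (the function names and the sample values, such as a 4.5 m vehicle spanning 90 pixels, are illustrative assumptions, not part of the patent):

```python
def video_scale(pixel_size: float, actual_size: float) -> float:
    """Pixels per metre on the video frame, from a target of known real size."""
    return pixel_size / actual_size

def radar_scale(image_distance: float, actual_distance: float) -> float:
    """Image units per metre on the radar plot, from a target at known range."""
    return image_distance / actual_distance

# e.g. a 4.5 m car spanning 90 px gives 20 px per metre on the video image,
# and a target 60 m away drawn 30 units from the origin gives 0.5 units/m
s_video = video_scale(90.0, 4.5)   # 20.0
s_radar = radar_scale(30.0, 60.0)  # 0.5
```

Dividing any image-space distance by the corresponding factor then yields the "actual distance" used in steps S3 and S4.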
S3, determining the distance between each target object and the reference line at the same time point: on the video image, calculate the distance L1, L2, L3, … of each target object from the reference line ab, and the distance K1, K2, K3, … of each target object's foot point on the reference line ab from the reference target; on the radar monitoring image, calculate the distance M1, M2, M3, … of each target object from the reference line a'b', and the distance N1, N2, N3, … of each target object's foot point on the reference line a'b' from the reference target; then convert the distances on both images to actual distances using the scales, obtaining the actual distance L'1, L'2, L'3, … of each target object from the reference line ab and the actual distance K'1, K'2, K'3, … of each foot point from the reference target on the video image, and the actual distance M'1, M'2, M'3, … of each target object from the reference line a'b' and the actual distance N'1, N'2, N'3, … of each foot point from the reference target on the radar monitoring image;
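The per-target quantities in step S3, the perpendicular distance L from the reference line and the foot-point offset K along it, follow from elementary planar geometry. A hedged Python sketch, assuming 2-D image coordinates with endpoint a taken as the reference target (the function name and coordinates are illustrative):

```python
import math

def line_metrics(p, a, b):
    """Perpendicular distance from point p to line ab (the L value), and the
    distance from the foot of that perpendicular to endpoint a (the K value)."""
    ax, ay = a
    bx, by = b
    px, py = p
    dx, dy = bx - ax, by - ay
    length = math.hypot(dx, dy)
    # projection of vector ap onto ab gives the foot-point offset K along the line
    k = ((px - ax) * dx + (py - ay) * dy) / length
    # cross-product magnitude divided by |ab| gives the perpendicular distance L
    dist = abs((px - ax) * dy - (py - ay) * dx) / length
    return dist, k

# distances come out in image units; divide by the sensor's scale for metres
L_img, K_img = line_metrics((3.0, 4.0), (0.0, 0.0), (10.0, 0.0))
# L_img = 4.0 (off the line), K_img = 3.0 (along the line from a)
```

The same routine serves both images: ab on the video frame and a'b' on the radar plot, each followed by division by its own scale factor.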
S4, fusing images: compare the actual distance of each target object from the reference line ab and the actual distance of its foot point on ab from the reference target on the video image against the actual distance of each target object from the reference line a'b' and the actual distance of its foot point on a'b' from the reference target on the radar monitoring image. When the actual distance of a target object from the reference line ab on the video image equals the actual distance of a target object from the reference line a'b' on the radar monitoring image, and the actual distances of their foot points from the reference target are also equal, the two are the same target object; the real-time data of that target object monitored by the radar is then fused with its real-time data in the video image, each frame of the video image and the radar monitoring image is compared and analyzed, and finally a video image annotated with speed, moving direction and distance is output.
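Step S4 declares two detections identical when their actual (L', K') coordinates agree. A Python sketch of such matching; the tolerance is an assumption added here, since real measurements from two sensors are never exactly equal, and all identifiers and numeric values are illustrative:

```python
def match_targets(video_targets, radar_targets, tol=0.5):
    """Pair video and radar detections whose actual (L', K') coordinates
    relative to the reference line agree within a tolerance in metres.
    Each target is a tuple (id, L_actual, K_actual)."""
    pairs = []
    used = set()
    for vid, v_l, v_k in video_targets:
        for rid, r_l, r_k in radar_targets:
            if rid in used:
                continue
            if abs(v_l - r_l) <= tol and abs(v_k - r_k) <= tol:
                pairs.append((vid, rid))  # same target seen by both sensors
                used.add(rid)
                break
    return pairs

video = [("car1", 12.0, 3.1), ("car2", 12.0, 7.8)]
radar = [("t7", 12.2, 3.0), ("t9", 11.9, 7.9)]
print(match_targets(video, radar))  # [('car1', 't7'), ('car2', 't9')]
```

Once paired, the radar track supplies speed and range while the video detection supplies the object type and pixel position, which together yield the annotated output frame described above.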
The above description covers only preferred embodiments of the present invention and is not intended to limit the invention; any modifications, equivalents and improvements made within the spirit and principles of the present invention are intended to be included within its scope.

Claims (1)

1. An intelligent fusion method of radar video data, using a video collector and a radar sensor arranged at the same position, characterized in that the method specifically comprises the following steps:
S1, establishing a reference line: within the collection area of the video collector, select a position at a distance X from the video collector and draw a reference line across the whole collection area, placing a reference target at each end of the line; the image of this line as collected by the video collector is the reference line ab of the video image, and the line connecting the positions of the two reference targets as collected by the radar sensor is the reference line a'b' of the radar image;
S2, acquiring a video image and a radar monitoring image: acquire the real-time video image from the video collector and the real-time radar monitoring image from the radar sensor, then select one frame of video image and one frame of radar monitoring image at the same time point; on each frame of video image, compute the ratio between the size of a target object and the size of the actual object to determine the scale of the video image; on each frame of radar monitoring image, compute the ratio between the distance of a target object from the origin and the actual distance of the actual object from the monitoring point to determine the scale of the radar monitoring image;
S3, determining the distance between each target object and the reference line at the same time point: on the video image, calculate the distance L1, L2, L3, … of each target object from the reference line ab, and the distance K1, K2, K3, … of each target object's foot point on the reference line ab from the reference target; on the radar monitoring image, calculate the distance M1, M2, M3, … of each target object from the reference line a'b', and the distance N1, N2, N3, … of each target object's foot point on the reference line a'b' from the reference target; then convert the distances on both images to actual distances using the scales, obtaining the actual distance L'1, L'2, L'3, … of each target object from the reference line ab and the actual distance K'1, K'2, K'3, … of each foot point from the reference target on the video image, and the actual distance M'1, M'2, M'3, … of each target object from the reference line a'b' and the actual distance N'1, N'2, N'3, … of each foot point from the reference target on the radar monitoring image;
S4, fusing images: compare the actual distance of each target object from the reference line ab and the actual distance of its foot point on ab from the reference target on the video image against the actual distance of each target object from the reference line a'b' and the actual distance of its foot point on a'b' from the reference target on the radar monitoring image. When the actual distance of a target object from the reference line ab on the video image equals the actual distance of a target object from the reference line a'b' on the radar monitoring image, and the actual distances of their foot points from the reference target are also equal, the two are the same target object; the real-time data of that target object monitored by the radar is then fused with its real-time data in the video image, each frame of the video image and the radar monitoring image is compared and analyzed, and finally a video image annotated with speed, moving direction and distance is output.
Application CN201911207198.XA, priority date 2019-11-30, filing date 2019-11-30: Intelligent radar video data fusion method (Active; granted as CN110865367B (en))

Priority Applications (1)

CN201911207198.XA (granted as CN110865367B), priority date 2019-11-30, filing date 2019-11-30: Intelligent radar video data fusion method


Publications (2)

CN110865367A, published 2020-03-06
CN110865367B, published 2023-05-05

Family

ID=69658310

Family Applications (1)

CN201911207198.XA (Active, granted as CN110865367B), priority date 2019-11-30, filing date 2019-11-30: Intelligent radar video data fusion method

Country Status (1)

Country Link
CN (1) CN110865367B (en)


Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101825442A (en) * 2010-04-30 2010-09-08 北京理工大学 Mobile platform-based color laser point cloud imaging system
JP2015206797A (en) * 2012-11-22 2015-11-19 株式会社デンソー Target detection device
CN106408940A (en) * 2016-11-02 2017-02-15 南京慧尔视智能科技有限公司 Microwave and video data fusion-based traffic detection method and device
CN109490890A (en) * 2018-11-29 2019-03-19 重庆邮电大学 A kind of millimetre-wave radar towards intelligent vehicle and monocular camera information fusion method
US20190120955A1 (en) * 2017-10-20 2019-04-25 Texas Instruments Incorporated System and method for camera radar fusion
CN109886308A (en) * 2019-01-25 2019-06-14 中国汽车技术研究中心有限公司 One kind being based on the other dual sensor data fusion method of target level and device
CN109901156A (en) * 2019-01-25 2019-06-18 中国汽车技术研究中心有限公司 A kind of subject fusion method and apparatus of vehicle millimetre-wave radar and camera
CN109948523A (en) * 2019-03-18 2019-06-28 中国汽车工程研究院股份有限公司 A kind of object recognition methods and its application based on video Yu millimetre-wave radar data fusion


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111753757A (en) * 2020-06-28 2020-10-09 浙江大华技术股份有限公司 Image recognition processing method and device
CN111753757B (en) * 2020-06-28 2021-06-18 浙江大华技术股份有限公司 Image recognition processing method and device
WO2023071909A1 (en) * 2021-10-28 2023-05-04 华为技术有限公司 Method and device for measuring flow velocity of fluid surface, and storage medium

Also Published As

CN110865367B (en), published 2023-05-05

Similar Documents

Publication Publication Date Title
CN110532896B (en) Road vehicle detection method based on fusion of road side millimeter wave radar and machine vision
CN112558023B (en) Calibration method and device of sensor
CN112946628A (en) Road running state detection method and system based on radar and video fusion
Lan et al. Vehicle speed measurement based on gray constraint optical flow algorithm
Roy et al. Automated traffic surveillance using fusion of Doppler radar and video information
CN106646474A (en) Unstructured road accidented barrier detection apparatus
CN111045000A (en) Monitoring system and method
CN108692701B (en) Mobile robot multi-sensor fusion positioning method based on particle filter
CN115943439A (en) Multi-target vehicle detection and re-identification method based on radar vision fusion
CN113850102A (en) Vehicle-mounted vision detection method and system based on millimeter wave radar assistance
CN110865367B (en) Intelligent radar video data fusion method
CN115808170B (en) Indoor real-time positioning method integrating Bluetooth and video analysis
CN116310679A (en) Multi-sensor fusion target detection method, system, medium, equipment and terminal
CN116699602A (en) Target detection system and method based on millimeter wave radar and camera fusion
CN114814823A (en) Rail vehicle detection system and method based on integration of millimeter wave radar and camera
CN115690713A (en) Binocular camera-based radar-vision fusion event detection method
Scharf et al. A semi-automated multi-sensor data labeling process for deep learning in automotive radar environment
CN111177297B (en) Dynamic target speed calculation optimization method based on video and GIS
CN117310627A (en) Combined calibration method applied to vehicle-road collaborative road side sensing system
CN115166722B (en) Non-blind-area single-rod multi-sensor detection device for road side unit and control method
CN111323771A (en) Fixed-distance-based millimeter wave radar and video data fusion method
CN110865368A (en) Radar video data fusion method based on artificial intelligence
CN115471526A (en) Automatic driving target detection and tracking method based on multi-source heterogeneous information fusion
Jiang et al. Real-time container truck speed measurement at container port gates based on the binocular vision technology
CN110375654A (en) The monitoring method of real-time detection bridge three-D displacement

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant