CN118711368B - A monitoring method and system for multi-view radar vision fusion sensing data - Google Patents

A monitoring method and system for multi-view radar vision fusion sensing data

Info

Publication number
CN118711368B
CN118711368B
Authority
CN
China
Prior art keywords
radar
target vehicle
data
vehicle
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202410866009.4A
Other languages
Chinese (zh)
Other versions
CN118711368A (en)
Inventor
闫昊
王永飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Smart Intercommunication Technology Co ltd
Original Assignee
Smart Intercommunication Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Smart Intercommunication Technology Co ltd filed Critical Smart Intercommunication Technology Co ltd
Priority to CN202410866009.4A
Publication of CN118711368A
Application granted
Publication of CN118711368B

Classifications

    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/01 Detecting movement of traffic to be counted or controlled
    • G08G1/0104 Measuring and analyzing of parameters relative to traffic conditions
    • G08G1/0137 Measuring and analyzing of parameters relative to traffic conditions for specific applications
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/213 Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/54 Extraction of image or video features relating to texture
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/56 Extraction of image or video features relating to colour
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V20/54 Surveillance or monitoring of activities, e.g. for recognising suspicious objects of traffic, e.g. cars on the road, trains or boats
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/01 Detecting movement of traffic to be counted or controlled
    • G08G1/017 Detecting movement of traffic to be counted or controlled identifying vehicles
    • G08G1/0175 Detecting movement of traffic to be counted or controlled identifying vehicles by photographing vehicles, e.g. when violating traffic rules
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/01 Detecting movement of traffic to be counted or controlled
    • G08G1/042 Detecting movement of traffic to be counted or controlled using inductive or magnetic detectors
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/01 Detecting movement of traffic to be counted or controlled
    • G08G1/048 Detecting movement of traffic to be counted or controlled with provision for compensation of environmental or other condition, e.g. snow, vehicle stopped at detector
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/01 Detecting movement of traffic to be counted or controlled
    • G08G1/056 Detecting movement of traffic to be counted or controlled with provision for distinguishing direction of travel
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30241 Trajectory
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/08 Detecting or categorising vehicles

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Traffic Control Systems (AREA)
  • Radar Systems Or Details Thereof (AREA)

Abstract


This application discloses a monitoring method and system for multi-view radar-vision fusion sensing data, relating to the technical field of traffic condition monitoring, including: S1, determining the information of the intersection to be measured, the intersection information including: intersection name, number of intersection entrances, intersection latitude and longitude, direction and number of vehicles at each entrance and exit, and equipment information installed at the intersection; based on the intersection information, determining the raw radar data, raw radar-video data, multi-view video data, and raw electronic-police video data corresponding to the vehicle information at the current intersection; S2, performing initial processing on the acquired raw radar data and raw radar-video data to obtain the processed first radar processing data, determining the target vehicle and the target vehicle's vehicle ID and timestamp, and extracting the first features of the target vehicle regarding speed, distance, and direction; enabling accurate vehicle positioning in complex scenarios.

Description

Method and system for monitoring multi-view radar-vision fusion sensing data
Technical Field
The invention relates to the technical field of traffic road condition supervision, in particular to a method and a system for monitoring multi-view radar-vision fusion perception data.
Background
The multi-view radar-vision all-in-one machine is a traffic sensor that integrates several cameras, millimeter-wave radars, and a high-performance processor into a single unit and performs fusion computation over radar and video data. It can output the positions of multiple targets and road events simultaneously, generally fusing the recognition results of radar and video before output. In holographic-intersection digital-twin applications, radar and video data often need to be fused several times in order to reconstruct the traffic state of the whole intersection. Data loss is unavoidable during fusion, and because many fusion links are involved, locating the source of a problem is difficult. A method and system for monitoring multi-view radar-vision fusion data therefore aims to locate precisely where a problem occurs and give developers a direction for optimizing the fusion algorithm.
Disclosure of Invention
By providing the monitoring method and system for multi-view radar-vision fusion perception data, the embodiments of the application address the prior-art problem that a tracked vehicle is easily lost in complex regions of a scene, and achieve accurate vehicle tracking and positioning.
The embodiment of the application provides a method for monitoring multi-view radar-vision fusion perception data, comprising the following steps:
S1, determining information of the intersection to be detected, wherein the intersection information comprises the intersection name, the number of intersection entrances, the intersection longitude and latitude, the direction and number of vehicles at each entrance and exit, and information on the equipment installed at the intersection;
S2, performing primary processing on the acquired raw radar data and raw radar-video data to obtain processed first radar processing data, determining a target vehicle and the target vehicle's vehicle ID and timestamp, and extracting the target vehicle's first features of speed, distance, and direction;
S3, when the position of the target vehicle changes, tracking the target vehicle using the multi-view video data and the first radar processing data to obtain second radar processing data, identifying the type and speed of the target vehicle during tracking, and acquiring second features corresponding to the vehicle's behavior information;
and S4, performing fusion processing on the first radar processing data, the second radar processing data, and the raw electronic-police video data, associating the target vehicles, and determining the fused target vehicle based on the first features and the second features.
Step S2 further comprises the following implementation:
S21, detecting the scene with a radar, and determining the coordinate values of the target vehicle at the current intersection in the radar coordinate system;
S22, photographing the scene with a camera, obtaining an image of the target vehicle, and determining the coordinate values of the target vehicle in the image coordinate system;
S23, fusing the obtained radar-coordinate-system values with the image-coordinate-system values, and determining the position and angle of the target vehicle;
S24, determining the target vehicle's speed and its distances to the intersection and to surrounding vehicles from the change of its position across different timestamps, and predicting the target vehicle's direction from the current intersection information and its distance to the intersection.
Step S3 further includes the following implementation:
S31, acquiring the sampling rate and field of view of the multi-view cameras, and determining that a vehicle in the multi-view cameras and a target vehicle in the raw radar-video data are the same vehicle;
S32, obtaining the visual features of the target vehicle, the visual features comprising the contour, color, and texture of the target vehicle;
S33, dividing the first radar processing data into different regions, and extracting the shape, size, and speed of the target vehicle in each region as the first radar features;
and S34, performing feature matching on the target vehicle according to the first radar features and the visual features, identifying first radar features and visual features that remain highly similar at different times and under different ambient light, and outputting them as the second features.
In step S4, the fusion processing of the first radar processing data, the second radar processing data, and the raw electronic-police video data includes:
S41, obtaining the corresponding license plate information from the raw electronic-police data;
S42, binding the license plate information to the identified target vehicle, and determining whether the bound target vehicle's movement track is the same as the movement track of the license plate information;
S43, adding the features corresponding to the license plate information to the first features and the second features as the final features of the target vehicle.
The system for monitoring multi-view radar-vision fusion perception data comprises an information acquisition module, a first processing module, a second processing module, and a final processing module. The information acquisition module is used for acquiring intersection information and the raw radar data, raw radar-video data, multi-view video data, and raw electronic-police video data corresponding to the intersection information;
the first processing module is used for performing primary processing on the raw radar data and raw radar-video data to obtain the first features corresponding to the target vehicle, and outputting the primarily processed data as the first radar processing data;
the second processing module is used for tracking the target vehicle using the multi-view video data and the first radar processing data, and determining the second features corresponding to the target vehicle;
and the final processing module is used for performing fusion processing on the first radar processing data, the second radar processing data, and the raw electronic-police video data to determine the target vehicle tracked by the fused data.
One or more technical solutions provided in the embodiments of the present application have at least the following technical effects or advantages:
By integrating the raw radar data, raw radar-video data, multi-view video data, and raw electronic-police video data, the combination of multi-modal data lets the monitoring system maintain high performance under varied weather and lighting conditions.
Through primary and secondary processing, the system can accurately track the target vehicle, extract key features such as speed, distance, and direction, and also recognize the vehicle's type and behavior information.
Through multi-step verification and feature matching, the system ensures accurate identification of the target vehicle at different times and under different ambient light, improving robustness.
Binding license plate information to the identified target vehicle further strengthens tracking accuracy and supports subsequent data analysis and traffic management.
Drawings
FIG. 1 is a flowchart of steps S1-S4 of the method for monitoring multi-view radar-vision fusion perception data according to the present invention;
FIG. 2 is a flowchart of steps S21-S24 of the method for monitoring multi-view radar-vision fusion perception data according to the present invention;
FIG. 3 is a flowchart of steps S31-S34 of the method for monitoring multi-view radar-vision fusion perception data according to the present invention;
FIG. 4 is a flowchart of steps S41-S43 of the method for monitoring multi-view radar-vision fusion perception data according to the present invention;
FIG. 5 is a system block diagram of the system for monitoring multi-view radar-vision fusion perception data according to the present invention.
Detailed Description
To make the application readily understood, a more complete description is given below with reference to the accompanying drawings, which illustrate specific embodiments. The application may, however, be embodied in many different forms and is not limited to the embodiments described herein; these embodiments are provided so that the disclosure will be thorough.
It should be noted that the terms "vertical", "horizontal", "upper", "lower", "left", "right", and the like are used herein for illustrative purposes only and do not represent the only embodiment.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terms used in this description are for describing particular embodiments only and are not intended to limit the invention. The term "and/or" as used herein includes any and all combinations of one or more of the associated listed items.
If the multi-view all-in-one machine directly outputs recognized position results without checking them, then when recognition quality is poor it can output implausible positions, so the back-end user sees stutter, jumps, and similar problems after reconstructing the positions. For example, in a scene where the main recognized target is a vehicle, the machine easily loses the target during tracking and recognition because of vehicle speed changes and similar causes, leading to deviations in the recognition results. Because many fusion links are involved, locating the problem is difficult.
To reduce the difficulty of problem positioning, the invention processes the currently monitored data in five parts: intersection information configuration, data detection position, monitoring task configuration, scene threshold configuration, and data monitoring with problem positioning.
1. Intersection information configuration: configuring the intersection name, the number of intersection entrances, the intersection longitude and latitude, the direction and number of vehicles at each entrance and exit, and the equipment installed at the intersection.
2. A data detection location is determined.
In the holographic-intersection application, the perception data of the multi-view all-in-one machine flows in turn through five stages: the perception device, the edge computing unit, device data access, the big data platform, and the holographic intersection platform. The present application mainly concerns data monitoring at three links: the perception device, the edge computing unit, and device data access.
The perception device completes radar data processing, near-far fusion of video data, and recognition of license plates and other vehicle information. The edge computing unit completes radar-video data fusion for a single entrance and for the whole intersection. The device-data-access link completes access to the device data and conversion of the picture format. The system places buried (instrumentation) points before and after each position where a data ID changes, and records at each buried point the vehicle ID, timestamp, original vehicle ID, license plate number, and vehicle type; the vehicle ID, timestamp, and original vehicle ID are mandatory items.
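For illustration only, the buried-point record described above can be modeled as a small data structure. The Python field names below are assumptions, since the text specifies the fields but not a schema:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class BuriedPointRecord:
    """One record captured at a buried (instrumentation) point.

    The vehicle ID, timestamp, and original vehicle ID are mandatory;
    license plate and vehicle type may be absent at some links.
    Field names are illustrative, not taken from the patent.
    """
    vehicle_id: str            # ID after this fusion link
    timestamp_ms: int          # capture time, milliseconds
    original_vehicle_id: str   # ID before this fusion link
    license_plate: Optional[str] = None
    vehicle_type: Optional[str] = None

# Comparing records across buried points reveals where an ID changed or vanished.
```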
3. Monitoring task configuration.
The monitoring task configuration mainly completes configuration of the data monitoring position and the data extraction time. The position function selects where a data monitoring task observes the data, and the extraction-time function configures the data extraction duration and the extraction start time.
4. Scene threshold configuration.
In practical application, the installation position of the multi-view all-in-one machine must often be decided according to intersection channelization, pole placement, and similar conditions, and different installation positions directly affect data collection precision and fusion quality. The scene threshold configuration mainly completes the installation-scene configuration of the multi-view all-in-one machine.
5. Data monitoring and problem positioning.
Data monitoring mainly completes data quality monitoring at each buried point and supports monitoring both the whole intersection's data and each entrance direction. The amount and details of the driving data observed at each buried point are counted, and problems are located through comparative analysis of the buried-point data.
Example 1
As shown in fig. 1, the method for monitoring multi-view radar-vision fusion perception data comprises the following steps:
S1, determining information of the intersection to be detected, wherein the intersection information comprises the intersection name, the number of intersection entrances, the intersection longitude and latitude, the direction and number of vehicles at each entrance and exit, and information on the equipment installed at the intersection;
S2, performing primary processing on the acquired raw radar data and raw radar-video data to obtain processed first radar processing data, determining a target vehicle and the target vehicle's vehicle ID and timestamp, and extracting the target vehicle's first features of speed, distance, and direction;
S3, when the position of the target vehicle changes, tracking the target vehicle using the multi-view video data and the first radar processing data to obtain second radar processing data, identifying the type and speed of the target vehicle during tracking, and acquiring second features corresponding to the vehicle's behavior information;
and S4, performing fusion processing on the first radar processing data, the second radar processing data, and the raw electronic-police video data, associating the target vehicles, and determining the fused target vehicle based on the first features and the second features.
For the raw radar data, a radar receives echo signals from nearby vehicles, including the time, amplitude, phase, and frequency of the echoes, converts this information into a viewable radar image, treats each point on the radar image as a recognized target vehicle, and assigns a vehicle ID and a timestamp tied to its position on the image.
For the raw radar-video data, video equipment determines the type and speed of the corresponding target vehicle on the radar image. The multi-view video data uses several cameras to associate the corresponding target vehicles across multiple images and to recognize changes in the targets' direction and speed, thereby inferring driving intent, which facilitates early warning at the intersection and reduces traffic accidents.
The raw electronic-police video data refers to raw video captured by an electronic police system (usually installed at traffic intersections). These videos are mainly used for traffic monitoring, violation detection, and recording, providing real-time pictures and after-the-fact evidence of road traffic.
First, the raw radar data and raw radar-video data are used to identify vehicles; combining the radar's high-precision ranging and speed measurement with the visual information in the video data yields more comprehensive and accurate target detection and recognition results.
Example 2
To improve the processing of the raw radar data, when the raw radar data and raw radar-video data are processed for the first time, as shown in fig. 2, step S2 further includes:
S21, detecting the scene with a radar, and determining the coordinate values of the target vehicle at the current intersection in the radar coordinate system;
S22, photographing the scene with a camera, obtaining an image of the target vehicle, and determining the coordinate values of the target vehicle in the image coordinate system;
S23, fusing the obtained radar-coordinate-system values with the image-coordinate-system values, and determining the position and angle of the target vehicle.
When fusing the radar coordinate system with the image coordinate system, the method further includes determining the timestamps and intersection information corresponding to the radar and the camera, confirming that the data containing the target vehicle belong to the same time and space.
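The coordinate fusion in S23 can be realized in several ways; one common approach, sketched below under the assumption of a planar road surface, calibrates a homography from a few radar-to-pixel correspondences and projects radar coordinates into the image. The calibration points and values are illustrative, not taken from the patent:

```python
import numpy as np
import cv2

# Calibration: >= 4 known correspondences between radar ground-plane
# coordinates (metres) and image pixels for the same physical points.
radar_pts = np.float32([[5, 0], [5, 10], [15, 0], [15, 10]])    # assumed values
image_pts = np.float32([[320, 700], [330, 420], [900, 690], [880, 430]])
H, _ = cv2.findHomography(radar_pts, image_pts)

def radar_to_image(xy_radar: np.ndarray) -> np.ndarray:
    """Project N radar detections of shape (N, 2) into pixel coordinates (N, 2)."""
    pts = xy_radar.reshape(-1, 1, 2).astype(np.float32)
    return cv2.perspectiveTransform(pts, H).reshape(-1, 2)

# A radar detection at (10 m ahead, 3 m left) lands near the matching vehicle
# in the image, letting the two coordinate values for the target vehicle be fused.
print(radar_to_image(np.array([[10.0, 3.0]])))
```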
From the acquired image of the target vehicle, the center point of each target vehicle is obtained from its position in the image and the distances to the vehicle's edges; the angle of the vehicle edges in the image coordinate system gives the vehicle's current orientation, and the predicted direction of the target vehicle is determined from the current intersection information and the vehicle's relative position within the intersection.
S24, determining the target vehicle's speed and its distances to the intersection and to surrounding vehicles from the change of its position across different timestamps, and predicting the target vehicle's direction from the current intersection information and its distance to the intersection.
Fusing the radar and radar-video data allows the position and angle of the target vehicle at the intersection to be determined accurately, overcoming the errors and limitations of a single sensor; the data processed at this point becomes the first radar processing data of the primary processing.
By analyzing the position change of the target vehicle under different time stamps, the real-time speed of the vehicle can be calculated. Meanwhile, by measuring the distance between the target vehicle and the intersection and surrounding vehicles, the relative position and potential risk between the vehicles can be estimated.
Preferably, when acquiring the target vehicle's position changes across different timestamps, a movement track of the target vehicle is generated; the point with the largest gradient on the track is selected, and the direction at that point is taken as the predicted direction of the target vehicle.
By combining the current intersection information with the target vehicle's distance within the intersection, its future direction of travel can be predicted. This is critical for traffic management and autonomous-driving systems, helping plan paths ahead of time and avoid potential collisions.
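A minimal sketch of the preferred direction estimate described above: the track is differentiated against its timestamps, the sample with the largest position gradient is selected, and the heading there is reported. The sample data is assumed:

```python
import numpy as np

def predict_direction(track_xy: np.ndarray, timestamps_s: np.ndarray) -> float:
    """Return the heading (radians) at the point of largest position gradient.

    track_xy: (N, 2) positions in metres; timestamps_s: (N,) seconds.
    """
    velocity = np.gradient(track_xy, timestamps_s, axis=0)   # (N, 2) in m/s
    speed = np.linalg.norm(velocity, axis=1)
    i = int(np.argmax(speed))                                # largest-gradient point
    return float(np.arctan2(velocity[i, 1], velocity[i, 0]))

# Example: a vehicle accelerating through a left turn (assumed data).
t = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
xy = np.array([[0, 0], [4, 0.2], [8, 1.0], [11, 3.0], [13, 6.0]], dtype=float)
print(predict_direction(xy, t))
```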
Example 3
If image acquisition is in progress, the data collected by the multi-view cameras is combined with the radar-video data to determine how the position and relative state of the tracked target vehicle change in the normally captured images.
Specifically, as shown in fig. 3, step S3 further includes the following implementation manners:
S31, acquiring the sampling rate and field of view of the multi-view cameras, and determining that a vehicle in the multi-view cameras and a target vehicle in the raw radar-video data are the same vehicle.
S32, acquiring the visual features of the target vehicle, the visual features comprising the contour, color, and texture of the target vehicle.
In this step, contour extraction uses an edge detection algorithm (e.g., Canny edge detection) to identify the vehicle contour; color features are extracted by color-space conversion (e.g., RGB to HSV) and color-histogram statistics; and texture features are extracted with the gray-level co-occurrence matrix, Local Binary Patterns (LBP), and similar descriptors.
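A sketch of the S32 extraction using the algorithms the text names (Canny edges, HSV histograms, LBP); the thresholds, bin counts, and LBP parameters are assumptions:

```python
import cv2
import numpy as np
from skimage.feature import local_binary_pattern

def visual_features(bgr_patch: np.ndarray) -> dict:
    """Contour, color, and texture features for one vehicle image patch."""
    gray = cv2.cvtColor(bgr_patch, cv2.COLOR_BGR2GRAY)

    # Contour: Canny edges, keep the largest connected contour.
    edges = cv2.Canny(gray, 100, 200)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    contour = max(contours, key=cv2.contourArea) if contours else None

    # Color: normalized HSV histogram over hue and saturation.
    hsv = cv2.cvtColor(bgr_patch, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1], None, [30, 32], [0, 180, 0, 256])
    hist = cv2.normalize(hist, hist).flatten()

    # Texture: histogram of uniform LBP codes.
    lbp = local_binary_pattern(gray, P=8, R=1.0, method="uniform")
    texture, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)

    return {"contour": contour, "color": hist, "texture": texture}
```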
S33, dividing the first radar processing data into different regions, and extracting the shape, size, and speed of the target vehicle in each region as the first radar features.
In this step, the radar point cloud is divided into different targets with a clustering algorithm (such as DBSCAN), and characteristics such as each target's shape, size, and speed are extracted. After the first radar features are obtained, the change in the target vehicle's features and its movement track while moving are determined, which makes it easier to align the target vehicle between the radar and video positions.
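A sketch of the S33 clustering with DBSCAN, as the text suggests; the detection layout (position plus velocity per point) and the eps/min_samples values are assumptions:

```python
import numpy as np
from sklearn.cluster import DBSCAN

def radar_clusters(points: np.ndarray) -> list[dict]:
    """Cluster radar detections and summarize each cluster as one target.

    points: (N, 5) array of [x, y, vx, vy, rcs] per detection (assumed layout).
    """
    labels = DBSCAN(eps=1.5, min_samples=3).fit_predict(points[:, :2])
    targets = []
    for lab in set(labels) - {-1}:                    # -1 marks noise points
        cluster = points[labels == lab]
        extent = cluster[:, :2].max(axis=0) - cluster[:, :2].min(axis=0)
        targets.append({
            "centroid": cluster[:, :2].mean(axis=0),  # position feature
            "size": extent,                           # shape/size feature
            "speed": float(np.linalg.norm(cluster[:, 2:4].mean(axis=0))),
        })
    return targets
```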
And S34, performing feature matching on the target vehicle according to the first radar features and the visual features, identifying first radar features and visual features that remain highly similar at different times and under different ambient light, and outputting them as the second features, so as to verify the accuracy of the matched target vehicle.
Because the multi-view video data collected here comes from multiple cameras, each camera has its own coordinate system, the collected data readily contains identical or similar targets, and each viewing angle carries some detail information of its own; therefore, during this comparison the current targets and their details must be determined to keep control of the collected data.
Matching the target vehicle in step S34 further comprises comparing the shape and size of the target vehicle in the first radar features with the contour features in the visual features to obtain a vehicle to be identified;
and judging whether the vehicle to be identified is the target vehicle based on the speed features in the first radar features and the movement track of the vehicle to be identified in the visual features.
Preferably, in a vehicle tracking scene, because the vehicle's moving speed and direction change, the contour and texture extracted at different times also change. To recognize the feature changes a vehicle may exhibit while moving, determining whether the vehicle to be identified is the target vehicle further includes determining the similarity between the time sequences corresponding to the first radar features and the visual features.
The similarity between the time sequences corresponding to the first radar features and the visual features is computed as follows (a sketch is given after these steps):
extracting a first feature sequence that changes over time from the visual features, and a second feature sequence that changes over time from the first radar features;
calculating the distances of all point pairs between the first and second feature sequences to generate a distance matrix;
finding the path through the distance matrix for which the sum of the distances of all point pairs on the path is smallest, outputting it as the optimal alignment path, and obtaining the similarity between the time sequences corresponding to the first radar features and the visual features from the output optimal alignment path.
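The three steps above describe a dynamic-time-warping comparison. A minimal sketch follows; converting the normalized path cost into a similarity score via exp(-cost) is a choice made here, not specified by the text:

```python
import numpy as np

def dtw_similarity(seq_a: np.ndarray, seq_b: np.ndarray) -> float:
    """Dynamic-time-warping similarity between two feature sequences.

    seq_a: (N, D), seq_b: (M, D). Returns a score in (0, 1],
    higher meaning more similar.
    """
    # Distance matrix over all point pairs (step 2 above).
    dist = np.linalg.norm(seq_a[:, None, :] - seq_b[None, :, :], axis=2)

    # Accumulated cost of the cheapest path through the matrix (step 3).
    n, m = dist.shape
    acc = np.full((n + 1, m + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            acc[i, j] = dist[i - 1, j - 1] + min(
                acc[i - 1, j], acc[i, j - 1], acc[i - 1, j - 1])

    cost = acc[n, m] / (n + m)      # length-normalized optimal path cost
    return float(np.exp(-cost))
```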
Preferably, in one implementation of the application, when judging whether the vehicle to be identified is the target vehicle, scene thresholds under different light sources are determined so that the color feature values in the extracted first and second feature sequences both exceed the scene threshold.
The scene threshold is acquired as follows:
An initial threshold T_init is set.
The monitored illumination intensity is denoted L, and a mapping function f(L) is used to adjust the threshold, expressed as:
T_adj = T_init + k × f(L), where k is an adjustment coefficient controlling how strongly illumination influences the threshold, and T_adj is the scene threshold for the current environment.
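A sketch of this adjustment; the text leaves f(L) open, so a logarithmic mapping centred on an assumed reference illuminance is used here for illustration:

```python
import math

def scene_threshold(t_init: float, lux: float, k: float = 0.05) -> float:
    """Scene threshold T_adj = T_init + k * f(L).

    f(L) is not fixed by the text; a log mapping centred on ~500 lux
    (an assumed typical daylight level at an intersection) is used here.
    """
    f_l = math.log(lux / 500.0) if lux > 0 else 0.0
    return t_init + k * f_l

# Dim evening light lowers the threshold; bright noon light raises it.
print(scene_threshold(0.6, lux=50))    # evening
print(scene_threshold(0.6, lux=2000))  # noon
```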
The scene threshold is set according to color, shape, size, or other features that are effective for tracking the object. When the light source in the environment changes, for example as light shifts from morning to evening, a previously set threshold may no longer apply: a threshold taken in the morning cannot track the target accurately later on, because the target's color characteristics change under different illumination. Setting a scene threshold therefore reduces the errors that lighting introduces when tracking a target vehicle.
When the feature value of the similarity between the time sequences corresponding to the first radar features and the visual features is greater than the scene threshold, the current vehicle under test is the target vehicle, and the data corresponding to the first radar features and the visual features is output as the second radar processing data.
Example 4
In this embodiment, raw video data captured by the electronic police system is acquired, the license plate information of the current target vehicle is recognized from it, and the obtained license plate information is bound to the target vehicle identified in the first and second radar processing data, yielding more comprehensive information on the target vehicle while it moves.
Specifically, as shown in fig. 4, the fusion processing of the first radar processing data, the second radar processing data, and the raw electronic-police video data in step S4 includes:
S41, obtaining the corresponding license plate information from the raw electronic-police data;
S42, binding the license plate information to the identified target vehicle, and determining whether the bound target vehicle's movement track is the same as the movement track of the license plate information;
S43, adding the features corresponding to the license plate information to the first features and the second features as the final features of the target vehicle.
This step further adds features related to the license plate information on top of the already identified and verified features of the target vehicle. In this way the target vehicle's feature set is enriched, improving the accuracy and stability of vehicle identification. The combined features can also support more complex scene analysis, such as traffic-jam prediction and abnormal-behavior detection.
The target vehicle is then tracked with the obtained final features to determine its moving direction and track across different timestamps.
Preferably, to prevent the target vehicle from being lost during fusion processing, the processing of the target vehicle in step S4 further includes:
Determining the vehicle ID of the target vehicle; when the vehicle ID disappears, filling the currently acquired image once and placing a buried data point; if the vehicle ID is still absent after the first filling, performing a second filling; and if the vehicle ID has disappeared after the second filling, deleting the image and re-acquiring it;
after the vehicle ID is acquired in the above step, determining the movement state of the corresponding target vehicle from the vehicle ID and displaying the target vehicle's movement track.
In this step, each filling inserts one frame; the filled image is recognized to determine whether the current vehicle is present in it, and the second filling works in the same way as the first.
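A sketch of the fill-twice-then-reacquire recovery described above; the three callbacks are hypothetical hooks, since the text specifies the behavior but not an interface:

```python
from typing import Callable, Optional
import numpy as np

def recover_vehicle_id(grab_frame: Callable[[], np.ndarray],
                       detect_id: Callable[[np.ndarray], Optional[str]],
                       log_buried_point: Callable[[str], None]) -> Optional[str]:
    """Try to recover a lost vehicle ID with up to two one-frame fillings."""
    for attempt in ("first filling", "second filling"):
        frame = grab_frame()                        # fill one frame
        log_buried_point(f"vehicle ID lost, {attempt}")  # bury a data point
        vid = detect_id(frame)                      # re-run recognition
        if vid is not None:
            return vid                              # ID recovered
    return None   # caller deletes the image and re-acquires
```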
A system for monitoring multi-view radar-vision fusion perception data, as shown in fig. 5, comprising:
the information acquisition module, used for acquiring intersection information and the raw radar data, raw radar-video data, multi-view video data, and raw electronic-police video data corresponding to the intersection information;
the first processing module, used for performing primary processing on the raw radar data and raw radar-video data to obtain the first features corresponding to the target vehicle, and outputting the primarily processed data as the first radar processing data;
the second processing module, used for tracking the target vehicle using the multi-view video data and the first radar processing data, and determining the second features corresponding to the target vehicle;
and the final processing module, used for performing fusion processing on the first radar processing data, the second radar processing data, and the raw electronic-police video data to determine the target vehicle tracked by the fused data.
The above description covers only preferred embodiments of the present invention and is not intended to limit it; those skilled in the art may make various modifications and variations. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present invention shall fall within its protection scope.

Claims (9)

1. A monitoring method for multi-view radar-vision fusion perception data, characterized by comprising: S1, determining information of the intersection to be detected, wherein the intersection information comprises the intersection name, the number of intersection entrances, the intersection longitude and latitude, the direction and number of vehicles at each entrance and exit, and information on the equipment installed at the intersection;
S2, performing primary processing on the acquired raw radar data and raw radar-video data to obtain processed first radar processing data, determining a target vehicle and the target vehicle's vehicle ID and timestamp, and extracting the target vehicle's first features of speed, distance, and direction;
S3, when the position of the target vehicle changes, tracking the target vehicle using the multi-view video data and the first radar processing data to obtain second radar processing data, identifying the type and speed of the target vehicle during tracking, and acquiring second features corresponding to the vehicle behavior information, wherein step S3 further comprises:
S31, acquiring the sampling rate and field of view of the multi-view cameras, and determining that a vehicle in the multi-view cameras and a target vehicle in the raw radar-video data are the same vehicle;
S32, obtaining the visual features of the target vehicle, the visual features comprising the contour, color, and texture of the target vehicle;
S33, dividing the first radar processing data into different regions, and extracting the shape, size, and speed of the target vehicle in each region as the first radar features;
S34, performing feature matching on the target vehicle according to the first radar features and the visual features, identifying first radar features and visual features that remain highly similar at different times and under different ambient light, and outputting them as the second features;
and S4, performing fusion processing on the first radar processing data, the second radar processing data, and the raw electronic-police video data, associating the target vehicles, and determining the fused target vehicle based on the first features and the second features.
2. The method for monitoring multi-view radar-vision fusion perception data according to claim 1, wherein step S2 further comprises:
S21, detecting the scene with a radar, and determining the coordinate values of the target vehicle at the current intersection in the radar coordinate system;
S22, photographing the scene with a camera, obtaining an image of the target vehicle, and determining the coordinate values of the target vehicle in the image coordinate system;
S23, fusing the obtained radar-coordinate-system values with the image-coordinate-system values, and determining the position and angle of the target vehicle;
S24, determining the target vehicle's speed and its distances to the intersection and to surrounding vehicles from the change of its position across different timestamps, and predicting the target vehicle's direction from the current intersection information and its distance to the intersection.
3. The method for monitoring multi-view radar-vision fusion perception data according to claim 1, wherein matching the target vehicle in step S34 further comprises comparing the shape and size of the target vehicle in the first radar features with the contour features in the visual features to obtain a vehicle to be identified;
and judging whether the vehicle to be identified is the target vehicle based on the speed features in the first radar features and the movement track of the vehicle to be identified in the visual features.
4. The method for monitoring multi-view radar-vision fusion perception data according to claim 3, wherein determining whether the vehicle to be identified is the target vehicle further comprises determining the similarity between the time sequences corresponding to the first radar features and the visual features;
the similarity between the time sequences corresponding to the first radar features and the visual features is computed as follows:
extracting a first feature sequence that changes over time from the visual features, and a second feature sequence that changes over time from the first radar features;
calculating the distances of all point pairs between the first and second feature sequences to generate a distance matrix;
finding the path through the distance matrix for which the sum of the distances of all point pairs on the path is smallest, outputting it as the optimal alignment path, and obtaining the similarity between the time sequences corresponding to the first radar features and the visual features from the output optimal alignment path.
5. The method for monitoring multi-view radar-vision fusion perception data according to claim 3, wherein judging whether the vehicle to be identified is the target vehicle further comprises determining scene thresholds under different light sources, so that the color feature values in the extracted first and second feature sequences are both greater than the scene threshold.
6. The method for monitoring multi-view radar-vision fusion perception data according to claim 1, wherein the fusion processing of the first radar processing data, the second radar processing data, and the raw electronic-police video data in step S4 comprises:
S41, obtaining the corresponding license plate information from the raw electronic-police data;
S42, binding the license plate information to the identified target vehicle, and determining whether the bound target vehicle's movement track is the same as the movement track of the license plate information;
S43, adding the features corresponding to the license plate information to the first features and the second features as the final features of the target vehicle.
7. The method for monitoring multi-view radar-vision fusion perception data according to claim 1, wherein step S4 further comprises:
determining the vehicle ID of the target vehicle; when the vehicle ID disappears, filling the currently acquired image once and placing a buried data point; if the vehicle ID is still absent after the first filling, performing a second filling; and if the vehicle ID has disappeared after the second filling, deleting the image and re-acquiring it;
after the vehicle ID is acquired in the above step, determining the movement state of the corresponding target vehicle from the vehicle ID and displaying the target vehicle's movement track.
8. The method for monitoring multi-view radar-vision fusion perception data according to claim 5, wherein the scene threshold is obtained as follows:
setting an initial threshold T_init;
denoting the monitored illumination intensity as L and adjusting the threshold with a mapping function f(L), expressed as:
T_adj = T_init + k × f(L), where k is an adjustment coefficient controlling how strongly illumination influences the threshold, and T_adj is the scene threshold for the current environment.
9. A monitoring system for multi-view radar-vision fusion perception data, characterized by comprising an information acquisition module, a first processing module, a second processing module, and a final processing module, wherein the information acquisition module is used for acquiring intersection information and the raw radar data, raw radar-video data, multi-view video data, and raw electronic-police video data corresponding to the intersection information;
the first processing module is used for performing primary processing on the raw radar data and raw radar-video data to obtain the first features corresponding to the target vehicle, and outputting the primarily processed data as the first radar processing data;
the second processing module is used for tracking the target vehicle using the multi-view video data and the first radar processing data, and determining the second features corresponding to the target vehicle;
and the final processing module is used for performing fusion processing on the first radar processing data, the second radar processing data, and the raw electronic-police video data to determine the target vehicle tracked by the fused data.
CN202410866009.4A 2024-07-01 2024-07-01 A monitoring method and system for multi-view radar vision fusion sensing data Active CN118711368B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410866009.4A CN118711368B (en) 2024-07-01 2024-07-01 A monitoring method and system for multi-view radar vision fusion sensing data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410866009.4A CN118711368B (en) 2024-07-01 2024-07-01 A monitoring method and system for multi-view radar vision fusion sensing data

Publications (2)

Publication Number Publication Date
CN118711368A CN118711368A (en) 2024-09-27
CN118711368B true CN118711368B (en) 2026-02-24

Family

ID=92819302

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410866009.4A Active CN118711368B (en) 2024-07-01 2024-07-01 A monitoring method and system for multi-view radar vision fusion sensing data

Country Status (1)

Country Link
CN (1) CN118711368B (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115798232A (en) * 2022-11-01 2023-03-14 智慧互通科技股份有限公司 Holographic intersection traffic management system based on the combination of Levision All-in-One Machine and multi-eye camera

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101925293B1 (en) * 2015-12-30 2018-12-05 건아정보기술 주식회사 The vehicle detecting system by converging radar and image
CN108596129B (en) * 2018-04-28 2022-05-06 武汉盛信鸿通科技有限公司 Vehicle line-crossing detection method based on intelligent video analysis technology
CN112562405A (en) * 2020-11-27 2021-03-26 山东高速建设管理集团有限公司 Radar video intelligent fusion and early warning method and system
CN114298163B (en) * 2021-12-09 2025-03-25 连云港杰瑞电子有限公司 An online road condition detection system and method based on multi-source information fusion
KR102456151B1 (en) * 2022-01-07 2022-10-20 포티투닷 주식회사 Sensor fusion system based on radar and camera and method of calculating the location of nearby vehicles
US12236786B2 (en) * 2022-09-23 2025-02-25 GM Global Technology Operations LLC Calibration of time to collision threshold in low light conditions
CN116165654A (en) * 2023-02-22 2023-05-26 江苏恒超智能技术有限公司 Millimeter wave radar and video combined vehicle track monitoring method
CN116434056B (en) * 2023-03-02 2025-12-23 中数兴盛科技有限责任公司 A target recognition method, system and electronic device based on radar-visual fusion
CN116935631A (en) * 2023-06-25 2023-10-24 河北交通职业技术学院 Abnormal traffic situation detection method, device and system based on radar fusion

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115798232A (en) * 2022-11-01 2023-03-14 智慧互通科技股份有限公司 Holographic intersection traffic management system based on the combination of Levision All-in-One Machine and multi-eye camera

Also Published As

Publication number Publication date
CN118711368A (en) 2024-09-27

Similar Documents

Publication Publication Date Title
CN112700470B (en) A method of target detection and trajectory extraction based on traffic video streams
Javadi et al. Vehicle speed measurement model for video-based systems
KR101647370B1 (en) road traffic information management system for g using camera and radar
CN102354457B (en) General Hough transformation-based method for detecting position of traffic signal lamp
Tak et al. Development of AI‐Based Vehicle Detection and Tracking System for C‐ITS Application
EP2709066A1 (en) Concept for detecting a motion of a moving object
CN115965655A (en) Traffic target tracking method based on radar-vision integration
US11645838B2 (en) Object detection system, object detection method, and program
KR20210158037A (en) Method for tracking multi target in traffic image-monitoring-system
CN113505638B (en) Method and device for monitoring traffic flow and computer readable storage medium
Luo et al. Enhanced YOLOv5s+ DeepSORT method for highway vehicle speed detection and multi-sensor verification
CN120726825B (en) Method and system for detecting vehicle stopping by fusing multiple-terminal non-continuity
CN111291722A (en) Vehicle weight recognition system based on V2I technology
KR20190134303A (en) Apparatus and method for image recognition
Lashkov et al. Edge-computing-empowered vehicle tracking and speed estimation against strong image vibrations using surveillance monocular camera
Kamil et al. Vehicle speed estimation using consecutive frame approaches and deep image homography for image rectification on monocular videos
CN117994295A (en) Cross-camera track splicing method based on space-time constraint
EP2709065A1 (en) Concept for counting moving objects passing a plurality of different areas within a region of interest
CN111627224A (en) Vehicle speed abnormality detection method, device, equipment and storage medium
CN117406212A (en) Visual fusion detection method for traffic multi-element radar
CN117130010A (en) Obstacle sensing method and system for unmanned vehicle and unmanned vehicle
Rodríguez et al. An adaptive, real-time, traffic monitoring system
Deshpande et al. Automatic two-wheeler rider identification and triple-riding detection in surveillance systems using deep-learning models
CN120032522B (en) Traffic incident violation warning system based on AI and multi-source data fusion
Tayeb et al. Vehicle speed estimation using gaussian mixture model and kalman filter

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant