WO2020134512A1 - Traffic detection system based on millimeter wave radar and video - Google Patents

Traffic detection system based on millimeter wave radar and video

Info

Publication number
WO2020134512A1
Authority
WO
WIPO (PCT)
Application number
PCT/CN2019/113964
Other languages
French (fr)
Chinese (zh)
Inventor
陈焰中
章庆
陈俊德
Original Assignee
南京慧尔视智能科技有限公司
Application filed by 南京慧尔视智能科技有限公司
Publication of WO2020134512A1

Classifications

    • G — PHYSICS
    • G08 — SIGNALLING
    • G08G — TRAFFIC CONTROL SYSTEMS
    • G08G1/00 — Traffic control systems for road vehicles
    • G08G1/01 — Detecting movement of traffic to be counted or controlled
    • G08G1/04 — Detecting movement of traffic to be counted or controlled using optical or ultrasonic detectors
    • G08G1/0104 — Measuring and analyzing of parameters relative to traffic conditions
    • G08G1/0125 — Traffic data processing
    • G08G1/017 — Detecting movement of traffic to be counted or controlled identifying vehicles
    • G08G1/0175 — Identifying vehicles by photographing vehicles, e.g. when violating traffic rules
    • G08G1/052 — With provision for determining speed or overspeed
    • G08G1/054 — Photographing overspeeding vehicles
    • G08G1/056 — With provision for distinguishing direction of travel

Definitions

  • The implementation of the traffic detection system based on millimeter wave radar and video includes the following steps:
  • Step 1: The millimeter-wave radar sensor and the video sensor serve as environment-sensing front ends, collecting target information and road-surface image information respectively, and send the raw data they collect to the data processing unit.
  • The information collected by the millimeter wave radar sensor includes each target's identifier (ID), speed, coordinates, length, etc.; the video sensor collects image information of the road surface.
  • Step 2: Convert world coordinates to image-plane coordinates.
  • After the millimeter wave radar sensor collects the target coordinates, the camera's intrinsic and extrinsic parameters are calibrated to obtain the conversion matrices (a rotation matrix and a translation matrix) between the target coordinates and the image-plane coordinates; road-surface world coordinates are converted into image-plane coordinates, so that the projected position on the image plane of each target detected by the millimeter wave radar sensor can be determined.
  • Step 3: Use the YOLO neural network framework to identify target types in the images captured by the video sensor.
  • The video detector only needs to identify each target's category, without other complicated calculations, which reduces transcoding delay and speeds up the system's response time.
  • The YOLO algorithm runs on the neural processing unit (NPU), which uses a "data-driven parallel computing" architecture; through image machine learning, the unstructured road-surface image data is converted into structured data describing target type and color.
  • Step 4: Using the coordinate conversion of Step 2 and the image recognition of Step 3, the targets detected by the millimeter wave radar sensor are matched with the targets in the image captured by the camera, and all parameter information for each target is obtained.
  • The target parameters are superimposed on the image; they include each target's speed, coordinates, and type (vehicle class, color, etc.). Fusing the target data detected by the two sensors increases the variety of target parameters that can be provided and improves the system's detection accuracy.
  • Step 5: Send the fused target parameters (speed, coordinates, type, etc.) to the system platform. Preliminary statistical processing is performed on the raw data detected by the two sensors, so the raw data from the front-end collection unit need not be passed on to the system platform, which greatly reduces the system's computational load.
  • The data produced after the video sensor identifies target types can be further used in the data processing unit to compute traffic flow statistics: the detected targets are counted and tracked, and statistics such as traffic flow, average speed, occupancy rate, and headway are calculated for each vehicle type (non-motor vehicle, car, bus, truck) in each cycle within a specified area.
  • The data processing unit can also determine, from the target speed and coordinates detected by the millimeter wave radar sensor, whether a target is involved in traffic events such as speeding, wrong-way driving, or illegal parking. If a traffic event occurs, the data processing unit instructs the video sensor to take photos and record video for evidence, and transfers the event's text information, image data, and video data to the system platform.

Abstract

A traffic detection system based on a millimeter wave radar and a video sensor. In the system, target information is acquired by a millimeter wave radar sensor and a video sensor (step 1); world coordinates are converted into image-plane coordinates, and the projected position on the image plane of each target detected by the millimeter wave radar sensor is determined (step 2); target types are identified by means of image machine learning; the data acquired by the video sensor and the data acquired by the millimeter wave radar are fused (step 3), and each target's speed, coordinates, and type are output (step 4). In the system, the data detected by the video sensor and the data detected by the millimeter wave radar undergo real-time data fusion, and the fused target information is sent to a system platform, thereby reducing the computational load of the system, accelerating its response time, and improving detection precision.

Description

A traffic detection system based on millimeter wave radar and video
Technical field
The invention belongs to the technical field of data collection and processing in intelligent transportation, and particularly relates to a traffic detection system based on millimeter wave radar and video.
Background
Most existing traffic detection systems use video sensors. A single video sensor is not highly accurate, its image-processing latency is large, and its accuracy is easily affected by rain, fog, and other environmental conditions. Later, other types of sensors were gradually added to traffic detection systems, such as millimeter wave radar, geomagnetic sensors, and lidar; exploiting the strengths of different sensors through multi-sensor data fusion greatly improved detection performance. A radar sensor can detect a target's position and speed in real time, process data up to 20 times per second, adapts well to the environment, and can work all day in all weather conditions; its drawback is that its output cannot be visualized. A video sensor can process image information, but analyzing and computing on images requires lengthy transcoding, so it cannot respond in real time, and its environmental adaptability is low.
Taking advantage of both sensors, radar and video can be used together as the detection front end for data fusion processing. However, because the two sensors collect data independently and output inconsistent data formats, the system ultimately has to combine multiple pieces of information about the same target collected by multiple sensors; the volume of raw data to process is huge and burdens the system platform. Moreover, in current systems the radar and the camera are installed in different positions, so information fusion requires correlating the two sensors' coordinate systems, and that coordinate relationship depends on their relative positions: if one sensor moves even slightly, the coordinates must be re-correlated. Because both sensors are frequently shaken in use, their installation positions drift over time, so distances and angles are not fixed and coordinate matching must be redone periodically. Installation and commissioning are therefore difficult, and frequent maintenance is required.
Summary of the invention
To address the complexity and low efficiency of the prior art, the present invention provides a traffic detection system based on millimeter wave radar and video.
Specifically, the present invention is implemented with the following technical solution: the traffic detection system includes a data collection unit, a data processing unit, a data storage unit, a data communication unit, and a system platform;
The data collection unit includes a millimeter-wave radar sensor and a video sensor. The millimeter-wave radar sensor collects each target's coordinate position, speed, and length in real time and sends them to the data processing unit; the video sensor collects high-definition image data in real time and sends it to the data processing unit;
The data processing unit includes a microprocessor and an embedded neural network processor. It fuses the data detected by the millimeter wave radar and the video sensor for the same target, and finally outputs the complete information for that target to the system platform in a unified format;
The implementation of the traffic detection system includes the following steps:
Step 1: The millimeter-wave radar sensor collects target information, which includes each target's identifier, speed, coordinates, and length; the video sensor collects image information of the road surface; the raw data collected by the millimeter-wave radar sensor and the video sensor is sent to the data processing unit;
Step 2: Convert world coordinates to image-plane coordinates. After the millimeter wave radar sensor collects the target coordinates, the intrinsic and extrinsic parameters of the video sensor's camera are calibrated to obtain the conversion matrices between target coordinates and image-plane coordinates; road-surface world coordinates are converted into image-plane coordinates, determining the projected position on the image plane of each target detected by the millimeter wave radar sensor;
Step 3: A neural network framework is used to identify target types in the images captured by the video sensor. The neural network algorithm runs on the embedded neural network processor and, through image machine learning, converts the unstructured road-surface image data into structured data describing target type and color;
Step 4: Using the coordinate conversion of Step 2 and the image recognition of Step 3, the targets detected by the millimeter wave radar sensor are matched with the targets in the image captured by the camera, all parameter information for each target is obtained, and the target parameters are superimposed on the image for fusion; the parameter information includes each target's speed, coordinates, and target type;
Step 5: Send the fused target parameters to the system platform.
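The fusion in Steps 2–4 can be illustrated with a minimal sketch (all names and the 50-pixel threshold are hypothetical, not from the patent): radar targets already projected onto the image plane are matched to the classifier's detections by nearest image-plane distance, and the matched records are merged.

```python
# Illustrative sketch only: match radar targets, already projected onto
# the image plane (Step 2), to the classifier's detections (Step 3) by
# nearest bounding-box center, then merge the parameters (Step 4).

def fuse_targets(radar_targets, detections, max_dist=50.0):
    """radar_targets: dicts with 'id', 'speed', 'xs', 'ys' (image-plane).
    detections: dicts with 'type', 'cx', 'cy' (bounding-box centers).
    Returns merged records carrying radar kinematics plus image type."""
    fused = []
    for t in radar_targets:
        best, best_d = None, max_dist
        for d in detections:
            dist = ((t["xs"] - d["cx"]) ** 2 + (t["ys"] - d["cy"]) ** 2) ** 0.5
            if dist < best_d:
                best, best_d = d, dist
        if best is not None:
            fused.append({"id": t["id"], "speed": t["speed"],
                          "coords": (t["xs"], t["ys"]), "type": best["type"]})
    return fused

radar = [{"id": 1, "speed": 17.2, "xs": 320.0, "ys": 240.0}]
boxes = [{"type": "car", "cx": 325.0, "cy": 236.0}]
fused = fuse_targets(radar, boxes)  # radar speed joined with image-derived type
```

The merged record is what the patent calls the "complete information of the same target," ready to be emitted to the system platform in a unified format.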
Further, the conversion of world coordinates into image-plane coordinates in Step 2, i.e. the conversion of a target point in the world coordinate system to a point in the image coordinate system, is implemented in two steps:
(1) Transform the coordinates (X_W, Y_W, Z_W) of the target detected by the millimeter wave radar sensor from the world coordinate system to the camera coordinate system (X_C, Y_C, Z_C); the transformation formula is:

    [X_C, Y_C, Z_C]^T = R · [X_W, Y_W, Z_W]^T + T

where R is the rotation matrix from the world coordinate system to the camera coordinate system, that is:

    R = Rx(α) · Ry(β) · Rz(θ)
      = [ cos(β)cos(θ)                        cos(β)sin(θ)                        -sin(β)      ]
        [ -cos(α)sin(θ)+sin(α)sin(β)cos(θ)   cos(α)cos(θ)+sin(α)sin(β)sin(θ)     sin(α)cos(β) ]
        [ sin(α)sin(θ)+cos(α)sin(β)cos(θ)    -sin(α)cos(θ)+cos(α)sin(β)sin(θ)    cos(α)cos(β) ]

where the rotation about the X axis is α, about the Y axis is β, and about the Z axis is θ;

T is the translation matrix from the world coordinate system to the camera coordinate system. Since the millimeter wave radar sensor and the video sensor in this device are mounted very close together, the translation length is effectively 0, that is, T = [0, 0, 0]^T.
It follows that the target's coordinates (X_W, Y_W, Z_W) in the world coordinate system are transformed into the camera coordinate system (X_C, Y_C, Z_C) as:

X_C = cos(β)cos(θ)·X_W + cos(β)sin(θ)·Y_W - sin(β)·Z_W
Y_C = (-cos(α)sin(θ)+sin(α)sin(β)cos(θ))·X_W + (cos(α)cos(θ)+sin(α)sin(β)sin(θ))·Y_W + sin(α)cos(β)·Z_W
Z_C = (sin(α)sin(θ)+cos(α)sin(β)cos(θ))·X_W + (-sin(α)cos(θ)+cos(α)sin(β)sin(θ))·Y_W + cos(α)cos(β)·Z_W
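As an illustrative aside (not part of the patent text), the three expressions above are exactly the rows of the composite rotation R = Rx(α)·Ry(β)·Rz(θ); a short NumPy sketch can be used to sanity-check them numerically:

```python
# Numeric sketch of the world-to-camera rotation used above:
# R = Rx(alpha) @ Ry(beta) @ Rz(theta), with T = 0 as assumed in the text.
import numpy as np

def rotation_matrix(alpha, beta, theta):
    """alpha, beta, theta: rotations about the X, Y and Z axes."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    cb, sb = np.cos(beta), np.sin(beta)
    ct, st = np.cos(theta), np.sin(theta)
    Rx = np.array([[1, 0, 0], [0, ca, sa], [0, -sa, ca]])
    Ry = np.array([[cb, 0, -sb], [0, 1, 0], [sb, 0, cb]])
    Rz = np.array([[ct, st, 0], [-st, ct, 0], [0, 0, 1]])
    return Rx @ Ry @ Rz

def world_to_camera(Pw, alpha, beta, theta):
    """Apply the transform with the translation taken as zero."""
    return rotation_matrix(alpha, beta, theta) @ np.asarray(Pw)
```

With α = β = θ = 0 the matrix reduces to the identity, so world and camera coordinates coincide, as expected for aligned axes.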
(2) Then transform the detected target from the camera coordinate system to the image coordinate system, determining the target's projected position (X_S, Y_S) in the picture captured by the video sensor, so that the target detected by the millimeter wave radar sensor is located on the image:
[The X_S and Y_S projection formulas are rendered as images in the source and are not reproduced here.]

where ω is the horizontal view angle of the target in the camera, the vertical view angle of the target in the camera is likewise given (its symbol is rendered as an image in the source), v is the horizontal dimension of the image, and h is the vertical dimension of the image.
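The patent's exact X_S, Y_S formulas are embedded as images in the published text. Purely as an illustration of the kind of mapping the listed variables suggest, assuming an angle-proportional projection (a hypothetical choice, not the patent's formula), one might write:

```python
# Hypothetical angle-proportional projection (NOT the patent's formula,
# which is rendered as images in the source): maps a camera-frame point
# to pixel coordinates using the camera's view angles and image size.
import math

def project_to_image(Xc, Yc, Zc, omega, phi, v, h):
    """omega, phi: horizontal/vertical view angles (radians);
    v, h: horizontal/vertical image dimensions in pixels."""
    ax = math.atan2(Xc, Zc)  # horizontal angle off the optical axis
    ay = math.atan2(Yc, Zc)  # vertical angle off the optical axis
    Xs = v / 2 + (ax / (omega / 2)) * (v / 2)
    Ys = h / 2 + (ay / (phi / 2)) * (h / 2)
    return Xs, Ys
```

Under this sketch a target on the optical axis projects to the image center, and targets at the edge of the view angle project to the image border.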
Further, in Step 3, the data produced after the video sensor identifies target types is used in the data processing unit to compute traffic flow statistics: the detected targets are counted and tracked, and traffic flow statistics for each vehicle type in each cycle within a specified area are calculated.
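A minimal sketch of such per-cycle, per-class statistics (record fields, the zone format, and the cycle length are hypothetical, not specified in the patent):

```python
# Illustrative per-cycle, per-vehicle-type traffic statistics over fused
# target records within a specified rectangular zone.
from collections import defaultdict

def cycle_statistics(records, zone, cycle_s=60.0):
    """records: dicts with 'type', 'speed' (m/s), 'x', 'y' (road coords).
    zone: (xmin, ymin, xmax, ymax) region of interest.
    Returns per-class count, mean speed and hourly flow for one cycle."""
    acc = defaultdict(lambda: {"count": 0, "speed_sum": 0.0})
    xmin, ymin, xmax, ymax = zone
    for r in records:
        if xmin <= r["x"] <= xmax and ymin <= r["y"] <= ymax:
            s = acc[r["type"]]
            s["count"] += 1
            s["speed_sum"] += r["speed"]
    return {k: {"count": s["count"],
                "avg_speed": s["speed_sum"] / s["count"],
                "flow_per_hour": s["count"] * 3600.0 / cycle_s}
            for k, s in acc.items()}
```

Counting on fused records rather than raw sensor streams is what lets the front end hand the platform compact statistics instead of raw data.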
Further, the data processing unit determines, from the target speed and coordinates detected by the millimeter wave radar sensor, whether a target is involved in traffic events such as speeding, wrong-way driving, or illegal parking; if a traffic event occurs, the data processing unit instructs the video sensor to take photos and record video for evidence, and transfers the event's text information, image data, and video data to the system platform.
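A rule-based sketch of these event checks (thresholds, the heading convention, and the function signature are hypothetical, not from the patent):

```python
# Illustrative rule-based classification of the traffic events named
# above: speeding, wrong-way driving and illegal parking, using only
# radar-derived speed and position history.

def classify_event(speed, heading_sign, lane_direction, speed_limit,
                   stopped_duration_s, max_stop_s=120.0):
    """speed in m/s; heading_sign and lane_direction are +1 or -1 along
    the road axis; returns the first matching event name, or None."""
    if speed > speed_limit:
        return "speeding"
    if speed > 0.5 and heading_sign != lane_direction:
        return "wrong-way"
    if speed <= 0.5 and stopped_duration_s > max_stop_s:
        return "illegal-parking"
    return None
```

When the function returns an event name, the processing unit would trigger the video sensor for photo and video evidence as described above.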
The beneficial effects of the present invention are as follows: the traffic detection system based on millimeter wave radar and video reduces the system's data computation difficulty and simplifies the system workflow. The raw image data collected by the video sensor is transmitted to the data processing unit without video encoding and decoding, so the data incurs no delay and can be fused in real time with the data detected by the millimeter wave radar. Preliminary fusion of the two data streams is performed in the data processing unit, and the fused target information is sent to the system platform, which reduces the system's computational load, speeds up its response time, and improves detection accuracy.
Brief description of the drawings
FIG. 1 is a system structure diagram of the present invention.
FIG. 2 is a system flowchart of the present invention.
FIG. 3 is a schematic diagram of camera calibration for the video sensor in the present invention.
FIG. 4 is a schematic diagram of the projection relationship of a target from the camera coordinate system to the image coordinate system in the present invention.
Detailed description
The present invention is described in further detail below with reference to the embodiments and the accompanying drawings.
Embodiment 1:
本发明的一个实施例,为一种基于毫米波雷达和视频的交通检测系统,参照图1、图2、图3和图4,基于毫米波雷达和视频的交通检测系统包括数据采集单元、数据处理单元、数据存储单元、数据通信单元和系统平台。An embodiment of the present invention is a traffic detection system based on millimeter wave radar and video. Referring to FIGS. 1, 2, 3, and 4, a traffic detection system based on millimeter wave radar and video includes a data collection unit and data. Processing unit, data storage unit, data communication unit and system platform.
数据采集单元:数据采集单元包括毫米波雷达传感器和视频传感器。毫米波雷达传感器实时采集目标信息,检测到的数据内容包括目标的坐标位置、目标的速度、目标的长度等,将采集到的目标信息发送给数据处理单元。视频传感器实时采集高清图像数据,将采集到的图像信息发送给数据处理单元。Data acquisition unit: The data acquisition unit includes a millimeter wave radar sensor and a video sensor. The millimeter wave radar sensor collects target information in real time. The detected data content includes the target's coordinate position, target speed, target length, etc., and sends the collected target information to the data processing unit. The video sensor collects high-definition image data in real time, and sends the collected image information to the data processing unit.
Data processing unit: the data processing unit comprises an ARM microprocessor and an embedded neural-network processing unit (NPU). The NPU handles image-data processing, identifies the target type (pedestrian, car, truck, etc.), and superimposes the target information collected by the millimeter-wave radar (the target's coordinate position, speed, length, etc.) on the video image. The ARM microprocessor compiles statistics on all traffic flows and recognizes traffic events. The data processing unit performs coordinate conversion on the target information received in real time, fuses the data detected for the same target by the millimeter-wave radar and the video sensor, and finally outputs the complete information for the same target to the system platform in a unified format.
Data storage unit: the data storage unit includes an embedded multimedia card (eMMC) and an SD card. The eMMC stores the operating system; the SD card stores the traffic flow database and traffic event information.
Data communication unit: the communication unit includes an Ethernet (ETH) interface and an RS485 interface. The ETH interface transmits traffic flow information, traffic event information and video image information to the system platform. The RS485 interface connects to checkpoint cameras and intersection traffic signal controllers.
As shown in FIG. 2, the implementation of the traffic detection system based on millimeter-wave radar and video includes the following steps:
Step 1: the millimeter-wave radar sensor and the video sensor act as environment-sensing front ends that collect target and road-surface information, acquiring target information and road-surface image information respectively, and the raw data collected by both sensors are sent to the data processing unit. The information collected by the millimeter-wave radar sensor includes the target's identifier (ID), speed, coordinates, length, etc.; the video sensor collects image information of the road surface.
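The raw per-cycle data exchanged between the acquisition front ends and the data processing unit can be modelled with simple container types. The sketch below is purely illustrative; the field names, types and units are assumptions, not taken from the patent:

```python
from dataclasses import dataclass

@dataclass
class RadarTarget:
    """One target as reported by the millimeter-wave radar each cycle."""
    target_id: int    # identifier (ID)
    speed: float      # speed, assumed m/s
    x: float          # world coordinates on the road plane, assumed metres
    y: float
    length: float     # estimated target length, assumed metres

@dataclass
class CameraFrame:
    """One raw (uncompressed) image from the video sensor."""
    timestamp: float
    width: int
    height: int
    pixels: bytes = b""  # raw pixel buffer; no encode/decode step, per the text

# Example: one radar cycle paired with one frame for downstream fusion
targets = [RadarTarget(target_id=7, speed=16.7, x=3.2, y=48.5, length=4.6)]
frame = CameraFrame(timestamp=0.05, width=1920, height=1080)
```

Keeping the frame uncompressed mirrors the system's no-transcoding design, so a radar cycle and a frame with matching timestamps can be fused without decode latency.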
Step 2: world coordinates are converted into image plane coordinates. The millimeter-wave radar sensor collects the target coordinates; the intrinsic and extrinsic parameters of the video sensor's camera are then calibrated to obtain the transformation matrices between the target coordinates and the image plane coordinates (namely, the rotation matrix and the translation matrix). The road-surface world coordinates are converted into image plane coordinates, so that the position at which a target detected by the millimeter-wave radar sensor projects onto the image plane can be determined.
The conversion of a target point from the world coordinate system to a point in the image coordinate system is carried out in two steps:
(1) Transform the coordinates (X_w, Y_w, Z_w) of the target detected by the millimeter-wave radar sensor in the world coordinate system into the camera coordinate system (X_c, Y_c, Z_c), using the transformation formula:

$$\begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix} = R \begin{bmatrix} X_w \\ Y_w \\ Z_w \end{bmatrix} + T$$
式中,
Figure PCTCN2019113964-appb-000011
为由世界坐标系到相机坐标系的旋转矩阵。
In the formula,
Figure PCTCN2019113964-appb-000011
Is the rotation matrix from the world coordinate system to the camera coordinate system.
which is
Figure PCTCN2019113964-appb-000012
Figure PCTCN2019113964-appb-000012
As shown in FIG. 3, the rotation about the X axis is α, the rotation about the Y axis is β, and the rotation about the Z axis is θ.
$T$ is the translation matrix from the world coordinate system to the camera coordinate system. Since the millimeter-wave radar sensor and the video sensor in this device are mounted very close together, the translation length is effectively 0, i.e. $T = \begin{bmatrix} 0 & 0 & 0 \end{bmatrix}^{\mathsf{T}}$.
It can therefore be obtained that:

$$\begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix} = R \begin{bmatrix} X_w \\ Y_w \\ Z_w \end{bmatrix}$$
Thus the target's coordinates (X_w, Y_w, Z_w) in the world coordinate system transform into the camera coordinate system (X_c, Y_c, Z_c) as:

X_C = cos(β)cos(θ)X_W + cos(β)sin(θ)Y_W − sin(β)Z_W
Y_C = (−cos(α)sin(θ) + sin(α)sin(β)cos(θ))X_W + (cos(α)cos(θ) + sin(α)sin(β)sin(θ))Y_W + sin(α)cos(β)Z_W
Z_C = (sin(α)sin(θ) + cos(α)sin(β)cos(θ))X_W + (−sin(α)cos(θ) + cos(α)sin(β)sin(θ))Y_W + cos(α)cos(β)Z_W
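The world-to-camera step can be checked numerically. The sketch below builds R as Rx(α)·Ry(β)·Rz(θ), which reproduces the expanded X_C/Y_C/Z_C terms above, and applies it with the translation taken as zero as stated in the text; the function names are illustrative, not from the patent:

```python
import math

def rotation_matrix(alpha, beta, theta):
    """R = Rx(alpha) @ Ry(beta) @ Rz(theta), matching the expanded X_C/Y_C/Z_C terms."""
    ca, sa = math.cos(alpha), math.sin(alpha)
    cb, sb = math.cos(beta), math.sin(beta)
    ct, st = math.cos(theta), math.sin(theta)
    return [
        [cb * ct,                 cb * st,                 -sb     ],
        [-ca * st + sa * sb * ct, ca * ct + sa * sb * st,  sa * cb ],
        [sa * st + ca * sb * ct,  -sa * ct + ca * sb * st, ca * cb ],
    ]

def world_to_camera(p_world, alpha, beta, theta):
    """Apply the rotation; the translation T is taken as 0 since radar and camera are co-located."""
    R = rotation_matrix(alpha, beta, theta)
    return [sum(R[i][j] * p_world[j] for j in range(3)) for i in range(3)]

# With all angles zero the camera frame coincides with the world frame.
print(world_to_camera([1.0, 2.0, 3.0], 0.0, 0.0, 0.0))  # → [1.0, 2.0, 3.0]
```

A quick sanity check of the convention: with α = β = 0 and θ = π/2, the world point (1, 0, 0) maps to approximately (0, −1, 0) in the camera frame, as the row expressions predict.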
(2) Transform the detected target from the camera coordinate system to the image coordinate system, determining the target's projected position (X_S, Y_S) in the picture captured by the video sensor, so that the target detected by the millimeter-wave radar sensor is located on the image:

[The formulas for X_S and Y_S appear only as images in the original publication; they express X_S and Y_S in terms of the view angles ω and φ and the image dimensions v and h.]
As shown in FIG. 4, ω is the horizontal view angle of the target in the camera, φ is the vertical view angle of the target in the camera, v is the horizontal dimension of the image, and h is the vertical dimension of the image.
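Since the patent's exact camera-to-image formulas are published only as images, the mapping can only be illustrated generically. The sketch below is an assumption, not the patent's formula: it uses a simple angle-proportional model in which the target's horizontal and vertical angles are scaled by the view angles ω and φ onto an image of size v × h:

```python
import math

def camera_to_image(xc, yc, zc, omega, phi, v, h):
    """Map a camera-frame point to pixel coordinates.

    ASSUMPTION: generic angle-proportional model standing in for the patent's
    unpublished formulas; omega/phi are horizontal/vertical view angles (rad),
    v/h are the image width/height in pixels.
    """
    ang_h = math.atan2(xc, zc)          # horizontal angle of the target
    ang_v = math.atan2(yc, zc)          # vertical angle of the target
    xs = v / 2 + (ang_h / omega) * v    # offset from the image centre
    ys = h / 2 + (ang_v / phi) * h
    return xs, ys

# A target on the optical axis lands at the image centre.
print(camera_to_image(0.0, 0.0, 10.0, math.radians(60), math.radians(40), 1920, 1080))
```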
Step 3: the neural network framework YOLO is used to identify the target type in the images captured by the video sensor. The video detector only needs to identify the target's category, with no other complex computation, which removes transcoding delay and speeds up the system's response. The YOLO algorithm runs on the neural processing unit (NPU), which uses a data-driven parallel-computing architecture; through machine learning on images, the unstructured road-surface image data are converted into structured data describing target type and color.
Step 4: using the coordinate conversion of step 2 and the image recognition of step 3, each target detected by the millimeter-wave radar sensor is matched to the corresponding target in the image captured by the camera, and all parameter information for each target is obtained. The target's parameters are superimposed on the image; the parameters include each target's speed, coordinates and target type (including vehicle model, color, etc.). Fusing the target data detected by the two sensors increases the types of target parameters that can be provided and improves the detection accuracy of the system.
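The radar-to-image matching in step 4 can be sketched as a nearest-projection association: each radar target, already projected into the image by the step-2 transform, is paired with the image detection whose centre is closest. This is a simple illustration with an assumed distance threshold, not the patent's exact matching rule:

```python
import math

def fuse(radar_targets, detections, max_dist=80.0):
    """Associate radar targets with image detections by projected distance.

    radar_targets: [(id, speed, xs, ys)] with projected image position (pixels).
    detections: [(cx, cy, target_type, color)] from the image recogniser.
    max_dist: assumed gating threshold in pixels (illustrative).
    Returns fused records combining radar kinematics with visual class/colour.
    """
    fused = []
    for tid, speed, xs, ys in radar_targets:
        best, best_d = None, max_dist
        for cx, cy, ttype, color in detections:
            d = math.hypot(cx - xs, cy - ys)
            if d < best_d:
                best, best_d = (ttype, color), d
        if best:
            fused.append({"id": tid, "speed": speed, "x": xs, "y": ys,
                          "type": best[0], "color": best[1]})
    return fused

out = fuse([(7, 16.7, 400.0, 300.0)], [(410.0, 290.0, "car", "white")])
print(out)  # → [{'id': 7, 'speed': 16.7, 'x': 400.0, 'y': 300.0, 'type': 'car', 'color': 'white'}]
```

The fused record carries both the radar's kinematic fields and the camera's class and colour, which is exactly the "increased parameter types" benefit the text describes.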
Step 5: the fused target parameters (speed, coordinates, type, etc.) are sent to the system platform. Preliminary statistical processing is performed on the raw data detected by the two sensors, so there is no need to pass all raw data from the front-end acquisition unit to the system platform, which greatly reduces the system's computational load.
Using the data obtained after the video sensor identifies the target type, the data processing unit can further compute traffic flow statistics. The detected targets are counted and tracked, and for each cycle in a specified area the system calculates traffic flow statistics for each vehicle type (non-motorized vehicle, car, bus, truck), such as traffic volume, average speed, occupancy and headway.
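The per-period statistics named above (volume per vehicle type, average speed, occupancy, headway) can be sketched as follows. The formulas are generic textbook definitions offered as an illustration; the patent does not state its exact formulas:

```python
def flow_statistics(passings, period_s):
    """Aggregate one detection zone over one statistics period.

    passings: [(vehicle_class, speed_mps, t_enter, t_leave)] per passing vehicle.
    period_s: length of the statistics period in seconds.
    Returns per-class volume, mean speed, time occupancy and mean headway.
    """
    volume = {}
    for cls, _, _, _ in passings:
        volume[cls] = volume.get(cls, 0) + 1
    n = len(passings)
    mean_speed = sum(p[1] for p in passings) / n if n else 0.0
    occupied = sum(p[3] - p[2] for p in passings)      # total time the zone was occupied
    occupancy = occupied / period_s
    enters = sorted(p[2] for p in passings)
    gaps = [b - a for a, b in zip(enters, enters[1:])]
    headway = sum(gaps) / len(gaps) if gaps else None  # mean time between arrivals
    return {"flow": volume, "mean_speed": mean_speed,
            "occupancy": occupancy, "mean_headway": headway}

stats = flow_statistics(
    [("car", 15.0, 0.0, 0.5), ("bus", 10.0, 4.0, 4.9), ("car", 20.0, 8.0, 8.4)],
    period_s=60.0)
print(stats["flow"])  # → {'car': 2, 'bus': 1}
```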
The data processing unit can also determine, from the target speed and coordinates detected by the millimeter-wave radar sensor, whether a target is involved in a traffic event such as speeding, wrong-way driving or illegal stopping. If a traffic event occurs, the data processing unit instructs the video sensor to take photographs and record video as evidence, and transmits the event's text information, image data and video data to the system platform.
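The event checks can be illustrated with simple rules over a target's radar track. The thresholds and rule details below are assumptions (the patent specifies only that speed and coordinates are used):

```python
def detect_events(track, speed_limit, lane_direction=1, park_window=5):
    """Flag speeding, wrong-way driving and illegal stopping from radar samples.

    track: chronological [(speed_mps, x, y)] samples for one target.
    speed_limit: permitted speed in m/s.
    lane_direction: +1/-1, expected sign of motion along y (assumed convention).
    park_window: consecutive near-zero-speed samples that count as a stop (assumed).
    """
    events = set()
    still = 0
    for i, (speed, x, y) in enumerate(track):
        if speed > speed_limit:
            events.add("speeding")
        if i > 0:
            dy = y - track[i - 1][2]
            if dy * lane_direction < 0:       # moving against the lane direction
                events.add("wrong_way")
        still = still + 1 if speed < 0.3 else 0
        if still >= park_window:              # stationary too long in the zone
            events.add("illegal_stop")
    return events

print(detect_events([(25.0, 3.0, 10.0), (26.0, 3.0, 14.0)], speed_limit=22.2))
# → {'speeding'}
```

In the full system a positive result would trigger the photo/video evidence capture described above.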
Although the present invention has been disclosed above by way of preferred embodiments, the embodiments are not intended to limit the present invention. Any equivalent change or modification made without departing from the spirit and scope of the present invention likewise falls within the protection scope of the present invention. The protection scope of the present invention shall therefore be defined by the claims of this application.

Claims (4)

  1. A traffic detection system based on millimeter-wave radar and video, characterized in that the traffic detection system comprises a data acquisition unit, a data processing unit, a data storage unit, a data communication unit and a system platform;
    the data acquisition unit comprises a millimeter-wave radar sensor and a video sensor; the millimeter-wave radar sensor collects the target's coordinate position, speed and length in real time and sends them to the data processing unit; the video sensor collects high-definition image data in real time and sends them to the data processing unit;
    the data processing unit comprises a microprocessor and an embedded neural-network processor, fuses the data detected for the same target by the millimeter-wave radar and the video sensor, and finally outputs the complete information for the same target to the system platform in a unified format;
    the implementation of the traffic detection system comprises the following steps:
    step 1: the millimeter-wave radar sensor collects target information, the target information including the target's identifier, speed, coordinates and length; the video sensor collects image information of the road surface; and the raw data collected by the millimeter-wave radar sensor and the video sensor are sent to the data processing unit;
    step 2: world coordinates are converted into image plane coordinates; the millimeter-wave radar sensor collects the target coordinates, the intrinsic and extrinsic parameters of the video sensor's camera are calibrated to obtain the transformation matrices between the target coordinates and the image plane coordinates, the road-surface world coordinates are converted into image plane coordinates, and the position at which a target detected by the millimeter-wave radar sensor projects onto the image plane is determined;
    step 3: a neural network framework is used to identify the target type in the images captured by the video sensor; the neural network framework algorithm runs on the embedded neural-network processor and, through machine learning on images, converts the unstructured road-surface image data into structured data describing target type and color;
    step 4: using the coordinate conversion of step 2 and the image recognition of step 3, each target detected by the millimeter-wave radar sensor is matched to the corresponding target in the image captured by the camera, all parameter information for each target is obtained, and the target parameters are superimposed on the image for fusion, the parameter information including each target's speed, coordinates and target type;
    step 5: the fused target parameters are sent to the system platform.
  2. The traffic detection system based on millimeter-wave radar and video according to claim 1, characterized in that the conversion of world coordinates into image plane coordinates in step 2, i.e. the conversion of a target point in the world coordinate system to a point in the image coordinate system, is carried out in two steps:
    (1) transform the coordinates (X_w, Y_w, Z_w) of the target detected by the millimeter-wave radar sensor in the world coordinate system into the camera coordinate system (X_c, Y_c, Z_c), using the transformation formula:

    $$\begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix} = R \begin{bmatrix} X_w \\ Y_w \\ Z_w \end{bmatrix} + T$$

    where $R$ is the rotation matrix from the world coordinate system to the camera coordinate system, namely

    $$R = \begin{bmatrix} \cos\beta\cos\theta & \cos\beta\sin\theta & -\sin\beta \\ -\cos\alpha\sin\theta+\sin\alpha\sin\beta\cos\theta & \cos\alpha\cos\theta+\sin\alpha\sin\beta\sin\theta & \sin\alpha\cos\beta \\ \sin\alpha\sin\theta+\cos\alpha\sin\beta\cos\theta & -\sin\alpha\cos\theta+\cos\alpha\sin\beta\sin\theta & \cos\alpha\cos\beta \end{bmatrix}$$
    wherein the rotation about the X axis is α, the rotation about the Y axis is β, and the rotation about the Z axis is θ;
    $T$ is the translation matrix from the world coordinate system to the camera coordinate system; since the millimeter-wave radar sensor and the video sensor in this device are mounted very close together, the translation length is effectively 0, i.e. $T = \begin{bmatrix} 0 & 0 & 0 \end{bmatrix}^{\mathsf{T}}$;
    it can therefore be obtained that:

    $$\begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix} = R \begin{bmatrix} X_w \\ Y_w \\ Z_w \end{bmatrix}$$
    thus the target's coordinates (X_w, Y_w, Z_w) in the world coordinate system transform into the camera coordinate system (X_c, Y_c, Z_c) as:

    X_C = cos(β)cos(θ)X_W + cos(β)sin(θ)Y_W − sin(β)Z_W
    Y_C = (−cos(α)sin(θ) + sin(α)sin(β)cos(θ))X_W + (cos(α)cos(θ) + sin(α)sin(β)sin(θ))Y_W + sin(α)cos(β)Z_W
    Z_C = (sin(α)sin(θ) + cos(α)sin(β)cos(θ))X_W + (−sin(α)cos(θ) + cos(α)sin(β)sin(θ))Y_W + cos(α)cos(β)Z_W
    (2) transform the detected target from the camera coordinate system to the image coordinate system, determining the target's projected position (X_S, Y_S) in the picture captured by the video sensor, so that the target detected by the millimeter-wave radar sensor is located on the image:

    [The formulas for X_S and Y_S appear only as images in the original publication; they express X_S and Y_S in terms of ω, φ, v and h as defined below.]
    where ω is the horizontal view angle of the target in the camera, φ is the vertical view angle of the target in the camera, v is the horizontal dimension of the image, and h is the vertical dimension of the image.
  3. The traffic detection system based on millimeter-wave radar and video according to claim 1, characterized in that, in step 3, using the data obtained after the video sensor identifies the target type, traffic flow statistics are computed in the data processing unit: the detected targets are counted and tracked, and traffic flow statistics for each vehicle type in each cycle in a specified area are calculated.
  4. The traffic detection system based on millimeter-wave radar and video according to claim 1, characterized in that the data processing unit determines, from the target speed and coordinates detected by the millimeter-wave radar sensor, whether a target is involved in a traffic event such as speeding, wrong-way driving or illegal stopping; if a traffic event occurs, the data processing unit instructs the video sensor to take photographs and record video as evidence, and transmits the event's text information, image data and video data to the system platform.
PCT/CN2019/113964 2018-12-29 2019-10-29 Traffic detection system based on millimeter wave radar and video WO2020134512A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811653251.4 2018-12-29
CN201811653251.4A CN109615870A (en) 2018-12-29 2018-12-29 A kind of traffic detection system based on millimetre-wave radar and video

Publications (1)

Publication Number Publication Date
WO2020134512A1 true WO2020134512A1 (en) 2020-07-02

Family

ID=66016540


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112866645A (en) * 2021-01-12 2021-05-28 二连浩特赛乌素机场管理有限公司 Anti-invasion artificial intelligence radar video monitoring system
CN113189583A (en) * 2021-04-26 2021-07-30 天津大学 Time-space synchronous millimeter wave radar and visual information fusion method
CN113524198A (en) * 2021-09-07 2021-10-22 广东新粤交通投资有限公司 Road construction initiative intelligence anticollision early warning robot


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5047515B2 (en) * 2006-03-20 2012-10-10 株式会社ゼンリン Road image creation system, road image creation method, and road image composition apparatus
CN103559791A (en) * 2013-10-31 2014-02-05 北京联合大学 Vehicle detection method fusing radar and CCD camera signals
CN106710240A (en) * 2017-03-02 2017-05-24 公安部交通管理科学研究所 Passing vehicle tracking and speed measuring method integrating multiple-target radar and video information
CN108847026A (en) * 2018-05-31 2018-11-20 安徽四创电子股份有限公司 A method of it is converted based on matrix coordinate and realizes that data investigation is shown
CN109615870A (en) * 2018-12-29 2019-04-12 南京慧尔视智能科技有限公司 A kind of traffic detection system based on millimetre-wave radar and video




Also Published As

Publication number Publication date
CN109615870A (en) 2019-04-12


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 19901875; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 19901875; Country of ref document: EP; Kind code of ref document: A1)