CN112379362A - Event self-adaptive acquisition equipment and method based on multi-source data fusion - Google Patents

Event self-adaptive acquisition equipment and method based on multi-source data fusion

Info

Publication number
CN112379362A
Authority
CN
China
Prior art keywords
video
radar
information
sensor
data fusion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011150344.2A
Other languages
Chinese (zh)
Inventor
阎学范
佟世继
王龙
张宇杰
范永
钟瑜
刘超
徐大伍
张东亮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lianyungang Jierui Electronics Co Ltd
Original Assignee
Lianyungang Jierui Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lianyungang Jierui Electronics Co Ltd
Priority to CN202011150344.2A
Publication of CN112379362A
Legal status: Pending


Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 13/00 Systems using the reflection or reradiation of radio waves, e.g. radar systems; analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S 13/02 Systems using reflection of radio waves, e.g. primary radar systems; analogous systems
    • G01S 13/50 Systems of measurement based on relative movement of target
    • G01S 13/58 Velocity or trajectory determination systems; sense-of-movement determination systems

Abstract

The invention relates to an event-adaptive acquisition device and method based on multi-source data fusion. The GPU (graphics processing unit) of the device comprises a video stream processing unit, a radar signal processing unit, a data fusion unit and a pan-tilt control unit. The video stream processing unit receives video signals from a video sensor and obtains video target information through algorithmic processing; the radar signal processing unit receives radar signals from the radar sensor and obtains radar target information through algorithmic processing; the data fusion unit receives the video target information and the radar target information, performs fusion, verification and cross-validation, and then outputs superimposed video streaming media information, traffic flow information and control output information; and the pan-tilt control unit adjusts the pan-tilt angle according to the control output information to focus on events. The device and method mitigate the problems that video detection is strongly affected by the environment and that radar detection has poor visualization, so the detection data are more comprehensive and the detection accuracy is higher.

Description

Event self-adaptive acquisition equipment and method based on multi-source data fusion
Technical Field
The invention belongs to the field of intelligent traffic data acquisition and processing, and particularly relates to an event-adaptive acquisition device and method based on multi-source data fusion.
Background
With the progress of society and the development of science and technology, intelligent traffic data acquisition and monitoring systems for traffic flow collection, evidence collection of traffic violations, and the like are ever more widely deployed at urban intersections and road sections. Such systems collect intersection traffic flow information, help generate intersection release schemes, capture and document dangerous driving behavior, support reasonable decisions on real-time traffic conditions, and realize safe, reliable and efficient road traffic flow control. They save manpower, regularize driving behavior, and pave the way for deployment in more places with greater depth and breadth.
Traditional intelligent traffic data monitoring generally adopts detection equipment such as video sensors, geomagnetic sensors, millimeter-wave radar and lidar, and relies on automatic video recognition or manual control when monitoring traffic violations. Using a single sensor technology for environment measurement shows certain limitations:
The radar sensor's main limitations are: (1) it cannot provide a visual display; (2) its angular resolution of objects is poor; (3) it cannot identify the target type.
The video sensor's main limitations are: (1) it is strongly affected by illumination and by environments such as fog, rain and snow; (2) it cannot accurately acquire the distance and speed of a target.
in addition, in the aspect of a data analysis processor, a traditional DSP cannot directly process the generated ultra-high-speed data stream due to the limitation of the running speed, an FPGA processor has high power consumption and high cost under the same computing power, and the application of a GPU in multi-source data processing brings an economic and effective solution to the problems.
Disclosure of Invention
The invention aims to solve the technical problems of the prior art by providing a novel event-adaptive acquisition device based on multi-source data fusion. The device improves the recognition rate of an intelligent traffic data acquisition and processing system for traffic flow and traffic events in an intersection environment and overcomes the shortcomings of using a radar sensor or a video sensor alone, so that an intelligent traffic control system can plan intersection release schemes more reliably, make reasonable decisions for real-time traffic conditions, and realize safe, reliable and efficient driving.
Another aim of the invention is to provide an event-adaptive acquisition method based on multi-source data fusion.
To achieve the above object, the invention first provides an event-adaptive acquisition device based on multi-source data fusion. The device includes a GPU processor comprising a video stream processing unit, a radar signal processing unit, a data fusion unit and a control output unit, where:
the video stream processing unit receives video signals from the video sensor, processes them algorithmically to obtain video target information, and transmits the video target information to the data fusion unit;
the radar signal processing unit receives radar signals from the millimeter-wave radar sensor, processes them algorithmically to obtain radar target information, and transmits the radar target information to the data fusion unit;
the data fusion unit receives the video target information and the radar target information, performs fusion, verification and cross-validation on them, and then outputs superimposed video streaming media information, traffic flow information and pan-tilt adjustment information;
the control output unit receives the pan-tilt adjustment information from the data fusion unit, drives the device's pan-tilt to adjust the horizontal and pitch angles of the device, compares the result with the angle sensor's input to confirm the adjustment is in place, and focuses on and collects evidence of the traffic event.
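To make the closed-loop behavior of the control output unit concrete, the following is a minimal sketch in Python; the driver and sensor objects and the 0.5-degree acceptance band are illustrative assumptions, not part of the patent.

```python
import time

ANGLE_TOLERANCE_DEG = 0.5  # assumed acceptance band for "adjusted in place"

class PanTiltController:
    """Drive the pan-tilt toward the commanded angles, then confirm against
    the angle sensor before evidence capture is allowed to start."""

    def __init__(self, driver, angle_sensor):
        self.driver = driver              # actuates the horizontal/pitch motors
        self.angle_sensor = angle_sensor  # reports the actual pan/tilt angles

    def adjust(self, target_pan_deg, target_tilt_deg, timeout_s=5.0):
        self.driver.move_to(target_pan_deg, target_tilt_deg)
        deadline = time.monotonic() + timeout_s
        while time.monotonic() < deadline:
            pan_deg, tilt_deg = self.angle_sensor.read()
            if (abs(pan_deg - target_pan_deg) < ANGLE_TOLERANCE_DEG and
                    abs(tilt_deg - target_tilt_deg) < ANGLE_TOLERANCE_DEG):
                return True   # adjusted in place; focusing and evidence capture may begin
        return False          # did not settle within the timeout
```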
In a further preferred technical solution, the device further comprises a video sensor, a radar sensor and an angle sensor; the video sensor feeds video streaming media information into the video stream processing unit, the radar sensor transmits millimeter-wave radar echo signals to the radar signal processing unit, and the angle sensor transmits the horizontal and pitch angle information of the pan-tilt to the data fusion unit.
A further preferred technical solution is that the GPU processor also includes the following units: a Tegra X1 processor, general-purpose programmable input/output ports (GPIO), embedded memory LPDDR4, flash memory eMMC, a MIPI CSI-2 camera interface, a 10/100/1000BASE-T Ethernet interface, an HDMI 2.0 interface, a PCIE interface, a USB 3.0 interface, a USB 2.0 interface, a UART interface, an SPI interface, an I2S interface, an I2C interface, and internal wiring; the connection relationships among these units are the same as in a conventional GPU processor.
A further preferred technical solution is that the video stream processing unit performs high-precision inference using an artificial-intelligence deep learning algorithm, carries out image detection, recognition and semantic segmentation on the video stream input from the video sensor, and transmits the obtained video target information to the data fusion unit.
A further preferred technical solution is that the video stream processing unit and the radar signal processing unit are embedded programmable units of the GPU processor.
A further preferred technical solution is that the data fusion unit is an embedded programmable unit of the GPU processor.
A further preferred technical solution is that the output unit and the control unit are embedded programmable units of the GPU processor.
A further preferred technical solution is that the radar sensor is a millimeter-wave radar sensor.
The invention also provides an event-adaptive acquisition method based on multi-source data fusion, comprising the following steps:
(1) receiving video data from the video sensor through the video stream processing unit, performing extraction, separation, inference and judgment on the data with the YOLO V3 algorithm, and transmitting the obtained video target information to the data fusion unit;
(2) receiving millimeter-wave radar signals from the radar sensor through the radar signal processing unit, performing signal processing such as data resampling, filtering, compression and parameter estimation, and transmitting the obtained radar target information to the data fusion unit;
(3) receiving the video target information and the radar target information with the data fusion unit, performing fusion, verification and cross-validation on them, and then outputting traffic characteristic information and pan-tilt control information;
(4) driving the control output unit with the pan-tilt control information so that the video sensor is aimed at the traffic event of interest, and recording video and still images of the traffic behavior as evidence, then storing it.
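Taken together, the four steps form a per-cycle pipeline; the sketch below shows one plausible arrangement, where all four unit objects are hypothetical stand-ins for the patent's processing units:

```python
def acquisition_cycle(video_unit, radar_unit, fusion_unit, control_unit):
    """One cycle of the event-adaptive acquisition method (steps 1-4)."""
    video_targets = video_unit.process_frame()       # step (1): YOLO V3 detection
    radar_targets = radar_unit.process_echo()        # step (2): radar signal processing
    traffic_info, pan_tilt_cmd = fusion_unit.fuse(   # step (3): fusion + cross-validation
        video_targets, radar_targets)
    if pan_tilt_cmd is not None:                     # step (4): focus and record evidence
        control_unit.adjust(pan_tilt_cmd.pan_deg, pan_tilt_cmd.tilt_deg)
        control_unit.record_evidence()
    return traffic_info
```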
A further preferred technical solution of the event-adaptive acquisition method based on multi-source data fusion is that, for step (1), the video stream processing unit extracts basic image features from the video sensor input through the YOLO V3 algorithm, whose extraction process includes: reading an image, comparing the algorithm-processed image with the original image, calibrating the video sensor, outputting target coordinates, drawing bounding boxes, matching, outputting the image, identifying targets and outputting target pixel coordinates.
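The patent gives no code for this stage, but a YOLO V3 detection pass of the kind described can be sketched with OpenCV's DNN module; the model files (`yolov3.cfg`, `yolov3.weights`) and the thresholds are assumptions:

```python
import cv2
import numpy as np

# Assumed model files; the Darknet project publishes YOLOv3 configs and weights.
net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")
output_layers = net.getUnconnectedOutLayersNames()

def detect_targets(frame, conf_threshold=0.5, nms_threshold=0.4):
    """Run YOLO V3 on one frame; return (class_id, confidence, pixel box) tuples."""
    h, w = frame.shape[:2]
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True, crop=False)
    net.setInput(blob)
    boxes, confidences, class_ids = [], [], []
    for output in net.forward(output_layers):
        for row in output:                    # row = [cx, cy, bw, bh, objectness, class scores...]
            scores = row[5:]
            class_id = int(np.argmax(scores))
            confidence = float(scores[class_id])
            if confidence > conf_threshold:
                cx, cy, bw, bh = row[:4] * np.array([w, h, w, h])
                boxes.append([int(cx - bw / 2), int(cy - bh / 2), int(bw), int(bh)])
                confidences.append(confidence)
                class_ids.append(class_id)
    keep = cv2.dnn.NMSBoxes(boxes, confidences, conf_threshold, nms_threshold)
    return [(class_ids[i], confidences[i], boxes[i]) for i in np.array(keep).flatten()]
```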
A further preferred technical solution of the event-adaptive acquisition method based on multi-source data fusion is that the video stream information includes target number, target type, target pixel coordinates, queue length, vehicle characteristics and the like; the radar target information includes target number, target type, target world coordinates, vehicle speed, vehicle characteristics and the like; and the pan-tilt control information includes horizontal angle, pitch angle and the like.
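These three information records translate naturally into data structures; the sketch below mirrors the fields listed above, with field names and units being assumptions:

```python
from dataclasses import dataclass
from typing import Dict, Tuple

@dataclass
class VideoTarget:                          # video stream information
    target_id: int
    target_type: str                        # e.g. "car", "truck", "pedestrian"
    pixel_xy: Tuple[int, int]               # target pixel coordinates
    queue_length_m: float                   # queue length, assumed in metres
    features: Dict[str, str]                # vehicle characteristics (color, ...)

@dataclass
class RadarTarget:                          # radar target information
    target_id: int
    target_type: str
    world_xyz: Tuple[float, float, float]   # target world coordinates, metres
    speed_mps: float                        # vehicle speed, m/s
    features: Dict[str, str]

@dataclass
class PanTiltCommand:                       # pan-tilt control information
    pan_deg: float                          # horizontal angle
    tilt_deg: float                         # pitch angle
```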
Compared with the prior art, the invention has the following beneficial effects:
the GPU processor in the equipment of the invention is one of the devices with strongest video stream processing performance, shortest design period and lowest development cost in the application-specific integrated circuit, the GPU processor is one of the best choices for improving the system integration and reliability of a small-batch system, and because the GPU processor has powerful functions and flexible application, the GPU processor is more favored in the video processing and automatic control neighborhood and is used as a main processor, and the GPU processor obtains very obvious benefits for the fusion, the verification and the control of multi-source sensor information:
a single GPU processor completes data verification, fusion and control uniformly at the bottom layer, simplifying the system structure;
based on GPU processor technology, preprocessing of video signals and radar signals is completed in real time to obtain video information and radar information, and the fusion and verification of the two kinds of information are then realized in the same processor, which improves the processing bandwidth and raises the output rate of the system;
the data fusion unit of the GPU processor fuses and verifies the radar information and video information well, making the output control information more comprehensive and accurate.
The invention realizes traffic flow collection and evidence collection of traffic violations more efficiently, accurately and reliably. It uses the GPU processor to receive and process multi-source data information, performs verification, fusion and cross-validation, and then drives the pan-tilt to automatically focus on and collect evidence at the traffic event scene. It overcomes the shortcomings of a single radar or video sensing technology, reduces the manual workload, identifies information such as traffic flow data, queue length, speed and violation type more reliably, and improves the safety and real-time performance of traffic control system operation.
Description of the drawings:
FIG. 1 is a block diagram of a system for an adaptive event acquisition device based on multi-source data fusion;
FIG. 2 is a schematic diagram of a system structure of an event adaptive acquisition device based on multi-source data fusion;
FIG. 3 is a flow chart of video input information feature extraction;
FIG. 4 is a flow chart of radar input information feature extraction;
fig. 5 is a flow chart of data fusion output of video input information and radar input information.
Detailed description of the embodiments:
reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the accompanying drawings are illustrative only for the purpose of explaining the present invention, and are not to be construed as limiting the present invention.
As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element, or intervening elements may also be present. Further, "connected" or "coupled" as used herein may include wirelessly connected or coupled. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
It will be understood by those skilled in the art that, unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the prior art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
For ease of understanding the embodiments of the invention, several specific embodiments are further explained below as examples in conjunction with the drawings; the embodiments are not to be construed as limiting the invention.
Embodiment 1, an event adaptive acquisition device based on multi-source data fusion:
referring to fig. 1 and 2, a GPU processor is included, the GPU processor including a video stream processing unit, a radar signal processing unit, a data fusion unit, and a control output unit, wherein:
the video stream processing unit receives video signals from the video sensor, processes the signals through an algorithm to obtain video target information and transmits the video target information to the data fusion unit; the radar signal processing unit receives radar signals from the millimeter wave radar sensor, processes the signals through an algorithm, obtains radar target information and transmits the radar target information to the data fusion unit; the data fusion unit receives the video target information and the radar target information, performs fusion, verification and cross validation on the video target information and the radar target information, and then outputs the video streaming media information, traffic flow information and holder adjustment information after superposition; the control output unit receives the cloud deck adjustment information from the data fusion unit, drives the equipment cloud deck, realizes the adjustment of the horizontal angle and the pitching angle of the equipment, and focuses and obtains evidence of the traffic incident.
The GPU processor also includes the following units: a Tegra X1 processor, general-purpose programmable input/output ports (GPIO), embedded memory RAM, flash memory eMMC, a MIPI CSI-2 camera interface, a 10/100/1000BASE-T Ethernet interface, an HDMI 2.0 interface, a PCIE interface, a USB 3.0 interface, a USB 2.0 interface, a UART interface, an SPI interface, an I2S interface, an I2C interface, and internal wiring; the connection relationships among these units are the same as in a conventional GPU processor.
The MIPI CSI-2 camera interface, the USB 3.0 interface and the USB 2.0 interface can serve as video stream input ports; the GPIO ports, the 10/100/1000BASE-T Ethernet interface, the UART interface, the SPI interface, the I2S interface and the I2C interface can serve as radar signal input ports and as angle signal input ports; the 10/100/1000BASE-T Ethernet interface, the HDMI 2.0 interface, the PCIE interface, the USB 3.0 interface and the USB 2.0 interface can serve as video streaming media output ports; and the GPIO ports, the 10/100/1000BASE-T Ethernet interface, the UART interface, the SPI interface, the I2S interface and the I2C interface can serve as control information output ports. These interfaces connect the GPU processor to external circuits and satisfy the drive and matching requirements of input/output signals under different electrical characteristics; the interfaces in the GPU processor are grouped, and each group can independently support different interface standards. Through configuration, they can adapt to different electrical standards and input/output physical characteristics.
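The role-to-interface options above can be captured as a simple configuration table; this mapping is only a restatement of the paragraph, and in a real design the one interface actually used per role would be fixed by the board layout:

```python
PORT_ROLES = {
    "video_stream_in":  ["MIPI CSI-2", "USB 3.0", "USB 2.0"],
    "radar_signal_in":  ["GPIO", "10/100/1000BASE-T", "UART", "SPI", "I2S", "I2C"],
    "angle_signal_in":  ["GPIO", "10/100/1000BASE-T", "UART", "SPI", "I2S", "I2C"],
    "stream_media_out": ["10/100/1000BASE-T", "HDMI 2.0", "PCIE", "USB 3.0", "USB 2.0"],
    "control_out":      ["GPIO", "10/100/1000BASE-T", "UART", "SPI", "I2S", "I2C"],
}
```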
The video stream processing unit and the radar signal processing unit are embedded programmable units of the GPU processor, are the basic units for realizing user functions, and also include a feature extraction unit based on the YOLO V3 algorithm.
The Tegra X1 core is the working core of the GPU processor; the GPU processor implements the required working logic and input/output by running, in the Tegra X1 core, a program read from the flash memory eMMC.
The internal wiring connects all the units inside the GPU processor; the length and fabrication process of each connection determine the drive capability and transmission speed of the signals it carries.
The video sensor, that is, the camera, is a video acquisition device mainly comprising a lens, a CCD (Charge-Coupled Device) image sensor, a preamplifier, AGC (Automatic Gain Control), an A/D (analog/digital) conversion circuit, a synchronizing signal generator, a CCD driver, a DSP main control chip, a D/A (digital/analog) conversion circuit and a power supply circuit.
In this embodiment, the video stream input unit receives the video signal input from the video sensor through the MIPI CSI-2 camera interface, the USB 3.0 interface or the USB 2.0 interface, loads the video signal into the YOLO V3 algorithm, completes the extraction of feature data, compares it with the original image to realize target coordinate marking, bounding-box drawing and matching, and finally completes image output, target identification and target pixel coordinate output.
The video stream information produced by the video stream processing unit includes target number, target type, target pixel coordinates, queue length, vehicle characteristics and the like; this video stream data information is then transmitted to the data fusion unit.
Several types of radar sensor are available, such as electromagnetic-wave radar, lidar and ultra-wideband radar, each with its own advantages, disadvantages and suitable applications; intelligent traffic data acquisition and control systems mostly use millimeter-wave radar, and lidar is also applied.
Compared with centimeter-wave radar, which has a longer wavelength, millimeter-wave radar has a smaller wavelength and higher precision; for the same antenna aperture, the shorter wavelength also gives it advantages such as a narrow antenna beam, high resolution, a wide frequency band and strong anti-interference capability.
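The beam-width advantage is easy to quantify: the half-power beamwidth of an aperture antenna is roughly 70·λ/D degrees, so for the same aperture D a shorter wavelength gives a proportionally narrower beam. A quick check, with frequencies and aperture chosen purely for illustration:

```python
C = 3e8  # speed of light, m/s

def beamwidth_deg(freq_hz, aperture_m):
    """Approximate half-power beamwidth of an aperture antenna: ~70 * lambda / D."""
    return 70 * (C / freq_hz) / aperture_m

aperture = 0.10  # 10 cm antenna aperture in both cases
print(beamwidth_deg(24e9, aperture))  # near the centimeter-wave boundary: ~8.8 degrees
print(beamwidth_deg(77e9, aperture))  # 77 GHz millimeter-wave:            ~2.7 degrees
```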
This embodiment uses a millimeter-wave radar sensor, which collects radar echo signals through its antenna, transmitting unit, receiving unit and analog-to-digital converter; the radar echo signals are input to the radar signal processing unit through the GPU's general-purpose programmable input/output (GPIO) ports, 10/100/1000BASE-T Ethernet interface, UART interface, SPI interface, I2S interface or I2C interface.
The radar signal processing unit receives the radar echo signals from the radar sensor and processes them; the signal processing includes data sampling, filtering, compression and parameter estimation to obtain radar data information, which is transmitted to the data fusion unit. The radar target information includes target number, target type, target world coordinates, vehicle speed, vehicle characteristics and the like.
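For an FMCW millimeter-wave radar (the usual choice here), the parameter-estimation step commonly reduces to a two-dimensional FFT over the sampled chirps; the sketch below estimates the range and radial speed of the strongest target, with all waveform parameters assumed for illustration:

```python
import numpy as np

# Assumed FMCW waveform parameters (illustrative only)
FS = 10e6        # ADC sample rate, Hz
SLOPE = 30e12    # chirp frequency slope, Hz/s
FC = 77e9        # carrier frequency, Hz
TC = 100e-6      # chirp repetition interval, s
C = 3e8          # speed of light, m/s

def estimate_strongest_target(echo):
    """echo: (n_chirps, n_samples) array of sampled beat-signal data.
    Returns (range_m, radial_speed_mps) of the strongest reflection."""
    n_chirps, n_samples = echo.shape
    rd_map = np.fft.fft2(echo)                 # range FFT over samples, Doppler FFT over chirps
    rd_map = np.fft.fftshift(rd_map, axes=0)   # put zero Doppler in the middle
    d_idx, r_idx = np.unravel_index(np.argmax(np.abs(rd_map)), rd_map.shape)
    beat_freq = r_idx * FS / n_samples         # beat frequency of the strongest range bin
    range_m = C * beat_freq / (2 * SLOPE)      # FMCW range from beat frequency
    doppler_hz = (d_idx - n_chirps // 2) / (n_chirps * TC)
    speed_mps = doppler_hz * C / (2 * FC)      # radial speed from Doppler shift
    return range_m, speed_mps
```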
The data fusion unit receives the video target information and the radar target information, fuses, verifies and cross-validates them to extract information such as vehicle size, color, coordinates and speed, then encodes and outputs traffic flow information and video stream data through the 10/100/1000BASE-T Ethernet interface, the HDMI 2.0 interface, the PCIE interface, the USB 3.0 interface or the USB 2.0 interface, while outputting pan-tilt adjustment information through the GPIO ports, the 10/100/1000BASE-T Ethernet interface, the UART interface, the SPI interface, the I2S interface or the I2C interface.
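One standard way to realize this fusion is to project each radar target's world coordinates into the image plane with the calibrated camera model and associate it with the nearest video detection, taking the class from video and the kinematics from radar. A sketch, assuming the intrinsic matrix K and extrinsic pose (R, t) come from the video-sensor calibration step and using the target records defined earlier:

```python
import numpy as np

def project_to_pixel(world_xyz, K, R, t):
    """Pinhole projection of a 3-D world point into pixel coordinates.
    K: 3x3 intrinsics; R, t: world-to-camera rotation and translation."""
    cam = R @ np.asarray(world_xyz) + t
    uvw = K @ cam
    return uvw[:2] / uvw[2]

def fuse_targets(video_targets, radar_targets, K, R, t, max_px_dist=50.0):
    """Associate each radar target with the nearest video detection; a pair is
    kept only if the projected pixel distance is below an assumed gate."""
    fused = []
    for rt in radar_targets:
        uv = project_to_pixel(rt.world_xyz, K, R, t)
        best = min(video_targets,
                   key=lambda vt: np.linalg.norm(uv - np.asarray(vt.pixel_xy)),
                   default=None)
        if best and np.linalg.norm(uv - np.asarray(best.pixel_xy)) < max_px_dist:
            fused.append({"type": best.target_type,   # class comes from video
                          "speed_mps": rt.speed_mps,  # speed comes from radar
                          "world_xyz": rt.world_xyz,
                          "pixel_xy": best.pixel_xy})
    return fused
```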
The output unit receives the traffic flow information and the video stream data information from the data fusion unit and outputs them to the traffic management control equipment for later use.
The control unit receives the pan-tilt adjustment information from the data fusion unit, drives the device's pan-tilt to adjust the horizontal and pitch angles of the device, and focuses on and collects evidence of the traffic event.
Embodiment 2, an event-adaptive acquisition device based on multi-source data fusion:
Referring to fig. 1 and 2, the device includes a GPU processor 4, a radar sensor 1, a video sensor 2 and an angle sensor 3.
The radar sensor comprises a radar module 11 and a signal preprocessing unit 12.
The GPU processor 4 performs its arithmetic processing in the Tegra X1 core 46. The input port receiving the video sensor signal is defined as video input port 41, the input port receiving the radar sensor signal as radar signal input port 42, and the input port receiving the angle sensor signal as angle signal input port 43. The Tegra X1 core's algorithm flow is configured and divided into a video processing unit 44, which processes the video input into video target information; a radar signal processing unit 45, which processes the radar signal input into radar target information; a data fusion unit 47, which realizes the corresponding fusion functions; an output unit 48, which extracts traffic characteristic information; and a control unit 4a, which outputs pan-tilt control information. The corresponding output ports are configured as output port 49 and output port 4b, realizing data output and control output respectively.
The device also includes an embedded memory LPDDR4 4c and a flash memory eMMC 4d.
Embodiment 3, the method for performing multi-source data fusion using the apparatus described in embodiment 2:
the video input port 42 receives a standard video signal input by the video sensor 2, and inputs the standard video signal to the video recognition unit 44, the video recognition unit 44 loads the video signal into a YOLO V3 algorithm, extracts feature data, compares the feature data with an original image, realizes target coordinate marking, picture frame matching, finally completes image output, target identification and target pixel coordinate output, and then transmits video target information to the data fusion unit 47, wherein the video target information includes a target number, a target type, a target pixel coordinate, a queue length, vehicle features, and the like. The video input image feature extraction flow chart is shown in fig. 3.
The radar signal input port 42 receives the radar target signals transmitted by the radar sensor 1 and feeds them to the radar signal processing unit 45; the radar signal processing unit 45 performs signal processing on the radar signals, including data sampling, filtering, compression and parameter estimation, to obtain radar data information, and then transmits the radar target information to the data fusion unit 47; the radar target information includes target number, target type, target world coordinates, vehicle speed, vehicle characteristics and the like. The radar input information feature extraction flow is shown in fig. 4.
The data fusion unit 47 receives the video target information and the radar target information, compares and verifies the information, and completes fusion and cross-validation, improving the precision and efficiency of the whole system and the visual output of the data. It then outputs traffic characteristic information through output port 49, while output port 4b outputs the pan-tilt control information.
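The comparison-and-verification step can be realized as a consistency check between the two modalities before a fused target is trusted; a sketch, with the pixel-to-metre scale factor and the tolerance assumed for illustration:

```python
def cross_validate(fused_target, prev_pixel_xy, dt_s, px_per_m, rel_tol=0.3):
    """Accept a fused target only if the ground speed implied by its pixel
    displacement agrees with the radar-measured speed within a tolerance."""
    dx = fused_target["pixel_xy"][0] - prev_pixel_xy[0]
    dy = fused_target["pixel_xy"][1] - prev_pixel_xy[1]
    video_speed = (dx ** 2 + dy ** 2) ** 0.5 / (px_per_m * dt_s)  # rough m/s estimate
    radar_speed = abs(fused_target["speed_mps"])
    return abs(video_speed - radar_speed) <= rel_tol * max(radar_speed, 1.0)
```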
With this multi-source traffic data monitoring method and equipment, practical applications overcome the shortcomings of a single sensor: the video data and radar data are verified, fused and cross-validated, judgments about the site environment, traffic flow characteristics and traffic events are more timely, accurate and comprehensive, and better conditions are provided for intelligent traffic data acquisition and processing.
Those of ordinary skill in the art will understand that: the figures are merely schematic representations of one embodiment, and the blocks or flow diagrams in the figures are not necessarily required to practice the present invention.
The above description is only for the preferred embodiment of the present invention, but the scope of the present invention is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present invention are included in the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. An event-adaptive acquisition device based on multi-source data fusion, characterized in that the device comprises a GPU processor, and the GPU processor comprises a video stream processing unit, a radar signal processing unit, a data fusion unit and a control output unit, wherein:
the video stream processing unit receives video signals from the video sensor, processes them algorithmically to obtain video target information, and transmits the video target information to the data fusion unit;
the radar signal processing unit receives radar signals from the millimeter-wave radar sensor, processes them algorithmically to obtain radar target information, and transmits the radar target information to the data fusion unit;
the data fusion unit receives the video target information and the radar target information, performs fusion, verification and cross-validation on them, and then outputs superimposed video streaming media information, traffic flow information and pan-tilt adjustment information;
the control output unit receives the pan-tilt adjustment information from the data fusion unit, drives the device's pan-tilt to adjust the horizontal and pitch angles of the device, compares the result with the angle sensor's input to confirm the adjustment is in place, and focuses on and collects evidence of the traffic event through the video sensor.
2. The device of claim 1, further comprising a video sensor, a radar sensor and an angle sensor, wherein the video sensor transmits video streaming media information to the video stream processing unit, the radar sensor transmits millimeter-wave radar echo signals to the radar signal processing unit, and the angle sensor transmits the horizontal and pitch angle information of the pan-tilt to the data fusion unit.
3. The device of claim 1, wherein the GPU processor further comprises: a Tegra X1 processor, general-purpose programmable input/output ports (GPIO), embedded memory RAM, flash memory eMMC, a MIPI CSI-2 camera interface, a 10/100/1000BASE-T Ethernet interface, an HDMI 2.0 interface, a PCIE interface, a USB 3.0 interface, a USB 2.0 interface, a UART interface, an SPI interface, an I2S interface, an I2C interface and internal wiring, wherein the connection relationships among these units are the same as in a conventional GPU processor.
4. The device of claim 1, wherein the video stream processing unit performs high-precision inference using artificial-intelligence deep learning, carries out image detection, recognition and semantic segmentation on the video stream input from the video sensor, and transmits the obtained video target information to the data fusion unit.
5. The device of claim 1, wherein the video stream processing unit and the radar signal processing unit are embedded programmable units of the GPU processor.
6. The device of claim 1, wherein the data fusion unit is an embedded programmable unit of the GPU processor.
7. The device of claim 1, wherein the radar sensor is a millimeter-wave radar sensor.
8. An event-adaptive acquisition method based on multi-source data fusion, characterized by using the device of any one of claims 1-7 and comprising the following steps:
(1) receiving video data from the video sensor through the video stream processing unit, performing extraction, separation, inference and judgment on the data with the YOLO V3 algorithm, and transmitting the obtained video target information to the data fusion unit;
(2) receiving millimeter-wave radar signals from the radar sensor through the radar signal processing unit, performing signal processing such as data resampling, filtering, compression and parameter estimation, and transmitting the obtained radar target information to the data fusion unit;
(3) receiving the video target information and the radar target information with the data fusion unit, performing fusion, verification and cross-validation on them, and then outputting traffic characteristic information and pan-tilt control information;
(4) driving the control output unit with the pan-tilt control information so that the video sensor is aimed at the traffic event of interest, and recording video and still images of the traffic behavior as evidence, then storing it.
9. The method of claim 8, wherein, for step (1), the video stream processing unit extracts the basic image features of the video sensor input through the YOLO V3 algorithm, and the YOLO V3 extraction process comprises: reading an image, comparing the algorithm-processed image with the original image, calibrating the video sensor, outputting target coordinates, drawing bounding boxes, matching, outputting the image, identifying targets and outputting target pixel coordinates.
10. The method of claim 8, wherein the video stream information includes target number, target type, target pixel coordinates, queue length and vehicle characteristics; the radar target information includes target number, target type, target world coordinates, vehicle speed and vehicle characteristics; and the pan-tilt control information includes horizontal angle and pitch angle.
CN202011150344.2A 2020-10-23 2020-10-23 Event self-adaptive acquisition equipment and method based on multi-source data fusion Pending CN112379362A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011150344.2A CN112379362A (en) 2020-10-23 2020-10-23 Event self-adaptive acquisition equipment and method based on multi-source data fusion


Publications (1)

Publication Number Publication Date
CN112379362A true CN112379362A (en) 2021-02-19

Family

ID=74575939

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011150344.2A Pending CN112379362A (en) 2020-10-23 2020-10-23 Event self-adaptive acquisition equipment and method based on multi-source data fusion

Country Status (1)

Country Link
CN (1) CN112379362A (en)


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109799493A (en) * 2017-11-17 2019-05-24 北京木牛领航科技有限公司 Radar and Multisensor video fusion system and method
CN109581345A (en) * 2018-11-28 2019-04-05 深圳大学 Object detecting and tracking method and system based on millimetre-wave radar
CN109615870A (en) * 2018-12-29 2019-04-12 南京慧尔视智能科技有限公司 A kind of traffic detection system based on millimetre-wave radar and video
CN109948523A (en) * 2019-03-18 2019-06-28 中国汽车工程研究院股份有限公司 A kind of object recognition methods and its application based on video Yu millimetre-wave radar data fusion
CN110532896A (en) * 2019-08-06 2019-12-03 北京航空航天大学 A kind of road vehicle detection method merged based on trackside millimetre-wave radar and machine vision
KR20200106810A (en) * 2019-11-07 2020-09-15 사단법인 꿈드래장애인협회 System for sensing and control unexpected situation using data fusion

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117095540A (en) * 2023-10-18 2023-11-21 四川数字交通科技股份有限公司 Early warning method and device for secondary road accidents, electronic equipment and storage medium
CN117095540B (en) * 2023-10-18 2024-01-23 四川数字交通科技股份有限公司 Early warning method and device for secondary road accidents, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
WO2020199538A1 (en) Bridge key component disease early-warning system and method based on image monitoring data
CN108806243B (en) Traffic flow information acquisition terminal based on Zynq-7000
CN108877269B (en) Intersection vehicle state detection and V2X broadcasting method
CN110046584B (en) Road crack detection device and detection method based on unmanned aerial vehicle inspection
CN105185113A (en) Application device for traffic camera traffic information acquisition based on image identification
CN106851229B (en) Security and protection intelligent decision method and system based on image recognition
CN105611244A (en) Method for detecting airport foreign object debris based on monitoring video of dome camera
CN102426801A (en) Automatic parking lot berth management method and system based on visual analysis
CN111340151B (en) Weather phenomenon recognition system and method for assisting automatic driving of vehicle
CN102306274A (en) Device for identifying parking space state and method
CN110164139B (en) System and method for detecting and identifying side parking
CN110796580B (en) Intelligent traffic system management method and related products
CN109326125B (en) Picture quality diagnosis system and method based on embedded system
CN115331190A (en) Road hidden danger identification method and device based on radar fusion
CN112379362A (en) Event self-adaptive acquisition equipment and method based on multi-source data fusion
CN202003508U (en) Bus lane illegal vehicle intruding detection equipment
CN107147877A (en) FX night fog day condition all-weather colorful video imaging system and its construction method
CN110691224A (en) Transformer substation perimeter video intelligent detection system
CN108184096B (en) Panoramic monitoring device, system and method for airport running and sliding area
CN102708375B (en) High-definition integrated license plate snapshot recognition equipment and method
CN112926415A (en) Pedestrian avoiding system and pedestrian monitoring method
CN202257940U (en) License plate identification video server
CN103093579A (en) Debris flow or landslide alarm system based on videos
CN213483108U (en) Low-power-consumption parking management system based on combination of video and geomagnetism
CN111985418B (en) Vehicle-mounted highway traffic safety facility risk source intelligent identification device and method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination