CN113074714B - Multi-state potential sensing sensor based on multi-data fusion and processing method thereof - Google Patents
Multi-state potential sensing sensor based on multi-data fusion and processing method thereof
- Publication number: CN113074714B (application CN202110224479.7A)
- Authority: CN (China)
- Prior art keywords: data, information, sensor, video, target
- Prior art date
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/005—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 with correlation of navigation data from several sources, e.g. map or contour matching
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01D—MEASURING NOT SPECIALLY ADAPTED FOR A SPECIFIC VARIABLE; ARRANGEMENTS FOR MEASURING TWO OR MORE VARIABLES NOT COVERED IN A SINGLE OTHER SUBCLASS; TARIFF METERING APPARATUS; MEASURING OR TESTING NOT OTHERWISE PROVIDED FOR
- G01D21/00—Measuring or testing not otherwise provided for
- G01D21/02—Measuring two or more variables by means not covered by a single other subclass
Landscapes
- Engineering & Computer Science (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Automation & Control Theory (AREA)
- Traffic Control Systems (AREA)
- Radar Systems Or Details Thereof (AREA)
Abstract
The embodiment of the invention discloses a multi-state potential sensing sensor based on multi-data fusion and a processing method thereof. The device combines a compound eye type video sensor, a millimeter wave radar sensor, other types of sensors, a Beidou positioning and time service module and a core processing unit, the compound eye type video sensor being formed by arranging at least two ultra-high-definition micro video cameras with equal pixel counts and different focal lengths in a compound-eye array. In the core processing unit, the data obtained by the millimeter wave radar and the compound eye type video sensor are comprehensively fused from multiple angles with the data from the Beidou positioning and time service module to obtain richer, multi-level data. By combining the rich data produced by the different working characteristics of the sensors, the device meets the application requirement for all-weather, comprehensive, high-precision multi-state situational data. The system can not only extract the characteristics, states, tracks and positions of different types of targets within a range of several meters to several hundred meters, but also integrate functions such as monitoring, positioning, situational awareness and state prediction.
Description
Technical Field
The embodiment of the invention relates to the technical fields of video image analysis, AI video structuring, target feature analysis, radar and video data fusion, situation awareness, millimeter wave high-frequency signal processing, precise positioning, and target tracking and track analysis, and in particular to a multi-state potential sensing sensor based on multi-data fusion and a processing method thereof.
Background
With the development of electronic technology, target tracking equipment uses visual tracking systems and the like to track targets through image acquisition, for example tracking and monitoring vehicles, pedestrians or obstacles on a road, so as to realize precise management of special vehicles, road safety assurance and the like. Existing target tracking equipment can generally only meet the definition requirements of face recognition and license plate recognition within a range of tens of meters; it cannot provide both a large field angle and the definition needed for long-distance face recognition and license plate recognition, and it cannot supply the data needed to help monitoring personnel quickly grasp the developing situation on site within a single picture, nor the many other application requirements based on image processing technology. It also suffers from the large number of monitoring points that must be deployed, the high cost of pole erection and cabling, and poor flexibility in point selection, and it cannot meet the precise application requirements of intelligent traffic and automatic driving vehicles for multi-state perception data.
Disclosure of Invention
Therefore, the embodiment of the invention provides a multi-state perception sensor based on multi-data fusion and a processing method thereof, aiming to solve the problems that existing target tracking equipment cannot simultaneously provide a large field angle and the definition required for long-distance face recognition and license plate recognition, and cannot meet the precise application requirements of intelligent traffic and automatic driving vehicles for multi-state perception data.
In order to achieve the above object, the embodiments of the present invention provide the following technical solutions:
according to a first aspect of the embodiments of the present invention, a multi-state potential sensing sensor based on multi-data fusion is provided, where the multi-state potential sensing sensor includes a plurality of sensing sensors of different types and a core control board connected to the plurality of sensing sensors, the sensing sensors including a compound eye type video sensor and a millimeter wave radar sensor, where the compound eye type video sensor is formed by arranging at least two ultra-high-definition micro video cameras with equal pixel counts and different focal lengths in a compound eye type array;
the compound eye type image acquisition module is used for respectively acquiring ultrahigh-definition video image information of targets in different detection distance ranges through ultrahigh-definition miniature video cameras with different focal lengths, carrying out AI video image analysis processing on the video images to obtain structured data, characteristic data, state data, track information, abnormal event information, position information and other contents of a target object, acquiring three-dimensional structured data of the target object in a mode of mutually combining double video sensors or multiple video sensors, realizing continuous monitoring and detection effects of multiple regions and different distances on the target object by utilizing the ultrahigh-definition cameras with different focal lengths, and sending the obtained data into a core control panel for analysis and fusion processing, wherein the millimeter wave radar sensor is used for continuously tracking and detecting static targets and moving targets in a radar detection region by using high-frequency radio waves and sending the obtained data into the core control panel for analysis and fusion processing;
the core control panel comprises a core processor module, a core control module, a data storage/cache module, a GPS/Beidou positioning clock time service module and a clock module, wherein the core processor module is used for acquiring detection data of the sensing sensors corresponding to detection areas for analysis and processing when a target enters the detection areas of different sensing sensors, and correlating and fusing the detection data of different sensing sensors in overlapped detection coverage areas, the core control module is used for completing the control function of each subunit in the equipment according to instruction information, the GPS/Beidou positioning clock time service module is used for outputting longitude and latitude information, height information and clock information in real time and sending the information into the core control panel so as to perform multi-angle, omnibearing and comprehensive fusion with data acquired by the millimeter wave radar sensor and the compound eye type video sensor, and then acquiring more different system parameters and temporary data in the monitoring range covered by the equipment, the data storage/cache module is used for storing embedded application programs and algorithms required by each sensing sensor operated by the equipment, and various system parameters and temporary data set in advance, and the clock module is used for completing time service software and time service of each piece of data in local equipment and an external platform.
Furthermore, the perception sensor further comprises an infrared light supplement lamp, which is turned on or off according to the actual light intensity when the ambient light around the equipment cannot meet the requirements of optimal AI video image analysis or monitoring applications, thereby supporting night-vision monitoring and image analysis of the equipment in dim light or no light.
Furthermore, the perception sensor further comprises a temperature and humidity sensor, and the temperature and humidity sensor is used for acquiring temperature and humidity information of the surrounding environment.
Further, the perception sensor further comprises an illumination sensor, and the illumination sensor is used for collecting the illumination information of the surrounding environment.
Furthermore, the multi-state potential sensing sensor also comprises a data interaction module, wherein the data interaction module is used for completing data interaction and parsing with external software or a third-party platform, and for pushing the parsed and interaction contents to the core control module, the data storage/cache module and each sensing sensor respectively so that the corresponding instructions are completed and executed.
Furthermore, the multi-state potential sensing sensor further comprises a protocol input and output module, wherein the protocol input and output module is used, according to a communication protocol data transmission format pre-defined by the equipment, for outputting the final data obtained by the equipment to external software or a third-party platform, or for sending the data information sent by external system software and third-party platforms to the data interaction module for parsing, with the parsed content then sent to each sub-module to execute the corresponding command.
Furthermore, the multi-state potential sensing sensor also comprises an extension sensor which is an external enhancement type sensor, external sensor data is sent to the protocol input and output module through an external extension interface of the device, and the data is sent to the corresponding subunit or external software and a third-party platform through the protocol input and output module.
According to a second aspect of the embodiments of the present invention, a processing method for a multi-data fusion-based multi-state potential sensing sensor is provided, where the method includes:
when a target enters the detection area of the millimeter wave radar sensor, the millimeter wave radar sensor outputs, in real time, the original point cloud data of the detected target and the target data obtained by processing that point cloud with a radar algorithm to the core control board for analysis and processing, wherein the target data comprises real-time speed information, relative position information, acceleration information, XY axis variable information, point/track information, type information, target ID number information and the like of the target object;
the core processor module calls longitude and latitude information and height information which are output by a GPS/Beidou positioning clock time service module in real time, combines relative position information in the target data, processes and analyzes motion state information, abnormal event information, track information, position information and other contents of a target object obtained in an application program through radar data in an operating data storage/cache module, performs correlation fusion on the two types of data source information by using a correlation algorithm to obtain the motion state data, the abnormal event data, the track data, the position data and other contents of the target object with the longitude and latitude information and the height information, and accurately matches position area information where the target is located according to preset target detection area information in the data storage/cache module;
the core processor module calls a clock module to acquire time information, marks the motion state data, abnormal event data, track data, position data and other contents of the target object into radar comprehensive data information with a timestamp, and continuously sends the radar comprehensive data information into a data storage/cache module to wait for the calling and outputting of the core processor module;
when a target enters the detection range of the video sensor, the equipment starts and calls an AI video processing algorithm in the video sensor to obtain characteristic information of the target, judges and identifies the type of the target according to the characteristic information, and simultaneously obtains a two-dimensional outline and target central point information of the target through the AI video processing algorithm so that a system can draw and obtain the motion track information, motion direction information, motion distance information and front, back, left and right distance information between every two targets according to the target central point information;
the core processing module calls the time information in the clock module, superimposes a timestamp on all the data information acquired based on the video sensor to generate video comprehensive data information, and sends it to the data storage/cache module to await calling and output by the core processing module;
when a target enters an overlapping detection coverage area of a single video sensor and a single radar sensor, the core processor module performs fusion association and combination on the radar comprehensive data information and the video comprehensive data information according to a mutual fusion association mechanism of radar and video data to generate fusion data of the target with motion attributes and characteristic attributes, and the core processor module respectively superimposes the fusion data on original video images of different video sensors to generate different video image information;
when a target enters an overlapping detection coverage area of two video sensors and a radar sensor, according to the change of the detection area where the target is located, the equipment calls the AI video processing algorithm in the previously started video sensor and in the currently started video sensor to obtain characteristic information of the target, continuously compares and associates the characteristic attributes of the target obtained from the different video sensors, and determines whether the target concerned by each sensor is the same target according to the mutual fusion and association calculation mechanism of the radar and video data; if so, the data information collected by the previously started video sensor is transmitted to the currently started video sensor through the association fusion mechanism, the target continues to be detected and its data information obtained through the currently started video sensor, and this is used to secondarily supplement the data information collected by the previously started video sensor;
the core processing module extracts and runs an AI video image processing analysis program in the data storage \ cache module, extracts real-time video comprehensive data information of a prior video sensor and a current video sensor to perform secondary AI video analysis processing, acquires depth information and three-dimensional outline size information of the same target, simultaneously combines the information with the video comprehensive data information of the current video sensor to generate secondary video comprehensive data information, and inputs the secondary video comprehensive data information into the data storage \ cache module to wait for the calling of the core processing module;
the core processing module calls radar comprehensive data information with a timestamp and secondary video comprehensive data information with a timestamp in the data storage/cache module, fusion association and combination are carried out on the radar comprehensive data information and the secondary video comprehensive data information collected by the two sensors according to a mutual fusion association computing mechanism of radar and video data, secondary fusion data with more comprehensive motion attributes and characteristic attributes of a target are formed, and the secondary fusion data are respectively superposed to original video images of a previous video sensor and a current video sensor by the core processing module to generate more perfect different secondary video image information.
Further, the method further comprises:
the method comprises the steps that original data information acquired based on different perception sensors and/or data information acquired after being processed by equipment are packaged and packaged according to a preset data format and are sent to a data interaction module, the data interaction module compresses the data according to a preset compression format and sends the compressed data to a protocol input and output module, the protocol input and output module periodically outputs protocol data according to a preset data communication protocol format, the data are output and then used by a third-party system or a platform, and the process is carried out until a target leaves a detection area covered by the equipment.
Further, the target type comprises a vehicle, a pedestrian or an obstacle, wherein the vehicle characteristic attribute information comprises key information such as vehicle brand, vehicle model, license plate, vehicle color, vehicle type, place of registration and the driver's appearance, and the pedestrian characteristic attribute information comprises key information such as gender, age group, clothing and facial appearance.
The embodiment of the invention has the following advantages:
the equipment is formed by combining a compound eye type video sensor, a millimeter wave radar sensor, other types of sensors, a Beidou positioning and timing module and a core processing unit, wherein the compound eye type video sensor is formed by arranging at least two ultra-high-definition micro video cameras with equal pixels and different focal lengths in a compound eye type array. And multi-angle comprehensive fusion is carried out on the data obtained by the millimeter wave radar and the compound eye type video sensor and the data in the Beidou positioning time service module in the core processing unit to obtain more multi-level data. The method combines rich data obtained by different working characteristics of the sensor to meet the application requirements of all-weather and comprehensive perception on high-precision data of the polymorphic potential. The system can not only extract data of the characteristics, states, tracks and positions of different types of targets within the range of several meters to hundreds of meters, but also integrate the functions of monitoring, positioning, situational awareness, state prejudgment and the like.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly described below. It should be apparent that the drawings in the following description are merely exemplary, and that other embodiments can be derived from the provided drawings by those of ordinary skill in the art without inventive effort.
The structures, ratios, sizes and the like shown in this specification are only used to match the contents disclosed in the specification so that they can be understood and read by those skilled in the art; they are not intended to limit the conditions under which the present invention can be implemented and therefore have no technical significance. Any structural modification, change in ratio relationship or adjustment of size that does not affect the effects achievable by the present invention shall still fall within the range covered by the technical contents disclosed herein.
Fig. 1 is a schematic structural diagram of a multi-data fusion-based multi-state potential sensing sensor according to embodiment 1 of the present invention;
fig. 2 is a schematic diagram of a detection range covered by a plurality of sensing sensors after fusion of the multi-data fusion-based multi-state potential sensing sensor provided in embodiment 1 of the present invention.
Fig. 3 is a schematic diagram of an optimal sensing coverage area of an apparatus of a multi-data fusion-based multi-state potential sensing sensor according to embodiment 1 of the present invention.
Detailed Description
The present invention is described in terms of specific embodiments, and other advantages and benefits of the present invention will become apparent to those skilled in the art from the following disclosure. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The embodiment 1 of the invention provides a multi-state potential sensing sensor based on multi-data fusion, which comprises a plurality of sensing sensors of different types and a core control board connected with the sensing sensors.
The perception sensor comprises a compound eye type image acquisition module and a millimeter wave radar sensor, wherein the compound eye type image acquisition module is formed by arranging at least two ultra-high-definition micro video cameras with equal pixel counts and different focal lengths in a compound eye type array. The compound eye type image acquisition module is used for respectively acquiring ultra-high-definition video image information of targets in different detection distance ranges through the ultra-high-definition micro video cameras with different focal lengths, carrying out AI video image analysis processing on the video images to obtain the structured data, characteristic data, state data, track information, abnormal event information, position information and other contents of the target object, acquiring three-dimensional structured data of the target object by combining two or more video sensors with one another, realizing continuous monitoring and detection of the target object over multiple regions and different distances by utilizing the ultra-high-definition video cameras with different focal lengths, and sending the obtained data into the core control board for analysis and fusion processing.
In this embodiment, two ultra high definition micro cameras are taken as an example, and two or more than two cameras are all within the protection scope of the present invention.
The video sensor 1 is an ultra-low-illumination wide-angle high-definition camera mainly used for visual AI processing; the focal length of its lens is between 4 mm and 12 mm, and the image resolution is more than 2 million pixels. Its visible range is 5-300 meters, and its main monitoring range is 5-150 meters. Its main functions are obtaining the characteristic information of vehicles or pedestrians within the range of 5-150 meters, extracting vehicle characteristics and vehicle tracks, extracting pedestrian characteristics and tracks, analyzing obstacles, analyzing the distant environment and analyzing weather (detecting rain, snow, fog, haze and dust based on video).
The video sensor 2 is an ultra-low-illumination narrow-angle high-definition camera mainly used for visual AI processing; the focal length of its lens is between 24 mm and 50 mm, and the image resolution is more than 2 million pixels. Its visible range is 80-400 meters, and its main monitoring range is 80-300 meters. Its main functions are obtaining the characteristic information of vehicles or pedestrians within the range of 100-300 meters, extracting vehicle characteristics and vehicle tracks, extracting pedestrian characteristics and tracks, analyzing obstacles, analyzing the distant environment and analyzing weather (detecting rain, snow, fog, haze and dust based on video).
The two micro cameras are arranged in an array mode, the video sensor 1 is responsible for monitoring a short-distance area and extracting AI video image data, the video sensor 2 is responsible for monitoring a long-distance area and extracting AI video image data, and a certain overlapping area is arranged between the two sensors to meet the functional requirements of free image switching under different focal lengths and mutual data connection and transmission between the two sensors. Through the overlapping accumulation of different monitoring areas of the two cameras, high-definition monitoring pictures and required image information at any angle, any range and any position in a continuous detection range of 5-300 meters can be obtained. In addition, the system can acquire the three-dimensional contour dimension information and the three-dimensional target characteristic information of the tracked target through the video image processing technology at the part where the two video sensors are overlapped with each other.
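The specification does not fix a concrete algorithm for recovering three-dimensional contour size in the region where the two video sensors overlap. The following Python sketch illustrates one possible realization using simple two-view triangulation, assuming the overlapping views have first been rectified to a common equivalent focal length; the camera parameters, function names and pixel values are illustrative assumptions, not part of the claimed method.

```python
from dataclasses import dataclass

@dataclass
class CameraIntrinsics:
    focal_px: float      # common equivalent focal length in pixels (assumed value)
    baseline_m: float    # horizontal distance between the two cameras

def estimate_depth_m(cam: CameraIntrinsics, disparity_px: float) -> float:
    """Classic triangulation: depth = f * B / d. Valid only inside the overlap
    region where the same target is matched in both camera views."""
    if disparity_px <= 0:
        raise ValueError("target must be matched in both views (disparity > 0)")
    return cam.focal_px * cam.baseline_m / disparity_px

def estimate_contour_size_m(cam: CameraIntrinsics, bbox_w_px: float,
                            bbox_h_px: float, depth_m: float) -> tuple[float, float]:
    """Back-project the 2D bounding-box extent to metric width/height at the
    estimated depth, giving a coarse 3D contour size for the tracked target."""
    return (bbox_w_px * depth_m / cam.focal_px,
            bbox_h_px * depth_m / cam.focal_px)

if __name__ == "__main__":
    cam = CameraIntrinsics(focal_px=1800.0, baseline_m=0.12)   # assumed values
    z = estimate_depth_m(cam, disparity_px=14.0)
    w, h = estimate_contour_size_m(cam, bbox_w_px=220, bbox_h_px=160, depth_m=z)
    print(f"depth = {z:.1f} m, contour = {w:.2f} m x {h:.2f} m")
```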
The superposition of images from the two cameras with different focal lengths makes it possible to combine physical (optical) zooming and electronic zooming in video monitoring. Because the equipment adopts a compound eye array camera arrangement in which each camera is an ultra-high-definition miniature monitoring camera with a different focal length, video sensor 1 provides the main monitoring picture; when a manager wants to inspect a distant target in video sensor 1, the system calls the ultra-high-definition picture output by video sensor 1 and assists it with electronic zooming. When the definition of the target and its surroundings can no longer meet the application requirement, the system automatically calls and switches to the ultra-high-definition picture of video sensor 2 so that the manager can monitor, watch and magnify the distant target in more detail. When the manager operates in the reverse direction, the device zooms out electronically from the far point of video sensor 2 (the position where the target is being monitored), passes from the electronic zoom of video sensor 2 back to its physical zoom, then switches to the electronic zoom of video sensor 1 and continues until the physical zoom of video sensor 1 can zoom out no further; that is, the reverse reduction process.
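As an illustration of the combined physical/electronic zoom handover described above, the following sketch maps a requested equivalent focal length onto one of the two cameras plus a residual electronic zoom factor. The focal-length ranges (4-12 mm and 24-50 mm) come from this embodiment, but the switching rule, limits and names are assumptions for illustration only.

```python
from dataclasses import dataclass

@dataclass
class ZoomPlan:
    sensor: str          # which camera supplies the picture
    optical_mm: float    # physical (lens) focal length to apply
    digital_x: float     # residual electronic zoom factor on top of it

WIDE_MIN, WIDE_MAX = 4.0, 12.0        # video sensor 1 (wide-angle)
NARROW_MIN, NARROW_MAX = 24.0, 50.0   # video sensor 2 (narrow-angle)

def plan_zoom(requested_equiv_mm: float, max_digital_x: float = 4.0) -> ZoomPlan:
    """Map a requested equivalent focal length to a camera, a physical zoom
    setting and a residual electronic zoom, preferring optical zoom first."""
    if requested_equiv_mm <= WIDE_MAX:
        return ZoomPlan("video_sensor_1", max(WIDE_MIN, requested_equiv_mm), 1.0)
    if requested_equiv_mm < NARROW_MIN:
        # Gap between the two lenses: stay on the wide camera and bridge the
        # gap with electronic zoom before handing over to the narrow camera.
        return ZoomPlan("video_sensor_1", WIDE_MAX, requested_equiv_mm / WIDE_MAX)
    if requested_equiv_mm <= NARROW_MAX:
        return ZoomPlan("video_sensor_2", requested_equiv_mm, 1.0)
    digital = min(requested_equiv_mm / NARROW_MAX, max_digital_x)
    return ZoomPlan("video_sensor_2", NARROW_MAX, digital)

if __name__ == "__main__":
    for mm in (8, 18, 40, 120):
        print(mm, plan_zoom(mm))
```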
Output of the video stream:
1) When a third-party platform needs to call a historical video image of the device within a certain time period, and no one operated the device during that period, the system records and stores the initial-focal-length images of both video sensor channels simultaneously and stores them in the same directory according to a preset coding sequence and association mechanism; when a manager calls either video sensor's image, the other video sensor's image is simultaneously associated, retrieved and held ready to be called. In this mode the system supports mutual image calling and switching between the two channels, continuous electronic zooming in and out connected seamlessly between them, and previewing while operating.
2) When a third-party platform needs to call a historical video image of the device within a certain time period, and the device was operated by someone during that period, the system records and stores both video sensor channels as operated and stores them in the same directory according to a preset coding sequence and association mechanism; when a manager calls either video sensor's image, the other video sensor's image is simultaneously associated, retrieved and held ready to be called. In this mode the system supports mutual image calling and switching between the two channels, indirect (with the operated image picture) or continuous (without the operated image picture) electronic zooming in and out connected between them, and previewing while operating.
In this embodiment, the multi-state potential sensing sensor further includes an infrared light supplement lamp, which is turned on or off according to the actual light intensity when the ambient light around the device cannot meet the application requirements of optimal AI video image analysis or monitoring, thereby supporting night-vision monitoring and image analysis of the device in dim light or no light.
In this embodiment, the plurality of sensing sensors further include a temperature and humidity sensor, and the temperature and humidity sensor is configured to acquire temperature and humidity information of an ambient environment.
In this embodiment, the plurality of sensing sensors further include an illuminance sensor, and the illuminance sensor is configured to collect illuminance information of the surrounding environment.
The brightness and illumination sensor is used for acquiring the illumination intensity of the environment surrounding the equipment and sending the real-time value to the protocol transmission module and the core control module respectively. According to the range of variation of the ambient light, the core control module calls the corresponding preset optimal working-mode parameters of the cameras in the data storage/cache module (the core control module can also receive values sent by the expansion sensors, such as a rainfall sensor or a wind speed and direction sensor) and sends these working parameters to video sensor 1 and video sensor 2; after receiving the values from the data storage/cache module, the two video sensors periodically and dynamically adjust their working state so that their performance is optimal, meeting the requirements of different environments such as rain, snow, fog, haze and dust, and the requirements for best working performance under different light conditions such as day, night, sunny days, cloudy days, morning and evening. Secondly, when the ambient light around the equipment cannot meet the requirements of optimal AI video image analysis or monitoring applications, the core control module can turn the infrared light supplement lamp on or off according to the actual light intensity, supporting night-vision monitoring and image analysis in dim light or no light. The infrared light supplement lamp is a non-visible-light fill device: it does not produce the visible light emitted by an ordinary incandescent lamp, LED lamp or traditional fill light, so its light cannot be observed by people but can be captured by the cameras. At night this avoids the interference that a visible light source would cause to pedestrians and vehicles, improving the overall safety of the equipment.
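A minimal sketch of the control behaviour described above follows: an illuminance reading selects a preset camera working mode and decides whether the infrared light supplement lamp is needed. The lux thresholds, mode parameters and interface names are assumed for illustration; the actual preset parameters are those stored in the data storage/cache module.

```python
from dataclasses import dataclass

@dataclass
class CameraMode:
    name: str
    exposure_ms: float
    gain_db: float
    ir_fill_on: bool

PRESET_MODES = [  # ordered from bright to dark; all values are placeholders
    (10_000.0, CameraMode("daylight",  2.0,  0.0, False)),
    (1_000.0,  CameraMode("overcast",  8.0,  6.0, False)),
    (50.0,     CameraMode("dusk",     16.0, 12.0, False)),
    (0.0,      CameraMode("night",    33.0, 24.0, True)),   # dim/no light -> IR on
]

def select_mode(lux: float) -> CameraMode:
    """Pick the preset whose lower illuminance bound the reading still meets."""
    for lower_bound, mode in PRESET_MODES:
        if lux >= lower_bound:
            return mode
    return PRESET_MODES[-1][1]

def apply_periodically(read_lux, push_to_sensors, set_ir_lamp) -> CameraMode:
    """One control cycle: read illuminance, push parameters to both video
    sensors, and switch the (invisible) infrared fill lamp on or off."""
    mode = select_mode(read_lux())
    push_to_sensors({"exposure_ms": mode.exposure_ms, "gain_db": mode.gain_db})
    set_ir_lamp(mode.ir_fill_on)
    return mode

if __name__ == "__main__":
    m = apply_periodically(lambda: 23.0, lambda params: None, lambda on: None)
    print(m)   # expected: the "night" preset with the IR lamp switched on
```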
The millimeter wave radar sensor is used for continuously tracking and detecting static targets and moving targets in the radar detection area by using high-frequency radio waves, and sends the obtained data into the core control board for analysis and fusion processing. The radar sensor is a directional short-distance millimeter wave radar sensor, mainly used for high-frequency radio wave detection of static and moving targets, analogous to the human sense of touch. Fig. 2 shows the detection range covered by the millimeter wave radar sensor, video sensor 1 and video sensor 2 after fusion, and Fig. 3 shows the optimal sensing coverage range of the device.
The core control board comprises a core processor module, a core control module, a data storage/cache module, a GPS/Beidou positioning clock time service module and a clock module. The core processor module is used for acquiring the detection data of the sensing sensor corresponding to a detection area for analysis and processing when a target enters the detection areas of different sensing sensors, and for correlating and fusing the detection data of different sensing sensors in overlapped detection coverage areas. The core control module is used for completing the control function of each subunit in the equipment according to instruction information. The GPS/Beidou positioning clock time service module is used for outputting longitude and latitude information, height information and clock information in real time and sending this information into the core control board; after multi-angle, omnibearing and comprehensive fusion of this information with the data acquired by the millimeter wave radar sensor and the compound eye type video sensor, data of different levels can be extracted within the monitoring range covered by the equipment, so that the characteristics, states, tracks and positions of different types of targets within a range of several meters to several hundred meters can be extracted, and functions such as monitoring, positioning, situational awareness and state prediction of targets and detection areas can be integrated. The data storage/cache module is used for storing the embedded application programs and algorithms required by all the perception sensors operated by the equipment and the various system parameters and temporary data set in advance, and the clock module is used for time-stamping all data in the local equipment, providing equipment time service, and providing time service to external software and third-party platforms.
In this embodiment, the multi-state potential sensing sensor further includes a data interaction module, where the data interaction module is configured to complete data interaction and analysis with external software or a third-party platform, and push the data interaction and analysis content to the core control module, the data storage/cache module, and each sensing sensor respectively according to the analysis content and the interaction content to complete and implement a corresponding instruction.
In this embodiment, the multi-state potential sensing sensor further includes a protocol input/output module, which is configured, according to a communication protocol data transmission format pre-defined by the device, to output the final data obtained by the device to external software or a third-party platform, or to send the data information sent by external system software and third-party platforms to the data interaction module for parsing, with the parsed content then sent to each sub-module to execute the corresponding command.
In this embodiment, the multi-state potential sensing sensor further includes an extension sensor, where the extension sensor is an external enhancement sensor, and sends data of the external sensor to the protocol input/output module through an external extension interface of the device, and then sends the data to the corresponding subunit or external software and the third-party platform through the protocol input/output module.
The processing method of the multi-state potential sensing sensor based on the multi-data fusion provided by the embodiment of the invention comprises the following steps that a target sequentially passes through the video sensor 2 and the video sensor 1 from the farthest end of the detection distance of the radar sensor until the target leaves the optimal data acquisition range set by equipment:
when a target enters the detection area of the millimeter wave radar sensor, the millimeter wave radar sensor outputs, in real time, the original point cloud data of the detected target and the target data obtained by processing that point cloud with a radar algorithm to the core control board for analysis and processing, wherein the target data comprises real-time speed information, relative position information, acceleration information, XY axis variable information, point/track information, type information, target ID number information and the like of the target object;
the core processor module calls longitude and latitude information and height information which are output by the GPS/Beidou positioning clock time service module in real time, combines relative position information in target data, processes and analyzes motion state information, abnormal event information, track information, position information and other contents of a target object obtained in an application program through radar data in the running data storage/cache module, performs correlation fusion on the two types of data source information by using a correlation algorithm to obtain the motion state data, abnormal event data, track data, position data and other contents of the target object with the longitude and latitude information and the height information, and accurately matches position area information where the target is located according to preset target detection area information in the data storage/cache module;
the core processor module calls the clock module to acquire time information, marks the motion state data, abnormal event data, track data, position data and other contents of the target object into radar comprehensive data information with a timestamp, and continuously sends the radar comprehensive data information into the data storage/cache module to wait for the calling and outputting of the core processor module.
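For illustration, the following sketch assembles a radar comprehensive data record by combining a radar target's relative position with the device's real-time latitude, longitude and height and stamping the result with clock time. The flat-earth coordinate conversion and all field names are assumptions; the specification only states that a correlation algorithm fuses the two data sources.

```python
import math, time
from dataclasses import dataclass, asdict

EARTH_RADIUS_M = 6_371_000.0

@dataclass
class RadarTarget:
    target_id: int
    rel_x_m: float      # east offset from the device, metres
    rel_y_m: float      # north offset from the device, metres
    speed_mps: float
    accel_mps2: float

def to_latlon(dev_lat: float, dev_lon: float, east_m: float, north_m: float):
    """Small-offset flat-earth conversion from device-relative metres to WGS-84."""
    dlat = math.degrees(north_m / EARTH_RADIUS_M)
    dlon = math.degrees(east_m / (EARTH_RADIUS_M * math.cos(math.radians(dev_lat))))
    return dev_lat + dlat, dev_lon + dlon

def radar_comprehensive_record(target: RadarTarget, dev_lat: float,
                               dev_lon: float, dev_height_m: float) -> dict:
    """Fuse a radar target with the device's GNSS position and a timestamp."""
    lat, lon = to_latlon(dev_lat, dev_lon, target.rel_x_m, target.rel_y_m)
    return {
        **asdict(target),
        "latitude": lat, "longitude": lon, "height_m": dev_height_m,
        "timestamp": time.time(),     # clock-module time service
    }

if __name__ == "__main__":
    rec = radar_comprehensive_record(
        RadarTarget(7, rel_x_m=12.5, rel_y_m=140.0, speed_mps=16.7, accel_mps2=0.3),
        dev_lat=39.9042, dev_lon=116.4074, dev_height_m=52.0)
    print(rec)
```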
When a target enters the detection range of the video sensor, the following method is adopted to acquire target information data of the video sensor, and the method for acquiring the target information content of the video sensor 1 is the same as the method for acquiring the target information content of the video sensor 2:
when a target enters the detection range of the video sensor, the equipment starts and calls an AI video processing algorithm in the video sensor to obtain characteristic information of the target, judges and identifies the type of the target through the characteristic information, and simultaneously obtains a two-dimensional outline and target central point information of the target through the AI video processing algorithm so that a system can draw and obtain motion track information, motion direction information, motion distance information and front, back, left and right distance information between every two targets according to the target central point information;
the core processing module calls time information in the clock module, and all data information acquired based on the video sensor is superposed with a timestamp to generate video comprehensive data information, and the video comprehensive data information is sent to the data storage \ cache module to wait for the calling and the output of the core processing module.
When the target moves from the farthest end of the radar sensor's detection distance into the detection range of video sensor 2, it enters the overlapped detection coverage area of video sensor 2 and the radar sensor, and the video sensor 2 data and the radar data are fused by the following method:
when a target enters an overlapping detection coverage area of a single video sensor and a single radar sensor, the core processor module performs fusion association and combination on radar comprehensive data information and video comprehensive data information according to a mutual fusion association mechanism of radar and video data to generate fusion data of the target with motion attributes and characteristic attributes, and the core processor module respectively superimposes the fusion data on original video images of different video sensors to generate different video image information.
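A minimal sketch of such a mutual fusion association is shown below: time-stamped radar records and video records are paired by nearest-neighbour matching within timestamp and position gates, and the paired attributes are merged so that motion attributes come from the radar and characteristic attributes from the video. The gating thresholds and field names are assumptions; the specification only names a mutual fusion association mechanism.

```python
from math import hypot

def associate(radar_recs: list, video_recs: list,
              max_dt_s: float = 0.1, max_dist: float = 3.0) -> list:
    """Greedy nearest-neighbour matching on (timestamp, position)."""
    fused, used = [], set()
    for r in radar_recs:
        best, best_d = None, float("inf")
        for i, v in enumerate(video_recs):
            if i in used or abs(r["timestamp"] - v["timestamp"]) > max_dt_s:
                continue                         # outside the time gate
            d = hypot(r["x"] - v["x"], r["y"] - v["y"])
            if d < best_d:
                best, best_d = i, d
        if best is not None and best_d <= max_dist:
            used.add(best)
            fused.append({**video_recs[best], **r,                    # motion attributes from radar
                          "feature": video_recs[best].get("feature")})  # characteristic attributes from video
    return fused

if __name__ == "__main__":
    radar = [{"timestamp": 10.00, "x": 5.1, "y": 80.2, "speed_mps": 14.0}]
    video = [{"timestamp": 10.03, "x": 5.0, "y": 79.8, "feature": "white sedan"}]
    print(associate(radar, video))
```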
When the target moves from the detection range of video sensor 2 into the detection range of video sensor 1, which is the overlapped detection coverage area of video sensor 1, video sensor 2 and the radar sensor, the data of video sensor 1, video sensor 2 and the radar are fused by the following method:
when a target enters the overlapping detection coverage area of the two video sensors and the radar sensor, according to the change of the detection area where the target is located, the device calls the AI video processing algorithm in the previously started video sensor (namely, video sensor 2) and in the currently started video sensor (namely, video sensor 1) to acquire the characteristic information of the target, continuously compares and associates the characteristic attributes of the target acquired from the different video sensors, and determines whether the target concerned by each sensor is the same target according to the mutual fusion and association calculation mechanism of the radar and video data; if so, the data information acquired by the previously started video sensor (video sensor 2) is transmitted to the currently started video sensor (video sensor 1) through the association fusion mechanism, the target continues to be detected and its data information acquired through the currently started video sensor (video sensor 1), and, if the characteristic information and other data information of the target acquired from video sensor 2 are incomplete, the target data information acquired through video sensor 1 secondarily supplements them (an illustrative sketch of this hand-over is provided after the process description below);
the core processing module extracts and runs an AI video image processing analysis program in the data storage \ cache module, extracts real-time video comprehensive data information of a prior video sensor (namely, the video sensor 2) and a current video sensor (namely, the video sensor 1) to carry out secondary AI video analysis processing, obtains depth information and three-dimensional outline size information of the same target, simultaneously combines the information into video comprehensive data information of the current video sensor (namely, the video sensor 1) to generate secondary video comprehensive data information, and inputs the secondary video comprehensive data information into the data storage \ cache module to wait for the calling of the core processing module;
the core processing module calls radar comprehensive data information with a timestamp and secondary video comprehensive data information with a timestamp in the data storage/cache module, fusion association and combination are carried out on the radar comprehensive data information and the secondary video comprehensive data information collected by the two sensors according to a mutual fusion association computing mechanism of radar and video data to form secondary fusion data with more comprehensive motion attributes and characteristic attributes of a target, and the secondary fusion data are respectively superposed to original video images of a previous video sensor (namely the video sensor 2) and a current video sensor (namely the video sensor 1) by the core processing module to generate more complete different secondary video image information.
AI video processing, analysis, correlation, fusion and merging continue between video sensor 1 and video sensor 2 until the target leaves the area where the two video sensors overlap. The above process describes a target passing from the farthest end of the radar sensor's detection distance through video sensor 2 and then video sensor 1 until it leaves the optimal data acquisition range set by the equipment; if the target instead moves from the nearest position outward to the farthest end of the equipment's range, the above data are generated by the reverse working process, from video sensor 1 to the radar sensor and then to video sensor 2, and the mode, data content and output data content are consistent with the above description.
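The hand-over between the previously started video sensor 2 and the currently started video sensor 1 referred to in the process above can be illustrated by the following sketch, in which feature attributes from the two cameras are compared to decide whether the same target is being observed and, if so, incomplete fields in the earlier record are secondarily supplemented. The similarity measure, threshold and field names are illustrative assumptions.

```python
def same_target(prev_attrs: dict, curr_attrs: dict, min_overlap: float = 0.6) -> bool:
    """Treat the targets as identical if enough shared feature attributes agree."""
    shared = [k for k in prev_attrs if k in curr_attrs
              and prev_attrs[k] is not None and curr_attrs[k] is not None]
    if not shared:
        return False
    agree = sum(1 for k in shared if prev_attrs[k] == curr_attrs[k])
    return agree / len(shared) >= min_overlap

def hand_over(prev_record: dict, curr_record: dict) -> dict:
    """Carry the track forward and secondarily supplement missing fields."""
    merged = dict(prev_record)
    for key, value in curr_record.items():
        if merged.get(key) in (None, "", []):   # fill only what was incomplete
            merged[key] = value
    merged["active_sensor"] = "video_sensor_1"
    return merged

if __name__ == "__main__":
    prev = {"type": "vehicle", "color": "white", "plate": None, "track": [(0, 0)]}
    curr = {"type": "vehicle", "color": "white", "plate": "A12345", "track": [(1, 1)]}
    if same_target(prev, curr):
        print(hand_over(prev, curr))
```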
Further, the method further comprises:
the method comprises the steps that original data information acquired based on different types of sensing sensors and/or data information acquired after equipment processing, such as secondary fusion data, secondary video image information of different video sensors, environment data information acquired by a temperature and humidity sensor or an illumination sensor, equipment position information, time information and data information acquired by an expansion sensor, original video images (if needed) of a video sensor 1 and a video sensor 2, radar point cloud data (if needed) and the like are packaged and sent to a data interaction module according to a preset data format, the data interaction module compresses the data according to the preset compression format and sends the compressed data to a protocol input and output module, the protocol input and output module periodically outputs protocol data according to the preset data communication protocol format, and the data are output and used by a third party system or a platform until a target leaves a detection area covered by the equipment.
Although the invention has been described in detail above with reference to a general description and specific examples, it will be apparent to one skilled in the art that modifications or improvements may be made thereto based on the invention. Accordingly, such modifications and improvements are intended to be within the scope of the invention as claimed.
Claims (10)
1. A multi-situation perception sensor based on multi-data fusion, characterized in that the multi-situation perception sensor comprises a plurality of perception sensors of different types and a core control board connected with the perception sensors, the perception sensors comprising a compound eye type video sensor and a millimeter wave radar sensor, the compound eye type video sensor being formed by arranging at least two ultra-high-definition micro video cameras with equal pixel counts and different focal lengths in a compound eye type array;
the compound eye type video sensor is used for acquiring ultra-high-definition video image information of targets in different detection distance ranges through ultra-high-definition micro video cameras with different focal lengths respectively, performing AI video image analysis processing on the video images to obtain structural data, characteristic data, state data, track information, abnormal event information and position information of a target object, acquiring three-dimensional structural data of the target object in a mode of mutually combining double video sensors or multiple video sensors, realizing continuous monitoring and detection effects of multiple regions and different distances on the target object by utilizing the ultra-high-definition micro video cameras with different focal lengths, and sending the obtained data into a core control board for analysis and fusion processing, wherein the millimeter wave radar sensor is used for continuously tracking and detecting static targets and moving targets in a radar detection region by using high-frequency radio waves and sending the obtained data into the core control board for analysis and fusion processing;
the core control board comprises a core processor module, a core control module, a data storage/cache module, a GPS/Beidou positioning clock time service module and a clock module, wherein the core processor module is used for acquiring the detection data of the sensing sensor corresponding to a detection area for analysis and processing when a target enters the detection areas of different sensing sensors, and for correlating and fusing the detection data of different sensing sensors in overlapped detection coverage areas; the core control module is used for completing the control function of each subunit in the equipment according to instruction information; the GPS/Beidou positioning clock time service module is used for outputting longitude and latitude information, height information and clock information in real time and sending this information into the core control board so that, after multi-angle, omnibearing and comprehensive fusion with the data acquired by the millimeter wave radar sensor and the compound eye type video sensor, richer data of different levels are obtained within the monitoring range covered by the equipment; the data storage/cache module is used for storing the embedded application programs and algorithms required by each sensing sensor operated by the equipment and the various system parameters and temporary data set in advance; and the clock module is used for time-stamping each piece of data in the local equipment and providing time service to external software and platforms.
2. The multi-data fusion-based multi-state perception sensor as claimed in claim 1, wherein the multi-state perception sensor further includes an infrared light supplement lamp, which is turned on or off according to the actual light intensity when the ambient light around the device cannot meet the requirements of optimal AI video image analysis or monitoring applications, thereby supporting night-vision monitoring and image analysis of the device in dim light or no light.
3. The multi-data fusion based multi-state potential sensing sensor according to claim 1, wherein the sensing sensor further comprises a temperature and humidity sensor, and the temperature and humidity sensor is used for collecting temperature and humidity information of the surrounding environment.
4. The multi-data fusion-based multi-state potential sensing sensor as claimed in claim 1, wherein the sensing sensor further comprises an illumination sensor for collecting illumination information of the surrounding environment.
5. The multi-situation awareness sensor based on multi-data fusion as claimed in claim 1, wherein the multi-situation awareness sensor further comprises a data interaction module, and the data interaction module is configured to complete data interaction and analysis with external software or a third party platform, and push the data interaction and analysis contents to the core control module, the data storage/cache module, and each awareness sensor respectively to complete and implement corresponding instructions.
6. The multi-situation awareness sensor based on multi-data fusion according to claim 5, further comprising a protocol input and output module, wherein the protocol input and output module is configured to complete outputting final data obtained by the device to an external software or a third party platform or sending data information sent by an external system software and the third party platform to the data interaction module for parsing according to a communication protocol data transmission format pre-programmed by the device, and sending the parsed content to each sub-module for executing a corresponding command.
7. The multi-data fusion-based multi-state perception sensor according to claim 6, wherein the multi-state perception sensor further comprises an extension sensor, the extension sensor is an external enhanced sensor, external sensor data is sent to the protocol input and output module through an external extension interface of the device, and then the data is sent to the corresponding subunit or external software and a third party platform through the protocol input and output module.
8. The processing method of the multi-situation perception sensor based on multi-data fusion according to any one of claims 1-7, wherein the method comprises:
when a target enters a detection area of the millimeter wave radar sensor, the millimeter wave radar sensor outputs original point cloud data of the detected target and target data processed by the original point cloud data through a radar algorithm to a core control board in real time for analysis and processing, wherein the target data comprises real-time speed information, relative position information, acceleration information, XY axis variable information, point/track information, type information and target ID number information of a target object;
the core processor module calls longitude and latitude information and height information which are output by a GPS/Beidou positioning clock time service module in real time, combines relative position information in the target data, processes and analyzes motion state information, abnormal event information, track information and position information of a target object obtained in an application program through radar data in an operating data storage/cache module, performs correlation fusion on the longitude and latitude information and the height information and the motion state information, the abnormal event information, the track information and the position information of the target object by utilizing a correlation algorithm to obtain the motion state data, the abnormal event data, the track data and the position data of the target object with the longitude and latitude information and the height information, and accurately matches position area information where the target is located according to target detection area information preset in the data storage/cache module;
the core processor module calls a clock module to acquire time information, marks motion state data, abnormal event data, track data and position data of the target object into radar comprehensive data information with a timestamp, and continuously sends the radar comprehensive data information into a data storage \ cache module to wait for the calling and outputting of the core processor module;
when a target enters the detection range of the video sensor, the equipment starts and calls an AI video processing algorithm in the video sensor to obtain characteristic information of the target, judges and identifies the type of the target according to the characteristic information, and simultaneously obtains a two-dimensional outline and target central point information of the target through the AI video processing algorithm so that a system can draw and obtain the motion track information, motion direction information, motion distance information and front, back, left and right distance information between every two targets according to the target central point information;
the core processing module calls time information in the clock module, and all data information acquired based on the video sensor is superposed with a timestamp to generate video comprehensive data information which is sent to the data storage \ cache module to wait for the calling and the output of the core processing module;
when a target enters an overlapping detection coverage area of a single video sensor and a single radar sensor, the core processor module performs fusion association and combination on the radar comprehensive data information and the video comprehensive data information according to a mutual fusion association mechanism of radar and video data to generate fusion data of the target with motion attributes and characteristic attributes, and the core processor module respectively superimposes the fusion data on original video images of different video sensors to generate different video image information;
when a target enters an overlapping detection coverage area of two video sensors and a radar sensor, according to the change of a detection area where the target is located, the equipment calls an AI video processing algorithm in a video sensor which is started in advance and a video sensor which is started at present to obtain characteristic information of the target, continuously compares and associates characteristic attributes of the target obtained from different video sensors, determines whether the target concerned by each sensor is the same target according to a mutual fusion association mechanism of the radar and video data, if so, transmits data information collected based on the video sensor in advance to the current video sensor through the association fusion mechanism, continuously detects the target and obtains the data information through the current video sensor, and secondarily supplements the data information collected based on the video sensor in advance;
the core processing module extracts and runs the AI video image processing and analysis program in the data storage/cache module, extracts the real-time video comprehensive data of the earlier video sensor and the current video sensor for secondary AI video analysis, acquires depth information and three-dimensional contour size information of the same target, merges that information into the video comprehensive data of the current video sensor to generate secondary video comprehensive data, and stores the secondary video comprehensive data in the data storage/cache module to await calling by the core processing module;
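Depth from two overlapping camera views is classically obtained by triangulation over the disparity between the two image positions; the snippet below illustrates that textbook relation for rectified views and is only a stand-in for the secondary AI video analysis described above.

```python
def depth_from_two_views(focal_px, baseline_m, x_prev_px, x_cur_px):
    """Two-view triangulation for rectified cameras:
    depth = focal length * baseline / disparity."""
    disparity = x_prev_px - x_cur_px
    if disparity <= 0:
        return None  # target not triangulable from these two views
    return focal_px * baseline_m / disparity
```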
the core processing module calls the time-stamped radar comprehensive data and the time-stamped secondary video comprehensive data in the data storage/cache module and fuses, associates and combines them according to the radar-video data fusion and association mechanism to form secondary fusion data in which the target has more comprehensive motion and characteristic attributes, and the core processing module superimposes the secondary fusion data on the original video images of the earlier video sensor and the current video sensor to generate more complete secondary video image information.
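Superimposing the fused attributes on the original video image could, for instance, be done as below; OpenCV is assumed here purely for illustration, and the record fields are hypothetical.

```python
import cv2  # assumed library for drawing; the patent does not name one

def overlay_fusion(frame, fused_targets):
    """Draw each fused target's bounding box and a short attribute label
    onto a copy of the original video frame."""
    out = frame.copy()
    for t in fused_targets:
        x, y, w, h = t["bbox"]          # pixel bounding box from the video branch
        label = f'{t.get("type", "?")} {t.get("speed_mps", 0):.1f} m/s'
        cv2.rectangle(out, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.putText(out, label, (x, y - 5),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)
    return out
```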
9. The method for processing a multi-data fusion based multi-state potential sensing sensor as claimed in claim 8, wherein the method further comprises:
the original data information collected by the different types of sensing sensors and/or the data information obtained after processing by the device is packed and encapsulated according to a preset data format and sent to the data interaction module; the data interaction module compresses the data according to a preset compression format and sends the compressed data to the protocol input/output module; the protocol input/output module periodically outputs the data as protocol data in a preset data communication protocol format for use by a third-party system or platform; this process continues until the target leaves the detection area covered by the device.
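The preset data, compression and protocol formats are left open above. Purely as an example of the pack-compress-frame chain, the sketch below uses JSON, zlib and a 4-byte length header; none of these choices are taken from the patent.

```python
import json
import struct
import time
import zlib

def package(records, device_id="SENSOR-01"):
    """Serialise processed records in an assumed JSON envelope, compress with
    zlib, and prepend a 4-byte big-endian length header as a toy protocol frame."""
    payload = json.dumps({"device": device_id,
                          "sent_at": time.time(),
                          "records": records}).encode("utf-8")
    compressed = zlib.compress(payload)
    return struct.pack(">I", len(compressed)) + compressed

def unpack(frame):
    """Inverse of package(): strip the length header, decompress, parse JSON."""
    (length,) = struct.unpack(">I", frame[:4])
    return json.loads(zlib.decompress(frame[4:4 + length]).decode("utf-8"))
```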
10. The method as claimed in claim 8, wherein the target type includes a vehicle, a pedestrian or an obstacle; the vehicle characteristic attribute information includes key information such as vehicle brand, vehicle model, license plate, vehicle color, vehicle type, place of registration and driver appearance, and the pedestrian characteristic attribute information includes key information such as gender, age group, clothing and face.
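A possible container for the target types and characteristic attributes listed in claim 10 is sketched below; the field set and names are illustrative assumptions.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class VehicleAttributes:
    brand: Optional[str] = None
    model: Optional[str] = None
    license_plate: Optional[str] = None
    color: Optional[str] = None
    vehicle_type: Optional[str] = None      # car, truck, bus, ...
    registration_place: Optional[str] = None
    driver_appearance: Optional[str] = None

@dataclass
class PedestrianAttributes:
    gender: Optional[str] = None
    age_group: Optional[str] = None
    clothing: List[str] = field(default_factory=list)
    face_id: Optional[str] = None

@dataclass
class Target:
    target_id: int
    target_type: str                         # "vehicle", "pedestrian" or "obstacle"
    vehicle: Optional[VehicleAttributes] = None
    pedestrian: Optional[PedestrianAttributes] = None
```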
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110224479.7A CN113074714B (en) | 2021-03-01 | 2021-03-01 | Multi-state potential sensing sensor based on multi-data fusion and processing method thereof |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113074714A CN113074714A (en) | 2021-07-06 |
CN113074714B (en) | 2022-11-01
Family
ID=76609668
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110224479.7A Active CN113074714B (en) | 2021-03-01 | 2021-03-01 | Multi-state potential sensing sensor based on multi-data fusion and processing method thereof |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113074714B (en) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114280601B (en) * | 2021-12-28 | 2023-03-28 | 河北德冠隆电子科技有限公司 | Multi-angle adjustable radar vision all-in-one machine sensor |
CN115148023B (en) * | 2022-06-23 | 2024-06-14 | 阿里云计算有限公司 | Path fusion method and device and electronic equipment |
CN115985095B * | 2022-12-23 | 2024-09-20 | 河北德冠隆电子科技有限公司 | Multi-dimensional radar-vision integrated all-in-one machine for intelligent transportation |
CN116953704A * | 2022-12-23 | 2023-10-27 | 河北德冠隆电子科技有限公司 | Omnidirectional-scanning millimeter-wave radar with multi-dimensional angle adjustment for intelligent transportation |
CN117912194B (en) * | 2024-03-20 | 2024-06-07 | 吉林大学 | System and method for monitoring high-risk gas in limited space based on wireless communication network |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108322698A (en) * | 2017-12-28 | 2018-07-24 | 北京交通大学 | System and method for fusion based on multiple cameras and an inertial measurement unit |
CN108802758A (en) * | 2018-05-30 | 2018-11-13 | 北京应互科技有限公司 | Intelligent security monitoring device, method and system based on laser radar |
CN109557534A (en) * | 2018-09-29 | 2019-04-02 | 河北德冠隆电子科技有限公司 | Multi-element omnidirectional tracking and detection radar sensor device and application method thereof |
CN110602388A (en) * | 2019-08-29 | 2019-12-20 | 安徽农业大学 | Zooming bionic compound eye moving target tracking system and method |
CN111457916A (en) * | 2020-03-30 | 2020-07-28 | 中国人民解放军国防科技大学 | Space debris target tracking method and device based on extended labelled random finite sets |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6862537B2 (en) * | 2002-03-21 | 2005-03-01 | Ford Global Technologies Llc | Sensor fusion system architecture |
US8229166B2 (en) * | 2009-07-07 | 2012-07-24 | Trimble Navigation, Ltd | Image-based tracking |
2021-03-01: Application CN202110224479.7A filed in China; granted as patent CN113074714B (status: Active).
Non-Patent Citations (2)
Title |
---|
A measurement method of motion parameters in aircraft ground tests using computer vision; Jiashan Cui, Yunhui Li, Cong Li; Measurement; 2021-01-17; full text *
Research on autonomous obstacle avoidance of UAVs based on multi-sensor fusion; He Shouyin; China Doctoral Dissertations Full-text Database (Engineering Science and Technology II); 2018-06-15; full text *
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| CB02 | Change of applicant information | Address after: 2002, Block B, White Commercial Plaza, 105 Huaian East Road, Yuhua District, Shijiazhuang City, Hebei Province, 050030. Applicant after: HEBEI DEGUROON ELECTRONIC TECHNOLOGY Co., Ltd. Address before: 15-3-1303, Xinhai Tiantian Residential Building, 295 Donggang Road, Yuhua District, Shijiazhuang City, Hebei Province, 050030. Applicant before: HEBEI DEGUROON ELECTRONIC TECHNOLOGY Co., Ltd. |
| GR01 | Patent grant | |